Awesome Large Multimodal Agents

Last update: 09/25/2024

Table of Contents

  • Papers
      • Taxonomy
      • Application
  • Benchmark

Papers

Taxonomy

Type Ⅰ

  • CLOVA - CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update

  • CRAFT - CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

  • ViperGPT - ViperGPT: Visual Inference via Python Execution for Reasoning Github Star

  • HuggingGPT - HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face Github Star

  • Chameleon - Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models Github Star

  • Visual ChatGPT - Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models Github Star

  • AssistGPT - AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn Github Star

  • M3 - Towards Robust Multi-Modal Reasoning via Model Selection Github Star

  • VisProgram - Visual Programming: Compositional visual reasoning without training

  • DDCoT - DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models Github Star

  • ASSISTGUI - ASSISTGUI: Task-Oriented Desktop Graphical User Interface Automation Github Star

  • GPT-Driver - GPT-Driver: Learning to Drive with GPT Github Star

  • LLaVA-Interactive - LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing Github Star

  • MusicAgent - MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models Github Star

  • AudioGPT - AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head Github Star

  • DroidBot-GPT - DroidBot-GPT: GPT-powered UI Automation for Android Github Star

  • GRID - GRID: A Platform for General Robot Intelligence Development Github Star

  • DEPS - Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents Github Star

  • MM-REACT - MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action Github Star

  • MuLan - MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion Github Star

  • Mobile-Agent - Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception Github Star

  • SeeAct - GPT-4V(ision) is a Generalist Web Agent, if Grounded Github Star

Type Ⅱ

  • STEVE - See and Think: Embodied Agent in Virtual Environment Github Star

  • EMMA - Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld Github Star

  • MLLM-Tool - MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning Github Star

  • LLaVA-Plus - LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills Github Star

  • GPT4Tools - GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction Github Star

  • WebWISE - WebWISE: Web Interface Control and Sequential Exploration with Large Language Models

  • Auto-UI - You Only Look at Screens: Multimodal Chain-of-Action Agents Github Star

Type Ⅲ

  • DoraemonGPT - DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models Github Star

  • ChatVideo - ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System Github

  • VideoAgent - VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding Project page

Type Ⅳ

  • JARVIS-1 - JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models Github Star

  • AppAgent - AppAgent: Multimodal Agents as Smartphone Users Github Star

  • MM-Navigator - GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation Github Star

  • Loop Copilot - Loop Copilot: Conducting AI Ensembles for Music Generation and Iterative Editing Github Star

  • WavJourney - WavJourney: Compositional Audio Creation with Large Language Models Github Star

  • DLAH - Drive Like a Human: Rethinking Autonomous Driving with Large Language Models Github Star

  • Cradle - Towards General Computer Control: A Multimodal Agent for Red Dead Redemption II as a Case Study Github Star

  • VideoAgent - VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding Project page

Multi-Agent

  • MP5 - MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception Github Star

  • MemoDroid - Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation

  • Avis - AVIS: Autonomous Visual Information Seeking with Large Language Model Agent

  • Agent-Smith - Agent Smith: A Single Image Can Jailbreak One Million Multimodal LLM Agents Exponentially Fast Github Star

  • GenAI - The Art of Storytelling: Multi-Agent Generative AI for Dynamic Multimodal Narratives Github Star

  • P2H - Propaganda to Hate: A Multimodal Analysis of Arabic Memes with Multi-Agent LLMs

Application

💡 Complex Visual Reasoning Tasks

  • ViperGPT - ViperGPT: Visual Inference via Python Execution for Reasoning Github Star

  • HuggingGPT - HuggingGPT: Solving AI Tasks with ChatGPT and its Friends in Hugging Face Github Star

  • Chameleon - Chameleon: Plug-and-Play Compositional Reasoning with Large Language Models Github Star

  • Visual ChatGPT - Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models Github Star

  • AssistGPT - AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn Github Star

  • LLaVA-Plus - LLaVA-Plus: Large Language and Vision Assistants that Plug and Learn to Use Skills Github Star

  • GPT4Tools - GPT4Tools: Teaching Large Language Model to Use Tools via Self-instruction Github Star

  • MLLM-Tool - MLLM-Tool: A Multimodal Large Language Model For Tool Agent Learning Github Star

  • M3 - Towards Robust Multi-Modal Reasoning via Model Selection Github Star

  • VisProgram - Visual Programming: Compositional visual reasoning without training

  • DDCoT - DDCoT: Duty-Distinct Chain-of-Thought Prompting for Multimodal Reasoning in Language Models Github Star

  • Avis - AVIS: Autonomous Visual Information Seeking with Large Language Model Agent

  • CLOVA - CLOVA: A Closed-Loop Visual Assistant with Tool Usage and Update

  • CRAFT - CRAFT: Customizing LLMs by Creating and Retrieving from Specialized Toolsets

  • MuLan - MuLan: Multimodal-LLM Agent for Progressive Multi-Object Diffusion Github Star

🎵 Audio Editing & Generation

  • Loop Copilot - Loop Copilot: Conducting AI Ensembles for Music Generation and Iterative Editing Github Star

  • MusicAgent - MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models Github Star

  • AudioGPT - AudioGPT: Understanding and Generating Speech, Music, Sound, and Talking Head Github Star

  • WavJourney - WavJourney: Compositional Audio Creation with Large Language Models Github Star

  • OpenOmni - OpenOmni: A Collaborative Open Source Tool for Building Future-Ready Multimodal Conversational Agents Github Star

🤖 Embodied AI & Robotics

  • JARVIS-1 - JARVIS-1: Open-world Multi-task Agents with Memory-Augmented Multimodal Language Models Github Star

  • DEPS - Describe, Explain, Plan and Select: Interactive Planning with Large Language Models Enables Open-World Multi-Task Agents Github Star

  • Octopus - Octopus: Embodied Vision-Language Programmer from Environmental Feedback Github Star

  • GRID - GRID: A Platform for General Robot Intelligence Development Github Star

  • MP5 - MP5: A Multi-modal Open-ended Embodied System in Minecraft via Active Perception Github Star

  • STEVE - See and Think: Embodied Agent in Virtual Environment Github Star

  • EMMA - Embodied Multi-Modal Agent trained by an LLM from a Parallel TextWorld Github Star

  • MEIA - Multimodal Embodied Interactive Agent for Cafe Scene

🖱️💻 UI Assistants

  • AppAgent - AppAgent: Multimodal Agents as Smartphone Users Github Star

  • DroidBot-GPT - DroidBot-GPT: GPT-powered UI Automation for Android Github Star

  • WebWISE - WebWISE: Web Interface Control and Sequential Exploration with Large Language Models

  • Auto-UI - You Only Look at Screens: Multimodal Chain-of-Action Agents Github Star

  • MemoDroid - Explore, Select, Derive, and Recall: Augmenting LLM with Human-like Memory for Mobile Task Automation

  • ASSISTGUI - ASSISTGUI: Task-Oriented Desktop Graphical User Interface Automation Github Star

  • MM-Navigator - GPT-4V in Wonderland: Large Multimodal Models for Zero-Shot Smartphone GUI Navigation Github Star

  • AutoDroid - Empowering LLM to use Smartphone for Intelligent Task Automation Github Star

  • GPT-4V-Act - GPT-4V-Act: Chromium Copilot Github Star

  • Mobile-Agent - Mobile-Agent: Autonomous Multi-Modal Mobile Device Agent with Visual Perception Github Star

  • OpenAdapt - OpenAdapt: AI-First Process Automation with Large Multimodal Models Github Star

  • EnvDistraction - Caution for the Environment: Multimodal Agents are Susceptible to Environmental Distractions Github Star

🎨 Visual Generation & Editing

  • LLaVA-Interactive - LLaVA-Interactive: An All-in-One Demo for Image Chat, Segmentation, Generation and Editing Github Star

  • MM-REACT - MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action Github Star

  • SeeAct - GPT-4V(ision) is a Generalist Web Agent, if Grounded Github Star

  • GenAI - The Art of Storytelling: Multi-Agent Generative AI for Dynamic Multimodal Narratives Github Star

  • GenArtist - GenArtist: Multimodal LLM as an Agent for Unified Image Generation and Editing Github Star

🎥 Video Understanding

  • DoraemonGPT - DoraemonGPT: Toward Understanding Dynamic Scenes with Large Language Models Github Star

  • ChatVideo - ChatVideo: A Tracklet-centric Multimodal and Versatile Video Understanding System Github

  • AssistGPT - AssistGPT: A General Multi-modal Assistant that can Plan, Execute, Inspect, and Learn Github Star

  • VideoAgent-M - VideoAgent: A Memory-augmented Multimodal Agent for Video Understanding Project page

  • VideoAgent-L - VideoAgent: Long-form Video Understanding with Large Language Model as Agent Project page

  • Kubrick - Kubrick: Multimodal Agent Collaborations for Synthetic Video Generation Github Star

  • Anim-Director - Anim-Director: A Large Multimodal Model Powered Agent for Controllable Animation Video Generation Github Star

🚗 Autonomous Driving

  • GPT-Driver - GPT-Driver: Learning to Drive with GPT Github Star

  • DLAH - Drive Like a Human: Rethinking Autonomous Driving with Large Language Models Github Star

🎮 Game Developer

  • SmartPlay - SmartPlay: A Benchmark for LLMs as Intelligent Agents Github Star

  • VisualWebArena - VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks Github Star

  • Cradle - Towards General Computer Control: A Multimodal Agent for Red Dead Redemption II as a Case Study Github Star

  • Cradle - Can AI Prompt Humans? Multimodal Agents Prompt Players’ Game Actions and Show Consequences to Raise Sustainability Awareness Github Star

Other

  • FinAgent - A Multimodal Foundation Agent for Financial Trading: Tool-Augmented, Diversified, and Generalist

  • VisionGPT - VisionGPT: Vision-Language Understanding Agent Using Generalized Multimodal Framework

  • WirelessAgent - WirelessAgent: Large Language Model Agents for Intelligent Wireless Networks

  • PhishAgent - PhishAgent: A Robust Multimodal Agent for Phishing Webpage Detection

  • MMRole - MMRole: A Comprehensive Framework for Developing and Evaluating Multimodal Role-Playing Agents Github Star

Benchmark

  • SmartPlay - SmartPlay: A Benchmark for LLMs as Intelligent Agents Github Star

  • VisualWebArena - VisualWebArena: Evaluating Multimodal Agents on Realistic Visual Web Tasks Github Star

  • Mind2Web - MIND2WEB: Towards a Generalist Agent for the Web Github Star

  • GAIA - GAIA: a benchmark for General AI Assistants Github

  • OmniACT - OmniACT: A Dataset and Benchmark for Enabling Multimodal Generalist Autonomous Agents for Desktop and Web

  • DSBench - DSBench: How Far Are Data Science Agents to Becoming Data Science Experts? Github Star

  • GTA - GTA: A Benchmark for General Tool Agents Github Star