Mamba (State Spaces Models) #447
@johnnynunez that's great! - can you start scoping what an optimized container might look like for supporting Mamba? Is it already supported by other popular LLM APIs, or does it have its own?
It's supported in llama.cpp: ggerganov/llama.cpp#5328
Hmm, that PR says it's CPU-only, and scanning the comments I didn't see mention of GPU support being added. If you know of a fast CUDA-accelerated implementation to add to jetson-containers, that would be cool. I don't think it's in MLC... or exllama either.
What do you think of RWKV also? That's in MLC, and it's another recurrent one. Other people have asked about that one in the past. It would be cool to do a page on Jetson AI Lab for these recurrent models.
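For reference, the upstream state-spaces/mamba package does ship CUDA selective-scan kernels, so a container could likely build on it directly. Below is a minimal sketch, assuming PyTorch and the `mamba_ssm` package build successfully on the Jetson and a CUDA device is available; the tensor sizes are arbitrary placeholders, not values from this thread:

```python
import torch
from mamba_ssm import Mamba  # pip install mamba-ssm (needs causal-conv1d and a CUDA toolchain)

# Arbitrary placeholder sizes for a quick GPU smoke test.
batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim, device="cuda")

block = Mamba(
    d_model=dim,  # model (channel) dimension
    d_state=16,   # SSM state expansion factor
    d_conv=4,     # local convolution width
    expand=2,     # block expansion factor
).to("cuda")

y = block(x)  # forward pass through the CUDA selective-scan path
assert y.shape == x.shape
```

Something along these lines could also double as the container's test script, similar to how other jetson-containers packages verify a GPU-accelerated install.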
I've checked that the latest versions of llama.cpp can run mamba-ssm!
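If the llama.cpp route is the easier first target, a rough sketch of what running a Mamba model converted to GGUF might look like through llama-cpp-python is below. The model filename is only a placeholder, and per the PR linked above this path is CPU-only at the moment:

```python
from llama_cpp import Llama  # pip install llama-cpp-python

# Placeholder path: a Mamba checkpoint converted to GGUF with llama.cpp's conversion script.
llm = Llama(model_path="./mamba-2.8b.gguf", n_ctx=2048)

out = llm("State space models are", max_tokens=64)
print(out["choices"][0]["text"])
```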
Hi!
This is only a draft and a summary of the papers and implementations of Mamba.
I will post my feedback here, from an Orin AGX 64GB.
Original paper:
(arXiv 2024.01) Vision Mamba: Efficient Visual Representation Learning with Bidirectional State Space Model, [Paper], [Code]
(arXiv 2024.01) VMamba: Visual State Space Model, [Paper], [Code]
(arXiv 2024.03) LocalMamba: Visual State Space Model with Windowed Selective Scan, [Paper], [Code]
(arXiv 2024.03) EfficientVMamba: Atrous Selective Scan for Light Weight Visual Mamba, [Paper], [Code]
(arXiv 2024.03) DenseMamba: State Space Models with Dense Hidden Connection for Efficient Large Language Models, [Paper],[Code]
(arXiv 2024.03) ZigMa: Zigzag Mamba Diffusion Model, [Paper],[Code]
(arXiv 2024.03) VideoMamba: State Space Model for Efficient Video Understanding, [Paper],[Code]
(arXiv 2024.03) Video Mamba Suite: State Space Model as a Versatile Alternative for Video Understanding, [Paper],[Code]
(arXiv 2024.03) SSM Meets Video Diffusion Models: Efficient Video Generation with Structured State Spaces, [Paper],[Code]