eGPU not enabled after running script #6
Hi, I'm on holiday away from my eGPU until January 3rd, so I'll be able to test this more after that. Just from your description, though, it seems like a similar issue to #5, where the glob is not expanding properly. Are you using bash or a different shell?
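As a rough illustration of the kind of glob problem referenced from #5 (the path is hypothetical, not the script's actual code):

```sh
# Hypothetical illustration of the failure mode: when a glob matches nothing
# (or expansion is disabled), the literal pattern string is used instead of a real path.
pattern='/sys/bus/pci/devices/0000:00:99.*'   # hypothetical ID that matches no device here
ls $pattern
# ls: cannot access '/sys/bus/pci/devices/0000:00:99.*': No such file or directory
ls "$pattern"                                 # quoting also suppresses expansion entirely
```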
Yes, this looks similar. I am using bash.
After reading issue #5, I took a look at my script and it seems that change is already included at line 229.
Ok, so a couple of things here.

First off, I noticed that the script is trying to find the file "/sys/bus/pci/devices/0000:0000:00:02.0/remove", which isn't working because there's an extra "0000:". Did you use the guided setup or did you manually enter the bus IDs? If you enter the IDs manually, they should be in a form like "00:02.0".

Second, just a note on how this script works: Method 2 is the recommended method, and if it works for you, you don't need to set up the internal bus IDs to remove. Did you try just setting the eGPU as primary with Method 2 and not entering any internal GPU IDs to remove?

Lastly, I also took a look at the other issues you're seeing here, and I believe I worked out the issues causing the output you're seeing with this part of the script. I'm in the process of testing these on my end to make sure they work correctly, and I'll let you know when I push these changes to the GitHub repo.

Hopefully, with all of these changes, this issue should be fixed.
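As a hedged sketch of what the bus-ID format implies in practice (the variable name is illustrative, and it is assumed from the doubled prefix that the script adds the "0000:" PCI domain itself):

```sh
# Remove a PCI device via sysfs; the ID entered should be the short form, e.g. "00:02.0",
# because the "0000:" PCI domain is prepended when building the path.
BUSID="00:02.0"
echo 1 | sudo tee "/sys/bus/pci/devices/0000:${BUSID}/remove"

# If the stored ID already contains the domain ("0000:00:02.0"), the path is doubled
# and does not exist:
#   /sys/bus/pci/devices/0000:0000:00:02.0/remove  -> No such file or directory
```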
Thank you so much for your quick replies. I will address your questions in-line:

"First off, I noticed that the script is trying to find the file "/sys/bus/pci/devices/0000:0000:00:02.0/remove" which isn't working because there's an extra "0000:". Did you use the guided setup or did you manually enter the bus IDs? If you manually enter the ids they should be in a form like "00:02.0"."

* I used the guided setup and did not enter any IDs manually.

"Second just a note on how this script works, Method 2 is the recommended method and if it works for you, you don't need to setup the internal bus ids to remove. Did you try just setting the eGPU as primary with method 2 and not entering any internal gpu ids to remove?"

* I tried Method 2 first and it did not work, so then I tried Method 1.
* I was unsure whether there was something I should do to undo Method 2 before using Method 1, or whether to leave them both in place.
* If my understanding is correct, I will try the following when the changes are pushed (see the rough sketch after this list):

1) Use the guided setup.
2) Using the script, select Method 2.
3) Reboot and test.
4) If that did not work, run the script again and select Method 1. There is no need to undo anything from Method 2.
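A rough shell sketch of that plan (only the status subcommand is mentioned later in this thread; the setup subcommand name is an assumption, so the project's README is the authority on the exact invocation):

```sh
sudo all-ways-egpu setup   # assumed name for the guided setup; choose Method 2 when prompted
sudo reboot

# After rebooting with the eGPU connected, check whether it is primary
all-ways-egpu status

# If Method 2 did not work, re-run the guided setup and choose Method 1;
# nothing from Method 2 needs to be undone first.
```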
"Lastly, I also took a look at the other issues you're seeing here and I
believe I worked out the issues causing the output you're seeing with this
part of the script. I'm in the process of testing these on my end to make
sure they work correctly and I'll let you know when I push these changes to
the github repo."
*I will look forward to these changes in the script.
One other question that I have for you is how systemd is setting up the
services to run. I noticed that after I logged in (to the eGPU not working)
there was a pop-up to enter the root password to allow a user service to
run. I am wondering if this could also be a potential cause. From my (very
limited) understanding, once a user is logged in and the (internal) display
is up, it's too late to "do stuff".
You're correct that you don't need to do anything to undo Method 2, and that's all correct on what you should try once the changes are pushed. One reason Method 2 didn't work in the first place could be that the guided setup was giving it the wrong IDs. To help debug this, could you post your output of lspci?

The systemd prompt is expected when you log in if you say yes to both prompts during setup. There are two different systemd services: one that runs before the display manager starts and is supposed to remove the iGPU, and one that runs after login and can restart the iGPU. With the GNOME Wayland desktop, that lets you get a picture on the laptop screen while still keeping the eGPU as primary.
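A hedged way to confirm both services exist and see when they ran (the "egpu" grep pattern is a guess at the unit names; adjust it to whatever the setup actually installed):

```sh
systemctl list-unit-files --type=service | grep -i egpu          # system unit run before the display manager
systemctl --user list-unit-files --type=service | grep -i egpu   # user unit run after login (the auth prompt)
journalctl -b | grep -iE 'egpu|display-manager' | head -n 40     # relative start order during this boot
```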
$ lspci
@ewagner12 Thank you for the changes. I cloned the repo, ran the install command, and then used the guided setup. When I tried to boot (with the eGPU connected), I got stuck on the boot screen and never made it to the GDM login. I was able to boot into the system with the eGPU powered off.
Ok, can you post the output of the all-ways-egpu status command?
I can try again. However, there are a few things I found out on my system that might make a difference.
If I had to guess, I would guess that the iGPU is being removed correctly, but the NVIDIA card is not being picked up by X/Wayland for whatever reason. If that's the case, here are some things I would try based on my experience with this:
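As hedged examples of the kind of checks that fit this guess (standard tools, not necessarily the maintainer's actual list):

```sh
lspci -k | grep -iA3 vga                              # is the eGPU visible, and which driver is bound to it?
lsmod | grep -i nvidia                                # are the NVIDIA kernel modules loaded?
journalctl -b | grep -iE 'nvidia|drm' | tail -n 50    # any driver/DRM errors from the current boot?
```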
FYI, I just pushed commit 618fd62, which improves Method 1 removal reliability and sometimes prevents black screens, at least on my end. So you may want to try the latest git again and see if anything changes for you.
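A minimal sketch of picking up that commit from an existing clone (the install step is whatever the repo's README specifies; ./install.sh below is an assumption):

```sh
cd all-ways-egpu
git pull
git log --oneline -1   # should show commit 618fd62 or later
sudo ./install.sh      # assumption: re-run the repo's install step per its README
```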
I have an RTX 3060 Ti inside a Razer Core X enclosure. This works great when (dual-)booting into Windows; however, there is no display in Linux [Gentoo, systemd, GNOME, Wayland].
I took a look at the service status:
Regarding the first error about not being able to find a file ("--> No such file or directory"), I verified that the following files exist:
I am happy to help debug, etc., to get this working. Thank you for your scripts!
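For reference, a hedged sketch of the checks described above (the service name is a guess, and the bus ID is the one discussed elsewhere in this thread):

```sh
systemctl status all-ways-egpu-boot.service --no-pager   # service name is a guess; substitute the real unit
ls -l /sys/bus/pci/devices/0000:00:02.0/remove           # verify the sysfs "remove" entry exists
```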