ydiago solver without mpi #134

Open · sangallidavide opened this issue Sep 9, 2024 · 2 comments
@sangallidavide (Member)

> I would like to ask: is an MPI C compiler mandatory for Yambo (even when it is not compiled with MPI)? As I mentioned in my first PR, Ydiago always needs to be compiled with the MPI libraries, which means we always need an MPI C compiler or have to link against `libmpi`. I know this is not ideally what we want, but I will address this in the future. Another option is to completely skip compiling Ydiago when not using MPI, guard all Ydiago calls with `-D_MPI`, and use the old solver.
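Concretely, such a `-D_MPI` guard could look roughly like the following minimal sketch; the program and routine names here are placeholders, not actual Yambo or Ydiago symbols:

```fortran
! Minimal sketch of the proposed compile-time guard (placeholder names).
! Build with the preprocessor enabled, e.g.
!   mpif90 -D_MPI guard_sketch.F90     (MPI build: Ydiago path)
!   gfortran guard_sketch.F90          (serial build: old solver path)
program guard_sketch
  implicit none
#if defined _MPI
  call ydiago_solver()   ! placeholder for the distributed Ydiago path
#else
  call legacy_solver()   ! placeholder for the pre-existing serial solver
#endif
contains
  subroutine ydiago_solver()
    print *, 'would diagonalize with Ydiago (requires the MPI libraries)'
  end subroutine ydiago_solver
  subroutine legacy_solver()
    print *, 'would diagonalize with the old serial solver, no MPI needed'
  end subroutine legacy_solver
end program guard_sketch
```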

@muralidhar-nalabothula yambo can be

a) compiled without MPI: `--disable-mpi` at configure time, so `-D_MPI` is not defined. Of course ScaLAPACK/BLACS cannot be linked, and it probably does not make sense to link ELPA either. We can deactivate the Ydiago implementation as well in that case.

b) compiled with MPI, but MPI disabled at runtime via `yambo -nompi`. In this case the code is compiled with all the MPI libraries (including ScaLAPACK/BLACS, etc.), but `MPI_Init` is never called. Moreover, all calls such as `call MPI_COMM_SIZE` are protected by something like `if(ncpu>1) return`.

For case (a), we can leave it as an open issue, and work on this in the future.

For case (b) it might be enough to put an `if(ncpu>1) then / else` branch inside `K_diagonalize` to avoid a segfault at runtime, e.g. along the lines sketched below.
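A rough, standalone sketch of that branch (placeholder names; the real `K_diagonalize` would dispatch to Ydiago/ScaLAPACK or to the serial LAPACK path here):

```fortran
! Rough sketch of the runtime fallback, not the actual K_diagonalize.
subroutine K_diagonalize_sketch(ncpu)
  implicit none
  integer, intent(in) :: ncpu
  if (ncpu > 1) then
    ! MPI is active: safe to set up the distributed solver
    print *, 'distributed branch (Ydiago / ScaLAPACK)'
  else
    ! yambo -nompi, or a single task: make no MPI calls at all
    print *, 'serial branch (plain LAPACK), no MPI calls'
  end if
end subroutine K_diagonalize_sketch
```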

@muralidhar-nalabothula (Contributor)
Honestly, I do not see why the `-nompi` flag is needed. Is there any specific reason it was added? According to the MPI standard, running `mpirun -n 1 yambo` must behave the same as simply running `yambo`, and no special care needs to be taken in the code. Here is what the standard says (please note that yambo does not use any special features such as `MPI_UNIVERSE_SIZE`):

> When an application enters MPI_INIT, clearly it must be able to determine if these special steps were taken. If a process enters MPI_INIT and determines that no special steps were taken (i.e., it has not been given the information to form an MPI_COMM_WORLD with other processes) it succeeds and forms a singleton MPI program, that is, one in which MPI_COMM_WORLD has size 1.
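A quick way to verify this on a given machine and MPI stack is a standalone singleton test (just a minimal sketch, not yambo code): the same binary should report a communicator size of 1 whether it is launched directly or through `mpirun -n 1`.

```fortran
! Standalone check of the singleton-init behaviour quoted above.
! Compile with e.g. `mpif90 singleton_check.f90`, then run both
! `./a.out` and `mpirun -n 1 ./a.out`: both should print size 1.
program singleton_check
  use mpi
  implicit none
  integer :: ierr, nranks
  call MPI_Init(ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nranks, ierr)
  print *, 'MPI_COMM_WORLD size =', nranks
  call MPI_Finalize(ierr)
end program singleton_check
```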

@sangallidavide (Member, Author) commented Sep 9, 2024

On my machine, with nvfortran and CUDA support:

- `mpirun -np 1 yambo` works just fine
- `yambo -nompi` works just fine
- `yambo` hangs forever

I have reproduced the same behavior with nvfortran on a couple of other machines. No idea why.

I do not know why `-nompi` was added either; it is just there, and I use it sometimes.
The first time I tested yambo with ELPA+GPU I used `yambo -nompi` and it gave me a segmentation fault. :-)
