
Why the real and imaginary number are subtracted and added together to make the output? #9

Open
kronee0516 opened this issue May 7, 2024 · 2 comments

Comments

@kronee0516

kronee0516 commented May 7, 2024

I can't understand what is happening here.
Is there some theory that requires doing it this way?

One more question:
which PyTorch version did you use? I ran into several errors when I started training.

@pheepa
Owner

pheepa commented May 8, 2024

Hi
Referring to the article

Complex-valued Building Blocks. Given a complex-valued convolutional filter W = A + iB with
real-valued matrices A and B, the complex convolution operation on a complex vector h = x + iy
with W is done by W ∗ h = (A ∗ x − B ∗ y) + i(B ∗ x + A ∗ y)

Basically, it's just a way of multiplying complex numbers:
(a + ib)(c + id) = ac + iad + ibc + i²bd = (ac − bd) + i(ad + bc) [because i² = −1]
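To see that the rule from the article is just complex multiplication applied per filter tap, here is a small NumPy sketch (illustrative only, not the repo's code) that realises a complex convolution with two real convolutions and checks it against NumPy's native complex arithmetic:

```python
import numpy as np

# Filter W = A + iB and input h = x + iy, as in the article's notation.
A = np.array([1.0, 2.0])       # real part of the filter
B = np.array([0.5, -1.0])      # imaginary part of the filter
x = np.array([3.0, 0.0, 1.0])  # real part of the input
y = np.array([1.0, 2.0, 0.0])  # imaginary part of the input

# W * h = (A*x - B*y) + i(B*x + A*y), built from four real convolutions.
real_out = np.convolve(A, x) - np.convolve(B, y)
imag_out = np.convolve(B, x) + np.convolve(A, y)

# Cross-check against a direct convolution of the complex arrays.
direct = np.convolve(A + 1j * B, x + 1j * y)
assert np.allclose(real_out + 1j * imag_out, direct)
```

The assertion passes because convolution is linear, so the real/imaginary bookkeeping is exactly the (ac − bd) + i(ad + bc) rule above applied at every output position.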

Unfortunately, it is impossible to find out which PyTorch version I used (no artefacts like requirements.txt were left) because at the time I did not have enough experience to write reproducible code.
In any case, feel free to share your errors and we will try to solve them; it could be useful for others.

@JangraManisha

JangraManisha commented Nov 29, 2024

Hi

I am writing about the errors I got while reproducing the code:

  1. An error required setting return_complex=True in the torch.stft call. (Done.)
  2. With return_complex=True, the next error asked for the bias to also be of a complex dtype. (Done by setting bias=False when defining the Conv2d layer.)
  3. The next error required setting the dtype of the Conv2d layer:
     self.real_conv = nn.Conv2d(in_channels=self.in_channels,
                                out_channels=self.out_channels,
                                kernel_size=self.kernel_size,
                                padding=self.padding,
                                stride=self.stride,
                                bias=False,
                                dtype=torch.cfloat)
     I don't remember the exact error message.
  4. Now I am getting a size mismatch between the kernel and the input: "given groups=1, weight of size [45, 1, 7, 5], expected input[1, 2, 1, 1539] to have 1 channels, but got 2 channels instead". I am stuck here. After the dataloader, the input shape that I am getting is torch.Size([2, 1, 1539, 214]). Can anyone help?
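Not a fix I can verify against this repo, but a small shape sketch (with assumed array names, in NumPy so it stands alone) of the channel arithmetic behind that error message: a convolution built with in_channels=1 accepts real and imaginary parts as separate 1-channel inputs, while stacking them along the channel axis produces the 2-channel input the error complains about.

```python
import numpy as np

# Hypothetical batch of 2 complex spectrograms, 1 channel each,
# matching the reported dataloader shape (2, 1, 1539, 214).
spec = (np.random.randn(2, 1, 1539, 214)
        + 1j * np.random.randn(2, 1, 1539, 214))

# Keeping real and imaginary parts as SEPARATE 1-channel inputs
# preserves the shape a Conv2d with in_channels=1 expects.
x_real = spec.real
x_imag = spec.imag
assert x_real.shape == (2, 1, 1539, 214)
assert x_imag.shape == (2, 1, 1539, 214)

# Stacking them along the channel axis doubles the channel count,
# which is what triggers "expected input ... to have 1 channels,
# but got 2 channels instead".
stacked = np.concatenate([x_real, x_imag], axis=1)
assert stacked.shape == (2, 2, 1539, 214)
```

So it may be worth checking where in the pipeline the complex tensor is split into real/imaginary parts, and whether that split lands on the channel dimension before a layer that was defined with in_channels=1.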
