Notebook tweaks for Google Colab #7

Open · woctezuma opened this issue Dec 15, 2021 · 6 comments

Comments

woctezuma commented Dec 15, 2021

For info, on Google Colab, the provided notebook examples/sampling_interactive_demo.ipynb has to be slightly edited.

One has to:

  • toggle ON the GPU usage (a quick check is sketched at the end of this comment),
  • run the following cell at the top of the notebook:
%cd /content
!git clone https://github.com/kakaobrain/minDALL-E.git
%cd /content/minDALL-E
%pip install -q pytorch-lightning omegaconf einops tokenizers
%pip install -q git+https://github.com/openai/CLIP.git

I could have run:

%pip install -q -r requirements.txt

However, it takes a long time for no added value, as some packages are already installed on Colab.
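
A minimal sanity check, not part of the original notebook, to confirm that the GPU runtime is actually active before downloading the ~4.7 GB of checkpoints (torch is already pre-installed on Colab):

# Check that the Colab runtime actually exposes a GPU.
import torch

if torch.cuda.is_available():
    # e.g. "Tesla K80" or "Tesla T4", depending on what Colab assigns
    print("GPU:", torch.cuda.get_device_name(0))
else:
    print("No GPU detected; enable it via Runtime > Change runtime type.")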


woctezuma commented Dec 15, 2021

Sadly, Google Colab runs out of memory. :'(

100%|█████████████████████████████████████| 4.72G/4.72G [04:52<00:00, 17.3MiB/s]
extracting: ./1.3B/tokenizer/bpe-16k-vocab.json (size:0MB): 100%|██████████| 7/7 [01:34<00:00, 13.45s/it]

/root/.cache/minDALL-E/1.3B/tokenizer successfully restored..
/root/.cache/minDALL-E/1.3B/stage1_last.ckpt successfully restored..
/root/.cache/minDALL-E/1.3B/stage2_last.ckpt succesfully restored..

100%|████████████████████████████████████████| 338M/338M [00:03<00:00, 114MiB/s]
SEED: 0
Softmax Temperature: 1.0
Top-K: 16
Text prompt: A painting of a monkey with sunglasses in the frame

  0%|          | 0/256 [00:00<?, ?it/s]

  0%|          | 0/256 [00:16<?, ?it/s]


---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-11-f720bdfc32bb> in btn_eventhandler(obj)
     17 
     18     with plot_output:
---> 19         sampling(prompt=wd_text.value, top_k=slider_topk.value, softmax_temperature=slider_temp.value, seed=slider_seed.value)
     20 
     21 slider_seed = widgets.IntSlider(

12 frames

/usr/local/lib/python3.7/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1846     if has_torch_function_variadic(input, weight, bias):
   1847         return handle_torch_function(linear, (input, weight, bias), input, weight, bias=bias)
-> 1848     return torch._C._nn.linear(input, weight, bias)
   1849 
   1850 

RuntimeError: CUDA out of memory. Tried to allocate 72.00 MiB (GPU 0; 11.17 GiB total capacity; 10.34 GiB already allocated; 10.81 MiB free; 10.65 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation.  See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF

I suspect one has to decrease the number of candidates (num_candidates), as in #1, or num_samples_for_display.
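
For what it's worth, the error message itself suggests an allocator tweak; here is a minimal sketch of it (the value 128 is an arbitrary example, and the variable must be set before the first CUDA allocation), although on an ~11 GiB GPU the main lever is still reducing the number of generated candidates:

# Optional mitigation suggested by the error message: cap the allocator's
# split size to reduce fragmentation. Put this at the very top of the notebook.
import os
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:128"  # arbitrary example value

# When re-running the sampling cell, releasing cached blocks can also help.
import torch
torch.cuda.empty_cache()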


woctezuma commented Dec 15, 2021

I have set both values to 16 to give it a try:

    with plot_output:
        sampling(prompt=wd_text.value, 
                 top_k=slider_topk.value,
                 softmax_temperature=slider_temp.value,
                 seed=slider_seed.value,
                 num_candidates=16,
                 num_samples_for_display=16)

There is another error on Colab which is triggered when executing:

display(plot_output)

 98%|█████████▊| 250/256 [03:28<00:06,  1.01s/it]

 98%|█████████▊| 251/256 [03:29<00:05,  1.02s/it]

 98%|█████████▊| 252/256 [03:30<00:04,  1.02s/it]

 99%|█████████▉| 253/256 [03:31<00:03,  1.02s/it]

 99%|█████████▉| 254/256 [03:32<00:02,  1.02s/it]

100%|█████████▉| 255/256 [03:33<00:01,  1.02s/it]

100%|██████████| 256/256 [03:34<00:00,  1.02s/it]

100%|██████████| 256/256 [03:34<00:00,  1.19it/s]


---------------------------------------------------------------------------

RuntimeError                              Traceback (most recent call last)

<ipython-input-4-d07aad4f4bbc> in btn_eventhandler(obj)
     17 
     18     with plot_output:
---> 19         sampling(prompt=wd_text.value, top_k=slider_topk.value, softmax_temperature=slider_temp.value, seed=slider_seed.value, num_candidates=16, num_samples_for_display=16)
     20 
     21 slider_seed = widgets.IntSlider(

7 frames

/usr/local/lib/python3.7/dist-packages/torch/nn/modules/conv.py in _conv_forward(self, input, weight, bias)
    441                             _pair(0), self.dilation, self.groups)
    442         return F.conv2d(input, weight, bias, self.stride,
--> 443                         self.padding, self.dilation, self.groups)
    444 
    445     def forward(self, input: Tensor) -> Tensor:

RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED

I guess I really did have to run:

%pip install -q -r requirements.txt
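
For reference, a minimal sketch (using only standard PyTorch calls) for printing the CUDA/cuDNN/PyTorch combination that this kind of cuDNN error usually hinges on:

# Print the versions that need to be mutually compatible; compare them with
# the Environment Setup section of the README.
import torch

print("PyTorch:", torch.__version__)
print("CUDA (build):", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("GPU available:", torch.cuda.is_available())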

LeeDoYup (Contributor) commented Dec 15, 2021

Thanks for your interest in our project! You can adjust the number of candidates for image generation according to the memory size of the GPU you use (please refer to #1).

As you mention, RuntimeError: cuDNN error: CUDNN_STATUS_NOT_INITIALIZED often occurs when the versions of CUDA, cuDNN, and PyTorch are not compatible. Please follow the instructions in the Environment Setup section of README.md.
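
For illustration only (this is not from the repository, and the thresholds below are guesses rather than measured values), that advice could translate to something like:

# Pick num_candidates from the GPU's total memory; 16 is what fit in the
# experiments above on an ~11 GiB K80, and larger GPUs can afford more.
import torch

total_gib = torch.cuda.get_device_properties(0).total_memory / 1024**3
num_candidates = 16 if total_gib < 16 else 64  # illustrative thresholds only
print(f"{total_gib:.1f} GiB GPU -> num_candidates = {num_candidates}")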


ouhenio commented Dec 16, 2021

Here is a working Colab notebook, in case someone is looking for one.


woctezuma commented Dec 16, 2021

Nice job! :) Sadly, I have only had a K80 GPU on Colab for my few tries, so I cannot try the notebook right now.

I won't close the issue for now, so that you get more visibility! :D

@annasajkh

Here is my notebook: https://colab.research.google.com/drive/1TqH6rQy_SQULFURIssVm8RLvi7tLRLFR?usp=sharing. It works on a K80; it has batching and generates 16 images every 3 minutes.
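
A rough sketch of the batching idea (hypothetical names only; the linked notebook's actual implementation may differ): generate the candidates in small chunks so each forward pass fits in K80 memory, then concatenate the results.

import numpy as np

def sample_in_batches(sample_fn, prompt, num_candidates=16, batch_size=4, **kwargs):
    """Call a sampling function in chunks and stack the generated images.

    `sample_fn` is assumed to accept `prompt` and `num_candidates` and to
    return an array of images with the batch dimension first.
    """
    chunks = []
    for start in range(0, num_candidates, batch_size):
        n = min(batch_size, num_candidates - start)
        chunks.append(sample_fn(prompt=prompt, num_candidates=n, **kwargs))
    return np.concatenate(chunks, axis=0)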
