
ValueError: The hardcoded shape for the number of rows in the image (8) isn't the run time shape (7) #21

Open
gom7745 opened this issue Nov 16, 2016 · 2 comments


gom7745 commented Nov 16, 2016

Error messages are shown below:

ERROR:blocks.main_loop:Error occured during training.

Blocks will attempt to run `on_error` extensions, potentially saving data, before exiting and reraising the error. Note that the usual `after_training` extensions will *not* be run. The original error will be re-raised and also stored in the training log. Press CTRL + C to halt Blocks immediately.
Traceback (most recent call last):
  File "./run.py", line 652, in <module>
    if train(d) is None:
  File "./run.py", line 501, in train
    main_loop.run()
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/main_loop.py", line 197, in run
    reraise_as(e)
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/utils/__init__.py", line 258, in reraise_as
    six.reraise(type(new_exc), new_exc, orig_exc_traceback)
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/main_loop.py", line 183, in run
    while self._run_epoch():
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/main_loop.py", line 232, in _run_epoch
    while self._run_iteration():
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/main_loop.py", line 253, in _run_iteration
    self.algorithm.process_batch(batch)
  File "/home/alexchang/ENV/local/lib/python2.7/site-packages/blocks/algorithms/__init__.py", line 287, in process_batch
    self._function(*ordered_batch)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 871, in __call__
    storage_map=getattr(self.fn, 'storage_map', None))
  File "/usr/local/lib/python2.7/dist-packages/theano/gof/link.py", line 314, in raise_with_op
    reraise(exc_type, exc_value, exc_trace)
  File "/usr/local/lib/python2.7/dist-packages/theano/compile/function_module.py", line 859, in __call__
    outputs = self.fn()
ValueError: The hardcoded shape for the number of rows in the image (8) isn't the run time shape (7).
Apply node that caused the error: ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}(Elemwise{Composite{(i0 + (i1 * i2))}}[(0, 2)].0, f_9_W)
Toposort index: 1201
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(200, 192, 7, 7), (192, 192, 3, 3)]
Inputs strides: [(37632, 196, 28, 4), (6912, 36, 12, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Subtensor{int64::}(ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}.0, ScalarFromTensor.0), Subtensor{:int64:}(ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}.0, ScalarFromTensor.0)]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
  File "./run.py", line 652, in <module>
    if train(d) is None:
  File "./run.py", line 411, in train
    ladder = setup_model(p)
  File "./run.py", line 182, in setup_model
    ladder.apply(x, y, x_only)
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 203, in apply
    noise_std=self.p.f_local_noise_std)
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 185, in encoder
    noise_std=noise)
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 350, in f
    z, output_size = self.f_conv(h, spec, in_dim, gen_id('W'))
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 452, in f_conv
    filter_size), border_mode=bm)

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Original exception:
        ValueError: The hardcoded shape for the number of rows in the image (8) isn't the run time shape (7).
Apply node that caused the error: ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}(Elemwise{Composite{(i0 + (i1 * i2))}}[(0, 2)].0, f_9_W)
Toposort index: 1201
Inputs types: [TensorType(float32, 4D), TensorType(float32, 4D)]
Inputs shapes: [(200, 192, 7, 7), (192, 192, 3, 3)]
Inputs strides: [(37632, 196, 28, 4), (6912, 36, 12, 4)]
Inputs values: ['not shown', 'not shown']
Outputs clients: [[Subtensor{int64::}(ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}.0, ScalarFromTensor.0), Subtensor{:int64:}(ConvOp{('imshp', (192, 8, 8)),('kshp', (3, 3)),('nkern', 192),('bsize', 200),('dx', 1),('dy', 1),('out_mode', 'valid'),('unroll_batch', 5),('unroll_kern', 2),('unroll_patch', False),('imshp_logical', (192, 8, 8)),('kshp_logical', (3, 3)),('kshp_logical_top_aligned', True)}.0, ScalarFromTensor.0)]]

Backtrace when the node is created(use Theano flag traceback.limit=N to make it longer):
  File "./run.py", line 652, in <module>
    if train(d) is None:
  File "./run.py", line 411, in train
    ladder = setup_model(p)
  File "./run.py", line 182, in setup_model
    ladder.apply(x, y, x_only)
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 203, in apply
    noise_std=self.p.f_local_noise_std)
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 350, in f
    z, output_size = self.f_conv(h, spec, in_dim, gen_id('W'))
  File "/home/alexchang/Course_105_1/ML/ML2016/ladder_og/ladder.py", line 452, in f_conv
    filter_size), border_mode=bm)

HINT: Use the Theano flag 'exception_verbosity=high' for a debugprint and storage map footprint of this apply node.

Do you have any idea what might be causing this?
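For what it's worth, a 7-vs-8 mismatch like this often comes from pooling arithmetic: on an odd-sized feature map, Theano's pooling with `ignore_border=True` floors the output size, while a hardcoded `imshp` computed elsewhere may have used a ceiling, so the compiled ConvOp expects 8x8 but receives 7x7 at run time. A minimal sketch of the off-by-one (the 15x15 input size is hypothetical, chosen only to illustrate the two rounding rules):

```python
import math

def pool_out(size, pool=2, stride=2, ignore_border=True):
    """Spatial output size of a 1-D pooling window.

    ignore_border=True drops the partial window at the edge (floor),
    ignore_border=False keeps it (ceil), mirroring Theano's pooling modes.
    """
    if ignore_border:
        return (size - pool) // stride + 1
    return int(math.ceil((size - pool) / float(stride))) + 1

print(pool_out(15, ignore_border=True))   # 7
print(pool_out(15, ignore_border=False))  # 8
```

If that is what is happening here, recomputing the hardcoded shapes with the same rounding rule the pooling layers actually use (or padding the inputs to even sizes) should make the compiled and run-time shapes agree.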

ajaynagesh commented Dec 3, 2017

I am facing the same issue when running the CIFAR-10 experiments. Any help in resolving this would be highly appreciated. Thanks.

@ajaynagesh

The fix is mentioned in this issue. Thanks!
