tensorflow.python.framework.errors_impl.InvalidArgumentError: padded_shape[1]=128 is not divisible by block_shape[1]=12 #3695


Closed
GeorgeBohw opened this issue Mar 22, 2018 · 15 comments

Comments

@GeorgeBohw


System information

  • What is the top-level directory of the model you are using: deeplabv3_pascal_trainval
  • Have I written custom code (as opposed to using a stock example script provided in TensorFlow): No
  • OS Platform and Distribution (e.g., Linux Ubuntu 16.04): Linux Ubuntu 16.04
  • TensorFlow installed from (source or binary): binary
  • TensorFlow version (use command below): 1.6
  • Bazel version (if compiling from source):
  • CUDA/cuDNN version: 9.0/7.0
  • GPU model and memory: 11G
  • Exact command to reproduce:


Describe the problem

When I run the "Jupyter notebook for off-the-shelf inference", I set INPUT_SIZE=769, which tells the code to resize the input image. When I run the code, this error comes out:

Caused by op u'aspp1_depthwise/depthwise/SpaceToBatchND', defined at:
File "mytry.py", line 130, in
model = DeepLabModel(download_path)
File "mytry.py", line 98, in init
tf.import_graph_def(graph_def, name='')
File "/home/george/anaconda2/lib/python2.7/site-packages/tensorflow/python/util/deprecation.py", line 432, in new_func
return func(*args, **kwargs)
File "/home/george/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/importer.py", line 553, in import_graph_def
op_def=op_def)
File "/home/george/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 3271, in create_op
op_def=op_def)
File "/home/george/anaconda2/lib/python2.7/site-packages/tensorflow/python/framework/ops.py", line 1650, in init
self._traceback = self._graph._extract_stack() # pylint: disable=protected-access

InvalidArgumentError (see above for traceback): padded_shape[1]=128 is not divisible by block_shape[1]=12
[[Node: aspp1_depthwise/depthwise/SpaceToBatchND = SpaceToBatchND[T=DT_FLOAT, Tblock_shape=DT_INT32, Tpaddings=DT_INT32, _device="/job:localhost/replica:0/task:0/device:GPU:0"](xception_65/exit_flow/block2/unit_1/xception_module/separable_conv3_pointwise/Relu, aspp1_depthwise/depthwise/SpaceToBatchND/block_shape, aspp1_depthwise/depthwise/SpaceToBatchND/paddings)]]
[[Node: ArgMax/_37 = _Recvclient_terminated=false, recv_device="/job:localhost/replica:0/task:0/device:CPU:0", send_device="/job:localhost/replica:0/task:0/device:GPU:0", send_device_incarnation=1, tensor_name="edge_1637_ArgMax", tensor_type=DT_INT64, _device="/job:localhost/replica:0/task:0/device:CPU:0"]]

Can't I process images of arbitrary size?
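For reference, the divisibility check that the SpaceToBatchND op performs can be reproduced in plain Python. This is a minimal sketch (the function name is mine, not TensorFlow's); the numbers come from the error message above, where the padded feature-map dimension is 128 and block_shape (the atrous rate) is 12:

```python
def space_to_batch_padded_dim(input_dim, pad_before, pad_after, block):
    """Mimic SpaceToBatchND's per-dimension constraint: the padded
    spatial dimension must divide evenly by the block size."""
    padded = input_dim + pad_before + pad_after
    if padded % block != 0:
        raise ValueError(
            f"padded_shape={padded} is not divisible by block_shape={block}")
    return padded // block

# The failing case from the traceback:
# space_to_batch_padded_dim(128, 0, 0, 12)  # raises ValueError
# A padded size that is a multiple of 12 would pass:
# space_to_batch_padded_dim(120, 0, 0, 12)  # -> 10
```

This is why the exported graph only accepts input sizes whose downstream feature maps (after padding) are multiples of the atrous rates baked into the graph.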

Source code / logs


@GeorgeBohw
Author

How can I use the model to predict on a bigger image, like 720p?

@GeorgeBohw
Author

Do I need to train using images of the same size, like 720p?

@GeorgeBohw
Author

I have resolved it.
For now, I think you should fine-tune the model to make it fit large images, but the m_iou will decrease.

@Aliennandy

Hi George ... Could you please elaborate on how you resolved it? I am stuck here!

@lxs802lxs8858

> I have resolved it.
> For now, I think you should fine-tune the model to make it fit large images, but the m_iou will decrease.

Hi, would you like to share how you solved this problem?

@lxs802lxs8858

> Hi George ... Could you please elaborate on how to resolve it? I am stuck here!

Did you solve it? I also have this problem.

@aliennandy13

No, I couldn't.

@northeastsquare

northeastsquare commented Apr 25, 2019

I met this problem after training a deeplab model.
It is caused by different image sizes at train and test time.
For example, train with (513, 513): in input_preprocess.py, the preprocess_image_and_label function's branch

if is_training and label is not None

will crop the image, but in the test phase is_training is False, so the image size is not changed.
My temporary solution is to add an else branch:

else:
  rr = tf.minimum(tf.cast(crop_height, tf.float32) / tf.cast(image_height, tf.float32),
                  tf.cast(crop_width, tf.float32) / tf.cast(image_width, tf.float32))
  newh = tf.cast(tf.cast(image_height, tf.float32) * rr, tf.float32)
  neww = tf.cast(tf.cast(image_width, tf.float32) * rr, tf.float32)
  processed_image = tf.image.resize_images(
      processed_image, (newh, neww),
      method=tf.image.ResizeMethod.BILINEAR, align_corners=True)
  processed_image = preprocess_utils.pad_to_bounding_box(
      processed_image, 0, 0, crop_height, crop_width, mean_pixel)

*In detail, if you choose xception, in xception.py:
inputs = fixed_padding(inputs, kernel_size, rate)

@muxizju

muxizju commented Apr 30, 2019

> (@northeastsquare's workaround quoted above)

It works for deeplab's eval.py and vis.py!
But there is one mistake: the casts of newh and neww should be to tf.int32.
wrong:

newh = tf.cast(tf.cast(image_height, tf.float32)*rr, tf.float32) 
neww = tf.cast((tf.cast(image_width, tf.float32)*rr), tf.float32)

right:

newh = tf.cast(tf.cast(image_height, tf.float32)*rr, tf.int32) 
neww = tf.cast((tf.cast(image_width, tf.float32)*rr), tf.int32)

Also, the visualized result is the cropped region of the original image, which may not be what the author intended. Here is a reference with the author's explanation:
https://github.com/tensorflow/models/issues/3939
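Putting the correction together, the scale-to-fit arithmetic of the else branch can be sanity-checked in plain Python (a sketch with a made-up function name; the real code uses TF ops, and `int()` here stands in for the tf.int32 cast):

```python
def fit_to_crop(image_height, image_width, crop_height, crop_width):
    """Shrink by the smaller of the two ratios so the image fits
    inside the crop, returning integer output dimensions."""
    rr = min(crop_height / image_height, crop_width / image_width)
    newh = int(image_height * rr)  # tf.int32 cast truncates the same way
    neww = int(image_width * rr)
    return newh, neww

# e.g. a 720p frame fitted into a 513x513 crop keeps its aspect ratio;
# the remainder of the 513x513 canvas is then filled by pad_to_bounding_box.
# fit_to_crop(1024, 2048, 512, 512)  # -> (256, 512)
```

With the original tf.float32 casts, resize_images receives non-integer sizes and fails, which is why the tf.int32 correction above is needed.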

@wszwedaEP

@northeastsquare , @muxizju
Big thanks guys, it works now. After hours of struggling, your else branch did the job :)

@mohhao

mohhao commented Jul 25, 2019

> (@northeastsquare's workaround quoted above)

What does "In detail" mean? Do I need to change something?

@rishab-sharma

@mohhao Did you figure out what to do with the xception.py file?

@rishab-sharma

@northeastsquare Can you explain the changes to be made to the xception.py file?

@ardila

ardila commented Apr 30, 2020

I think instead of doing this hack you should just set the min_size and max_size flags both to the crop size. And keep the default flag value for preserving the aspect ratio.

This will result in scaling the larger dimension to the crop size, then padding to fill the rest.

@ardila

ardila commented Apr 30, 2020

*I mean set the flags

--min_resize_value=513
--max_resize_value=513
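What those flags do can be sketched in plain Python. This is an illustration of the resize-then-pad behavior described above (scale the larger side to the target while preserving aspect ratio, then pad the rest), not the actual deeplab implementation:

```python
def resize_and_pad_dims(height, width, target=513):
    """With min/max resize both set to target and aspect ratio
    preserved, the larger side is scaled to target and the result
    is padded out to a target x target canvas."""
    scale = target / max(height, width)
    newh, neww = int(height * scale), int(width * scale)
    pad_h, pad_w = target - newh, target - neww
    return (newh, neww), (pad_h, pad_w)

# e.g. a wide frame is scaled so its longer side becomes 513,
# then padded vertically to 513:
# resize_and_pad_dims(513, 1026)  # -> ((256, 513), (257, 0))
```

This achieves the same effect as the else-branch hack above, but through the existing preprocessing flags rather than a code change.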


10 participants