
Tensorflow backend - bug in model._make_predict_function(...) #2397

Closed
@Froskekongen

Description


There appears to be a bug in _make_predict_function for the TensorFlow backend. The following error message appears when I try to call model.predict(...):

self._make_predict_function()
  File "/usr/local/lib/python3.4/dist-packages/keras/engine/training.py", line 679, in _make_predict_function
    **self._function_kwargs)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 615, in function
    return Function(inputs, outputs, updates=updates)
  File "/usr/local/lib/python3.4/dist-packages/keras/backend/tensorflow_backend.py", line 589, in __init__
    with tf.control_dependencies(self.outputs):
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 3192, in control_dependencies
    return get_default_graph().control_dependencies(control_inputs)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2993, in control_dependencies
    c = self.as_graph_element(c)
  File "/usr/local/lib/python3.4/dist-packages/tensorflow/python/framework/ops.py", line 2291, in as_graph_element
    raise ValueError("Tensor %s is not an element of this graph." % obj)
ValueError: Tensor Tensor("Sigmoid_2:0", shape=(?, 17), dtype=float32) is not an element of this graph.

This does not happen when using the Theano backend.

Notes: the model is loaded from JSON and is defined as follows:

    # Keras 1.x API (current at the time of this issue)
    from keras.layers import (Input, Embedding, Convolution1D, Lambda,
                              Dense, Dropout, Highway, merge)
    from keras.models import Model
    from keras import backend as K

    # max_features, embedding_dims, nb_filter, labsHovedKat and labsUnderKat
    # are defined earlier in the script
    seq1 = Input(dtype='int32', shape=(400,), name='input_text')
    seq2 = Input(dtype='int32', shape=(20,), name='input_titles')

    # both inputs share one embedding layer
    embedding = Embedding(max_features, embedding_dims, dropout=0.3)

    encoding_1 = embedding(seq1)
    encoding_2 = embedding(seq2)

    filter_lengths = [1, 3, 6]

    # global max pooling over the time dimension
    def max_1d(X):
        return K.max(X, axis=1)

    convs1 = []
    convs2 = []
    for fl in filter_lengths:
        conv1 = Convolution1D(nb_filter=nb_filter,
                              filter_length=fl,
                              border_mode='valid',
                              activation='relu',
                              subsample_length=1)(encoding_1)
        conv1 = Lambda(max_1d, output_shape=(nb_filter,))(conv1)
        convs1.append(conv1)

        conv2 = Convolution1D(nb_filter=nb_filter,
                              filter_length=fl,
                              border_mode='valid',
                              activation='relu',
                              subsample_length=1)(encoding_2)
        conv2 = Lambda(max_1d, output_shape=(nb_filter,))(conv2)
        convs2.append(conv2)

    # list concatenation instead of [*convs1, *convs2], which is a
    # SyntaxError on the Python 3.4 shown in the traceback
    m = merge(convs1 + convs2, mode='concat')
    m = Highway(activation='relu')(m)
    m = Highway(activation='relu')(m)
    m = Dropout(0.5)(m)
    hovedkategori_loss = Dense(labsHovedKat.shape[1], activation='sigmoid',
                               name='hovedkategori')(m)

    m1 = merge([hovedkategori_loss, m], mode='concat')
    underkategori_loss = Dense(labsUnderKat.shape[1], activation='sigmoid',
                               name='underkategori')(m1)

    model = Model(input=[seq1, seq2], output=[hovedkategori_loss, underkategori_loss])
    model.compile(optimizer='adam', loss='binary_crossentropy',
                  metrics={'hovedkategori': 'accuracy', 'underkategori': 'accuracy'})
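
For completeness, the save/reload path that leads to the error looks roughly like this (a minimal sketch; the file names and the X_text/X_titles arrays are placeholders, not from my actual script):

    from keras.models import model_from_json

    # serialize the architecture to JSON and the weights to HDF5
    with open('model.json', 'w') as f:
        f.write(model.to_json())
    model.save_weights('model_weights.h5')

    # ... later, in the serving process ...
    with open('model.json') as f:
        model = model_from_json(f.read())
    model.load_weights('model_weights.h5')

    # this is the call that raises the ValueError above
    preds = model.predict([X_text, X_titles])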
  • Check that you are up-to-date with the master branch of Keras. You can update with:
    pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps
  • If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
    pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

Activity

Froskekongen (Author) commented on Apr 19, 2016

I would appreciate any comments on this issue, as I want to deploy the model ASAP and need to know whether I can use it or should code something else.

fchollet (Collaborator) commented on Apr 19, 2016

Do you have a code snippet to reproduce this issue? I can guarantee you that predict does in fact work, including with TensorFlow.

Froskekongen (Author) commented on Apr 20, 2016

It appears this bug had nothing to do with either Keras or TensorFlow, but rather with how async events were handled by the webserver I am using.

jstypka (Contributor) commented on May 9, 2016

@Froskekongen could you describe how you fixed this in more detail? I'm having exactly the same error, though in a different program.

It seems to work when I do it manually in a REPL; however, when I deploy it as a webservice, it breaks.

pxlong commented on May 13, 2016

I also have the same error under the TensorFlow backend; however, it works using the Theano backend.
@jstypka @Froskekongen Have you found a solution to fix it?

jstypka (Contributor) commented on May 13, 2016

@pxlong it also works on Theano for me, so I think it's exactly the same problem. I didn't manage to solve it, though; I was hoping for some hints from @Froskekongen.

rkempter commented on May 27, 2016

Same here, same issue! Works fine in a REPL; issues when running it behind a webservice.

rkempter commented on May 27, 2016

Running the webservice with gunicorn in sync mode solved the issue.
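
For anyone else hitting this, the invocation is just something like the following (app:app is a placeholder for your WSGI module and application object):

    gunicorn --workers 4 --worker-class sync app:app

sync is gunicorn's default worker class, so the point is simply to avoid async workers such as gevent.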

gladuo commented on Aug 2, 2016

Hey everybody, I'm still not sure what's wrong with this combination, but I used meinheld instead and it works even better than gevent.
Hope this helps.
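
If you want to try the same, meinheld ships a gunicorn worker class; something like this should work (assuming meinheld is pip-installed and app:app names your WSGI app):

    gunicorn --workers 4 --worker-class egg:meinheld#gunicorn_worker app:app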

AbhishekAshokDubey commented on Aug 16, 2016

Same problem (model.predict breaking) for me too, but it worked when I switched to the Theano backend from TensorFlow.

Nr90 commented on Aug 26, 2016

Same problem here.
It seems to work fine normally, but when deployed as a webservice using Flask, I get this error.

Nr90 commented on Aug 27, 2016

Works when using Theano as the backend; doesn't work with TensorFlow.

avital commented on Oct 19, 2016

I had this problem when doing inference in a different thread than the one where I loaded my model. Here's how I fixed it:

Right after loading or constructing your model, save the TensorFlow graph:

    import tensorflow as tf

    graph = tf.get_default_graph()

In the other thread (or perhaps in an asynchronous event handler), do:

    global graph
    with graph.as_default():
        (... do inference here ...)

I learned about this from https://www.tensorflow.org/versions/r0.11/api_docs/python/framework.html#get_default_graph
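
Put together as a minimal Flask sketch (Flask, the model.h5 file name, and the /predict route are my assumptions for illustration, not something prescribed by Keras):

    import numpy as np
    import tensorflow as tf
    from flask import Flask, jsonify, request
    from keras.models import load_model

    app = Flask(__name__)

    # load the model once at startup and remember the graph it was built in
    model = load_model('model.h5')  # hypothetical file name
    graph = tf.get_default_graph()

    @app.route('/predict', methods=['POST'])
    def predict():
        inputs = np.array(request.get_json()['inputs'])
        # the handler may run in a different thread than the one that
        # loaded the model, so re-enter the original graph
        with graph.as_default():
            preds = model.predict(inputs)
        return jsonify(predictions=preds.tolist())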

Walid-Ahmed commented on Nov 4, 2016

Thanks a lot, it worked for me.

157 remaining items
