Freezing a BatchNormalization layer by setting trainable = False does not work. Weights are still being updated.  #7085

Closed

Description (opened by @afourast)

The issue seems to be that the updates created when the BatchNorm layer is applied are added to the train function even when they act on non-trainable weights.

Gist with code to reproduce:
https://gist.github.com/afourast/0d7545174c1b8fb7b0f82d7efbf31743
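A minimal illustration of the symptom (a sketch, not the linked gist; it assumes the TF backend and a pre-2.1.3 Keras, and the model below is just a placeholder):

# Sketch of the reported behaviour: mark a BN layer non-trainable, train,
# and check whether its weights changed.
import numpy as np
from keras.models import Sequential
from keras.layers import Dense, BatchNormalization

model = Sequential([
    Dense(8, input_shape=(4,)),
    BatchNormalization(trainable=False),  # intended to be frozen
    Dense(1),
])
model.compile(optimizer='sgd', loss='mse')

bn = model.layers[1]
before = [w.copy() for w in bn.get_weights()]  # [gamma, beta, moving_mean, moving_variance]
model.fit(np.random.rand(64, 4), np.random.rand(64, 1), epochs=1, verbose=0)
after = bn.get_weights()

# On affected (pre-2.1.3) versions the last two arrays (the moving mean and
# variance) change even though the layer was marked non-trainable.
for b, a in zip(before, after):
    print(np.allclose(b, a))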

Please make sure that the boxes below are checked before you submit your issue. If your issue is an implementation question, please ask your question on StackOverflow or join the Keras Slack channel and ask there instead of filing a GitHub issue.

Thank you!

  • Check that you are up-to-date with the master branch of Keras. You can update with:
    pip install git+git://github.com/fchollet/keras.git --upgrade --no-deps

  • If running on TensorFlow, check that you are up-to-date with the latest version. The installation instructions can be found here.

  • If running on Theano, check that you are up-to-date with the master branch of Theano. You can update with:
    pip install git+git://github.com/Theano/Theano.git --upgrade --no-deps

  • Provide a link to a GitHub Gist of a Python script that can reproduce your issue (or just copy the script here if it is short).

Activity

Danielhiversen (Contributor) commented on Jun 22, 2017

hanzy123 commented on Jun 24, 2017

The BN layer has four sets of weights: alpha and beta (the learned scale and offset) plus the moving mean and variance. The trainable attribute works for alpha and beta, but the moving mean and variance cannot be made non-trainable. So even if you set trainable = False on the BN layer, it will not work.
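A quick way to see this (a sketch, assuming the TF backend and Keras 2.x; in Keras the scale/offset weights are actually named gamma and beta):

# Sketch: inspect which BatchNormalization weights respond to `trainable`.
from keras.layers import BatchNormalization, Input

inp = Input(shape=(8,))
bn = BatchNormalization()
out = bn(inp)

print([w.name for w in bn.trainable_weights])      # gamma, beta
print([w.name for w in bn.non_trainable_weights])  # moving_mean, moving_variance

bn.trainable = False
print([w.name for w in bn.trainable_weights])      # [] -> gamma and beta are frozen
print(len(bn.updates))  # the moving-statistics updates are still registered (2)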

ahundt (Contributor) commented on Jun 24, 2017

Locking BN is a technique used in segmentation, especially with fine-tuning, so I think this is an option we need to add if it does not currently exist.

stale commented on Sep 22, 2017

This issue has been automatically marked as stale because it has not had recent activity. It will be closed after 30 days if no further activity occurs, but feel free to re-open a closed issue if needed.

nikita-smetanin commented on Oct 11, 2017

Is there any workaround available for this issue? It turns into hell when you're trying to use a single shared model body (a pre-trained model with trainable=False) inside multiple models that are trained on different datasets.

afourast (Author) commented on Oct 11, 2017

If you are willing to give up the Keras training routines (fit, train_on_batch, fit_generator, etc.), you can define a custom training function the way Keras does internally (https://github.com/fchollet/keras/blob/d3db58c8bf1ef9d078b3cf5d828d22346df6469b/keras/engine/training.py#L948), but without including the BN statistics updates. For example:

https://gist.github.com/afourast/018722309ac2a272ce4985190ec52742

In my models there are no other updates added to the 'updates' list besides the BN statistics, but you should check that for your own models. Also, I have only tested this with the TF backend. You will have to define a similar function for testing.
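For reference, here is a minimal sketch of the idea (not the gist's exact code; it assumes a compiled Keras ~2.0.x model and the TF backend, and internal attribute names such as _feed_inputs and the get_updates() signature differ across Keras versions):

import keras.backend as K

def make_train_function(model):
    # Inputs: data, targets, sample weights and (if used) the learning phase.
    inputs = model._feed_inputs + model._feed_targets + model._feed_sample_weights
    if model.uses_learning_phase and not isinstance(K.learning_phase(), int):
        inputs += [K.learning_phase()]

    # Only the optimizer's gradient updates are applied here. model.updates
    # (which holds the BN moving-statistics updates) is deliberately left
    # out, so frozen BN layers keep their statistics.
    training_updates = model.optimizer.get_updates(
        model._collected_trainable_weights,
        model.constraints,
        model.total_loss)

    return K.function(inputs,
                      [model.total_loss] + model.metrics_tensors,
                      updates=training_updates)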

I've personally found using a custom training function convenient, since you can also do things like use TF queues and add TensorBoard summaries.

Hope that helps

nikita-smetanin commented on Oct 11, 2017

Thanks, but it seems too tricky for my case.

I've found another workaround: you can set layer._per_input_updates = {} on the batch norm layers that shouldn't be updated during training. It actually works, those layers' weights stay the same, but it still looks like a dirty hack.

NakramR commented on Oct 29, 2017

I'm running into the same issue. Is there a technical reason it can't be fixed?

eclique commented on Oct 31, 2017

Also interested in the solution for this.

hanzy123 commented on Oct 31, 2017

If you look into it, the BatchNormalization layer is built from four smaller weight sets: the alpha and beta weights plus two other weight sets (the moving mean and variance). Your trainable = False setting only keeps two of those weight sets unchanged.

NakramR commented on Oct 31, 2017

Right, and can the alpha and beta layers be frozen, and if so how?

LazyMammal commented on Jan 15, 2018

Thank you @nsmetanin for the suggestion. This worked to freeze batch norm layers.

# Freeze the weights and drop the pending BN moving-statistics updates.
for layer in model.layers:
    layer.trainable = False
    if isinstance(layer, keras.layers.normalization.BatchNormalization):
        layer._per_input_updates = {}

ahundt (Contributor) commented on Jan 18, 2018

Freezing BN layers is now available in the most recent release: simply set trainable=False for the batchnorm layers.

https://github.com/keras-team/keras/releases/tag/2.1.3
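For instance, with Keras >= 2.1.3 something like the following should freeze both the learned BN weights and the moving statistics (a sketch; `model`, the optimizer and the loss are placeholders, and the model needs to be recompiled after changing trainable):

from keras.layers import BatchNormalization

# Freeze every BN layer; since Keras 2.1.3 this also stops the
# moving mean/variance from being updated during training.
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = False

# Recompile so the change to `trainable` takes effect.
model.compile(optimizer='sgd', loss='mse')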

ViaFerrata commented on Jan 22, 2018

Is there any way to get back the old behaviour?
I'd like to get the trainable weights frozen and the non-trainable weights unfrozen, as before.

ahundt (Contributor) commented on Jan 25, 2018

@ViaFerrata Sorry I don't have a real answer, perhaps look back in the commit history to see how it was done before?

fchollet (Collaborator) commented on Jan 25, 2018

ViaFerrata commented on Jan 25, 2018

Thanks a lot for the quick answer, I will try that :)

ozabluda (Contributor) commented on Jan 31, 2018

@ViaFerrata, just checking, what do you want the old behavior for?

ViaFerrata commented on Apr 12, 2018

@ozabluda Sorry for the late answer, didn't notice the reply.

Well, in my case I'm training two single CNNs on the same dataset, but with different projections of the input (my dataset is 6D).
Then I take the conv layers of both single CNNs, freeze them, and put them together in a new CNN by adding a new fully connected layer.
After that I just retrain the randomly initialized fully connected layers. In this case I noticed that stateful=False yields much worse results for the test loss than stateful=True.

So if I understand it correctly, the moving mean and variance are calculated from batch-level statistics during training, and for testing they are calculated based on the whole sample before testing starts.
However, if training is resumed in a new epoch with stateful=False, are the moving mean and variance then carried over from the last batch of the previous epoch?

ozabluda (Contributor) commented on Apr 12, 2018

Training never uses anything other than the current batch's statistics, but it updates the running averages for use at inference time. If training is resumed, I think the running averages are reset to zero.
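For reference, the running averages are exponential moving averages of the per-batch statistics, roughly like this (momentum is the BatchNormalization layer argument, 0.99 by default in Keras 2):

# Rough sketch of the per-batch update of the BN moving statistics.
moving_mean = momentum * moving_mean + (1.0 - momentum) * batch_mean
moving_variance = momentum * moving_variance + (1.0 - momentum) * batch_variance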

ViaFerrata commented on Apr 12, 2018

Thank you for the explanation, I've misread that in the paper. Then it makes sense that I get worse results with stateful=False during retraining.

zhimengfan1990 commented on Dec 27, 2018

From the code of Keras 2.1.3, what I see is training=False rather than trainable=False in the call() method of BN.
https://github.com/keras-team/keras/blob/2.1.3/keras/layers/normalization.py#L175
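Note that these are two different switches: trainable is a layer attribute that controls whether the layer's weights (and, since 2.1.3, its moving statistics) get updated, while training is a per-call argument that controls whether the layer normalizes with the current batch statistics or with the stored moving averages. A small sketch of the distinction (illustrative, Keras >= 2.1.3 functional API):

from keras.layers import BatchNormalization, Input

x = Input(shape=(16,))
bn = BatchNormalization()

bn.trainable = False             # attribute: freeze gamma/beta and (>= 2.1.3) the moving stats
y_infer = bn(x, training=False)  # call argument: always use the stored moving mean/variance
y_train = bn(x, training=True)   # call argument: always use the current batch statistics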
