Freezing a BatchNormalization layer by setting trainable = False does not work. Weights are still being updated. #7085
Comments
The BN layer has alpha, beta, variance, and mean parameters; the alpha and beta parameters can change the …
Locking BN is a technique used in segmentation, especially with fine-tuning, so I think this is an option we need to add if it does not already exist.
Is there any workaround available for this issue? It turns into hell when you're trying to use a single shared model body (a pre-trained model with trainable=False) inside multiple models that are trained on different datasets.
If you are willing to give up on the Keras training routines (fit, fit_generator, train_on_batch, etc.), you can define a custom training function in the way Keras does internally (https://github.com/fchollet/keras/blob/d3db58c8bf1ef9d078b3cf5d828d22346df6469b/keras/engine/training.py#L948), but without including the BN statistics updates. For example: https://gist.github.com/afourast/018722309ac2a272ce4985190ec52742. In my models there are no other updates in the 'updates' list besides the BN statistics, but you should check that for your own models. I have only tested this with the TF backend, and you will have to define a similar function for testing. I've personally found a custom training function convenient, since you can also do things like use tf queues and add TensorBoard summaries. Hope that helps.
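A minimal sketch of that approach (assuming Keras ~2.1.x with the TensorFlow 1.x backend and an already-compiled model; `model` and the function name are illustrative):

```python
import keras.backend as K


def make_train_fn_without_bn_updates(model):
    """Build a train function the way Keras does, but drop model.updates
    (which is where the BN moving-mean/variance ops live)."""
    # Optimizer updates only; the BN statistics updates are deliberately excluded.
    training_updates = model.optimizer.get_updates(
        loss=model.total_loss, params=model.trainable_weights)
    inputs = model.inputs + model.targets + model.sample_weights + [K.learning_phase()]
    outputs = [model.total_loss] + model.metrics_tensors
    return K.function(inputs, outputs, updates=training_updates)


# Usage on one batch (the trailing 1 selects the training learning phase):
# train_fn = make_train_fn_without_bn_updates(model)
# loss_and_metrics = train_fn([x_batch, y_batch, sample_weights, 1])
```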
Thanks, but it seems too tricky for my case. I've found another workaround: you can set layer._per_input_updates = {} on the batch norm layers that shouldn't be updated during training. It actually works, and those layer weights stay the same, but it still looks like a dirty hack.
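A short sketch of that workaround (`_per_input_updates` is a private Keras 2.x attribute, so this may break across versions; `model` is an illustrative name for your already-built model):

```python
from keras.layers import BatchNormalization

# Clear the (private) per-input update dict on every BN layer so their
# moving-mean/variance ops never make it into the compiled train function.
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer._per_input_updates = {}
```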
I'm running into the same issue; is there a technical reason it can't be fixed?
Also interested in a solution for this.
You can find that the BatchNormalization layer is made up of four smaller parts: the alpha layer, the beta layer, and two other weight layers (the moving mean and variance).
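A quick way to see those four weight sets (Keras 2.x; exact weight names may differ slightly by version):

```python
from keras.layers import Input, BatchNormalization

inp = Input(shape=(4,))
bn = BatchNormalization()
out = bn(inp)  # calling the layer creates its weights

print([w.name for w in bn.trainable_weights])      # gamma ("alpha") and beta
print([w.name for w in bn.non_trainable_weights])  # moving_mean and moving_variance
```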
Right, and can the alpha and beta layers be frozen, and if so how?
Thank you @nsmetanin for the suggestion. This worked to freeze batch norm layers.
Freezing BN layers is now available in the most recent release: simply set trainable=False on the batchnorm layers.
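A minimal sketch of that usage (Keras >= 2.1.3; `base_model` and the compile arguments are illustrative):

```python
from keras.layers import BatchNormalization

# Freeze every BN layer, then (re)compile so the change takes effect.
for layer in base_model.layers:
    if isinstance(layer, BatchNormalization):
        layer.trainable = False

base_model.compile(optimizer='adam', loss='categorical_crossentropy')
```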
Is there any way to get back the old behaviour?
You can set layer.stateful = True on your BN layer to get this behavior.
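A short sketch of that tip (Keras ~2.1.x; `model` is an illustrative name):

```python
from keras.layers import BatchNormalization

# Keep the BN statistics updates even though the layers are frozen,
# i.e. restore the pre-2.1.3 behavior described above.
for layer in model.layers:
    if isinstance(layer, BatchNormalization):
        layer.stateful = True
```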
On Jan 24, 2018, "Andrew Hundt" wrote:
[image: xkcd "workflow" comic]
@ViaFerrata I don't have a real answer other than to look back in the commit history to see how it was done before.
Thanks a lot for the quick answer, I will try that :)
@ViaFerrata, just checking, what do you want the old behavior for?
@ozabluda Sorry for the late answer, I didn't notice the reply. In my case I'm training two separate CNNs on the same dataset, but with different projections of the input (my dataset is 6D). If I understand correctly, the moving mean and variance are calculated from batch-level statistics during training, and for testing they are calculated on the whole sample before testing starts.
Training never uses anything other than the current batch's statistics, but it also updates the running averages, which are used for inference. If training is resumed, I think the running average is reset to zero.
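For reference, a plain-NumPy sketch of how the running statistics are maintained (the momentum default is 0.99 in Keras 2's BatchNormalization; variable and function names are illustrative):

```python
import numpy as np


def update_running_stats(moving_mean, moving_var, batch, momentum=0.99):
    """Exponential moving average of the batch statistics, as kept by BN.

    The running values are only consumed at inference time; during training
    the layer normalizes with the current batch's own mean and variance.
    """
    batch_mean = batch.mean(axis=0)
    batch_var = batch.var(axis=0)
    moving_mean = momentum * moving_mean + (1.0 - momentum) * batch_mean
    moving_var = momentum * moving_var + (1.0 - momentum) * batch_var
    return moving_mean, moving_var


# Example with random data:
x = np.random.randn(32, 4)
mean, var = update_running_stats(np.zeros(4), np.ones(4), x)
```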
Thank you for the explanation, I'd misread that in the paper. Then it makes sense that I get worse results with stateful=False during retraining.
From the code of Keras 2.1.3, what I see is …
The issue seems to be that the updates created when applying the BatchNorm layer are added to the train function even when they act on non-trainable weights.
Gist with the code:
https://gist.github.com/afourast/0d7545174c1b8fb7b0f82d7efbf31743
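A minimal sketch that reproduces the reported behavior (illustrative toy model; in versions before the 2.1.3 fix mentioned above, the BN update ops still appear even with trainable=False):

```python
from keras.layers import Input, Dense, BatchNormalization
from keras.models import Model

inp = Input(shape=(4,))
x = Dense(8)(inp)
bn = BatchNormalization()
x = bn(x)
out = Dense(1)(x)
bn.trainable = False

model = Model(inp, out)
model.compile(optimizer='sgd', loss='mse')

# Before the fix this list still contains the BN moving-statistic assign ops,
# so they get baked into the train function; after the fix it is empty.
print(model.updates)
```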