RFC: Variables in TensorFlow 2.0 #11
Conversation
Filename nit, can you rename to |
Done |
Would implementing |
__iadd__ is something that cannot be supported in graph mode until we get rid of the build-graph-then-session-run-it API, and that's a separate design review (work in progress) |
To be honest I have trouble understanding the implied API changes. Can we have before/after examples of:
|
rfcs/20180817-variables-20.md
Outdated
* whether a variable is shared across sessions / processes will be controlled by a constructor argument to tf.Variable; no other type of scope reuse will be done in the framework
* scoped partitioning will be implemented as a factory function at first
* libraries and users are encouraged to reuse variables by reusing their objects, like Keras layers do
* custom_getters will have the following API: [variable_creator_scope](https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/ops/variable_scope.py#L2395)
So variable_scope will be replaced by name_scope, right? Also, the URL for variable_creator_scope is linked to a blank line; could you give more details about the function (say, some examples)?
Fixed the link. The documentation has examples of how it's used.
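For illustration, here is a minimal sketch of how a creator registered with tf.variable_creator_scope can intercept tf.Variable construction (based on the linked API; the logging behavior is just an example, not something from the RFC):

```python
import tensorflow as tf

# A creator receives the next creator in the stack plus the constructor kwargs.
# It can inspect or modify the kwargs, wrap the result, or return something else.
def logging_creator(next_creator, **kwargs):
    print("creating variable:", kwargs.get("name"))
    return next_creator(**kwargs)

with tf.variable_creator_scope(logging_creator):
    v = tf.Variable(tf.zeros([10]), name="weights")  # routed through logging_creator
```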
|
There will be two main implementations of this interface: RefVariable, with the legacy ref edges, available only in tf.compat.v1, and ResourceVariable, which is the default for the v2 API. PartitionedVariable, MirroredVariable, _UnreadVariable, CastVariable, etc, are other implementations which are part of the core library. None of these implementations will be publicly visible, only tf.Variable will be.

Constructing variables is done by calling tf.Variable(*args, **kwargs). Under the hood this will call a hierarchy of scoped constructor functions, similar to what is now done in variable_scope.variable. Each such constructor function can do some combination of:
- Could you explain why we chose tf.Variable(*args, **kwargs), rather than tf.get_variable, to construct variables? The RFC says "The tf.Variable class will be an abstract base class which defines a tf.Variable interface." If tf.Variable will be an abstract base class, how can tf.Variable(*args, **kwargs) be called?
- Could you explain what scoped constructor functions are?
- tf.get_variable was created to handle silent sharing of variables in the graph. This behavior is being removed.
- See the link I updated about variable_creator_scope.
- Will it be possible to recover tf.Variable objects only from a graph or graph_def, just like it's now possible to do with tf.Variable.from_proto? We work a lot with managing models restored purely from graph def files, without necessarily having all the code that produced the original graph. The ability to restore basic TF objects such as tf.Variables directly from graph def data only is a must for us.
- How is the above affected by tf.Variable types written by users?
- Will it be possible to explicitly recreate or recover tf.Variable objects from other non-python-object pieces of data in some way?
A related question: instead of calling tf.Variable, why not call the factory function directly, since tf.Variable is supposed to call a factory function anyway?
Hi @alextp. Could you please show an example of how to create a PartitionedVariable via tf.Variable(*args, **kwargs)? My question is whether the user should pass an indicator of what kind of concrete Variable to create. Does it mean the parameters *args and **kwargs are exposed to users without any limit?
- Will it be possible to recover tf.Variable objects only from a graph or graph_def, just like it's now possible to do with tf.Variable.from_proto? We work a lot with managing models restored purely from graph def files, without necessarily having all the code that produced the original graph. The ability to restore basic TF objects such as tf.Variables directly from graph def data only is a must for us.
- How is the above affected by tf.Variable types written by users?
- Will it be possible to explicitly recreate or recover tf.Variable objects from other non-python-object pieces of data in some way?
+1 on this. It is crucial for us to restore them from serialized graphdefs. Currently we use a RestoredVariable class inheriting from tf.Variable, but the RefVariable changes in TF 1.11 are breaking this inheritance. See issues #23591, #22648.
|
This is implemented by having a custom metaclass for tf.Variable which, when asked to construct a tf.Variable directly will call the factory functions, but when asked to construct subclasses of tf.Variable will do nothing and construct the child class.

The tf.Variable interface will make no reference to graph collections, and tf.Variable will not add the Variable to any collections by default. tf.compat.v1.Variable, on the other hand, will have the collections argument and respect the existing semantics for it. Things which currently rely on collections (saving / loading, Optimizer.minimize, etc) will instead be expected to be passed either a list of variables or a CheckpointableBase-inheriting object.
So tf.global_variables_initializer will be deprecated as well, right?
Can we let the variable take care of initialization by itself? I find it awkward to force the user to call sess.run(tf.global_variables_initializer()) before training. When a variable is read, it already knows whether it has been initialized.
global_variables_initializer will be deprecated, yes. I agree there could be a better solution to initialization but it's not in scope for this change.
Note that if eager is turned on by default and variables are created from eager then they're already automatically initialized even if most code runs inside graph functions, so most people in tf 2 will hopefully not be affected by this.
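As a rough before/after sketch of what this means for initialization (the v1 half uses tf.compat.v1 purely for contrast):

```python
import tensorflow as tf

# TF 1.x style (tf.compat.v1): variables must be explicitly initialized in a session.
tf1 = tf.compat.v1
g = tf1.Graph()
with g.as_default():
    v = tf1.Variable(3.0)
    init = tf1.global_variables_initializer()
with tf1.Session(graph=g) as sess:
    sess.run(init)
    print(sess.run(v))  # 3.0

# TF 2.x eager: a variable is initialized as soon as it is created.
w = tf.Variable(3.0)
print(w.numpy())  # 3.0, no initializer call needed
```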
|
A resource-based variable is the simplest type of resource. What's stored in the device's resource manager is a pair of a Tensor and a mutex. The main operation to read the value of a variable is read_variable_op, and it simply outputs a Tensor which has the same value as the Tensor in the resource handle state. There are many ops which write to the resource (assign_variable_op, assign_add_variable_op, resource_apply_gradient_descent, etc), and the basic properties of the resource edges ensure that it's possible to order reading and writing ops to avoid undefined behavior.

These ops are currently implemented using copy-on-write, but they could also be implemented using copy-on-read or other, more complex, mechanisms, as long as the semantics of the read-before-writes and write-before-read are respected and as long as no mutation is done to the Tensor returned by a read_variable_op after it's been read. Here are two examples of why mutating a Tensor returned by a read_variable_op might be dangerous:
I have trouble understanding the sentence. Do you mean that, given v = tf.Variable(xxxxx) and v_read = v.read_variable_op(), v is mutable while v_read is not?
yes, exactly
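A small eager-mode sketch of the semantics being confirmed here (using read_value, the public method that wraps the read op):

```python
import tensorflow as tf

v = tf.Variable([1.0, 2.0])
snapshot = v.read_value()    # a plain Tensor holding the value at read time

v.assign([10.0, 20.0])       # mutates the variable's storage

print(v.numpy())             # [10., 20.]
print(snapshot.numpy())      # still [1., 2.]; the Tensor returned by the read is never mutated
```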
|
### Internal resource variable ops

We will expose the internal ops used to implement ResourceVariable as tf.experimental.variable_operations (name TBD). This way users and libraries can, if they need to, modify the behavior of variables at will.
If you'll allow a slight digression, what is the role of the tf.experimental module? Would it become the next tf.contrib? cc @martinwicke
It is generated purely by the tf_export decorator, and it is meant to mark symbols that are excluded from our API guarantee (all symbols with experimental in the name are not guaranteed to stay).
While contrib was a collection of things, including functionality that was related to, but would never be merged into, TensorFlow proper, experimental is only for things that we know we want in TF but potentially want to iterate on API details.
There will be more details on this once I finish the design for the tf.contrib deprecation RFC.
|
Couple comments.
|
### tf.Variable class

The tf.Variable class will be an abstract base class which defines a tf.Variable interface. Initially this interface will have enough abstract methods such that the user-visible API of tf.Variable does not change.
> Initially this interface will have enough abstract methods such that the user-visible API of tf.Variable does not change.

I'm not sure this makes sense: did it mean to read "enough concrete methods"? Adding many abstract methods doesn't change the user-visible tf.Variable API (for those using the existing/TensorFlow 1.x tf.Variable API).
This change has already been implemented. If you look at tf.Variable now it's a class with no implementations of methods, and most concrete instances are instances of subclasses (RefVariable for the old ones and ResourceVariable for the new ones).
* returning preexisting variables
* changing some arguments to the base constructor, and maybe calling it multiple times

This is implemented by having a custom metaclass for tf.Variable which, when asked to construct a tf.Variable directly will call the factory functions, but when asked to construct subclasses of tf.Variable will do nothing and construct the child class.
It would be good to include a justification for why the client API should be calling the constructor to an abstract base class instead of having users explicitly call the type of variable they want. This document just says "it will do this complicated thing" without saying what the rationale is.
The goal is that the user should not have to know what type they want. For example, code called under distribution strategies might create MirroredVariables when the user calls tf.Variable. Think of tf.Variable as a factory function for which isinstance also works.
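A short sketch of that factory behavior (assuming a machine where MirroredStrategy can initialize; the exact internal class name is an implementation detail):

```python
import tensorflow as tf

strategy = tf.distribute.MirroredStrategy()
with strategy.scope():
    v = tf.Variable(1.0)   # the strategy's creator intercepts this call

print(type(v).__name__)            # an internal distributed-variable class, not "Variable"
print(isinstance(v, tf.Variable))  # True: the factory still hands back a tf.Variable
```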
|
### Variable sharing

Sharing within a model will not be a part of the public API for tf.Variable. Users are strongly encouraged to share variables by sharing a reference to their objects.
Could we add an example of what that canonical approach for sharing variables will be? There are a large number of models that relied on tf.get_variable() (as it was pushed to be the standard way to create/access variables), so demonstrating what the new uses would look like would be beneficial.
The canonical approach to sharing variables is by sharing their objects, as in Keras layers and Keras models, tf.make_template, and other ways of doing that.
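For instance, a minimal sketch of sharing by holding on to the same object (here a shared embedding table used from two different functions), as opposed to looking variables up by name:

```python
import tensorflow as tf

embeddings = tf.Variable(tf.random.normal([1000, 64]), name="embeddings")

def encode_query(ids):
    return tf.nn.embedding_lookup(embeddings, ids)   # same object ...

def encode_document(ids):
    return tf.nn.embedding_lookup(embeddings, ids)   # ... reused here, nothing to "reuse" by name

q = encode_query(tf.constant([1, 2, 3]))
d = encode_document(tf.constant([4, 5]))
```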
Ok, I'm glad you brought up tf.make_template, as it's my preferred way of sharing weights between training/eval/inference in a single graph, but I'm wondering what the plan is to support tf.make_template given that it heavily relies on the existing variable_scope and naming semantics in order to work. There's a comment at the bottom which mentions it potentially being in scope, but I wonder what the mechanisms would look like without collections or special naming semantics.
rfcs/20180817-variables-20.md
Outdated
### Checkpointing

Checkpointing will be done in tf 2.0 via the object-oriented checkpointing API.
Link to the API for reference.
https://www.tensorflow.org/api_docs/python/tf/contrib/checkpoint/Checkpointable (added to the document too)
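A brief sketch of the object-based API as it exists today, using tf.train.Checkpoint (assuming TF ≥ 1.11; the contrib Checkpointable base class linked above belongs to the same object-based machinery):

```python
import tensorflow as tf

layer = tf.keras.layers.Dense(4)
_ = layer(tf.zeros([1, 3]))          # build the layer so its variables exist
opt = tf.keras.optimizers.SGD(0.1)

ckpt = tf.train.Checkpoint(layer=layer, optimizer=opt)
path = ckpt.save("/tmp/variables_rfc_demo")   # saves every variable reachable from the tracked objects
ckpt.restore(path)
```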
@mratsim can you be more specific? The part where tf.get_variable is deprecated should be pretty clear about what will need to change; most of the rest do not involve specific changes. |
1. Make this function active in all tf.compat.v1 endpoints which currently call get_variable (with a decorator, probably)
1. Change the behavior in tf2 to call tf.Variable (which will redirect to tf.get_variable in tf.compat.v1, keeping the existing behavior but cleaning the codebase)
1. [WARNING: checkpoint-breaking change] drop calls to variable_scope in parts of our API which use it. Right now they are: feature_column, rnn, canned estimators, optimizer slots, TPU estimator. Most can be replaced with judicious use of name= arguments
1. [optional] Implement tf v2 make_template which does not rely on variable_scope internally and uses a factory creator function to track and reuse variables
Going to request that this be a requirement instead of optional.
Alternatively, expand layer-based APIs to make it easier to reuse existing variables imperatively.
Also want to request make_template in v2.
One question: right now in make_template, one can use get_variable to create reused variables and tf.Variable(trainable=False) to create local (unshared) variables. After get_variable is deprecated, I wonder what the alternative should be.
@alextp could you comment on how make_template will be supported and how a user should create shared and unshared variables inside make_template?
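One object-based pattern that covers both cases is to let an object own its variables: shared variables are attributes reused on every call, and per-instance "local" state is just another attribute. This is only a sketch, using tf.Module as it exists in current TF (any plain Python class holding the variables works the same way), not the RFC authors' answer:

```python
import tensorflow as tf

class ScaledSum(tf.Module):
    """Owns its variables, so every call shares them, like a template instance."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(1.0, name="w")                      # shared across all calls
        self.calls = tf.Variable(0, trainable=False, name="n")   # per-object, non-trainable state

    def __call__(self, x):
        self.calls.assign_add(1)
        return self.w * tf.reduce_sum(x)

f = ScaledSum()
train_out = f(tf.ones([3]))   # uses f.w
eval_out = f(tf.ones([5]))    # same f.w; a second ScaledSum() would get fresh variables
```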
rfcs/variables-20.md
Outdated
### Optimizers

The Optimizer.minimize method will no longer work if it's passed a Tensor and no list of variables. Users are expected to pass the list of variables to minimize wrt or pass an object which implements the CheckpointableBase interface to let the optimizer find the variables. The behavior of tf.compat.v1.Optimizer will not change.
So how would this look in the following scenario?
There is a non-trainable floating point Variable in the model that affects the calculation of the loss function (e.g. the discount factor in reinforcement learning). This Variable should be saved to the checkpoint, but obviously should not be considered a parameter by the optimizer.
@alextp There is no example of what those changes mean in practice for end users. I'd rather read:
to quickly identify migration issues. |
|
The tf.Variable class will be an abstract base class which defines a tf.Variable interface. Initially this interface will have enough abstract methods such that the user-visible API of tf.Variable does not change.

There will be two main implementations of this interface: RefVariable, with the legacy ref edges, available only in tf.compat.v1, and ResourceVariable, which is the default for the v2 API. PartitionedVariable, MirroredVariable, _UnreadVariable, CastVariable, etc, are other implementations which are part of the core library. None of these implementations will be publicly visible, only tf.Variable will be.
Nit: please escape the _ in _UnreadVariable; markdown thinks you are trying to put stuff in italics (I think \_UnreadVariable works).
Maybe it would be a good idea to port over the models provided as part of tensorflow/models/official to the new variable API before finalizing it. This would also help to provide examples of how to port certain constructs; e.g. the ResNet model currently contains
for which I think it is not immediately clear how to convert that to the new API. |
* the default implementation of the tf.Variable interface will be ResourceVariable
* RefVariable will be kept in tf.compat.v1 and will be the default implementation for tf.compat.v1.Variable
* tf.compat.v1.Variable will have a use_resource argument to control whether a resource variable or a ref variable will be created
* symbols like tf.assign* will be removed in favor of methods in tf.Variable
Please make item assignment possible:
>>> import tensorflow as tf
>>> a = tf.Variable([1, 2, 3])
>>> a[1] = 5
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
TypeError: 'Variable' object does not support item assignment
- It's already possible via other methods (tf.scatter*) but really cumbersome.
- Since it's already possible, I assume it can't be that hard to implement (but maybe I'm missing something).
- It would make teaching TensorFlow easier ("it's just like NumPy").
- It's one of those little things that makes some people prefer PyTorch: they can say "PyTorch is just like NumPy", but it's harder to say this about TensorFlow when something as fundamental to NumPy is missing.
- I have run into real-life use cases where I really needed it (porting a library from NumPy to TensorFlow to make it run on a GPU).
Please, pretty please with sugar on top? ;-)
Edit: Alex pointed out that it will be possible in TF 1.11 with a[1].assign(5).
Currently this slice assignment is done via a[1].assign(5). In tf 2.0 we do not plan on allowing it since there is a lot of code out there relying on session.run which would see incorrect behavior, but as we migrate more code to eager/function based we can enable this.
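A tiny sketch of the workaround mentioned above, as it behaves in eager mode:

```python
import tensorflow as tf

a = tf.Variable([1, 2, 3])

# a[1] = 5            # plain item assignment is not supported
a[1].assign(5)         # sliced assignment through the variable's strided-slice assign
print(a.numpy())       # [1, 5, 3]
```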
|
Thanks @alextp, I did not know about a[1].assign(5), this is great! I'm confused about your statement "we do not plan on allowing it". Are you referring to item assignment (a[1] = 5) or to the assign method (a[1].assign(5))? |
The one incompatible with session.run, which we cannot support right away, is "a[1]=2".
|
* whether a variable is shared across sessions / processes will be controlled by a constructor argument to tf.Variable; no other type of scope reuse will be done in the framework
* scoped partitioning will be implemented as a factory function at first
* libraries and users are encouraged to reuse variables by reusing their objects, like Keras layers do
* custom_getters will have the following API: [variable_creator_scope](https://github.com/tensorflow/tensorflow/blob/567189980f7a1c2aa09a5170bd8d01a6ec37d303/tensorflow/python/ops/variable_scope.py#L2402)
I see this is a better generalization of custom_getters for creating variables, but it seems to miss another use case that worked for custom_getter: transparently intercepting variable reads.
Here are three example cases where this can be very useful.
- Applying spectral normalization over arbitrary models. You could simply define a getter that applies it to the scope of a model and returns the normalized variable result, without having to change the code to explicitly do it everywhere. This becomes particularly important in more complex models, or in third party models that you are simply reusing and cannot change their code.
- Automatically making models "mode-adaptive". This is a powerful technique that basically consists of creating K separate networks (by replicating or batching their variables) and using a soft-attention mechanism to create one combined network at the weight level that you then use. That paper uses it with FCs, so it's easy, but consider how tricky that would become if it used RNNs, convnets or other more complex layers. By having a way to intercept variable access (and creation too in this case) we can simply add a K batch dimension and automatically apply soft-attention when reading. For all the network using it knows, it always had the shape it expected.
- You can similarly implement differentiable plasticity in a transparent way if you store the extra information in the creator scope and apply their moving averages when trying to read it. Again, this would help with reusing models and scaling to more complex ones easily.
Perhaps a better way to support this in this new model is to have a separate variable_reader_scope API? Combined with the proposed one, it should allow doing both of these examples.
It would be great to have some code examples of common use cases? |
* whether a variable is shared across sessions / processes will be controlled by a constructor argument to tf.Variable; no other type of scope reuse will be done in the framework
* scoped partitioning will be implemented as a factory function at first
* libraries and users are encouraged to reuse variables by reusing their objects, like Keras layers do
* custom_getters will have the following API: [variable_creator_scope](https://github.com/tensorflow/tensorflow/blob/567189980f7a1c2aa09a5170bd8d01a6ec37d303/tensorflow/python/ops/variable_scope.py#L2402)
instead of relying on a scope, can we ask the variable factory explicitly?
Why would you? tf.Variable is the only public API symbol for constructing variables, and it'll eventually bottom out to one or more calls to the base variable class ResourceVariable (RefVariable exists only in tf.compat.v1).
If you want to control what variables a piece of code can construct you can add a creator to the stack around that piece of code, but you should not have access to the lower bits because that would allow you to break the behavior of things like distribution strategies or make_template.
Can you clarify what are you trying to do?
IMHO, this is a chance to eliminate magic as much as possible. For example if DistributionStrategy needs to control how variables are created, they can provide it. Something like (speculating):
ds = MirroredStrategy(...)
model = tf.keras.Sequential(variable_factory=ds.variable_creator())
model.add(Dense....)
To the people asking for examples, saying what exactly you want examples of would make it possible for me to write those examples. |
Just some examples: how would you transfer embeddings? How would you implement a siamese neural network? How would you partition a variable? ... |
To transfer embeddings, reuse the Python object for the variable you have the embeddings in.
What's a siamese network?
…On Thu, Sep 13, 2018 at 5:10 PM ispirmustafa ***@***.***> wrote:
To the people asking for examples, saying what exactly you want examples
of would make it possible for me to write those examples.
just some example: how would you transfer embeddings? how would you
implement a Siamese Neural Networks?...
—
You are receiving this because you were mentioned.
Reply to this email directly, view it on GitHub
<#11 (comment)>,
or mute the thread
<https://github.com/notifications/unsubscribe-auth/AAATxWMBXr36fzlRY2dCugmoIO1ldGJEks5uavQPgaJpZM4WB8F8>
.
--
- Alex
|
I know how to transfer embeddings with this proposal :-) A siamese network shares the same weights across different parts of the code. It's a simple example, but it shows the simplicity of this approach compared to variable_scope(reuse). |
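As an illustrative sketch of that example under the object-sharing approach, weight sharing in a siamese setup falls out of calling the same layer object on both inputs, with no reuse flags involved:

```python
import tensorflow as tf

encoder = tf.keras.layers.Dense(64, activation="relu")   # one object, one set of weights

def siamese_distance(x1, x2):
    h1 = encoder(x1)   # same kernel/bias ...
    h2 = encoder(x2)   # ... reused here
    return tf.norm(h1 - h2, axis=-1)

d = siamese_distance(tf.random.normal([8, 32]), tf.random.normal([8, 32]))
print(len(encoder.trainable_variables))  # 2: the shared kernel and bias
```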
Hi @alextp - a couple of Qs: (a) is there a summary of notes from the design review meeting we could paste into these comments? (b) are there any revisions you need to make to the document before we merge it as Accepted? thanks! |
I don't think any notes from the meeting require significant changes to this document, so we should merge it as accepted.
|
Some questions about tf.Variable(*args, **kwargs) in 2.0.
|
There will be two main implementations of this interface: RefVariable, with the legacy ref edges, available only in tf.compat.v1, and ResourceVariable, which is the default for the v2 API. PartitionedVariable, MirroredVariable, _UnreadVariable, CastVariable, etc, are other implementations which are part of the core library. None of these implementations will be publicly visible, only tf.Variable will be.

Constructing variables is done by calling tf.Variable(*args, **kwargs). Under the hood this will call a hierarchy of scoped constructor functions, similar to what is now done in variable_scope.variable. Each such constructor function can do some combination of:
Hi @alextp. Could you please show an example of how to create a PartitionedVariable via tf.Variable(*args, **kwargs)? My question is whether the user should pass an indicator of what kind of concrete Variable to create. Does it mean the parameters *args and **kwargs are exposed to users without any limit?
> 1. Will it be possible to recover tf.Variable objects only from a graph or graph_def, just like it's now possible to do with tf.Variable.from_proto? We work a lot with managing models restored purely from graph def files, without necessarily having all the code that produced the original graph. The ability to restore basic TF objects such as tf.Variables directly from graph def data only is a must for us.

Yes, via the SavedModel mechanism. The set of variables ending up in the SavedModel will no longer be implicit, but restoring them will be possible.

> 2. How is the above affected by tf.Variable types written by users?

Currently tf.Variable types written by users are not very well supported (we do not have an internal stable tf.Variable API). I want to change this by specifying the external tf.Variable API and providing convenience classes to build variables around existing things. It won't block tf 2.0, though.

> 3. Will it be possible to explicitly recreate or recover tf.Variable objects from other non-python-object pieces of data in some way?

See above, from SavedModel (and from_proto will not go away either).
|
Hello,
The recent change over to RefVariable broke some code that introspected variables' classes. In the past, one could write
a = tf.Variable(5)
type(a) == tf.Variable
and get the expected True, but this is no longer the case.
I would appreciate it if this were made more clear to users in version 2.0, and I would especially appreciate the ability to introspect without dropping into internal APIs like tensorflow.python.ops.variables.RefVariable, which is the only viable solution at present.
Thank you for your consideration! Abe Leite |
We now have more than one internal implementation of variable so this type of introspection is no longer valid. Use isinstance instead.
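A quick sketch of the difference:

```python
import tensorflow as tf

a = tf.Variable(5)
print(type(a) == tf.Variable)       # False: the concrete class is an internal subclass
print(isinstance(a, tf.Variable))   # True: works whichever implementation was created
```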
|
Hi alextp, Thanks for the response. I haven't looked at the TF2.0 documentation, but I wonder if there's any way you could make it really clear to users that anything that is instantiated using Variable or get_variable is guaranteed to subclass Variable. That wasn't obvious to me at first! Thanks again, Abe |
In some cases it currently does not subclass Variable (MirroredVariable and PartitionedVariable for example do not).
|
I understand that this may be unfeasible, but is there some way this could be made more consistent? Introspection is one of Python's strong points, and allowing developers to quickly scope out the tf variables located in a namespace (regardless of how they are internally implemented) would be highly useful. My own use case was to allow my students to define the variables involved in their tensorflow model during a function, and then for my framework code to detect which of the new attributes of the class were Variables after that function, so that it could save and load the variables' state after training. If it's unfeasible to change the inheritance patterns, even some sort of utility tensorflow "type" function that specifies whether a tensor handle is a placeholder, a constant, a variable, a function, or something else (or not a tensor handle at all!) could be highly valuable. Thank you for your consideration! Abe |
Have you looked at the tf.Checkpointable API? If your class inherits from tf.Checkpointable you can make this variable tracking fairly easy for you. If you can't use Checkpointable I think you can rely on isinstance() working, and when you stumble upon examples I can fix them or provide workarounds.
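A hedged sketch of that suggestion using today's names (tf.Module is a trackable/Checkpointable base class and tf.train.Checkpoint does the saving; the class and attribute names here are made up for illustration):

```python
import tensorflow as tf

class StudentModel(tf.Module):
    # Variables assigned as attributes of a tf.Module are tracked automatically.
    def define_variables(self):
        self.w = tf.Variable(tf.random.normal([3, 3]), name="w")
        self.b = tf.Variable(tf.zeros([3]), name="b")

m = StudentModel()
m.define_variables()
print([v.name for v in m.variables])                      # framework code can enumerate them here
tf.train.Checkpoint(model=m).save("/tmp/student_ckpt")    # and save/restore them
```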
|
Hi alextp, That definitely sounds workable. I'll look into that! Thank you for all of your help. Best regards, Abe Leite |
Review open for comments until Thursday 8/31
Variables in TensorFlow 2.0
Objective
The API for TensorFlow variables has many drawbacks: impossible-to-reason-about semantics, reliance on global scopes, and reliance on global collections. As the TensorFlow API moves to become more pythonic and object oriented, with the Keras layers and models and the object-based serialization, we no longer have a need for much of this global infrastructure around variables.