How to use regularization with Dynamic_rnn





















I want to use L2 regularization with dynamic_rnn in TensorFlow, but it does not seem to be handled gracefully at the moment; the while loop inside dynamic_rnn is the source of the error. Below is a sample code snippet that reproduces the problem:



import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)

INFO:tensorflow:Cannot use 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' as input to 'total_regularization_loss' because 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' is in a while loop.

total_regularization_loss while context: None
test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer while context: test/rnn/while/while_context


How can I add L2 regularization if I have dynamic_rnn in my network? My current workaround is to fetch the trainable-variables collection at loss-calculation time and add an L2 term there manually, but I also have word vectors as trainable parameters, and I don't want to regularize those.
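For reference, a minimal sketch of that manual workaround (TF 1.x assumed; the 'word_vectors' name filter and the task_loss stand-in are illustrative placeholders, not variables from the snippet above):

import tensorflow as tf

l2_coef = 0.001
# Keep every trainable variable except the embedding matrix; adapt the
# 'word_vectors' substring to whatever your embedding variable is called.
reg_vars = [v for v in tf.trainable_variables()
            if 'word_vectors' not in v.name]
l2_loss = l2_coef * tf.add_n([tf.nn.l2_loss(v) for v in reg_vars])
task_loss = tf.constant(0.0)  # stand-in for your existing training objective
total_loss = task_loss + l2_loss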










Tags: tensorflow, deep-learning, regularized






asked Apr 7 at 7:31









MiloMinderbinder

  • Did you find a solution to this by any chance?
    – w4nderlust
    Nov 8 at 23:32










  • @w4nderlust - No. I collect the variables and add the l2 loss manually for l2 regularization
    – MiloMinderbinder
    Nov 9 at 13:02
















1 Answer
I encountered the same issue and first tried to solve it with tensorflow==1.9.0.



Code:



import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))


This is the result of running the code:



...
File "/Users/piero/Development/mlenv3/lib/python3.6/site-packages/tensorflow/python/ops/control_flow_util.py", line 314, in CheckInputFromValidContext
raise ValueError(error_msg + " See info log for more details.")
ValueError: Cannot use 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' as input to 'total_regularization_loss' because 'test/rnn/gru_cell/gates/kernel/Regularizer/l2_regularizer' is in a while loop. See info log for more details.


Then I tried to put the dynamic_rnn call outside of the variable scope:



import numpy as np
import tensorflow as tf
tf.reset_default_graph()
batch = 2
dim = 3
hidden = 4

with tf.variable_scope('test', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    lengths = tf.placeholder(dtype=tf.int32, shape=[batch])
    inputs = tf.placeholder(dtype=tf.float32, shape=[batch, None, dim])
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    cell_state = cell.zero_state(batch, tf.float32)

# dynamic_rnn is now called outside the regularized variable scope
output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, initial_state=cell_state)

inputs_ = np.asarray([[[0, 0, 0], [1, 1, 1], [2, 2, 2], [3, 3, 3]],
                      [[6, 6, 6], [7, 7, 7], [8, 8, 8], [9, 9, 9]]],
                     dtype=np.int32)
lengths_ = np.asarray([3, 1], dtype=np.int32)

this_throws_error = tf.losses.get_regularization_loss()

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    output_ = sess.run(output, {inputs: inputs_, lengths: lengths_})
    print(output_)
    print(sess.run(this_throws_error))


In theory this should be fine: the regularizer is attached when variables are created, and I expected the RNN's weight variables to be created together with the cell, inside the scope.



This is the output:



[[[ 0.          0.          0.          0.        ]
  [ 0.1526176   0.33048663 -0.02288104 -0.1016309 ]
  [ 0.24402776  0.68280864 -0.04888818 -0.26671126]
  [ 0.          0.          0.          0.        ]]

 [[ 0.01998052  0.82368904 -0.00891946 -0.38874635]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.0


So placing the dynamic_rnn call outside the variable scope works in the sense that it does not raise errors, but the loss value is 0, which suggests that no weight from the RNN is actually being considered in the L2 loss. A likely explanation is that GRUCell creates its weight variables lazily, on its first invocation inside dynamic_rnn, so when dynamic_rnn sits outside the scope the variables are created outside it too and never pick up the regularizer.
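One quick way to confirm this, appended to the graph-construction part of the script above (a sketch; it only uses the stock collection API):

# List what actually landed in the regularization-losses collection.
reg_terms = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
print(reg_terms)  # an empty list means no variable picked up the regularizer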



Then I tried with tensorflow==1.12.0.
This is the output for the first script with dynamic_rnn inside the scope:



[[[ 0.          0.          0.          0.        ]
  [-0.17653276  0.06490126  0.02065791 -0.05175343]
  [-0.413078    0.14486027  0.03922977 -0.1465032 ]
  [ 0.          0.          0.          0.        ]]

 [[-0.5176822   0.03947531  0.00206934 -0.5542746 ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.010403235


And this is the output with dynamic_rnn outside the scope:



[[[ 0.          0.          0.          0.        ]
  [ 0.04208181  0.03031874 -0.1749279   0.04617848]
  [ 0.12169671  0.09322995 -0.29029205  0.08247502]
  [ 0.          0.          0.          0.        ]]

 [[ 0.09673716  0.13300316 -0.02427006  0.00156245]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]
  [ 0.          0.          0.          0.        ]]]
0.0


The fact that the version with dynamic_rnn inside the scope now returns a non-zero value (0.010403235) suggests that regularization is applied correctly, while the version with dynamic_rnn outside the scope still returns 0, consistent with the RNN weights being created outside the regularized scope.
So the bottom line is: this was a bug in TensorFlow that was fixed somewhere between version 1.9.0 and version 1.12.0.
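Given the fix, one way to address the original concern about word vectors (not wanting to regularize them) is to create the embedding variable outside the regularized scope, so that tf.losses.get_regularization_loss() only picks up the RNN weights. A minimal sketch, assuming TF >= 1.12 and with illustrative shapes and names:

import tensorflow as tf

vocab, emb_dim, hidden = 1000, 64, 4

# Created at the top level, so no scope regularizer is attached to it.
word_vectors = tf.get_variable('word_vectors', [vocab, emb_dim])

with tf.variable_scope('model', regularizer=tf.contrib.layers.l2_regularizer(0.001)):
    ids = tf.placeholder(dtype=tf.int32, shape=[None, None])
    lengths = tf.placeholder(dtype=tf.int32, shape=[None])
    inputs = tf.nn.embedding_lookup(word_vectors, ids)
    cell = tf.nn.rnn_cell.GRUCell(hidden)
    output, _ = tf.nn.dynamic_rnn(cell, inputs, lengths, dtype=tf.float32)

# Only the GRU kernels/biases created under 'model' contribute here.
reg_loss = tf.losses.get_regularization_loss()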






        answered Nov 11 at 1:19









        w4nderlust
