Can TensorFlow take the gradient of the matrix 2-norm?

























Normally the matrix norm we take in TensorFlow is the Frobenius norm, which is easy to compute and easy to interpret, e.g., from a Bayesian view. But in many cases it is the largest singular value that matters. Is it possible to optimize that in TensorFlow? It depends on whether TensorFlow can take the gradient with respect to the matrix 2-norm.
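To make the distinction concrete, here is a minimal sketch (assuming TensorFlow 2.x) contrasting the two norms on a small matrix whose singular values are known by construction:

```python
import tensorflow as tf

# Diagonal matrix: its singular values are simply 2.0 and 1.0.
A = tf.constant([[2.0, 0.0],
                 [0.0, 1.0]])

# Frobenius norm: sqrt of the sum of squared entries = sqrt(5).
fro = tf.norm(A, ord='fro', axis=[-2, -1])

# Spectral (2-) norm: the largest singular value = 2.0.
spec = tf.reduce_max(tf.linalg.svd(A, compute_uv=False))
```

The Frobenius norm is a simple elementwise reduction, while the spectral norm requires an SVD, which is why differentiating it is the interesting question.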











































      tensorflow autodiff
















      asked Nov 15 '18 at 23:35









      ArtificiallyIntelligence























          1 Answer
































          Actually, the spectral norm is equal to the largest singular value. To get this value you can use TensorFlow's tf.linalg.svd.
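A minimal sketch of what this looks like in practice (assuming TensorFlow 2.x eager execution; TensorFlow registers a gradient for the SVD op, so backpropagation through it should work as long as the largest singular value is simple, i.e. not repeated):

```python
import tensorflow as tf

# Diagonal test matrix: singular values are 2.0 and 1.0.
A = tf.Variable([[2.0, 0.0],
                 [0.0, 1.0]])

with tf.GradientTape() as tape:
    # Singular values are returned in descending order,
    # so s[0] is the spectral norm.
    s = tf.linalg.svd(A, compute_uv=False)
    spectral_norm = s[0]

# Gradient of the largest singular value with respect to A.
# For this A it equals u1 @ v1^T = [[1, 0], [0, 0]].
grad = tape.gradient(spectral_norm, A)
```

This gradient can be fed to any optimizer to push the largest singular value down, which is exactly the optimization the question asks about.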





























          • yes, I know, but I mean how to take the gradient on that so I can change the entries of the matrix to reduce the max singular value. Frankly speaking, I cannot imagine an explicit, non-recursive formula for the gradient of a matrix with respect to its largest singular value.

            – ArtificiallyIntelligence
            Nov 16 '18 at 19:26

          • 1

            @Shaowu There is an explicit form. As long as you do not break the computational graph, I don't see a reason why this shouldn't work. It's been done.

            – Lucas Farias
            Nov 16 '18 at 20:00

          • 1

            Thanks for the information. The explicit form requires computing the SVD, while the SVD is not that explicit in the numerical sense: it requires solving an eigenvalue problem to get the first singular vector, which corresponds to an iterative scheme nested inside backpropagation. I would be surprised if TensorFlow can do that by default.

            – ArtificiallyIntelligence
            Nov 16 '18 at 21:24
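For reference, the explicit form the comments allude to is ∂σ₁/∂A = u₁v₁ᵀ, where u₁ and v₁ are the leading left and right singular vectors. A small NumPy sketch (illustrative only, using a fixed matrix with a simple largest singular value) checks this against a finite-difference approximation:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# Leading singular vectors from a full SVD.
U, s, Vt = np.linalg.svd(A)
u1, v1 = U[:, 0], Vt[0, :]
analytic = np.outer(u1, v1)  # candidate gradient of sigma_1 w.r.t. A

# Central finite differences of sigma_1 with respect to each entry.
eps = 1e-6
numeric = np.zeros_like(A)
for i in range(2):
    for j in range(2):
        E = np.zeros_like(A)
        E[i, j] = eps
        sp = np.linalg.svd(A + E, compute_uv=False)[0]
        sm = np.linalg.svd(A - E, compute_uv=False)[0]
        numeric[i, j] = (sp - sm) / (2 * eps)

# analytic and numeric agree closely when sigma_1 is simple.
```

So the formula is closed-form given the SVD; the iterative part is hidden inside the SVD itself, which is the point of contention in the comments above.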











          answered Nov 16 '18 at 10:56









          Lucas Farias




















