numpy matrix vector multiplication [duplicate]









This question already has an answer here:



  • how does multiplication differ for NumPy Matrix vs Array classes?

    7 answers



When I multiply two NumPy arrays of sizes (n x n) and (n x 1), I get a matrix of size (n x n). Following normal matrix multiplication rules, an (n x 1) vector is expected, but I simply cannot find any information about how this is done in Python's NumPy module.



The thing is that I don't want to implement it manually, in order to preserve the program's speed.



Example code is shown below:



import numpy as np

a = np.array([[5, 1, 3], [1, 1, 1], [1, 2, 1]])
b = np.array([1, 2, 3])

print a*b
>>
[[5 2 9]
 [1 2 3]
 [1 4 3]]


What I want is:



print a*b
>>
[16 6 8]









marked as duplicate by Valerij, Pharabus, timrau, BobTheBuilder, Steve Czetty Feb 5 '14 at 15:58


      python arrays numpy vector matrix






      asked Feb 4 '14 at 20:43 by user3272574
      edited Aug 12 '16 at 2:24 by tarzanbappa




          1 Answer






          Simplest solution



          Use numpy.dot or a.dot(b). See the numpy.dot documentation for details.



          >>> a = np.array([[5, 1, 3],
          ...               [1, 1, 1],
          ...               [1, 2, 1]])
          >>> b = np.array([1, 2, 3])
          >>> a.dot(b)
          array([16, 6, 8])


          This occurs because NumPy arrays are not matrices: the standard operators *, +, -, / all work element-wise on arrays. Alternatively, you could use numpy.matrix, for which * is treated as matrix multiplication (a short sketch follows below).

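          A minimal sketch of the numpy.matrix route, assuming the same a and b as above (note that plain arrays with dot or @ are generally preferred over numpy.matrix in current NumPy):

            >>> A = np.matrix(a)      # 3x3 matrix
            >>> v = np.matrix(b).T    # np.matrix(b) is 1x3, so transpose it into a 3x1 column
            >>> A * v                 # for np.matrix objects, * means matrix multiplication
            matrix([[16],
                    [ 6],
                    [ 8]])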



          Other Solutions



          Also, know that there are other options:




          • As noted in the comments below, if you are using Python 3.5+ (with NumPy 1.10 or newer), the @ operator works as you'd expect:



            >>> a @ b
            array([16, 6, 8])



          • If you want overkill, you can use numpy.einsum. The documentation will give you a flavor for how it works, but honestly, I didn't fully understand how to use it until reading this answer and just playing around with it on my own.



            >>> np.einsum('ji,i->j', a, b)
            array([16, 6, 8])



          • As of NumPy 1.10, you can also use numpy.matmul, which works like numpy.dot with two major differences: multiplication by a scalar is not allowed, and it works on stacks of matrices, broadcasting over the leading dimensions (see the short sketch after this list).



            >>> np.matmul(a, b)
            array([16, 6, 8])



          • numpy.inner functions the same way as numpy.dot for matrix-vector multiplication but behaves differently for matrix-matrix and tensor multiplication (see Wikipedia regarding the differences between the inner product and dot product in general or see this SO answer regarding numpy's implementations).



            >>> np.inner(a, b)
            array([16, 6, 8])

            # Beware using for matrix-matrix multiplication though!
            >>> b = a.T
            >>> np.dot(a, b)
            array([[35,  9, 10],
                   [ 9,  3,  4],
                   [10,  4,  6]])
            >>> np.inner(a, b)
            array([[29, 12, 19],
                   [ 7,  4,  5],
                   [ 8,  5,  6]])

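          To illustrate the stacks-of-matrices behaviour mentioned in the numpy.matmul bullet above, here is a minimal sketch (using the original a and b from the question); stack is a hypothetical (2, 3, 3) array built from a, and b is multiplied with each 3x3 slice independently:

            >>> stack = np.array([a, 2 * a])   # shape (2, 3, 3): two stacked 3x3 matrices
            >>> np.matmul(stack, b)            # b is applied to each matrix in the stack
            array([[16,  6,  8],
                   [32, 12, 16]])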


          Rarer options for edge cases




          • If you have tensors (arrays of dimension greater than or equal to one), you can use numpy.tensordot with the optional argument axes=1:



            >>> np.tensordot(a, b, axes=1)
            array([16, 6, 8])


          • Don't use numpy.vdot if you have a matrix of complex numbers, as the matrix will be flattened to a 1D array, then it will try to find the complex conjugate dot product between your flattened matrix and vector (which will fail due to a size mismatch n*m vs n).

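          As a small illustration of why numpy.vdot is unsuitable here: it flattens both arguments and conjugates the first. The array c below is a hypothetical complex example used only for this sketch:

            >>> c = np.array([[1 + 2j, 3], [4, 5 - 1j]])
            >>> np.vdot(c, c)   # sum of conj(x) * x over the flattened elements
            (56+0j)
            # np.vdot(a, b) would raise a ValueError: a flattens to 9 elements, b has only 3.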





          answered Feb 4 '14 at 20:46 by wflynny
          edited Jan 10 at 18:46




          • 10

            For anybody who finds this later, numpy supports the matrix multiplication operator @.
            – Tyler Crompton
            Mar 31 '16 at 20:07

          • 1

            Thanks for this explanation. Nevertheless I cannot understand why numpy does it this way...
            – decadenza
            Jul 22 '17 at 11:52











