How to resolve error "AttributeError: 'SparkSession' object has no attribute 'serializer'"?

























I'm using a PySpark DataFrame. I have some code in which I'm trying to convert the DataFrame to an RDD, but I receive the following error:




AttributeError: 'SparkSession' object has no attribute 'serializer'




What can be the issue?



from pyspark.ml.classification import NaiveBayes  # import needed for NaiveBayes below

training, test = rescaledData.randomSplit([0.8, 0.2])
nb = NaiveBayes(smoothing=1.0, modelType="multinomial")
# Train a naive Bayes model.
model = nb.fit(rescaledData)

# Make predictions and test accuracy.
predictionAndLabel = test.rdd.map(lambda p: (model.predict(p.features), p.label))
accuracy = 1.0 * predictionAndLabel.filter(lambda pl: pl[0] == pl[1]).count() / test.count()
print('model accuracy {}'.format(accuracy))


Does anyone have any insight into why the statement test.rdd causes an error? The DataFrame contains Row objects of (label, features).



Thanks










      apache-spark pyspark pyspark-sql






      edited Nov 16 '18 at 3:04









      Ali AzG

      asked Nov 15 '18 at 19:54









mimi gigi

          1 Answer
Apologies, as I don't have enough rep to comment. The answer linked below may resolve this, since the error comes down to the way the SQLContext is initialized:



          https://stackoverflow.com/a/54738984/8534357



When initializing the SparkSession and the SQLContext, I was doing this, which is not right:



from pyspark.sql import SparkSession, SQLContext

sc = SparkSession.builder.appName('App Name').master("local[*]").getOrCreate()
sqlContext = SQLContext(sc)


          This problem was resolved by doing this instead:



          sc = SparkSession.builder.appName('App Name').master("local[*]").getOrCreate()
          sqlContext = SQLContext(sparkContext=sc.sparkContext, sparkSession=sc)
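To see why the wrong call fails only later, at `test.rdd`, here is a minimal, Spark-free sketch of the failure mode. The `Fake*` classes below are stand-ins I made up for illustration, not the real pyspark classes: as I understand it, SQLContext stores whatever object it is handed as the SparkContext without checking its type, and only touches attributes like `serializer` on it when a DataFrame is first converted to an RDD, so passing a SparkSession in that position fails lazily with exactly this AttributeError.

```python
class FakeSparkContext:
    # Stand-in for pyspark's SparkContext, which carries a serializer.
    serializer = "PickleSerializer"

class FakeSparkSession:
    # Stand-in for SparkSession: it wraps a SparkContext but has no
    # serializer attribute of its own.
    def __init__(self):
        self.sparkContext = FakeSparkContext()

class FakeSQLContext:
    def __init__(self, sparkContext):
        self._sc = sparkContext      # no type check at construction time

    def to_rdd(self):
        return self._sc.serializer   # fails only when first used

session = FakeSparkSession()

bad = FakeSQLContext(session)        # mirrors SQLContext(sc)
try:
    bad.to_rdd()
except AttributeError as e:
    print(e)  # 'FakeSparkSession' object has no attribute 'serializer'

good = FakeSQLContext(session.sparkContext)  # mirrors the corrected call
print(good.to_rdd())                 # PickleSerializer
```

The construction succeeds either way; only the delayed attribute access distinguishes the two, which is why the traceback points at `test.rdd` rather than at the SQLContext setup.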





                answered Feb 18 at 0:33









Cobra
