What decides the number of mappers and reducers of an operation in Spark?










I am reading https://0x0fff.com/spark-architecture-shuffle/. The article talks about the number of shuffle files generated based on the number of mapper and reducer tasks.



But I am not sure what decides the number of mapper and reducer tasks.



Could you please help?





























































      apache-spark
















      asked Nov 16 '18 at 5:18









Anurag Sharma























1 Answer






































It depends on how your data is partitioned. In Spark SQL, when you read data from a source, the number of partitions depends on the size of the dataset, the number of input files, and the number of cores available. Spark decides how many partitions to create, and in the first stage of the job that count is the number of "mapper" tasks. If you then apply a transformation that induces a shuffle (such as groupBy, join, or dropDuplicates), the number of "reducer" tasks is 200 by default, because Spark creates 200 shuffle partitions. You can change that with this setting:

    sparkSession.conf.set("spark.sql.shuffle.partitions", n)

where n is the number of partitions you want to use (the number of tasks you want after each shuffle). This setting is listed among the configuration options in the Spark documentation.
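For illustration, here is a minimal sketch in Scala that prints both task counts; the input path and column name are hypothetical:

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder().appName("partition-demo").getOrCreate()

    // Input partitions ("mapper" tasks): Spark derives this count from file
    // sizes, file count, and available cores when reading the source.
    val df = spark.read.parquet("/data/events")  // hypothetical path
    println(s"input partitions: ${df.rdd.getNumPartitions}")

    // Shuffle partitions ("reducer" tasks): 200 by default; set the config
    // before the shuffle-inducing transformation runs.
    spark.conf.set("spark.sql.shuffle.partitions", 50)
    val counts = df.groupBy("userId").count()    // hypothetical column
    println(s"shuffle partitions: ${counts.rdd.getNumPartitions}")  // 50 here

Note that on newer Spark versions, adaptive query execution may coalesce shuffle partitions at runtime, so the observed count can be lower than the configured value.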































































                answered Nov 16 '18 at 6:43









David Vrba





























