Using Firestore as a Cache for SQL Aggregation [closed]
I'm not sure if this is the place to ask this, but I have a best-practices question.

I have a dashboarding service fed by Salesforce data that displays the number of Task X performed this week (X being Opportunities Closed - Won, Leads Created, etc.).

Currently, the data is pulled regularly and stored in a SQL database, which is exposed through a REST API that the client app calls to get aggregations between two date values; the database will additionally be fed by webhook calls from SF's Insert Triggers.

I want to know whether using a Firestore collection as a cache for the aggregated SQL data is a good idea, or if there is a better approach. The benefits I see are reduced traffic on my SQL server and instant updates: if the "cache" (Firestore) is updated, the client's value updates instantly as well.

When data is pulled from SF or a new record arrives via the Insert Trigger/webhook, I can update the Firestore record and the client will receive the change immediately.



My idea for a Firestore document would be:

{
  user: "123",
  sfOwnerId: "124",
  sfTaskType: "Opportunities Closed Won This Week",
  count: 23
}



Is this a good idea? Is there a better one out there?



Thank you in advance!
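To make the webhook path concrete, here is a minimal sketch of the update the Insert Trigger would drive. The helper name `applyEvent` and the event shape are illustrative, and the actual Firestore write (e.g. a transaction or `FieldValue.increment`) is omitted; this only shows the counter logic against the document shape above:

```javascript
// Cached aggregate document, as proposed above.
const cacheDoc = {
  user: "123",
  sfOwnerId: "124",
  sfTaskType: "Opportunities Closed Won This Week",
  count: 23,
};

// Apply one incoming Salesforce insert event to the cached document.
// Returns a new document so the caller can write it back to Firestore.
function applyEvent(doc, event) {
  if (event.ownerId !== doc.sfOwnerId || event.taskType !== doc.sfTaskType) {
    return doc; // event belongs to a different owner or metric
  }
  return { ...doc, count: doc.count + 1 };
}

const updated = applyEvent(cacheDoc, {
  ownerId: "124",
  taskType: "Opportunities Closed Won This Week",
});
console.log(updated.count); // 24
```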

























closed as primarily opinion-based by Doug Stevenson, TylerH, Graham, Ayush Gupta, GhostCat Nov 16 '18 at 7:55


Many good questions generate some degree of opinion based on expert experience, but answers to this question will tend to be almost entirely based on opinions, rather than facts, references, or specific expertise. If this question can be reworded to fit the rules in the help center, please edit the question.
firebase salesforce google-cloud-firestore

asked Nov 15 '18 at 23:32 by Michael Campbell
edited Nov 16 '18 at 0:21 by Doug Stevenson
          1 Answer
Your strategy of storing the aggregated data is what the Firestore documentation suggests for aggregation, so I think it's a pretty solid idea.

An alternative strategy would be to store only the raw Salesforce data in Firestore as it comes in, not aggregated, and let the client perform the aggregation. This can be achieved by subscribing to real-time updates on a query of the collection. In this setup, you would perform the calculation within the onSnapshot callback (assuming you're using the Web environment).

The advantage here is a possible increase in performance, since Cloud Functions often suffer from "cold start" latency.




          Note: Several of the recommendations in this document center around
          what is known as a cold start. Functions are stateless, and the
          execution environment is often initialized from scratch, which is
          called a cold start. Cold starts can take significant amounts of time
          to complete. It is best practice to avoid unnecessary cold starts, and
          to streamline the cold start process to whatever extent possible (for
          example, by avoiding unnecessary dependencies).




          Source
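The client-side aggregation described above might look like the sketch below. The reducer is plain JavaScript; the listener wiring is shown only as a comment, and the collection name `sfRecords`, the `createdAt` field, and `weekStart` are assumptions, not anything from the question:

```javascript
// Reduce the documents from a snapshot of raw Salesforce records
// into the per-metric counts the dashboard displays.
function aggregateCounts(docs) {
  const counts = {};
  for (const d of docs) {
    counts[d.sfTaskType] = (counts[d.sfTaskType] || 0) + 1;
  }
  return counts;
}

// With the Web SDK this would run inside the real-time listener, e.g.:
// db.collection("sfRecords")
//   .where("createdAt", ">=", weekStart)
//   .onSnapshot(snap => render(aggregateCounts(snap.docs.map(d => d.data()))));

const docs = [
  { sfTaskType: "Leads Created" },
  { sfTaskType: "Opportunities Closed Won" },
  { sfTaskType: "Leads Created" },
];
console.log(aggregateCounts(docs)); // { "Leads Created": 2, "Opportunities Closed Won": 1 }
```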





























• Would that be super heavy? I've looked into it, and my queries wouldn't be too insane - if the period is between two dates, I can have the client fetch all after a specific date, and then splice it to not include the data after the endDate. – Michael Campbell, Nov 16 '18 at 0:32

• That depends on how complex your aggregation calculations are. Your date range filter strategy sounds good to me. – Nathan Jones, Nov 16 '18 at 0:47

• I just thought of this: you can cut down your initial results even further by querying for docs with a "start date" greater than or equal to your date range's "start date" AND less than or equal to your range's "end date" (with two where calls). This should cut down on the number of docs you have to exclude whose "end date" goes past your date range's "end date". – Nathan Jones, Nov 16 '18 at 0:56

• I'll do that! Thank you for your input! – Michael Campbell, Nov 16 '18 at 23:04
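The date-range trimming discussed in these comments can be sketched as a plain function; the record shape is illustrative, and the Firestore equivalent (two `where()` clauses on the same field) is noted in the comments:

```javascript
// Keep only records whose date falls inside [startDate, endDate].
// In Firestore this maps to two where() clauses on one field:
//   .where("date", ">=", startDate).where("date", "<=", endDate)
// ISO date strings compare correctly with plain string comparison.
function inRange(records, startDate, endDate) {
  return records.filter(r => r.date >= startDate && r.date <= endDate);
}

const records = [
  { date: "2018-11-12" },
  { date: "2018-11-15" },
  { date: "2018-11-20" },
];
console.log(inRange(records, "2018-11-12", "2018-11-16").length); // 2
```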
answered Nov 16 '18 at 0:19 by Nathan Jones