Most efficient way to DECODE multiple columns — DB2
I am fairly new to DB2 (and SQL in general), and I am having trouble finding an efficient method to DECODE columns.

Currently, the database has a number of tables, most of which store many of their columns as numeric codes. These codes correspond to a single lookup table holding the real values; there are about 9,500 distinct codes (e.g. '502 = Yes' or '1413 = Graduate Student').

Normally I would just write a WHERE clause matching the code column against the lookup table, but since 20-30 columns per table need to be decoded, I can't really do this (that I know of).

Is there a way to effectively just display the corresponding value from the other table?

Example:

SELECT TEST_ID, DECODE(TEST_STATUS, 5111, 'Approved', 5112, 'In Progress') TEST_STATUS
FROM TEST_TABLE

The above works fine, but I have to manually look up the numbers and review them to build the statements. As I mentioned, some tables have 20-30 columns that would need this, and some need DECODE statements with 12-15 conditions.

Is there anything that would allow me to do something simpler, like:

SELECT TEST_ID, DECODE(TEST_STATUS = *TableWithCodeValues*) TEST_STATUS
FROM TEST_TABLE

EDIT: To be clear, I know I can do a ton of INNER JOINs, but I wasn't sure whether there is a more efficient way than that.
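Since all the codes live in one lookup table, one way to tame the repetitive join SQL is to generate it from a column list instead of typing it by hand. A minimal sketch; the `CODE_LOOKUP(CODE, DESCRIPTION)` layout and the column names are assumptions for illustration, not the real schema:

```python
# Generate a SELECT that left-joins one alias of the lookup table per coded
# column. CODE_LOOKUP(CODE, DESCRIPTION) is a hypothetical lookup-table layout.
def build_decode_query(table, id_col, coded_cols, lookup="CODE_LOOKUP"):
    select = [f"t.{id_col}"]
    joins = []
    for i, col in enumerate(coded_cols):
        alias = f"l{i}"                  # one lookup alias per coded column
        select.append(f"{alias}.DESCRIPTION AS {col}_DESC")
        joins.append(f"LEFT JOIN {lookup} {alias} ON t.{col} = {alias}.CODE")
    return f"SELECT {', '.join(select)}\nFROM {table} t\n" + "\n".join(joins)

print(build_decode_query("TEST_TABLE", "TEST_ID",
                         ["TEST_STATUS", "STUDENT_TYPE"]))
```

LEFT JOIN rather than INNER JOIN keeps rows whose code is NULL or missing from the lookup table from being silently dropped.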
  • Are all the lookup values in one table, or is there one lookup table for every column in the query? Regardless, I would not use DECODE for this, because it will be a nightmare to maintain in the long run. Things that come to mind: a dimension is extended with another value, 'In Progress' has to be changed to something else, or <drumroll> it is decided that you must support another language.
    – Lennart
    Nov 12 '18 at 22:10

  • All of the values exist in one table, so all 9,500 possible codes can be found by joining against a single table. I am not sure what you mean by 'if a dimension is extended with another value'.
    – DataforDays
    Nov 12 '18 at 22:56

  • If another value is added to that table, you will have to change all your queries if you use DECODE.
    – Lennart
    Nov 12 '18 at 22:59

  • Exactly why I do not want to use it; I need to prepare for all future values as well. Some columns will only have 2-3 values that won't change, but others have a bunch and could.
    – DataforDays
    Nov 13 '18 at 5:12
Tags: db2, decode, db2-luw
asked Nov 12 '18 at 21:02 by DataforDays, edited Nov 12 '18 at 21:30
2 Answers
From a logical point of view, I would consider splitting the lookup table into several domain/dimension tables. I am not sure whether that is possible for you, so I'll leave that part aside.

As mentioned in my comment, I would stay away from using DECODE as described in your post. I would start with plain joins:

SELECT a.TEST_STATUS
     , b.TEST_STATUS_DESCRIPTION
     , a.ANOTHER_STATUS
     , c.ANOTHER_STATUS_DESCRIPTION
     , ...
FROM TEST_TABLE as a
JOIN TEST_STATUS_TABLE as b
    ON a.TEST_STATUS = b.TEST_STATUS
JOIN ANOTHER_STATUS_TABLE as c
    ON a.ANOTHER_STATUS = c.ANOTHER_STATUS
JOIN ...

If things are too slow, there are a couple of things you can try:

  • Create a statistical view, which can help the optimizer determine the cardinalities of the joins and produce a better plan:

https://www.ibm.com/support/knowledgecenter/sl/SSEPGG_9.7.0/com.ibm.db2.luw.admin.perf.doc/doc/c0021713.html

  • If your license admits it, you can experiment with Materialized Query Tables (MQTs). Note that there is a penalty on modifications of the base tables, so if you have more of an OLTP workload this is probably not a good idea:

https://www.ibm.com/developerworks/data/library/techarticle/dm-0509melnyk/index.html

A third option, if your lookup table is fairly static, is to cache the lookup table in the application: read TEST_TABLE from the database and look up the descriptions in the application. A further improvement would be triggers that invalidate the cache when the lookup table is modified.
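The application-side cache above can be sketched as follows. sqlite3 stands in for DB2 here, and all table/column names and code values are invented for the example:

```python
import sqlite3

# In-memory stand-in for the database; the real lookup table would be the
# single ~9,500-row code table described in the question.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE code_lookup (id INTEGER PRIMARY KEY, text TEXT);
    INSERT INTO code_lookup VALUES (502, 'Yes'), (1413, 'Graduate Student');
    CREATE TABLE test_table (test_id INTEGER, test_status INTEGER);
    INSERT INTO test_table VALUES (1, 502), (2, 1413), (3, 9999);
""")

# One round trip loads every code into a dict; refresh this (or invalidate it
# via a trigger-driven signal) when the lookup table changes.
cache = dict(conn.execute("SELECT id, text FROM code_lookup"))

# Decode rows in the application; unknown codes map to None.
decoded = [(tid, cache.get(status))
           for tid, status in conn.execute(
               "SELECT test_id, test_status FROM test_table ORDER BY test_id")]
```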
answered Nov 13 '18 at 8:02 by Lennart
  • Yeah, I was afraid I would have to do a bunch of joins. Ah well, at least I won't have to maintain it as closely as the DECODE. Also, thankfully I do not have to worry about the performance of the tables themselves, as this is a reporting DB that I have full control over; I just want to make sure that the load time stays under 90 minutes for all 26 tables.
    – DataforDays
    Nov 13 '18 at 15:27
If you don't want to do all these joins, you could create your own LOOKUP function:

create or replace function lookup(IN_ID INTEGER)
returns varchar(32)
deterministic reads sql data
begin atomic
  declare OUT_TEXT varchar(32);--
  set OUT_TEXT = (select text from test.lookup where id = IN_ID);--
  return OUT_TEXT;--
end;

With a table TEST.LOOKUP like

create table test.lookup(id integer, text varchar(32))

containing some id/text pairs, this returns the text value corresponding to an id, or NULL if the id is not found.

With your roughly 10k id/text pairs and an index on the ID column, this shouldn't be a performance issue, as that amount of data should easily be cached in the corresponding bufferpool.
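Once created, the function is used inline, e.g. `SELECT TEST_ID, lookup(TEST_STATUS) AS TEST_STATUS FROM TEST_TABLE`. The same scalar-function pattern can be sketched outside DB2 as well; here is a rough analogue using Python's sqlite3 user-defined functions (the code values are made up for illustration):

```python
import sqlite3

# Hypothetical code/description pairs standing in for the big lookup table.
codes = {5111: "Approved", 5112: "In Progress"}

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE test_table (test_id INTEGER, test_status INTEGER)")
conn.executemany("INSERT INTO test_table VALUES (?, ?)",
                 [(1, 5111), (2, 5112), (3, 9999)])

# Register a scalar function, analogous to the DB2 LOOKUP function above;
# unknown codes yield NULL (None), matching the DB2 version's behaviour.
conn.create_function("lookup", 1, lambda code: codes.get(code))

rows = conn.execute(
    "SELECT test_id, lookup(test_status) FROM test_table ORDER BY test_id"
).fetchall()
```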
answered Nov 21 '18 at 20:44 by The_Data_Doctor