Match patterns from one file awk not working
I want to match strings from a pattern file against the lines of Source.txt.



pattern_list.txt has 139k lines.

Source.txt has more than 5 million lines.



If I use grep like this, it takes about 2 seconds to produce the output.



grep -F -f pattern_list.txt Source.txt > Output.txt


But if I try this awk script, it gets stuck; after 10 minutes I have to stop it because nothing happens.



awk 'NR==FNR{a[$1]; next}
{for (i in a) if ($0 ~ i) print $0}
' FS=, OFS=, pattern_list.txt Source.txt > Output.txt


pattern_list.txt looks like this:



21051
99888
95746


and Source.txt like this:



72300,2,694
21051,1,694
63143,3,694
25223,2,694
99888,8,694
53919,2,694
51059,2,694


What is wrong with my awk script?



I'm running under Cygwin on Windows.
  • Another approach: join -t "," <(sort pattern_list) <(sort source.txt)
    – Cyrus, Nov 10 at 21:10

  • Possible duplicate of Fastest way to find lines of a file from another larger file in Bash
    – codeforester, Nov 11 at 7:06

  • @codeforester Hi, I was asking more about why my awk script was so slow than for the fastest way to do it in perl, grep, bash, or other tools.
    – Ger Cas, Nov 11 at 12:44

  • Since your awk code is trying to do exactly what the accepted answer in the linked post is doing, I considered it a duplicate, or at least related.
    – codeforester, Nov 11 at 17:23
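Cyrus's join suggestion above can be sketched on the question's sample data (file contents reproduced verbatim; note that join requires both inputs sorted on the join field, and the `<(...)` process substitution is a bash feature):

```shell
# Recreate the sample files from the question
printf '21051\n99888\n95746\n' > pattern_list.txt
printf '72300,2,694\n21051,1,694\n63143,3,694\n25223,2,694\n99888,8,694\n53919,2,694\n51059,2,694\n' > Source.txt

# join matches on field 1 (the default), with "," as the separator;
# both inputs must be sorted on that field first
join -t "," <(sort pattern_list.txt) <(sort Source.txt)
# prints:
# 21051,1,694
# 99888,8,694
```

One difference from the grep approach: the output comes out in sort order, not in the original order of Source.txt.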






asked Nov 10 at 20:44 by Ger Cas
edited Nov 11 at 7:06 by codeforester
2 Answers
If you are doing a literal match, this should be faster than your approach:



$ awk -F, 'NR==FNR{a[$0]; next} $1 in a{print $1,$3,$8,$20}' pattern_list source > output


However, I think sort/join will still be faster than grep and awk.
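On the question's three-field sample, this lookup version reduces to a plain `$1 in a` test (the `print $1,$3,$8,$20` field list above assumes the asker's wider real file, so it is dropped here):

```shell
# Recreate the sample files from the question
printf '21051\n99888\n95746\n' > pattern_list.txt
printf '72300,2,694\n21051,1,694\n63143,3,694\n25223,2,694\n99888,8,694\n53919,2,694\n51059,2,694\n' > Source.txt

# NR==FNR is true only while reading the first file: store each pattern
# as an array key. For the second file, "$1 in a" is a single O(1) hash
# lookup per line, instead of one regex match per pattern per line.
awk -F, 'NR==FNR {a[$0]; next} $1 in a' pattern_list.txt Source.txt
# prints:
# 21051,1,694
# 99888,8,694
```

This is why the original script stalls at full scale: a loop over the array does roughly 139k regex matches for each of the 5 million lines, while the hash lookup does one comparison per line.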
– karakfa, answered Nov 10 at 21:22, edited Nov 10 at 23:55
  • Excellent. Now the execution time with your awk script is less than 4 seconds. But since my original source file has several fields, how do I tell your script to print only $3, $8 and $20 for the matched strings?
    – Ger Cas, Nov 10 at 23:10

  • Just print the required fields; see the update.
    – karakfa, Nov 10 at 23:56

  • Can't improve on karakfa's answer, but for grep vs awk performance tests see polydesmida.info/BASHing/2018-10-24.html
    – user2138595, Nov 11 at 6:57

  • @karakfa Thanks a lot. It works exactly as I wanted.
    – Ger Cas, Nov 11 at 12:36

  • @user2138595 Thanks for sharing the info. Yes, that is what I understood in theory and practice: awk is the champion in speed.
    – Ger Cas, Nov 11 at 12:37
If increasing performance is your goal, you'll need to parallelize this (awk is unlikely to be faster here, and may even be slower).



If I were you, I'd partition the source file, then search each part:



$ split -l 100000 src.txt src_part
$ ls src_part* | xargs -n1 -P4 fgrep -f pat.txt > matches.txt
$ rm src_part*
– Rafael, answered Nov 10 at 21:00
  • Thanks for the answer, but what I know is that awk is faster than grep, so I don't know what is happening here.
    – Ger Cas, Nov 10 at 21:06

  • @GerCas I doubt that is true, as awk has to parse the script and then run it; grep, on the other hand, is heavily optimized for its purpose.
    – Rafael, Nov 10 at 21:09