Zombie state with the multiprocessing library (Python 3)

My question concerns a replacement for the join() function that avoids the defunct (zombie) state of already-terminated processes when using Python 3's multiprocessing library. Is there an alternative that suspends child processes from terminating until they get the green light from the main process, so that they can exit cleanly without going into a zombie state?



I prepared a quick illustration with the following code, which launches 19 processes; the first process gets 10 seconds' worth of work and all the others get 3 seconds' worth:



import os
import sys
import time
import multiprocessing as mp
from multiprocessing import Process

def exe(i):
    print(i)
    if i == 1:
        time.sleep(10)
    else:
        time.sleep(3)

procs = []
for i in range(1, 20):
    proc = Process(target=exe, args=(i,))
    proc.start()
    procs.append(proc)

for proc in procs:
    print(proc)  # <-- blocked here: I cannot join the others until the first process finishes its workload
    proc.join()

print("finished")


If you run the script, you will see that all the other processes go into a zombie state until join() on the first process returns. This can make the system unstable or overloaded!



Thanks

python python-3.x python-multiprocessing zombie-process

asked Nov 13 '18 at 14:21
Ouali Abdelkader

  • Zombies require fewer resources than a suspended-but-unterminated process. Suspending a process instead of letting it become a zombie is completely counterproductive.

    – user2357112
    Nov 13 '18 at 22:08

  • If you suspend the process after it finishes its work, no resources are occupied, at least on the programmer's side. My intention was to give the script total control over the processes, not to let the OS take control of my code based on its own assumptions, especially about whether to kill a process or not!

    – Ouali Abdelkader
    Nov 14 '18 at 16:13

1 Answer

Per this thread, Marko Rauhamaa writes:




If you don't care to know when child processes exit, you can simply ignore the SIGCHLD signal:



import signal
signal.signal(signal.SIGCHLD, signal.SIG_IGN)


That will prevent zombies from appearing.




The wait(2) man page explains:




POSIX.1-2001 specifies that if the disposition of SIGCHLD is set to SIG_IGN or the SA_NOCLDWAIT flag is set for SIGCHLD (see sigaction(2)), then children that terminate do not become zombies and a call to wait() or waitpid() will block until all children have terminated, and then fail with errno set to ECHILD. (The original POSIX standard left the behavior of setting SIGCHLD to SIG_IGN unspecified. Note that even though the default disposition of SIGCHLD is "ignore", explicitly setting the disposition to SIG_IGN results in different treatment of zombie process children.)



Linux 2.6 conforms to the POSIX requirements. However, Linux 2.4 (and earlier) does not: if a wait() or waitpid() call is made while SIGCHLD is being ignored, the call behaves just as though SIGCHLD were not being ignored, that is, the call blocks until the next child terminates and then returns the process ID and status of that child.




So if you are using Linux 2.6 or a POSIX-compliant OS, the code above will let child processes exit without ever becoming zombies. If you are not using a POSIX-compliant OS, the thread above offers a number of other options.
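
To make that concrete, here is a minimal runnable sketch of the SIG_IGN approach applied to the question's setup (my illustration, not code from the thread; it assumes a POSIX system, and since multiprocessing does its own bookkeeping at interpreter exit, details may vary across Python versions):

import time
import signal
import multiprocessing as mp

def exe(i):
    print(i)
    time.sleep(10 if i == 1 else 3)

if __name__ == '__main__':
    # Ask the kernel to reap children automatically: with SIGCHLD set to
    # SIG_IGN, exited children are cleaned up immediately instead of
    # lingering as <defunct> entries.
    signal.signal(signal.SIGCHLD, signal.SIG_IGN)

    for i in range(1, 20):
        mp.Process(target=exe, args=(i,)).start()

    # No join() calls: each child is reaped as soon as it exits. While the
    # parent sleeps, `ps` should show no zombies.
    time.sleep(15)
    print("finished")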




Below is one alternative, somewhat similar to Marko Rauhamaa's third suggestion. If for some reason you need to know when child processes exit and want to handle (at least some of them) differently, you can set up a queue that lets each child signal the main process when it is done. The main process then calls the matching join() in the order in which it receives names from the queue:



import time
import multiprocessing as mp

def exe(i, q):
    try:
        print(i)
        if i == 1:
            time.sleep(10)
        elif i == 10:
            raise Exception('I quit')
        else:
            time.sleep(3)
    finally:
        # Always report back, even if the worker raised an exception,
        # so the main process knows this child can be joined.
        q.put(mp.current_process().name)

if __name__ == '__main__':
    procs = dict()
    q = mp.Queue()
    for i in range(1, 20):
        proc = mp.Process(target=exe, args=(i, q))
        proc.start()
        procs[proc.name] = proc

    while procs:
        name = q.get()       # blocks until some child reports completion
        proc = procs[name]
        print(proc)
        proc.join()          # join in completion order, not start order
        del procs[name]

    print("finished")


yields a result like



... 
<Process(Process-10, stopped[1])> # <-- process with exception still gets joined
19
<Process(Process-2, started)>
<Process(Process-4, stopped)>
<Process(Process-6, started)>
<Process(Process-5, stopped)>
<Process(Process-3, stopped)>
<Process(Process-9, started)>
<Process(Process-7, stopped)>
<Process(Process-8, started)>
<Process(Process-13, started)>
<Process(Process-12, stopped)>
<Process(Process-11, stopped)>
<Process(Process-16, started)>
<Process(Process-15, stopped)>
<Process(Process-17, stopped)>
<Process(Process-14, stopped)>
<Process(Process-18, started)>
<Process(Process-19, stopped)>
<Process(Process-1, started)> # <-- Process-1 ends last
finished
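
For completeness, the same completion-order join can also be sketched without a manual queue by waiting on process sentinels, which the standard library exposes on Python 3.3+ via Process.sentinel and multiprocessing.connection.wait (again my sketch, reusing the procs dict from the code above):

from multiprocessing.connection import wait

# Each Process has a 'sentinel' handle that becomes ready when the process
# exits, so wait() hands back processes in completion order.
sentinels = {p.sentinel: p for p in procs.values()}
while sentinels:
    for s in wait(list(sentinels)):  # blocks until at least one child exits
        proc = sentinels.pop(s)
        proc.join()  # returns immediately; the child has already exited
        print(proc)

This avoids the extra queue traffic, though the queue version has the advantage of working unchanged if the children also need to send results back.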

answered Nov 13 '18 at 15:02, edited Nov 13 '18 at 22:47

unutbu

  • Thanks, I found the queue approach really interesting; it adds some overhead through the queue, but I still think it is more consistent.

    – Ouali Abdelkader
    Nov 14 '18 at 16:34