Hello,
this is probably a very basic question, but I don't know how to deal with it right now.
We are running a job consisting of two steps. The first one starts some daemons, runs my REXX at the end, and then does an exit - at least that is what the trace I get from "set -x" says. But the process still stays alive.
Do you have any hints about this?
brgds,
Ulrich Schmidt
DTLSPAWN-Step stays alive but should terminate
-
- Posts: 37
- Joined: Fri Jan 09, 2009 1:25 pm
- Location: Germany
Re: DTLSPAWN-Step stays alive but should terminate
And to add: if I go back from DTLSPAWN to BPXBATCH as the launcher for this sequence, it works fine.
brgds,
Ulrich Schmidt
Ulrich,
What version of COZBATCH (formerly DTLSPAWN) are you using?
Can you run with '-LT' and post or email the trace to support@dovetail.com?
Also, if you can provide us with a small test case that reproduces the problem, that would help us.
Thanks
Ulrich,
Thanks for sending the trace data; sorry that it took me a while to look at it carefully.
What I believe is happening is that COZBATCH is starting your shell (/bin/sh) as a child process and then it redirects its stdin, stdout, and stderr to pipes. COZBATCH then has a select() loop that does three things at once:
1) reads any records from DD:STDIN and writes them to the stdin pipe
2) reads any data from the stdout pipe and writes to DD:STDOUT
3) reads any data from the stderr pipe and writes to DD:STDERR
This continues until all three of the pipes to the shell have been closed (EOF).
From the trace that you sent me, I can see that COZBATCH is waiting for more data from stdout and stderr, but it never gets an EOF (pipe closed).
I assume that what is happening is that your shell forks a daemon (child) process (how is this done? with nohup? or just with &?). When the shell does "exit 0", the daemon process is still running and holds duplicate handles to the stdout/stderr pipes, which is why they never close.
Under POSIX, a child process has a life of its own and doesn't necessarily terminate when its parent terminates. So I'm not sure what COZBATCH *should* do in this case, since there is still a running process that may be writing output to stdout->DD:STDOUT and stderr->DD:STDERR.
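The inherited-descriptor behavior can be reproduced with an ordinary pipe, outside of COZBATCH. This is a minimal sketch (using `sleep` as a stand-in for a daemon) showing that a reader only sees EOF once every copy of the pipe's write end is closed:

```shell
#!/bin/sh
# A reader on a pipe sees EOF only when ALL write ends are closed.
# Here "sleep 5 &" stands in for a forked daemon: it inherits the
# subshell's stdout (the pipe), so even though the subshell does
# "exit 0" immediately, 'cat' keeps waiting until the sleep ends.
( sleep 5 & echo "shell output"; exit 0 ) | cat
```

This mirrors what your trace shows: the select() loop is still waiting on the stdout/stderr pipes because the forked daemon holds open write ends to them.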
What happens if you fork your daemon child process but redirect both its stdout and stderr to files (or /dev/null)? I would expect that to allow the pipes in the main shell process to close, so that the job then terminates.
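A sketch of that suggestion, with `sleep 30` as a placeholder for the actual daemon command: because the background child's stdout and stderr are redirected at fork time, it never acquires copies of the shell's pipe descriptors, and the pipes close as soon as the shell exits.

```shell
#!/bin/sh
# Sketch only: "sleep 30" is a placeholder for the real daemon start.
# Redirecting stdout/stderr at fork time means the daemon holds no
# copy of the shell's stdout/stderr pipes, so they close on "exit 0".
nohup sleep 30 >/tmp/daemon.log 2>&1 &
# or, if the daemon's output is not needed:
# nohup sleep 30 >/dev/null 2>&1 &
exit 0
```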