Correct way to pipe multiple datasets

Posted: Tue Apr 05, 2011 10:06 am
by tsdjim
I need to execute a program on Linux and export various files for use by the program. Below is the JCL I am trying to use. What is the correct way to export multiple files to Linux, and is the syntax for the DD: and PDS member statements correct?

//ARCHPARM DD *
Some data
/*
//DATAIN DD *
Some other data
/*
//STDIN DD *
export dd_ARCHIVE='/some/path/on/zLinux/file' &
export dd_ARCHPARM='DD:ARCHPARM' &
export dd_DATAIN='DD:DATAIN' &
export dd_SYSOUT='DD:SYSOUT' &
export dd_OUTPUT='//MY.OUT.FILE' &
export dd_DATEPARM='//MY.PDS.FILE(PDSMEM)' &
/opt/pgms/myprog
/*

Posted: Tue Apr 05, 2011 10:26 am
by dovetail
the '&' connectors are unnecessary.

I'm not sure what you are asking - will your program be using "fromdsn" and "todsn" to read/write data from z/OS datasets using these environment variables? If so, then check the syntax on the fromdsn/todsn man pages and test individually.

Input files: mkfifo archive.txt mkfifo archparm.txt Output

Posted: Tue Apr 05, 2011 11:21 am
by tsdjim
I need to pipe 2 input datasets to a program on Linux and receive its output in 2 datasets. Can I use the following syntax?:

mkfifo archive.txt
mkfifo archparm.txt
mkfifo sysout.txt
mkfifo output.txt

export dd_ARCHIVE=archive.txt
export dd_ARCHPARM=archparm.txt
export dd_SYSOUT=sysout.txt
export dd_OUTPUT=output.txt

fromdsn //'DD:ARCHIVE' > archive.txt &
fromdsn //'DD:ARCHPARM' > archparm.txt &
todsn //'DD:SYSOUT' > archparm.txt &
todsn //'DD:OUTPUT' = sysout.txt &
/opt/pgms/myprog

Posted: Tue Apr 05, 2011 4:09 pm
by dovetail
You would need "<" redirection for "todsn" and you would need separate named pipes.

Also, you need to put a "wait" command at the end, since the todsn command child processes might not finish immediately.

Also, named pipes are a little tricky to manage and you sometimes have to worry about what happens if the program doesn't write or (finish) reading from one.

There are techniques to deal with this in your shell.

To ensure that a "fromdsn" pipe completes, you can do this:

mkfifo readpipe.txt
fromdsn ... > readpipe.txt &
read_pid=$!                    # no spaces around '=' in shell assignments
...
myprogram readpipe.txt
...
kill $read_pid 2>/dev/null     # stop the reader if it is still blocked
wait
rm readpipe.txt
To ensure that a "todsn" doesn't hang if your program never writes to it:

mkfifo writepipe.txt
todsn ... < writepipe.txt &
exec 4>writepipe.txt      # 4 is an unused file descriptor; holding it open prevents a hang
...
myprogram writepipe.txt
...
exec 4>&-   # make sure the writepipe gets closed so todsn sees EOF
wait
rm writepipe.txt
And if you are reading from and writing to multiple named pipes, you can combine these techniques.
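Here is a minimal runnable sketch of that combination. Note that `printf`, `cat`, and `tr` stand in for `fromdsn`, `todsn`, and your program, so it can be tried on any Linux box without Co:Z installed; substitute the real commands on your system:

```shell
#!/bin/bash
# Combine both patterns: a background producer feeds a read pipe and a
# background consumer drains a write pipe, with the same cleanup steps.
set -e
tmp=$(mktemp -d)
mkfifo "$tmp/readpipe" "$tmp/writepipe"

# Stand-in for: fromdsn ... > readpipe &
printf 'input data\n' > "$tmp/readpipe" &
read_pid=$!

# Stand-in for: todsn ... < writepipe &
cat < "$tmp/writepipe" > "$tmp/captured" &

exec 4>"$tmp/writepipe"       # hold the write end open on unused descriptor 4

# Stand-in for: myprogram (reads one pipe, writes the other)
tr a-z A-Z < "$tmp/readpipe" >&4

exec 4>&-                     # close the write pipe so the consumer sees EOF
kill "$read_pid" 2>/dev/null || true   # reap the reader if still blocked
wait
cat "$tmp/captured"           # prints: INPUT DATA
rm -r "$tmp"
```

The `exec 4>` trick and the `kill`/`wait` cleanup are exactly the two safeguards described above, just applied to both pipes at once.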

Even easier, if you can change your program's code, is to use something like the "popen()" function in C:

  FILE* infile = popen("fromdsn ...", "r");
  FILE* outfile = popen("todsn ...", "w");
  /* ... read/write the streams, then close, which waits for each child: */
  pclose(infile); pclose(outfile);
The first argument is the command to run as a child process, with an unnamed pipe set up so that your program can either "r"ead from it or "w"rite to it as a stream. You could use an exported environment variable to pass the command to your program.

Other languages - Perl, PHP, Python, etc. - support something similar to popen(). In SAS on *nix, you can open a file using the keyword PIPE followed by the command, which does the same thing.

Re: Correct way to pipe multiple datasets

Posted: Mon Aug 15, 2011 4:01 pm
by JohnMcKown
If you are using BASH on Linux, use what is called process substitution. Just from looking at your example command, I would guess that the input and output file names are "hard coded". Instead of doing this, pick them up as parameters from the command line. Your command line would look something like:

/opt/pgms/myprog <(fromdsn //'DD:ARCHIVE') <(fromdsn //'DD:ARCHPARM') >(todsn //'DD:SYSOUT') >(todsn //'DD:OUTPUT')
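Process substitution can be demonstrated with ordinary commands (no Co:Z needed). Each <(...) expands to a /dev/fd path that the program receives as a normal file-name argument - exactly what myprog would see; >(...) similarly yields a writable path. A toy sketch with paste standing in for a program that takes two input files:

```shell
#!/bin/bash
# Each <(...) becomes a readable /dev/fd path passed as an argument;
# paste stands in for a program reading two input "files".
paste -d: <(printf 'a\nb\n') <(printf '1\n2\n')   # prints: a:1 then b:2
```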

Example C code:

#include <stdio.h>

int main(int argc, char *argv[]) {
    /* the substituted /dev/fd paths arrive as ordinary file-name arguments */
    FILE *archive = fopen(argv[1], "r");
    FILE *archparm = fopen(argv[2], "r");
    FILE *sysout = fopen(argv[3], "w");
    FILE *output = fopen(argv[4], "w");
    // process your data
    fclose(archive); fclose(archparm);
    fclose(sysout); fclose(output);
    return 0;
}

Please excuse any bugs. I'm not a good C programmer. Don't use it much - Perl is simpler for me.