Hi All,
Maybe we're unique, but we have a setup in batch on our mainframe where we can create/catalog a dataset with a user's password in a jobstep prior to the SFTP call. As long as pwdsn="PROD.SFTP.pasword.whatever" references an 'old' existing dataset, the transfers work terrific. We've been running/testing in this mode for months successfully.
However, last week we introduced our first test of what will be our production method: calling a program that reads the password off an encrypted database and writes it to a (NEW,CATLG,DELETE) cataloged dataset that is used for the SFTP call and then deleted in the next jobstep. For about 7 out of 8 attempts (yes, it works every once in a while - frustrating!) we get a 255 error from SFTP with the message, "DATA SET IS ALLOCATED TO ANOTHER JOB OR USER".
I have tried IEBGENRing the original file into another file and breaking the jobsteps up into two separate jobs - all in an attempt to confirm the dataset is 'available' to SFTP. This seems like overkill because there is no reason we can find for the file not to be available. We've even performed an ENQ command on the file to ensure it is free, and the error still occurs.
Is anyone else creating the password file on the fly in the same Job? Have you seen any weird behavior like this?
Thanks In Advance
Error when pwdsn is created within the same job
Re: Error when pwdsn is created within the same job
The password is actually being supplied to the IBM Ported Tools "ssh" program by setting SSH_ASKPASS to point to the read_passwd_dsn.sh script. ssh runs in a separate OMVS address space, so when it executes this script, the script also runs in a separate OMVS address space. The script uses the Co:Z fromdsn command to read the password dataset, and it allocates it with DISP=SHR. The problem is that your main job apparently has the dataset allocated with an exclusive disposition, like NEW. I'm not sure how this would ever work. You will need to somehow change the earlier jobstep so that it creates the pw data set without leaving an ENQ. Perhaps use a REXX step to dynamically allocate the dataset, call the program, and then free it?
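For instance, a batch TSO/E step could run a small REXX exec along these lines. This is only a sketch: the DD name PWOUT, the dataset name PROD.SFTP.PWDSN, and the program name DECRYPTP are all made-up placeholders, and your decryption program's real interface will differ. Because the dataset is allocated dynamically (it never appears on a DD statement), the initiator never holds a job-level ENQ on it; the ENQ is released at the FREE.

```rexx
/* REXX - sketch only: allocate the pw dataset dynamically,      */
/* have the (hypothetical) decryption program fill it, then FREE */
/* it so no ENQ remains when the later SFTP step runs.           */
arc = BPXWDYN("ALLOC DD(PWOUT) DA('PROD.SFTP.PWDSN') NEW CATALOG",
      "RECFM(F,B) LRECL(80) TRACKS SPACE(1,1) MSG(2)")
if arc <> 0 then exit 8
/* placeholder program that writes the password to the PWOUT DD  */
address ATTCHMVS "DECRYPTP"
/* FREE releases the dynamic allocation (and its ENQ) right away */
call BPXWDYN "FREE DD(PWOUT) MSG(2)"
exit RC
```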
A better approach might be to replace the read_passwd_dsn.sh script with a rexx (Unix) script that calls your program and writes the decrypted password to stdout (with "say"). Then you never have to put it in a data set.
Also, have you considered using public/private keys, stored in SAF/RACF/ACF2 for user authentication?
Re: Error when pwdsn is created within the same job
Thanks so much for the information. I'm an applications guy, so I had to sit down with our systems folks and talk this through. The information about separate address space was very helpful. With that info we are coming up with a solution. And I'm learning more than I ever wanted about JCL and how jobs operate!
The key that I found yesterday was that, in fact, the creation of the dataset (NEW) was not the offending line of code - it was the delete step at the end. I did not realize that JCL 'looks forward' to determine how to ENQ the file. As long as the creation step comes before the SFTP step and no other steps follow the SFTP step, the dataset is released by the job once it has been created, and the SFTP performs wonderfully (hence our frustration that 1 out of 8 times it seemed to work). However, once a jobstep is added *after* the SFTP step, the job keeps the dataset ENQ'd and the SFTP step fails. I did not know that!!
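The layout that bit us can be sketched like this (step, proc, and dataset names below are placeholders, not our real JCL): the initiator obtains the dataset ENQ at job start and holds it, exclusively because of NEW/OLD, until the *last* step that references the dataset completes.

```jcl
//*-------------------------------------------------------------
//* Sketch only - placeholder names. The exclusive ENQ on the
//* pw dataset is held until the LAST step referencing it ends.
//*-------------------------------------------------------------
//CREATE  EXEC PGM=PWWRITER
//PWOUT    DD DSN=PROD.SFTP.PWDSN,DISP=(NEW,CATLG,DELETE)
//*
//SFTP    EXEC COZPROC
//* Works here: no step after this one references PROD.SFTP.PWDSN,
//* so the exclusive ENQ was already released when CREATE ended.
//*
//* Uncommenting a cleanup step like the one below keeps the ENQ
//* held through the SFTP step, and the separate ssh address space
//* gets "DATA SET IS ALLOCATED TO ANOTHER JOB OR USER":
//*DELETE EXEC PGM=IEFBR14
//*PWDEL   DD DSN=PROD.SFTP.PWDSN,DISP=(OLD,DELETE)
```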
So we are working on a gameplan to avoid that situation.
Thanks again for the help - it was very useful!!!
Re: Error when pwdsn is created within the same job
If you can share a bit of info on the interface to your decryption program (what DDs it uses, parms for how it is called, etc.), I would be happy to show you how to call it from a REXX script that you can use as your SSH_ASKPASS program. That way, you NEVER have to put the clear-text password in a data set - ssh would call the REXX script when it needed the password, and your program would write out the password, which would get piped back into ssh. It would never hit disk.
Re: Error when pwdsn is created within the same job
It is pretty straightforward. We are running Natural/ADABAS on the mainframe. When the user submits a batch job w/transfer, we store their PW on an encrypted file via a ciphered routine; the "sequence number" of the job is the key. Then we execute a batch step that runs a program to read (then delete) the PW record based on the sequence number we send. Then we delete the file at the end. Up until we started using your product, we only wrote it to a temp dataset and deleted that at the end - so it was still written to disk, just not as a cataloged dataset.
Again, the example is pretty much the same as we've always used, but we catalog the dataset (&JOBSEQNO is the job sequence number that is passed in at runtime):
//JS010 EXEC NATURAL,
//*****************************************************************
//* Indicate the job has started and generate the temporary
//* file with ciphered password for SFTPing file
//* and for SFTPing back the file of rejected records.
//*****************************************************************
// SYS=TEST,
// DUMP='DUMMY,',PARM='MADIO=0',
// SYSOUT='&OCLASS,OUTPUT=*.UC401.OUTJ1'
//*
//NATBAT.CMSYNIN DD *
LOGON JOBS
JOBJOBE &JOBSEQNO S
JOBPSWD &JOBSEQNO CMPRT01
/*
//NATBAT.CMWKF05 DD *
**PSWD**
//NATBAT.CMWKF06 DD DSN=PROD.SFTP.UC4.GJJULJE.P&JOBSEQNO,
// DISP=(NEW,CATLG,DELETE),
// DCB=(RECFM=FB,BLKSIZE=0,LRECL=80),
// SPACE=(TRK,(0,1),RLSE)
Thanks again for all your help!
Re: Error when pwdsn is created within the same job
I doubt that your proc shows all of the DDs that would be necessary to allocate everything. I'm also not certain that NATURAL can be run in an OMVS address space.
What I can give you is a general-purpose REXX shell script example.
If you have someone familiar with REXX, you should be able to adapt it.
I strongly suggest testing this from an OMVS shell before trying to run it with ssh.
If your program (MYPROG) is not in linklist or lpalist, then you would also need to allocate STEPLIB in the script. You won't be able to use the STEPLIB or JOBLIB from the job that started ssh, since ssh and the SSH_ASKPASS program run in a separate OMVS address space.
To use this with ssh, you would need to:
export SSH_ASKPASS="/path/to/myprog.rexx"
export DISPLAY=none
(and you would have to run it without a terminal attached to stdin, which would be the case in batch)
/* REXX */
/**************************************************************
   REXX shell script to invoke a z/OS utility program.
   This script must be placed in the z/OS Unix file system,
   as a file that is executable by the user:
      chmod 755 myprog.rexx
**************************************************************/
urc = BPXWDYN("ALLOC DD(SYSUT1) DA(FQ.DSN) SHR MSG(2) REUSE")
if urc <> 0 then
  do  /* all error messages need to be written to stderr */
    line.1 = "Error allocating input dataset"
    address mvs "execio 1 diskw STDERR ( stem line."
    exit 8
  end
/* SYSUT2 is the DD where your password is written: stdout
   (/dev/fd1), which is redirected back into ssh             */
call BPXWDYN "ALLOC DD(SYSUT2) PATH('/dev/fd1') PATHOPTS(OWRONLY)",
     "FILEDATA(TEXT) MSG(2) REUSE"
parm = "SOMETHING"
address ATTCHMVS "MYPROG "parm   /* pass the parm string, not the literal "parm" */
urc = RC
call BPXWDYN "free DD(SYSUT1) MSG(2)"
call BPXWDYN "free DD(SYSUT2) MSG(2)"
exit urc
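Off z/OS, the SSH_ASKPASS mechanism itself can be sketched with a plain shell script standing in for myprog.rexx. This is a simulated demo only: the /tmp path and the echoed password are made-up placeholders, and the final line imitates ssh's read of the askpass program's stdout rather than invoking ssh.

```shell
# The askpass program simply prints the password on stdout; ssh
# runs it and captures that output when no terminal is attached.
cat > /tmp/askpass.sh <<'EOF'
#!/bin/sh
echo "secret-from-decrypt-pgm"
EOF
chmod 755 /tmp/askpass.sh

export SSH_ASKPASS=/tmp/askpass.sh   # script ssh should run for the password
export DISPLAY=none                  # must be set or ssh won't use SSH_ASKPASS

# Simulate ssh's read of the askpass program's stdout (stdin from
# /dev/null, as is effectively the case in batch):
PW=$("$SSH_ASKPASS" </dev/null)
echo "$PW"                           # prints secret-from-decrypt-pgm
```

The point is simply that the password travels over a pipe from the askpass program into ssh and never lands in a dataset.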
Re: Error when pwdsn is created within the same job
Thank you for the information. We have limited REXX expertise, but I'll sit down with my systems person when he gets back in the office and bounce this off of him. We have a usable solution for now - thanks again to you for steering us in the right direction - but we will consider this approach in the future to maybe keep the PW off disk completely.
Thanks Again!
Paul