Pipe to gzip not working
Re: Pipe to gzip not working
Network and file I/O are still done on the mainframe.
Re: Pipe to gzip not working
I have a question about your original dataset.
The messages from your fromdsn test say that 2,175,683,079 bytes were read from the dataset, but the information that you gave earlier has:
Allocated bytes . . : 1,425,813,480
Re: Pipe to gzip not working
I am not sure what causes the discrepancy. From the info below, it seems that 2 GB is the right file size. When I browsed the file, it did have 87083 records, which agrees with FROMDSN.
FROMDSN:
fromdsn(DRP.ABARS.DMGTEST.D.C01V0015): 87083 records/2175683079 bytes read
TSO 3.2:
Allocated bytes . . : 1,425,813,480
Used bytes . . . . : 1,425,813,480
Used extents . . . : 3
TSO 3.4:
DRP.ABARS.DMGTEST.D.C01V0015 43523 TRKS
VTOC:
Data Set Name Seq Volume Begin CYL-HD End CYL-HD Tracks Dsorg Recfm Lrecl
DRP.ABARS.DMGTEST.D.C01V0015 1 VDR001 6 0 1087 0 16216 PS U 0
DRP.ABARS.DMGTEST.D.C01V0015 2 VDR001 1087 1 2168 1 16216 PS U 0
DRP.ABARS.DMGTEST.D.C01V0015 3 VDR001 2168 2 2907 7 11091 PS U 0
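For what it's worth, the two byte counts divide out cleanly against the track total, which suggests the TSO 3.2 "Allocated bytes" figure comes from a fixed per-track estimate rather than from the actual RECFM=U block sizes. That interpretation is a guess; the arithmetic itself uses only the figures from the listings above:

```python
# All figures are taken verbatim from the listings above.
tracks = 16216 + 16216 + 11091   # the three extents in the VTOC
bytes_read = 2_175_683_079       # reported by fromdsn
allocated  = 1_425_813_480       # "Allocated bytes" from TSO 3.2

assert tracks == 43523           # matches the TSO 3.4 listing
print(allocated / tracks)        # 32760.0 -- exactly 32,760 bytes per track
print(bytes_read / tracks)       # ~49,989 bytes per track actually read
```

32,760 bytes/track looks like a BLKSIZE-style estimate, while ~50K/track is in line with what large blocks can actually pack onto a 3390 track, so fromdsn's count is plausibly the real data size.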
Re: Pipe to gzip not working
Per your post of Nov 29, are you still planning to try to recreate this, or is this a dead issue?
thanks,
Sandy
Re: Pipe to gzip not working
Our testing shows slightly more CPU time for fromdsn than for FTP.
I still can't explain the discrepancy between your file size and the allocated dataset size. Perhaps you are using compressed datasets? That would also consume CPU time.
Re: Pipe to gzip not working
This file is not compressed. But regardless of the actual file size, the bottom line is that the same file takes more CPU with fromdsn than with FTP, so I'm not sure what the point of using fromdsn is. I'm doing the best I can to understand and resolve this, because we really do want to offload some work to the Linux environment!
Re: Pipe to gzip not working
I think that we found a good part of the problem (the difference in CPU usage between FTP and Co:Z fromdsn):
We suspect that you don't have your TCP stack on z/OS configured like ours. When we changed our TCPCONFIG statement so that the socket send and receive buffer sizes are the IBM defaults, we also saw a large increase in CPU time for Co:Z fromdsn. IBM FTP apparently overrides the default with a 180K buffer size, vs. the 16K stack default.
Larger buffer sizes will allow the stack to take advantage of "segmentation offload", which significantly reduces CPU and can also increase throughput.
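An application-level override like FTP's is just a per-socket option. A minimal sketch of the idea (Python here purely for illustration; Co:Z and FTP are not Python, and the 180K figure is the one quoted above):

```python
import socket

BUFSIZE = 180 * 1024  # the buffer size IBM FTP reportedly uses

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
# Raise this one socket's send/receive buffers without touching the
# stack-wide TCPSENDBFRSIZE/TCPRCVBFRSIZE defaults.
s.setsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF, BUFSIZE)
s.setsockopt(socket.SOL_SOCKET, socket.SO_RCVBUF, BUFSIZE)

# The OS may round or cap the value; read back what was actually granted.
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_SNDBUF))
s.close()
```

A per-socket override like this is why one application (FTP) can see the benefit of large buffers while everything else on the stack keeps the small default.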
So, you could change your TCPCONFIG TCPSENDBFRSIZE and TCPRCVBFRSIZE to a larger default, but this would mean larger buffers (and memory usage) for all applications, which may not be desirable in your installation.
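If you did want to raise the stack-wide defaults, the change would go on the TCPCONFIG statement in PROFILE.TCPIP. The 184320 (180K) value here is only the FTP figure quoted above, not a recommendation:

```
TCPCONFIG TCPSENDBFRSIZE 184320 TCPRCVBFRSIZE 184320
```

Again, this affects every TCP application on the stack, which is exactly the downside noted above.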
Therefore, we are looking at adding a configuration option, with a larger default, for the TCP send and receive buffer sizes. We are also looking at some other possible improvements in this area.
Thanks for your patience.
Re: Pipe to gzip not working
Thank you for your reply. I will talk to our TCP/FTP person and see about our configuration. In the meantime, I look forward to trying out any new parameter options that you develop.
Re: Pipe to gzip not working
I spoke to our TCP/FTP person, and yes, we do use smaller buffer sizes to cater to our IMS streams, which are small in nature. So it is highly likely that a change in the Co:Z buffer size will help resolve this problem.
thanks,
Sandy
Re: Pipe to gzip not working
We are testing a new release that will allow configuration of send and receive buffer sizes, so as to take advantage of segmentation offload.
We are also looking to exploit fastpath socket processing.
So far, the results are looking good.
I'll post something here when there is a release to test.