Note: The Statistics Storage Information API STATOP_STORAGE option code 241 uses API_STOR_STATS2 for Version 2.
|
Note: For more information about this new command interface, see “MODIFY OMVS command enhancement” on page 2 and “Addressing PFS commands to zFS and TFS” on page 3.
|
zFS configuration option    Old range            New range
vnode_cache_size            32 - 500,000         1000 - 10,000,000
meta_cache_size             1 M - 1024 M         1 M - 64 G
token_cache_size            20480 - 2,621,440    20480 - 20,000,000
trace_table_size            1 M - 2048 M         1 M - 65535 M
xcf_trace_table_size        1 M - 2048 M         1 M - 65535 M
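For illustration only, an IOEFSPRM configuration that uses values inside the widened ranges might look like the following. The specific values are arbitrary examples chosen to fall within the new limits, not tuning recommendations:

```
vnode_cache_size=1000000
meta_cache_size=2G
token_cache_size=4000000
trace_table_size=4096M
xcf_trace_table_size=4096M
```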
|
Tip: We recommend using FSINFO instead of List Aggregate Status (opcode 135 or 140) or List File System Status (opcode 142).
|
zfsadm fsinfo [-aggregate name | -path path_name | -all]
              [-basic | -owner | -full | -reset]
              [-select criteria | -exceptions]
              [-sort sort_name] [-level] [-help]
|
Exception   Description
CE          XCF communication failures between client systems and owning systems
DA          Marked damaged by the zFS salvager
DI          Disabled for reading and writing
GD          Disabled for dynamic grow
GF          Failures on dynamic grow attempts
IE          Disk I/O errors
L           Less than 1 MB of free space; forces increased XCF traffic for writing files
Q           Currently quiesced
SE          Returned ENOSPC errors to applications
V5D         Disabled for conversion to version 1.5
|
Note: This option cannot be specified with -exceptions, -reset, or -path.
|
Criteria    Description
CE          XCF communication failures between client systems and owning systems
DA          Marked damaged by the zFS salvager
DI          Disabled for reading and writing
GD          Disabled for dynamic grow
GF          Failures on dynamic grow attempts
GR          Currently being grown
IE          Disk I/O errors
L           Less than 1 MB of free space; forces increased XCF traffic for writing files
NS          Mounted NORWSHARE
OV          Extended (v5) directories that are using overflow pages
Q           Currently quiesced
RQ          Had application activity
RO          Mounted read-only
RW          Mounted read/write
RS          Mounted RWSHARE (sysplex-aware)
SE          Returned ENOSPC errors to applications
TH          Contain sysplex thrashing objects
V4          Aggregates that are version 1.4
V5          Aggregates that are version 1.5
V5D         Aggregates that are disabled for conversion to version 1.5
WR          Had application write activity
|
sort_name   Function
Name        Sort by file system name, in ascending order. This is the default.
Requests    Sort by the number of external requests that are made to the file system by user applications, in descending order. The most actively requested file systems are listed first.
Response    Sort by response time of requests to the file system, in descending order. The slowest responding file systems are listed first.
|
Note: This option cannot be specified with -reset.
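The descending sorts can be mimicked offline on captured output. As a sketch only (the file system names and request counts below are made up, and this is standard sort rather than zfsadm itself), ordering a two-column name/requests report the way -sort requests does:

```shell
#!/bin/sh
# Sample report: file system name and external request count (illustrative data).
printf '%s\n' \
  'HERING.TEST.ZFS 120' \
  'HERING.ZFS 4711' \
  'HERING.TEST.DUMMY.ZFS 88' |
sort -k2,2nr    # numeric, descending, on column 2 - most active first
```

The most actively requested file system (HERING.ZFS here) is printed first, matching the behavior described for the Requests sort.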
|
$> zfsadm fsinfo hering*
HERING.TEST.DUMMY.ZFS SC74 RW,RS,Q,L
HERING.TEST.ZFS SC74 RW,NS,L
HERING.ZFS SC74 RW,RS
Legend: RW=Read-write,Q=Quiesced,L=Low on space,RS=Mounted RWSHARE
NS=Mounted NORWSHARE
$>
|
$> zfsadm fsinfo -path test -basic
HERING.TEST.ZFS SC74 RW,NS,L
Legend: RW=Read-write, L=Low on space, NS=Mounted NORWSHARE
$>
|
$> zfsadm fsinfo -path test
File System Name: HERING.TEST.ZFS
*** owner information ***
Owner: SC74 Converttov5: OFF,n/a
Size: 36000K Free 8K Blocks: 88
Free 1K Fragments: 46 Log File Size: 112K
Bitmap Size: 8K Anode Table Size: 80K
File System Objects: 257 Version: 1.5
Overflow Pages: 0 Overflow HighWater: 0
Thrashing Objects: 0 Thrashing Resolution: 0
Token Revocations: 0 Revocation Wait Time: 0.000
Devno: 54 Space Monitoring: 0,0
Quiescing System: n/a Quiescing Job Name: n/a
Quiescor ASID: n/a File System Grow: ON,0
Status: RW,NS,L
Audit Fid: C2C8F5E2 E3F20184 0000
File System Creation Time: Sep 8 09:38:25 2006
Time of Ownership: Jul 31 11:57:53 2015
Statistics Reset Time: Jul 31 11:57:53 2015
Quiesce Time: n/a
Last Grow Time: n/a
Connected Clients: n/a
Legend: RW=Read-write, L=Low on space, NS=Mounted NORWSHARE
$>
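Individual fields can be pulled out of a captured fsinfo report with standard tools. As a sketch (assuming the report text is available in a variable or pipe; the sample line below is taken from the output above):

```shell
#!/bin/sh
# Extract the value that follows a "Label:" field in an fsinfo-style report line.
report='File System Objects: 257 Version: 1.5'
version=$(printf '%s\n' "$report" | sed -n 's/.*Version: *\([0-9.]*\).*/\1/p')
echo "$version"    # prints 1.5
```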
|
$> zfsadm fsinfo -select q,ns
HERING.TEST.DUMMY.ZFS SC74 RW,RS,Q,L
HERING.TEST.ZFS SC74 RW,NS,L
Legend: RW=Read-write,Q=Quiesced,L=Low on space,RS=Mounted RWSHARE
NS=Mounted NORWSHARE
$>
|
$> rxdowner -a hering.largedir.v4
RXDWN004E Aggregate HERING.LARGEDIR.V4 cannot be found.
$> zfsadm fsinfo hering.largedir.v4
File System Name: HERING.LARGEDIR.V4
*** owner information ***
Owner: n/a Converttov5: OFF,n/a
Size: 360000K Free 8K Blocks: 9152
Free 1K Fragments: 7 Log File Size: 3600K
Bitmap Size: 56K Anode Table Size: 250264K
File System Objects: 1000003 Version: 1.5
Overflow Pages: 0 Overflow HighWater: 0
Thrashing Objects: 0 Thrashing Resolution: 0
Token Revocations: 0 Revocation Wait Time: 0.000
Devno: 0 Space Monitoring: 0,0
Quiescing System: n/a Quiescing Job Name: n/a
Quiescor ASID: n/a File System Grow: OFF,0
Status: NM
Audit Fid: C2C8F5D6 C5F1000A 0000
File System Creation Time: Jun 16 00:48:25 2013
Time of Ownership: Aug 12 22:38:19 2015
Statistics Reset Time: Aug 12 22:38:19 2015
Quiesce Time: n/a
Last Grow Time: n/a
Connected Clients: n/a
Legend: NM=Not mounted
$>
|
BPX1PCT("ZFS     ",   /* File system type, followed by 5 blanks */
        0x40000013,   /* ZFSCALL_FSINFO - fsinfo operation */
        parmlen,      /* Length of parameter buffer */
        parmbuf,      /* Address of parameter buffer */
        &rv,          /* Return value */
        &rc,          /* Return code */
        &rsn);        /* Reason code */
|
$> rxlstqsd
HERING.TEST.PRELE.ZFS
HERING.TEST.RW.ZFS
HERING.TEST.ZFS
$> cn "f axr,rxlstqsd"
ZFSQS004I RXLSTQSD on SC74 -
HERING.TEST.PRELE.ZFS
HERING.TEST.RW.ZFS
HERING.TEST.ZFS
$> sudo zfsadm unquiesce HERING.TEST.PRELE.ZFS
IOEZ00166I Aggregate HERING.TEST.PRELE.ZFS successfully unquiesced
$> sudo zfsadm unquiesce HERING.TEST.RW.ZFS
IOEZ00166I Aggregate HERING.TEST.RW.ZFS successfully unquiesced
$> sudo zfsadm unquiesce HERING.TEST.ZFS
IOEZ00166I Aggregate HERING.TEST.ZFS successfully unquiesced
$> rxlstqsd
ZFSQS006I There are no quiesced aggregates.
$> tsocmd "rxlstqsd"
rxlstqsd
ZFSQS006I There are no quiesced aggregates.
$>
|
Note: On a down-level system, a message indicates that the system must be at z/OS V2R2 or later to use the utility.
|
modify zFS_procname,fsinfo[,{aggrname | all}
[,{full | basic | owner | reset} [,{select=criteria | exceptions}]
[,sort=sort_name]]]
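For illustration only, hypothetical console invocations of this MODIFY interface might look like the following. The procedure name ZFS is the common default, and the file system name is made up:

```
f zfs,fsinfo,all,basic
f zfs,fsinfo,HERING.TEST.ZFS,full
f zfs,fsinfo,all,owner,select=rw
```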
|
$> cn "d omvs,o" | grep KERNELSTACKS
KERNELSTACKS = ABOVE
$>
|
$> echo "The local sysclone value is:" $(sysvar SYSCLONE)
The local sysclone value is: 74
$> cat "//'SYS1.PARMLIB(IEASYS00)'" | grep OMVS
OMVS=(&SYSCLONE.,&OMVSPARM.),
$> cat "//'SYS1.PARMLIB(BPXPRM74)'"
KERNELSTACKS(ABOVE)
FILESYSTYPE TYPE(ZFS)
ENTRYPOINT(IOEFSCM)
PARM('PRM=(&SYSCLONE.,00)')
$>
|
$> cn "d omvs,p" | grep ZFS
ZFS IOEFSCM
ZFS PRM=(74,00)
$> cn "f zfs,query,level"
IEE341I ZFS NOT ACTIVE
$> cn "f omvs,pfs=zfs,query,level"
IOEZ00639I zFS kernel: z/OS zFS
Version 02.02.00 Service Level OA47915 - HZFS420.
Created on Fri May 29 13:31:44 EDT 2015.
sysplex(filesys,rwshare) interface(4)
IOEZ00025I zFS kernel: MODIFY command - QUERY,LEVEL completed successfully.
$>
|