NCO User's Guide
Foreword
Summary
1. Introduction
2. Operator Strategies
3. Features common to most operators
4. Reference manual for all operators
5. Contributing
General Index
--- The Detailed Node Listing ---
Introduction
1.1 Availability
1.2 Operating systems compatible with NCO
1.3 Libraries
1.4 netCDF 2.x vs. 3.x
1.5 Help and Bug reports
Operating systems compatible with NCO
1.2.1 Compiling NCO for Microsoft Windows OS
Operator Strategies
Averagers vs. Concatenators
2.6.1 Concatenators ncrcat and ncecat
2.6.2 Averagers ncea, ncra, and ncwa
2.6.3 Interpolator ncflint
Features common to most operators
Accessing files stored remotely
3.2.1 DODS
Reference manual for all operators
ncwa netCDF Weighted Averager
Masking condition
Normalization
ls or mkdir.
The operators take netCDF file(s) (or HDF4 files) as input, perform an
operation (e.g., averaging or hyperslabbing), and produce a netCDF file
as output.
The operators are primarily designed to aid manipulation and analysis of
data.
The examples in this documentation are typical applications of the
operators for processing climate model output.
This reflects their origin, but the operators are as general as netCDF
itself.
1.1 Availability
1.2 Operating systems compatible with NCO
1.3 Libraries
1.4 netCDF 2.x vs. 3.x
1.5 Help and Bug reports
tar lets you perform both operations in one step with `tar -xvzf nco.tar.gz'.
The documentation for NCO is called the NCO User's Guide.
The User's Guide is available in Postscript, HTML, DVI,
TeXinfo, and Info formats.
These formats are included in the source distribution in the files
`nco.ps', `nco.html', `nco.dvi', `nco.texi', and
`nco.info*', respectively.
All the documentation descends from a single source file,
`nco.texi'
(1).
Hence the documentation in every format is very similar.
However, some of the complex mathematical expressions needed to describe
ncwa can only be displayed in the Postscript and DVI formats.
If you want to quickly see what the latest improvements in NCO are (without downloading the entire source distribution), visit the NCO homepage at http://nco.sourceforge.net. The HTML version of the User's Guide is also available online through the World Wide Web at URL http://nco.sourceforge.net/nco.html. To build and use NCO, you must have netCDF installed. The netCDF homepage is http://www.unidata.ucar.edu/packages/netcdf.
New NCO releases are announced on the netCDF list and on the
nco-announce
mailing list
http://lists.sourceforge.net/mailman/listinfo/nco-announce.
The major prerequisite for installing NCO on a particular platform is
the successful, prior installation of the netCDF libraries themselves.
Unidata has shown a commitment to maintaining netCDF on all popular UNIX
platforms, and is moving towards full support for the Microsoft Windows
operating system (OS).
Given this, the only difficulty in implementing NCO on a particular
platform is standardization of various C and Fortran interface and
system calls.
The C-code has been tested for ANSI compliance by compiling with GNU
gcc -ansi -pedantic.
Certain branches in the code were required to satisfy the native SGI and
SunOS cc
compilers, which are strictly ANSI compliant and do not
allow variable-size arrays, a nice feature supported by GNU,
UNICOS, Solaris, and AIX compilers.
The most time-intensive portion of NCO execution is spent in arithmetic operations, e.g., multiplication, averaging, subtraction. Until August, 1999, these operations were performed in Fortran by default. This was a design decision based on the speed of Fortran-based object code vs. C-based object code in late 1994. Since 1994 native C compilers have improved their vectorization capabilities and it has become advantageous to replace all Fortran subroutines with C subroutines. Furthermore, this greatly simplifies the task of compiling on nominally unsupported platforms. As of August 1999, NCO is built entirely in C by default. This allows NCO to compile on any machine with an ANSI C compiler. Furthermore, NCO automatically takes advantage of extensions to ANSI C when compiled with the GNU compiler collection, GCC.
As of July 2000 and NCO version 1.2, NCO no longer
supports performing arithmetic operations in Fortran to improve speed.
Supporting Fortran involves maintaining two sets of routines for every
arithmetic operation.
The USE_FORTRAN_ARITHMETIC
flag is retained in the Makefile and
the file containing the Fortran code, `nc_fortran.F', is still
distributed with NCO in case a volunteer decides to resurrect them.
If you would like to volunteer to maintain `nc_fortran.F' please
contact me.
Otherwise the Fortran hooks will be completely removed in the next major
release.
1.2.1 Compiling NCO for Microsoft Windows OS
NCO has been successfully ported and tested on the Microsoft
Windows NT 4.0 operating system.
The switches necessary to accomplish this are included in the standard
distribution of NCO.
Using the freely available Cygwin (formerly gnu-win32) development
environment
(2), the compilation process is very similar to installing NCO on a
UNIX system.
The preprocessor token PVM_ARCH should be set to WIN32.
Note that defining WIN32
has the side effect of disabling
Internet features of NCO (see below).
Unless you have a Fortran compiler (like g77 or f90) available,
no other tokens are required.
Users with fast Fortran compilers may wish to activate the Fortran
arithmetic routines.
To do this, define the preprocessor token USE_FORTRAN_ARITHMETIC
in the makefile which comes with NCO, `Makefile', or in the
compilation shell.
The least portable section of the code is the use of standard UNIX and
Internet protocols (e.g., ftp, rcp, scp, getuid, gethostname, and the
header files `<arpa/nameser.h>' and `<resolv.h>').
Fortunately, these UNIXy calls are only invoked by the single NCO
subroutine which is responsible for retrieving files stored on remote
systems (see section 3.2 Accessing files stored remotely).
In order to support NCO on the Microsoft Windows platforms,
this single feature was disabled (on Windows OS only).
This was required by Cygwin 18.x--newer versions of Cygwin may support
these protocols (let me know if this is the case).
The NCO operators should behave identically on Windows and
UNIX platforms in all other respects.
If the dynamic library search path (the LD_LIBRARY_PATH
environment variable) is not set correctly, or if
the system libraries have been moved, renamed, or deleted since NCO was
installed, it is possible an NCO operator will fail with a message that
it cannot find a dynamically loaded (aka shared object or
`.so') library.
This usually produces a distinctive error message, such as
`ld.so.1: /usr/local/bin/ncea: fatal: libsunmath.so.1: can't
open file: errno=2'.
If you receive an error message like this, ask your system
administrator to diagnose whether the library is truly missing
(3), or whether you
simply need to alter your library search path.
As a final remedy, you can reinstall NCO with all operators statically
linked.
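A minimal sketch of this diagnosis follows. The operator path and the Sun compiler library directory are illustrative; substitute the locations on your own system.

```shell
# List the shared objects the operator needs; unresolved ones print "not found".
# /usr/local/bin/ncea is an illustrative install location.
ldd /usr/local/bin/ncea || true

# If the library exists but is merely off the search path, extend the path:
# /opt/SUNWspro/lib is an assumed location for libsunmath.so.1.
export LD_LIBRARY_PATH=/opt/SUNWspro/lib:${LD_LIBRARY_PATH:-}
```

Re-run the failing operator after extending the path; if `ldd' reports the library as genuinely absent, reinstalling the library (or a statically linked NCO) is required.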
NCO development (beginning with ncks) began with netCDF 2.x in 1994.
netCDF 3.0 was released in 1996, and we were eager to reap the
performance advantages of the newer netCDF implementation.
One netCDF 3.x interface call (nc_inq_libvers) was added to
NCO in January, 1998, to aid in maintenance and debugging.
In March, 2001, the final conversion of NCO to netCDF 3.x
was completed (coincidentally on the same day netCDF 3.5 was released).
NCO versions 2.0 and higher are built with the
-DNO_NETCDF_2
flag to ensure no netCDF 2.x interface calls
are used.
However, the ability to compile NCO with only netCDF 2.x calls
is worth maintaining because HDF version 4
(4)
(available from HDF)
supports only the netCDF 2.x library calls
(see http://hdf.ncsa.uiuc.edu/UG41r3_html/SDS_SD.fm12.html#47784).
Note that there are multiple versions of HDF.
Currently HDF version 4.x supports netCDF 2.x and thus
NCO version 1.2.x.
If NCO version 1.2.x (or earlier) is built with only netCDF
2.x calls then all NCO operators should work with
HDF4 files as well as netCDF files
(5).
The preprocessor token NETCDF2_ONLY
exists
in NCO version 1.2.x to eliminate all netCDF 3.x calls.
Only versions of NCO numbered 1.2.x and earlier have this
capability.
The NCO 1.2.x branch will be maintained with bugfixes only
(no new features) until HDF begins to fully support the
netCDF 3.x interface (which is employed by NCO 2.x).
If, at compilation time, NETCDF2_ONLY
is defined, then
NCO version 1.2.x will not use any netCDF 3.x calls and, if
linked properly, the resulting NCO operators will work with
HDF4 files.
The `Makefile' supplied with NCO 1.2.x has been written
to simplify building in this HDF capability.
When NCO is built with make HDF4=Y, the `Makefile'
will set all required preprocessor flags and library links to build
with the HDF4 libraries (which are assumed to reside under
/usr/local/hdf4; edit the `Makefile' to suit your installation).
HDF version 5.x became available in 1999, but did not support netCDF (or, for that matter, Fortran) as of December 1999. By early 2001, HDF version 5.x did support Fortran90. However, support for netCDF 3.x in HDF 5.x is incomplete. Much of the HDF5-netCDF3 interface is complete, however, and it may be separately downloaded from the HDF5-netCDF website. Now that NCO uses only netCDF 3.x system calls we are eager for HDF5 to complete their netCDF 3.x support.
If you would like NCO to include a new feature, first check to see if that feature is already on the TODO list. If it is, please consider implementing that feature yourself and sending us the patch! If the feature is not yet on the list then send a note to the NCO Discussion forum.
Please read the manual before reporting a bug or posting a request for help. Sending questions whose answers are not in the manual is the best way to motivate us to write more documentation. We would also like to accentuate the contrapositive of this statement. If you think you have found a real bug the most helpful thing you can do is simplify the problem to a manageable size and report it. The first thing to do is to make sure you are running the latest publicly released version of NCO.
Once you have read the manual, if you are still unable to get NCO to perform a documented function, write a help request. Follow the same procedure as described below for reporting bugs (after all, it might be a bug). That is, describe what you are trying to do, and include the complete commands (with `-D 5'), error messages, and version of NCO. Post your help request to the NCO Help forum.
If you think you are using the right command, but NCO is misbehaving, then you might have found a bug. A core dump, segmentation violation, or incorrect numerical answer is always considered a high priority bug. How do you simplify a problem that may be revealing a bug? Cut out extraneous variables, dimensions, and metadata from the offending files and re-run the command until it no longer breaks. Then back up one step and report the problem. Usually the file(s) will be very small, i.e., one variable with one or two small dimensions ought to suffice. Include in the report your run-time environment, the exact error messages (and run the operator with `-D 5' to increase the verbosity of the debugging output), and a copy, or the publicly accessible location, of the file(s). Post the bug report to the NCO Project buglist.
The main design goal has been to produce operators that can be invoked from the command line to perform useful operations on netCDF files. Many scientists work with models and observations which produce too much data to analyze in tabular format. Thus, it is often natural to reduce and massage this raw or primary level data into summary, or second level data, e.g., temporal or spatial averages. These second level data may become the inputs to graphical and statistical packages, and are often more suitable for archival and dissemination to the scientific community. NCO performs a suite of operations useful in manipulating data from the primary to the second level state. Higher level interpretive languages (e.g., IDL, Yorick, Matlab, NCL, Perl, Python), and lower level compiled languages (e.g., C, Fortran) can always perform any task performed by NCO, but often with more overhead. NCO, on the other hand, is limited to a much smaller set of arithmetic and metadata operations than these full blown languages.
Another goal has been to implement enough command line switches so that frequently used sequences of these operators can be executed from a shell script or batch file. Finally, NCO was written to consume the absolute minimum amount of system memory required to perform a given job. The arithmetic operators are extremely efficient; their exact memory usage is detailed in 2.9 Approximate NCO memory requirements.
NCO was developed at NCAR to aid analysis and manipulation of datasets produced by General Circulation Models (GCMs). Datasets produced by GCMs share many features with all gridded scientific datasets and so provide a useful paradigm for the explication of the NCO operator set. Examples in this manual use a GCM paradigm because latitude, longitude, time, temperature and other fields related to our natural environment are as easy to visualize for the layman as the expert.
NCO operators construct their output in a temporary file whose name is
formed by appending .pid<process ID>.<operator name>.tmp to the
user-specified output-file name.
When the operator completes its task with no fatal errors, the temporary
output file is moved to the user-specified output-file.
Note the construction of a temporary output file uses more disk space
than just overwriting existing files "in place" (because there may be
two copies of the same file on disk until the NCO operation successfully
concludes and the temporary output file overwrites the existing
output-file).
Also, note this feature increases the execution time of the operator
by approximately the time it takes to copy the output-file.
Finally, note this feature allows the output-file to be the same
as the input-file without any danger of "overlap".
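The naming and rename steps are easy to see in a pure-shell sketch; the operator name (ncra) and the file contents here are illustrative only.

```shell
out=out.nc
tmp="${out}.pid$$.ncra.tmp"   # e.g. out.nc.pid12345.ncra.tmp
printf 'results' > "$tmp"     # all writing goes to the temporary file
mv "$tmp" "$out"              # the rename happens only after an error-free run
```

Because the rename is the last step, an interrupted or failed run leaves any pre-existing output-file untouched.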
Other safeguards exist to protect the user from inadvertently overwriting data. If the output-file specified for a command is a pre-existing file, then the operator will prompt the user whether to overwrite (erase) the existing output-file, attempt to append to it, or abort the operation. However, in processing large amounts of data, too many interactive questions can be a curse to productivity. Therefore NCO also implements two ways to override its own safety features, the `-O' and `-A' switches. Specifying `-O' tells the operator to overwrite any existing output-file without prompting the user interactively. Specifying `-A' tells the operator to attempt to append to any existing output-file without prompting the user interactively. These switches are useful in batch environments because they suppress interactive keyboard input.
ncks can append variables from one file to another file.
This capability is invoked by naming two files on the command line,
input-file and output-file.
When output-file already exists, the user is prompted whether to
overwrite, append/replace, or exit from the command.
Selecting overwrite tells the operator to erase the existing
output-file and replace it with the results of the operation.
Selecting exit causes the operator to exit--the output-file
will not be touched in this case.
Selecting append/replace causes the operator to attempt to place
the results of the operation in the existing output-file
(see section 4.6 ncks netCDF Kitchen Sink).
Users comfortable with NCO semantics may find it easier to perform
some simple mathematical operations in NCO rather than higher level
languages.
ncdiff (see section 4.2 ncdiff netCDF Differencer) can be used for
subtraction and broadcasting.
ncflint (see section 4.5 ncflint netCDF File Interpolator) can be used
for addition, subtraction, multiplication and interpolation.
Sequences of these commands can accomplish simple but powerful
operations at the command line.
The most frequently used operators of NCO are probably the averagers and
concatenators.
Because there are so many permutations of averaging (e.g., across files,
within a file, over the record dimension, over other dimensions, with or
without weights and masks) and of concatenating (across files, along the
record dimension, along other dimensions), there are currently no fewer
than five operators which tackle these two purposes: ncra,
ncea, ncwa, ncrcat, and ncecat.
These operators do share many capabilities (9), but each has its unique specialty.
Two of these operators, ncrcat and ncecat, are for
concatenating hyperslabs across files.
The other two operators, ncra and ncea, are for averaging
hyperslabs across files (10).
First, let's describe the concatenators, then the averagers.
2.6.1 Concatenators ncrcat and ncecat
2.6.2 Averagers ncea, ncra, and ncwa
2.6.3 Interpolator ncflint
2.6.1 Concatenators ncrcat and ncecat
Joining independent files together along a record coordinate is called
concatenation.
ncrcat
is designed for concatenating record variables, while
ncecat
is designed for concatenating fixed length variables.
Consider 5 files, `85.nc', `86.nc', ...
`89.nc' each containing a year's worth of data.
Say you wish to create from them a single file, `8589.nc'
containing all the data, i.e., spanning all 5 years.
If the annual files make use of the same record variable, then
ncrcat will do the job nicely with, e.g., ncrcat 8?.nc 8589.nc.
The number of records in the input files is arbitrary and can vary from
file to file.
See section 4.8 ncrcat netCDF Record Concatenator, for a complete
description of ncrcat.
However, suppose the annual files have no record variable, and thus
their data is all fixed length.
For example, the files may not be conceptually sequential, but rather
members of the same group, or ensemble.
Members of an ensemble may have no reason to contain a record dimension.
ncecat
will create a new record dimension (named record by
default) with which to glue together the individual files into the single
ensemble file.
If ncecat
is used on files which contain an existing record
dimension, that record dimension will be converted into a fixed length
dimension of the same name and a new record dimension will be created.
Consider five realizations, `85a.nc', `85b.nc', ...
`85e.nc' of 1985 predictions from the same climate model.
Then ncecat 85?.nc 85_ens.nc
glues the individual realizations
together into the single file, `85_ens.nc'.
If an input variable was dimensioned [lat,lon], it will have
dimensions [record,lat,lon] in the output file.
A restriction of ncecat
is that the hyperslabs of the processed
variables must be the same from file to file.
Normally this means all the input files are the same size, and contain
data on different realizations of the same variables.
See section 4.4 ncecat netCDF Ensemble Concatenator, for a complete
description of ncecat.
Note that ncrcat
cannot concatenate fixed-length variables,
whereas ncecat
can concatenate both fixed-length and record
variables.
To conserve system memory, use ncrcat
rather than
ncecat
when concatenating record variables.
2.6.2 Averagers ncea, ncra, and ncwa
The differences between the averagers ncra
and ncea
are
analogous to the differences between the concatenators.
ncra
is designed for averaging record variables from at least one
file, while ncea
is designed for averaging fixed length variables
from multiple files.
ncra
performs a simple arithmetic average over the record
dimension of all the input files, with each record having an equal
weight in the average.
ncea
performs a simple arithmetic average of all the input files,
with each file having an equal weight in the average.
Note that ncra
cannot average fixed-length variables,
but ncea
can average both fixed-length and record variables.
To conserve system memory, use ncra
rather than
ncea
where possible (e.g., if each input-file is one record
long).
The file output from ncea
will have the same dimensions (meaning
dimension names as well as sizes) as the input hyperslabs
(see section 4.3 ncea netCDF Ensemble Averager, for a complete
description of ncea).
The file output from ncra
will have the same dimensions as
the input hyperslabs except for the record dimension, which will have a
size of 1 (see section 4.7 ncra netCDF Record Averager, for a
complete description of ncra).
2.6.3 Interpolator ncflint
ncflint can interpolate data between two files.
Since no other operators have this ability, the description of
interpolation is given fully on the ncflint reference page
(see section 4.5 ncflint netCDF File Interpolator).
Note that this capability also allows ncflint
to linearly rescale
any data in a netCDF file, e.g., to convert between differing units.
Occasionally one desires to digest (i.e., concatenate or average)
hundreds or thousands of input files.
One brave user, for example, recently created a five year time-series of
satellite observations by using ncecat
to join thousands of daily
data files together.
Unfortunately, data archives (e.g., NASA EOSDIS) are unlikely to
distribute netCDF files conveniently named in a format the `-n
loop' switch (which automatically generates arbitrary numbers of
input filenames) understands.
If there is not a simple, arithmetic pattern to the input filenames
(e.g., `h00001.nc', `h00002.nc', ... `h90210.nc')
then the `-n loop' switch is useless.
Moreover, when the input files are so numerous that the input filenames
are too lengthy (when strung together as a single argument) to be passed
by the calling shell to the NCO operator
(11), then the following strategy
has proven useful to specify the input filenames to NCO.
Write a script that creates symbolic links between the irregular input
filenames and a set of regular, arithmetic filenames that the `-n
loop' switch understands.
The NCO operator will then succeed at automatically generating the
filenames with the `-n loop' option (which circumvents any
OS and shell limits on command line size).
You can remove the symbolic links once the operator completes its task.
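Such a script can be very short. In the sketch below the irregular input names are invented for illustration; the loop links them to a regular series that, e.g., `-n 3,5,1' could generate from the template `in00001.nc'.

```shell
# Work in a scratch directory with three irregularly named inputs (invented names).
mkdir -p lnk_tmp && cd lnk_tmp
touch sat_19950103a.nc sat_19950219x.nc sat_19950307q.nc

# Link each irregular name to in00001.nc, in00002.nc, ...
idx=1
for fl in sat_*.nc; do
  ln -sf "$fl" "$(printf 'in%05d.nc' "$idx")"
  idx=$((idx + 1))
done
ls in*.nc
# Afterwards, e.g.: ncrcat -n 3,5,1 in00001.nc out.nc   (requires NCO)
# Remove the links when done: rm in*.nc
```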
Large files are those files that are comparable in size to the
amount of memory (RAM) in your computer.
Many users of NCO work with files larger than 100 Mb.
Files this large not only push the current edge of storage technology,
they present special problems for programs which attempt to access the
entire file at once, such as ncea and ncecat.
If you need to work with a 300 Mb file on a machine with only 32 Mb of
memory then you will need large amounts of swap space (virtual
memory on disk) and NCO will work slowly, or else
NCO will fail.
There is no easy solution for this and the best strategy is to work on a
machine with massive amounts of memory and swap space.
That is, if your local machine has problems working with large files,
try running NCO from a more powerful machine, such as a
network server.
Certain machine architectures, e.g., Cray UNICOS, have special
commands which allow one to increase the amount of interactive memory.
If you get a core dump on a Cray system (e.g., `Error exit (core
dumped)'), try increasing the available memory by using the
ilimit
command.
The speed of the NCO operators also depends on file size.
When processing large files the operators may appear to hang, or do
nothing, for long periods of time.
In order to see what the operator is actually doing, it is useful to
activate a more verbose output mode.
This is accomplished by supplying a number greater than 0 to the
`-D debug_level' switch.
When the debug_level is nonzero, the operators report their
current status to the terminal through the stderr facility.
Using `-D' does not slow the operators down.
Choose a debug_level between 1 and 3 for most situations, e.g.,
ncea -D 2 85.nc 86.nc 8586.nc.
A full description of how to estimate the actual amount of memory the
multi-file NCO operators consume is given in 2.9 Approximate NCO memory requirements.
The multi-file operators currently comprise the record operators,
ncra and ncrcat, and the ensemble operators, ncea and ncecat.
The record operators require much less memory than the ensemble
operators.
This is because the record operators are designed to operate on a single
record of a file at a time, while the ensemble operators must retrieve
an entire variable at a time into memory.
Let MS be the peak sustained memory demand of an operator,
FT be the memory required to store the entire contents of all the
variables to be processed in an input file,
FR be the memory required to store the entire contents of a
single record of each of the variables to be processed in an input file,
VR be the memory required to store a single record of the
largest record variable to be processed in an input file,
VT be the memory required to store the largest variable
to be processed in an input file,
VI be the memory required to store the largest variable
which is not processed, but is copied from the initial file to the
output file.
All operators require MI = VI during the initial copying of
variables from the first input file to the output file.
This is the initial (and transient) memory demand.
The sustained memory demand is that memory required by the
operators during the processing (i.e., averaging, concatenation)
phase which lasts until all the input files have been processed.
The operators have the following memory requirements:
ncrcat requires MS <= VR.
ncecat requires MS <= VT.
ncra requires MS = 2FR + VR.
ncea requires MS = 2FT + VT.
ncdiff requires MS <= 2VT.
ncflint requires MS <= 2VT.
Note that only variables which are processed, i.e., averaged or
concatenated, contribute to MS.
Memory is never allocated to hold variables which do not appear in the
output file (see section 3.4 Including/Excluding specific variables).
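As a worked example of these formulas, the sizes below are assumed purely for illustration (they do not come from any real file):

```shell
# Assumed sizes, in MB, for the processed variables of a hypothetical input file:
FR=10    # one record of every processed variable
VR=4     # one record of the largest record variable
FT=100   # entire contents of every processed variable
VT=40    # entire contents of the largest processed variable

echo "ncra sustained memory: $((2 * FR + VR)) MB"   # MS = 2FR + VR
echo "ncea sustained memory: $((2 * FT + VT)) MB"   # MS = 2FT + VT
```

With these numbers ncra sustains 24 MB while ncea sustains 240 MB, illustrating why the record operators are preferred for digesting large files.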
ncvarget and ncvarput operations.
Hyperslabs too large to hold in core memory will suffer substantial
performance penalties because of this.
ncks when printing variables to screen.
Many features have been implemented in more than one operator and are described here for brevity. The description of each feature is preceded by a box listing the operators for which the feature is implemented. Command line switches for a given feature are consistent across all operators wherever possible. If no "key switches" are listed for a feature, then that particular feature is automatic and cannot be controlled by the user.
Consider the problem of using ncra to average five input files,
`85.nc', `86.nc', ... `89.nc', and storing the results in `8589.nc'.
Here are the four methods in order.
They produce identical answers.
ncra 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra 8[56789].nc 8589.nc
ncra -p input-path 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra -n 5,2,1 85.nc 8589.nc
The second method relies on the shell to expand the wildcard
expression 8[56789].nc.
The shell passes valid filenames which match the expansion to ncra.
The third method uses the `-p input-path' argument to specify
the directory where all the input files reside.
NCO prepends input-path (e.g., `/data/usrname/model') to all
input-files (but not to output-file).
Thus, using `-p', the path to any number of input files need only
be specified once.
Note input-path need not end with `/'; the `/' is
automatically generated if necessary.
The last method passes (with `-n') syntax concisely describing
the entire set of filenames
(12).
This option is only available with the multi-file operators:
ncra, ncrcat, ncea, and ncecat.
By definition, multi-file operators are able to process an arbitrary
number of input-files.
This option is very useful for abbreviating lists of filenames
representable as
alphanumeric_prefix+numeric_suffix+`.'+filetype
where alphanumeric_prefix is a string of arbitrary length and
composition, numeric_suffix is a fixed width field of digits, and
filetype is a standard filetype indicator.
For example, in the file `ccm3_h0001.nc', we have
alphanumeric_prefix = `ccm3_h', numeric_suffix =
`0001', and filetype = `nc'.
NCO is able to decode lists of such filenames encoded using the
`-n' option.
The simpler (3-argument) `-n' usage takes the form
-n file_number,digit_number,numeric_increment
where file_number is the number of files, digit_number is
the fixed number of numeric digits comprising the numeric_suffix,
and numeric_increment is the constant, integer-valued difference
between the numeric_suffix of any two consecutive files.
The value of alphanumeric_prefix is taken from the input file,
which serves as a template for decoding the filenames.
In the example above, the encoding -n 5,2,1
along with the input
file name `85.nc' tells NCO to
construct five (5) filenames identical to the template `85.nc'
except that the final two (2) digits are a numeric suffix to be
incremented by one (1) for each successive file.
Currently filetype may be empty, `nc', `cdf', `hdf', or `hd5'.
If present, these filetype suffixes (and the preceding `.')
are ignored by NCO as it uses the `-n' arguments to locate,
evaluate, and compute the numeric_suffix component of filenames.
Recently the `-n' option has been extended to allow convenient
specification of filenames with "circular" characteristics.
This means it is now possible for NCO to automatically generate
filenames which increment regularly until a specified maximum value, and
then wrap back to begin again at a specified minimum value.
The corresponding `-n' usage becomes more complex, taking one or
two additional arguments for a total of four or five, respectively:
-n file_number,digit_number,numeric_increment[,numeric_max[,numeric_min]]
where numeric_max, if present, is the maximum integer-value of
numeric_suffix and numeric_min, if present, is the minimum
integer-value of numeric_suffix.
Consider, for example, the problem of specifying non-consecutive input
files where the filename suffixes end with the month index.
In climate modeling it is common to create summertime and wintertime
averages which contain the averages of the months June--July--August,
and December--January--February, respectively:
ncra -n 3,2,1 85_06.nc 85_0608.nc
ncra -n 3,2,1,12 85_12.nc 85_1202.nc
ncra -n 3,2,1,12,1 85_12.nc 85_1202.nc
The first example specifies three consecutive input months (06, 07, 08) which do not "wrap" back to a minimum value.
The second example shows how to use the optional fourth and fifth
elements of the `-n' option to specify a wrap value to NCO.
The fourth argument to `-n', if present, specifies the maximum
integer value of numeric_suffix.
In this case the maximum value is 12, and will be formatted as `12'
in the filename string.
The fifth argument to `-n', if present, specifies the minimum
integer value of numeric_suffix.
The default minimum filename suffix is 1, which is formatted as
`01' in this case.
Thus the second and third examples have the same effect, that is, they
automatically generate, in order, the filenames `85_12.nc',
`85_01.nc', and `85_02.nc' as input to NCO.
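The wrapping behavior can be sketched in shell. This is illustrative only, not NCO's internal code; it mimics the third example, `-n 3,2,1,12,1' with template `85_12.nc':

```shell
# Emulate NCO's circular suffix generation: start at 12, increment
# by 1, and wrap past the maximum (12) back to the minimum (1).
sfx=12; max=12; min=1
for i in 1 2 3; do
  printf '85_%02d.nc\n' "$sfx"
  sfx=$((sfx + 1))
  if [ "$sfx" -gt "$max" ]; then sfx=$min; fi
done
# prints 85_12.nc, 85_01.nc, 85_02.nc
```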
rcp or scp privileges, or NCAR's Mass Storage System (MSS).
To access a file via an anonymous FTP server, supply the remote file's
URL.
To access a file using rcp or scp, specify the Internet address of the remote file.
Of course in this case you must have rcp or scp privileges which allow transparent (no password entry required) access to the remote machine.
This means that `~/.rhosts' or `~/ssh/authorized_keys' must be set accordingly on both local and remote machines.
To access a file on NCAR's MSS, specify the full MSS pathname of the
remote file.
NCO will attempt to detect whether the local machine has direct (synchronous) MSS access.
In this case, NCO attempts to use the NCAR msrcp command (13), or, failing that, /usr/local/bin/msread.
Otherwise NCO attempts to retrieve the MSS file through the (asynchronous) Masnet Interface Gateway System (MIGS) using the nrnet command.
The following examples show how one might analyze files stored on remote systems.
ncks -H -l ./ ftp://ftp.cgd.ucar.edu/pub/zender/nco/in.nc
ncks -H -l ./ dust.ps.uci.edu:/home/zender/nco/in.nc
ncks -H -l ./ /ZENDER/nco/in.nc
ncks -H -l ./ mss:/ZENDER/nco/in.nc
ncks -H -l ./ -p http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/ncep.reanalysis.dailyavgs/surface air.sig995.1975.nc
The second example will work if you have rcp or scp access to the machine dust.ps.uci.edu.
The third example will work from NCAR computers with local access to the msrcp, msread, or nrnet commands.
The fourth command will work if your local version of NCO was built with DODS capability (see section 3.2.1 DODS).
The above commands can be rewritten using the `-p input-path'
option as follows:
ncks -H -p ftp://ftp.cgd.ucar.edu/pub/zender/nc -l ./ in.nc
ncks -H -p dust.ps.uci.edu:/home/zender/nc -l ./ in.nc
ncks -H -p /ZENDER/nco -l ./ in.nc
ncks -H -p mss:/ZENDER/nco -l ./ in.nc
3.2.1 DODS
make DODS=Y adds the (non-intuitive) commands to link to the DODS libraries installed in the $DODS_ROOT directory.
You will probably need to visit the
DODS Homepage
to learn which libraries to obtain and link to for the
DODS-enabled NCO executables.
Once NCO is DODS-enabled the operators are DODS clients.
All DODS clients have network-transparent access to any files controlled by a DODS server.
Simply specify the path to the file in URL notation:
ncks -C -d lon,0 -v lon -l ./ -p http://www.cdc.noaa.gov/cgi-bin/nph-nc/Datasets/ncep.reanalysis.dailyavgs/surface air.sig995.1975.nc foo.nc
Note that the remote retrieval features of NCO can be used to retrieve any file, including non-netCDF files, via SSH, anonymous FTP, or msrcp.
Often this method is quicker than using a browser, or running an FTP
session from a shell window yourself.
For example, say you want to obtain a JPEG file from a weather server.
ncks -p ftp://weather.edu/pub/pix/jpeg -l ./ storm.jpg
ncks automatically performs an anonymous FTP login to the remote machine and retrieves the specified file.
When ncks attempts to read the local copy of `storm.jpg' as a netCDF file, it fails and exits, leaving `storm.jpg' in the current directory.
ncdiff, ncea, ncecat, ncflint, ncks, ncra, ncrcat, ncwa
ncdiff, ncea, ncecat, ncflint, ncks, ncra, ncrcat, ncwa
lat always carry the values of lat with them into the output-file.
This feature can be disabled with `-C', which causes NCO not to automatically add coordinates to the variables appearing in the output-file.
However, using `-C' does not preclude the user from including some coordinates in the output files simply by selecting them explicitly with the `-v' option.
The `-c' option, on the other hand, is shorthand for automatically specifying that all coordinate variables in the input-files should appear in the output-file.
Thus `-c' allows the user to select all the coordinate variables without having to know their names.
ncdiff, ncea, ncecat, ncflint, ncks, ncra, ncrcat, ncwa
time.
The following hyperslab operations produce identical results, a June-July-August average of the data:
ncra -d time,5,7 85.nc 85_JJA.nc
ncra -F -d time,6,8 85.nc 85_JJA.nc
Printing variable three_dmn_var in file `in.nc' first with C indexing conventions, then with Fortran indexing conventions results in the following output formats:
% ncks -H -v three_dmn_var in.nc
lat[0]=-90 lev[0]=1000 lon[0]=-180 three_dmn_var[0]=0
...
% ncks -F -H -v three_dmn_var in.nc
lon(1)=-180 lev(1)=1000 lat(1)=-90 three_dmn_var(1)=0
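The index shift between the two conventions can be sketched in shell (illustrative only, not NCO code):

```shell
# Fortran (1-based, `-F') month indices 6,7,8 correspond to
# C (0-based, default) record indices 5,6,7: subtract one.
for f in 6 7 8; do printf '%d ' $((f - 1)); done; echo
# prints: 5 6 7
```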
ncdiff, ncea, ncecat, ncflint, ncks, ncra, ncrcat, ncwa
Hyperslabs are specified with the `-d dim,[min][,[max]]' option.
The bounds of the hyperslab to be extracted are specified by the
associated min and max values.
A half-open range is specified by omitting either the min or
max parameter but including the separating comma.
The unspecified limit is interpreted as the maximum or minimum value in
the unspecified direction.
A cross-section at a specific coordinate is extracted by specifying only
the min limit and omitting a trailing comma.
Dimensions not mentioned are passed with no reduction in range.
The dimensionality of variables is not reduced (in the case of a
cross-section, the size of the constant dimension will be one).
If values of a coordinate-variable are used to specify a range or
cross-section, then the coordinate variable must be monotonic (values
either increasing or decreasing).
In this case, command-line values need not exactly match coordinate
values for the specified dimension.
Ranges are determined by seeking the first coordinate value to occur in
the closed range [min,max] and including all subsequent values until one
falls outside the range.
The coordinate value for a cross-section is the coordinate-variable value closest to the specified value and must lie within the range of coordinate-variable values.
Coordinate values should be specified using real notation with a decimal point required in the value, whereas dimension indices are specified using integer notation without a decimal point. Note that this convention is only to differentiate coordinate values from dimension indices, and is independent of the actual type of netCDF coordinate variables, if any. For a given dimension, the specified limits must both be coordinate values (with decimal points) or dimension indices (no decimal points).
User-specified coordinate limits are promoted to double precision values
while searching for the indices which bracket the range.
Thus, hyperslabs on coordinates of type NC_BYTE and NC_CHAR are computed numerically rather than lexically, so the results are unpredictable.
The relative magnitude of min and max indicates to the operator whether to expect a wrapped coordinate (see section 3.8 Wrapped coordinates), such as longitude.
If min > max, NCO expects the coordinate to be wrapped, and a warning message will be printed.
When this occurs, NCO selects all values outside the domain
[max < min], i.e., all the values exclusive of the
values which would have been selected if min and max were
swapped.
If this seems confusing, test your command on just the coordinate variables with ncks, and then examine the output to ensure NCO selected the hyperslab you expected (coordinate wrapping is only supported by ncks).
Because of the way wrapped coordinates are interpreted, it is very
important to make sure you always specify hyperslabs in the
monotonically increasing sense, i.e., min < max
(even if the underlying coordinate variable is monotonically
decreasing).
The only exception to this is when you are indeed specifying a wrapped
coordinate.
The distinction is crucial to understand because the points selected by, e.g., -d longitude,50.,340., are exactly the complement of the points selected by -d longitude,340.,50..
Not specifying any hyperslab option is equivalent to specifying full
ranges of all dimensions.
This option may be specified more than once in a single command (each hyperslabbed dimension requires its own -d option).
ncks
ncks must invoke special software routines to assemble the desired output hyperslab from multiple reads of the input-file.
Assume the domain of the monotonically increasing longitude coordinate lon is 0 < lon < 360.
ncks will extract a hyperslab which crosses the Greenwich meridian simply by specifying the westernmost longitude as min and the easternmost longitude as max.
Thus, the following commands extract a hyperslab containing the Saharan desert:
Thus, the following commands extract a hyperslab containing the Saharan desert:
ncks -d lon,340.,50. in.nc out.nc
ncks -d lon,340.,50. -d lat,10.,35. in.nc out.nc
Note that the coordinate lon in the output-file, `out.nc', will no longer be monotonic!
The values of lon will be, e.g., `340, 350, 0, 10, 20, 30, 40, 50'.
This can have serious implications should you run `out.nc' through another operation which expects the lon coordinate to be monotonically increasing.
Fortunately, the chances of this happening are slim: since lon has already been hyperslabbed, there should be no reason to hyperslab lon again.
Should you need to hyperslab lon again, be sure to give dimensional indices as the hyperslab arguments, rather than coordinate values (see section 3.7 Hyperslabs).
ncks, ncra, ncrcat
ncks offers support for specifying a stride for any hyperslab, while ncra and ncrcat support the stride argument only for the record dimension.
The stride is the spacing between consecutive points in a
hyperslab.
A stride of 1 means pick all the elements of the hyperslab, but a
stride of 2 means skip every other element, etc.
Using the stride option with ncra and ncrcat makes it possible, for instance, to average or concatenate regular intervals across multi-file input data sets.
The stride is specified as the optional fourth argument to the `-d' hyperslab specification: -d dim,[min][,[max]][,[stride]].
Specify stride as an integer (i.e., no decimal point) following
the third comma in the `-d' argument.
There is no default value for stride.
Thus using `-d time,,,2' is valid but `-d time,,,2.0' and
`-d time,,,' are not.
When stride is specified but min is not, there is an
ambiguity as to whether the extracted hyperslab should begin with (using
C-style, 0-based indexes) element 0 or element `stride-1'.
NCO must resolve this ambiguity and it chooses element 0 as the first
element of the hyperslab when min is not specified.
Thus `-d time,,,stride' is syntactically equivalent to
`-d time,0,,stride'.
This means, for example, that specifying the operation `-d
time,,,2' on the array `1,2,3,4,5' selects the hyperslab `1,3,5'.
To obtain the hyperslab `2,4' instead, simply explicitly specify
the starting index as 1, i.e., `-d time,1,,2'.
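The start-and-stride selection rule can be sketched in shell. This is illustrative only, not NCO's implementation; the `pick' function is a hypothetical helper:

```shell
# pick START STRIDE VALUES... : emulate NCO's rule that a hyperslab
# with stride keeps elements START, START+STRIDE, START+2*STRIDE, ...
pick() {
  start=$1; stride=$2; shift 2
  i=0
  for v in "$@"; do
    if [ "$i" -ge "$start" ] && [ $(( (i - start) % stride )) -eq 0 ]; then
      printf '%s ' "$v"
    fi
    i=$((i + 1))
  done
  echo
}
pick 0 2 1 2 3 4 5   # like -d time,,,2  -> 1 3 5
pick 1 2 1 2 3 4 5   # like -d time,1,,2 -> 2 4
```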
For example, consider a file `8501_8912.nc' which contains 60
consecutive months of data.
Say you wish to obtain just the March data from this file.
Using 0-based subscripts (see section 3.6 C & Fortran index conventions) these
data are stored in records 2, 14, ... 50 so the desired stride
is 12.
Without the stride option, the procedure is very awkward.
One could use ncks five times and then use ncrcat to concatenate the resulting files together:
foreach idx (02 14 26 38 50)
  ncks -d time,$idx 8501_8912.nc foo.$idx
end
ncrcat foo.?? 8589_03.nc
rm foo.??
With the stride option, ncks performs this hyperslab extraction in one operation:
ncks -d time,2,,12 8501_8912.nc 8589_03.nc
See section 4.6 ncks netCDF Kitchen Sink, for more information on ncks.
The stride option is supported by ncra and ncrcat for the record dimension only.
This makes it possible, for instance, to average or concatenate regular
intervals across multi-file input data sets.
ncra -F -d time,3,,12 85.nc 86.nc 87.nc 88.nc 89.nc 8589_03.nc
ncrcat -F -d time,3,,12 85.nc 86.nc 87.nc 88.nc 89.nc 8503_8903.nc
ncdiff, ncea, ncflint, ncra, ncwa
The phrase missing data refers to data points that are missing, invalid, or for any reason not intended to be arithmetically processed in the same fashion as valid data. The NCO arithmetic operators attempt to handle missing data in an intelligent fashion. There are four steps in the NCO treatment of missing data:
NCO follows the convention that missing data should be stored with the missing_value specified in the variable's missing_value attribute (14).
The only way NCO recognizes that a variable may contain
missing data is if the variable has a missing_value
attribute.
In this case, any elements of the variable which are numerically equal
to the missing_value are treated as missing data.
Consider a variable var of type var_type with a
missing_value
attribute of type att_type containing the
value missing_value.
As a guideline, the type of the missing_value
attribute should be
the same as the type of the variable it is attached to.
If var_type equals att_type then NCO straightforwardly
compares each value of var to missing_value to determine
which elements of var are to be treated as missing data.
If not, then NCO will internally convert att_type to var_type by using the implicit conversion rules of C, or, if att_type is NC_CHAR (15), by typecasting the results of the C function strtod(missing_value).
You may use the NCO operator ncatted to change the missing_value attribute and all data whose value is missing_value to a new value (see section 4.1 ncatted netCDF Attribute Editor).
When an NCO arithmetic operator is processing a variable var with a missing_value attribute, it compares each value of var to missing_value before performing an operation.
Note that the missing_value comparison inflicts a performance penalty on the operator.
Arithmetic processing of variables which contain the missing_value attribute always incurs this penalty, even when none of the data is missing.
Conversely, arithmetic processing of variables which do not contain the missing_value attribute never incurs this penalty.
In other words, do not attach a missing_value attribute to a variable which does not contain missing data.
This exhortation can usually be obeyed for model generated data, but it
may be harder to know in advance whether all observational data will be
valid or not.
NCO averagers (ncra, ncea, ncwa) do not count any element with the value missing_value towards the average.
ncdiff and ncflint define a missing_value result when either of the input values is a missing_value.
Sometimes the missing_value may change from file to file in a multi-file operator, e.g., ncra.
NCO is written to account for this (it always compares a variable to the
missing_value assigned to that variable in the current file).
Suffice it to say that, in all known cases, NCO does "the right thing".
ncra, ncea, ncwa
The following operation types may be specified with the `-y' switch: avg (mean value), sqravg (square of the mean), avgsqr (mean of the squares), max (maximum value), min (minimum value), rms (root-mean-square), rmssdn (root-mean-square normalized by N-1), sqrt (square root of the mean), and ttl (sum of the values).
See section ncwa netCDF Weighted Averager, for additional information on masks and normalization.
Averaging is the default, and will be described first so the
terminology for the other operations will be familiar.
Note for HTML and Info users: The definition of mathematical operations involving rank reduction (e.g., averaging) relies heavily on mathematical expressions which cannot be easily represented in HTML or Info. See the printed manual for complete documentation.
The definitions of some of these operations are not universally useful.
Mostly they were chosen to facilitate standard statistical computations within the NCO framework.
We are open to redefining and/or adding to the above.
If you are interested in having other statistical quantities defined in NCO please contact the NCO project (see section 1.5 Help and Bug reports).
EXAMPLES
Suppose you wish to examine the variable prs_sfc(time,lat,lon)
which contains a time series of the surface pressure as a function of
latitude and longitude.
Find the minimum value of prs_sfc over all dimensions:
ncwa -y min -v prs_sfc in.nc foo.nc
Find the maximum of prs_sfc over all longitudes at each time interval for each latitude:
ncwa -y max -v prs_sfc -a lon in.nc foo.nc
Find the root-mean-square of the time series of prs_sfc at every gridpoint:
ncra -y rms -v prs_sfc in.nc foo.nc
ncwa -y rms -v prs_sfc -a time in.nc foo.nc
ncra is preferred because it has a smaller memory footprint.
Also, ncra leaves the (degenerate) time dimension in the output file (which is usually useful) whereas ncwa removes the time dimension.
These operations work as expected in multi-file operators.
Suppose that prs_sfc is stored in multiple timesteps per file across multiple files, say `jan.nc', `feb.nc', `march.nc'.
We can now find the three-month maximum surface pressure at every point.
ncea -y max -v prs_sfc jan.nc feb.nc march.nc out.nc
It is possible to use a combination of these operations to compute the variance and standard deviation of a field stored in a single file or across multiple files. The procedure to compute the temporal standard deviation of the surface pressure at all points in a single file `in.nc' involves three steps.
ncwa -O -v prs_sfc -a time in.nc out.nc
ncdiff -O -v prs_sfc in.nc out.nc out.nc
ncra -O -y rmssdn out.nc out.nc
First, `out.nc' is created containing the time mean of prs_sfc.
Next `out.nc' is overwritten with the deviation from the mean.
Finally `out.nc' is overwritten with the root-mean-square of
itself.
Note the use of `-y rmssdn' (rather than `-y rms') in the
final step.
This ensures the standard deviation is correctly normalized by one fewer
than the number of time samples.
The procedure to compute the variance is identical except for the use of
`-y var' instead of `-y rmssdn' in the final step.
The procedure to compute the spatial standard deviation of a field in a single file `in.nc' involves three steps.
ncwa -O -v prs_sfc,gw -a lat,lon -w gw in.nc out.nc
ncdiff -O -v prs_sfc,gw in.nc out.nc out.nc
ncwa -O -y rmssdn -v prs_sfc -a lat,lon -w gw out.nc out.nc
The procedure to compute the standard deviation of a time-series across multiple files involves one extra step since all the input must first be collected into one file.
ncrcat -O -v tpt in.nc in.nc foo1.nc
ncwa -O -a time foo1.nc foo2.nc
ncdiff -O -v tpt foo1.nc foo2.nc foo2.nc
ncra -O -y rmssdn foo2.nc out.nc
ncdiff operation in the fourth step.
ncea, ncra, ncwa
Type conversion refers to the casting of data from one storage type to another, e.g., from NC_SHORT (2 bytes) to NC_DOUBLE (8 bytes).
As a general rule, type conversions should be avoided for at least two
reasons.
First, type conversions are expensive since they require creating
(temporary) buffers and casting each element of a variable from
the type it was stored at to some other type.
Second, the dataset's creator probably had a good reason for storing data as, say, NC_FLOAT rather than NC_DOUBLE.
In a scientific framework there is no reason to store data with more precision than that of the observations.
Thus NCO tries to avoid performing type conversions when performing
arithmetic.
Type conversion during arithmetic in the languages C and Fortran is
performed only when necessary.
All operands in an operation are converted to the most precise type
before the operation takes place.
However, following this parsimonious conversion rule dogmatically
results in numerous headaches.
For example, the average of the two NC_SHORTs 17000s and 17000s results in garbage since the intermediate value which holds their sum is also of type NC_SHORT and thus cannot represent values greater than 32,767 (16).
There are valid reasons for expecting this operation to succeed and
the NCO philosophy is to make operators do what you want, not what is
most pure.
Thus, unlike C and Fortran, but like many other higher-level interpreted languages, NCO arithmetic operators will perform automatic type conversion when all the following conditions are met (17):
1. The operator is ncea, ncra, or ncwa.
ncdiff is not included because subtraction does not benefit from type conversion.
2. The variable is of type NC_BYTE, NC_CHAR, NC_SHORT, or NC_INT.
Type NC_DOUBLE is not type converted because there is no type of higher precision to convert to.
Type NC_FLOAT is not type converted because, in our judgement, the performance penalty of always doing so would outweigh the (extremely rare) potential benefits.
When these criteria are all met, the operator converts the variable in question to type NC_DOUBLE, performs all the arithmetic operations, casts the NC_DOUBLE type back to the original type, and finally writes the result to disk.
The result written to disk may not be what you expect, because of
incommensurate ranges represented by different types, and because of
(lack of) rounding.
First, continuing the example given above, the average (e.g., `-y avg') of 17000s and 17000s is written to disk as 17000s.
The type conversion feature of NCO makes this possible since the arithmetic and intermediate values are stored as NC_DOUBLEs, i.e., 34000.0d, and only the final result must be represented as an NC_SHORT.
Without the type conversion feature of NCO, the average would have been garbage (albeit predictable garbage near -15768s).
Similarly, the total (e.g., `-y ttl') of 17000s and 17000s written to disk is garbage (actually -31536s) since the final result (the true total) of 34000 is outside the range of type NC_SHORT.
Type conversions use the floor function to convert floating-point numbers to integers.
Type conversions do not attempt to round floating-point numbers to the nearest integer.
Thus the average of 1s and 2s is computed in double precision arithmetic as (1.0d + 2.0d)/2 = 1.5d.
This result is converted to NC_SHORT and stored on disk as floor(1.5d) = 1s (18).
Thus no "rounding up" is performed.
The type conversion rules of C can be stated as follows:
If n is an integer then any floating point value x satisfying n <= x < n+1 will have the value n when converted to an integer.
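This rule can be sketched with awk. Note this is only an illustration: awk's int() truncates toward zero, which coincides with floor for the non-negative values used here.

```shell
# Compute the average of 1 and 2 in floating point, then show the
# truncating integer conversion that NCO's write-back performs:
# 1.5 is stored as 1, never rounded up to 2.
awk 'BEGIN { x = (1.0 + 2.0) / 2; print x, int(x) }'
# prints: 1.5 1
```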
All NCO operators automatically append a history global attribute to any file they modify or create.
The history attribute consists of a timestamp and the full string of the invocation command to the operator, e.g., `Mon May 26 20:10:24 1997: ncks in.nc foo.nc'.
The full contents of an existing history attribute are copied from the first input-file to the output-file.
The timestamps appear in reverse chronological order, with the most recent timestamp appearing first in the history attribute.
Since NCO and many other netCDF operators adhere to the history convention, the entire data processing path of a given netCDF file may often be deduced from examination of its history attribute.
To avoid information overkill, all operators have an optional switch (`-h') to override automatically appending the history attribute (see section 4.1 ncatted netCDF Attribute Editor).
ncdiff, ncea, ncecat, ncflint, ncra, ncwa
Currently, NCO determines whether a datafile is a CSM output datafile simply by checking whether the value of the global attribute convention (if it exists) equals `NCAR-CSM'.
Should convention equal `NCAR-CSM' in the (first) input-file, NCO will attempt to treat certain variables specially, because of their meaning in CSM files.
NCO will not average the following variables often found in CSM files: ntrm, ntrn, ntrk, ndbase, nsbase, nbdate, nbsec, mdt, mhisf.
These variables contain scalar metadata such as the resolution of the
host CSM model and it makes no sense to change their values.
Furthermore, the ncdiff operator will not attempt to difference the following variables: gw, ORO, date, datesec, hyam, hybm, hyai, hybi.
These variables represent the Gaussian weights, the orography field,
time fields, and hybrid pressure coefficients.
These are fields which you want to remain unaltered in the differenced
file 99% of the time.
If you decide you would like any of the above CSM fields processed, you must use ncrename to rename them first.
ncrcat
ncrcat has been programmed to recognize ARM (Atmospheric Radiation Measurement Program) data files.
If you do not work with ARM data then you may skip this section.
ARM data files store time information in two variables, a scalar, base_time, and a record variable, time_offset.
Subtle but serious problems can arise when these types of files are blindly concatenated.
Therefore ncrcat has been specially programmed to be able to chain together consecutive ARM input-files and produce an output-file which contains the correct time information.
Currently, ncrcat determines whether a datafile is an ARM datafile simply by testing for the existence of the variables base_time, time_offset, and the dimension time.
If these are found in the input-file then ncrcat will automatically perform two non-standard, but hopefully useful, procedures.
First, ncrcat will ensure that values of time_offset appearing in the output-file are relative to the base_time appearing in the first input-file (and presumably, though not necessarily, also appearing in the output-file).
Second, if a coordinate variable named time is not found in the input-files, then ncrcat automatically creates the time coordinate in the output-file.
The values of time are defined by the ARM convention time = base_time + time_offset.
Thus, if output-file contains the time_offset variable, it will also contain the time coordinate.
A short message is added to the history global attribute whenever these ARM-specific procedures are executed.
Executing an operator with the `-r' switch, e.g., `ncks -r', will produce something like `NCO netCDF Operators version 1.2 Copyright (C) 1995--2000 Charlie Zender ncks version 1.30 (2000/07/31) "Bolivia"'.
This tells you ncks contains all patches up to version 1.30, which dates from July 31, 2000.
This chapter presents reference pages for each of the operators individually.
The operators are presented in alphabetical order.
All valid command line switches are included in the syntax statement.
Recall that descriptions of many of these command line switches are provided only in section 3, Features common to most operators, to avoid redundancy.
Only options specific to, or most useful with, a particular operator are described in any detail in the sections below.
4.1 ncatted netCDF Attribute Editor
SYNTAX
ncatted [-a att_dsc] [-a ...] [-D] [-h] [-l path] [-O] [-p path] [-R] [-r] input-file [output-file]
ncatted edits attributes in a netCDF file.
If you are editing attributes then you are spending too much time in the world of metadata, and ncatted was written to get you back out as quickly and painlessly as possible.
ncatted can append, create, delete, modify, and overwrite attributes (all explained below).
Furthermore, ncatted allows each editing operation to be applied to every variable in a file, thus saving you time when you want to change attribute conventions throughout a file.
ncatted interprets character attributes as strings.
Because repeated use of ncatted can considerably increase the size of the history global attribute (see section 3.14 History attribute), the `-h' switch is provided to override automatically appending the command to the history global attribute in the output-file.
When ncatted is used to change the missing_value attribute, it changes the associated missing data self-consistently.
If the internal floating point representation of a missing value, e.g., 1.0e36, differs between two machines then netCDF files produced on those machines will have incompatible missing values.
This allows ncatted to change the missing values in files from different machines to a single value so that the files may then be concatenated together, e.g., by ncrcat, without losing any information.
See section 3.10 Missing values, for more information.
The key to mastering ncatted is understanding the meaning of the structure describing the attribute modification, att_dsc.
Each att_dsc contains five elements, which makes using ncatted somewhat complicated, but powerful.
The att_dsc argument structure contains five arguments in the following order:
att_dsc = att_nm, var_nm, mode, att_type, att_val
att_nm: Attribute name, e.g., units
var_nm: Variable name, e.g., pressure
mode: Edit mode abbreviation, e.g., a. See below for complete listing of valid values of mode.
att_type: Attribute type abbreviation, e.g., c. See below for complete listing of valid values of att_type.
att_val: Attribute value, e.g., pascal
The value of att_nm is the name of the attribute you want to edit.
The meaning of this should be clear to all users of the ncatted operator.
The value of var_nm is the name of the variable containing the
attribute (named att_nm) that you want to edit.
There are two very important and useful exceptions to this rule.
The value of var_nm can also be used to direct ncatted to edit global attributes, or to repeat the editing operation for every variable in a file.
A value of var_nm of "global" indicates that att_nm refers to a global attribute, rather than a particular variable's attribute.
This is the method ncatted supports for editing global attributes.
If var_nm is left blank, on the other hand, then ncatted attempts to perform the editing operation on every variable in the file.
This option may be convenient to use if you decide to change the
conventions you use for describing the data.
The value of mode is a single character abbreviation (a, c, d, m, or o) standing for one of five editing modes:
a: Append
c: Create
d: Delete
m: Modify
o: Overwrite
The value of att_type is a single character abbreviation (f, d, l, i, s, c, or b) standing for one of the six primitive netCDF data types:
f: NC_FLOAT
d: NC_DOUBLE
i, l: NC_INT (NC_LONG)
s: NC_SHORT
c: NC_CHAR
b: NC_BYTE
The value of att_val is what you want to change attribute att_nm to contain.
The specification of att_val is optional in Delete mode.
Attribute values for all types besides NC_CHAR must have an attribute length of at least one.
Thus att_val may be a single value or one-dimensional array of elements of type att_type.
If the att_val is not set or is set to empty space, and the att_type is NC_CHAR, e.g., -a units,T,o,c,"" or -a units,T,o,c,, then the corresponding attribute is set to have zero length.
When specifying an array of values, it is safest to enclose att_val in double or single quotes, e.g., -a levels,T,o,s,"1,2,3,4" or -a levels,T,o,s,'1,2,3,4'.
The quotes are strictly unnecessary around att_val except when att_val contains characters which would confuse the calling shell, such as spaces, commas, and wildcard characters.
NCO processing of NC_CHAR attributes is a bit like Perl in that
it attempts to do what you want by default (but this sometimes causes
unexpected results if you want unusual data storage).
If the att_type is NC_CHAR then the argument is interpreted as a
string and it may contain C-language escape sequences, e.g., \n, which NCO will interpret before writing anything to disk.
NCO translates valid escape sequences and stores the
appropriate ASCII code instead.
Since two-byte escape sequences, e.g., \n, represent one-byte ASCII codes, e.g., ASCII 10 (decimal), the stored
string attribute is one byte shorter than the input string length for
each embedded escape sequence.
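This byte counting can be previewed with the shell's own printf, whose `%b' directive performs the same escape-sequence translation; the sample string below is a hypothetical illustration, not taken from NCO itself.

```shell
# The two typed characters "\n" are stored as the single ASCII byte 10,
# so the translated string is one byte shorter per escape sequence.
input='line1\nline2'              # 12 characters as typed
printf '%b' "$input" | wc -c      # 11 bytes after translation
```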
The most frequently used C-language escape sequences are \n (for linefeed) and \t (for horizontal tab).
These sequences in particular allow convenient editing of formatted text
attributes.
The other valid ASCII codes are \a, \b, \f, \r, \v, and \\.
See section 4.6 ncks netCDF Kitchen Sink, for more examples of string formatting (with the ncks `-s' option) with special characters.
Analogous to printf, other special characters are also allowed by ncatted if they are "protected" by a backslash.
The characters ", ', ?, and \ may be input to the shell as \", \', \?, and \\.
NCO simply strips away the leading backslash from these characters
before editing the attribute.
No other characters require protection by a backslash.
Backslashes which precede any other character (e.g., 3, m, $, |, &, @, %, {, and }) will not be filtered and will be included in the attribute.
Note that the NUL character \0 which terminates C-language strings is assumed and need not be explicitly specified.
If \0 is input, it will not be translated (because it would terminate the string in an additional location).
Because of these context-sensitive rules, if you wish to use an attribute of type NC_CHAR to store data, rather than text strings, you should use ncatted with care.
EXAMPLES
Append the string "Data version 2.0.\n" to the global attribute
history
:
ncatted -O -a history,global,a,c,"Data version 2.0\n" in.nc
Note the use of embedded C-language printf()-style escape sequences.
Change the value of the long_name
attribute for variable T
from whatever it currently is to "temperature":
ncatted -O -a long_name,T,o,c,temperature in.nc
Delete all existing units
attributes:
ncatted -O -a units,,d,, in.nc
Modify all existing units
attributes to "meter second-1":
ncatted -O -a units,,m,c,"meter second-1" in.nc
Overwrite the quanta
attribute of variable
energy
with an array of four integers:
ncatted -O -a quanta,energy,o,s,"010,101,111,121" in.nc
Demonstrate input of C-language escape sequences (e.g., \n) and other special characters (e.g., \"):
ncatted -h -a special,global,o,c,'\nDouble quote: \"\nTwo consecutive double quotes: \"\"\n Single quote: Beyond my shell abilities!\nBackslash: \\\n Two consecutive backslashes: \\\\\nQuestion mark: \?\n' in.nc
ncdiff
netCDF Differencer SYNTAX
ncdiff [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]]] [-F] [-h] [-l path] [-O] [-p path] [-R] [-r] [-v var[,...]] [-x] file_1 file_2 file_3
DESCRIPTION
ncdiff
subtracts variables in file_2 from the corresponding
variables (those with the same name) in file_1 and stores the
results in file_3.
Variables in file_2 are broadcast to conform to the
corresponding variable in file_1 if necessary.
Broadcasting a variable means creating data in non-existing dimensions
from the data in existing dimensions.
For example, a two dimensional variable in file_2 can be
subtracted from a four, three, or two (but not one or zero)
dimensional variable (of the same name) in file_1.
This functionality allows the user to compute anomalies from the mean.
Note that variables in file_1 are not broadcast to conform
to the dimensions in file_2.
Thus, when using ncdiff, the number of dimensions, or rank, of any processed variable in file_1 must be greater than or equal to the rank of the same variable in file_2.
Furthermore, the size of all dimensions common to both file_1 and
file_2 must be equal.
When computing anomalies from the mean it is often the case that
file_2 was created by applying an averaging operator to a file
with the same dimensions as file_1, if not file_1 itself.
In these cases, creating file_2 with ncra
rather than
ncwa
will cause the ncdiff
operation to fail.
For concreteness, say the record dimension in file_1 is time.
If file_2 were created by averaging file_1 over the
time
dimension with the ncra
operator rather than with the
ncwa
operator, then file_2 will have a time
dimension of size 1 rather than having no time
dimension at all (21).
In this case the input files to ncdiff
, file_1 and
file_2, will have unequally sized time
dimensions which
causes ncdiff
to fail.
To prevent this from occurring, use ncwa
to remove the time
dimension from file_2.
An example is given below.
ncdiff
will never difference coordinate variables or variables of
type NC_CHAR or NC_BYTE.
This ensures that coordinates (e.g., latitude and longitude) are
physically meaningful in the output file, file_3.
This behavior is hardcoded.
ncdiff
applies special rules to some NCAR CSM fields (e.g.,
ORO).
See 3.15 NCAR CSM Conventions for a complete description.
Finally, we note that ncflint
(see section 4.5 ncflint
netCDF File Interpolator) can also perform file subtraction (as well as
addition, multiplication and interpolation).
EXAMPLES
Say files `85_0112.nc' and `86_0112.nc' each contain 12 months of data. Compute the change in the monthly averages from 1985 to 1986:
ncdiff 86_0112.nc 85_0112.nc 86m85_0112.nc
The following examples demonstrate the broadcasting feature of
ncdiff
.
Say we wish to compute the monthly anomalies of T
from the yearly
average of T
for the year 1985.
First we create the 1985 average from the monthly data, which is stored
with the record dimension time.
ncra 85_0112.nc 85.nc
ncwa -O -a time 85.nc 85.nc
The second command, ncwa, gets rid of the time dimension of size 1 that ncra left in `85.nc'.
Now none of the variables in `85.nc' has a time
dimension.
A quicker way to accomplish this is to use ncwa
from the
beginning:
ncwa -a time 85_0112.nc 85.nc
Then use ncdiff to compute the anomalies for 1985:
ncdiff -v T 85_0112.nc 85.nc t_anm_85_0112.nc
The resulting file contains the deviations of the monthly values of T from the annual mean of T for each gridpoint.
Say we wish to compute the monthly gridpoint anomalies from the zonal
annual mean.
A zonal mean is a quantity that has been averaged over the
longitudinal (or x) direction.
First we use ncwa
to average over the longitudinal direction lon, creating `xavg_85.nc', the zonal mean of `85.nc'.
Then we use ncdiff
to subtract the zonal annual means from the
monthly gridpoint data:
ncwa -a lon 85.nc xavg_85.nc
ncdiff 85_0112.nc xavg_85.nc tx_anm_85_0112.nc
Since the data in `85_0112.nc' have dimensions time and lon, this example only works if `xavg_85.nc' has no time or lon dimension.
As a final example, say we have five years of monthly data (i.e., 60
months) stored in `8501_8912.nc' and we wish to create a file
which contains the twelve month seasonal cycle of the average monthly
anomaly from the five-year mean of this data.
The following method is just one permutation of many which will
accomplish the same result.
First use ncwa
to create the file containing the five-year mean:
ncwa -a time 8501_8912.nc 8589.nc
Then use ncdiff to create a file containing the difference of each month's data from the five-year mean:
ncdiff 8501_8912.nc 8589.nc t_anm_8501_8912.nc
Next use ncks to group the five January anomalies together in one file, and use ncra to create the average anomaly for all five Januarys.
These commands are embedded in a shell loop so they are repeated for all
twelve months:
foreach idx (01 02 03 04 05 06 07 08 09 10 11 12)
  ncks -F -d time,$idx,,12 t_anm_8501_8912.nc foo.$idx
  ncra foo.$idx t_anm_8589_$idx.nc
end
Note that ncra understands the stride argument, so the two commands inside the loop may be combined into the single command
ncra -F -d time,$idx,,12 t_anm_8501_8912.nc foo.$idx
Finally, use ncrcat to concatenate the 12 average monthly anomaly files into one twelve-record file which contains the entire seasonal cycle of the monthly anomalies:
ncrcat t_anm_8589_??.nc t_anm_8589_0112.nc
ncea
netCDF Ensemble Averager SYNTAX
ncea [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]]] [-F] [-h] [-l path] [-n loop] [-O] [-p path] [-R] [-r] [-v var[,...]] [-x] [-y op_typ] input-files output-file
DESCRIPTION
ncea
performs gridpoint averages of variables across an arbitrary
number (an ensemble) of input files, with each file receiving an
equal weight in the average.
Each variable in the output-file will be the same size as the same variable in any one of the input-files, and all input-files must be the same size.
Whereas ncra
only performs averages over the record dimension
(e.g., time), and weights each record in the record dimension evenly,
ncea
averages entire files, and weights each file evenly.
All dimensions, including the record dimension, are treated identically
and preserved in the output-file.
See section 2.6 Averagers vs. Concatenators, for a description of the
distinctions between the various averagers and concatenators.
The file is the logical unit of organization for the results of many
scientific studies.
Often one wishes to generate a file which is the gridpoint average of
many separate files.
This may be to reduce statistical noise by combining the results of a
large number of experiments, or it may simply be a step in a procedure
whose goal is to compute anomalies from a mean state.
In any case, when one desires to generate a file whose properties are
the mean of all the input files, then ncea
is the operator to
use.
ncea
assumes coordinate variables are properties common to all of
the experiments and so does not average them across files.
Instead, ncea
copies the values of the coordinate variables from
the first input file to the output file.
EXAMPLES
Consider a model experiment which generated five realizations of one year of data, say 1985. You can imagine that the experimenter slightly perturbs the initial conditions of the problem before generating each new solution. Assume each file contains all twelve months (a seasonal cycle) of data and we want to produce a single file containing the ensemble average (mean) seasonal cycle. Here the numeric filename suffix denotes the experiment number (not the month):
ncea 85_01.nc 85_02.nc 85_03.nc 85_04.nc 85_05.nc 85.nc
ncea 85_0[1-5].nc 85.nc
ncea -n 5,2,1 85_01.nc 85.nc
In the previous example, the user could have obtained the ensemble average values in a particular spatio-temporal region by adding a hyperslab argument to the command, e.g.,
ncea -d time,0,2 -d lat,-23.5,23.5 85_??.nc 85.nc
ncecat
netCDF Ensemble Concatenator SYNTAX
ncecat [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]]] [-F] [-h] [-l path] [-n loop] [-O] [-p path] [-R] [-r] [-v var[,...]] [-x] input-files output-file
DESCRIPTION
ncecat
concatenates an arbitrary number of input files into a
single output file.
Input files are glued together by creating a record dimension in the
output file.
Input files must be the same size.
Each input file is stored consecutively as a single record in the output
file.
Thus, the size of the output file is the sum of the sizes of the input
files.
See section 2.6 Averagers vs. Concatenators, for a description of the
distinctions between the various averagers and concatenators.
Consider five realizations, `85a.nc', `85b.nc', ...
`85e.nc' of 1985 predictions from the same climate model.
Then ncecat 85?.nc 85_ens.nc
glues the individual realizations
together into the single file, `85_ens.nc'.
If an input variable was dimensioned [lat,lon], it will have dimensions [record,lat,lon] in the output file.
A restriction of ncecat
is that the hyperslabs of the processed
variables must be the same from file to file.
Normally this means all the input files are the same size, and contain
data on different realizations of the same variables.
EXAMPLES
Consider a model experiment which generated five realizations of one year of data, say 1985. You can imagine that the experimenter slightly perturbs the initial conditions of the problem before generating each new solution. Assume each file contains all twelve months (a seasonal cycle) of data and we want to produce a single file containing all the seasonal cycles. Here the numeric filename suffix denotes the experiment number (not the month):
ncecat 85_01.nc 85_02.nc 85_03.nc 85_04.nc 85_05.nc 85.nc
ncecat 85_0[1-5].nc 85.nc
ncecat -n 5,2,1 85_01.nc 85.nc
ncflint
netCDF File Interpolator SYNTAX
ncflint [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]]] [-F] [-h] [-i var,val3] [-l path] [-O] [-p path] [-R] [-r] [-v var[,...]] [-w wgt1[,wgt2]] [-x] file_1 file_2 file_3
DESCRIPTION
ncflint
creates an output file that is a linear combination of
the input files.
This linear combination can be a weighted average, a normalized weighted
average, or an interpolation of the input files.
Coordinate variables are not acted upon in any case; they are simply
copied from file_1.
There are two conceptually distinct methods of using ncflint
.
The first method is to specify the weight each input file is to have in
the output file.
In this method, the value val3 of a variable in the output file
file_3 is determined from its values val1 and val2 in the
two input files according to
val3 = wgt1*val1 + wgt2*val2.
Here at least wgt1, and, optionally, wgt2, are specified on
the command line with the `-w' switch.
If only wgt1 is specified then wgt2 is automatically
computed as wgt2 = 1 - wgt1.
Note that weights larger than 1 are allowed.
Thus it is possible to specify wgt1 = 2 and
wgt2 = -3.
One can use this functionality to multiply all the values in a given
file by a constant.
The second method of using ncflint
is specifying the
interpolation option with `-i'.
This is really the inverse of the first method in the following sense.
When the user specifies the weights directly, ncflint
has no
work to do besides multiplying the input values by their respective
weights and adding the results together to produce the output values.
This assumes it is the weights that are known a priori.
In another class of cases it is the arrival value (i.e.,
val3) of a particular variable var that is known a priori.
In this case, the implied weights can always be inferred by examining
the values of var in the input files.
This results in one equation in two unknowns, wgt1 and wgt2:
val3 = wgt1*val1 + wgt2*val2.
Unique determination of the weights requires imposing the additional
constraint of normalization on the weights:
wgt1 + wgt2 = 1.
Thus, to use the interpolation option, the user specifies var
and val3 with the `-i' option.
ncflint
will compute wgt1 and wgt2, and use these
weights on all variables to generate the output file.
Although var may have any number of dimensions in the input
files, it must represent a single, scalar value.
Thus any dimensions associated with var must be degenerate,
i.e., of size one.
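To make the weight arithmetic concrete, here is a sketch (plain shell, not NCO) using the hypothetical values val1 = 85, val2 = 87, and desired arrival value val3 = 86; the two equations above give wgt1 = (val3 - val2)/(val1 - val2).

```shell
# Solve val3 = wgt1*val1 + wgt2*val2 subject to wgt1 + wgt2 = 1:
#   wgt1 = (val3 - val2) / (val1 - val2)
val1=85 val2=87 val3=86
wgt1=$(awk -v v1="$val1" -v v2="$val2" -v v3="$val3" \
    'BEGIN { print (v3 - v2) / (v1 - v2) }')
echo "wgt1 = $wgt1"   # 0.5, i.e., each input file is weighted equally
```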
If neither `-i' nor `-w' is specified on the command line,
ncflint
defaults to weighting each input file equally in the
output file.
This is equivalent to specifying `-w 0.5' or `-w 0.5,0.5'.
Attempting to specify both `-i' and `-w' methods in the same
command is an error.
ncflint
is programmed not to interpolate variables of type
NC_CHAR and NC_BYTE.
This behavior is hardcoded.
EXAMPLES
Although it has other uses, the interpolation feature was designed
to interpolate file_3 to a time between existing files.
Consider input files `85.nc' and `87.nc' containing variables
describing the state of a physical system at times time = 85 and time = 87.
Assume each file contains its timestamp in the scalar variable
time.
Then, to linearly interpolate to a file `86.nc' which describes the state of the system at time = 86, we would use
ncflint -i time,86 85.nc 87.nc 86.nc
Say you have observational data covering January and April 1985 in two files named `85_01.nc' and `85_04.nc', respectively. Then you can estimate the values for February and March by interpolating the existing data as follows. Combine `85_01.nc' and `85_04.nc' in a 2:1 ratio to make `85_02.nc':
ncflint -w 0.667 85_01.nc 85_04.nc 85_02.nc
ncflint -w 0.667,0.333 85_01.nc 85_04.nc 85_02.nc
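The 2:1 ratio is ordinary linear interpolation in time: with data at months 1 and 4 and a target of month 2, the weight on `85_01.nc' is (4 - 2)/(4 - 1) = 2/3. A quick sanity check with the shell (not NCO):

```shell
# Weight on the earlier file: (t2 - t3) / (t2 - t1), with t1=1, t2=4, t3=2
awk 'BEGIN { printf "%.3f\n", (4 - 2) / (4 - 1) }'   # prints 0.667
```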
Multiply `85.nc' by 3 and by -2 and add them together to make `tst.nc':
ncflint -w 3,-2 85.nc 85.nc tst.nc
Add `85.nc' to `86.nc' to obtain `85p86.nc', then subtract `86.nc' from `85.nc' to obtain `85m86.nc':
ncflint -w 1,1 85.nc 86.nc 85p86.nc
ncflint -w 1,-1 85.nc 86.nc 85m86.nc
ncdiff 85.nc 86.nc 85m86.nc
ncflint
can be used to mimic ncdiff
operations.
However this is not a good idea in practice because ncflint
does not broadcast (see section 4.2 ncdiff
netCDF Differencer) conforming
variables during arithmetic.
Thus the final two commands would produce identical results except that
ncflint
would fail if any variables needed to be broadcast.
Rescale the dimensional units of the surface pressure prs_sfc
from Pascals to hectopascals (millibars):
ncflint -O -C -v prs_sfc -w 0.01,0.0 in.nc in.nc out.nc
ncatted -O -a units,prs_sfc,o,c,millibar out.nc
ncks
netCDF Kitchen Sink SYNTAX
ncks [-A] [-a] [-C] [-c] [-D] [-d dim,[min][,[max]][,[stride]]] [-F] [-H] [-h] [-l path] [-M] [-m] [-O] [-p path] [-q] [-R] [-r] [-s format] [-u] [-v var[,...]] [-x] input-file [output-file]
DESCRIPTION
ncks
combines selected features of ncdump, ncextr, and the nccut and ncpaste specifications into one
versatile utility.
ncks
extracts a subset of the data from input-file and
either prints it as ASCII text to stdout, or writes (or pastes) it to
output-file, or both.
ncks
will print netCDF data in ASCII format to stdout, like ncdump, but with these differences:
ncks
prints data in a tabular format intended to be easy to
search for the data you want, one datum per screen line, with all
dimension subscripts and coordinate values (if any) preceding the datum.
Option `-s' allows the user to format the data using C-style
format strings.
Options `-a', `-F', `-H', `-M', `-m', `-q', `-s', and `-u' control the formatted appearance of the data.
ncks
will extract (and optionally create a new netCDF file
comprised of) only selected variables from the input file, like
ncextr
but with these differences: Only variables and
coordinates may be specifically included or excluded--all global
attributes and any attribute associated with an extracted variable will
be copied to the screen and/or output netCDF file.
Options `-c', `-C', `-v', and `-x' control which
variables are extracted.
ncks
will extract hyperslabs from the specified variables.
In fact ncks
implements the nccut specification exactly.
Option `-d' controls the hyperslab specification.
Input dimensions that are not associated with any output variable will not appear in the output netCDF. This feature removes superfluous dimensions from a netCDF file.
ncks
will append variables and attributes from the
input-file to output-file if output-file is a
pre-existing netCDF file whose relevant dimensions conform to dimension
sizes of input-file.
The append features of ncks
are intended to provide a rudimentary
means of adding data from one netCDF file to another, conforming, netCDF
file.
When naming conflicts exist between the two files, data in
output-file is usually overwritten by the corresponding data from
input-file.
Thus it is recommended that the user backup output-file in case
valuable data is accidentally overwritten.
If output-file exists, the user will be queried whether to
overwrite, append, or exit the ncks
call
completely.
Choosing overwrite destroys the existing output-file and creates an entirely new one from the output of the ncks
call.
Append has differing effects depending on the uniqueness of the
variables and attributes output by ncks
: If a variable or
attribute extracted from input-file does not have a name conflict with
the members of output-file then it will be added to output-file
without overwriting any of the existing contents of output-file.
In this case the relevant dimensions must agree (conform) between the
two files; new dimensions are created in output-file as required.
When a name conflict occurs, a global attribute from input-file
will overwrite the corresponding global attribute from
output-file.
If the name conflict occurs for a non-record variable, then the
dimensions and type of the variable (and of its coordinate dimensions,
if any) must agree (conform) in both files.
Then the variable values (and any coordinate dimension values)
from input-file will overwrite the corresponding variable values (and
coordinate dimension values, if any) in output-file
(22).
Since there can only be one record dimension in a file, the record dimension must have the same name (but not necessarily the same size) in both files if a record dimension variable is to be appended. If the record dimensions are of differing sizes, the record dimension of output-file will become the greater of the two record dimension sizes, the record variable from input-file will overwrite any counterpart in output-file and fill values will be written to any gaps left in the rest of the record variables (I think). In all cases variable attributes in output-file are superseded by attributes of the same name from input-file, and left alone if there is no name conflict.
Some users may wish to avoid interactive ncks
queries about
whether to overwrite existing data.
For example, batch scripts will fail if ncks
does not receive
responses to its queries.
Options `-O' and `-A' are available to force overwriting
existing files and variables, respectively.
ncks
The following list provides a short summary of the features unique to
ncks
.
Features common to many operators are described in
3. Features common to most operators.
The `-a' switch results in the variables being extracted, printed,
and written to disk in the order in which they were saved in the input
file.
Thus -a
retains the original ordering of the variables.
Unless a user-specified format is given (with `-s'), each element of the data hyperslab is printed on a separate line containing the names, indices, and values, if any, of all of the variable's dimensions.
The dimension and variable indices refer to the location of the
corresponding data element with respect to the variable as stored on
disk (i.e., not the hyperslab).
% ncks -H -C -v three_dmn_var in.nc
lat[0]=-90 lev[0]=100 lon[0]=0 three_dmn_var[0]=0
lat[0]=-90 lev[0]=100 lon[1]=90 three_dmn_var[1]=1
lat[0]=-90 lev[0]=100 lon[2]=180 three_dmn_var[2]=2
...
lat[1]=90 lev[2]=1000 lon[1]=90 three_dmn_var[21]=21
lat[1]=90 lev[2]=1000 lon[2]=180 three_dmn_var[22]=22
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
% ncks -F -H -C -v three_dmn_var in.nc
lon(1)=0 lev(1)=100 lat(1)=-90 three_dmn_var(1)=0
lon(2)=90 lev(1)=100 lat(1)=-90 three_dmn_var(2)=1
lon(3)=180 lev(1)=100 lat(1)=-90 three_dmn_var(3)=2
...
% ncks -H -d lat,90.0 -d lev,1000.0 -v three_dmn_var in.nc out.nc
lat[1]=90 lev[2]=1000 lon[0]=0 three_dmn_var[20]=20
lat[1]=90 lev[2]=1000 lon[1]=90 three_dmn_var[21]=21
lat[1]=90 lev[2]=1000 lon[2]=180 three_dmn_var[22]=22
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
% ncks -H out.nc
lat[0]=90 lev[0]=1000 lon[0]=0 three_dmn_var[0]=20
lat[0]=90 lev[0]=1000 lon[1]=90 three_dmn_var[1]=21
lat[0]=90 lev[0]=1000 lon[2]=180 three_dmn_var[2]=22
lat[0]=90 lev[0]=1000 lon[3]=270 three_dmn_var[3]=23
The `-m' switch prints variable metadata to screen (similar to ncdump -h).
This displays all metadata pertaining to each variable, one variable at a time.
The `-s' switch allows the user to format the data using C-style printf() formats.
EXAMPLES
View all data in netCDF `in.nc', printed with Fortran indexing conventions:
ncks -H -F in.nc
Copy the netCDF file `in.nc' to file `out.nc'.
ncks -O in.nc out.nc
The copy is not exact, however.
First, the history global attribute (see section 3.14 History attribute) will contain the command used to create `out.nc'.
Second, the variables in `out.nc' will be defined in alphabetical order.
Of course the internal storage of variables in a netCDF file should be
transparent to the user, but there are cases when alphabetizing a file
is useful (see description of -a
switch).
Print variable three_dmn_var
from file `in.nc' with
default notations.
Next print three_dmn_var
as an un-annotated text column.
Then print three_dmn_var
signed with very high precision.
Finally, print three_dmn_var
as a comma-separated list.
% ncks -H -C -v three_dmn_var in.nc
lat[0]=-90 lev[0]=100 lon[0]=0 three_dmn_var[0]=0
lat[0]=-90 lev[0]=100 lon[1]=90 three_dmn_var[1]=1
...
lat[1]=90 lev[2]=1000 lon[3]=270 three_dmn_var[23]=23
% ncks -s "%f\n" -H -C -v three_dmn_var in.nc
0.000000
1.000000
...
23.000000
% ncks -s "%+16.10f\n" -H -C -v three_dmn_var in.nc
+0.0000000000
+1.0000000000
...
+23.0000000000
% ncks -s "%f, " -H -C -v three_dmn_var in.nc
0.000000, 1.000000, ..., 23.000000,
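The `-s' arguments above are ordinary C printf() formats, so their effect can be previewed with the shell's own printf before running ncks; the value 23 below is just an illustration.

```shell
# "%+16.10f": always print the sign, pad to a field width of 16,
# and show ten digits after the decimal point.
printf '%+16.10f\n' 23   # prints "  +23.0000000000"
```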
See ncatted netCDF Attribute Editor, for more details on string formatting and special characters.
One dimensional arrays of characters stored as netCDF variables are automatically printed as strings, whether or not they are NUL-terminated, e.g.,
ncks -v fl_nm in.nc
The %c formatting code is useful for printing multidimensional arrays of characters representing fixed-length strings:
ncks -H -s "%c" -v fl_nm_arr in.nc
Using the %s format code on strings which are not NUL-terminated (and thus not technically strings) is likely to result in a core dump.
Create netCDF `out.nc' containing all variables, and any associated
coordinates, except variable time
, from netCDF `in.nc':
ncks -x -v time in.nc out.nc
Extract variables time
and pressure
from netCDF `in.nc'.
If `out.nc' does not exist it will be created.
Otherwise you will be prompted whether to append to or to
overwrite `out.nc':
ncks -v time,pressure in.nc out.nc
ncks -C -v time,pressure in.nc out.nc
The first version of the command creates an `out.nc' which contains time, pressure, and any coordinate variables associated with pressure.
The `out.nc' from the second version is guaranteed to contain only
two variables time
and pressure
.
Create netCDF `out.nc' containing all variables from file `in.nc'.
Restrict the dimensions of these variables to a hyperslab.
Print (with -H
) the hyperslabs to the screen for good measure.
The specified hyperslab is: the fifth value in dimension time
; the
half-open range lat > 0. in coordinate lat
; the
half-open range lon < 330. in coordinate lon
; the
closed interval .3 < band < .5 in coordinate band
; and
cross-section closest to 1000. in coordinate lev
.
Note that limits applied to coordinate values are specified with a
decimal point, and limits applied to dimension indices do not have a
decimal point. See section 3.7 Hyperslabs.
ncks -H -d time,5 -d lat,,0. -d lon,330., -d band,.3,.5 -d lev,1000. in.nc out.nc
Assume the domain of the monotonically increasing longitude coordinate
lon
is 0 < lon < 360.
Here, lon
is an example of a wrapped coordinate.
ncks
will extract a hyperslab which crosses the Greenwich
meridian simply by specifying the westernmost longitude as min and
the easternmost longitude as max, as follows:
ncks -d lon,260.,45. in.nc out.nc
ncra
netCDF Record Averager SYNTAX
ncra [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]][,[stride]]] [-F] [-h] [-l path] [-n loop] [-O] [-p path] [-R] [-r] [-v var[,...]] [-x] [-y op_typ] input-files output-file
DESCRIPTION
ncra
averages record variables across an arbitrary number of
input files.
The record dimension is retained as a degenerate (size 1) dimension in
the output variables.
See section 2.6 Averagers vs. Concatenators, for a description of the
distinctions between the various averagers and concatenators.
Input files may vary in size, but each must have a record dimension.
The record coordinate, if any, should be monotonic (or else non-fatal
warnings may be generated).
Hyperslabs of the record dimension which include more than one file are
handled correctly.
ncra
supports the stride argument to the `-d' hyperslab option for the record dimension only; stride is not supported for non-record dimensions.
ncra
weights each record (e.g., time slice) in the
input-files equally.
ncra
does not attempt to see if, say, the time
coordinate
is irregularly spaced and thus would require a weighted average in order
to be a true time average.
EXAMPLES
Average files `85.nc', `86.nc', ... `89.nc' along the record dimension, and store the results in `8589.nc':
ncra 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncra 8[56789].nc 8589.nc
ncra -n 5,2,1 85.nc 8589.nc
Assume the files `85.nc', `86.nc', ... `89.nc' each contain a record coordinate time of length 12 defined such that the third record in `86.nc' contains data from March 1986, etc. NCO knows how to hyperslab the record dimension across files. Thus, to average data from December, 1985 through February, 1986:
ncra -d time,11,13 85.nc 86.nc 87.nc 8512_8602.nc
ncra -F -d time,12,14 85.nc 86.nc 87.nc 8512_8602.nc
The following uses the stride option to average all the March temperature data from multiple input files into a single output file:
ncra -F -d time,3,,12 -v temperature 85.nc 86.nc 87.nc 858687_03.nc
Assume the time coordinate is incrementally numbered such that January, 1985 = 1 and December, 1989 = 60. Assuming `??' only expands to the five desired files, the following averages June, 1985--June, 1989:
ncra -d time,6.,54. ??.nc 8506_8906.nc
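The index arithmetic above can be sketched with a small shell helper (hypothetical, not part of NCO) that maps a year and month to this 1-based time index:

```shell
# January 1985 = 1, so index = (year - 1985)*12 + month
month_index() { echo $(( ($1 - 1985) * 12 + $2 )); }
month_index 1985 6   # June 1985 -> prints 6
month_index 1989 6   # June 1989 -> prints 54
```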
ncrcat
netCDF Record Concatenator SYNTAX
ncrcat [-A] [-C] [-c] [-D dbg] [-d dim,[min][,[max]][,[stride]]] [-F] [-h] [-l path] [-n loop] [-O] [-p path] [-R] [-r] [-v var[,...]] [-x] input-files output-file
DESCRIPTION
ncrcat
concatenates record variables across an arbitrary number
of input files.
The final record dimension is by default the sum of the lengths of the
record dimensions in the input files.
See section 2.6 Averagers vs. Concatenators, for a description of the
distinctions between the various averagers and concatenators.
Input files may vary in size, but each must have a record dimension.
The record coordinate, if any, should be monotonic (or else non-fatal
warnings may be generated).
Hyperslabs of the record dimension which include more than one file are
handled correctly.
ncrcat supports the stride argument to the `-d' hyperslab option for the record dimension only; stride is not supported for non-record dimensions.
ncrcat
applies special rules to ARM convention time fields (e.g.,
time_offset).
See 3.16 ARM Conventions for a complete description.
EXAMPLES
Concatenate files `85.nc', `86.nc', ... `89.nc' along the record dimension, and store the results in `8589.nc':
ncrcat 85.nc 86.nc 87.nc 88.nc 89.nc 8589.nc
ncrcat 8[56789].nc 8589.nc
ncrcat -n 5,2,1 85.nc 8589.nc
Assume the files `85.nc', `86.nc', ... `89.nc' each contain a record coordinate time of length 12 defined such that the third record in `86.nc' contains data from March 1986, etc. NCO knows how to hyperslab the record dimension across files. Thus, to concatenate data from December, 1985--February, 1986:
ncrcat -d time,11,13 85.nc 86.nc 87.nc 8512_8602.nc
ncrcat -F -d time,12,14 85.nc 86.nc 87.nc 8512_8602.nc
The following uses the stride option to concatenate all the March temperature data from multiple input files into a single output file:
ncrcat -F -d time,3,,12 -v temperature 85.nc 86.nc 87.nc 858687_03.nc
Assume the time coordinate is incrementally numbered such that
January, 1985 = 1 and December, 1989 = 60.
Assuming ??
only expands to the five desired files, the following
concatenates June, 1985--June, 1989:
ncrcat -d time,6.,54. ??.nc 8506_8906.nc
ncrename
netCDF Renamer SYNTAX
ncrename [-a old_name,new_name] [-a ...] [-D] [-d old_name,new_name] [-d ...] [-h] [-l path] [-O] [-p path] [-R] [-r] [-v old_name,new_name] [-v ...] input-file [output-file]
ncrename
renames dimensions, variables, and attributes in a
netCDF file.
Each object that has a name in the list of old names is renamed using
the corresponding name in the list of new names.
All the new names must be unique.
Every old name must exist in the input file, unless the name is preceded
by the character `.'.
The validity of the old names is not checked prior to the renaming.
Thus, if an old name is specified without the `.' prefix and is
not present in input-file, ncrename will abort.
ncrename
is the exception to the normal rules that the user will
be interactively prompted before an existing file is changed, and that a
temporary copy of an output file is constructed during the operation.
If only input-file is specified, then ncrename will change
the names within input-file in place without prompting and without
creating a temporary copy of input-file.
This is because the renaming operation is easily reversible if the
user makes a mistake: the new_name can simply be changed back to
old_name by using ncrename one more time.
Note that renaming a dimension to the name of a dependent variable can be used to invert the relationship between an independent coordinate variable and a dependent variable. In this case, the named dependent variable must be one-dimensional and should have no missing values. Such a variable will become a coordinate variable.
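A minimal sketch of this inversion, assuming `in.nc' contains a dimension old_dim and a one-dimensional dependent variable new_crd defined on it (both names are hypothetical):

```shell
# Rename the dimension to the name of the dependent variable.
# new_crd(new_crd) then satisfies the netCDF definition of a
# coordinate variable, inverting the original relationship.
ncrename -d old_dim,new_crd in.nc
```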
According to the netCDF User's Guide, renaming properties in netCDF files does not incur the penalty of recopying the entire file when the new_name is shorter than the old_name.
OPTIONS
EXAMPLES
Rename the variable p to pressure and t to temperature in netCDF `in.nc'.
In this case p must exist in the input file (or ncrename will
abort), but the presence of t is optional:
ncrename -v p,pressure -v .t,temperature in.nc
ncrename does not automatically attach dimensions to variables of
the same name.
If you want to rename a coordinate variable so that it remains a
coordinate variable, you must separately rename both the dimension and
the variable:
ncrename -d lon,longitude -v lon,longitude in.nc
Create netCDF `out.nc' identical to `in.nc' except the attribute
_FillValue is changed to missing_value (in all variables
which possess it) and the global attribute Zaire is changed to
Congo:
ncrename -a _FillValue,missing_value -a Zaire,Congo in.nc out.nc
ncwa
netCDF Weighted Averager SYNTAX
ncwa [-A] [-a dim[,...]] [-C] [-c] [-D dbg] [-d dim,[min][,[max]]] [-F] [-h] [-I] [-l path] [-M val] [-m mask] [-N] [-n] [-O] [-o condition] [-p path] [-R] [-r] [-v var[,...]] [-W] [-w weight] [-x] [-y op_typ] input-file output-file
DESCRIPTION
ncwa
averages variables in a single file over arbitrary
dimensions, with options to specify weights, masks, and normalization.
See section 2.6 Averagers vs. Concatenators, for a description of the
distinctions between the various averagers and concatenators.
The default behavior of ncwa
is to arithmetically average every
numerical variable over all dimensions and produce a scalar result.
To average variables over only a subset of their dimensions, specify
these dimensions in a comma-separated list following `-a', e.g.,
`-a time,lat,lon'.
As with all arithmetic operators, the operation may be restricted to
an arbitrary hyperslab by employing the `-d' option
(see section 3.7 Hyperslabs).
ncwa also handles values matching the variable's
missing_value attribute correctly.
Moreover, ncwa
understands how to manipulate user-specified
weights, masks, and normalization options.
With these options, ncwa
can compute sophisticated averages (and
integrals) from the command line.
mask and weight, if specified, are broadcast to conform to
the variables being averaged.
The rank of variables is reduced by the number of dimensions which they
are averaged over.
Thus arrays which are one dimensional in the input-file and are
averaged by ncwa
appear in the output-file as scalars.
This allows the user to infer which dimensions may have been averaged.
Note that it is impossible for ncwa to make a
weight or mask of rank W conform to a var of
rank V if W > V.
This situation often arises when coordinate variables (which, by
definition, are one dimensional) are weighted and averaged.
ncwa
assumes you know this is impossible and so ncwa
does
not attempt to broadcast weight or mask to conform to
var in this case, nor does ncwa
print a warning message
telling you this, because it is so common.
Specifying dbg > 2 does cause ncwa
to emit warnings in
these situations, however.
Non-coordinate variables are always masked and weighted if specified.
Coordinate variables, however, may be treated specially.
By default, an averaged coordinate variable, e.g., latitude,
appears in output-file averaged the same way as any other variable
containing an averaged dimension.
In other words, by default ncwa
weights and masks
coordinate variables like all other variables.
This design decision was intended to be helpful but for some
applications it may be preferable not to weight or mask coordinate
variables just like all other variables.
Consider the following arguments to ncwa:
`-a latitude -w lat_wgt -d latitude,0.,90.',
where lat_wgt is a weight in the latitude dimension.
Since, by default, ncwa weights coordinate variables, the
value of latitude in the output-file depends on the weights
in lat_wgt and is not likely to be 45.0, the midpoint latitude of
the hyperslab.
Option `-I' overrides this default behavior and causes ncwa
not to weight or mask coordinate variables.
In the above case, this causes the value of latitude in the
output-file to be 45.0, which is a somewhat appealing result.
Thus, `-I' specifies simple arithmetic averages for the coordinate
variables.
In the case of latitude, `-I' specifies that you prefer to archive
the central latitude of the hyperslab over which variables were averaged
rather than the area weighted centroid of the hyperslab
(23).
Note that this default behavior changed on
1998/12/01; before this date the default was not to weight or mask
coordinate variables, i.e., the current `-I' behavior was then the default.
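As a sketch of the `-I' behavior just described (the weight lat_wgt and file names are illustrative):

```shell
# Weighted average over latitude, but compute the latitude
# coordinate itself as a simple arithmetic average, so it ends
# up at the hyperslab midpoint (45 degrees for a 0-90 degree slab)
ncwa -I -a latitude -w lat_wgt -d latitude,0.,90. in.nc out.nc
```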
The mathematical definition of operations involving rank reduction
is given above (see section 3.11 Operation Types).
Masking condition Normalization
The masking condition has the syntax mask condition val. Here mask is the name of the masking variable (specified with `-m'). The condition argument to `-o' may be any one of the six arithmetic comparatives: eq, ne, gt, lt, ge, le. These are the Fortran-style character abbreviations for the logical operations ==, !=, >, <, >=, <=. The masking condition defaults to eq (equality). The val argument to `-M' is the right hand side of the masking condition. Thus for the i'th element of the hyperslab to be averaged, the masking condition is mask_i condition val.
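For instance, a sketch assuming `in.nc' contains a hypothetical mask variable LSM that equals 1.0 over land:

```shell
# Average only gridpoints where the masking condition
# LSM_i eq 1.0 holds (eq is the default condition)
ncwa -m LSM -M 1.0 -a lat,lon in.nc out.nc
```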
ncwa
has one switch which controls the normalization of the
averages appearing in the output-file.
Option `-N' prevents ncwa
from dividing the weighted sum of
the variable (the numerator in the averaging expression) by the weighted
sum of the weights (the denominator in the averaging expression).
Thus `-N' tells ncwa
to return just the numerator of the
arithmetic expression defining the operation (see section 3.11 Operation Types).
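A sketch of `-N' used to compute a weighted sum (an integral) rather than a weighted average; the Gaussian weight gw and file names are illustrative:

```shell
# Return only the numerator, i.e., the sum of gw*var over lat,
# without dividing by the sum of the weights
ncwa -N -w gw -a lat in.nc out.nc
```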
EXAMPLES
Given file `85_0112.nc':
netcdf 85_0112 {
dimensions:
        lat = 64 ;
        lev = 18 ;
        lon = 128 ;
        time = UNLIMITED ; // (12 currently)
variables:
        float lat(lat) ;
        float lev(lev) ;
        float lon(lon) ;
        float time(time) ;
        float scalar_var ;
        float three_dmn_var(lat, lev, lon) ;
        float two_dmn_var(lat, lev) ;
        float mask(lat, lon) ;
        float gw(lat) ;
}
Average all variables in `in.nc' over all dimensions and store results in `out.nc':
ncwa in.nc out.nc
Store the zonal (longitudinal) average of `in.nc' in `out.nc':
ncwa -a lon in.nc out.nc
In this case the denominator of the average is the size of
lon, or 128.
Compute the meridional (latitudinal) average, with values weighted by the corresponding element of gw (24):
ncwa -w gw -a lat in.nc out.nc
In this case the denominator of the average is the sum of the weights in
lat.
The sum of the Gaussian weights is 2.0.
Compute the area average over the tropical Pacific:
ncwa -w gw -a lat,lon -d lat,-20.,20. -d lon,120.,270. in.nc out.nc
Compute the area average over the globe, but include only points for which ORO < 0.5 (25):
ncwa -m ORO -M 0.5 -o lt -w gw -a lat,lon in.nc out.nc
Compute the global annual average over the maritime tropical Pacific:
ncwa -m ORO -M 0.5 -o lt -w gw -a lat,lon,time -d lat,-20.0,20.0 -d lon,120.0,270.0 in.nc out.nc
To produce these formats, `nco.texi' was simply run through the
freely available programs texi2dvi, dvips, texi2html, and makeinfo.
Due to a bug in TeX, the resulting Postscript file, `nco.ps',
contains the Table of Contents as the final pages.
Thus if you print `nco.ps', remember to insert the Table of
Contents after the cover sheet before you staple the manual.
The Cygwin package is available from
http://sourceware.cygnus.com/cygwin
Currently, Cygwin 20.x comes with the GNU C/C++/Fortran
compilers (gcc, g++, g77).
These GNU compilers may be used to build the netCDF distribution
itself.
The ldd command, if it is available on your system,
will tell you where the executable is looking for each dynamically
loaded library. Use, e.g., ldd `which ncea`.
The Hierarchical Data Format, or HDF, is another self-describing data format similar to, but more elaborate than, netCDF.
One must link the NCO code to the HDF4 MFHDF library instead of the usual netCDF library. However, the MFHDF library only supports netCDF 2.x calls. Thus I will try to keep this capability in NCO as long as it is not too much trouble.
The ncrename operator is an exception to this rule.
See section 4.9 ncrename netCDF Renamer.
The terminology merging is reserved for an (unwritten) operator which replaces hyperslabs of a variable in one file with hyperslabs of the same variable from another file.
Yes, the terminology is confusing. By all means mail me if you think of a better nomenclature. Should NCO use paste instead of append?
Currently ncea and ncrcat are symbolically linked to the ncra
executable, which behaves slightly differently based on its invocation
name (i.e., `argv[0]').
These three operators share the same source code, but merely have
different inner loops.
The third averaging operator, ncwa, is the most
sophisticated averager in NCO.
However, ncwa is in a different class than ncra and
ncea because it can only operate on a single file per invocation
(as opposed to multiple files).
On that single file, however, ncwa provides a richer set of
averaging options, including weighting, masking, and broadcasting.
The exact length which exceeds the operating system's internal
limit for command line lengths varies from OS to OS
and from shell to shell.
GNU bash may not have any arbitrary fixed limits to the size of
command line arguments.
Many OSs cannot handle command line arguments longer than a
few thousand characters.
When this occurs, the ANSI C-standard argc-argv
method of passing arguments from the calling shell to a C-program (i.e.,
an NCO operator) breaks down.
The `-n' option is a backward compatible superset of the
NINTAP option from the NCAR CCM Processor.
The msrcp command must be in the user's path and
located in one of the following directories: /usr/local/bin,
/usr/bin, /opt/local/bin, or /usr/local/dcs/bin.
NCO averagers have a bug (TODO 121) which may cause them to behave incorrectly if the missing_value = `0.0' for a variable to be averaged. The workaround for this bug is to change missing_value to anything besides zero.
For example, the DOE ARM program often uses att_type =
NC_CHAR and missing_value = `-99999.'.
32767 = 2^15-1
Operators began performing type conversions before arithmetic in NCO version 1.2, August, 2000. Previous versions never performed unnecessary type conversion for arithmetic.
The actual type conversions are handled by intrinsic C-language type
conversion, so the floor() function is not explicitly called, but
the results are the same as if it were.
The exception is appending/altering the attributes x_op,
y_op, z_op, and t_op for variables which have been
averaged across space and time dimensions.
This feature is scheduled for future inclusion in NCO.
The CSM convention recommends time be stored in the format
time since base_time, e.g., the units attribute of
time might be `days since 1992-10-8 15:15:42.5 -6:00'.
A problem with this format occurs when using ncrcat to
concatenate multiple files together, each with a different
base_time.
That is, any time values from files following the first file to
be concatenated should be corrected to the base_time offset
specified in the units attribute of time from the first
file.
The analogous problem has been fixed in ARM files (see section 3.16 ARM Conventions)
and could be fixed for CSM files if there is sufficient lobbying, and if
Unidata fixes the UDUNITS package to build out of the box on Linux.
This is because ncra collapses the record dimension
to a size of 1 (making it a degenerate dimension), but does not
remove it, while ncwa removes all dimensions it averages over.
In other words, ncra changes the size but not the rank of
variables, while ncwa changes both the size and the rank of
variables.
Those familiar with netCDF mechanics might wish to know what is
happening here: ncks does not attempt to redefine the variable in
output-file to match its definition in input-file; ncks merely
copies the values of the variable and its coordinate dimensions, if any,
from input-file to output-file.
If lat_wgt contains Gaussian weights then the value of
latitude in the output-file will be the area-weighted
centroid of the hyperslab. For the example given, this is about 30
degrees.
gw stands for Gaussian weight in the NCAR climate model.
ORO stands for Orography in the NCAR climate model.
ORO < 0.5 selects the gridpoints which are covered by ocean.