What’s New

v2022.03.0 (2 March 2022)

This release brings a number of small improvements, as well as a move to calendar versioning (GH6176).

Many thanks to the 16 contributors to the v2022.03.0 release!

Aaron Spring, Alan D. Snow, Anderson Banihirwe, crusaderky, Illviljan, Joe Hamman, Jonas Gliß, Lukas Pilz, Martin Bergemann, Mathias Hauser, Maximilian Roos, Romain Caneill, Stan West, Stijn Van Hoey, Tobias Kölling, and Tom Nicholas.

New Features

  • Enabled multiplying tick offsets by floats. This allows a float n in CFTimeIndex.shift() if shift_freq is between Day and Microsecond (GH6134, PR6135). By Aaron Spring.

  • Enable providing additional keyword arguments to the pydap backend when reading OpenDAP datasets (GH6274). By Jonas Gliß.

  • Allow DataArray.drop_duplicates() to drop duplicates along multiple dimensions at once, and add Dataset.drop_duplicates(). (PR6307) By Tom Nicholas.
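The multi-dimensional form of drop_duplicates can be sketched as follows (the dimension names and values here are illustrative):

```python
import numpy as np
import xarray as xr

# A 2x2 array whose "x" and "y" coordinates each contain duplicates.
da = xr.DataArray(
    [[0, 1], [2, 3]],
    dims=["x", "y"],
    coords={"x": [0, 0], "y": [1, 1]},
)

# Drop duplicates along both dimensions at once; keep="first" is the default,
# so the first occurrence along each dimension is retained.
deduped = da.drop_duplicates(dim=["x", "y"])
```

The result is a 1x1 array holding the first element along each deduplicated dimension.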

Breaking changes

  • Renamed the interpolation keyword of all quantile methods (e.g. DataArray.quantile()) to method for consistency with numpy v1.22.0 (PR6108). By Mathias Hauser.
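Code that previously passed interpolation= now uses method=; a minimal sketch:

```python
import xarray as xr

da = xr.DataArray([0.0, 1.0, 2.0, 3.0], dims="x")

# Formerly: da.quantile(0.5, interpolation="linear")
median = da.quantile(0.5, method="linear")
```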

Deprecations

Bug fixes

  • Variables which are chunked using dask in larger (but aligned) chunks than the target zarr chunk size can now be stored using to_zarr() (PR6258) By Tobias Kölling.

  • Multi-file datasets containing encoded cftime.datetime objects can be read in parallel again (GH6226, PR6249, PR6305). By Martin Bergemann and Stan West.

Documentation

  • Delete files of datasets saved to disk while building the documentation and enable building on Windows via sphinx-build (PR6237). By Stan West.

Internal Changes

v0.21.1 (31 January 2022)

This is a bugfix release to resolve (GH6216, PR6207).

Bug fixes

v0.21.0 (27 January 2022)

Many thanks to the 20 contributors to the v0.21.0 release!

Abel Aoun, Anderson Banihirwe, Ant Gib, Chris Roat, Cindy Chiao, Deepak Cherian, Dominik Stańczak, Fabian Hofmann, Illviljan, Jody Klymak, Joseph K Aicher, Mark Harfouche, Mathias Hauser, Matthew Roeschke, Maximilian Roos, Michael Delgado, Pascal Bourgault, Pierre, Ray Bell, Romain Caneill, Tim Heap, Tom Nicholas, Zeb Nicholls, joseph nowak, keewis.

New Features

Breaking changes

  • Rely on matplotlib’s default datetime converters instead of pandas’ (GH6102, PR6109). By Jimmy Westling.

  • Improve repr readability when there are a large number of dimensions in datasets or dataarrays by wrapping the text once the maximum display width has been exceeded. (GH5546, PR5662) By Jimmy Westling.

Deprecations

  • Removed the lock kwarg from the zarr and pydap backends, completing the deprecation cycle started in GH5256. By Tom Nicholas.

  • Support for python 3.7 has been dropped. (PR5892) By Jimmy Westling.

Bug fixes

  • Preserve chunks when creating a DataArray from another DataArray (PR5984). By Fabian Hofmann.

  • Properly support DataArray.ffill(), DataArray.bfill(), Dataset.ffill() and Dataset.bfill() along chunked dimensions (GH6112). By Joseph Nowak.

  • Subclasses of bytes and str (e.g. np.str_ and np.bytes_) will now serialise to disk rather than raising a ValueError: unsupported dtype for netCDF4 variable: object as they did previously (PR5264). By Zeb Nicholls.

  • Fix applying function with non-xarray arguments using xr.map_blocks(). By Cindy Chiao.

  • No longer raise an error for an all-nan-but-one argument to DataArray.interpolate_na() when using method="nearest" (GH5994, PR6144). By Michael Delgado.

  • dt.season can now handle NaN and NaT. (PR5876). By Pierre Loicq.

  • Determination of zarr chunks handles empty lists for encoding chunks or variable chunks that occur in certain circumstances (PR5526). By Chris Roat.
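As a reminder of the interface touched by the DataArray.interpolate_na() fix above, here is a minimal sketch using the default method="linear" (which, unlike method="nearest", does not require scipy); the values are illustrative:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([np.nan, 2.0, np.nan, 4.0], dims="x", coords={"x": [0, 1, 2, 3]})

# Fill interior NaNs by linear interpolation along "x"; the leading NaN has
# no left neighbour and is left untouched by default.
filled = da.interpolate_na(dim="x")
```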

Internal Changes

v0.20.2 (9 December 2021)

This is a bugfix release to resolve (GH3391, GH5715). It also includes performance improvements in unstacking to a sparse array and a number of documentation improvements.

Many thanks to the 20 contributors:

Aaron Spring, Alexandre Poux, Deepak Cherian, Enrico Minack, Fabien Maussion, Giacomo Caria, Gijom, Guillaume Maze, Illviljan, Joe Hamman, Joseph Hardin, Kai Mühlbauer, Matt Henderson, Maximilian Roos, Michael Delgado, Robert Gieseke, Sebastian Weigand and Stephan Hoyer.

Breaking changes

  • Use complex nan when interpolating complex values out of bounds by default (instead of real nan) (PR6019). By Alexandre Poux.

Performance

Bug fixes

  • xr.map_blocks() and xr.corr() now work when dask is not installed (GH3391, GH5715, PR5731). By Gijom.

  • Fix plot.line crash for data of shape (1, N) in _title_for_slice on format_item (PR5948). By Sebastian Weigand.

  • Fix a regression in the removal of duplicate backend entrypoints (GH5944, PR5959) By Kai Mühlbauer.

  • Fix an issue that prevented datasets from being saved when they contained time variables with units that cftime can parse but pandas cannot (PR6049). By Tim Heap.
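The xr.corr() fix above concerns the no-dask code path; the function itself can be sketched as (values illustrative):

```python
import xarray as xr

a = xr.DataArray([1.0, 2.0, 3.0], dims="t")
b = xr.DataArray([2.0, 4.0, 6.0], dims="t")

# Pearson correlation over the shared dimension; works without dask installed.
r = xr.corr(a, b)
```

Since b is a linear function of a, the correlation is exactly 1.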

Documentation

Internal Changes

  • Use importlib to replace functionality of pkg_resources in backend plugins tests. (PR5959). By Kai Mühlbauer.

v0.20.1 (5 November 2021)

This is a bugfix release to fix GH5930.

Bug fixes

Documentation

v0.20.0 (1 November 2021)

This release brings improved support for pint arrays, methods for weighted standard deviation, variance, and sum of squares, the option to disable the use of the bottleneck library, significantly improved performance of unstack, as well as many bugfixes and internal changes.

Many thanks to the 40 contributors to this release!:

Aaron Spring, Akio Taniguchi, Alan D. Snow, arfy slowy, Benoit Bovy, Christian Jauvin, crusaderky, Deepak Cherian, Giacomo Caria, Illviljan, James Bourbeau, Joe Hamman, Joseph K Aicher, Julien Herzen, Kai Mühlbauer, keewis, lusewell, Martin K. Scherer, Mathias Hauser, Max Grover, Maxime Liquet, Maximilian Roos, Mike Taves, Nathan Lis, pmav99, Pushkar Kopparla, Ray Bell, Rio McMahon, Scott Staniewicz, Spencer Clark, Stefan Bender, Taher Chegini, Thomas Nicholas, Tomas Chor, Tom Augspurger, Victor Negîrneac, Zachary Blackwood, Zachary Moon, and Zeb Nicholls.

New Features

  • Add std, var, sum_of_squares to DatasetWeighted and DataArrayWeighted. By Christian Jauvin.

  • Added a get_options() method to xarray’s root namespace (GH5698, PR5716) By Pushkar Kopparla.

  • Xarray now does a better job rendering variable names that are long LaTeX sequences when plotting (GH5681, PR5682). By Tomas Chor.

  • Add an option ("use_bottleneck") to disable the use of bottleneck using set_options() (PR5560) By Justus Magin.

  • Added **kwargs argument to open_rasterio() to access overviews (GH3269). By Pushkar Kopparla.

  • Added storage_options argument to to_zarr() (GH5601, PR5615). By Ray Bell, Zachary Blackwood and Nathan Lis.

  • Added calendar utilities DataArray.convert_calendar(), DataArray.interp_calendar(), date_range(), date_range_like() and DataArray.dt.calendar (GH5155, PR5233). By Pascal Bourgault.

  • Histogram plots are set with a title displaying the scalar coords if any, similarly to the other plots (GH5791, PR5792). By Maxime Liquet.

  • Slice plots display the coords units in the same way as x/y/colorbar labels (PR5847). By Victor Negîrneac.

  • Added a new Dataset.chunksizes, DataArray.chunksizes, and Variable.chunksizes property, which will always return a mapping from dimension names to chunking pattern along that dimension, regardless of whether the object is a Dataset, DataArray, or Variable. (GH5846, PR5900) By Tom Nicholas.
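The new weighted reductions can be sketched as follows (values illustrative):

```python
import xarray as xr

da = xr.DataArray([1.0, 2.0], dims="x")
weights = xr.DataArray([1.0, 1.0], dims="x")

# Weighted standard deviation and variance over "x"; with equal weights these
# match the population (ddof=0) statistics.
std = da.weighted(weights).std("x")
var = da.weighted(weights).var("x")
```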

Breaking changes

  • The minimum versions of some dependencies were changed:

    Package          Old    New
    ---------------  -----  -----
    cftime           1.1    1.2
    dask             2.15   2.30
    distributed      2.15   2.30
    lxml             4.5    4.6
    matplotlib-base  3.2    3.3
    numba            0.49   0.51
    numpy            1.17   1.18
    pandas           1.0    1.1
    pint             0.15   0.16
    scipy            1.4    1.5
    seaborn          0.10   0.11
    sparse           0.8    0.11
    toolz            0.10   0.11
    zarr             2.4    2.5

  • The __repr__ of an xarray.Dataset's coords and data_vars now ignores xarray.set_options(display_max_rows=...) and shows the full output when called directly as, e.g., ds.data_vars or print(ds.data_vars) (GH5545, PR5580). By Stefan Bender.

Deprecations

  • Deprecate open_rasterio() (GH4697, PR5808). By Alan Snow.

  • Set the default argument for roll_coords to False for DataArray.roll() and Dataset.roll(). (PR5653) By Tom Nicholas.

  • xarray.open_mfdataset() will now error instead of warn when a value for concat_dim is passed alongside combine='by_coords'. By Tom Nicholas.
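The new roll_coords default can be sketched as follows: rolling now shifts the data but leaves the coordinates in place unless roll_coords=True is passed explicitly.

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]})

# roll_coords now defaults to False: data values rotate, coordinates stay put.
rolled = da.roll(x=1)
```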

Bug fixes

  • Fix ZeroDivisionError from saving dask array with empty dimension (GH5741). By Joseph K Aicher.

  • Fixed a performance bug where a cftime import was attempted within various core operations even if cftime was not installed (PR5640). By Luke Sewell.

  • Fixed bug when combining named DataArrays using combine_by_coords(). (PR5834). By Tom Nicholas.

  • When a custom engine was used in open_dataset() the engine wasn’t initialized properly, causing missing argument errors or inconsistent method signatures. (PR5684) By Jimmy Westling.

  • Numbers are properly formatted in a plot’s title (GH5788, PR5789). By Maxime Liquet.

  • Faceted plots will no longer raise a pint.UnitStrippedWarning when a pint.Quantity array is plotted, and will correctly display the units of the data in the colorbar (if there is one) (PR5886). By Tom Nicholas.

  • With backends, check for path-like objects rather than pathlib.Path type, use os.fspath (PR5879). By Mike Taves.

  • open_mfdataset() now accepts a single pathlib.Path object (GH5881). By Panos Mavrogiorgos.

  • Improved performance of Dataset.unstack() (PR5906). By Tom Augspurger.
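The Dataset.unstack() performance work above targets the ordinary stack/unstack round trip, which can be sketched as:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), np.arange(6).reshape(2, 3))})

# Collapse "x" and "y" into a single MultiIndex dimension, then undo it.
stacked = ds.stack(z=("x", "y"))
roundtrip = stacked.unstack("z")
```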

Documentation

  • Users are instructed to try use_cftime=True if a TypeError occurs when combining datasets and one of the types involved is a subclass of cftime.datetime (PR5776). By Zeb Nicholls.

  • A clearer error is now raised if a user attempts to assign a Dataset to a single key of another Dataset. (PR5839) By Tom Nicholas.

Internal Changes

  • Explicit indexes refactor: avoid len(index) in map_blocks (PR5670). By Deepak Cherian.

  • Explicit indexes refactor: decouple xarray.Index from xarray.Variable (PR5636). By Benoit Bovy.

  • Fix Mapping argument typing to allow mypy to pass on str keys (PR5690). By Maximilian Roos.

  • Annotate many of our tests, and fix some of the resulting typing errors. This will also mean our typing annotations are tested as part of CI. (PR5728). By Maximilian Roos.

  • Improve the performance of reprs for large datasets or dataarrays. (PR5661) By Jimmy Westling.

  • Use isort’s float_to_top config. (PR5695). By Maximilian Roos.

  • Remove use of the deprecated kind argument in pandas.Index.get_slice_bound() inside xarray.CFTimeIndex tests (PR5723). By Spencer Clark.

  • Refactor xarray.core.duck_array_ops to no longer special-case dispatching to dask versions of functions when acting on dask arrays, instead relying on numpy and dask's adherence to NEP-18 to dispatch automatically. (PR5571) By Tom Nicholas.

  • Add an ASV benchmark CI and improve performance of the benchmarks (PR5796) By Jimmy Westling.

  • Use importlib to replace functionality of pkg_resources such as version setting and loading of resources. (PR5845). By Martin K. Scherer.

v0.19.0 (23 July 2021)

This release brings improvements to plotting of categorical data, the ability to specify how attributes are combined in xarray operations, a new high-level unify_chunks() function, as well as various deprecations, bug fixes, and minor improvements.

Many thanks to the 29 contributors to this release!:

Andrew Williams, Augustus, Aureliana Barghini, Benoit Bovy, crusaderky, Deepak Cherian, ellesmith88, Elliott Sales de Andrade, Giacomo Caria, github-actions[bot], Illviljan, Joeperdefloep, joooeey, Julia Kent, Julius Busecke, keewis, Mathias Hauser, Matthias Göbel, Mattia Almansi, Maximilian Roos, Peter Andreas Entschev, Ray Bell, Sander, Santiago Soler, Sebastian, Spencer Clark, Stephan Hoyer, Thomas Hirtz, Thomas Nicholas.

New Features

  • Allow passing argument missing_dims to Variable.transpose() and Dataset.transpose() (GH5550, PR5586) By Giacomo Caria.

  • Allow passing a dictionary as coords to a DataArray (GH5527, reverts PR1539, which had deprecated this due to python’s inconsistent ordering in earlier versions). By Sander van Rijn.

  • Added Dataset.coarsen.construct(), DataArray.coarsen.construct() (GH5454, PR5475). By Deepak Cherian.

  • Xarray now uses consolidated metadata by default when writing and reading Zarr stores (GH5251). By Stephan Hoyer.

  • New top-level function unify_chunks(). By Mattia Almansi.

  • Allow assigning values to a subset of a dataset using positional or label-based indexing (GH3015, PR5362). By Matthias Göbel.

  • Attempting to reduce a weighted object over missing dimensions now raises an error (PR5362). By Mattia Almansi.

  • Add .sum to DataArray.rolling_exp() and Dataset.rolling_exp() for exponentially weighted rolling sums. These require numbagg 0.2.1 (PR5178). By Maximilian Roos.

  • xarray.cov() and xarray.corr() now lazily check for missing values if inputs are dask arrays (GH4804, PR5284). By Andrew Williams.

  • Attempting to concat list of elements that are not all Dataset or all DataArray now raises an error (GH5051, PR5425). By Thomas Hirtz.

  • allow passing a function to combine_attrs (PR4896). By Justus Magin.

  • Allow plotting categorical data (PR5464). By Jimmy Westling.

  • Allow removal of the coordinate attribute coordinates on variables by setting .attrs['coordinates'] = None (GH5510). By Elle Smith.

  • Added DataArray.to_numpy(), DataArray.as_numpy(), and Dataset.as_numpy(). (PR5568). By Tom Nicholas.

  • Units in plot labels are now automatically inferred from wrapped pint.Quantity() arrays. (PR5561). By Tom Nicholas.
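The new conversion methods can be sketched as:

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, 2.0], dims="x")

# to_numpy() returns the underlying data as a plain numpy array, converting
# from wrapped duck arrays (dask, pint, ...) if necessary.
arr = da.to_numpy()

# as_numpy() returns a new DataArray whose data is numpy-backed.
da_np = da.as_numpy()
```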

Breaking changes

  • The default mode for Dataset.to_zarr() when region is set has changed to the new mode="r+", which only allows for overriding pre-existing array values. This is a safer default than the prior mode="a", and allows for higher performance writes (PR5252). By Stephan Hoyer.

  • The main parameter to combine_by_coords() is renamed to data_objects instead of datasets so anyone calling this method using a named parameter will need to update the name accordingly (GH3248, PR4696). By Augustus Ijams.

Deprecations

  • Removed the deprecated dim kwarg to DataArray.integrate() (PR5630)

  • Removed the deprecated keep_attrs kwarg to DataArray.rolling() (PR5630)

  • Removed the deprecated keep_attrs kwarg to DataArray.coarsen() (PR5630)

  • Completed deprecation of passing an xarray.DataArray to Variable() - will now raise a TypeError (PR5630)

Bug fixes

  • Fix a minor incompatibility between partial datetime string indexing with a CFTimeIndex and upcoming pandas version 1.3.0 (GH5356, PR5359). By Spencer Clark.

  • Fix 1-level multi-index incorrectly converted to single index (GH5384, PR5385). By Benoit Bovy.

  • Don’t cast a duck array in a coordinate to numpy.ndarray in DataArray.differentiate() (PR5408) By Justus Magin.

  • Fix the repr of Variable objects with display_expand_data=True (PR5406) By Justus Magin.

  • Plotting a pcolormesh with xscale="log" and/or yscale="log" works as expected after improving the way the interval breaks are generated (GH5333). By Santiago Soler

  • combine_by_coords() can now handle combining a list of unnamed DataArray as input (GH3248, PR4696). By Augustus Ijams.

Internal Changes

  • Run CI on the first & last python versions supported only; currently 3.7 & 3.9. (PR5433) By Maximilian Roos.

  • Publish test results & timings on each PR. (PR5537) By Maximilian Roos.

  • Explicit indexes refactor: add a xarray.Index.query() method in which one may eventually provide a custom implementation of label-based data selection (not ready yet for public use). Also refactor the internal, pandas-specific implementation into PandasIndex.query() and PandasMultiIndex.query() (PR5322). By Benoit Bovy.

v0.18.2 (19 May 2021)

This release reverts a regression in xarray’s unstacking of dask-backed arrays.

v0.18.1 (18 May 2021)

This release is intended as a small patch release to be compatible with the new 2021.5.0 dask.distributed release. It also includes a new drop_duplicates method, some documentation improvements, the beginnings of our internal Index refactoring, and some bug fixes.

Thank you to all 16 contributors!

Anderson Banihirwe, Andrew, Benoit Bovy, Brewster Malevich, Giacomo Caria, Illviljan, James Bourbeau, Keewis, Maximilian Roos, Ravin Kumar, Stephan Hoyer, Thomas Nicholas, Tom Nicholas, Zachary Moon.

New Features

  • Implement DataArray.drop_duplicates() to remove duplicate dimension values (PR5239). By Andrew Huang.

  • Allow passing combine_attrs strategy names to the keep_attrs parameter of apply_ufunc() (PR5041) By Justus Magin.

  • Dataset.interp() now allows interpolation with non-numerical datatypes, such as booleans, instead of dropping them. (GH4761 PR5008). By Jimmy Westling.

  • Raise more informative error when decoding time variables with invalid reference dates. (GH5199, PR5288). By Giacomo Caria.
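Passing a combine_attrs strategy name to keep_attrs can be sketched as follows (the attribute used here is illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray([1.0, 2.0], dims="x", attrs={"units": "m"})

# "override" copies the attrs of the first input to the result.
out = xr.apply_ufunc(np.square, da, keep_attrs="override")
```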

Bug fixes

  • Opening netCDF files from a path that doesn’t end in .nc without supplying an explicit engine works again (GH5295), fixing a bug introduced in 0.18.0. By Stephan Hoyer

Documentation

  • Clean up and enhance docstrings for the DataArray.plot and Dataset.plot.* families of methods (PR5285). By Zach Moon.

  • Explanation of deprecation cycles and how to implement them added to contributors guide. (PR5289) By Tom Nicholas.

Internal Changes

  • Explicit indexes refactor: add an xarray.Index base class and Dataset.xindexes / DataArray.xindexes properties. Also rename PandasIndexAdapter to PandasIndex, which now inherits from xarray.Index (PR5102). By Benoit Bovy.

  • Replace SortedKeysDict with python’s dict, given dicts are now ordered. By Maximilian Roos.

  • Updated the release guide for developers. Now accounts for actions that are automated via github actions. (PR5274). By Tom Nicholas.

v0.18.0 (6 May 2021)

This release brings a few important performance improvements, a wide range of usability upgrades, lots of bug fixes, and some new features. These include a plugin API to add backend engines, a new theme for the documentation, curve fitting methods, and several new plotting functions.

Many thanks to the 38 contributors to this release: Aaron Spring, Alessandro Amici, Alex Marandon, Alistair Miles, Ana Paula Krelling, Anderson Banihirwe, Aureliana Barghini, Baudouin Raoult, Benoit Bovy, Blair Bonnett, David Trémouilles, Deepak Cherian, Gabriel Medeiros Abrahão, Giacomo Caria, Hauke Schulz, Illviljan, Mathias Hauser, Matthias Bussonnier, Mattia Almansi, Maximilian Roos, Ray Bell, Richard Kleijn, Ryan Abernathey, Sam Levang, Spencer Clark, Spencer Jones, Tammas Loughran, Tobias Kölling, Todd, Tom Nicholas, Tom White, Victor Negîrneac, Xianxiang Li, Zeb Nicholls, crusaderky, dschwoerer, johnomotani, keewis

New Features

  • apply combine_attrs on data variables and coordinate variables when concatenating and merging datasets and dataarrays (PR4902). By Justus Magin.

  • Add Dataset.to_pandas() (PR5247) By Giacomo Caria.

  • Add DataArray.plot.surface() which wraps matplotlib’s plot_surface to make surface plots (GH2235 GH5084 PR5101). By John Omotani.

  • Allow passing multiple arrays to Dataset.__setitem__() (PR5216). By Giacomo Caria.

  • Add 'cumulative' option to Dataset.integrate() and DataArray.integrate() so that the result is a cumulative integral, like scipy.integrate.cumulative_trapezoid() (PR5153). By John Omotani.

  • Add safe_chunks option to Dataset.to_zarr() which allows overriding checks made to ensure Dask and Zarr chunk compatibility (GH5056). By Ryan Abernathey

  • Add Dataset.query() and DataArray.query() which enable indexing of datasets and data arrays by evaluating query expressions against the values of the data variables (PR4984). By Alistair Miles.

  • Allow passing combine_attrs to Dataset.merge() (PR4895). By Justus Magin.

  • Support for dask.graph_manipulation (requires dask >=2021.3) By Guido Imperiale

  • Add Dataset.plot.streamplot() for streamplot plots with Dataset variables (PR5003). By John Omotani.

  • Many of the arguments for the DataArray.str methods now support providing an array-like input. In this case, the array provided to the arguments is broadcast against the original array and applied elementwise.

  • DataArray.str now supports +, *, and % operators. These behave the same as they do for str, except that they follow array broadcasting rules.

  • A large number of new DataArray.str methods were implemented, DataArray.str.casefold(), DataArray.str.cat(), DataArray.str.extract(), DataArray.str.extractall(), DataArray.str.findall(), DataArray.str.format(), DataArray.str.get_dummies(), DataArray.str.islower(), DataArray.str.join(), DataArray.str.normalize(), DataArray.str.partition(), DataArray.str.rpartition(), DataArray.str.rsplit(), and DataArray.str.split(). A number of these methods allow for splitting or joining the strings in an array. (GH4622) By Todd Jennings

  • Thanks to the new pluggable backend infrastructure external packages may now use the xarray.backends entry point to register additional engines to be used in open_dataset(), see the documentation in How to add a new backend (GH4309, GH4803, PR4989, PR4810 and many others). The backend refactor has been sponsored with the “Essential Open Source Software for Science” grant from the Chan Zuckerberg Initiative and developed by B-Open. By Aureliana Barghini and Alessandro Amici.

  • Added DataArray.dt.date (GH4983, PR4994). By Hauke Schulz.

  • Implement __getitem__ for both DatasetGroupBy and DataArrayGroupBy, inspired by pandas’ get_group(). By Deepak Cherian.

  • Switch the tutorial functions to use pooch (which is now a optional dependency) and add tutorial.open_rasterio() as a way to open example rasterio files (GH3986, PR4102, PR5074). By Justus Magin.

  • Add typing information to unary and binary arithmetic operators operating on Dataset, DataArray, Variable, DatasetGroupBy or DataArrayGroupBy (PR4904). By Richard Kleijn.

  • Add a combine_attrs parameter to open_mfdataset() (PR4971). By Justus Magin.

  • Enable passing arrays with a subset of dimensions to DataArray.clip() & Dataset.clip(); these methods now use xarray.apply_ufunc(); (PR5184). By Maximilian Roos.

  • Disable the cfgrib backend if the eccodes library is not installed (PR5083). By Baudouin Raoult.

  • Added DataArray.curvefit() and Dataset.curvefit() for general curve fitting applications. (GH4300, PR4849) By Sam Levang.

  • Add options to control expand/collapse of sections in display of Dataset and DataArray. The function set_options() now takes keyword arguments display_expand_attrs, display_expand_coords, display_expand_data, display_expand_data_vars, all of which can be one of True to always expand, False to always collapse, or default to expand unless over a pre-defined limit (PR5126). By Tom White.

  • Significant speedups in Dataset.interp() and DataArray.interp(). (GH4739, PR4740). By Deepak Cherian.

  • Prevent passing concat_dim to xarray.open_mfdataset() when combine='by_coords' is specified, which should never have been possible (as xarray.combine_by_coords() has no concat_dim argument to pass to). Also removes unneeded internal reordering of datasets in xarray.open_mfdataset() when combine='by_coords' is specified. Fixes (GH5230). By Tom Nicholas.

  • Implement __setitem__ for xarray.core.indexing.DaskIndexingAdapter if dask version supports item assignment. (GH5171, PR5174) By Tammas Loughran.
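Among the additions above, Dataset.query() evaluates a pandas-style expression against the data variables; a minimal sketch:

```python
import numpy as np
import xarray as xr

ds = xr.Dataset({"a": ("x", np.arange(5))})

# Keep only the positions along "x" where the expression holds.
subset = ds.query(x="a > 2")
```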

Breaking changes

  • The minimum versions of some dependencies were changed:

    Package      Old    New
    -----------  -----  -----
    boto3        1.12   1.13
    cftime       1.0    1.1
    dask         2.11   2.15
    distributed  2.11   2.15
    matplotlib   3.1    3.2
    numba        0.48   0.49

  • open_dataset() and open_dataarray() now accept only the first argument as positional; all others must be passed as keyword arguments. This is part of the refactor to support external backends (GH4309, PR4989). By Alessandro Amici.

  • Functions that are identities for 0d data return the unchanged data if axis is empty. This ensures that Datasets where some variables do not have the averaged dimensions are not accidentally changed (GH4885, PR5207). By David Schwörer.

  • DataArray.coarsen and Dataset.coarsen no longer support passing keep_attrs via its constructor. Pass keep_attrs via the applied function, i.e. use ds.coarsen(...).mean(keep_attrs=False) instead of ds.coarsen(..., keep_attrs=False).mean(). Further, coarsen now keeps attributes per default (PR5227). By Mathias Hauser.

  • switch the default of the merge() combine_attrs parameter to "override". This will keep the current behavior for merging the attrs of variables but stop dropping the attrs of the main objects (PR4902). By Justus Magin.

Deprecations

  • Warn when passing concat_dim to xarray.open_mfdataset() when combine='by_coords' is specified, which should never have been possible (as xarray.combine_by_coords() has no concat_dim argument to pass to). Also removes unneeded internal reordering of datasets in xarray.open_mfdataset() when combine='by_coords' is specified. Fixes (GH5230), via (PR5231, PR5255). By Tom Nicholas.

  • The lock keyword argument to open_dataset() and open_dataarray() is now a backend specific option. It will give a warning if passed to a backend that doesn’t support it instead of being silently ignored. From the next version it will raise an error. This is part of the refactor to support external backends (GH5073). By Tom Nicholas and Alessandro Amici.

Bug fixes

  • Properly support DataArray.ffill(), DataArray.bfill(), Dataset.ffill(), Dataset.bfill() along chunked dimensions. (GH2699). By Deepak Cherian.

  • Fix 2d plot failure for certain combinations of dimensions when x is 1d and y is 2d (GH5097, PR5099). By John Omotani.

  • Ensure standard calendar times encoded with large values (i.e. greater than approximately 292 years), can be decoded correctly without silently overflowing (PR5050). This was a regression in xarray 0.17.0. By Zeb Nicholls.

  • Added support for numpy.bool_ attributes in roundtrips using the h5netcdf engine with invalid_netcdf=True (which casts bools to numpy.bool_) (GH4981, PR4986). By Victor Negîrneac.

  • Don’t allow passing axis to Dataset.reduce() methods (GH3510, PR4940). By Justus Magin.

  • Decode values as signed if attribute _Unsigned = "false" (GH4954). By Tobias Kölling.

  • Keep coords attributes when interpolating when the indexer is not a Variable. (GH4239, GH4839 PR5031) By Jimmy Westling.

  • Ensure standard calendar dates encoded with a calendar attribute with some or all uppercase letters can be decoded or encoded to or from np.datetime64[ns] dates with or without cftime installed (GH5093, PR5180). By Spencer Clark.

  • Warn on passing keep_attrs to resample and rolling_exp as they are ignored, pass keep_attrs to the applied function instead (PR5265). By Mathias Hauser.

Documentation

Internal Changes

  • Enable displaying mypy error codes and ignore only specific error codes using # type: ignore[error-code] (PR5096). By Mathias Hauser.

  • Replace uses of raises_regex with the more standard pytest.raises(Exception, match="foo"); (PR5188), (PR5191). By Maximilian Roos.

v0.17.0 (24 Feb 2021)

This release brings a few important performance improvements, a wide range of usability upgrades, lots of bug fixes, and some new features. These include better cftime support, a new quiver plot, better unstack performance, more efficient memory use in rolling operations, and some python packaging improvements. We also have a few documentation improvements (and more planned!).

Many thanks to the 36 contributors to this release: Alessandro Amici, Anderson Banihirwe, Aureliana Barghini, Ayrton Bourn, Benjamin Bean, Blair Bonnett, Chun Ho Chow, DWesl, Daniel Mesejo-León, Deepak Cherian, Eric Keenan, Illviljan, Jens Hedegaard Nielsen, Jody Klymak, Julien Seguinot, Julius Busecke, Kai Mühlbauer, Leif Denby, Martin Durant, Mathias Hauser, Maximilian Roos, Michael Mann, Ray Bell, RichardScottOZ, Spencer Clark, Tim Gates, Tom Nicholas, Yunus Sevinchan, alexamici, aurghs, crusaderky, dcherian, ghislainp, keewis, rhkleijn

Breaking changes

  • xarray no longer supports python 3.6

    The minimum version policy was changed to also apply to projects with irregular releases. As a result, the minimum versions of some dependencies have changed:

    Package       Old    New
    ------------  -----  -----
    Python        3.6    3.7
    setuptools    38.4   40.4
    numpy         1.15   1.17
    pandas        0.25   1.0
    dask          2.9    2.11
    distributed   2.9    2.11
    bottleneck    1.2    1.3
    h5netcdf      0.7    0.8
    iris          2.2    2.4
    netcdf4       1.4    1.5
    pseudonetcdf  3.0    3.1
    rasterio      1.0    1.1
    scipy         1.3    1.4
    seaborn       0.9    0.10
    zarr          2.3    2.4

    (GH4688, PR4720, PR4907, PR4942)

  • As a result of PR4684 the default units encoding for datetime-like values (np.datetime64[ns] or cftime.datetime) will now always be set such that int64 values can be used. In the past, no units finer than “seconds” were chosen, which would sometimes mean that float64 values were required, which would lead to inaccurate I/O round-trips.

  • Variables referred to in attributes like bounds and grid_mapping can be set as coordinate variables. These attributes are moved to DataArray.encoding from DataArray.attrs. This behaviour is controlled by the decode_coords kwarg to open_dataset() and open_mfdataset(). The full list of decoded attributes is in Weather and climate data (PR2844, GH3689)

  • As a result of PR4911 the output from calling DataArray.sum() or DataArray.prod() on an integer array with skipna=True and a non-None value for min_count will now be a float array rather than an integer array.

Deprecations

  • dim argument to DataArray.integrate() is being deprecated in favour of a coord argument, for consistency with Dataset.integrate(). For now using dim issues a FutureWarning. It will be removed in version 0.19.0 (PR3993). By Tom Nicholas.

  • Deprecated autoclose kwargs from open_dataset() are removed (PR4725). By Aureliana Barghini.

  • the return value of Dataset.update() is being deprecated to make it work more like dict.update(). It will be removed in version 0.19.0 (PR4932). By Justus Magin.

New Features

  • cftime_range() and DataArray.resample() now support millisecond ("L" or "ms") and microsecond ("U" or "us") frequencies for cftime.datetime coordinates (GH4097, PR4758). By Spencer Clark.

  • Significantly higher unstack performance on numpy-backed arrays which contain missing values; 8x faster than previous versions in our benchmark, and now 2x faster than pandas (PR4746). By Maximilian Roos.

  • Add Dataset.plot.quiver() for quiver plots with Dataset variables. By Deepak Cherian.

  • Add "drop_conflicts" to the strategies supported by the combine_attrs kwarg (GH4749, PR4827). By Justus Magin.

  • Allow installing from git archives (PR4897). By Justus Magin.

  • DataArrayCoarsen and DatasetCoarsen now implement a reduce method, enabling coarsening operations with custom reduction functions (GH3741, PR4939). By Spencer Clark.

  • Most rolling operations use significantly less memory. (GH4325). By Deepak Cherian.

  • Add Dataset.drop_isel() and DataArray.drop_isel() (GH4658, PR4819). By Daniel Mesejo.

  • Xarray now leverages updates as of cftime version 1.4.1, which enable exact I/O roundtripping of cftime.datetime objects (PR4758). By Spencer Clark.

  • open_dataset() and open_mfdataset() now accept fsspec URLs (including globs for the latter) for engine="zarr", and so allow reading from many remote and other file systems (PR4461) By Martin Durant

  • DataArray.swap_dims() & Dataset.swap_dims() now accept dims in the form of kwargs as well as a dict, like most similar methods. By Maximilian Roos.
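
    A minimal sketch of the new keyword form (illustrative data):

    ```python
    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x",
                      coords={"x": [0, 1, 2], "y": ("x", [10, 20, 30])})
    # Keyword form, equivalent to da.swap_dims({"x": "y"}).
    swapped = da.swap_dims(x="y")
    ```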

Bug fixes

  • Use specific type checks in xarray.core.variable.as_compatible_data instead of blanket access to values attribute (GH2097). By Yunus Sevinchan.

  • DataArray.resample() and Dataset.resample() do not trigger computations anymore if Dataset.weighted() or DataArray.weighted() are applied (GH4625, PR4668). By Julius Busecke.

  • merge() with combine_attrs='override' makes a copy of the attrs (GH4627).

  • By default, when possible, xarray will now always use values of type int64 when encoding and decoding numpy.datetime64[ns] datetimes. This ensures that maximum precision and accuracy are maintained in the round-tripping process (GH4045, PR4684). It also enables encoding and decoding standard calendar dates with time units of nanoseconds (PR4400). By Spencer Clark and Mark Harfouche.

  • DataArray.astype(), Dataset.astype() and Variable.astype() support the order and subok parameters again. This fixes a regression introduced in version 0.16.1 (GH4644, PR4683). By Richard Kleijn.

  • Remove dictionary unpacking when using .loc to avoid collision with .sel parameters (PR4695). By Anderson Banihirwe.

  • Fix the legend created by Dataset.plot.scatter() (GH4641, PR4723). By Justus Magin.

  • Fix a crash in orthogonal indexing on geographic coordinates with engine='cfgrib' (GH4733, PR4737). By Alessandro Amici.

  • Coordinates with dtype str or bytes now retain their dtype on many operations, e.g. reindex, align, concat, assign; previously they were cast to an object dtype (GH2658 and GH4543). By Mathias Hauser.
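
    A minimal sketch of the fixed behavior (illustrative data):

    ```python
    import xarray as xr

    a = xr.DataArray([1], dims="x", coords={"x": ["a"]})
    b = xr.DataArray([2], dims="x", coords={"x": ["b"]})
    # The concatenated "x" coordinate keeps its fixed-width string dtype
    # instead of being cast to object.
    c = xr.concat([a, b], dim="x")
    ```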

  • Limit number of data rows when printing large datasets. (GH4736, PR4750). By Jimmy Westling.

  • Add missing_dims parameter to transpose (GH4647, PR4767). By Daniel Mesejo.

  • Resolve intervals before appending other metadata to labels when plotting (GH4322, PR4794). By Justus Magin.

  • Fix regression when decoding a variable with a scale_factor and add_offset given as a list of length one (GH4631). By Mathias Hauser.

  • Expand user directory paths (e.g. ~/) in open_mfdataset() and Dataset.to_zarr() (GH4783, PR4795). By Julien Seguinot.

  • Raise DeprecationWarning when trying to typecast a tuple containing a DataArray. Users are now prompted to first call .data on it (GH4483). By Chun Ho Chow.

  • Ensure that Dataset.interp() raises ValueError when interpolating outside coordinate range and bounds_error=True (GH4854, PR4855). By Leif Denby.

  • Fix time encoding bug associated with using cftime versions greater than 1.4.0 with xarray (GH4870, PR4871). By Spencer Clark.

  • Stop DataArray.sum() and DataArray.prod() computing lazy arrays when called with a min_count parameter (GH4898, PR4911). By Blair Bonnett.

  • Fix bug preventing the min_count parameter to DataArray.sum() and DataArray.prod() working correctly when calculating over all axes of a float64 array (GH4898, PR4911). By Blair Bonnett.

  • Fix decoding of vlen strings using h5py versions greater than 3.0.0 with h5netcdf backend (GH4570, PR4893). By Kai Mühlbauer.

  • Allow converting Dataset or DataArray objects with a MultiIndex and at least one other dimension to a pandas object (GH3008, PR4442). By ghislainp.

Documentation

Internal Changes

  • Speed up of the continuous integration tests on Azure.

    • Switched to mamba and to matplotlib-base for a faster installation of all dependencies (PR4672).

    • Use pytest.mark.skip instead of pytest.mark.xfail for some tests that can currently not succeed (PR4685).

    • Run the tests in parallel using pytest-xdist (PR4694).

    By Justus Magin and Mathias Hauser.

  • Use pyproject.toml instead of the setup_requires option for setuptools (PR4897). By Justus Magin.

  • Replace all usages of assert x.identical(y) with assert_identical(x, y) for clearer error messages (PR4752). By Maximilian Roos.

  • Speed up attribute style access (e.g. ds.somevar instead of ds["somevar"]) and tab completion in IPython (GH4741, PR4742). By Richard Kleijn.

  • Added the set_close method to Dataset and DataArray for backends to specify how to voluntarily release all resources (PR4809). By Alessandro Amici.

  • Update type hints to work with numpy v1.20 (PR4878). By Mathias Hauser.

  • Ensure warnings cannot be turned into exceptions in testing.assert_equal() and the other assert_* functions (PR4864). By Mathias Hauser.

  • Performance improvement when constructing DataArrays. Significantly speeds up repr for Datasets with a large number of variables. By Deepak Cherian.

v0.16.2 (30 Nov 2020)

This release brings the ability to write to limited regions of zarr files, open zarr files with open_dataset() and open_mfdataset(), increased support for propagating attrs using the keep_attrs flag, as well as numerous bugfixes and documentation improvements.

Many thanks to the 31 contributors who contributed to this release: Aaron Spring, Akio Taniguchi, Aleksandar Jelenak, alexamici, Alexandre Poux, Anderson Banihirwe, Andrew Pauling, Ashwin Vishnu, aurghs, Brian Ward, Caleb, crusaderky, Dan Nowacki, darikg, David Brochart, David Huard, Deepak Cherian, Dion Häfner, Gerardo Rivera, Gerrit Holl, Illviljan, inakleinbottle, Jacob Tomlinson, James A. Bednar, jenssss, Joe Hamman, johnomotani, Joris Van den Bossche, Julia Kent, Julius Busecke, Kai Mühlbauer, keewis, Keisuke Fujii, Kyle Cranmer, Luke Volpatti, Mathias Hauser, Maximilian Roos, Michaël Defferrard, Michal Baumgartner, Nick R. Papior, Pascal Bourgault, Peter Hausamann, PGijsbers, Ray Bell, Romain Martinez, rpgoldman, Russell Manser, Sahid Velji, Samnan Rahee, Sander, Spencer Clark, Stephan Hoyer, Thomas Zilio, Tobias Kölling, Tom Augspurger, Wei Ji, Yash Saboo, and Zeb Nicholls.

Deprecations

  • weekofyear and week have been deprecated. Use DataArray.dt.isocalendar().week instead (PR4534). By Mathias Hauser, Maximilian Roos, and Spencer Clark.
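
    A minimal sketch of the replacement (illustrative dates):

    ```python
    import pandas as pd
    import xarray as xr

    times = xr.DataArray(pd.date_range("2000-01-01", periods=3), dims="time")
    # ISO calendar week, replacing the deprecated .dt.weekofyear / .dt.week.
    # 2000-01-01 and 2000-01-02 fall in ISO week 52 of 1999; 2000-01-03 starts week 1.
    weeks = times.dt.isocalendar().week
    ```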

  • DataArray.rolling and Dataset.rolling no longer support passing keep_attrs via its constructor. Pass keep_attrs via the applied function, i.e. use ds.rolling(...).mean(keep_attrs=False) instead of ds.rolling(..., keep_attrs=False).mean(). Rolling operations now keep their attributes by default (PR4510). By Mathias Hauser.

New Features

Bug fixes

  • Fix bug where reference times without padded years (e.g. since 1-1-1) would lose their units when being passed by encode_cf_datetime (GH4422, PR4506). Such units are ambiguous about which value represents the year (is it YMD or DMY?). Now, if such formatting is encountered, the first value is assumed to be the year, the units are padded appropriately (to e.g. since 0001-1-1), and a warning that this assumption is being made is issued. Previously, without cftime, such times would be silently parsed incorrectly (at least based on the CF conventions): e.g. “since 1-1-1” would be parsed (via pandas and dateutil) to since 2001-1-1. By Zeb Nicholls.

  • Fix DataArray.plot.step(). By Deepak Cherian.

  • Fix bug where reading a scalar value from a NetCDF file opened with the h5netcdf backend would raise a ValueError when decode_cf=True (GH4471, PR4485). By Gerrit Holl.

  • Fix bug where datetime64 times are silently changed to incorrect values if they are outside the valid date range for ns precision when provided in some other units (GH4427, PR4454). By Andrew Pauling.

  • Fix silently overwriting the engine key when passing open_dataset() a file object to an incompatible netCDF (GH4457). Now incompatible combinations of files and engines raise an exception instead. By Alessandro Amici.

  • The min_count argument to DataArray.sum() and DataArray.prod() is now ignored when not applicable, i.e. when skipna=False or when skipna=None and the dtype does not have a missing value (GH4352). By Mathias Hauser.

  • combine_by_coords() now raises an informative error when passing coordinates with differing calendars (GH4495). By Mathias Hauser.

  • DataArray.rolling and Dataset.rolling now also keep the attributes and names of (wrapped) DataArray objects; previously only the global attributes were retained (GH4497, PR4510). By Mathias Hauser.

  • Improve performance where reading small slices from huge dimensions was slower than necessary (PR4560). By Dion Häfner.

  • Fix bug where dask_gufunc_kwargs was silently changed in apply_ufunc() (PR4576). By Kai Mühlbauer.

Documentation

Internal Changes

v0.16.1 (2020-09-20)

This patch release fixes an incompatibility with a recent pandas change, which was causing an issue indexing with a datetime64. It also includes improvements to rolling, to_dataframe, cov & corr methods and bug fixes. Our documentation has a number of improvements, including fixing all doctests and confirming their accuracy on every commit.

Many thanks to the 36 contributors who contributed to this release:

Aaron Spring, Akio Taniguchi, Aleksandar Jelenak, Alexandre Poux, Caleb, Dan Nowacki, Deepak Cherian, Gerardo Rivera, Jacob Tomlinson, James A. Bednar, Joe Hamman, Julia Kent, Kai Mühlbauer, Keisuke Fujii, Mathias Hauser, Maximilian Roos, Nick R. Papior, Pascal Bourgault, Peter Hausamann, Romain Martinez, Russell Manser, Samnan Rahee, Sander, Spencer Clark, Stephan Hoyer, Thomas Zilio, Tobias Kölling, Tom Augspurger, alexamici, crusaderky, darikg, inakleinbottle, jenssss, johnomotani, keewis, and rpgoldman.

Breaking changes

New Features

  • DataArray.rolling() and Dataset.rolling() now accept more than one dimension (PR4219). By Keisuke Fujii.

  • Dataset.to_dataframe() and DataArray.to_dataframe() now accept a dim_order parameter for specifying the order of the resulting dataframe’s dimensions (GH4331, PR4333). By Thomas Zilio.

  • Support multiple outputs in xarray.apply_ufunc() when using dask='parallelized'. (GH1815, PR4060). By Kai Mühlbauer.

  • min_count can be supplied to reductions such as .sum when specifying multiple dimensions to reduce over (PR4356). By Maximilian Roos.
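
    A minimal sketch with illustrative data:

    ```python
    import numpy as np
    import xarray as xr

    da = xr.DataArray([[np.nan, np.nan], [1.0, 2.0]], dims=("x", "y"))
    # min_count is the minimum number of non-NaN values required for a result.
    too_few = da.sum(dim=["x", "y"], min_count=3)  # only 2 valid values -> NaN
    enough = da.sum(dim=["x", "y"], min_count=2)   # 2 valid values -> 3.0
    ```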

  • xarray.cov() and xarray.corr() now handle missing values; (PR4351). By Maximilian Roos.
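
    A minimal sketch with illustrative data (perfectly correlated inputs, one missing value):

    ```python
    import numpy as np
    import xarray as xr

    a = xr.DataArray([1.0, 2.0, 3.0, np.nan], dims="t")
    b = xr.DataArray([2.0, 4.0, 6.0, 8.0], dims="t")
    # Positions where either input is NaN are skipped.
    r = xr.corr(a, b, dim="t")
    ```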

  • Add support for parsing datetime strings formatted following the default string representation of cftime objects, i.e. YYYY-MM-DD hh:mm:ss, in partial datetime string indexing, as well as cftime_range() (GH4337). By Spencer Clark.

  • Build CFTimeIndex.__repr__ explicitly as pandas.Index. Add calendar as a new property for CFTimeIndex and show calendar and length in CFTimeIndex.__repr__ (GH2416, PR4092) By Aaron Spring.

  • Use a wrapped array’s _repr_inline_ method to construct the collapsed repr of DataArray and Dataset objects and document the new method in xarray Internals. (PR4248). By Justus Magin.

  • Allow per-variable fill values in most functions. (PR4237). By Justus Magin.

  • Expose use_cftime option in open_zarr() (GH2886, PR3229) By Samnan Rahee and Anderson Banihirwe.

Bug fixes

  • Fix indexing with datetime64 scalars with pandas 1.1 (GH4283). By Stephan Hoyer and Justus Magin.

  • Variables which are chunked using dask only along some dimensions can be chunked while storing with zarr along previously unchunked dimensions (PR4312). By Tobias Kölling.

  • Fixed a bug in backend caused by basic installation of Dask (GH4164, PR4318). By Sam Morley.

  • Fixed a few bugs with Dataset.polyfit() when encountering deficient matrix ranks (GH4190, PR4193). By Pascal Bourgault.

  • Fixed inconsistencies between docstring and functionality for DataArray.str.get() and DataArray.str.wrap() (GH4334). By Mathias Hauser.

  • Fixed overflow issue causing incorrect results in computing means of cftime.datetime arrays (GH4341). By Spencer Clark.

  • Fixed Dataset.coarsen(), DataArray.coarsen() dropping attributes on original object (GH4120, PR4360). By Julia Kent.

  • Fix the signature of the plot methods (PR4359). By Justus Magin.

  • Fix xarray.apply_ufunc() with vectorize=True and exclude_dims (GH3890). By Mathias Hauser.

  • Fix KeyError when doing linear interpolation to an nd DataArray that contains NaNs (PR4233). By Jens Svensmark.

  • Fix incorrect legend labels for Dataset.plot.scatter() (GH4126). By Peter Hausamann.

  • Fix dask.optimize on DataArray producing an invalid Dask task graph (GH3698). By Tom Augspurger.

  • Fix pip install . when no .git directory exists; namely when the xarray source directory has been rsync’ed by PyCharm Professional for a remote deployment over SSH. By Guido Imperiale.

  • Preserve dimension and coordinate order during xarray.concat() (GH2811, GH4072, PR4419). By Kai Mühlbauer.

  • Avoid relying on set objects for the ordering of the coordinates (PR4409). By Justus Magin.

Documentation

  • Update the docstring of DataArray.copy() to remove incorrect mention of ‘dataset’ (GH3606). By Sander van Rijn.

  • Removed skipna argument from DataArray.count(), DataArray.any(), DataArray.all() (GH755). By Sander van Rijn.

  • Update the contributing guide to use merges instead of rebasing and state that we squash-merge. (PR4355). By Justus Magin.

  • Make sure the examples from the docstrings actually work (PR4408). By Justus Magin.

  • Updated Vectorized Indexing to a clearer example. By Maximilian Roos.

Internal Changes

v0.16.0 (2020-07-11)

This release adds xarray.cov & xarray.corr for covariance & correlation respectively; the idxmax & idxmin methods, the polyfit method & xarray.polyval for fitting polynomials, as well as a number of documentation improvements, other features, and bug fixes. Many thanks to all 44 contributors who contributed to this release:

Akio Taniguchi, Andrew Williams, Aurélien Ponte, Benoit Bovy, Dave Cole, David Brochart, Deepak Cherian, Elliott Sales de Andrade, Etienne Combrisson, Hossein Madadi, Huite, Joe Hamman, Kai Mühlbauer, Keisuke Fujii, Maik Riechert, Marek Jacob, Mathias Hauser, Matthieu Ancellin, Maximilian Roos, Noah D Brenowitz, Oriol Abril, Pascal Bourgault, Phillip Butcher, Prajjwal Nijhara, Ray Bell, Ryan Abernathey, Ryan May, Spencer Clark, Spencer Hill, Srijan Saurav, Stephan Hoyer, Taher Chegini, Todd, Tom Nicholas, Yohai Bar Sinai, Yunus Sevinchan, arabidopsis, aurghs, clausmichele, dmey, johnomotani, keewis, raphael dussin, risebell

Breaking changes

  • Minimum supported versions for the following packages have changed: dask >=2.9, distributed >=2.9. By Deepak Cherian.

  • groupby operations will restore coord dimension order. Pass restore_coord_dims=False to revert to previous behavior.

  • DataArray.transpose() will now transpose coordinates by default. Pass transpose_coords=False to revert to previous behaviour. By Maximilian Roos.

  • Alternate draw styles for plot.step() must be passed using the drawstyle (or ds) keyword argument, instead of the linestyle (or ls) keyword argument, in line with the upstream change in Matplotlib (PR3274). By Elliott Sales de Andrade.

  • The old auto_combine function has now been removed in favour of the combine_by_coords() and combine_nested() functions. This also means that the default behaviour of open_mfdataset() has changed to use combine='by_coords' as the default argument value. (GH2616, PR3926) By Tom Nicholas.

  • The DataArray and Variable HTML reprs now expand the data section by default (GH4176) By Stephan Hoyer.

New Features

  • DataArray.argmin() and DataArray.argmax() now support sequences of ‘dim’ arguments, and if a sequence is passed return a dict (which can be passed to DataArray.isel() to get the value of the minimum) of the indices for each dimension of the minimum or maximum of a DataArray. (PR3936) By John Omotani, thanks to Keisuke Fujii for work in PR1469.
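
    A minimal sketch with illustrative data:

    ```python
    import xarray as xr

    da = xr.DataArray([[1, 2], [0, 5]], dims=("x", "y"))
    # Passing a sequence of dims returns a dict of index arrays,
    # which can be fed straight back into isel().
    idx = da.argmin(dim=["x", "y"])
    smallest = da.isel(idx)
    ```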

  • Added xarray.cov() and xarray.corr() (GH3784, PR3550, PR4089). By Andrew Williams and Robin Beer.

  • Implement DataArray.idxmax(), DataArray.idxmin(), Dataset.idxmax(), Dataset.idxmin() (GH60, PR3871). By Todd Jennings.

  • Added DataArray.polyfit() and xarray.polyval() for fitting polynomials. (GH3349, PR3733, PR4099) By Pascal Bourgault.
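
    A minimal sketch fitting and re-evaluating a line (illustrative data; an exact linear relationship so the fit is recoverable):

    ```python
    import numpy as np
    import xarray as xr

    x = np.arange(5.0)
    da = xr.DataArray(2 * x + 1, dims="x", coords={"x": x})  # y = 2x + 1
    fit = da.polyfit(dim="x", deg=1)                  # least-squares fit
    # Evaluate the fitted polynomial back on the original coordinate.
    recon = xr.polyval(da["x"], fit.polyfit_coefficients)
    ```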

  • Added xarray.infer_freq() for extending frequency inferring to CFTime indexes and data (PR4033). By Pascal Bourgault.

  • chunks='auto' is now supported in the chunks argument of Dataset.chunk() (GH4055). By Andrew Williams.

  • Control over attributes of result in merge(), concat(), combine_by_coords() and combine_nested() using combine_attrs keyword argument (GH3865, PR3877). By John Omotani.

  • missing_dims argument to Dataset.isel(), DataArray.isel() and Variable.isel() to allow replacing the exception when a dimension passed to isel is not present with a warning, or just ignore the dimension (GH3866, PR3923). By John Omotani.
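
    A minimal sketch with illustrative data:

    ```python
    import xarray as xr

    ds = xr.Dataset({"a": ("x", [1, 2, 3])})
    # "y" does not exist in ds; with missing_dims="ignore" it is skipped
    # instead of raising an error ("warn" emits a warning instead).
    out = ds.isel(x=0, y=0, missing_dims="ignore")
    ```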

  • Support dask handling for DataArray.idxmax(), DataArray.idxmin(), Dataset.idxmax(), Dataset.idxmin(). (PR3922, PR4135) By Kai Mühlbauer and Pascal Bourgault.

  • More support for unit aware arrays with pint (PR3643, PR3975, PR4163) By Justus Magin.

  • Support overriding existing variables in to_zarr() with mode='a' even without append_dim, as long as dimension sizes do not change. By Stephan Hoyer.

  • Allow plotting of boolean arrays (PR3766). By Marek Jacob.

  • Enable using MultiIndex levels as coordinates in 1D and 2D plots (GH3927). By Mathias Hauser.

  • A days_in_month accessor for xarray.CFTimeIndex, analogous to the days_in_month accessor for a pandas.DatetimeIndex, which returns the days in the month for each datetime in the index. Now days-in-month weights for both standard and non-standard calendars can be obtained using the DatetimeAccessor (PR3935). This feature requires cftime version 1.1.0 or greater. By Spencer Clark.

  • For the netCDF3 backend, added dtype coercions for unsigned integer types (GH4014, PR4018). By Yunus Sevinchan.

  • map_blocks() now accepts a template kwarg. This allows use cases where the result of a computation could not be inferred automatically. By Deepak Cherian.

  • map_blocks() can now handle dask-backed xarray objects in args (PR3818). By Deepak Cherian.

  • Add keyword decode_timedelta to xarray.open_dataset(), xarray.open_dataarray() and xarray.decode_cf(), which allows disabling/enabling the decoding of timedeltas independently of time decoding (GH1621). By Aureliana Barghini.

Enhancements

  • Performance improvement of DataArray.interp() and Dataset.interp(): independent interpolations are now performed sequentially rather than in one large multidimensional space (GH2223). By Keisuke Fujii.

  • DataArray.interp() now supports interpolation over chunked dimensions (PR4155). By Alexandre Poux.

  • Major performance improvement for Dataset.from_dataframe() when the dataframe has a MultiIndex (PR4184). By Stephan Hoyer.

  • DataArray.reset_index() and Dataset.reset_index() now keep coordinate attributes (PR4103). By Oriol Abril.

  • Axes kwargs such as facecolor can now be passed to DataArray.plot() in subplot_kws. This works for both single axes plots and FacetGrid plots. By Raphael Dussin.

  • Array items with long string reprs are now limited to a reasonable width (PR3900). By Maximilian Roos.

  • Large arrays whose numpy reprs would have greater than 40 lines are now limited to a reasonable length (PR3905). By Maximilian Roos.

Bug fixes

Documentation

Internal Changes

v0.15.1 (23 Mar 2020)

This release brings many new features such as Dataset.weighted() methods for weighted array reductions, a new jupyter repr by default, and the start of units integration with pint. There’s also the usual batch of usability improvements, documentation additions, and bug fixes.

Breaking changes

  • Raise an error when assigning to the .values or .data attribute of dimension coordinates i.e. IndexVariable objects. This has been broken since v0.12.0. Please use DataArray.assign_coords() or Dataset.assign_coords() instead. (GH3470, PR3862) By Deepak Cherian

New Features

  • Weighted array reductions are now supported via the new DataArray.weighted() and Dataset.weighted() methods. See Weighted array reductions. (GH422, PR2922). By Mathias Hauser.
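
    A minimal sketch with illustrative data:

    ```python
    import xarray as xr

    da = xr.DataArray([1.0, 2.0], dims="x")
    weights = xr.DataArray([3.0, 1.0], dims="x")
    # Weighted mean: (1*3 + 2*1) / (3 + 1) = 1.25
    wmean = da.weighted(weights).mean(dim="x")
    ```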

  • The new jupyter notebook repr (Dataset._repr_html_ and DataArray._repr_html_) (introduced in 0.14.1) is now on by default. To disable, use xarray.set_options(display_style="text"). By Julia Signell.

  • Added support for pandas.DatetimeIndex-style rounding of cftime.datetime objects directly via a CFTimeIndex or via the DatetimeAccessor. By Spencer Clark.

  • Support new h5netcdf backend keyword phony_dims (available from h5netcdf v0.8.0) for H5NetCDFStore. By Kai Mühlbauer.

  • Add partial support for unit aware arrays with pint. (PR3706, PR3611) By Justus Magin.

  • Dataset.groupby() and DataArray.groupby() now raise a TypeError on multiple string arguments. Receiving multiple string arguments often means a user is attempting to pass multiple dimensions as separate arguments and should instead pass a single list of dimensions (PR3802). By Maximilian Roos.

  • map_blocks() can now apply functions that add new unindexed dimensions. By Deepak Cherian.

  • An ellipsis (...) is now supported in the dims argument of Dataset.stack() and DataArray.stack(), meaning all unlisted dimensions, similar to its meaning in DataArray.transpose() (PR3826). By Maximilian Roos.

  • Dataset.where() and DataArray.where() accept a lambda as a first argument, which is then called on the input; replicating pandas’ behavior. By Maximilian Roos.
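
    A minimal sketch with illustrative data:

    ```python
    import numpy as np
    import xarray as xr

    da = xr.DataArray([1, 2, 3, 4], dims="x")
    # The lambda is called on da itself; values failing the condition become NaN.
    out = da.where(lambda a: a > 2)
    ```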

  • skipna is available in Dataset.quantile(), DataArray.quantile(), core.groupby.DatasetGroupBy.quantile(), core.groupby.DataArrayGroupBy.quantile() (GH3843, PR3844). By Aaron Spring.

  • Add a diff summary for testing.assert_allclose. (GH3617, PR3847) By Justus Magin.

Bug fixes

  • Fix Dataset.interp() when indexing array shares coordinates with the indexed variable (GH3252). By David Huard.

  • Fix recombination of groups in Dataset.groupby() and DataArray.groupby() when performing an operation that changes the size of the groups along the grouped dimension. By Eric Jansen.

  • Fix use of multi-index with categorical values (GH3674). By Matthieu Ancellin.

  • Fix alignment with join="override" when some dimensions are unindexed. (GH3681). By Deepak Cherian.

  • Fix Dataset.swap_dims() and DataArray.swap_dims() producing index with name reflecting the previous dimension name instead of the new one (GH3748, PR3752). By Joseph K Aicher.

  • Use dask_array_type instead of dask_array.Array for type checking. (GH3779, PR3787) By Justus Magin.

  • concat() can now handle coordinate variables only present in one of the objects to be concatenated when coords="different". By Deepak Cherian.

  • xarray now respects the over, under and bad colors if set on a provided colormap. (GH3590, PR3601) By johnomotani.

  • coarsen and rolling now respect xr.set_options(keep_attrs=True) to preserve attributes. Dataset.coarsen() accepts a keyword argument keep_attrs to change this setting. (GH3376, PR3801) By Andrew Thomas.

  • Delete associated indexes when deleting coordinate variables. (GH3746). By Deepak Cherian.

  • Fix Dataset.to_zarr() when using append_dim and group simultaneously. (GH3170). By Matthias Meyer.

  • Fix html repr on Dataset with non-string keys (PR3807). By Maximilian Roos.

Documentation

  • Fix documentation of DataArray removing the deprecated mention that when omitted, dims are inferred from a coords-dict. (PR3821) By Sander van Rijn.

  • Improve the where() docstring. By Maximilian Roos.

  • Update the installation instructions: only explicitly list recommended dependencies (GH3756). By Mathias Hauser.

Internal Changes

  • Remove the internal import_seaborn function which handled the deprecation of the seaborn.apionly entry point (GH3747). By Mathias Hauser.

  • Don’t test pint integration in combination with datetime objects. (GH3778, PR3788) By Justus Magin.

  • Change test_open_mfdataset_list_attr to only run with dask installed (GH3777, PR3780). By Bruno Pagani.

  • Preserve the ability to index with method="nearest" with a CFTimeIndex with pandas versions greater than 1.0.1 (GH3751). By Spencer Clark.

  • Greater flexibility and improved test coverage of subtracting various types of objects from a CFTimeIndex. By Spencer Clark.

  • Update Azure CI MacOS image, given pending removal. By Maximilian Roos.

  • Remove xfails for scipy 1.0.1 for tests that append to netCDF files (PR3805). By Mathias Hauser.

  • Remove conversion to pandas.Panel, given its removal in pandas in favor of xarray’s objects. By Maximilian Roos.

v0.15.0 (30 Jan 2020)

This release brings many improvements to xarray’s documentation: our examples are now binderized notebooks (click here) and we have new example notebooks from our SciPy 2019 sprint (many thanks to our contributors!).

This release also features many API improvements such as a new TimedeltaAccessor and support for CFTimeIndex in interpolate_na()); as well as many bug fixes.

Breaking changes

  • Bumped minimum tested versions for dependencies:

    • numpy 1.15

    • pandas 0.25

    • dask 2.2

    • distributed 2.2

    • scipy 1.3

  • Remove compat and encoding kwargs from DataArray, which have been deprecated since 0.12. (PR3650). Instead, specify the encoding kwarg when writing to disk or set the DataArray.encoding attribute directly. By Maximilian Roos.

  • xarray.dot(), DataArray.dot(), and the @ operator now use align="inner" (except when xarray.set_options(arithmetic_join="exact"); GH3694) by Mathias Hauser.

New Features

  • Implement DataArray.pad() and Dataset.pad(). (GH2605, PR3596). By Mark Boer.
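
    A minimal sketch with illustrative data:

    ```python
    import xarray as xr

    da = xr.DataArray([1, 2, 3], dims="x")
    # One constant value before and two after along "x".
    padded = da.pad(x=(1, 2), constant_values=0)
    ```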

  • DataArray.sel() and Dataset.sel() now support pandas.CategoricalIndex. (GH3669) By Keisuke Fujii.

  • Support using an existing, opened h5netcdf File with H5NetCDFStore. This permits creating a Dataset from a h5netcdf File that has been opened using other means (GH3618). By Kai Mühlbauer.

  • Implement median and nanmedian for dask arrays. This works by rechunking to a single chunk along all reduction axes. (GH2999). By Deepak Cherian.

  • concat() now preserves attributes from the first Variable. (GH2575, GH2060, GH1614) By Deepak Cherian.

  • Dataset.quantile(), DataArray.quantile() and GroupBy.quantile now work with dask Variables. By Deepak Cherian.

  • Added the count reduction method to both DatasetCoarsen and DataArrayCoarsen objects (PR3500). By Deepak Cherian.

  • Add meta kwarg to apply_ufunc(); this is passed on to dask.array.blockwise(). (PR3660) By Deepak Cherian.

  • Add attrs_file option in open_mfdataset() to choose the source file for global attributes in a multi-file dataset (GH2382, PR3498). By Julien Seguinot.

  • Dataset.swap_dims() and DataArray.swap_dims() now allow swapping to dimension names that don’t exist yet. (PR3636) By Justus Magin.

  • Extend DatetimeAccessor properties and support .dt accessor for timedeltas via TimedeltaAccessor (PR3612). By Anderson Banihirwe.

  • Improvements to interpolating along time axes (GH3641, PR3631). By David Huard.

    • Support CFTimeIndex in DataArray.interpolate_na()

    • define 1970-01-01 as the default offset for the interpolation index for both pandas.DatetimeIndex and CFTimeIndex,

    • use microseconds in the conversion from timedelta objects to floats to avoid overflow errors.

Bug fixes

  • Applying a user-defined function that adds new dimensions using apply_ufunc() and vectorize=True now works with dask > 2.0. (GH3574, PR3660). By Deepak Cherian.

  • Fix combine_by_coords() to allow for combining incomplete hypercubes of Datasets (GH3648). By Ian Bolliger.

  • Fix combine_by_coords() when combining cftime coordinates which span long time intervals (GH3535). By Spencer Clark.

  • Fix plotting with transposed 2D non-dimensional coordinates. (GH3138, PR3441) By Deepak Cherian.

  • plot.FacetGrid.set_titles() can now replace existing row titles of a FacetGrid plot. In addition FacetGrid gained two new attributes: col_labels and row_labels contain matplotlib.text.Text handles for both column and row labels. These can be used to manually change the labels. By Deepak Cherian.

  • Fix issue with Dask-backed datasets raising a KeyError on some computations involving map_blocks() (PR3598). By Tom Augspurger.

  • Ensure Dataset.quantile(), DataArray.quantile() issue the correct error when q is out of bounds (GH3634) by Mathias Hauser.

  • Fix regression in xarray 0.14.1 that prevented encoding times with certain dtype, _FillValue, and missing_value encodings (GH3624). By Spencer Clark.

  • Raise an error when trying to use Dataset.rename_dims() to rename to an existing name (GH3438, PR3645) By Justus Magin.

  • Dataset.rename(), DataArray.rename() now check for conflicts with MultiIndex level names.

  • Dataset.merge() no longer fails when passed a DataArray instead of a Dataset. By Tom Nicholas.

  • Fix a regression in Dataset.drop(): allow passing any iterable when dropping variables (GH3552, PR3693) By Justus Magin.

  • Fixed errors emitted by mypy --strict in modules that import xarray. (GH3695) by Guido Imperiale.

  • Allow plotting of binned coordinates on the y axis in plot.line() and plot.step() plots (GH3571, PR3685) by Julien Seguinot.

  • setuptools is now marked as a dependency of xarray (PR3628) by Richard Höchenberger.

Documentation

Internal Changes

  • Make sure dask names change when rechunking by different chunk sizes. Conversely, make sure they stay the same when rechunking by the same chunk size. (GH3350) By Deepak Cherian.

  • 2x to 5x speed boost (on small arrays) for Dataset.isel(), DataArray.isel(), and DataArray.__getitem__() when indexing by int, slice, list of int, scalar ndarray, or 1-dimensional ndarray. (PR3533) by Guido Imperiale.

  • Removed internal method Dataset._from_vars_and_coord_names, which was dominated by Dataset._construct_direct. (PR3565) By Maximilian Roos.

  • Replaced versioneer with setuptools-scm. Moved contents of setup.py to setup.cfg. Removed pytest-runner from setup.py, as per deprecation notice on the pytest-runner project. (PR3714) by Guido Imperiale.

  • Use of isort is now enforced by CI. (PR3721) by Guido Imperiale.

v0.14.1 (19 Nov 2019)

Breaking changes

  • Broke compatibility with cftime < 1.0.3. By Deepak Cherian.

    Warning

    cftime version 1.0.4 is broken (cftime/126); please use version 1.0.4.2 instead.

  • All leftover support for dates from non-standard calendars through netcdftime, the module included in versions of netCDF4 prior to 1.4 that eventually became the cftime package, has been removed in favor of relying solely on the standalone cftime package (PR3450). By Spencer Clark.

New Features

  • Added the sparse option to Dataset.unstack(), DataArray.unstack(), Dataset.reindex() and DataArray.reindex() (GH3518). By Keisuke Fujii.

  • Added the fill_value option to DataArray.unstack() and Dataset.unstack() (GH3518, PR3541). By Keisuke Fujii.

  • Added the max_gap kwarg to Dataset.interpolate_na() and DataArray.interpolate_na(). This controls the maximum size of the data gap that will be filled by interpolation. By Deepak Cherian.

  • Added Dataset.drop_sel() & DataArray.drop_sel() for dropping labels. Dataset.drop_vars() & DataArray.drop_vars() have been added for dropping variables (including coordinates). The existing Dataset.drop() & DataArray.drop() methods remain as a backward compatible option for dropping either labels or variables, but using the more specific methods is encouraged (PR3475). By Maximilian Roos.

  • Added Dataset.map() & GroupBy.map & Resample.map for mapping / applying a function over each item in the collection, reflecting the widely used and least surprising name for this operation. The existing apply methods remain for backward compatibility, though using the map methods is encouraged (PR3459). By Maximilian Roos.

  • Dataset.transpose() and DataArray.transpose() now support an ellipsis (...) to represent all ‘other’ dimensions. For example, to move one dimension to the front, use .transpose('x', ...) (PR3421). By Maximilian Roos.
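
    A minimal sketch with illustrative data:

    ```python
    import numpy as np
    import xarray as xr

    da = xr.DataArray(np.zeros((2, 3, 4)), dims=("x", "y", "z"))
    # Move "z" to the front; "..." stands for all remaining dims in order.
    moved = da.transpose("z", ...)
    ```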

  • Changed xr.ALL_DIMS to equal python’s Ellipsis (...), and changed internal usages to use ... directly. As before, you can use this to instruct a groupby operation to reduce over all dimensions. While we have no plans to remove xr.ALL_DIMS, we suggest using ... (PR3418). By Maximilian Roos.

  • xarray.dot(), and DataArray.dot() now support the dims=... option to sum over the union of dimensions of all input arrays (GH3423) by Mathias Hauser.

  • Added new Dataset._repr_html_ and DataArray._repr_html_ to improve representation of objects in Jupyter. By default this feature is turned off for now. Enable it with xarray.set_options(display_style="html"). (PR3425) by Benoit Bovy and Julia Signell.

  • Implement dask deterministic hashing for xarray objects. Note that xarray objects with a dask.array backend already used deterministic hashing in previous releases; this change implements it when whole xarray objects are embedded in a dask graph, e.g. when DataArray.map_blocks() is invoked. (GH3378, PR3446, PR3515) By Deepak Cherian and Guido Imperiale.

  • xarray now respects the DataArray.encoding["coordinates"] attribute when writing to disk. See Coordinates for more. (GH3351, PR3487) By Deepak Cherian.

  • Add the documented-but-missing quantile(). (GH3525, PR3527). By Justus Magin.

Bug fixes

  • Ensure an index of type CFTimeIndex is not converted to a DatetimeIndex when calling Dataset.rename(), Dataset.rename_dims() and Dataset.rename_vars(). By Mathias Hauser. (GH3522).

  • Fix a bug in DataArray.set_index() in case that an existing dimension becomes a level variable of MultiIndex. (PR3520). By Keisuke Fujii.

  • Harmonize _FillValue, missing_value during encoding and decoding steps. (PR3502) By Anderson Banihirwe.

  • Fix regression introduced in v0.14.0 that would cause a crash if dask is installed but cloudpickle isn’t (GH3401). By Rhys Doyle.

  • Fix grouping over variables with NaNs. (GH2383, PR3406). By Deepak Cherian.

  • Make alignment and concatenation significantly more efficient by using dask names to compare dask objects prior to comparing values after computation. This change makes it more convenient to carry around large non-dimensional coordinate variables backed by dask arrays. Existing workarounds involving reset_coords(drop=True) should now be unnecessary in most cases. (GH3068, GH3311, GH3454, PR3453). By Deepak Cherian.

  • Add support for cftime>=1.0.4. By Anderson Banihirwe.

  • Rolling reduction operations no longer compute dask arrays by default. (GH3161). In addition, the allow_lazy kwarg to reduce is deprecated. By Deepak Cherian.

  • Fix GroupBy.reduce when reducing over multiple dimensions. (GH3402). By Deepak Cherian

  • Allow appending datetime and bool data variables to zarr stores. (GH3480). By Akihiro Matsukawa.

  • Add support for numpy >=1.18; bugfix mean() on datetime64 arrays on dask backend (GH3409, PR3537). By Guido Imperiale.

  • Add support for pandas >=0.26 (GH3440). By Deepak Cherian.

  • Add support for pseudonetcdf >=3.1 (PR3485). By Barron Henderson.

Documentation

Internal Changes

v0.14.0 (14 Oct 2019)

Breaking changes

  • This release introduces a rolling policy for minimum dependency versions: Minimum dependency versions.

    Several minimum versions have been increased:

    Package      Old                  New
    Python       3.5.3                3.6
    numpy        1.12                 1.14
    pandas       0.19.2               0.24
    dask         0.16 (tested: 2.4)   1.2
    bottleneck   1.1 (tested: 1.2)    1.2
    matplotlib   1.5 (tested: 3.1)    3.1

    Obsolete patch versions (x.y.Z) are not tested anymore. The oldest supported versions of all optional dependencies are now covered by automated tests (before, only the very latest versions were tested).

    (GH3222, GH3293, GH3340, GH3346, GH3358). By Guido Imperiale.

  • Dropped the drop=False optional parameter from Variable.isel(). It was unused and doesn’t make sense for a Variable. (PR3375). By Guido Imperiale.

  • Remove internal usage of collections.OrderedDict. After dropping support for Python <=3.5, most uses of OrderedDict in xarray were no longer necessary. We have removed the internal use of the OrderedDict in favor of Python’s builtin dict object which is now ordered itself. This change will be most obvious when interacting with the attrs property on Dataset and DataArray objects. (GH3380, PR3389). By Joe Hamman.

New functions/methods

Enhancements

  • core.groupby.GroupBy enhancements. By Deepak Cherian.

    • Added a repr (PR3344). Example:

      >>> da.groupby("time.season")
      DataArrayGroupBy, grouped over 'season'
      4 groups with labels 'DJF', 'JJA', 'MAM', 'SON'
      
    • Added a GroupBy.dims property that mirrors the dimensions of each group (GH3344).

  • Speed up Dataset.isel() up to 33% and DataArray.isel() up to 25% for small arrays (GH2799, PR3375). By Guido Imperiale.

Bug fixes

  • Reintroduce support for weakref (broken in v0.13.0). Support has been reinstated for DataArray and Dataset objects only. Internal xarray objects remain unaddressable by weakref in order to save memory (GH3317). By Guido Imperiale.

  • Line plots with the x or y argument set to a 1D non-dimensional coord now plot the correct data for 2D DataArrays (GH3334). By Tom Nicholas.

  • Make concat() more robust when merging variables present in some datasets but not others (GH508). By Deepak Cherian.

  • The default behaviour of reducing across all dimensions for DataArrayGroupBy objects has now been properly removed as was done for DatasetGroupBy in 0.13.0 (GH3337). Use xarray.ALL_DIMS if you need to replicate previous behaviour. Also raise nicer error message when no groups are created (GH1764). By Deepak Cherian.

  • Fix error in concatenating unlabeled dimensions (PR3362). By Deepak Cherian.

  • Warn if the dim kwarg is passed to rolling operations. This is redundant since a dimension is specified when the DatasetRolling or DataArrayRolling object is created. (PR3362). By Deepak Cherian.

Documentation

  • Created a glossary of important xarray terms (GH2410, PR3352). By Gregory Gundersen.

  • Created a “How do I…” section (How do I …) for solutions to common questions. (PR3357). By Deepak Cherian.

  • Add examples for Dataset.swap_dims() and DataArray.swap_dims() (PR3331). By Justus Magin.

  • Add examples for align(), merge(), combine_by_coords(), full_like(), zeros_like(), ones_like(), Dataset.pipe(), Dataset.assign(), Dataset.reindex(), Dataset.fillna() (PR3328). By Anderson Banihirwe.

  • Fixed documentation to clean up an unwanted file created in ipython example (PR3353). By Gregory Gundersen.

v0.13.0 (17 Sep 2019)

This release includes many exciting changes: wrapping of NEP18 compliant numpy-like arrays; new scatter() plotting method that can scatter two DataArrays in a Dataset against each other; support for converting pandas DataFrames to xarray objects that wrap pydata/sparse; and more!

Breaking changes

  • This release increases the minimum required Python version from 3.5.0 to 3.5.3 (GH3089). By Guido Imperiale.

  • The isel_points and sel_points methods are removed, having been deprecated since v0.10.0. These are redundant with the isel / sel methods. See Vectorized Indexing for the details. By Maximilian Roos.

  • The inplace kwarg for public methods now raises an error, having been deprecated since v0.11.0. By Maximilian Roos.

  • concat() now requires the dim argument. Its indexers, mode and concat_over kwargs have now been removed. By Deepak Cherian.

  • Passing a list of colors in cmap will now raise an error, having been deprecated since v0.6.1.

  • Most xarray objects now define __slots__. This reduces overall RAM usage by ~22% (not counting the underlying numpy buffers); on CPython 3.7/x64, a trivial DataArray has gone down from 1.9kB to 1.5kB.

    Caveats:

    • Pickle streams produced by older versions of xarray can’t be loaded using this release, and vice versa.

    • Any user code that was accessing the __dict__ attribute of xarray objects will break. The best practice to attach custom metadata to xarray objects is to use the attrs dictionary.

    • Any user code that defines custom subclasses of xarray classes must now explicitly define __slots__ itself. Subclasses that don’t add any attributes must state so by defining __slots__ = () right after the class header. Omitting __slots__ will now cause a FutureWarning to be logged, and will raise an error in a later release.

    (GH3250) by Guido Imperiale.

  • The default dimension for Dataset.groupby(), Dataset.resample(), DataArray.groupby() and DataArray.resample() reductions is now the grouping or resampling dimension.

  • DataArray.to_dataset() requires name to be passed as a kwarg (previously ambiguous positional arguments were deprecated)

  • Reindexing with variables of a different dimension now raise an error (previously deprecated)

  • xarray.broadcast_array is removed (previously deprecated in favor of broadcast())

  • Variable.expand_dims is removed (previously deprecated in favor of Variable.set_dims())

New functions/methods

  • xarray can now wrap around any NEP18 compliant numpy-like library (important: read notes about NUMPY_EXPERIMENTAL_ARRAY_FUNCTION in the above link). Added explicit test coverage for sparse. (GH3117, GH3202). This requires sparse>=0.8.0. By Nezar Abdennur and Guido Imperiale.

  • from_dataframe() and from_series() now support sparse=True for converting pandas objects into xarray objects wrapping sparse arrays. This is particularly useful with sparsely populated hierarchical indexes. (GH3206) By Stephan Hoyer.

  • The xarray package is now discoverable by mypy (although typing hints coverage is not complete yet). mypy type checking is now enforced by CI. Libraries that depend on xarray and use mypy can now remove from their setup.cfg the lines:

    [mypy-xarray]
    ignore_missing_imports = True
    

    (GH2877, GH3088, GH3090, GH3112, GH3117, GH3207) By Guido Imperiale and Maximilian Roos.

  • Added DataArray.broadcast_like() and Dataset.broadcast_like(). By Deepak Cherian and David Mertz.

  • Dataset plotting API for visualizing dependencies between two DataArrays! Currently only Dataset.plot.scatter() is implemented. By Yohai Bar Sinai and Deepak Cherian

  • Added DataArray.head(), DataArray.tail() and DataArray.thin(); as well as Dataset.head(), Dataset.tail() and Dataset.thin() methods. (GH319) By Gerardo Rivera.

Enhancements

  • Multiple enhancements to concat() and open_mfdataset(). By Deepak Cherian

    • Added compat='override'. When merging, this option picks the variable from the first dataset and skips all comparisons.

    • Added join='override'. When aligning, this only checks that index sizes are equal among objects and skips checking indexes for equality.

    • concat() and open_mfdataset() now support the join kwarg. It is passed down to align().

    • concat() now calls merge() on variables that are not concatenated (i.e. variables without concat_dim when data_vars or coords are "minimal"). concat() passes its new compat kwarg down to merge(). (GH2064)

    Users can avoid a common bottleneck when using open_mfdataset() on a large number of files with variables that are known to be aligned and some of which need not be concatenated. Slow equality comparisons can now be avoided, e.g.:

    data = xr.open_mfdataset(files, concat_dim='time', data_vars='minimal',
                             coords='minimal', compat='override', join='override')
    
  • In to_zarr(), passing mode is not mandatory if append_dim is set, as it will automatically be set to 'a' internally. By David Brochart.

  • Added the ability to initialize an empty or full DataArray with a single value. (GH277) By Gerardo Rivera.
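
    A minimal sketch (fill value and coordinates are illustrative): passing a scalar together with coords yields an array filled with that value.

```python
import xarray as xr

# Scalar data plus coords determines the shape; every element is 0.5.
da = xr.DataArray(0.5, coords=[("x", [0, 1, 2])])
```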

  • to_netcdf() now supports the invalid_netcdf kwarg when used with engine="h5netcdf". It is passed to h5netcdf.File. By Ulrich Herter.

  • xarray.Dataset.drop now supports keyword arguments; dropping index labels by using both dim and labels or using a DataArrayCoordinates object are deprecated (GH2910). By Gregory Gundersen.

  • Added examples of Dataset.set_index() and DataArray.set_index(), as well are more specific error messages when the user passes invalid arguments (GH3176). By Gregory Gundersen.

  • Dataset.filter_by_attrs() now filters the coordinates as well as the variables. By Spencer Jones.

Bug fixes

  • Improve “missing dimensions” error message for apply_ufunc() (GH2078). By Rick Russotto.

  • assign_coords() now supports dictionary arguments (GH3231). By Gregory Gundersen.

  • Fix regression introduced in v0.12.2 where copy(deep=True) would convert unicode indices to dtype=object (GH3094). By Guido Imperiale.

  • Improved error handling and documentation for .expand_dims() read-only view.

  • Fix tests for big-endian systems (GH3125). By Graham Inggs.

  • XFAIL several tests which are expected to fail on ARM systems due to a datetime issue in NumPy (GH2334). By Graham Inggs.

  • Fix KeyError that arises when using .sel method with float values different from coords float type (GH3137). By Hasan Ahmad.

  • Fixed bug in combine_by_coords() causing a ValueError if the input had an unused dimension with coordinates which were not monotonic (GH3150). By Tom Nicholas.

  • Fixed crash when applying distributed.Client.compute() to a DataArray (GH3171). By Guido Imperiale.

  • Better error message when using groupby on an empty DataArray (GH3037). By Hasan Ahmad.

  • Fix error that arises when using open_mfdataset on a series of netcdf files having differing values for a variable attribute of type list. (GH3034) By Hasan Ahmad.

  • Prevent argmax() and argmin() from calling dask compute (GH3237). By Ulrich Herter.

  • Plots in 2 dimensions (pcolormesh, contour) now allow specifying levels as a numpy array (GH3284). By Mathias Hauser.

  • Fixed bug in DataArray.quantile() failing to keep attributes when keep_attrs was True (GH3304). By David Huard.

Documentation

v0.12.3 (10 July 2019)

New functions/methods

  • New methods Dataset.to_stacked_array() and DataArray.to_unstacked_dataset() for reshaping Datasets of variables with different dimensions (GH1317). This is useful for feeding data from xarray into machine learning models, as described in Stacking different variables together. By Noah Brenowitz.
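
    As a small sketch (variable names and values are made up), variables with different dimensions are stacked along a new feature dimension and can be unstacked back:

```python
import xarray as xr

ds = xr.Dataset({"a": (("x", "y"), [[0, 1], [2, 3]]), "b": ("x", [4, 5])})
# Stack the non-sample dims of every variable into one new "z" dimension:
# "a" contributes two features (its y values), "b" contributes one.
stacked = ds.to_stacked_array("z", sample_dims=["x"])
unstacked = stacked.to_unstacked_dataset("z")
```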

Enhancements

  • Support for renaming Dataset variables and dimensions independently with rename_vars() and rename_dims() (GH3026). By Julia Kent.

  • Add scales, offsets, units and descriptions attributes to DataArray returned by open_rasterio(). (GH3013) By Erle Carrara.

Bug fixes

  • Resolved deprecation warnings from newer versions of matplotlib and dask.

  • Compatibility fixes for the upcoming pandas 0.25 and NumPy 1.17 releases. By Stephan Hoyer.

  • Fix summaries for multiindex coordinates (GH3079). By Jonas Hörsch.

  • Fix HDF5 error that could arise when reading multiple groups from a file at once (GH2954). By Stephan Hoyer.

v0.12.2 (29 June 2019)

New functions/methods

  • Two new functions, combine_nested() and combine_by_coords(), allow for combining datasets along any number of dimensions, instead of the one-dimensional list of datasets supported by concat().

    The new combine_nested will accept the datasets as a nested list-of-lists, and combine by applying a series of concat and merge operations. The new combine_by_coords instead uses the dimension coordinates of datasets to order them.

    open_mfdataset() can use either combine_nested or combine_by_coords to combine datasets along multiple dimensions, by specifying the argument combine='nested' or combine='by_coords'.

    The older function auto_combine has been deprecated, because its functionality has been subsumed by the new functions. To avoid FutureWarnings switch to using combine_nested or combine_by_coords, (or set the combine argument in open_mfdataset). (GH2159) By Tom Nicholas.
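
    A minimal sketch of combine_by_coords (datasets and values are illustrative); note the input order does not matter, since ordering is inferred from the dimension coordinates:

```python
import xarray as xr

ds0 = xr.Dataset({"v": ("x", [1, 2])}, coords={"x": [0, 1]})
ds1 = xr.Dataset({"v": ("x", [3, 4])}, coords={"x": [2, 3]})
# Passed out of order on purpose; the coordinate values fix the ordering.
combined = xr.combine_by_coords([ds1, ds0])
```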

  • DataArray.rolling_exp() and Dataset.rolling_exp() added, similar to pandas’ pd.DataFrame.ewm method. Calling .mean on the resulting object will return an exponentially weighted moving average. By Maximilian Roos.

  • New DataArray.str for string related manipulations, based on pandas.Series.str. By 0x0L.

  • Added strftime method to .dt accessor, making it simpler to hand a datetime DataArray to other code expecting formatted dates and times. (GH2090). strftime() is also now available on CFTimeIndex. By Alan Brammer and Ryan May.
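
    For example (the dates are illustrative):

```python
import pandas as pd
import xarray as xr

times = xr.DataArray(pd.date_range("2000-01-01", periods=2), dims="time")
# Format each datetime as a string, analogous to pandas' Series.dt.strftime.
labels = times.dt.strftime("%Y-%m-%d")
```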

  • GroupBy.quantile is now a method of GroupBy objects (GH3018). By David Huard.

  • Argument and return types are added to most methods on DataArray and Dataset, allowing static type checking both within xarray and external libraries. Type checking with mypy is enabled in CI (though not required yet). By Guido Imperiale and Maximilian Roos.

Enhancements to existing functionality

  • Add keepdims argument for reduce operations (GH2170) By Scott Wales.

  • Enable @ operator for DataArray. This is equivalent to DataArray.dot() By Maximilian Roos.
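
    A one-line sketch with made-up data:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(3.0), dims="x")
# a @ a is equivalent to a.dot(a): contraction over the shared "x" dimension.
result = a @ a
```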

  • Add fill_value argument for reindex, align, and merge operations to enable custom fill values. (GH2876) By Zach Griffith.

  • DataArray.transpose() now accepts a keyword argument transpose_coords which enables transposition of coordinates in the same way as Dataset.transpose(). DataArray.groupby() DataArray.groupby_bins(), and DataArray.resample() now accept a keyword argument restore_coord_dims which keeps the order of the dimensions of multi-dimensional coordinates intact (GH1856). By Peter Hausamann.

  • Clean up Python 2 compatibility in code (GH2950) By Guido Imperiale.

  • Better warning message when supplying invalid objects to xr.merge (GH2948). By Mathias Hauser.

  • Add errors keyword argument to Dataset.drop and Dataset.drop_dims() that allows ignoring errors if a passed label or dimension is not in the dataset (GH2994). By Andrew Ross.
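
    A minimal sketch using drop_dims (the dataset is illustrative):

```python
import xarray as xr

ds = xr.Dataset({"v": ("x", [1, 2])})
# Without errors="ignore" this would raise, since "y" is not a dimension.
unchanged = ds.drop_dims("y", errors="ignore")
```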

Bug fixes

v0.12.1 (4 April 2019)

Enhancements

  • Allow expand_dims method to support inserting/broadcasting dimensions with size > 1. (GH2710) By Martin Pletcher.

Bug fixes

v0.12.0 (15 March 2019)

Highlights include:

  • Removed support for Python 2. This is the first version of xarray that is Python 3 only!

  • New coarsen() and integrate() methods. See Coarsen large arrays and Computation using Coordinates for details.

  • Many improvements to cftime support. See below for details.

Deprecations

  • The compat argument to Dataset and the encoding argument to DataArray are deprecated and will be removed in a future release. (GH1188) By Maximilian Roos.

Other enhancements

  • Added ability to open netcdf4/hdf5 file-like objects with open_dataset. Requires h5netcdf>0.7 and h5py>2.9.0. (GH2781) By Scott Henderson.

  • Add data=False option to to_dict() methods. (GH2656) By Ryan Abernathey
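
    For example (the dataset is illustrative), data=False returns only the schema of each variable:

```python
import xarray as xr

ds = xr.Dataset({"v": ("x", [1, 2])})
# With data=False the dict carries dtype and shape instead of the values.
schema = ds.to_dict(data=False)
```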

  • DataArray.coarsen() and Dataset.coarsen() are newly added. See Coarsen large arrays for details. (GH2525) By Keisuke Fujii.

  • Upsampling an array via interpolation with resample is now dask-compatible, as long as the array is not chunked along the resampling dimension. By Spencer Clark.

  • xarray.testing.assert_equal() and xarray.testing.assert_identical() now provide a more detailed report showing what exactly differs between the two objects (dimensions / coordinates / variables / attributes) (GH1507). By Benoit Bovy.

  • Add tolerance option to resample() methods bfill, pad, nearest. (GH2695) By Hauke Schulz.

  • DataArray.integrate() and Dataset.integrate() are newly added. See Computation using Coordinates for the detail. (GH1332) By Keisuke Fujii.

  • Added drop_dims() (GH1949). By Kevin Squire.

Bug fixes

  • Silenced warnings that appear when using pandas 0.24. By Stephan Hoyer

  • Interpolating via resample now internally specifies bounds_error=False as an argument to scipy.interpolate.interp1d, allowing for interpolation from higher frequencies to lower frequencies. Datapoints outside the bounds of the original time coordinate are now filled with NaN (GH2197). By Spencer Clark.

  • Line plots with the x argument set to a non-dimensional coord now plot the correct data for 1D DataArrays. (GH2725). By Tom Nicholas.

  • Subtracting a scalar cftime.datetime object from a CFTimeIndex now results in a pandas.TimedeltaIndex instead of raising a TypeError (GH2671). By Spencer Clark.

  • backend_kwargs are no longer ignored when using open_dataset with pynio engine (GH2380). By Jonathan Joyce.

  • Fix open_rasterio creating a WKT CRS instead of PROJ.4 with rasterio 1.0.14+ (GH2715). By David Hoese.

  • Masking data arrays with xarray.DataArray.where() now returns an array with the name of the original masked array (GH2748 and GH2457). By Yohai Bar-Sinai.

  • Fixed error when trying to reduce a DataArray using a function which does not require an axis argument. (GH2768) By Tom Nicholas.

  • Concatenating a sequence of DataArray with varying names sets the name of the output array to None, instead of the name of the first input array. If the names are all the same, that name is used, rather than always taking the name of the first DataArray in the list as before. (GH2775). By Tom Nicholas.

  • Per the CF conventions section on calendars, specifying 'standard' as the calendar type in cftime_range() now correctly refers to the 'gregorian' calendar instead of the 'proleptic_gregorian' calendar (GH2761).

v0.11.3 (26 January 2019)

Bug fixes

  • Saving files with times encoded with reference dates with timezones (e.g. ‘2000-01-01T00:00:00-05:00’) no longer raises an error (GH2649). By Spencer Clark.

  • Fixed performance regression with open_mfdataset (GH2662). By Tom Nicholas.

  • Fixed supplying an explicit dimension in the concat_dim argument to open_mfdataset (GH2647). By Ben Root.

v0.11.2 (2 January 2019)

Removes inadvertently introduced setup dependency on pytest-runner (GH2641). Otherwise, this release is exactly equivalent to 0.11.1.

Warning

This is the last xarray release that will support Python 2.7. Future releases will be Python 3 only, but older versions of xarray will always be available for Python 2.7 users. For more details, see:

v0.11.1 (29 December 2018)

This minor release includes a number of enhancements and bug fixes, and two (slightly) breaking changes.

Breaking changes

  • Minimum rasterio version increased from 0.36 to 1.0 (for open_rasterio)

  • Time bounds variables are now also decoded according to CF conventions (GH2565). The previous behavior was to decode them only if they had specific time attributes; now these attributes are copied automatically from the corresponding time coordinate. This might break downstream code that was relying on these variables not being decoded. By Fabien Maussion.

Enhancements

  • Ability to read and write consolidated metadata in zarr stores (GH2558). By Ryan Abernathey.

  • CFTimeIndex uses slicing for string indexing when possible (like pandas.DatetimeIndex), which avoids unnecessary copies. By Stephan Hoyer

  • Enable passing rasterio.io.DatasetReader or rasterio.vrt.WarpedVRT to open_rasterio instead of file path string. Allows for in-memory reprojection, see (GH2588). By Scott Henderson.

  • Like pandas.DatetimeIndex, CFTimeIndex now supports “dayofyear” and “dayofweek” accessors (GH2597). Note this requires a version of cftime greater than 1.0.2. By Spencer Clark.

  • The option 'warn_for_unclosed_files' (False by default) has been added to allow users to enable a warning when files opened by xarray are deallocated but were not explicitly closed. This is mostly useful for debugging; we recommend enabling it in your test suites if you use xarray for IO. By Stephan Hoyer

  • Support Dask HighLevelGraphs by Matthew Rocklin.

  • DataArray.resample() and Dataset.resample() now supports the loffset kwarg just like pandas. By Deepak Cherian

  • Datasets are now guaranteed to have a 'source' encoding, so the source file name is always stored (GH2550). By Tom Nicholas.

  • The apply methods for DatasetGroupBy, DataArrayGroupBy, DatasetResample and DataArrayResample now support passing positional arguments to the applied function as a tuple to the args argument. By Matti Eskelinen.

  • 0d slices of ndarrays are now obtained directly through indexing, rather than extracting and wrapping a scalar, avoiding unnecessary copying. By Daniel Wennberg.

  • Added support for fill_value with DataArray.shift() and Dataset.shift(). By Maximilian Roos.

Bug fixes

  • Ensure files are automatically closed, if possible, when no longer referenced by a Python variable (GH2560). By Stephan Hoyer

  • Fixed possible race conditions when reading/writing to disk in parallel (GH2595). By Stephan Hoyer

  • Fix h5netcdf saving scalars with filters or chunks (GH2563). By Martin Raspaud.

  • Fix parsing of _Unsigned attribute set by OPENDAP servers. (GH2583). By Deepak Cherian

  • Fix failure in time encoding when exporting to netCDF with versions of pandas less than 0.21.1 (GH2623). By Spencer Clark.

  • Fix MultiIndex selection to update label and level (GH2619). By Keisuke Fujii.

v0.11.0 (7 November 2018)

Breaking changes

  • Finished deprecations (changed behavior with this release):

    • Dataset.T has been removed as a shortcut for Dataset.transpose(). Call Dataset.transpose() directly instead.

    • Iterating over a Dataset now includes only data variables, not coordinates. Similarly, calling len and bool on a Dataset now includes only data variables.

    • DataArray.__contains__ (used by Python’s in operator) now checks array data, not coordinates.

    • The old resample syntax from before xarray 0.10, e.g., data.resample('1D', dim='time', how='mean'), is no longer supported and will raise an error in most cases. You need to use the new resample syntax instead, e.g., data.resample(time='1D').mean() or data.resample({'time': '1D'}).mean().

  • New deprecations (behavior will be changed in xarray 0.12):

    • Reduction of DataArray.groupby() and DataArray.resample() without dimension argument will change in the next release. A FutureWarning is now issued. By Keisuke Fujii.

    • The inplace kwarg of a number of DataArray and Dataset methods is being deprecated and will be removed in the next release. By Deepak Cherian.

  • Refactored storage backends:

    • Xarray’s storage backends now automatically open and close files when necessary, rather than requiring opening a file with autoclose=True. A global least-recently-used cache is used to store open files; the default limit of 128 open files should suffice in most cases, but can be adjusted if necessary with xarray.set_options(file_cache_maxsize=...). The autoclose argument to open_dataset and related functions has been deprecated and is now a no-op.

      This change, along with an internal refactor of xarray’s storage backends, should significantly improve performance when reading and writing netCDF files with Dask, especially when working with many files or using Dask Distributed. By Stephan Hoyer

  • Support for non-standard calendars used in climate science:

    • Xarray will now always use cftime.datetime objects, rather than by default trying to coerce them into np.datetime64[ns] objects. A CFTimeIndex will be used for indexing along time coordinates in these cases.

    • A new method to_datetimeindex() has been added to aid in converting from a CFTimeIndex to a pandas.DatetimeIndex for the remaining use-cases where using a CFTimeIndex is still a limitation (e.g. for resample or plotting).

    • Setting the enable_cftimeindex option is now a no-op and emits a FutureWarning.

Enhancements

  • xarray.DataArray.plot.line() can now accept multidimensional coordinate variables as input. hue must be a dimension name in this case. (GH2407) By Deepak Cherian.

  • Added support for Python 3.7. (GH2271). By Joe Hamman.

  • Added support for plotting data with pandas.Interval coordinates, such as those created by groupby_bins() By Maximilian Maahn.

  • Added shift() for shifting the values of a CFTimeIndex by a specified frequency. (GH2244). By Spencer Clark.

  • Added support for using cftime.datetime coordinates with DataArray.differentiate(), Dataset.differentiate(), DataArray.interp(), and Dataset.interp(). By Spencer Clark.

  • There is now a global option to either always keep or always discard dataset and dataarray attrs upon operations. The option is set with xarray.set_options(keep_attrs=True), and the default is to use the old behaviour. By Tom Nicholas.

  • Added a new backend for the GRIB file format based on ECMWF cfgrib python driver and ecCodes C-library. (GH2475) By Alessandro Amici, sponsored by ECMWF.

  • Resample now supports a dictionary mapping from dimension to frequency as its first argument, e.g., data.resample({'time': '1D'}).mean(). This is consistent with other xarray functions that accept either dictionaries or keyword arguments. By Stephan Hoyer.
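
    For example (times and values are illustrative), downsampling twice-daily data to daily means with the dictionary form:

```python
import numpy as np
import pandas as pd
import xarray as xr

da = xr.DataArray(
    np.arange(4.0),
    dims="time",
    coords={"time": pd.date_range("2000-01-01", periods=4, freq="12h")},
)
# Dictionary mapping from dimension name to resampling frequency.
daily = da.resample({"time": "1D"}).mean()
```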

  • The preferred way to access tutorial data is now to load it lazily with xarray.tutorial.open_dataset(). xarray.tutorial.load_dataset() calls Dataset.load() prior to returning (and is now deprecated). This was changed in order to facilitate using tutorial datasets with dask. By Joe Hamman.

  • DataArray can now use xr.set_option(keep_attrs=True) and retain attributes in binary operations, such as (+, -, * ,/). Default behaviour is unchanged (attributes will be discarded). By Michael Blaschek.

Bug fixes

  • FacetGrid now properly uses the cbar_kwargs keyword argument. (GH1504, GH1717) By Deepak Cherian.

  • Addition and subtraction operators used with a CFTimeIndex now preserve the index’s type. (GH2244). By Spencer Clark.

  • We now properly handle arrays of datetime.datetime and datetime.timedelta provided as coordinates. (GH2512) By Deepak Cherian.

  • xarray.DataArray.roll correctly handles multidimensional arrays. (GH2445) By Keisuke Fujii.

  • xarray.plot() now properly accepts a norm argument and does not override the norm’s vmin and vmax. (GH2381) By Deepak Cherian.

  • xarray.DataArray.std() now correctly accepts ddof keyword argument. (GH2240) By Keisuke Fujii.

  • Restore matplotlib’s default of plotting dashed negative contours when a single color is passed to DataArray.contour() e.g. colors='k'. By Deepak Cherian.

  • Fix a bug that caused some indexing operations on arrays opened with open_rasterio to error (GH2454). By Stephan Hoyer.

  • Subtracting one CFTimeIndex from another now returns a pandas.TimedeltaIndex, analogous to the behavior for DatetimeIndexes (GH2484). By Spencer Clark.

  • Adding a TimedeltaIndex to, or subtracting a TimedeltaIndex from a CFTimeIndex is now allowed (GH2484). By Spencer Clark.

  • Avoid use of Dask’s deprecated get= parameter in tests by Matthew Rocklin.

  • An OverflowError is now accurately raised and caught during the encoding process if a reference date is used that is so distant that the dates must be encoded using cftime rather than NumPy (GH2272). By Spencer Clark.

  • Chunked datasets can now roundtrip to Zarr storage continually with to_zarr and open_zarr (GH2300). By Lily Wang.

v0.10.9 (21 September 2018)

This minor release contains a number of backwards compatible enhancements.

Announcements of note:

  • Xarray is now a NumFOCUS fiscally sponsored project! Read the announcement for more details.

  • We have a new Development roadmap that outlines our future development plans.

  • Dataset.apply now properly documents the way func is called. By Matti Eskelinen.

Enhancements

  • DataArray.differentiate() and Dataset.differentiate() are newly added. (GH1332) By Keisuke Fujii.

  • Default colormap for sequential and divergent data can now be set via set_options() (GH2394) By Julius Busecke.

  • min_count option is newly supported in DataArray.sum(), DataArray.prod(), Dataset.sum(), and Dataset.prod(). (GH2230) By Keisuke Fujii.

  • plot() now accepts the kwargs xscale, yscale, xlim, ylim, xticks, yticks just like pandas. Also xincrease=False, yincrease=False now use matplotlib’s axis inverting methods instead of setting limits. By Deepak Cherian. (GH2224)

  • DataArray coordinates and Dataset coordinates and data variables are now displayed as a b … y z rather than a b c d …. (GH1186) By Seth P.

  • A new CFTimeIndex-enabled cftime_range() function for use in generating dates from standard or non-standard calendars. By Spencer Clark.

  • When interpolating over a datetime64 axis, you can now provide a datetime string instead of a datetime64 object. E.g. da.interp(time='1991-02-01') (GH2284) By Deepak Cherian.
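
As a minimal sketch with made-up monthly data (note that interp requires scipy to be installed), the string form can stand in for an explicit datetime64 object:

```python
import pandas as pd
import xarray as xr

# Illustrative data: values at the start of three consecutive months
da = xr.DataArray(
    [0.0, 10.0, 20.0],
    dims="time",
    coords={"time": pd.date_range("1991-01-01", periods=3, freq="MS")},
)

# A datetime string is now accepted in place of a datetime64 object
result = da.interp(time="1991-02-01")
```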

  • A clear error message is now displayed if a set or dict is passed in place of an array (GH2331) By Maximilian Roos.

  • Applying unstack to a large DataArray or Dataset is now much faster if the MultiIndex has not been modified after stacking the indices. (GH1560) By Maximilian Maahn.

  • You can now control whether or not coordinates are rolled along with data when using the roll method. The current behavior, rolling coordinates by default, raises a deprecation warning unless the keyword argument is set explicitly. (GH1875) By Andrew Huang.
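
A small sketch with toy data, assuming the keyword argument is roll_coords; setting it explicitly avoids the deprecation warning:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]})

# Roll the data but leave the coordinate in place
shifted = da.roll(x=1, roll_coords=False)
```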

  • You can now call unstack without arguments to unstack every MultiIndex in a DataArray or Dataset. By Julia Signell.
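
A quick sketch of the no-argument form, using toy data:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(
    np.arange(6).reshape(2, 3),
    dims=("x", "y"),
    coords={"x": [0, 1], "y": ["a", "b", "c"]},
)
stacked = da.stack(z=("x", "y"))

# With no arguments, every MultiIndex dimension is unstacked
roundtrip = stacked.unstack()
```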

  • Added the ability to pass a data kwarg to copy to create a new object with the same metadata as the original object but using new values. By Julia Signell.
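
For example (values here are illustrative), the data kwarg swaps in new values while keeping dims, coords, and attrs:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", attrs={"units": "m"})

# New values, same metadata
da_new = da.copy(data=[10, 20, 30])
```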

Bug fixes

  • xarray.plot.imshow() correctly uses the origin argument. (GH2379) By Deepak Cherian.

  • Fixed DataArray.to_iris() failure while creating DimCoord by falling back to creating AuxCoord. Fixed dependency on var_name attribute being set. (GH2201) By Thomas Voigt.

  • Fixed a bug in zarr backend which prevented use with datasets with invalid chunk size encoding after reading from an existing store (GH2278). By Joe Hamman.

  • Tests can be run in parallel with pytest-xdist By Tony Tung.

  • Follow-up to the renamings in dask: dask.ghost is now dask.overlap. By Keisuke Fujii.

  • Now raises a ValueError when there is a conflict between dimension names and level names of MultiIndex. (GH2299) By Keisuke Fujii.

  • Now apply_ufunc() raises a ValueError when the size of input_core_dims is inconsistent with the number of arguments. (GH2341) By Keisuke Fujii.

  • Fixed Dataset.filter_by_attrs() behavior not matching netCDF4.Dataset.get_variables_by_attributes(). When more than one key=value is passed into Dataset.filter_by_attrs() it will now return a Dataset with variables which pass all the filters. (GH2315) By Andrew Barna.

v0.10.8 (18 July 2018)

Breaking changes

  • Xarray no longer supports python 3.4. Additionally, the minimum supported versions of the following dependencies has been updated and/or clarified:

    • pandas: 0.18 -> 0.19

    • NumPy: 1.11 -> 1.12

    • Dask: 0.9 -> 0.16

    • Matplotlib: unspecified -> 1.5

    (GH2204). By Joe Hamman.

Enhancements

  • interp_like() methods are newly added to DataArray and Dataset. (GH2218) By Keisuke Fujii.

  • Added support for curvilinear and unstructured generic grids to to_cdms2() and from_cdms2() (GH2262). By Stephane Raynaud.

Bug fixes

  • Fixed a bug in zarr backend which prevented use with datasets with incomplete chunks in multiple dimensions (GH2225). By Joe Hamman.

  • Fixed a bug in to_netcdf() which prevented writing datasets when the arrays had different chunk sizes (GH2254). By Mike Neish.

  • Fixed masking during the conversion to cdms2 objects by to_cdms2() (GH2262). By Stephane Raynaud.

  • Fixed a bug in 2D plots which incorrectly raised an error when 2D coordinates weren’t monotonic (GH2250). By Fabien Maussion.

  • Fixed warning raised in to_netcdf() due to deprecation of effective_get in dask (GH2238). By Joe Hamman.

v0.10.7 (7 June 2018)

Enhancements

Bug fixes

  • Fixed a bug in rasterio backend which prevented use with distributed. The rasterio backend now returns pickleable objects (GH2021). By Joe Hamman.

v0.10.6 (31 May 2018)

This minor release includes a number of bug-fixes and backwards compatible enhancements.

Enhancements

  • New PseudoNetCDF backend for many Atmospheric data formats including GEOS-Chem, CAMx, NOAA arlpacked bit and many others. See Formats supported by PseudoNetCDF for more details. By Barron Henderson.

  • The Dataset constructor now aligns DataArray arguments in data_vars to indexes set explicitly in coords, where previously an error would be raised. (GH674) By Maximilian Roos.

  • sel(), isel() & reindex(), (and their Dataset counterparts) now support supplying a dict as a first argument, as an alternative to the existing approach of supplying kwargs. This allows for more robust behavior of dimension names which conflict with other keyword names, or are not strings. By Maximilian Roos.
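
A minimal sketch of the dict form (toy data); it is useful when a dimension name would clash with a keyword or is not a string:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]})

# Dict as first argument, equivalent to da.sel(x=20) / da.isel(x=0)
by_label = da.sel({"x": 20})
by_position = da.isel({"x": 0})
```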

  • rename() now supports supplying **kwargs, as an alternative to the existing approach of supplying a dict as the first argument. By Maximilian Roos.

  • cumsum() and cumprod() now support aggregation over multiple dimensions at the same time. This is the default behavior when dimensions are not specified (previously this raised an error). By Stephan Hoyer.
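
A short sketch of the default behavior on toy data: with no dimension given, the cumulative sum runs over every dimension in turn.

```python
import xarray as xr

da = xr.DataArray([[1, 2], [3, 4]], dims=("x", "y"))

# Equivalent to cumulating along x and then along y (order does not matter here)
total = da.cumsum()
```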

  • DataArray.dot() and dot() are partly supported with older dask<0.17.4. (related to GH2203) By Keisuke Fujii.

  • Xarray now uses Versioneer to manage its version strings. (GH1300). By Joe Hamman.

Bug fixes

  • Fixed a regression in 0.10.4, where explicitly specifying dtype='S1' or dtype=str in encoding with to_netcdf() raised an error (GH2149). By Stephan Hoyer.

  • apply_ufunc() now directly validates output variables (GH1931). By Stephan Hoyer.

  • Fixed a bug where to_netcdf(..., unlimited_dims='bar') yielded NetCDF files with spurious 0-length dimensions (i.e. b, a, and r) (GH2134). By Joe Hamman.

  • Removed spurious warnings with Dataset.update(Dataset) (GH2161) and array.equals(array) when array contains NaT (GH2162). By Stephan Hoyer.

  • Aggregations with Dataset.reduce() (including mean, sum, etc) no longer drop unrelated coordinates (GH1470). Also fixed a bug where non-scalar data-variables that did not include the aggregation dimension were improperly skipped. By Stephan Hoyer.

  • Fix stack() with non-unique coordinates on pandas 0.23 (GH2160). By Stephan Hoyer.

  • Selecting data indexed by a length-1 CFTimeIndex with a slice of strings now behaves as it does when using a length-1 DatetimeIndex (i.e. it no longer falsely returns an empty array when the slice includes the value in the index) (GH2165). By Spencer Clark.

  • Fix DataArray.groupby().reduce() mutating coordinates on the input array when grouping over dimension coordinates with duplicated entries (GH2153). By Stephan Hoyer.

  • Fix a bug where Dataset.to_netcdf() could not create a group with engine="h5netcdf" (GH2177). By Stephan Hoyer.

v0.10.4 (16 May 2018)

This minor release includes a number of bug-fixes and backwards compatible enhancements. A highlight is CFTimeIndex, which offers support for non-standard calendars used in climate modeling.

Documentation

Enhancements

  • Add an option for using a CFTimeIndex for indexing times with non-standard calendars and/or outside the Timestamp-valid range; this index enables a subset of the functionality of a standard pandas.DatetimeIndex. See Non-standard calendars and dates outside the Timestamp-valid range for full details. (GH789, GH1084, GH1252) By Spencer Clark with help from Stephan Hoyer.

  • Allow for serialization of cftime.datetime objects (GH789, GH1084, GH2008, GH1252) using the standalone cftime library. By Spencer Clark.

  • Support writing lists of strings as netCDF attributes (GH2044). By Dan Nowacki.

  • to_netcdf() with engine='h5netcdf' now accepts h5py encoding settings compression and compression_opts, along with the NetCDF4-Python style settings gzip=True and complevel. This allows using any compression plugin installed in hdf5, e.g. LZF (GH1536). By Guido Imperiale.

  • dot() on dask-backed data will now call dask.array.einsum(). This greatly boosts speed and allows chunking on the core dims. The function now requires dask >= 0.17.3 to work on dask-backed data (GH2074). By Guido Imperiale.

  • plot.line() learned new kwargs: xincrease, yincrease that change the direction of the respective axes. By Deepak Cherian.

  • Added the parallel option to open_mfdataset(). This option uses dask.delayed to parallelize the open and preprocessing steps within open_mfdataset. This is expected to provide performance improvements when opening many files, particularly when used in conjunction with dask’s multiprocessing or distributed schedulers (GH1981). By Joe Hamman.

  • New compute option in to_netcdf(), to_zarr(), and save_mfdataset() to allow for the lazy computation of netCDF and zarr stores. This feature is currently only supported by the netCDF4 and zarr backends. (GH1784). By Joe Hamman.

Bug fixes

  • ValueError is raised when coordinates with the wrong size are assigned to a DataArray. (GH2112) By Keisuke Fujii.

  • Fixed a bug in rolling() with bottleneck. Also, fixed a bug in rolling an integer dask array. (GH2113) By Keisuke Fujii.

  • Fixed a bug where keep_attrs=True flag was neglected if apply_ufunc() was used with Variable. (GH2114) By Keisuke Fujii.

  • When assigning a DataArray to Dataset, any conflicted non-dimensional coordinates of the DataArray are now dropped. (GH2068) By Keisuke Fujii.

  • Better error handling in open_mfdataset (GH2077). By Stephan Hoyer.

  • plot.line() does not call autofmt_xdate() anymore. Instead it changes the rotation and horizontal alignment of labels without removing the x-axes of any other subplots in the figure (if any). By Deepak Cherian.

  • Colorbar limits are now determined by excluding ±Infs too. By Deepak Cherian and Joe Hamman.

  • Fixed to_iris to maintain lazy dask array after conversion (GH2046). By Alex Hilson and Stephan Hoyer.

v0.10.3 (13 April 2018)

This minor release includes a number of bug-fixes and backwards compatible enhancements.

Enhancements

  • isin() methods for DataArray and Dataset, which test each value in the array for whether it is contained in the supplied list, returning a bool array. See Selecting values with isin for full details. Similar to the np.isin function. By Maximilian Roos.
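
A minimal sketch on toy data:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3, 4], dims="x")

# Elementwise membership test, returning a boolean DataArray
mask = da.isin([2, 4])
```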

  • Some speed improvement to construct DataArrayRolling object (GH1993) By Keisuke Fujii.

  • Handle variables with different values for missing_value and _FillValue by masking values for both attributes; previously this resulted in a ValueError. (GH2016) By Ryan May.

Bug fixes

  • Fixed decode_cf function to operate lazily on dask arrays (GH1372). By Ryan Abernathey.

  • Fixed labeled indexing with slice bounds given by xarray objects with datetime64 or timedelta64 dtypes (GH1240). By Stephan Hoyer.

  • Attempting to convert an xarray.Dataset into a numpy array now raises an informative error message. By Stephan Hoyer.

  • Fixed a bug in decode_cf_datetime where int32 arrays weren’t parsed correctly (GH2002). By Fabien Maussion.

  • When calling xr.auto_combine() or xr.open_mfdataset() with a concat_dim, the resulting dataset will have that one-element dimension (it was silently dropped, previously) (GH1988). By Ben Root.

v0.10.2 (13 March 2018)

This minor release includes a number of bug-fixes and enhancements, along with one possibly backwards incompatible change.

Backwards incompatible changes

  • The addition of __array_ufunc__ for xarray objects (see below) means that NumPy ufunc methods (e.g., np.add.reduce) that previously worked on xarray.DataArray objects by converting them into NumPy arrays will now raise NotImplementedError instead. In all cases, the work-around is simple: convert your objects explicitly into NumPy arrays before calling the ufunc (e.g., with .values).

Enhancements

  • Added dot(), equivalent to numpy.einsum(). Also, dot() now supports dims option, which specifies the dimensions to sum over. (GH1951) By Keisuke Fujii.
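
A minimal sketch of the method form on toy data, contracting over the shared dimension:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"))
b = xr.DataArray(np.arange(3), dims="y")

# Sums over the shared dimension "y", like an einsum "xy,y->x"
out = a.dot(b)
```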

  • Support for writing xarray datasets to netCDF files (netcdf4 backend only) when using the dask.distributed scheduler (GH1464). By Joe Hamman.

  • Support lazy vectorized-indexing. After this change, flexible indexing such as orthogonal/vectorized indexing, becomes possible for all the backend arrays. Also, lazy transpose is now also supported. (GH1897) By Keisuke Fujii.

  • Implemented NumPy’s __array_ufunc__ protocol for all xarray objects (GH1617). This enables using NumPy ufuncs directly on xarray.Dataset objects with recent versions of NumPy (v1.13 and newer):

    In [1]: ds = xr.Dataset({"a": 1})
    
    In [2]: np.sin(ds)
    Out[2]: 
    <xarray.Dataset>
    Dimensions:  ()
    Data variables:
        a        float64 0.8415
    

    This obviates the need for the xarray.ufuncs module, which will be deprecated in the future when xarray drops support for older versions of NumPy. By Stephan Hoyer.

  • Improve rolling() logic. DataArrayRolling() object now supports construct() method that returns a view of the DataArray / Dataset object with the rolling-window dimension added to the last axis. This enables more flexible operation, such as strided rolling, windowed rolling, ND-rolling, short-time FFT and convolution. (GH1831, GH1142, GH819) By Keisuke Fujii.
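
A small sketch of construct() on toy data; the rolling window becomes a new trailing dimension (the name "window" is arbitrary):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(5.0), dims="x")

# View of the data with the rolling window added as the last axis;
# incomplete windows are padded with NaN
windows = da.rolling(x=3).construct("window")
```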

  • line() learned to make plots with data on x-axis if so specified. (GH575) By Deepak Cherian.

Bug fixes

v0.10.1 (25 February 2018)

The minor release includes a number of bug-fixes and backwards compatible enhancements.

Documentation

Enhancements

New functions and methods:

Plotting enhancements:

  • xarray.plot.imshow() now handles RGB and RGBA images. Saturation can be adjusted with vmin and vmax, or with robust=True. By Zac Hatfield-Dodds.

  • contourf() learned to contour 2D variables that have both a 1D coordinate (e.g. time) and a 2D coordinate (e.g. depth as a function of time) (GH1737). By Deepak Cherian.

  • plot() rotates x-axis ticks if x-axis is time. By Deepak Cherian.

  • line() can draw multiple lines if provided with a 2D variable. By Deepak Cherian.

Other enhancements:

  • Reduce methods such as DataArray.sum() now handle object-type arrays.

    In [3]: da = xr.DataArray(np.array([True, False, np.nan], dtype=object), dims="x")
    
    In [4]: da.sum()
    Out[4]: 
    <xarray.DataArray ()>
    array(1)
    

    (GH1866) By Keisuke Fujii.

  • Reduce methods such as DataArray.sum() now accepts dtype arguments. (GH1838) By Keisuke Fujii.
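
For instance, on toy data, a wider accumulator dtype can be requested to avoid overflow:

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1, 2, 3], dtype="int8"), dims="x")

# Accumulate in int64 rather than the array's own int8
total = da.sum(dtype="int64")
```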

  • Added nodatavals attribute to DataArray when using open_rasterio(). (GH1736). By Alan Snow.

  • Use pandas.Grouper class in xarray resample methods rather than the deprecated pandas.TimeGrouper class (GH1766). By Joe Hamman.

  • Experimental support for parsing ENVI metadata to coordinates and attributes in xarray.open_rasterio(). By Matti Eskelinen.

  • Reduce memory usage when decoding a variable with a scale_factor, by converting 8-bit and 16-bit integers to float32 instead of float64 (PR1840), and keeping float16 and float32 as float32 (GH1842). Correspondingly, encoded variables may also be saved with a smaller dtype. By Zac Hatfield-Dodds.

  • Speed of reindexing/alignment with dask array is orders of magnitude faster when inserting missing values (GH1847). By Stephan Hoyer.

  • Fix axis keyword ignored when applying np.squeeze to DataArray (GH1487). By Florian Pinault.

  • netcdf4-python has moved its time handling into a standalone netcdftime package. As such, xarray now considers netcdftime an optional dependency. One benefit of this change is that it allows for encoding/decoding of datetimes with non-standard calendars without the netcdf4-python dependency (GH1084). By Joe Hamman.

New functions/methods

  • New rank() on arrays and datasets. Requires bottleneck (GH1731). By 0x0L.

Bug fixes

  • Rolling aggregation with center=True option now gives the same result with pandas including the last element (GH1046). By Keisuke Fujii.

  • Support indexing with a 0d-np.ndarray (GH1921). By Keisuke Fujii.

  • Added warning in api.py of a netCDF4 bug that occurs when the filepath has 88 characters (GH1745). By Liam Brannigan.

  • Fixed encoding of multi-dimensional coordinates in to_netcdf() (GH1763). By Mike Neish.

  • Fixed chunking with non-file-based rasterio datasets (GH1816) and refactored rasterio test suite. By Ryan Abernathey

  • Bug fix in open_dataset(engine='pydap') (GH1775). By Keisuke Fujii.

  • Bug fix in vectorized assignment (GH1743, GH1744). Now item assignment to DataArray.__setitem__() checks coordinates of target, destination and keys. If there are any conflict among these coordinates, IndexError will be raised. By Keisuke Fujii.

  • Properly point DataArray.__dask_scheduler__ to dask.threaded.get. By Matthew Rocklin.

  • Bug fixes in DataArray.plot.imshow(): all-NaN arrays and arrays with size one in some dimension can now be plotted, which is good for exploring satellite imagery (GH1780). By Zac Hatfield-Dodds.

  • Fixed UnboundLocalError when opening netCDF file (GH1781). By Stephan Hoyer.

  • The variables, attrs, and dimensions properties have been deprecated as part of a bug fix addressing an issue where backends were unintentionally loading the datastore's data and attributes repeatedly during writes (GH1798). By Joe Hamman.

  • Compatibility fixes to plotting module for NumPy 1.14 and pandas 0.22 (GH1813). By Joe Hamman.

  • Bug fix in encoding coordinates with {'_FillValue': None} in netCDF metadata (GH1865). By Chris Roth.

  • Fix indexing with lists for arrays loaded from netCDF files with engine='h5netcdf' (GH1864). By Stephan Hoyer.

  • Corrected a bug with incorrect coordinates for non-georeferenced geotiff files (GH1686). Internally, we now use the rasterio coordinate transform tool instead of doing the computations ourselves. A parse_coordinates kwarg has been added to open_rasterio() (set to True per default). By Fabien Maussion.

  • The colors of discrete colormaps are now the same regardless if seaborn is installed or not (GH1896). By Fabien Maussion.

  • Fixed dtype promotion rules in where() and concat() to match pandas (GH1847). A combination of strings/numbers or unicode/bytes now promote to object dtype, instead of strings or unicode. By Stephan Hoyer.

  • Fixed bug where isnull() was loading data stored as dask arrays (GH1937). By Joe Hamman.

v0.10.0 (20 November 2017)

This is a major release that includes bug fixes, new features and a few backwards incompatible changes. Highlights include:

  • Indexing now supports broadcasting over dimensions, similar to NumPy’s vectorized indexing (but better!).

  • resample() has a new groupby-like API like pandas.

  • apply_ufunc() facilitates wrapping and parallelizing functions written for NumPy arrays.

  • Performance improvements, particularly for dask and open_mfdataset().

Breaking changes

  • xarray now supports a form of vectorized indexing with broadcasting, where the result of indexing depends on dimensions of indexers, e.g., array.sel(x=ind) with ind.dims == ('y',). Alignment between coordinates on indexed and indexing objects is also now enforced. Due to these changes, existing uses of xarray objects to index other xarray objects will break in some cases.

    The new indexing API is much more powerful, supporting outer, diagonal and vectorized indexing in a single interface. The isel_points and sel_points methods are deprecated, since they are now redundant with the isel / sel methods. See Vectorized Indexing for the details (GH1444, GH1436). By Keisuke Fujii and Stephan Hoyer.

  • A new resampling interface to match pandas’ groupby-like API was added to Dataset.resample() and DataArray.resample() (GH1272). Timeseries resampling is fully supported for data with arbitrary dimensions as is both downsampling and upsampling (including linear, quadratic, cubic, and spline interpolation).

    Old syntax:

    In [5]: ds.resample("24H", dim="time", how="max")
    Out[5]: 
    <xarray.Dataset>
    [...]
    

    New syntax:

    In [6]: ds.resample(time="24H").max()
    Out[6]: 
    <xarray.Dataset>
    [...]
    

    Note that both versions are currently supported, but using the old syntax will produce a warning encouraging users to adopt the new syntax. By Daniel Rothenberg.

  • Calling repr() or printing xarray objects at the command line or in a Jupyter Notebook will no longer automatically compute dask variables or load data on arrays lazily loaded from disk (GH1522). By Guido Imperiale.

  • Supplying coords as a dictionary to the DataArray constructor without also supplying an explicit dims argument is no longer supported. This behavior was deprecated in version 0.9 but will now raise an error (GH727).

  • Several existing features have been deprecated and will change to new behavior in xarray v0.11. If you use any of them with xarray v0.10, you should see a FutureWarning that describes how to update your code:

    • Dataset.T has been deprecated as an alias for Dataset.transpose() (GH1232). In the next major version of xarray, it will provide shortcut lookup for variables or attributes with name 'T'.

    • DataArray.__contains__ (e.g., key in data_array) currently checks for membership in DataArray.coords. In the next major version of xarray, it will check membership in the array data found in DataArray.values instead (GH1267).

    • Direct iteration over and counting a Dataset (e.g., [k for k in ds], ds.keys(), ds.values(), len(ds) and if ds) currently includes all variables, both data and coordinates. For improved usability and consistency with pandas, in the next major version of xarray these will change to only include data variables (GH884). Use ds.variables, ds.data_vars or ds.coords as alternatives.

  • Changes to minimum versions of dependencies:

    • Old numpy < 1.11 and pandas < 0.18 are no longer supported (GH1512). By Keisuke Fujii.

    • The minimum supported version bottleneck has increased to 1.1 (GH1279). By Joe Hamman.

Enhancements

New functions/methods

  • New helper function apply_ufunc() for wrapping functions written to work on NumPy arrays to support labels on xarray objects (GH770). apply_ufunc also support automatic parallelization for many functions with dask. See Wrapping custom computation and Automatic parallelization with apply_ufunc and map_blocks for details. By Stephan Hoyer.

  • Added new method Dataset.to_dask_dataframe(), which converts a dataset into a dask dataframe. This allows lazy loading of data from a dataset containing dask arrays (GH1462). By James Munroe.

  • New function where() for conditionally switching between values in xarray objects, like numpy.where():

    In [7]: import xarray as xr
    
    In [8]: arr = xr.DataArray([[1, 2, 3], [4, 5, 6]], dims=("x", "y"))
    
    In [9]: xr.where(arr % 2, "even", "odd")
    Out[9]: 
    <xarray.DataArray (x: 2, y: 3)>
    array([['even', 'odd', 'even'],
           ['odd', 'even', 'odd']],
          dtype='<U4')
    Dimensions without coordinates: x, y
    

    Equivalently, the where() method also now supports the other argument, for filling with a value other than NaN (GH576). By Stephan Hoyer.
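
A minimal sketch of the other argument on toy data:

```python
import xarray as xr

arr = xr.DataArray([1.0, 2.0, 3.0], dims="x")

# Where the condition is False, fill with 0.0 instead of NaN
out = arr.where(arr > 1, other=0.0)
```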

  • Added show_versions() function to aid in debugging (GH1485). By Joe Hamman.

Performance improvements

  • concat() was computing variables that aren’t in memory (e.g. dask-based) multiple times; open_mfdataset() was loading them multiple times from disk. Now, both functions will instead load them at most once and, if they do, store them in memory in the concatenated array/dataset (GH1521). By Guido Imperiale.

  • Speed-up (x 100) of xarray.conventions.decode_cf_datetime. By Christian Chwala.

IO related improvements

  • Unicode strings (str on Python 3) are now round-tripped successfully even when written as character arrays (e.g., as netCDF3 files or when using engine='scipy') (GH1638). This is controlled by the _Encoding attribute convention, which is also understood directly by the netCDF4-Python interface. See String encoding for full details. By Stephan Hoyer.

  • Support for data_vars and coords keywords from concat() added to open_mfdataset() (GH438). Using these keyword arguments can significantly reduce memory usage and increase speed. By Oleksandr Huziy.

  • Support for pathlib.Path objects added to open_dataset(), open_mfdataset(), xarray.to_netcdf, and save_mfdataset() (GH799):

    In [10]: from pathlib import Path  # In Python 2, use pathlib2!
    
    In [11]: data_dir = Path("data/")
    
    In [12]: one_file = data_dir / "dta_for_month_01.nc"
    
    In [13]: xr.open_dataset(one_file)
    Out[13]: 
    <xarray.Dataset>
    [...]
    

    By Willi Rath.

  • You can now explicitly disable any default _FillValue (NaN for floating point values) by passing the encoding {'_FillValue': None} (GH1598). By Stephan Hoyer.

  • More attributes available in attrs dictionary when raster files are opened with open_rasterio(). By Greg Brener.

  • Support for NetCDF files using an _Unsigned attribute to indicate that a signed integer data type should be interpreted as unsigned bytes (GH1444). By Eric Bruning.

  • Support using an existing, opened netCDF4 Dataset with NetCDF4DataStore. This permits creating a Dataset from a netCDF4 Dataset that has been opened using other means (GH1459). By Ryan May.

  • Changed PydapDataStore to take a Pydap dataset. This permits opening Opendap datasets that require authentication, by instantiating a Pydap dataset with a session object. Also added xarray.backends.PydapDataStore.open() which takes a url and session object (GH1068). By Philip Graae.

  • Support reading and writing unlimited dimensions with h5netcdf (GH1636). By Joe Hamman.

Other improvements

  • Added _ipython_key_completions_ to xarray objects, to enable autocompletion for dictionary-like access in IPython, e.g., ds['tem + tab -> ds['temperature'] (GH1628). By Keisuke Fujii.

  • Support passing keyword arguments to load, compute, and persist methods. Any keyword arguments supplied to these methods are passed on to the corresponding dask function (GH1523). By Joe Hamman.

  • Encoding attributes are now preserved when xarray objects are concatenated. The encoding is copied from the first object (GH1297). By Joe Hamman and Gerrit Holl.

  • Support applying rolling window operations using bottleneck’s moving window functions on data stored as dask arrays (GH1279). By Joe Hamman.

  • Experimental support for the Dask collection interface (GH1674). By Matthew Rocklin.

Bug fixes

  • Suppress RuntimeWarning issued by numpy for “invalid value comparisons” (e.g. NaN). Xarray now behaves similarly to pandas in its treatment of binary and unary operations on objects with NaNs (GH1657). By Joe Hamman.

  • Unsigned int support for reduce methods with skipna=True (GH1562). By Keisuke Fujii.

  • Fixes to ensure xarray works properly with pandas 0.21:

    • Fix isnull() method (GH1549).

    • to_series() and to_dataframe() should not return a pandas.MultiIndex for 1D data (GH1548).

    • Fix plotting with datetime64 axis labels (GH1661).

    By Stephan Hoyer.

  • open_rasterio() method now shifts the rasterio coordinates so that they are centered in each pixel (GH1468). By Greg Brener.

  • rename() method now doesn’t throw errors if some Variable is renamed to the same name as another Variable as long as that other Variable is also renamed (GH1477). This method now does throw when two Variables would end up with the same name after the rename (since one of them would get overwritten in this case). By Prakhar Goel.

  • Fix xarray.testing.assert_allclose() to actually use atol and rtol arguments when called on DataArray objects (GH1488). By Stephan Hoyer.

  • xarray quantile methods now properly raise a TypeError when applied to objects with data stored as dask arrays (GH1529). By Joe Hamman.

  • Fix positional indexing to allow the use of unsigned integers (GH1405). By Joe Hamman and Gerrit Holl.

  • Creating a Dataset now raises MergeError if a coordinate shares a name with a dimension but is comprised of arbitrary dimensions (GH1120). By Joe Hamman.

  • open_rasterio() method now skips rasterio’s crs attribute if its value is None (GH1520). By Leevi Annala.

  • Fix xarray.DataArray.to_netcdf() to return bytes when no path is provided (GH1410). By Joe Hamman.

  • Fix xarray.save_mfdataset() to properly raise an informative error when objects other than Dataset are provided (GH1555). By Joe Hamman.

  • xarray.Dataset.copy() would not preserve the encoding property (GH1586). By Guido Imperiale.

  • xarray.concat() would eagerly load dask variables into memory if the first argument was a numpy variable (GH1588). By Guido Imperiale.

  • Fix bug in to_netcdf() when writing in append mode (GH1215). By Joe Hamman.

  • Fix netCDF4 backend to properly roundtrip the shuffle encoding option (GH1606). By Joe Hamman.

  • Fix bug when using pytest class decorators to skip certain unittests. The previous behavior unintentionally caused additional tests to be skipped (GH1531). By Joe Hamman.

  • Fix pynio backend for upcoming release of pynio with Python 3 support (GH1611). By Ben Hillman.

  • Fix seaborn import warning for Seaborn versions 0.8 and newer when the apionly module was deprecated. (GH1633). By Joe Hamman.

  • Fix fragile MultiIndex checking (GH1833). By Florian Pinault.

  • Fix rasterio backend for Rasterio versions 1.0alpha10 and newer. (GH1641). By Chris Holden.

Bug fixes after rc1

  • Suppress warning in IPython autocompletion, related to the deprecation of .T attributes (GH1675). By Keisuke Fujii.

  • Fix a bug in lazily-indexing netCDF array. (GH1688) By Keisuke Fujii.

  • (Internal bug) MemoryCachedArray now supports the orthogonal indexing. Also made some internal cleanups around array wrappers (GH1429). By Keisuke Fujii.

  • (Internal bug) MemoryCachedArray now always wraps np.ndarray by NumpyIndexingAdapter. (GH1694) By Keisuke Fujii.

  • Fix importing xarray when running Python with -OO (GH1706). By Stephan Hoyer.

  • Saving a netCDF file with a coordinate whose name contains spaces now raises an appropriate warning (GH1689). By Stephan Hoyer.

  • Fix two bugs that were preventing dask arrays from being specified as coordinates in the DataArray constructor (GH1684). By Joe Hamman.

  • Fixed apply_ufunc with dask='parallelized' for scalar arguments (GH1697). By Stephan Hoyer.

  • Fix “Chunksize cannot exceed dimension size” error when writing netCDF4 files loaded from disk (GH1225). By Stephan Hoyer.

  • Validate the shape of coordinates with names matching dimensions in the DataArray constructor (GH1709). By Stephan Hoyer.

  • Raise NotImplementedError when attempting to save a MultiIndex to a netCDF file (GH1547). By Stephan Hoyer.

  • Remove netCDF dependency from rasterio backend tests. By Matti Eskelinen.

Bug fixes after rc2

  • Fixed unexpected behavior in Dataset.set_index() and DataArray.set_index() introduced by pandas 0.21.0. Setting a new index with a single variable resulted in 1-level pandas.MultiIndex instead of a simple pandas.Index (GH1722). By Benoit Bovy.

  • Fixed unexpected memory loading of backend arrays after print. (GH1720). By Keisuke Fujii.

v0.9.6 (8 June 2017)

This release includes a number of backwards compatible enhancements and bug fixes.

Enhancements

Bug fixes

  • Fix error from repeated indexing of datasets loaded from disk (GH1374). By Stephan Hoyer.

  • Fix a bug where .isel_points wrongly assigned unselected coordinates to data_vars. By Keisuke Fujii.

  • Tutorial datasets are now checked against a reference MD5 sum to confirm successful download (GH1392). By Matthew Gidden.

  • DataArray.chunk() now accepts dask specific kwargs like Dataset.chunk() does. By Fabien Maussion.

  • Support for engine='pydap' with recent releases of Pydap (3.2.2+), including on Python 3 (GH1174).

Documentation

Testing

  • Fix test suite failure caused by changes to pandas.cut function (GH1386). By Ryan Abernathey.

  • Enhanced tests suite by use of @network decorator, which is controlled via --run-network-tests command line argument to py.test (GH1393). By Matthew Gidden.

v0.9.5 (17 April, 2017)

Remove an inadvertently introduced print statement.

v0.9.3 (16 April, 2017)

This minor release includes bug-fixes and backwards compatible enhancements.

Enhancements

  • New persist() method to Datasets and DataArrays to enable persisting data in distributed memory when using Dask (GH1344). By Matthew Rocklin.

  • New expand_dims() method for DataArray and Dataset (GH1326). By Keisuke Fujii.

Bug fixes

  • Fix .where() with drop=True when arguments do not have indexes (GH1350). This bug, introduced in v0.9, resulted in xarray producing incorrect results in some cases. By Stephan Hoyer.
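
    As a minimal sketch of the fixed behavior (current xarray syntax; the values here are illustrative):

```python
import xarray as xr

# where(cond, drop=True) now works even when "x" has no coordinate labels
da = xr.DataArray([1, 2, 3, 4], dims="x")
result = da.where(da > 2, drop=True)  # trims fully masked elements
```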

  • Fixed writing to file-like objects with to_netcdf() (GH1320). By Stephan Hoyer.

  • Fixed explicitly setting engine='scipy' with to_netcdf when not providing a path (GH1321). By Stephan Hoyer.

  • Fixed open_dataarray not properly passing its parameters to open_dataset (GH1359). By Stephan Hoyer.

  • Ensure the test suite works when run from an installed version of xarray (GH1336). Use @pytest.mark.slow instead of a custom flag to mark slow tests. By Stephan Hoyer.

v0.9.2 (2 April 2017)

This minor release includes bug fixes and backwards compatible enhancements.

Enhancements

  • .rolling() on Dataset is now supported (GH859). By Keisuke Fujii.

  • When bottleneck version 1.1 or later is installed, use bottleneck for rolling var, argmin, argmax, and rank computations. Also, rolling median now accepts a min_periods argument (GH1276). By Joe Hamman.

  • When .plot() is called on a 2D DataArray and only one dimension is specified with x= or y=, the other dimension is now guessed (GH1291). By Vincent Noel.

  • Added new method assign_attrs() to DataArray and Dataset, a chained-method compatible implementation of the dict.update method on attrs (GH1281). By Henry S. Harrison.

  • Added new autoclose=True argument to open_mfdataset() to explicitly close opened files when not in use to prevent occurrence of an OS Error related to too many open files (GH1198). Note, the default is autoclose=False, which is consistent with previous xarray behavior. By Phillip J. Wolfram.

  • The repr() of Dataset and DataArray attributes uses a similar format to coordinates and variables, with vertically aligned entries truncated to fit on a single line (GH1319). Hopefully this will stop people writing data.attrs = {} and discarding metadata in notebooks for the sake of cleaner output. The full metadata is still available as data.attrs. By Zac Hatfield-Dodds.

  • Enhanced the test suite by use of the @slow and @flaky decorators, which are controlled via --run-flaky and --skip-slow command line arguments to py.test (GH1336). By Stephan Hoyer and Phillip J. Wolfram.

  • New aggregation count() on rolling objects, which provides a rolling count of valid values (GH1138).

Bug fixes

v0.9.1 (30 January 2017)

Renamed the “Unindexed dimensions” section in the Dataset and DataArray repr (added in v0.9.0) to “Dimensions without coordinates” (GH1199).

v0.9.0 (25 January 2017)

This major release includes five months worth of enhancements and bug fixes from 24 contributors, including some significant changes that are not fully backwards compatible. Highlights include:

  • Coordinates are now optional in the xarray data model, even for dimensions.

  • Changes to caching, lazy loading and pickling to improve xarray’s experience for parallel computing.

  • Improvements for accessing and manipulating pandas.MultiIndex levels.

  • Many new methods and functions, including quantile(), cumsum(), cumprod(), combine_first(), set_index(), reset_index(), reorder_levels(), full_like(), zeros_like(), ones_like(), open_dataarray(), compute(), Dataset.info(), testing.assert_equal(), testing.assert_identical(), and testing.assert_allclose().

Breaking changes

  • Index coordinates for each dimension are now optional, and no longer created by default (GH1017). You can identify such dimensions without coordinates by their appearance in the list of “Dimensions without coordinates” in the Dataset or DataArray repr:

    In [14]: xr.Dataset({"foo": (("x", "y"), [[1, 2]])})
    Out[14]: 
    <xarray.Dataset>
    Dimensions:  (x: 1, y: 2)
    Dimensions without coordinates: x, y
    Data variables:
        foo      (x, y) int64 1 2
    

    This has a number of implications:

    • align() and reindex() can now error if dimension labels are missing and dimensions have different sizes.

    • Because pandas does not support missing indexes, methods such as to_dataframe/from_dataframe and stack/unstack no longer roundtrip faithfully on all inputs. Use reset_index() to remove undesired indexes.

    • Dataset.__delitem__ and drop() no longer delete/drop variables that have dimensions matching a deleted/dropped variable.

    • DataArray.coords.__delitem__ is now allowed on variables matching dimension names.

    • .sel and .loc now handle indexing along a dimension without coordinate labels by doing integer based indexing. See Missing coordinate labels for an example.

    • indexes is no longer guaranteed to include all dimension names as keys. The new method get_index() has been added to always get an index for a dimension, falling back to a default RangeIndex if necessary.
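
    A minimal sketch of these two behaviors with current xarray syntax (the dataset here is illustrative):

```python
import xarray as xr

ds = xr.Dataset({"foo": (("x", "y"), [[1, 2, 3]])})
# .sel falls back to integer-based indexing when "y" has no coordinate labels
third = ds.foo.sel(y=2)
# get_index always returns an index for a dimension, defaulting to a RangeIndex
idx = ds.get_index("y")
```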

  • The default behavior of merge is now compat='no_conflicts', so some merges will now succeed in cases that previously raised xarray.MergeError. Set compat='broadcast_equals' to restore the previous default. See Merging with ‘no_conflicts’ for more details.
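
    A minimal sketch of the 'no_conflicts' semantics (current xarray syntax; the datasets here are illustrative). Values need only agree where both operands are non-null:

```python
import xarray as xr

ds1 = xr.Dataset({"a": ("x", [1.0, float("nan")])})
ds2 = xr.Dataset({"a": ("x", [float("nan"), 2.0])})
# with compat='no_conflicts', nulls in one input are filled from the other
merged = xr.merge([ds1, ds2], compat="no_conflicts")
```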

  • Reading values no longer always caches values in a NumPy array (GH1128). Caching of .values on variables read from netCDF files on disk is still the default when open_dataset() is called with cache=True. By Guido Imperiale and Stephan Hoyer.

  • Pickling a Dataset or DataArray linked to a file on disk no longer caches its values into memory before pickling (GH1128). Instead, pickle stores file paths and restores objects by reopening file references. This enables preliminary, experimental use of xarray for opening files with dask.distributed. By Stephan Hoyer.

  • Coordinates used to index a dimension are now loaded eagerly into pandas.Index objects, instead of loading the values lazily. By Guido Imperiale.

  • Automatic levels for 2d plots are now guaranteed to land on vmin and vmax when these kwargs are explicitly provided (GH1191). The automated level selection logic also slightly changed. By Fabien Maussion.

  • DataArray.rename() behavior changed to strictly change the DataArray.name if called with a string argument, or strictly change coordinate names if called with a dict-like argument. By Markus Gonser.

  • By default, to_netcdf() adds a _FillValue = NaN attribute to float types. By Frederic Laliberte.

  • repr on DataArray objects uses a shortened display for NumPy array data that is less likely to overflow onto multiple pages (GH1207). By Stephan Hoyer.

  • xarray no longer supports python 3.3, versions of dask prior to v0.9.0, or versions of bottleneck prior to v1.0.

Deprecations

  • Renamed the Coordinate class from xarray’s low level API to IndexVariable. Variable.to_variable and Variable.to_coord have been renamed to to_base_variable() and to_index_variable().

  • Deprecated supplying coords as a dictionary to the DataArray constructor without also supplying an explicit dims argument. The old behavior encouraged relying on the iteration order of dictionaries, which is a bad practice (GH727).

  • Removed a number of methods deprecated since v0.7.0 or earlier: load_data, vars, drop_vars, dump, dumps and the variables keyword argument to Dataset.

  • Removed the dummy module that enabled import xray.

Enhancements

  • Added new method combine_first() to DataArray and Dataset, based on the pandas method of the same name (see Combine). By Chun-Wei Yuan.

  • Added the ability to change the default automatic alignment (arithmetic_join="inner") for binary operations via set_options() (see Automatic alignment). By Chun-Wei Yuan.

  • Add checking of attr names and values when saving to netCDF, raising useful error messages if they are invalid. (GH911). By Robin Wilson.

  • Added ability to save DataArray objects directly to netCDF files using to_netcdf(), and to load directly from netCDF files using open_dataarray() (GH915). These remove the need to convert a DataArray to a Dataset before saving as a netCDF file, and deals with names to ensure a perfect ‘roundtrip’ capability. By Robin Wilson.

  • Multi-index levels are now accessible as “virtual” coordinate variables, e.g., ds['time'] can pull out the 'time' level of a multi-index (see Coordinates). sel also accepts providing multi-index levels as keyword arguments, e.g., ds.sel(time='2000-01') (see Multi-level indexing). By Benoit Bovy.
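
    A minimal sketch of both access patterns with current xarray syntax (the multi-index and level names here are illustrative):

```python
import pandas as pd
import xarray as xr

midx = pd.MultiIndex.from_product([["a", "b"], [0, 1]], names=("letter", "num"))
da = xr.DataArray([1, 2, 3, 4], coords={"z": midx}, dims="z")
level = da["letter"]         # pull out a level as a "virtual" coordinate
subset = da.sel(letter="a")  # a level name accepted as a keyword argument
```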

  • Added set_index, reset_index and reorder_levels methods to easily create and manipulate (multi-)indexes (see Set and reset index). By Benoit Bovy.
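
    A minimal round-trip sketch with current xarray syntax (coordinate names here are illustrative):

```python
import xarray as xr

da = xr.DataArray(
    [1, 2, 3], dims="x",
    coords={"a": ("x", ["p", "q", "r"]), "b": ("x", [10, 20, 30])},
)
stacked = da.set_index(x=["a", "b"])  # build a MultiIndex from two coordinates
flat = stacked.reset_index("x")       # turn it back into plain coordinates
```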

  • Added the compat option 'no_conflicts' to merge, allowing the combination of xarray objects with disjoint (GH742) or overlapping (GH835) coordinates as long as all present data agrees. By Johnnie Gray. See Merging with ‘no_conflicts’ for more details.

  • It is now possible to set concat_dim=None explicitly in open_mfdataset() to disable inferring a dimension along which to concatenate. By Stephan Hoyer.

  • Added methods DataArray.compute(), Dataset.compute(), and Variable.compute() as a non-mutating alternative to load(). By Guido Imperiale.

  • Adds DataArray and Dataset methods cumsum() and cumprod(). By Phillip J. Wolfram.
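
    A minimal sketch (current xarray syntax); like other reductions, the dimension is named explicitly:

```python
import xarray as xr

da = xr.DataArray([1, 2, 3], dims="x")
running = da.cumsum("x")  # cumulative sum along a named dimension
```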

  • New properties Dataset.sizes and DataArray.sizes for providing consistent access to dimension length on both Dataset and DataArray (GH921). By Stephan Hoyer.

  • New keyword argument drop=True for sel(), isel() and squeeze() for dropping scalar coordinates that arise from indexing. DataArray (GH242). By Stephan Hoyer.
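
    A minimal sketch of the drop=True behavior (current xarray syntax; coordinates here are illustrative):

```python
import xarray as xr

da = xr.DataArray([[1, 2], [3, 4]], dims=("x", "y"), coords={"x": [10, 20]})
kept = da.isel(x=0)                # scalar coordinate x=10 is retained
dropped = da.isel(x=0, drop=True)  # scalar coordinate is dropped
```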

  • New top-level functions full_like(), zeros_like(), and ones_like() By Guido Imperiale.
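
    A minimal sketch (current xarray syntax): each function returns a new object with the template's dims and coordinates:

```python
import xarray as xr

template = xr.DataArray([1, 2, 3], dims="x", coords={"x": [10, 20, 30]})
zeros = xr.zeros_like(template)    # same dims, coords and dtype, filled with 0
nines = xr.full_like(template, 9)  # filled with an arbitrary value
```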

  • Overriding a preexisting attribute with register_dataset_accessor() or register_dataarray_accessor() now issues a warning instead of raising an error (GH1082). By Stephan Hoyer.

  • Options for axes sharing between subplots are exposed to FacetGrid and plot(), so axes sharing can be disabled for polar plots. By Bas Hoonhout.

  • New utility functions assert_equal(), assert_identical(), and assert_allclose() for asserting relationships between xarray objects, designed for use in a pytest test suite.

  • figsize, size and aspect plot arguments are now supported for all plots (GH897). See Controlling the figure size for more details. By Stephan Hoyer and Fabien Maussion.

  • New info() method to summarize Dataset variables and attributes. The method prints to a buffer (e.g. stdout) with output similar to what the command line utility ncdump -h produces (GH1150). By Joe Hamman.

  • Added the ability to write unlimited netCDF dimensions with the scipy and netcdf4 backends via the new xray.Dataset.encoding attribute or via the unlimited_dims argument to xray.Dataset.to_netcdf. By Joe Hamman.

  • New quantile() method to calculate quantiles from DataArray objects (GH1187). By Joe Hamman.

Bug fixes

  • groupby_bins now restores empty bins by default (GH1019). By Ryan Abernathey.

  • Fix issues for dates outside the valid range of pandas timestamps (GH975). By Mathias Hauser.

  • Unstacking produced a flipped array after stacking decreasing coordinate values (GH980). By Stephan Hoyer.

  • Setting dtype via the encoding parameter of to_netcdf failed if the encoded dtype was the same as the dtype of the original array (GH873). By Stephan Hoyer.

  • Fix issues with variables where both attributes _FillValue and missing_value are set to NaN (GH997). By Marco Zühlke.

  • .where() and .fillna() now preserve attributes (GH1009). By Fabien Maussion.

  • Applying broadcast() to an xarray object based on the dask backend won’t accidentally convert the array from dask to numpy anymore (GH978). By Guido Imperiale.

  • Dataset.concat() now preserves variable order (GH1027). By Fabien Maussion.

  • Fixed an issue with pcolormesh (GH781). A new infer_intervals keyword gives control on whether the cell intervals should be computed or not. By Fabien Maussion.

  • Grouping over a dimension with non-unique values with groupby gives correct groups. By Stephan Hoyer.

  • Fixed accessing coordinate variables with non-string names from .coords. By Stephan Hoyer.

  • rename() now simultaneously renames the array and any coordinate with the same name, when supplied via a dict (GH1116). By Yves Delley.

  • Fixed sub-optimal performance in certain operations with object arrays (GH1121). By Yves Delley.

  • Fix .groupby(group) when group has datetime dtype (GH1132). By Jonas Sølvsteen.

  • Fixed a bug with facetgrid (the norm keyword was ignored, GH1159). By Fabien Maussion.

  • Resolved a concurrency bug that could cause Python to crash when simultaneously reading and writing netCDF4 files with dask (GH1172). By Stephan Hoyer.

  • Fix to make .copy() actually copy dask arrays, which will be relevant for future releases of dask in which dask arrays will be mutable (GH1180). By Stephan Hoyer.

  • Fix opening NetCDF files with multi-dimensional time variables (GH1229). By Stephan Hoyer.

Performance improvements

  • xarray.Dataset.isel_points and xarray.Dataset.sel_points now use vectorised indexing in numpy and dask (GH1161), which can result in several orders of magnitude speedup. By Jonathan Chambers.

v0.8.2 (18 August 2016)

This release includes a number of bug fixes and minor enhancements.

Breaking changes

  • broadcast() and concat() now auto-align inputs, using join='outer'. Previously, these functions raised ValueError for non-aligned inputs. By Guido Imperiale.

Enhancements

Bug fixes

  • Ensure xarray works with h5netcdf v0.3.0 for arrays with dtype=str (GH953). By Stephan Hoyer.

  • Dataset.__dir__() (i.e. the method python calls to get autocomplete options) failed if one of the dataset’s keys was not a string (GH852). By Maximilian Roos.

  • Dataset constructor can now take arbitrary objects as values (GH647). By Maximilian Roos.

  • Clarified the copy argument for reindex() and align(), which now always return new xarray objects (GH927).

  • Fix open_mfdataset with engine='pynio' (GH936). By Stephan Hoyer.

  • groupby_bins sorted bin labels as strings (GH952). By Stephan Hoyer.

  • Fix bug introduced by v0.8.0 that broke assignment to datasets when both the left and right side have the same non-unique index values (GH956).

v0.8.1 (5 August 2016)

Bug fixes

  • Fix bug in v0.8.0 that broke assignment to Datasets with non-unique indexes (GH943). By Stephan Hoyer.

v0.8.0 (2 August 2016)

This release includes four months of new features and bug fixes, including several breaking changes.

Breaking changes

  • Dropped support for Python 2.6 (GH855).

  • Indexing on a multi-index now drops levels, which is consistent with pandas. It also changes the name of the dimension / coordinate when the multi-index is reduced to a single index (GH802).

  • Contour plots no longer add a colorbar by default (GH866). Filled contour plots are unchanged.

  • DataArray.values and .data now always return a NumPy array-like object, even for 0-dimensional arrays with object dtype (GH867). Previously, .values returned native Python objects in such cases. To convert the values of scalar arrays to Python objects, use the .item() method.
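
    A minimal sketch of the distinction (current xarray syntax; the scalar value here is illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array("hello", dtype=object))
wrapped = da.values  # a 0-dimensional NumPy array, not a Python str
native = da.item()   # the native Python object
```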

Enhancements

  • Groupby operations now support grouping over multidimensional variables. A new method called groupby_bins() has also been added to allow users to specify bins for grouping. The new features are described in Multidimensional Grouping and Working with Multidimensional Coordinates. By Ryan Abernathey.
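
    A minimal sketch of groupby_bins with current xarray syntax (bin edges here are illustrative; like pandas.cut, bins are right-inclusive by default, so the lowest edge is excluded):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.arange(6.0), dims="x", coords={"x": np.arange(6)})
# bin the "x" coordinate into two intervals and reduce each group
binned = da.groupby_bins("x", bins=[0, 3, 6]).mean()
```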

  • DataArray and Dataset method where() now supports a drop=True option that clips coordinate elements that are fully masked. By Phillip J. Wolfram.

  • New top level merge() function allows for combining variables from any number of Dataset and/or DataArray variables. See Merge for more details. By Stephan Hoyer.

  • DataArray.resample() and Dataset.resample() now support the keep_attrs=False option that determines whether variable and dataset attributes are retained in the resampled object. By Jeremy McGibbon.

  • Better multi-index support in DataArray.sel(), DataArray.loc(), Dataset.sel() and Dataset.loc(), which now behave more closely to pandas and which also accept dictionaries for indexing based on given level names and labels (see Multi-level indexing). By Benoit Bovy.

  • New (experimental) decorators register_dataset_accessor() and register_dataarray_accessor() for registering custom xarray extensions without subclassing. They are described in the new documentation page on xarray Internals. By Stephan Hoyer.

  • Round trip boolean datatypes. Previously, writing boolean datatypes to netCDF formats would raise an error since netCDF does not have a bool datatype. This feature reads/writes a dtype attribute to boolean variables in netCDF files. By Joe Hamman.

  • 2D plotting methods now have two new keywords (cbar_ax and cbar_kwargs), allowing more control on the colorbar (GH872). By Fabien Maussion.

  • New Dataset method Dataset.filter_by_attrs(), akin to netCDF4.Dataset.get_variables_by_attributes, to easily filter data variables by their attributes. By Filipe Fernandes.

Bug fixes

  • Attributes were being retained by default for some resampling operations when they should not. With the keep_attrs=False option, they will no longer be retained by default. This may be backwards-incompatible with some scripts, but the attributes may be kept by adding the keep_attrs=True option. By Jeremy McGibbon.

  • Concatenating xarray objects along an axis with a MultiIndex or PeriodIndex preserves the nature of the index (GH875). By Stephan Hoyer.

  • Fixed bug in arithmetic operations on DataArray objects whose dimensions are numpy structured arrays or recarrays (GH861, GH837). By Maciek Swat.

  • decode_cf_timedelta now accepts arrays with ndim > 1 (GH842). This fixes issue GH665. By Filipe Fernandes.

  • Fix a bug where xarray.ufuncs that take two arguments would incorrectly use numpy functions instead of dask.array functions (GH876). By Stephan Hoyer.

  • Support for pickling functions from xarray.ufuncs (GH901). By Stephan Hoyer.

  • Variable.copy(deep=True) no longer converts MultiIndex into a base Index (GH769). By Benoit Bovy.

  • Fixes for groupby on dimensions with a multi-index (GH867). By Stephan Hoyer.

  • Fix printing datasets with unicode attributes on Python 2 (GH892). By Stephan Hoyer.

  • Fixed incorrect test for dask version (GH891). By Stephan Hoyer.

  • Fixed dim argument for isel_points/sel_points when a pandas.Index is passed. By Stephan Hoyer.

  • contour() now plots the correct number of contours (GH866). By Fabien Maussion.

v0.7.2 (13 March 2016)

This release includes two new, entirely backwards compatible features and several bug fixes.

Enhancements

  • New DataArray method DataArray.dot() for calculating the dot product of two DataArrays along shared dimensions. By Dean Pospisil.
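
    A minimal sketch with current xarray syntax (the arrays here are illustrative); the product contracts over the shared dimension:

```python
import numpy as np
import xarray as xr

a = xr.DataArray(np.arange(6).reshape(2, 3), dims=("x", "y"))
b = xr.DataArray(np.arange(3), dims="y")
result = a.dot(b)  # sums the elementwise product over the shared dimension "y"
```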

  • Rolling window operations on DataArray objects are now supported via a new DataArray.rolling() method. For example:

    In [15]: import xarray as xr
       ....: import numpy as np
       ....: 
    
    In [16]: arr = xr.DataArray(np.arange(0, 7.5, 0.5).reshape(3, 5), dims=("x", "y"))
    
    In [17]: arr
    Out[17]: 
    <xarray.DataArray (x: 3, y: 5)>
    array([[ 0. ,  0.5,  1. ,  1.5,  2. ],
           [ 2.5,  3. ,  3.5,  4. ,  4.5],
           [ 5. ,  5.5,  6. ,  6.5,  7. ]])
    Coordinates:
      * x        (x) int64 0 1 2
      * y        (y) int64 0 1 2 3 4
    
    In [18]: arr.rolling(y=3, min_periods=2).mean()
    Out[18]: 
    <xarray.DataArray (x: 3, y: 5)>
    array([[  nan,  0.25,  0.5 ,  1.  ,  1.5 ],
           [  nan,  2.75,  3.  ,  3.5 ,  4.  ],
           [  nan,  5.25,  5.5 ,  6.  ,  6.5 ]])
    Coordinates:
      * x        (x) int64 0 1 2
      * y        (y) int64 0 1 2 3 4
    

    See Rolling window operations for more details. By Joe Hamman.

Bug fixes

  • Fixed an issue where plots using pcolormesh and Cartopy axes were being distorted by the inference of the axis interval breaks. This change chooses not to modify the coordinate variables when the axes have the attribute projection, allowing Cartopy to handle the extent of pcolormesh plots (GH781). By Joe Hamman.

  • 2D plots now better handle additional coordinates which are not DataArray dimensions (GH788). By Fabien Maussion.

v0.7.1 (16 February 2016)

This is a bug fix release that includes two small, backwards compatible enhancements. We recommend that all users upgrade.

Enhancements

  • Numerical operations now return empty objects on non-overlapping labels rather than raising ValueError (GH739).
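
    A minimal sketch of this behavior with current xarray syntax (the labels here are illustrative):

```python
import xarray as xr

a = xr.DataArray([1], coords={"x": [0]}, dims="x")
b = xr.DataArray([2], coords={"x": [1]}, dims="x")
total = a + b  # no overlapping labels: an empty result, not a ValueError
```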

  • Series is now supported as valid input to the Dataset constructor (GH740).

Bug fixes

  • Restore checks for shape consistency between data and coordinates in the DataArray constructor (GH758).

  • Single dimension variables no longer transpose as part of a broader .transpose. This behavior was causing pandas.PeriodIndex dimensions to lose their type (GH749).

  • Dataset labels remain as their native type on .to_dataset. Previously they were coerced to strings (GH745).

  • Fixed a bug where replacing a DataArray index coordinate would improperly align the coordinate (GH725).

  • DataArray.reindex_like now maintains the dtype of complex numbers when reindexing leads to NaN values (GH738).

  • Dataset.rename and DataArray.rename support the old and new names being the same (GH724).

  • Fix from_dataframe() for DataFrames with Categorical column and a MultiIndex index (GH737).

  • Fixes to ensure xarray works properly after the upcoming pandas v0.18 and NumPy v1.11 releases.

Acknowledgments

The following individuals contributed to this release:

  • Edward Richards

  • Maximilian Roos

  • Rafael Guedes

  • Spencer Hill

  • Stephan Hoyer

v0.7.0 (21 January 2016)

This major release includes redesign of DataArray internals, as well as new methods for reshaping, rolling and shifting data. It includes preliminary support for pandas.MultiIndex, as well as a number of other features and bug fixes, several of which offer improved compatibility with pandas.

New name

The project formerly known as “xray” is now “xarray”, pronounced “x-array”! This avoids a namespace conflict with the entire field of x-ray science. Renaming our project seemed like the right thing to do, especially because some scientists who work with actual x-rays are interested in using this project in their work. Thanks for your understanding and patience in this transition. You can now find our documentation and code repository at new URLs:

To ease the transition, we have simultaneously released v0.7.0 of both xray and xarray on the Python Package Index. These packages are identical. For now, import xray still works, except it issues a deprecation warning. This will be the last xray release. Going forward, we recommend switching your import statements to import xarray as xr.

Breaking changes

  • The internal data model used by xray.DataArray has been rewritten to fix several outstanding issues (GH367, GH634, this stackoverflow report). Internally, DataArray is now implemented in terms of ._variable and ._coords attributes instead of holding variables in a Dataset object.

    This refactor ensures that if a DataArray has the same name as one of its coordinates, the array and the coordinate no longer share the same data.

    In practice, this means that creating a DataArray with the same name as one of its dimensions no longer automatically uses that array to label the corresponding coordinate. You will now need to provide coordinate labels explicitly. Here’s the old behavior:

    In [19]: xray.DataArray([4, 5, 6], dims="x", name="x")
    Out[19]: 
    <xray.DataArray 'x' (x: 3)>
    array([4, 5, 6])
    Coordinates:
      * x        (x) int64 4 5 6
    

    and the new behavior (compare the values of the x coordinate):

    In [20]: xray.DataArray([4, 5, 6], dims="x", name="x")
    Out[20]: 
    <xray.DataArray 'x' (x: 3)>
    array([4, 5, 6])
    Coordinates:
      * x        (x) int64 0 1 2
    
  • It is no longer possible to convert a DataArray to a Dataset with xray.DataArray.to_dataset if it is unnamed. This will now raise ValueError. If the array is unnamed, you need to supply the name argument.

Enhancements

  • Basic support for MultiIndex coordinates on xray objects, including indexing, stack() and unstack():

    In [21]: df = pd.DataFrame({"foo": range(3), "x": ["a", "b", "b"], "y": [0, 0, 1]})
    
    In [22]: s = df.set_index(["x", "y"])["foo"]
    
    In [23]: arr = xray.DataArray(s, dims="z")
    
    In [24]: arr
    Out[24]: 
    <xray.DataArray 'foo' (z: 3)>
    array([0, 1, 2])
    Coordinates:
      * z        (z) object ('a', 0) ('b', 0) ('b', 1)
    
    In [25]: arr.indexes["z"]
    Out[25]: 
    MultiIndex(levels=[[u'a', u'b'], [0, 1]],
               labels=[[0, 1, 1], [0, 0, 1]],
               names=[u'x', u'y'])
    
    In [26]: arr.unstack("z")
    Out[26]: 
    <xray.DataArray 'foo' (x: 2, y: 2)>
    array([[  0.,  nan],
           [  1.,   2.]])
    Coordinates:
      * x        (x) object 'a' 'b'
      * y        (y) int64 0 1
    
    In [27]: arr.unstack("z").stack(z=("x", "y"))
    Out[27]: 
    <xray.DataArray 'foo' (z: 4)>
    array([  0.,  nan,   1.,   2.])
    Coordinates:
      * z        (z) object ('a', 0) ('a', 1) ('b', 0) ('b', 1)
    

    See Stack and unstack for more details.

    Warning

    xray’s MultiIndex support is still experimental, and we have a long to-do list of desired additions (GH719), including better display of multi-index levels when printing a Dataset, and support for saving datasets with a MultiIndex to a netCDF file. User contributions in this area would be greatly appreciated.

  • Support for reading GRIB, HDF4 and other file formats via PyNIO. See Formats supported by PyNIO for more details.

  • Better error message when a variable is supplied with the same name as one of its dimensions.

  • Plotting: more control on colormap parameters (GH642). vmin and vmax will not be silently ignored anymore. Setting center=False prevents automatic selection of a divergent colormap.

  • New xray.Dataset.shift and xray.Dataset.roll methods for shifting/rotating datasets or arrays along a dimension:

    In [28]: array = xray.DataArray([5, 6, 7, 8], dims="x")
    
    In [29]: array.shift(x=2)
    Out[29]: 
    <xarray.DataArray (x: 4)>
    array([nan, nan,  5.,  6.])
    Dimensions without coordinates: x
    
    In [30]: array.roll(x=2)
    Out[30]: 
    <xarray.DataArray (x: 4)>
    array([7, 8, 5, 6])
    Dimensions without coordinates: x
    

    Notice that shift moves data independently of coordinates, but roll moves both data and coordinates.

  • Assigning a pandas object directly as a Dataset variable is now permitted. Its index names correspond to the dims of the Dataset, and its data is aligned.
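
    A minimal sketch with current xarray syntax (the Series and names here are illustrative); the named index becomes the dimension:

```python
import pandas as pd
import xarray as xr

s = pd.Series([1.0, 2.0], index=pd.Index(["a", "b"], name="x"))
ds = xr.Dataset()
ds["v"] = s  # the index name "x" becomes the dimension, and data is aligned
```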

  • Passing a pandas.DataFrame or pandas.Panel to a Dataset constructor is now permitted.

  • New function xray.broadcast for explicitly broadcasting DataArray and Dataset objects against each other. For example:

    In [31]: a = xray.DataArray([1, 2, 3], dims="x")
    
    In [32]: b = xray.DataArray([5, 6], dims="y")
    
    In [33]: a
    Out[33]: 
    <xarray.DataArray (x: 3)>
    array([1, 2, 3])
    Dimensions without coordinates: x
    
    In [34]: b
    Out[34]: 
    <xarray.DataArray (y: 2)>
    array([5, 6])
    Dimensions without coordinates: y
    
    In [35]: a2, b2 = xray.broadcast(a, b)
    
    In [36]: a2
    Out[36]: 
    <xarray.DataArray (x: 3, y: 2)>
    array([[1, 1],
           [2, 2],
           [3, 3]])
    Dimensions without coordinates: x, y
    
    In [37]: b2
    Out[37]: 
    <xarray.DataArray (x: 3, y: 2)>
    array([[5, 6],
           [5, 6],
           [5, 6]])
    Dimensions without coordinates: x, y
    

Bug fixes

  • Fixes for several issues found on DataArray objects with the same name as one of their coordinates (see Breaking changes for more details).

  • DataArray.to_masked_array always returns a masked array with the mask being an array (not a scalar value) (GH684).

  • Allows for (imperfect) repr of Coords when underlying index is PeriodIndex (GH645).

  • Attempting to assign a Dataset or DataArray variable/attribute using attribute-style syntax (e.g., ds.foo = 42) now raises an error rather than silently failing (GH656, GH714).

  • You can now pass pandas objects with non-numpy dtypes (e.g., categorical or datetime64 with a timezone) into xray without an error (GH716).

Acknowledgments

The following individuals contributed to this release:

  • Antony Lee

  • Fabien Maussion

  • Joe Hamman

  • Maximilian Roos

  • Stephan Hoyer

  • Takeshi Kanmae

  • femtotrader

v0.6.1 (21 October 2015)

This release contains a number of bug and compatibility fixes, as well as enhancements to plotting, indexing and writing files to disk.

Note that the minimum required version of dask for use with xray is now version 0.6.

API Changes

  • The handling of colormaps and discrete color lists for 2D plots in xray.DataArray.plot was changed to provide more compatibility with matplotlib’s contour and contourf functions (GH538). Now discrete lists of colors should be specified using the colors keyword, rather than cmap.

Enhancements

  • Faceted plotting through xray.plot.FacetGrid and the xray.plot.plot method. See Faceting for more details and examples.

  • xray.Dataset.sel and xray.Dataset.reindex now support the tolerance argument for controlling nearest-neighbor selection (GH629):

    In [38]: array = xray.DataArray([1, 2, 3], dims="x")
    
    In [39]: array.reindex(x=[0.9, 1.5], method="nearest", tolerance=0.2)
    Out[39]: 
    <xray.DataArray (x: 2)>
    array([  2.,  nan])
    Coordinates:
      * x        (x) float64 0.9 1.5
    

    This feature requires pandas v0.17 or newer.

  • New encoding argument in xray.Dataset.to_netcdf for writing netCDF files with compression, as described in the new documentation section on Writing encoded data.

  • Add xray.Dataset.real and xray.Dataset.imag attributes to Dataset and DataArray (GH553).
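
    A minimal sketch with current xarray syntax (the complex values here are illustrative):

```python
import numpy as np
import xarray as xr

da = xr.DataArray(np.array([1 + 2j, 3 - 4j]))
re, im = da.real, da.imag  # elementwise real and imaginary parts
```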

  • More informative error message with xray.Dataset.from_dataframe if the frame has duplicate columns.

  • xray now uses deterministic names for dask arrays it creates or opens from disk. This allows xray users to take advantage of dask’s nascent support for caching intermediate computation results. See GH555 for an example.

Bug fixes

  • Forwards compatibility with the latest pandas release (v0.17.0). We were using some internal pandas routines for datetime conversion, which unfortunately have now changed upstream (GH569).

  • Aggregation functions now correctly skip NaN for data with complex128 dtype (GH554).

  • Fixed indexing 0d arrays with unicode dtype (GH568).

  • xray.DataArray.name and Dataset keys must be a string or None to be written to netCDF (GH533).

  • xray.DataArray.where now uses dask instead of numpy if either the array or other is a dask array. Previously, if other was a numpy array the method was evaluated eagerly.

  • Global attributes are now handled more consistently when loading remote datasets using engine='pydap' (GH574).

  • It is now possible to assign to the .data attribute of DataArray objects.

  • coordinates attribute is now kept in the encoding dictionary after decoding (GH610).

  • Compatibility with numpy 1.10 (GH617).

Acknowledgments

The following individuals contributed to this release:

  • Ryan Abernathey

  • Pete Cable

  • Clark Fitzgerald

  • Joe Hamman

  • Stephan Hoyer

  • Scott Sinclair

v0.6.0 (21 August 2015)

This release includes numerous bug fixes and enhancements. Highlights include the introduction of a plotting module and the new Dataset and DataArray methods xray.Dataset.isel_points, xray.Dataset.sel_points, xray.Dataset.where and xray.Dataset.diff. There are no breaking changes from v0.5.2.

Enhancements

  • Plotting methods have been implemented on DataArray objects xray.DataArray.plot through integration with matplotlib (GH185). For an introduction, see Plotting.

  • Variables in netCDF files with multiple missing values are now decoded as NaN after issuing a warning if open_dataset is called with mask_and_scale=True.

  • We clarified our rules for when the result from an xray operation is a copy vs. a view (see Copies vs. Views for more details).

  • Dataset variables are now written to netCDF files in order of appearance when using the netcdf4 backend (GH479).

  • Added xray.Dataset.isel_points and xray.Dataset.sel_points to support pointwise indexing of Datasets and DataArrays (GH475).

    In [40]: da = xray.DataArray(
       ....:     np.arange(56).reshape((7, 8)),
       ....:     coords={"x": list("abcdefg"), "y": 10 * np.arange(8)},
       ....:     dims=["x", "y"],
       ....: )
       ....: 
    
    In [41]: da
    Out[41]: 
    <xray.DataArray (x: 7, y: 8)>
    array([[ 0,  1,  2,  3,  4,  5,  6,  7],
           [ 8,  9, 10, 11, 12, 13, 14, 15],
           [16, 17, 18, 19, 20, 21, 22, 23],
           [24, 25, 26, 27, 28, 29, 30, 31],
           [32, 33, 34, 35, 36, 37, 38, 39],
           [40, 41, 42, 43, 44, 45, 46, 47],
           [48, 49, 50, 51, 52, 53, 54, 55]])
    Coordinates:
    * y        (y) int64 0 10 20 30 40 50 60 70
    * x        (x) |S1 'a' 'b' 'c' 'd' 'e' 'f' 'g'
    
    # we can index by position along each dimension
    In [42]: da.isel_points(x=[0, 1, 6], y=[0, 1, 0], dim="points")
    Out[42]: 
    <xray.DataArray (points: 3)>
    array([ 0,  9, 48])
    Coordinates:
        y        (points) int64 0 10 0
        x        (points) |S1 'a' 'b' 'g'
      * points   (points) int64 0 1 2
    
    # or equivalently by label
    In [43]: da.sel_points(x=["a", "b", "g"], y=[0, 10, 0], dim="points")
    Out[43]: 
    <xray.DataArray (points: 3)>
    array([ 0,  9, 48])
    Coordinates:
        y        (points) int64 0 10 0
        x        (points) |S1 'a' 'b' 'g'
      * points   (points) int64 0 1 2
    
  • New xray.Dataset.where method for masking xray objects according to some criteria. This works particularly well with multi-dimensional data:

    In [44]: ds = xray.Dataset(coords={"x": range(100), "y": range(100)})
    
    In [45]: ds["distance"] = np.sqrt(ds.x**2 + ds.y**2)
    
    In [46]: ds.distance.where(ds.distance < 100).plot()
    Out[46]: <matplotlib.collections.QuadMesh at 0x7f7e2d0bacb0>
    
  • Added new methods xray.DataArray.diff and xray.Dataset.diff for finite difference calculations along a given axis.
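
A short sketch of diff, using the modern xarray import name:

```python
import xarray as xr  # modern name of the xray package

da = xr.DataArray([1, 4, 9, 16], dims="x")
first_diff = da.diff("x")         # first differences along x
second_diff = da.diff("x", n=2)   # apply the difference twice
```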

  • New xray.DataArray.to_masked_array convenience method for returning a numpy.ma.MaskedArray.

    In [47]: da = xray.DataArray(np.random.random_sample(size=(5, 4)))
    
    In [48]: da.where(da < 0.5)
    Out[48]: 
    <xarray.DataArray (dim_0: 5, dim_1: 4)>
    array([[0.127,   nan, 0.26 ,   nan],
           [0.377, 0.336, 0.451,   nan],
           [0.123,   nan, 0.373, 0.448],
           [0.129,   nan,   nan, 0.352],
           [0.229,   nan,   nan, 0.138]])
    Dimensions without coordinates: dim_0, dim_1
    
    In [49]: da.where(da < 0.5).to_masked_array(copy=True)
    Out[49]: 
    masked_array(
      data=[[0.12696983303810094, --, 0.26047600586578334, --],
            [0.37674971618967135, 0.33622174433445307, 0.45137647047539964, --],
            [0.12310214428849964, --, 0.37301222522143085, 0.4479968246859435],
            [0.12944067971751294, --, --, 0.35205353914802473],
            [0.2288873043216132, --, --, 0.1375535565632705]],
      mask=[[False,  True, False,  True],
            [False, False, False,  True],
            [False,  True, False, False],
            [False,  True,  True, False],
            [False,  True,  True, False]],
      fill_value=1e+20)
    
  • Added a new drop_variables argument to xray.open_dataset for excluding variables from being parsed. This may be useful for dropping variables with problematic or inconsistent values.

Bug fixes

  • Fixed aggregation functions (e.g., sum and mean) on big-endian arrays when bottleneck is installed (GH489).

  • Dataset aggregation functions dropped variables with unsigned integer dtype (GH505).

  • .any() and .all() were not lazy when used on xray objects containing dask arrays.

  • Fixed an error when attempting to save datetime64 variables to netCDF files when the first element is NaT (GH528).

  • Fix pickle on DataArray objects (GH515).

  • Fixed unnecessary coercion of float64 to float32 when using netcdf3 and netcdf4_classic formats (GH526).

v0.5.2 (16 July 2015)

This release contains bug fixes, several additional options for opening and saving netCDF files, and a backwards incompatible rewrite of the advanced options for xray.concat.

Backwards incompatible changes

  • The optional arguments concat_over and mode in xray.concat have been removed and replaced by data_vars and coords. The new arguments are both more easily understood and more robustly implemented, and allowed us to fix a bug where concat accidentally loaded data into memory. If you set values for these optional arguments manually, you will need to update your code. The default behavior should be unchanged.

Enhancements

  • xray.open_mfdataset now supports a preprocess argument for preprocessing datasets prior to concatenation. This is useful if datasets cannot be otherwise merged automatically, e.g., if the original datasets have conflicting index coordinates (GH443).

  • xray.open_dataset and xray.open_mfdataset now use a global thread lock by default for reading from netCDF files with dask. This avoids possible segmentation faults for reading from netCDF4 files when HDF5 is not configured properly for concurrent access (GH444).

  • Added support for serializing arrays of complex numbers with engine='h5netcdf'.

  • The new xray.save_mfdataset function allows for saving multiple datasets to disk simultaneously. This is useful when processing large datasets with dask.array. For example, to save a dataset too big to fit into memory to one file per year, we could write:

    In [50]: years, datasets = zip(*ds.groupby("time.year"))
    
    In [51]: paths = ["%s.nc" % y for y in years]
    
    In [52]: xray.save_mfdataset(datasets, paths)
    

Bug fixes

  • Fixed min, max, argmin and argmax for arrays with string or unicode types (GH453).

  • xray.open_dataset and xray.open_mfdataset support supplying chunks as a single integer.

  • Fixed a bug in serializing scalar datetime variable to netCDF.

  • Fixed a bug that could occur in serialization of 0-dimensional integer arrays.

  • Fixed a bug where concatenating DataArrays was not always lazy (GH464).

  • When reading datasets with h5netcdf, bytes attributes are decoded to strings. This allows conventions decoding to work properly on Python 3 (GH451).

v0.5.1 (15 June 2015)

This minor release fixes a few bugs and an inconsistency with pandas. It also adds the pipe method, copied from pandas.

Enhancements

  • Added xray.Dataset.pipe, replicating the new pandas method in version 0.16.2. See Transforming datasets for more details.
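
As a sketch (with the modern xarray import name), pipe lets a plain function slot into a method chain:

```python
import xarray as xr  # modern name of the xray package

ds = xr.Dataset({"a": ("x", [1.0, 2.0, 3.0])})

def scale(dataset, factor):
    return dataset * factor

# Equivalent to scale(ds, 10), but reads left-to-right in a method chain.
result = ds.pipe(scale, 10)
```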

  • xray.Dataset.assign and xray.Dataset.assign_coords now assign new variables in sorted (alphabetical) order, mirroring the behavior in pandas. Previously, the order was arbitrary.

Bug fixes

  • Fixed an edge case in xray.concat involving identical coordinate variables (GH425).

  • We now decode variables loaded from netCDF3 files with the scipy engine using native endianness (GH416). This resolves an issue when aggregating these arrays with bottleneck installed.

v0.5 (1 June 2015)

Highlights

The headline feature in this release is experimental support for out-of-core computing (data that doesn’t fit into memory) with Parallel computing with Dask. This includes a new top-level function xray.open_mfdataset that makes it easy to open a collection of netCDF files (using dask) as a single xray.Dataset object. For more on dask, read the blog post introducing xray + dask and the new documentation section Parallel computing with Dask.

Dask makes it possible to harness parallelism and manipulate gigantic datasets with xray. It is currently an optional dependency, but it may become required in the future.

Backwards incompatible changes

  • The logic used for choosing which variables are concatenated with xray.concat has changed. Previously, by default any variables which were equal across a dimension were not concatenated. This led to some surprising behavior, where the outcome of groupby and concat operations could depend on runtime values (GH268). For example:

    In [53]: ds = xray.Dataset({"x": 0})
    
    In [54]: xray.concat([ds, ds], dim="y")
    Out[54]: 
    <xray.Dataset>
    Dimensions:  ()
    Coordinates:
        *empty*
    Data variables:
        x        int64 0
    

    Now, the default always concatenates data variables:

    In [55]: xray.concat([ds, ds], dim="y")
    Out[55]: 
    <xarray.Dataset>
    Dimensions:  (y: 2)
    Dimensions without coordinates: y
    Data variables:
        x        (y) int64 0 0
    

    To obtain the old behavior, supply the argument concat_over=[].

Enhancements

  • New xray.Dataset.to_array and enhanced xray.DataArray.to_dataset methods make it easy to switch back and forth between arrays and datasets:

    In [56]: ds = xray.Dataset(
       ....:     {"a": 1, "b": ("x", [1, 2, 3])},
       ....:     coords={"c": 42},
       ....:     attrs={"Conventions": "None"},
       ....: )
       ....: 
    
    In [57]: ds.to_array()
    Out[57]: 
    <xarray.DataArray (variable: 2, x: 3)>
    array([[1, 1, 1],
           [1, 2, 3]])
    Coordinates:
        c         int64 42
      * variable  (variable) <U1 'a' 'b'
    Dimensions without coordinates: x
    Attributes:
        Conventions:  None
    
    In [58]: ds.to_array().to_dataset(dim="variable")
    Out[58]: 
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
        c        int64 42
    Dimensions without coordinates: x
    Data variables:
        a        (x) int64 1 1 1
        b        (x) int64 1 2 3
    Attributes:
        Conventions:  None
    
  • New xray.Dataset.fillna method to fill missing values, modeled off the pandas method of the same name:

    In [59]: array = xray.DataArray([np.nan, 1, np.nan, 3], dims="x")
    
    In [60]: array.fillna(0)
    Out[60]: 
    <xarray.DataArray (x: 4)>
    array([0., 1., 0., 3.])
    Dimensions without coordinates: x
    

    fillna works on both Dataset and DataArray objects, and uses index based alignment and broadcasting like standard binary operations. It also can be applied by group, as illustrated in Fill missing values with climatology.

  • New xray.Dataset.assign and xray.Dataset.assign_coords methods patterned off the new DataFrame.assign method in pandas:

    In [61]: ds = xray.Dataset({"y": ("x", [1, 2, 3])})
    
    In [62]: ds.assign(z=lambda ds: ds.y**2)
    Out[62]: 
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Dimensions without coordinates: x
    Data variables:
        y        (x) int64 1 2 3
        z        (x) int64 1 4 9
    
    In [63]: ds.assign_coords(z=("x", ["a", "b", "c"]))
    Out[63]: 
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
        z        (x) <U1 'a' 'b' 'c'
    Dimensions without coordinates: x
    Data variables:
        y        (x) int64 1 2 3
    

    These methods return a new Dataset (or DataArray) with updated data or coordinate variables.

  • xray.Dataset.sel now supports the method parameter, which works like the parameter of the same name on xray.Dataset.reindex. It provides a simple interface for doing nearest-neighbor interpolation:

    In [64]: ds.sel(x=1.1, method="nearest")
    Out[64]: 
    <xray.Dataset>
    Dimensions:  ()
    Coordinates:
        x        int64 1
    Data variables:
        y        int64 2
    
    In [65]: ds.sel(x=[1.1, 2.1], method="pad")
    Out[65]: 
    <xray.Dataset>
    Dimensions:  (x: 2)
    Coordinates:
      * x        (x) int64 1 2
    Data variables:
        y        (x) int64 2 3
    

    See Nearest neighbor lookups for more details.

  • You can now control the underlying backend used for accessing remote datasets (via OPeNDAP) by specifying engine='netcdf4' or engine='pydap'.

  • xray now provides experimental support for reading and writing netCDF4 files directly via h5py with the h5netcdf package, avoiding the netCDF4-Python package. You will need to install h5netcdf and specify engine='h5netcdf' to try this feature.

  • Accessing data from remote datasets now has retrying logic (with exponential backoff) that should make it robust to occasional bad responses from DAP servers.

  • You can control the width of the Dataset repr with xray.set_options. It can be used either as a context manager, in which case the default is restored outside the context:

    In [66]: ds = xray.Dataset({"x": np.arange(1000)})
    
    In [67]: with xray.set_options(display_width=40):
       ....:     print(ds)
       ....: 
    <xarray.Dataset>
    Dimensions:  (x: 1000)
    Coordinates:
      * x        (x) int64 0 1 2 ... 998 999
    Data variables:
        *empty*
    

    Or to set a global option:

    In [68]: xray.set_options(display_width=80)
    

    The default value for the display_width option is 80.

Deprecations

  • The method load_data() has been renamed to the more succinct xray.Dataset.load.

v0.4.1 (18 March 2015)

This release contains bug fixes and several new features. All changes should be fully backwards compatible.

Enhancements

  • New documentation sections on Time series data and Reading multi-file datasets.

  • xray.Dataset.resample lets you resample a dataset or data array to a new temporal resolution. The syntax is the same as pandas, except you need to supply the time dimension explicitly:

    In [69]: time = pd.date_range("2000-01-01", freq="6H", periods=10)
    
    In [70]: array = xray.DataArray(np.arange(10), [("time", time)])
    
    In [71]: array.resample("1D", dim="time")
    

    You can specify how to do the resampling with the how argument and other options such as closed and label let you control labeling:

    In [72]: array.resample("1D", dim="time", how="sum", label="right")
    

    If the desired temporal resolution is higher than the original data (upsampling), xray will insert missing values:

    In [73]: array.resample("3H", "time")
    
  • first and last methods on groupby objects let you take the first or last examples from each group along the grouped axis:

    In [74]: array.groupby("time.day").first()
    

    These methods combine well with resample:

    In [75]: array.resample("1D", dim="time", how="first")
    
  • xray.Dataset.swap_dims allows for easily swapping one dimension out for another:

    In [76]: ds = xray.Dataset({"x": range(3), "y": ("x", list("abc"))})
    
    In [77]: ds
    Out[77]: 
    <xarray.Dataset>
    Dimensions:  (x: 3)
    Coordinates:
      * x        (x) int64 0 1 2
    Data variables:
        y        (x) <U1 'a' 'b' 'c'
    
    In [78]: ds.swap_dims({"x": "y"})
    Out[78]: 
    <xarray.Dataset>
    Dimensions:  (y: 3)
    Coordinates:
        x        (y) int64 0 1 2
      * y        (y) <U1 'a' 'b' 'c'
    Data variables:
        *empty*
    

    This was possible in earlier versions of xray, but required some contortions.

  • xray.open_dataset and xray.Dataset.to_netcdf now accept an engine argument to explicitly select which underlying library (netcdf4 or scipy) is used for reading/writing a netCDF file.

Bug fixes

  • Fixed a bug where data netCDF variables read from disk with engine='scipy' could still be associated with the file on disk, even after closing the file (GH341). This manifested itself in warnings about mmapped arrays and segmentation faults (if the data was accessed).

  • Silenced spurious warnings about all-NaN slices when using nan-aware aggregation methods (GH344).

  • Dataset aggregations with keep_attrs=True now preserve attributes on data variables, not just the dataset itself.

  • Tests for xray now pass when run on Windows (GH360).

  • Fixed a regression in v0.4 where saving to netCDF could fail with the error ValueError: could not automatically determine time units.

v0.4 (2 March, 2015)

This is one of the biggest releases yet for xray: it includes some major changes that may break existing code, along with the usual collection of minor enhancements and bug fixes. On the plus side, this release includes all hitherto planned breaking changes, so the upgrade path for xray should be smoother going forward.

Breaking changes

  • We now automatically align index labels in arithmetic, dataset construction, merging and updating. This means the need for manually invoking methods like xray.align and xray.Dataset.reindex_like should be vastly reduced.

    For arithmetic, we align based on the intersection of labels:

    In [79]: lhs = xray.DataArray([1, 2, 3], [("x", [0, 1, 2])])
    
    In [80]: rhs = xray.DataArray([2, 3, 4], [("x", [1, 2, 3])])
    
    In [81]: lhs + rhs
    Out[81]: 
    <xarray.DataArray (x: 2)>
    array([4, 6])
    Coordinates:
      * x        (x) int64 1 2
    

    For dataset construction and merging, we align based on the union of labels:

    In [82]: xray.Dataset({"foo": lhs, "bar": rhs})
    Out[82]: 
    <xarray.Dataset>
    Dimensions:  (x: 4)
    Coordinates:
      * x        (x) int64 0 1 2 3
    Data variables:
        foo      (x) float64 1.0 2.0 3.0 nan
        bar      (x) float64 nan 2.0 3.0 4.0
    

    For update and __setitem__, we align based on the original object:

    In [83]: lhs.coords["rhs"] = rhs
    
    In [84]: lhs
    Out[84]: 
    <xarray.DataArray (x: 3)>
    array([1, 2, 3])
    Coordinates:
      * x        (x) int64 0 1 2
        rhs      (x) float64 nan 2.0 3.0
    
  • Aggregations like mean or median now skip missing values by default:

    In [85]: xray.DataArray([1, 2, np.nan, 3]).mean()
    Out[85]: 
    <xarray.DataArray ()>
    array(2.)
    

    You can turn this behavior off by supplying the keyword argument skipna=False.

    These operations are lightning fast thanks to integration with bottleneck, which is a new optional dependency for xray (numpy is used if bottleneck is not installed).

  • Scalar coordinates no longer conflict with constant arrays with the same value (e.g., in arithmetic, merging datasets and concat), even if they have different shape (GH243). For example, the coordinate c here persists through arithmetic, even though it has different shapes on each DataArray:

    In [86]: a = xray.DataArray([1, 2], coords={"c": 0}, dims="x")
    
    In [87]: b = xray.DataArray([1, 2], coords={"c": ("x", [0, 0])}, dims="x")
    
    In [88]: (a + b).coords
    Out[88]: 
    Coordinates:
        c        (x) int64 0 0
    

    This functionality can be controlled through the compat option, which has also been added to the xray.Dataset constructor.

  • Datetime shortcuts such as 'time.month' now return a DataArray with the name 'month', not 'time.month' (GH345). This makes it easier to index the resulting arrays when they are used with groupby:

    In [89]: time = xray.DataArray(
       ....:     pd.date_range("2000-01-01", periods=365), dims="time", name="time"
       ....: )
       ....: 
    
    In [90]: counts = time.groupby("time.month").count()
    
    In [91]: counts.sel(month=2)
    Out[91]: 
    <xarray.DataArray 'time' ()>
    array(29)
    Coordinates:
        month    int64 2
    

    Previously, you would need to use something like counts.sel(**{'time.month': 2}), which is much more awkward.

  • The season datetime shortcut now returns an array of string labels such as 'DJF':

    In [92]: ds = xray.Dataset({"t": pd.date_range("2000-01-01", periods=12, freq="M")})
    
    In [93]: ds["t.season"]
    Out[93]: 
    <xarray.DataArray 'season' (t: 12)>
    array(['DJF', 'DJF', 'MAM', ..., 'SON', 'SON', 'DJF'], dtype='<U3')
    Coordinates:
      * t        (t) datetime64[ns] 2000-01-31 2000-02-29 ... 2000-11-30 2000-12-31
    

    Previously, it returned numbered seasons 1 through 4.

  • We have updated our use of the terms of “coordinates” and “variables”. What were known in previous versions of xray as “coordinates” and “variables” are now referred to throughout the documentation as “coordinate variables” and “data variables”. This brings xray in closer alignment to CF Conventions. The only visible change besides the documentation is that Dataset.vars has been renamed Dataset.data_vars.

  • You will need to update your code if you have been ignoring deprecation warnings: methods and attributes that were deprecated in xray v0.3 or earlier (e.g., dimensions, attributes) have gone away.

Enhancements

  • Support for xray.Dataset.reindex with a fill method. This provides a useful shortcut for upsampling:

    In [94]: data = xray.DataArray([1, 2, 3], [("x", range(3))])
    
    In [95]: data.reindex(x=[0.5, 1, 1.5, 2, 2.5], method="pad")
    Out[95]: 
    <xarray.DataArray (x: 5)>
    array([1, 2, 2, 3, 3])
    Coordinates:
      * x        (x) float64 0.5 1.0 1.5 2.0 2.5
    

    This will be especially useful once pandas 0.16 is released, at which point xray will immediately support reindexing with method='nearest'.

  • Use functions that return generic ndarrays with DataArray.groupby.apply and Dataset.apply (GH327 and GH329). Thanks Jeff Gerard!

  • Consolidated the functionality of dumps (writing a dataset to a netCDF3 bytestring) into xray.Dataset.to_netcdf (GH333).

  • xray.Dataset.to_netcdf now supports writing to groups in netCDF4 files (GH333). It also finally has a full docstring – you should read it!

  • xray.open_dataset and xray.Dataset.to_netcdf now work on netCDF3 files when netcdf4-python is not installed as long as scipy is available (GH333).

  • The new xray.Dataset.drop and xray.DataArray.drop methods makes it easy to drop explicitly listed variables or index labels:

    # drop variables
    In [96]: ds = xray.Dataset({"x": 0, "y": 1})
    
    In [97]: ds.drop("x")
    Out[97]: 
    <xarray.Dataset>
    Dimensions:  ()
    Data variables:
        y        int64 1
    
    # drop index labels
    In [98]: arr = xray.DataArray([1, 2, 3], coords=[("x", list("abc"))])
    
    In [99]: arr.drop(["a", "c"], dim="x")
    Out[99]: 
    <xarray.DataArray (x: 1)>
    array([2])
    Coordinates:
      * x        (x) <U1 'b'
    
  • xray.Dataset.broadcast_equals has been added to correspond to the new compat option.

  • Long attributes are now truncated at 500 characters when printing a dataset (GH338). This should make things more convenient for working with datasets interactively.

  • Added a new documentation example, Calculating Seasonal Averages from Time Series of Monthly Means. Thanks Joe Hamman!

Bug fixes

  • Several bug fixes related to decoding time units from netCDF files (GH316, GH330). Thanks Stefan Pfenninger!

  • xray no longer requires decode_coords=False when reading datasets with unparsable coordinate attributes (GH308).

  • Fixed DataArray.loc indexing with ... (GH318).

  • Fixed an edge case that resulted in an error when reindexing multi-dimensional variables (GH315).

  • Fixed slicing with negative step sizes (GH312).

  • Fixed invalid conversion of string arrays to numeric dtype (GH305).

  • Fixed repr() on dataset objects with non-standard dates (GH347).

Deprecations

  • dump and dumps have been deprecated in favor of xray.Dataset.to_netcdf.

  • drop_vars has been deprecated in favor of xray.Dataset.drop.

Future plans

The biggest feature I’m excited about working toward in the immediate future is supporting out-of-core operations in xray using Dask, a part of the Blaze project. For a preview of using Dask with weather data, read this blog post by Matthew Rocklin. See GH328 for more details.

v0.3.2 (23 December, 2014)

This release focused on bug-fixes, speedups and resolving some niggling inconsistencies.

There are a few cases where the behavior of xray differs from the previous version. However, I expect that in almost all cases your code will continue to run unmodified.

Warning

xray now requires pandas v0.15.0 or later. This was necessary for supporting TimedeltaIndex without too many painful hacks.

Backwards incompatible changes

  • Arrays of datetime.datetime objects are now automatically cast to datetime64[ns] arrays when stored in an xray object, using machinery borrowed from pandas:

    In [100]: from datetime import datetime
    
    In [101]: xray.Dataset({"t": [datetime(2000, 1, 1)]})
    Out[101]: 
    <xarray.Dataset>
    Dimensions:  (t: 1)
    Coordinates:
      * t        (t) datetime64[ns] 2000-01-01
    Data variables:
        *empty*
    
  • xray now has support (including serialization to netCDF) for TimedeltaIndex. datetime.timedelta objects are thus accordingly cast to timedelta64[ns] objects when appropriate.

  • Masked arrays are now properly coerced to use NaN as a sentinel value (GH259).

Enhancements

  • Due to popular demand, we have added experimental attribute-style access as a shortcut for dataset variables, coordinates and attributes:

    In [102]: ds = xray.Dataset({"tmin": ([], 25, {"units": "celsius"})})
    
    In [103]: ds.tmin.units
    Out[103]: 'celsius'
    

    Tab-completion for these variables should work in editors such as IPython. However, setting variables or attributes in this fashion is not yet supported because there are some unresolved ambiguities (GH300).

  • You can now use a dictionary for indexing with labeled dimensions. This provides a safe way to do assignment with labeled dimensions:

    In [104]: array = xray.DataArray(np.zeros(5), dims=["x"])
    
    In [105]: array[dict(x=slice(3))] = 1
    
    In [106]: array
    Out[106]: 
    <xarray.DataArray (x: 5)>
    array([1., 1., 1., 0., 0.])
    Dimensions without coordinates: x
    
  • Non-index coordinates can now be faithfully written to and restored from netCDF files. This is done according to CF conventions when possible by using the coordinates attribute on a data variable. When not possible, xray defines a global coordinates attribute.

  • Preliminary support for converting xray.DataArray objects to and from CDAT cdms2 variables.

  • We sped up any operation that involves creating a new Dataset or DataArray (e.g., indexing, aggregation, arithmetic) by 30 to 50%. The full speed-up requires cyordereddict to be installed.

Bug fixes

  • Fix for to_dataframe() with 0d string/object coordinates (GH287)

  • Fix for to_netcdf with 0d string variable (GH284)

  • Fix writing datetime64 arrays to netcdf if NaT is present (GH270)

  • Fix align silently upcasts data arrays when NaNs are inserted (GH264)

Future plans

  • I am contemplating switching to the terms “coordinate variables” and “data variables” instead of the (currently used) “coordinates” and “variables”, following their use in CF Conventions (GH293). This would mostly have implications for the documentation, but I would also change the Dataset attribute vars to data.

  • I am no longer certain that automatic label alignment for arithmetic would be a good idea for xray – it is a feature from pandas that I have not missed (GH186).

  • The main API breakage that I do anticipate in the next release is finally making all aggregation operations skip missing values by default (GH130). I’m pretty sick of writing ds.reduce(np.nanmean, 'time').

  • The next version of xray (0.4) will remove deprecated features and aliases whose use currently raises a warning.

If you have opinions about any of these anticipated changes, I would love to hear them – please add a note to any of the referenced GitHub issues.

v0.3.1 (22 October, 2014)

This is mostly a bug-fix release to make xray compatible with the latest release of pandas (v0.15).

We added several features to better support working with missing values and exporting xray objects to pandas. We also reorganized the internal API for serializing and deserializing datasets, but this change should be almost entirely transparent to users.

Other than breaking the experimental DataStore API, there should be no backwards incompatible changes.

New features

  • Added xray.Dataset.count and xray.Dataset.dropna methods, copied from pandas, for working with missing values (GH247, GH58).
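
A minimal sketch of both methods, using the modern xarray import name:

```python
import numpy as np
import xarray as xr  # modern name of the xray package

da = xr.DataArray([1.0, np.nan, 3.0, np.nan], dims="x")
n_valid = int(da.count())    # number of non-missing values
cleaned = da.dropna("x")     # drop entries that are NaN along x
```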

  • Added xray.DataArray.to_pandas for converting a data array into the pandas object with the same dimensionality (1D to Series, 2D to DataFrame, etc.) (GH255).
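
For example (with the modern xarray import name), a 1D array converts to a pandas Series indexed by its coordinate:

```python
import pandas as pd
import xarray as xr  # modern name of the xray package

da = xr.DataArray([10, 20], coords=[("x", ["a", "b"])])
series = da.to_pandas()  # 1D DataArray -> pandas Series indexed by x
```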

  • Support for reading gzipped netCDF3 files (GH239).

  • Reduced memory usage when writing netCDF files (GH251).

  • ‘missing_value’ is now supported as an alias for the ‘_FillValue’ attribute on netCDF variables (GH245).

  • Trivial indexes, equivalent to range(n) where n is the length of the dimension, are no longer written to disk (GH245).

Bug fixes

  • Compatibility fixes for pandas v0.15 (GH262).

  • Fixes for display and indexing of NaT (not-a-time) (GH238, GH240)

  • Fixed slicing by label when an argument is a data array (GH250).

  • Test data is now shipped with the source distribution (GH253).

  • Ensure order does not matter when doing arithmetic with scalar data arrays (GH254).

  • Order of dimensions preserved with DataArray.to_dataframe (GH260).

v0.3 (21 September 2014)

New features

  • Revamped coordinates: “coordinates” now refer to all arrays that are not used to index a dimension. Coordinates are intended to allow for keeping track of arrays of metadata that describe the grid on which the points in “variable” arrays lie. They are preserved (when unambiguous) even through mathematical operations.

  • Dataset math xray.Dataset objects now support all arithmetic operations directly. Dataset-array operations map across all dataset variables; dataset-dataset operations act on each pair of variables with the same name.
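
A sketch of both flavors of dataset arithmetic, using the modern xarray import name:

```python
import xarray as xr  # modern name of the xray package

ds = xr.Dataset({"u": ("x", [1.0, 2.0]), "v": ("x", [3.0, 4.0])})
doubled = ds * 2   # dataset-scalar: the operation applies to every data variable
summed = ds + ds   # dataset-dataset: variables are paired up by name
```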

  • GroupBy math: This provides a convenient shortcut for normalizing by the average value of a group.
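
For example, subtracting each group's mean gives within-group anomalies; the reduced result is broadcast back over the members of each group (sketched with the modern xarray import name):

```python
import numpy as np
import xarray as xr  # modern name of the xray package

da = xr.DataArray(
    [1.0, 2.0, 10.0, 20.0],
    dims="x",
    coords={"label": ("x", ["a", "a", "b", "b"])},
)
# GroupBy arithmetic: subtract the per-group mean from every member of the group.
anomaly = da.groupby("label") - da.groupby("label").mean()
```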

  • The dataset __repr__ method has been entirely overhauled; dataset objects now show their values when printed.

  • You can now index a dataset with a list of variables to return a new dataset: ds[['foo', 'bar']].
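
A minimal example, using the modern xarray import name:

```python
import xarray as xr  # modern name of the xray package

ds = xr.Dataset({"foo": 1, "bar": 2, "baz": 3})
subset = ds[["foo", "bar"]]  # a new Dataset containing only the listed variables
```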

Backwards incompatible changes

  • Dataset.__eq__ and Dataset.__ne__ are now element-wise operations instead of comparing all values to obtain a single boolean. Use the method xray.Dataset.equals instead.
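
The distinction, sketched with the modern xarray import name: == now yields a dataset of per-element booleans, while equals returns a single boolean:

```python
import xarray as xr  # modern name of the xray package

a = xr.Dataset({"v": ("x", [1, 2, 3])})
b = xr.Dataset({"v": ("x", [1, 2, 4])})

elementwise = a == b        # a Dataset of booleans, one per element
whole = a.equals(b)         # a single bool for the whole Dataset
same = a.equals(a.copy())   # True: identical values and dimensions
```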

Deprecations

  • Dataset.noncoords is deprecated: use Dataset.vars instead.

  • Dataset.select_vars deprecated: index a Dataset with a list of variable names instead.

  • DataArray.select_vars and DataArray.drop_vars deprecated: use xray.DataArray.reset_coords instead.

v0.2 (14 August 2014)

This is a major release that includes some new features and quite a few bug fixes. Here are the highlights:

  • There is now a direct constructor for DataArray objects, which makes it possible to create a DataArray without using a Dataset. This is highlighted in the refreshed tutorial.

  • You can perform aggregation operations like mean directly on xray.Dataset objects, thanks to Joe Hamman. These aggregation methods also work on grouped datasets.

  • xray now works on Python 2.6, thanks to Anna Kuznetsova.

  • A number of methods and attributes were given more sensible (usually shorter) names: labeled -> sel, indexed -> isel, select -> select_vars, unselect -> drop_vars, dimensions -> dims, coordinates -> coords, attributes -> attrs.

  • New xray.Dataset.load_data and xray.Dataset.close methods for datasets facilitate lower level of control of data loaded from disk.

v0.1.1 (20 May 2014)

xray 0.1.1 is a bug-fix release that includes changes that should be almost entirely backwards compatible with v0.1:

  • Python 3 support (GH53)

  • Required numpy version relaxed to 1.7 (GH129)

  • Return numpy.datetime64 arrays for non-standard calendars (GH126)

  • Support for opening datasets associated with NetCDF4 groups (GH127)

  • Bug-fixes for concatenating datetime arrays (GH134)

Special thanks to new contributors Thomas Kluyver, Joe Hamman and Alistair Miles.

v0.1 (2 May 2014)

Initial release.