What’s New in 0.24.0 (Month XX, 2018)

Warning

Starting January 1, 2019, pandas feature releases will support Python 3 only. See Plan for dropping Python 2.7 for more.

These are the changes in pandas 0.24.0. See Release Notes for a full changelog including other versions of pandas.

New features

  • merge() now directly allows merge between objects of type DataFrame and named Series, without the need to convert the Series object into a DataFrame beforehand (GH21220)
  • ExcelWriter now accepts mode as a keyword argument, enabling append to existing workbooks when using the openpyxl engine (GH3441)
  • FrozenList has gained the .union() and .difference() methods. This functionality greatly simplifies groupby operations that rely on explicitly excluding certain columns. See Splitting an object into groups for more information (GH15475, GH15506).
  • DataFrame.to_parquet() now accepts index as an argument, allowing the user to override the engine’s default behavior to include or omit the dataframe’s indexes from the resulting Parquet file. (GH20768)
  • DataFrame.corr() and Series.corr() now accept a callable for generic calculation methods of correlation, e.g. histogram intersection (GH22684)
  • DataFrame.to_string() now accepts decimal as an argument, allowing the user to specify which decimal separator should be used in the output. (GH23614)
  • read_feather() now accepts columns as an argument, allowing the user to specify which columns should be read. (GH24025)
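As an illustration of the first bullet, a DataFrame can now be merged directly with a named Series. This is a minimal sketch; the column and key names here are invented for the example:

```python
import pandas as pd

df = pd.DataFrame({"key": ["a", "b", "c"], "v1": [1, 2, 3]})
ser = pd.Series([10, 20], index=["a", "b"], name="v2")

# The named Series is treated like a one-column DataFrame whose column
# name is the Series name, so no explicit to_frame() call is needed.
result = pd.merge(df, ser, left_on="key", right_index=True)
```

The result has columns `key`, `v1`, and `v2`, with the usual inner-join semantics on the matched keys.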

Accessing the values in a Series or Index

Series.array and Index.array have been added for extracting the array backing a Series or Index.

In [1]: idx = pd.period_range('2000', periods=4)

In [2]: idx.array
Out[2]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

In [3]: pd.Series(idx).array
Out[3]: 
<PeriodArray>
['2000-01-01', '2000-01-02', '2000-01-03', '2000-01-04']
Length: 4, dtype: period[D]

Historically, this would have been done with series.values, but with .values it was unclear whether the returned value would be the actual array, some transformation of it, or one of pandas' custom arrays (like Categorical). For example, with PeriodIndex, .values generates a new ndarray of period objects each time.

In [4]: id(idx.values)
Out[4]: 139809338449392

In [5]: id(idx.values)
Out[5]: 139809338446352

If you need an actual NumPy array, use Series.to_numpy() or Index.to_numpy().

In [6]: idx.to_numpy()
Out[6]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

In [7]: pd.Series(idx).to_numpy()
Out[7]: 
array([Period('2000-01-01', 'D'), Period('2000-01-02', 'D'),
       Period('2000-01-03', 'D'), Period('2000-01-04', 'D')], dtype=object)

For Series and Indexes backed by normal NumPy arrays, this will be the same thing (and the same as .values).

In [8]: ser = pd.Series([1, 2, 3])

In [9]: ser.array
Out[9]: array([1, 2, 3])

In [10]: ser.to_numpy()
Out[10]: array([1, 2, 3])

We haven’t removed or deprecated Series.values or DataFrame.values, but we recommend using .array or .to_numpy() instead.

See the dtypes documentation and Attributes and underlying data for more.

ExtensionArray operator support

A Series based on an ExtensionArray now supports arithmetic and comparison operators (GH19577). There are two approaches for providing operator support for an ExtensionArray:

  1. Define each of the operators on your ExtensionArray subclass.
  2. Use an operator implementation from pandas that depends on operators that are already defined on the underlying elements (scalars) of the ExtensionArray.

See the ExtensionArray Operator Support documentation section for details on both ways of adding operator support.
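For the second approach, pandas provides ExtensionScalarOpsMixin, whose class methods attach operators built from the scalar elements' own operators. A minimal sketch follows; the class body is elided, and a real subclass must implement the full ExtensionArray interface (_from_sequence, dtype, __len__, __getitem__, take, copy, isna, etc.):

```python
from pandas.api.extensions import ExtensionArray, ExtensionScalarOpsMixin

class MyArray(ExtensionArray, ExtensionScalarOpsMixin):
    # A real implementation must define the required ExtensionArray
    # methods; they are omitted here to keep the sketch short.
    pass

# Attach __add__, __sub__, ... and __lt__, __le__, ... generated from
# the operators already defined on the underlying scalar elements.
MyArray._add_arithmetic_ops()
MyArray._add_comparison_ops()
```

After the two class-method calls, arithmetic and comparison dunder methods are defined on the class itself.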

Optional Integer NA Support

Pandas has gained the ability to hold integer dtypes with missing values. This long-requested feature is enabled through the use of extension types. Here is an example of the usage.

We can construct a Series with the specified dtype. The dtype string Int64 is a pandas ExtensionDtype. Specifying a list or array using the traditional missing value marker of np.nan will infer to integer dtype. The display of the Series will also use NaN to indicate missing values in string outputs. (GH20700, GH20747, GH22441, GH21789, GH22346)

In [11]: s = pd.Series([1, 2, np.nan], dtype='Int64')

In [12]: s
Out[12]: 
0      1
1      2
2    NaN
Length: 3, dtype: Int64

Operations on these dtypes will propagate NaN, as in other pandas operations.

# arithmetic
In [13]: s + 1
Out[13]: 
0      2
1      3
2    NaN
Length: 3, dtype: Int64

# comparison
In [14]: s == 1
Out[14]: 
0     True
1    False
2    False
Length: 3, dtype: bool

# indexing
In [15]: s.iloc[1:3]
Out[15]: 
1      2
2    NaN
Length: 2, dtype: Int64

# operate with other dtypes
In [16]: s + s.iloc[1:3].astype('Int8')
Out[16]: 
0    NaN
1      4
2    NaN
Length: 3, dtype: Int64

# coerce when needed
In [17]: s + 0.01
Out[17]: 
0    1.01
1    2.01
2     NaN
Length: 3, dtype: float64

These dtypes can operate as part of a DataFrame.

In [18]: df = pd.DataFrame({'A': s, 'B': [1, 1, 3], 'C': list('aab')})

In [19]: df
Out[19]: 
     A  B  C
0    1  1  a
1    2  1  a
2  NaN  3  b

[3 rows x 3 columns]

In [20]: df.dtypes
Out[20]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

These dtypes can be merged, reshaped, and cast.

In [21]: pd.concat([df[['A']], df[['B', 'C']]], axis=1).dtypes
Out[21]: 
A     Int64
B     int64
C    object
Length: 3, dtype: object

In [22]: df['A'].astype(float)
Out[22]: 
0    1.0
1    2.0
2    NaN
Name: A, Length: 3, dtype: float64

Reduction and groupby operations such as ‘sum’ work.

In [23]: df.sum()
Out[23]: 
A      3
B      5
C    aab
Length: 3, dtype: object

In [24]: df.groupby('B').A.sum()
Out[24]: 
B
1    3
3    0
Name: A, Length: 2, dtype: Int64

Warning

The Integer NA support currently uses the capitalized dtype version, e.g. Int8 as compared to the traditional int8. This may be changed at a future date.

read_html Enhancements

read_html() previously ignored colspan and rowspan attributes. Now it understands them, treating them as sequences of cells with the same value. (GH17054)

In [25]: result = pd.read_html("""
   ....:   <table>
   ....:     <thead>
   ....:       <tr>
   ....:         <th>A</th><th>B</th><th>C</th>
   ....:       </tr>
   ....:     </thead>
   ....:     <tbody>
   ....:       <tr>
   ....:         <td colspan="2">1</td><td>2</td>
   ....:       </tr>
   ....:     </tbody>
   ....:   </table>""")
   ....: 

Previous Behavior:

In [13]: result
Out [13]:
[   A  B   C
 0  1  2 NaN]

Current Behavior:

In [26]: result
Out[26]: 
[   A  B  C
 0  1  1  2
 
 [1 rows x 3 columns]]

Storing Interval and Period Data in Series and DataFrame

Interval and Period data may now be stored in a Series or DataFrame, in addition to an IntervalIndex and PeriodIndex like previously (GH19453, GH22862).

In [27]: ser = pd.Series(pd.interval_range(0, 5))

In [28]: ser
Out[28]: 
0    (0, 1]
1    (1, 2]
2    (2, 3]
3    (3, 4]
4    (4, 5]
Length: 5, dtype: interval

In [29]: ser.dtype
Out[29]: interval[int64]

And for periods:

In [30]: pser = pd.Series(pd.period_range("2000", freq="D", periods=5))

In [31]: pser
Out[31]: 
0    2000-01-01
1    2000-01-02
2    2000-01-03
3    2000-01-04
4    2000-01-05
Length: 5, dtype: period[D]

In [32]: pser.dtype
Out[32]: period[D]

Previously, these would be cast to a NumPy array with object dtype. In general, this should result in better performance when storing an array of intervals or periods in a Series or column of a DataFrame.

Use Series.array to extract the underlying array of intervals or periods from the Series:

ser.array
pser.array

Warning

For backwards compatibility, Series.values continues to return a NumPy array of objects for Interval and Period data. We recommend using Series.array when you need the array of data stored in the Series, and Series.to_numpy() when you know you need a NumPy array. See the dtypes documentation and Attributes and underlying data for more.

New Styler.pipe() method

The Styler class has gained a pipe() method (GH23229). This provides a convenient way to apply users’ predefined styling functions, and can help reduce “boilerplate” when using DataFrame styling functionality repeatedly within a notebook.

In [33]: df = pd.DataFrame({'N': [1250, 1500, 1750], 'X': [0.25, 0.35, 0.50]})

In [34]: def format_and_align(styler):
   ....:     return (styler.format({'N': '{:,}', 'X': '{:.1%}'})
   ....:                   .set_properties(**{'text-align': 'right'}))
   ....: 

In [35]: df.style.pipe(format_and_align).set_caption('Summary of results.')
Out[35]: <pandas.io.formats.style.Styler at 0x7f27d2b33208>

Similar methods already exist for other classes in pandas, including DataFrame.pipe(), GroupBy.pipe(), and Resampler.pipe().

Joining with two multi-indexes

DataFrame.merge() and DataFrame.join() can now be used to join multi-indexed DataFrame instances on overlapping index levels (GH6360)

See the Merge, join, and concatenate documentation section.

In [36]: index_left = pd.MultiIndex.from_tuples([('K0', 'X0'), ('K0', 'X1'),
   ....:                                        ('K1', 'X2')],
   ....:                                        names=['key', 'X'])
   ....: 

In [37]: left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
   ....:                      'B': ['B0', 'B1', 'B2']}, index=index_left)
   ....: 

In [38]: index_right = pd.MultiIndex.from_tuples([('K0', 'Y0'), ('K1', 'Y1'),
   ....:                                         ('K2', 'Y2'), ('K2', 'Y3')],
   ....:                                         names=['key', 'Y'])
   ....: 

In [39]: right = pd.DataFrame({'C': ['C0', 'C1', 'C2', 'C3'],
   ....:                       'D': ['D0', 'D1', 'D2', 'D3']}, index=index_right)
   ....: 

In [40]: left.join(right)
Out[40]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

For earlier versions this can be done using the following.

In [41]: pd.merge(left.reset_index(), right.reset_index(),
   ....:          on=['key'], how='inner').set_index(['key', 'X', 'Y'])
   ....: 
Out[41]: 
            A   B   C   D
key X  Y                 
K0  X0 Y0  A0  B0  C0  D0
    X1 Y0  A1  B1  C0  D0
K1  X2 Y1  A2  B2  C1  D1

[3 rows x 4 columns]

Renaming names in a MultiIndex

DataFrame.rename_axis() now supports index and columns arguments and Series.rename_axis() supports index argument (GH19978)

This change allows a dictionary to be passed so that some of the names of a MultiIndex can be changed.

Example:

In [42]: mi = pd.MultiIndex.from_product([list('AB'), list('CD'), list('EF')],
   ....:                                 names=['AB', 'CD', 'EF'])
   ....: 

In [43]: df = pd.DataFrame([i for i in range(len(mi))], index=mi, columns=['N'])

In [44]: df
Out[44]: 
          N
AB CD EF   
A  C  E   0
      F   1
   D  E   2
      F   3
B  C  E   4
      F   5
   D  E   6
      F   7

[8 rows x 1 columns]

In [45]: df.rename_axis(index={'CD': 'New'})
Out[45]: 
           N
AB New EF   
A  C   E   0
       F   1
   D   E   2
       F   3
B  C   E   4
       F   5
   D   E   6
       F   7

[8 rows x 1 columns]

See the advanced docs on renaming for more details.

Other Enhancements

Backwards incompatible API changes

Dependencies have increased minimum versions

We have updated our minimum supported versions of dependencies (GH21242, GH18742, GH23774). If installed, we now require:

Package       Minimum Version   Required
numpy         1.12.0            X
bottleneck    1.2.0
fastparquet   0.1.2
matplotlib    2.0.0
numexpr       2.6.1
pandas-gbq    0.8.0
pyarrow       0.7.0
pytables      3.4.2
scipy         0.18.1
xlrd          1.0.0
pytest (dev)  3.6

Additionally, we no longer depend on feather-format for feather-based storage; it has been replaced with references to pyarrow (GH21639 and GH23053).
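A simplified way to check an installed version string against these minimums is sketched below. It handles dotted numeric versions only, and the helper name is invented for the example; real code should use a proper version parser such as packaging.version:

```python
def meets_minimum(installed, required):
    """Compare dotted numeric version strings, e.g. '1.12.0' >= '1.9.3'."""
    # Tuple comparison on the numeric parts matches dotted-version order.
    as_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return as_tuple(installed) >= as_tuple(required)

meets_minimum("1.12.0", "1.12.0")  # True: numpy exactly meets the new floor
meets_minimum("0.17.0", "0.18.1")  # False: below the scipy minimum
```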

os.linesep is used for line_terminator of DataFrame.to_csv

DataFrame.to_csv() now uses os.linesep rather than '\n' for the default line terminator (GH20353). This change only affects Windows, where '\r\n' was used as the line terminator even when '\n' was passed in line_terminator.

Previous Behavior on Windows:

In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
   ...:                      "string_with_crlf": ["a\r\nbc"]})

In [2]: # When passing file PATH to to_csv,
   ...: # line_terminator does not work, and csv is saved with '\r\n'.
   ...: # Also, this converts all '\n's in the data to '\r\n'.
   ...: data.to_csv("test.csv", index=False, line_terminator='\n')

In [3]: with open("test.csv", mode='rb') as f:
   ...:     print(f.read())
Out[3]: b'string_with_lf,string_with_crlf\r\n"a\r\nbc","a\r\r\nbc"\r\n'

In [4]: # When passing file OBJECT with newline option to
   ...: # to_csv, line_terminator works.
   ...: with open("test2.csv", mode='w', newline='\n') as f:
   ...:     data.to_csv(f, index=False, line_terminator='\n')

In [5]: with open("test2.csv", mode='rb') as f:
   ...:     print(f.read())
Out[5]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'

New Behavior on Windows:

  • By passing line_terminator explicitly, line terminator is set to that character.

  • The value of line_terminator only affects the line terminator of CSV, so it does not change the value inside the data.

    In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
       ...:                      "string_with_crlf": ["a\r\nbc"]})
    
    In [2]: data.to_csv("test.csv", index=False, line_terminator='\n')
    
    In [3]: with open("test.csv", mode='rb') as f:
       ...:     print(f.read())
    Out[3]: b'string_with_lf,string_with_crlf\n"a\nbc","a\r\nbc"\n'
    
  • On Windows, the value of os.linesep is '\r\n', so if line_terminator is not set, '\r\n' is used for line terminator.

  • Again, it does not affect the value inside the data.

    In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
       ...:                      "string_with_crlf": ["a\r\nbc"]})
    
    In [2]: data.to_csv("test.csv", index=False)
    
    In [3]: with open("test.csv", mode='rb') as f:
       ...:     print(f.read())
    Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
    
  • For file objects, specifying newline is not sufficient to set the line terminator. You must pass line_terminator explicitly, even in this case.

    In [1]: data = pd.DataFrame({"string_with_lf": ["a\nbc"],
       ...:                      "string_with_crlf": ["a\r\nbc"]})
    
    In [2]: with open("test2.csv", mode='w', newline='\n') as f:
       ...:     data.to_csv(f, index=False)
    
    In [3]: with open("test2.csv", mode='rb') as f:
       ...:     print(f.read())
    Out[3]: b'string_with_lf,string_with_crlf\r\n"a\nbc","a\r\nbc"\r\n'
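The default-terminator logic described above can be sketched in plain Python. write_rows is an invented helper for illustration, not pandas API:

```python
import os

def write_rows(rows, line_terminator=None):
    """Join CSV rows, defaulting to the platform convention like the
    new to_csv default (os.linesep: '\\n' on POSIX, '\\r\\n' on Windows)."""
    terminator = line_terminator if line_terminator is not None else os.linesep
    return terminator.join(rows) + terminator

# An explicit terminator always wins, on any platform:
write_rows(["a,b", "1,2"], line_terminator="\n")  # 'a,b\n1,2\n'
```

With no explicit terminator, the output ends in os.linesep, which is '\r\n' only on Windows.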
    

Parsing Datetime Strings with Timezone Offsets

Previously, parsing datetime strings with UTC offsets with to_datetime() or DatetimeIndex would automatically convert the datetime to UTC without timezone localization. This is inconsistent from parsing the same datetime string with Timestamp which would preserve the UTC offset in the tz attribute. Now, to_datetime() preserves the UTC offset in the tz attribute when all the datetime strings have the same UTC offset (GH17697, GH11736, GH22457)

Previous Behavior:

In [2]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[2]: Timestamp('2015-11-18 10:00:00')

In [3]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[3]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

# Different UTC offsets would automatically convert the datetimes to UTC (without a UTC timezone)
In [4]: pd.to_datetime(["2015-11-18 15:30:00+05:30", "2015-11-18 16:30:00+06:30"])
Out[4]: DatetimeIndex(['2015-11-18 10:00:00', '2015-11-18 10:00:00'], dtype='datetime64[ns]', freq=None)

Current Behavior:

In [46]: pd.to_datetime("2015-11-18 15:30:00+05:30")
Out[46]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

In [47]: pd.Timestamp("2015-11-18 15:30:00+05:30")
Out[47]: Timestamp('2015-11-18 15:30:00+0530', tz='pytz.FixedOffset(330)')

Parsing datetime strings with the same UTC offset will preserve the UTC offset in the tz

In [48]: pd.to_datetime(["2015-11-18 15:30:00+05:30"] * 2)
Out[48]: DatetimeIndex(['2015-11-18 15:30:00+05:30', '2015-11-18 15:30:00+05:30'], dtype='datetime64[ns, pytz.FixedOffset(330)]', freq=None)

Parsing datetime strings with different UTC offsets will now create an Index of datetime.datetime objects with different UTC offsets

In [49]: idx = pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                       "2015-11-18 16:30:00+06:30"])
   ....: 

In [50]: idx
Out[50]: Index([2015-11-18 15:30:00+05:30, 2015-11-18 16:30:00+06:30], dtype='object')

In [51]: idx[0]
Out[51]: datetime.datetime(2015, 11, 18, 15, 30, tzinfo=tzoffset(None, 19800))

In [52]: idx[1]
Out[52]: datetime.datetime(2015, 11, 18, 16, 30, tzinfo=tzoffset(None, 23400))

Passing utc=True will mimic the previous behavior but will correctly indicate that the dates have been converted to UTC

In [53]: pd.to_datetime(["2015-11-18 15:30:00+05:30",
   ....:                 "2015-11-18 16:30:00+06:30"], utc=True)
   ....: 
Out[53]: DatetimeIndex(['2015-11-18 10:00:00+00:00', '2015-11-18 10:00:00+00:00'], dtype='datetime64[ns, UTC]', freq=None)

CalendarDay Offset

Day and the associated frequency alias 'D' were documented to represent a calendar day; however, arithmetic and operations with Day sometimes respected absolute time instead (i.e. Day(n) acted identically to Timedelta(days=n)).

Previous Behavior:

In [2]: ts = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')

# Respects calendar arithmetic
In [3]: pd.date_range(start=ts, freq='D', periods=3)
Out[3]:
DatetimeIndex(['2016-10-30 00:00:00+03:00', '2016-10-31 00:00:00+02:00',
               '2016-11-01 00:00:00+02:00'],
              dtype='datetime64[ns, Europe/Helsinki]', freq='D')

# Respects absolute arithmetic
In [4]: ts + pd.tseries.frequencies.to_offset('D')
Out[4]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')

CalendarDay and the associated frequency alias 'CD' are now available and respect calendar day arithmetic, while Day and the frequency alias 'D' will now respect absolute time (GH22274, GH20596, GH16980, GH8774). See the documentation here for more information.

Addition with CalendarDay across a daylight savings time transition:

In [54]: ts = pd.Timestamp('2016-10-30 00:00:00', tz='Europe/Helsinki')

In [55]: ts + pd.offsets.Day(1)
Out[55]: Timestamp('2016-10-30 23:00:00+0200', tz='Europe/Helsinki')

In [56]: ts + pd.offsets.CalendarDay(1)
Out[56]: Timestamp('2016-10-31 00:00:00+0200', tz='Europe/Helsinki')

Time values in dt.end_time and to_timestamp(how='end')

The time values in Period and PeriodIndex objects are now set to ‘23:59:59.999999999’ when calling Series.dt.end_time, Period.end_time, PeriodIndex.end_time, Period.to_timestamp() with how='end', or PeriodIndex.to_timestamp() with how='end' (GH17157)

Previous Behavior:

In [2]: p = pd.Period('2017-01-01', 'D')
In [3]: pi = pd.PeriodIndex([p])

In [4]: pd.Series(pi).dt.end_time[0]
Out[4]: Timestamp(2017-01-01 00:00:00)

In [5]: p.end_time
Out[5]: Timestamp(2017-01-01 23:59:59.999999999)

Current Behavior:

Calling Series.dt.end_time will now result in a time of ‘23:59:59.999999999’, as is the case with Period.end_time, for example:

In [57]: p = pd.Period('2017-01-01', 'D')

In [58]: pi = pd.PeriodIndex([p])

In [59]: pd.Series(pi).dt.end_time[0]
Out[59]: Timestamp('2017-01-01 23:59:59.999999999')

In [60]: p.end_time
Out[60]: Timestamp('2017-01-01 23:59:59.999999999')

Sparse Data Structure Refactor

SparseArray, the array backing SparseSeries and the columns in a SparseDataFrame, is now an extension array (GH21978, GH19056, GH22835). To conform to this interface and for consistency with the rest of pandas, some API breaking changes were made:

  • SparseArray is no longer a subclass of numpy.ndarray. To convert a SparseArray to a NumPy array, use numpy.asarray().
  • SparseArray.dtype and SparseSeries.dtype are now instances of SparseDtype, rather than np.dtype. Access the underlying dtype with SparseDtype.subtype.
  • numpy.asarray(sparse_array) now returns a dense array with all the values, not just the non-fill-value values (GH14167)
  • SparseArray.take now matches the API of pandas.api.extensions.ExtensionArray.take() (GH19506):
    • The default value of allow_fill has changed from False to True.
    • The out and mode parameters are no longer accepted (previously, this raised if they were specified).
    • Passing a scalar for indices is no longer allowed.
  • The result of concatenating a mix of sparse and dense Series is a Series with sparse values, rather than a SparseSeries.
  • SparseDataFrame.combine and DataFrame.combine_first no longer support combining a sparse column with a dense column while preserving the sparse subtype. The result will be an object-dtype SparseArray.
  • Setting SparseArray.fill_value to a fill value with a different dtype is now allowed.
  • DataFrame[column] is now a Series with sparse values, rather than a SparseSeries, when slicing a single column with sparse values (GH23559).
  • The result of Series.where() is now a Series with sparse values, like with other extension arrays (GH24077)
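The first and third bullets can be illustrated with a short sketch, using the pd.arrays.SparseArray constructor:

```python
import numpy as np
import pandas as pd

sparse = pd.arrays.SparseArray([0, 0, 1, 2])

# No longer an ndarray subclass...
isinstance(sparse, np.ndarray)  # False

# ...and asarray densifies, returning the fill values too.
np.asarray(sparse)  # array([0, 0, 1, 2])
```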

Some new warnings are issued for operations that require or are likely to materialize a large dense array:

  • A errors.PerformanceWarning is issued when using fillna with a method, as a dense array is constructed to create the filled array. Filling with a value is the efficient way to fill a sparse array.
  • A errors.PerformanceWarning is now issued when concatenating sparse Series with differing fill values. The fill value from the first sparse array continues to be used.

In addition to these API breaking changes, many performance improvements and bug fixes have been made.

Finally, a Series.sparse accessor was added to provide sparse-specific methods like Series.sparse.from_coo().

In [61]: s = pd.Series([0, 0, 1, 1, 1], dtype='Sparse[int]')

In [62]: s.sparse.density
Out[62]: 0.6

Raise ValueError in DataFrame.to_dict(orient='index')

DataFrame.to_dict() now raises a ValueError when used with orient='index' and a non-unique index, instead of silently losing data (GH22801)

In [63]: df = pd.DataFrame({'a': [1, 2], 'b': [0.5, 0.75]}, index=['A', 'A'])

In [64]: df
Out[64]: 
   a     b
A  1  0.50
A  2  0.75

[2 rows x 2 columns]

In [65]: df.to_dict(orient='index')
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-65-f5309a7c6adb> in <module>
----> 1 df.to_dict(orient='index')

~/build/pandas-dev/pandas/pandas/core/frame.py in to_dict(self, orient, into)
   1235             if not self.index.is_unique:
   1236                 raise ValueError(
-> 1237                     "DataFrame index must be unique for orient='index'."
   1238                 )
   1239             return into_c((t[0], dict(zip(self.columns, t[1:])))

ValueError: DataFrame index must be unique for orient='index'.

Tick DateOffset Normalize Restrictions

Creating a Tick object (Day, Hour, Minute, Second, Milli, Micro, Nano) with normalize=True is no longer supported. This prevents unexpected behavior where addition could fail to be monotone or associative. (GH21427)

Previous Behavior:

In [2]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [3]: ts
Out[3]: Timestamp('2018-06-11 18:01:14')

In [4]: tic = pd.offsets.Hour(n=2, normalize=True)
   ...:

In [5]: tic
Out[5]: <2 * Hours>

In [6]: ts + tic
Out[6]: Timestamp('2018-06-11 00:00:00')

In [7]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[7]: False

Current Behavior:

In [66]: ts = pd.Timestamp('2018-06-11 18:01:14')

In [67]: tic = pd.offsets.Hour(n=2)

In [68]: ts + tic + tic + tic == ts + (tic + tic + tic)
Out[68]: True

Period Subtraction

Subtraction of a Period from another Period will give a DateOffset instead of an integer (GH21314)

In [69]: june = pd.Period('June 2018')

In [70]: april = pd.Period('April 2018')

In [71]: june - april
Out[71]: <2 * MonthEnds>

Previous Behavior:

In [2]: june = pd.Period('June 2018')

In [3]: april = pd.Period('April 2018')

In [4]: june - april
Out [4]: 2

Similarly, subtraction of a Period from a PeriodIndex will now return an Index of DateOffset objects instead of an Int64Index

In [72]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [73]: pi - pi[0]
Out[73]: Index([<0 * MonthEnds>, <MonthEnd>, <2 * MonthEnds>], dtype='object')

Previous Behavior:

In [2]: pi = pd.period_range('June 2018', freq='M', periods=3)

In [3]: pi - pi[0]
Out[3]: Int64Index([0, 1, 2], dtype='int64')

Addition/Subtraction of NaN from DataFrame

Adding or subtracting NaN from a DataFrame column with timedelta64[ns] dtype will now raise a TypeError instead of returning all-NaT. This is for compatibility with TimedeltaIndex and Series behavior (GH22163)

In [74]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [75]: df - np.nan
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-75-2fbc21b58712> in <module>
----> 1 df - np.nan

~/build/pandas-dev/pandas/pandas/core/ops.py in f(self, other, axis, level, fill_value)
   2022 
   2023             assert np.ndim(other) == 0
-> 2024             return self._combine_const(other, op)
   2025 
   2026     f.__name__ = op_name

~/build/pandas-dev/pandas/pandas/core/frame.py in _combine_const(self, other, func)
   4932     def _combine_const(self, other, func):
   4933         assert lib.is_scalar(other) or np.ndim(other) == 0
-> 4934         return ops.dispatch_to_series(self, other, func)
   4935 
   4936     def combine(self, other, func, fill_value=None, overwrite=True):

~/build/pandas-dev/pandas/pandas/core/ops.py in dispatch_to_series(left, right, func, str_rep, axis)
   1149         raise NotImplementedError(right)
   1150 
-> 1151     new_data = expressions.evaluate(column_op, str_rep, left, right)
   1152 
   1153     result = left._constructor(new_data, index=left.index, copy=False)

~/build/pandas-dev/pandas/pandas/core/computation/expressions.py in evaluate(op, op_str, a, b, use_numexpr, **eval_kwargs)
    204     use_numexpr = use_numexpr and _bool_arith_check(op_str, a, b)
    205     if use_numexpr:
--> 206         return _evaluate(op, op_str, a, b, **eval_kwargs)
    207     return _evaluate_standard(op, op_str, a, b)
    208 

~/build/pandas-dev/pandas/pandas/core/computation/expressions.py in _evaluate_numexpr(op, op_str, a, b, truediv, reversed, **eval_kwargs)
    119 
    120     if result is None:
--> 121         result = _evaluate_standard(op, op_str, a, b)
    122 
    123     return result

~/build/pandas-dev/pandas/pandas/core/computation/expressions.py in _evaluate_standard(op, op_str, a, b, **eval_kwargs)
     64         _store_test_result(False)
     65     with np.errstate(all='ignore'):
---> 66         return op(a, b)
     67 
     68 

~/build/pandas-dev/pandas/pandas/core/ops.py in column_op(a, b)
   1120         def column_op(a, b):
   1121             return {i: func(a.iloc[:, i], b)
-> 1122                     for i in range(len(a.columns))}
   1123 
   1124     elif isinstance(right, ABCDataFrame):

~/build/pandas-dev/pandas/pandas/core/ops.py in <dictcomp>(.0)
   1120         def column_op(a, b):
   1121             return {i: func(a.iloc[:, i], b)
-> 1122                     for i in range(len(a.columns))}
   1123 
   1124     elif isinstance(right, ABCDataFrame):

~/build/pandas-dev/pandas/pandas/core/ops.py in wrapper(left, right)
   1551 
   1552         elif is_timedelta64_dtype(left):
-> 1553             result = dispatch_to_index_op(op, left, right, pd.TimedeltaIndex)
   1554             return construct_result(left, result,
   1555                                     index=left.index, name=res_name)

~/build/pandas-dev/pandas/pandas/core/ops.py in dispatch_to_index_op(op, left, right, index_class)
   1183         left_idx = left_idx._shallow_copy(freq=None)
   1184     try:
-> 1185         result = op(left_idx, right)
   1186     except NullFrequencyError:
   1187         # DatetimeIndex and TimedeltaIndex with freq == None raise ValueError

TypeError: unsupported operand type(s) for -: 'TimedeltaIndex' and 'float'

Previous Behavior:

In [4]: df = pd.DataFrame([pd.Timedelta(days=1)])

In [5]: df - np.nan
Out[5]:
    0
0 NaT

DataFrame Comparison Operations Broadcasting Changes

Previously, the broadcasting behavior of DataFrame comparison operations (==, !=, …) was inconsistent with the behavior of arithmetic operations (+, -, …). The behavior of the comparison operations has been changed to match the arithmetic operations in these cases. (GH22880)

The affected cases are:

  • operating against a 2-dimensional np.ndarray with either 1 row or 1 column will now broadcast the same way a np.ndarray would (GH23000).
  • a list or tuple with length matching the number of rows in the DataFrame will now raise ValueError instead of operating column-by-column (GH22880).
  • a list or tuple with length matching the number of columns in the DataFrame will now operate row-by-row instead of raising ValueError (GH22880).

Previous Behavior:

In [3]: arr = np.arange(6).reshape(3, 2)
In [4]: df = pd.DataFrame(arr)

In [5]: df == arr[[0], :]
    ...: # comparison previously broadcast where arithmetic would raise
Out[5]:
       0      1
0   True   True
1  False  False
2  False  False
In [6]: df + arr[[0], :]
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)

In [7]: df == (1, 2)
    ...: # length matches number of columns;
    ...: # comparison previously raised where arithmetic would broadcast
...
ValueError: Invalid broadcasting comparison [(1, 2)] with block values
In [8]: df + (1, 2)
Out[8]:
   0  1
0  1  3
1  3  5
2  5  7

In [9]: df == (1, 2, 3)
    ...:  # length matches number of rows
    ...:  # comparison previously broadcast where arithmetic would raise
Out[9]:
       0      1
0  False   True
1   True  False
2  False  False
In [10]: df + (1, 2, 3)
...
ValueError: Unable to coerce to Series, length must be 2: given 3

Current Behavior:

In [76]: arr = np.arange(6).reshape(3, 2)

In [77]: df = pd.DataFrame(arr)
# Comparison operations and arithmetic operations both broadcast.
In [78]: df == arr[[0], :]
Out[78]: 
       0      1
0   True   True
1  False  False
2  False  False

[3 rows x 2 columns]

In [79]: df + arr[[0], :]
Out[79]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both broadcast.
In [80]: df == (1, 2)
Out[80]: 
       0      1
0  False  False
1  False  False
2  False  False

[3 rows x 2 columns]

In [81]: df + (1, 2)
Out[81]: 
   0  1
0  1  3
1  3  5
2  5  7

[3 rows x 2 columns]
# Comparison operations and arithmetic operations both raise ValueError.
In [82]: df == (1, 2, 3)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-82-e541fe41cc7f> in <module>
----> 1 df == (1, 2, 3)

~/build/pandas-dev/pandas/pandas/core/ops.py in f(self, other)
   2077     def f(self, other):
   2078 
-> 2079         other = _align_method_FRAME(self, other, axis=None)
   2080 
   2081         if isinstance(other, ABCDataFrame):

~/build/pandas-dev/pandas/pandas/core/ops.py in _align_method_FRAME(left, right, axis)
   1971           not isinstance(right, (ABCSeries, ABCDataFrame))):
   1972         # GH17901
-> 1973         right = to_series(right)
   1974 
   1975     return right

~/build/pandas-dev/pandas/pandas/core/ops.py in to_series(right)
   1933             if len(left.columns) != len(right):
   1934                 raise ValueError(msg.format(req_len=len(left.columns),
-> 1935                                             given_len=len(right)))
   1936             right = left._constructor_sliced(right, index=left.columns)
   1937         return right

ValueError: Unable to coerce to Series, length must be 2: given 3

In [83]: df + (1, 2, 3)
---------------------------------------------------------------------------
ValueError                                Traceback (most recent call last)
<ipython-input-83-e5153dc9a2b8> in <module>
----> 1 df + (1, 2, 3)

~/build/pandas-dev/pandas/pandas/core/ops.py in f(self, other, axis, level, fill_value)
   2004     def f(self, other, axis=default_axis, level=None, fill_value=None):
   2005 
-> 2006         other = _align_method_FRAME(self, other, axis)
   2007 
   2008         if isinstance(other, ABCDataFrame):

~/build/pandas-dev/pandas/pandas/core/ops.py in _align_method_FRAME(left, right, axis)
   1971           not isinstance(right, (ABCSeries, ABCDataFrame))):
   1972         # GH17901
-> 1973         right = to_series(right)
   1974 
   1975     return right

~/build/pandas-dev/pandas/pandas/core/ops.py in to_series(right)
   1933             if len(left.columns) != len(right):
   1934                 raise ValueError(msg.format(req_len=len(left.columns),
-> 1935                                             given_len=len(right)))
   1936             right = left._constructor_sliced(right, index=left.columns)
   1937         return right

ValueError: Unable to coerce to Series, length must be 2: given 3

DataFrame Arithmetic Operations Broadcasting Changes

DataFrame arithmetic operations, when operating with 2-dimensional np.ndarray objects, now broadcast in the same way that np.ndarray operations broadcast. (GH23000)

Previous Behavior:

In [3]: arr = np.arange(6).reshape(3, 2)
In [4]: df = pd.DataFrame(arr)
In [5]: df + arr[[0], :]   # 1 row, 2 columns
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (1, 2)
In [6]: df + arr[:, [1]]   # 1 column, 3 rows
...
ValueError: Unable to coerce to DataFrame, shape must be (3, 2): given (3, 1)

Current Behavior:

In [84]: arr = np.arange(6).reshape(3, 2)

In [85]: df = pd.DataFrame(arr)

In [86]: df
Out[86]: 
   0  1
0  0  1
1  2  3
2  4  5

[3 rows x 2 columns]
In [87]: df + arr[[0], :]   # 1 row, 2 columns
Out[87]: 
   0  1
0  0  2
1  2  4
2  4  6

[3 rows x 2 columns]

In [88]: df + arr[:, [1]]   # 1 column, 3 rows
Out[88]: 
   0   1
0  1   2
1  5   6
2  9  10

[3 rows x 2 columns]
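
The NumPy behavior that DataFrame arithmetic now follows can be checked directly with plain NumPy (a minimal sketch mirroring the example above):

```python
import numpy as np

arr = np.arange(6).reshape(3, 2)

row = arr[[0], :]  # shape (1, 2): broadcasts down the 3 rows
col = arr[:, [1]]  # shape (3, 1): broadcasts across the 2 columns

assert (arr + row).tolist() == [[0, 2], [2, 4], [4, 6]]
assert (arr + col).tolist() == [[1, 2], [5, 6], [9, 10]]

# Incompatible shapes still raise, matching the DataFrame ValueError
try:
    arr + np.ones((4, 2))
    raised = False
except ValueError:
    raised = True
assert raised
```

The results above are exactly the values shown in Out[87] and Out[88].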

ExtensionType Changes

pandas.api.extensions.ExtensionDtype Equality and Hashability

Pandas now requires that extension dtypes be hashable. The base class implements a default __eq__ and __hash__. If you have a parametrized dtype, you should update the ExtensionDtype._metadata tuple to match the signature of your __init__ method. See pandas.api.extensions.ExtensionDtype for more (GH22476).
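
The mechanism can be illustrated with a small self-contained sketch (this is not pandas' actual implementation; DtypeBase and UnitDtype are hypothetical names): a parametrized dtype lists its parameters in _metadata, and the base class derives __eq__ and __hash__ from those attributes.

```python
class DtypeBase:
    # Names of the parameters that identify the dtype; subclasses with a
    # parametrized __init__ override this to match their signature.
    _metadata = ()

    def __eq__(self, other):
        return (type(self) is type(other) and
                all(getattr(self, attr) == getattr(other, attr)
                    for attr in self._metadata))

    def __hash__(self):
        return hash(tuple(getattr(self, attr) for attr in self._metadata))


class UnitDtype(DtypeBase):
    # A parametrized dtype: _metadata matches the __init__ signature.
    _metadata = ('unit',)

    def __init__(self, unit):
        self.unit = unit


assert UnitDtype('m') == UnitDtype('m')
assert hash(UnitDtype('m')) == hash(UnitDtype('m'))
assert UnitDtype('m') != UnitDtype('s')
```

Keeping _metadata in sync with __init__ is what makes two separately constructed instances of the same parametrized dtype compare equal and hash identically.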

Other changes

  • ExtensionArray has gained the abstract method .dropna() (GH21185)
  • ExtensionDtype has gained the ability to instantiate from string dtypes, e.g. decimal would instantiate a registered DecimalDtype; furthermore the ExtensionDtype has gained the method construct_array_type (GH21185)
  • An ExtensionArray with a boolean dtype now works correctly as a boolean indexer. pandas.api.types.is_bool_dtype() now properly considers them boolean (GH22326)
  • Added ExtensionDtype._is_numeric for controlling whether an extension dtype is considered numeric (GH22290).
  • The ExtensionArray constructor _from_sequence now takes the keyword argument copy=False (GH21185)
  • Bug in Series.get() for Series using ExtensionArray and integer index (GH21257)
  • pandas.api.extensions.ExtensionArray.shift() added as part of the basic ExtensionArray interface (GH22387).
  • shift() now dispatches to ExtensionArray.shift() (GH22386)
  • Series.combine() works correctly with ExtensionArray inside of Series (GH20825)
  • Series.combine() with scalar argument now works for any function type (GH21248)
  • Series.astype() and DataFrame.astype() now dispatch to ExtensionArray.astype() (GH21185).
  • Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH22784)
  • Added pandas.api.types.register_extension_dtype() to register an extension type with pandas (GH22664)
  • Bug when concatenating multiple Series with different extension dtypes not casting to object dtype (GH22994)
  • Series backed by an ExtensionArray now work with util.hash_pandas_object() (GH23066)
  • Updated the .type attribute for PeriodDtype, DatetimeTZDtype, and IntervalDtype to be instances of the dtype (Period, Timestamp, and Interval respectively) (GH22938)
  • ExtensionArray.isna() is allowed to return an ExtensionArray (GH22325).
  • Support for reduction operations such as sum, mean via opt-in base class method override (GH22762)
  • DataFrame.stack() no longer converts to object dtype for DataFrames where each column has the same extension dtype. The output Series will have the same dtype as the columns (GH23077).
  • Series.unstack() and DataFrame.unstack() no longer convert extension arrays to object-dtype ndarrays. Each column in the output DataFrame will now have the same dtype as the input (GH23077).
  • Bug in DataFrame.groupby() when aggregating on an ExtensionArray: the actual ExtensionArray dtype was not returned (GH23227).
  • A default repr for ExtensionArray is now provided (GH23601).

Series and Index Data-Dtype Incompatibilities

Series and Index constructors now raise when the data is incompatible with a passed dtype= (GH15832)

Previous Behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
0    18446744073709551615
dtype: uint64

Current Behavior:

In [4]: pd.Series([-1], dtype="uint64")
Out[4]:
...
OverflowError: Trying to coerce negative values to unsigned integers

Crosstab Preserves Dtypes

crosstab() will now preserve dtypes in some cases that previously would cast from integer dtype to floating dtype (GH22019)

Previous Behavior:

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2], 'b': [3, 3, 4, 4, 4],
   ...:                    'c': [1, 1, np.nan, 1, 1]})
In [4]: pd.crosstab(df.a, df.b, normalize='columns')
Out[4]:
b    3    4
a
1  0.5  0.0
2  0.5  1.0

Current Behavior:

In [3]: df = pd.DataFrame({'a': [1, 2, 2, 2, 2],
   ...:                    'b': [3, 3, 4, 4, 4],
   ...:                    'c': [1, 1, np.nan, 1, 1]})
In [4]: pd.crosstab(df.a, df.b, normalize='columns')
Out[4]:
b    3    4
a
1  0.5  0.0
2  0.5  1.0

Datetimelike API Changes

Other API Changes

  • DatetimeIndex now accepts Int64Index arguments as epoch timestamps (GH20997)
  • Accessing a level of a MultiIndex with a duplicate name (e.g. in get_level_values()) now raises a ValueError instead of a KeyError (GH21678).
  • Invalid construction of IntervalDtype will now always raise a TypeError rather than a ValueError if the subdtype is invalid (GH21185)
  • Trying to reindex a DataFrame with a non unique MultiIndex now raises a ValueError instead of an Exception (GH21770)
  • PeriodIndex.tz_convert() and PeriodIndex.tz_localize() have been removed (GH21781)
  • Index subtraction will attempt to operate element-wise instead of raising TypeError (GH19369)
  • pandas.io.formats.style.Styler supports a number-format property when using to_excel() (GH22015)
  • DataFrame.corr() and Series.corr() now raise a ValueError along with a helpful error message instead of a KeyError when supplied with an invalid method (GH22298)
  • shift() will now always return a copy, instead of the previous behaviour of returning self when shifting by 0 (GH22397)
  • DataFrame.set_index() now allows all one-dimensional list-likes, raises a TypeError for incorrect types, has an improved KeyError message, and will not fail on duplicate column names with drop=True. (GH22484)
  • Slicing a single row of a DataFrame with multiple ExtensionArrays of the same type now preserves the dtype, rather than coercing to object (GH22784)
  • DateOffset attribute _cacheable and method _should_cache have been removed (GH23118)
  • Comparing a Timedelta with an unknown type using less-than or greater-than now raises a TypeError instead of returning False (GH20829)
  • Categorical.searchsorted(), when supplied a scalar value to search for, now returns a scalar instead of an array (GH23466).
  • Categorical.searchsorted() now raises a KeyError rather than a ValueError, if a searched-for key is not found in its categories (GH23466).
  • Index.hasnans() and Series.hasnans() now always return a Python boolean. Previously, a Python or a NumPy boolean could be returned, depending on circumstances (GH23294).
  • The order of the arguments of DataFrame.to_html() and DataFrame.to_string() is rearranged to be consistent with each other. (GH23614)
  • CategoricalIndex.reindex() now raises a ValueError if the target index is non-unique and not equal to the current index. It previously only raised if the target index was not of a categorical dtype (GH23963).

Deprecations

  • MultiIndex.labels has been deprecated and replaced by MultiIndex.codes. The functionality is unchanged. The new name better reflects the natures of these codes and makes the MultiIndex API more similar to the API for CategoricalIndex (GH13443). As a consequence, other uses of the name labels in MultiIndex have also been deprecated and replaced with codes:
    • You should initialize a MultiIndex instance using a parameter named codes rather than labels.
    • MultiIndex.set_labels has been deprecated in favor of MultiIndex.set_codes().
    • For method MultiIndex.copy(), the labels parameter has been deprecated and replaced by a codes parameter.
  • DataFrame.to_stata(), read_stata(), StataReader and StataWriter have deprecated the encoding argument. The encoding of a Stata dta file is determined by the file type and cannot be changed (GH21244)
  • MultiIndex.to_hierarchical() is deprecated and will be removed in a future version (GH21613)
  • Series.ptp() is deprecated. Use numpy.ptp instead (GH21614)
  • Series.compress() is deprecated. Use Series[condition] instead (GH18262)
  • The signature of Series.to_csv() has been made consistent with that of DataFrame.to_csv(): the name of the first argument is now path_or_buf, the order of subsequent arguments has changed, and the header argument now defaults to True. (GH19715)
  • Categorical.from_codes() has deprecated providing float values for the codes argument. (GH21767)
  • pandas.read_table() is deprecated. Instead, use pandas.read_csv() passing sep='\t' if necessary (GH21948)
  • Series.str.cat() has deprecated using arbitrary list-likes within list-likes. A list-like container may still contain many Series, Index or 1-dimensional np.ndarray, or alternatively, only scalar values. (GH21950)
  • FrozenNDArray.searchsorted() has deprecated the v parameter in favor of value (GH14645)
  • DatetimeIndex.shift() and PeriodIndex.shift() now accept periods argument instead of n for consistency with Index.shift() and Series.shift(). Using n throws a deprecation warning (GH22458, GH22912)
  • The fastpath keyword of the different Index constructors is deprecated (GH23110).
  • Timestamp.tz_localize(), DatetimeIndex.tz_localize(), and Series.tz_localize() have deprecated the errors argument in favor of the nonexistent argument (GH8917)
  • The class FrozenNDArray has been deprecated. When unpickling, FrozenNDArray will be unpickled to np.ndarray once this class is removed (GH9031)
  • The methods DataFrame.update() and Panel.update() have deprecated the raise_conflict=False|True keyword in favor of errors='ignore'|'raise' (GH23585)
  • The methods Series.str.partition() and Series.str.rpartition() have deprecated the pat keyword in favor of sep (GH22676)
  • Deprecated the nthreads keyword of pandas.read_feather() in favor of use_threads to reflect the changes in pyarrow 0.11.0. (GH23053)
  • ExtensionArray._formatting_values() is deprecated. Use ExtensionArray._formatter instead. (GH23601)
  • pandas.read_excel() has deprecated accepting usecols as an integer. Please pass in a list of ints from 0 to usecols inclusive instead (GH23527)
  • Constructing a TimedeltaIndex from data with datetime64-dtyped data is deprecated, will raise TypeError in a future version (GH23539)
  • Constructing a DatetimeIndex from data with timedelta64-dtyped data is deprecated, will raise TypeError in a future version (GH23675)
  • The keep_tz=False option (the default) of the keep_tz keyword of DatetimeIndex.to_series() is deprecated (GH17832).
  • Timezone converting a tz-aware datetime.datetime or Timestamp with Timestamp and the tz argument is now deprecated. Instead, use Timestamp.tz_convert() (GH23579)
  • pandas.types.is_period() is deprecated in favor of pandas.types.is_period_dtype (GH23917)
  • pandas.types.is_datetimetz() is deprecated in favor of pandas.types.is_datetime64tz (GH23917)
  • Creating a TimedeltaIndex or DatetimeIndex by passing range arguments start, end, and periods is deprecated in favor of timedelta_range() and date_range() (GH23919)
  • Passing a string alias like 'datetime64[ns, UTC]' as the unit parameter to DatetimeTZDtype is deprecated. Use DatetimeTZDtype.construct_from_string instead (GH23990).
  • In Series.where() with Categorical data, providing an other that is not present in the categories is deprecated. Convert the categorical to a different dtype or add the other to the categories first (GH24077).
  • Series.clip_lower(), Series.clip_upper(), DataFrame.clip_lower() and DataFrame.clip_upper() are deprecated and will be removed in a future version. Use Series.clip(lower=threshold), Series.clip(upper=threshold) and the equivalent DataFrame methods (GH24203)
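
For instance, the deprecated Series.clip_lower() and Series.clip_upper() calls map directly onto the keyword form of clip() (a minimal sketch):

```python
import pandas as pd

s = pd.Series([1, 5, 10])

# s.clip_lower(3) and s.clip_upper(6) are deprecated;
# the keyword form of clip() replaces them
assert s.clip(lower=3).tolist() == [3, 5, 10]
assert s.clip(upper=6).tolist() == [1, 5, 6]
```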

Integer Addition/Subtraction with Datetime-like Classes Is Deprecated

In the past, users could add or subtract integers or integer-dtypes arrays from Period, PeriodIndex, and in some cases Timestamp, DatetimeIndex and TimedeltaIndex.

This usage is now deprecated. Instead add or subtract integer multiples of the object’s freq attribute. The result of subtraction of Period objects will be agnostic of the multiplier of the objects’ freq attribute (GH21939, GH23878).

Previous Behavior:

In [3]: per = pd.Period('2016Q1')
In [4]: per + 3
Out[4]: Period('2016Q4', 'Q-DEC')

In [5]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())
In [6]: ts + 2
Out[6]: Timestamp('1994-05-06 14:15:16', freq='H')

In [7]: tdi = pd.timedelta_range('1D', periods=2)
In [8]: tdi - np.array([2, 1])
Out[8]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [9]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')
In [10]: dti + pd.Index([1, 2])
Out[10]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

Current Behavior:

In [89]: per = pd.Period('2016Q1')

In [90]: per + 3
Out[90]: Period('2016Q4', 'Q-DEC')

In [91]: per = pd.Period('2016Q1')

In [92]: per + 3 * per.freq
Out[92]: Period('2016Q4', 'Q-DEC')

In [93]: ts = pd.Timestamp('1994-05-06 12:15:16', freq=pd.offsets.Hour())

In [94]: ts + 2 * ts.freq
Out[94]: Timestamp('1994-05-06 14:15:16', freq='H')

In [95]: tdi = pd.timedelta_range('1D', periods=2)

In [96]: tdi - np.array([2 * tdi.freq, 1 * tdi.freq])
Out[96]: TimedeltaIndex(['-1 days', '1 days'], dtype='timedelta64[ns]', freq=None)

In [97]: dti = pd.date_range('2001-01-01', periods=2, freq='7D')

In [98]: dti + pd.Index([1 * dti.freq, 2 * dti.freq])
Out[98]: DatetimeIndex(['2001-01-08', '2001-01-22'], dtype='datetime64[ns]', freq=None)

Removal of prior version deprecations/changes

  • The LongPanel and WidePanel classes have been removed (GH10892)
  • Series.repeat() has renamed the reps argument to repeats (GH14645)
  • Several private functions were removed from the (non-public) module pandas.core.common (GH22001)
  • Removal of the previously deprecated module pandas.core.datetools (GH14105, GH14094)
  • Strings passed into DataFrame.groupby() that refer to both column and index levels will raise a ValueError (GH14432)
  • Index.repeat() and MultiIndex.repeat() have renamed the n argument to repeats (GH14645)
  • The Series constructor and .astype method will now raise a ValueError if timestamp dtypes are passed in without a unit (e.g. np.datetime64) for the dtype parameter (GH15987)
  • Removal of the previously deprecated as_indexer keyword completely from str.match() (GH22356, GH6581)
  • The modules pandas.types, pandas.computation, and pandas.util.decorators have been removed (GH16157, GH16250)
  • Removed the pandas.formats.style shim for pandas.io.formats.style.Styler (GH16059)
  • pandas.pnow(), pandas.match(), pandas.groupby(), pd.get_store(), pd.Expr, and pd.Term have been removed (GH15538, GH15940)
  • Categorical.searchsorted() and Series.searchsorted() have renamed the v argument to value (GH14645)
  • pandas.parser, pandas.lib, and pandas.tslib have been removed (GH15537)
  • TimedeltaIndex.searchsorted(), DatetimeIndex.searchsorted(), and PeriodIndex.searchsorted() have renamed the key argument to value (GH14645)
  • DataFrame.consolidate() and Series.consolidate() have been removed (GH15501)
  • Removal of the previously deprecated module pandas.json (GH19944)
  • The module pandas.tools has been removed (GH15358, GH16005)
  • SparseArray.get_values() and SparseArray.to_dense() have dropped the fill parameter (GH14686)
  • DataFrame.sortlevel() and Series.sortlevel() have been removed (GH15099)
  • SparseSeries.to_dense() has dropped the sparse_only parameter (GH14686)
  • DataFrame.astype() and Series.astype() have renamed the raise_on_error argument to errors (GH14967)
  • is_sequence, is_any_int_dtype, and is_floating_dtype have been removed from pandas.api.types (GH16163, GH16189)

Performance Improvements

Documentation Changes

  • Added sphinx spelling extension, updated documentation on how to use the spell check (GH21079)

Bug Fixes

Categorical

  • Bug in Categorical.from_codes() where NaN values in codes were silently converted to 0 (GH21767). In the future this will raise a ValueError. Also changes the behavior of .from_codes([1.1, 2.0]).
  • Bug in Categorical.sort_values() where NaN values were always positioned in front regardless of na_position value. (GH22556).
  • Bug when indexing with a boolean-valued Categorical. Now a boolean-valued Categorical is treated as a boolean mask (GH22665)
  • Constructing a CategoricalIndex with empty values and boolean categories was raising a ValueError after a change to dtype coercion (GH22702).
  • Bug in Categorical.take() with a user-provided fill_value not encoding the fill_value, which could result in a ValueError, incorrect results, or a segmentation fault (GH23296).
  • In Series.unstack(), specifying a fill_value not present in the categories now raises a TypeError rather than ignoring the fill_value (GH23284)
  • Bug in DataFrame.resample() when aggregating on categorical data: the categorical dtype was lost (GH23227)
  • Bug in many methods of the .str-accessor, which always failed on calling the CategoricalIndex.str constructor (GH23555, GH23556)
  • Bug in Series.where() losing the categorical dtype for categorical data (GH24077)

Datetimelike

  • Fixed bug where two DateOffset objects with different normalize attributes could evaluate as equal (GH21404)
  • Fixed bug where Timestamp.resolution() incorrectly returned 1-microsecond timedelta instead of 1-nanosecond Timedelta (GH21336, GH21365)
  • Bug in to_datetime() that did not consistently return an Index when box=True was specified (GH21864)
  • Bug in DatetimeIndex comparisons where string comparisons incorrectly raises TypeError (GH22074)
  • Bug in DatetimeIndex comparisons when comparing against timedelta64[ns] dtyped arrays; in some cases TypeError was incorrectly raised, in others it incorrectly failed to raise (GH22074)
  • Bug in DatetimeIndex comparisons when comparing against object-dtyped arrays (GH22074)
  • Bug in DataFrame with datetime64[ns] dtype addition and subtraction with Timedelta-like objects (GH22005, GH22163)
  • Bug in DataFrame with datetime64[ns] dtype addition and subtraction with DateOffset objects returning an object dtype instead of datetime64[ns] dtype (GH21610, GH22163)
  • Bug in DataFrame with datetime64[ns] dtype comparing against NaT incorrectly (GH22242, GH22163)
  • Bug in DataFrame with datetime64[ns] dtype subtracting Timestamp-like object incorrectly returned datetime64[ns] dtype instead of timedelta64[ns] dtype (GH8554, GH22163)
  • Bug in DataFrame with datetime64[ns] dtype subtracting np.datetime64 object with non-nanosecond unit failing to convert to nanoseconds (GH18874, GH22163)
  • Bug in DataFrame comparisons against Timestamp-like objects failing to raise TypeError for inequality checks with mismatched types (GH8932, GH22163)
  • Bug in DataFrame with mixed dtypes including datetime64[ns] incorrectly raising TypeError on equality comparisons (GH13128, GH22163)
  • Bug in DataFrame.eq() comparison against NaT incorrectly returning True or NaN (GH15697, GH22163)
  • Bug in DatetimeIndex subtraction that incorrectly failed to raise OverflowError (GH22492, GH22508)
  • Bug in DatetimeIndex incorrectly allowing indexing with Timedelta object (GH20464)
  • Bug in DatetimeIndex where frequency was being set if original frequency was None (GH22150)
  • Bug in rounding methods of DatetimeIndex (round(), ceil(), floor()) and Timestamp (round(), ceil(), floor()) could give rise to loss of precision (GH22591)
  • Bug in to_datetime() with an Index argument that would drop the name from the result (GH21697)
  • Bug in PeriodIndex where adding or subtracting a timedelta or Tick object produced incorrect results (GH22988)
  • Bug in the Series repr with period-dtype data missing a space before the data (GH23601)
  • Bug in date_range() when decrementing a start date to a past end date by a negative frequency (GH23270)
  • Bug in Series.min() which would return NaN instead of NaT when called on a series of NaT (GH23282)
  • Bug in Series.combine_first() not properly aligning categoricals, so that missing values in self were not filled by valid values from other (GH24147)
  • Bug in DataFrame.combine() with datetimelike values raising a TypeError (GH23079)
  • Bug in date_range() with frequency of Day or higher where dates sufficiently far in the future could wrap around to the past instead of raising OutOfBoundsDatetime (GH14187)
  • Bug in PeriodIndex with attribute freq.n greater than 1 where adding a DateOffset object would return incorrect results (GH23215)
  • Bug in Series that interpreted string indices as lists of characters when setting datetimelike values (GH23451)
  • Bug in Timestamp constructor which would drop the frequency of an input Timestamp (GH22311)
  • Bug in DatetimeIndex where calling np.array(dtindex, dtype=object) would incorrectly return an array of long objects (GH23524)
  • Bug in Index where passing a timezone-aware DatetimeIndex and dtype=object would incorrectly raise a ValueError (GH23524)
  • Bug in Index where calling np.array(dtindex, dtype=object) on a timezone-naive DatetimeIndex would return an array of datetime objects instead of Timestamp objects, potentially losing nanosecond portions of the timestamps (GH23524)
  • Bug in Categorical.__setitem__ not allowing setting with another Categorical when both are unordered and have the same categories, but in a different order (GH24142)
  • Bug in date_range() where using dates with millisecond resolution or higher could return incorrect values or the wrong number of values in the index (GH24110)

Timedelta

Timezones

Offsets

  • Bug in FY5253 where date offsets could incorrectly raise an AssertionError in arithmetic operations (GH14774)
  • Bug in DateOffset where keyword arguments week and milliseconds were accepted and ignored. Passing these will now raise ValueError (GH19398)
  • Bug in adding DateOffset with DataFrame or PeriodIndex incorrectly raising TypeError (GH23215)
  • Bug in comparing DateOffset objects with non-DateOffset objects, particularly strings, raising ValueError instead of returning False for equality checks and True for not-equal checks (GH23524)

Numeric

Strings

  • Bug in Index.str.partition() was not nan-safe (GH23558).
  • Bug in Index.str.split() was not nan-safe (GH23677).
  • Bug in Series.str.contains() not respecting the na argument for a Categorical dtype Series (GH22158)
  • Bug in Index.str.cat() when the result contained only NaN (GH24044)

Interval

  • Bug in the IntervalIndex constructor where the closed parameter did not always override the inferred closed (GH19370)
  • Bug in the IntervalIndex repr where a trailing comma was missing after the list of intervals (GH20611)
  • Bug in Interval where scalar arithmetic operations did not retain the closed value (GH22313)
  • Bug in IntervalIndex where indexing with datetime-like values raised a KeyError (GH20636)
  • Bug in IntervalTree where data containing NaN triggered a warning and resulted in incorrect indexing queries with IntervalIndex (GH23352)

Indexing

  • The traceback from a KeyError when asking .loc for a single missing label is now shorter and more clear (GH21557)
  • PeriodIndex now emits a KeyError when a malformed string is looked up, which is consistent with the behavior of DatetimeIndex (GH22803)
  • When .ix is asked for a missing integer label in a MultiIndex with a first level of integer type, it now raises a KeyError, consistently with the case of a flat Int64Index, rather than falling back to positional indexing (GH21593)
  • Bug in DatetimeIndex.reindex() when reindexing a tz-naive and tz-aware DatetimeIndex (GH8306)
  • Bug in Series.reindex() when reindexing an empty series with a datetime64[ns, tz] dtype (GH20869)
  • Bug in DataFrame when setting values with .loc and a timezone aware DatetimeIndex (GH11365)
  • DataFrame.__getitem__ now accepts dictionaries and dictionary keys as list-likes of labels, consistently with Series.__getitem__ (GH21294)
  • Fixed DataFrame[np.nan] when columns are non-unique (GH21428)
  • Bug when indexing DatetimeIndex with nanosecond resolution dates and timezones (GH11679)
  • Bug where indexing with a Numpy array containing negative values would mutate the indexer (GH21867)
  • Bug where mixed indexes wouldn’t allow integers for .at (GH19860)
  • Float64Index.get_loc now raises KeyError when boolean key passed. (GH19087)
  • Bug in DataFrame.loc() when indexing with an IntervalIndex (GH19977)
  • Index no longer mangles None, NaN and NaT, i.e. they are treated as three different keys. However, for numeric Index all three are still coerced to a NaN (GH22332)
  • Bug in scalar in Index if scalar is a float while the Index is of integer dtype (GH22085)
  • Bug in MultiIndex.set_levels when levels value is not subscriptable (GH23273)
  • Bug where setting a timedelta column by Index caused it to be cast to double, and therefore lose precision (GH23511)
  • Bug in Index.union() and Index.intersection() where name of the Index of the result was not computed correctly for certain cases (GH9943, GH9862)
  • Bug in Index slicing with boolean Index may raise TypeError (GH22533)
  • Bug in PeriodArray.__setitem__ when accepting slice and list-like value (GH23978)

Missing

  • Bug in DataFrame.fillna() where a ValueError would raise when one column contained a datetime64[ns, tz] dtype (GH15522)
  • Bug in Series.hasnans() that could be incorrectly cached and return incorrect answers if null elements are introduced after an initial call (GH19700)
  • Series.isin() now treats all NaN-floats as equal also for np.object-dtype. This behavior is consistent with the behavior for float64 (GH22119)
  • unique() no longer mangles NaN-floats and the NaT-object for np.object-dtype, i.e. NaT is no longer coerced to a NaN-value and is treated as a different entity. (GH22295)
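
The Series.isin() change above can be demonstrated with a short sketch:

```python
import numpy as np
import pandas as pd

s = pd.Series(['a', np.nan], dtype=object)

# NaN-floats now compare equal under isin() for np.object-dtype as well,
# consistent with the long-standing float64 behavior
assert s.isin([np.nan]).tolist() == [False, True]
assert pd.Series([np.nan]).isin([np.nan]).tolist() == [True]
```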

MultiIndex

I/O

Proper handling of np.NaN in a string data-typed column with the Python engine

There was a bug in read_excel() and read_csv() with the Python engine, where missing values turned to 'nan' with dtype=str and na_filter=True. Now, these missing values are converted to the string missing indicator, np.nan. (GH20377)

Previous Behavior:

In [5]: data = 'a,b,c\n1,,3\n4,5,6'
In [6]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)
In [7]: df.loc[0, 'b']
Out[7]:
'nan'

Current Behavior:

In [99]: data = 'a,b,c\n1,,3\n4,5,6'

In [100]: df = pd.read_csv(StringIO(data), engine='python', dtype=str, na_filter=True)

In [101]: df.loc[0, 'b']
Out[101]: nan

Notice how we now instead output np.nan itself instead of a stringified form of it.

  • Bug in read_csv() in which a column specified with CategoricalDtype of boolean categories was not being correctly coerced from string values to booleans (GH20498)
  • Bug in to_sql() when writing timezone aware data (datetime64[ns, tz] dtype) would raise a TypeError (GH9086)
  • Bug in to_sql() where a naive DatetimeIndex would be written as TIMESTAMP WITH TIMEZONE type in supported databases, e.g. PostgreSQL (GH23510)
  • Bug in read_excel() when parse_cols is specified with an empty dataset (GH9208)
  • read_html() no longer ignores all-whitespace <tr> within <thead> when considering the skiprows and header arguments. Previously, users had to decrease their header and skiprows values on such tables to work around the issue. (GH21641)
  • read_excel() will correctly show the deprecation warning for previously deprecated sheetname (GH17994)
  • read_csv() and read_table() will throw a UnicodeError and not core dump on badly encoded strings (GH22748)
  • read_csv() will correctly parse timezone-aware datetimes (GH22256)
  • Bug in read_csv() in which memory management was prematurely optimized for the C engine when the data was being read in chunks (GH23509)
  • Bug in read_csv() in which unnamed columns were being improperly identified when extracting a multi-index (GH23687)
  • read_sas() will parse numbers in sas7bdat-files that have width less than 8 bytes correctly. (GH21616)
  • read_sas() will correctly parse sas7bdat files with many columns (GH22628)
  • read_sas() will correctly parse sas7bdat files with data page types having also bit 7 set (so page type is 128 + 256 = 384) (GH16615)
  • Bug in detect_client_encoding() where potential IOError goes unhandled when importing in a mod_wsgi process due to restricted access to stdout. (GH21552)
  • Bug in to_html() with index=False misses truncation indicators (…) on truncated DataFrame (GH15019, GH22783)
  • Bug in DataFrame.to_string() that broke column alignment when index=False and width of first column’s values is greater than the width of first column’s header (GH16839, GH13032)
  • Bug in DataFrame.to_string() that caused representations of DataFrame to not take up the whole window (GH22984)
  • Bug in DataFrame.to_csv() where a single level MultiIndex incorrectly wrote a tuple. Now just the value of the index is written (GH19589).
  • HDFStore() will raise ValueError when the format kwarg is passed to the constructor (GH13291)
  • Bug in HDFStore.append() when appending a DataFrame with an empty string column and min_itemsize < 8 (GH12242)
  • Bug in read_csv() in which memory leaks occurred in the C engine when parsing NaN values due to insufficient cleanup on completion or error (GH21353)
  • Bug in read_csv() in which incorrect error messages were being raised when skipfooter was passed in along with nrows, iterator, or chunksize (GH23711)
  • Bug in read_csv() in which MultiIndex index names were being improperly handled in the cases when they were not provided (GH23484)
  • Bug in read_csv() in which unnecessary warnings were being raised when the dialect’s values conflicted with the default arguments (GH23761)
  • Bug in read_html() in which the error message was not displaying the valid flavors when an invalid one was provided (GH23549)
  • Bug in read_excel() in which extraneous header names were extracted, even though none were specified (GH11733)
  • Bug in read_excel() in which column names were not being properly converted to string sometimes in Python 2.x (GH23874)
  • Bug in read_excel() in which index_col=None was not being respected and parsing index columns anyway (GH18792, GH20480)
  • Bug in read_excel() in which usecols was not being validated for proper column names when passed in as a string (GH20480)
  • Bug in DataFrame.to_dict() when the resulting dict contains non-Python scalars in the case of numeric data (GH23753)
  • DataFrame.to_string(), DataFrame.to_html(), DataFrame.to_latex() will correctly format output when a string is passed as the float_format argument (GH21625, GH22270)
  • Bug in read_csv() that caused it to raise OverflowError when trying to use ‘inf’ as na_value with integer index column (GH17128)
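As a quick illustration of the float_format fix above, a printf-style string can now be passed directly to DataFrame.to_string() and is applied to float columns (a minimal sketch; the data here is made up):

```python
import pandas as pd

# Illustrative data: one float column with differing precision
df = pd.DataFrame({'x': [1234.5678, 2.0]})

# A string such as '%.2f' is now accepted for float_format and
# formats every float value, instead of producing broken output
s = df.to_string(float_format='%.2f')
print(s)
```

The same string form works for DataFrame.to_html() and DataFrame.to_latex().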

Plotting

Groupby/Resample/Rolling

  • Bug in pandas.core.groupby.GroupBy.first() and pandas.core.groupby.GroupBy.last() with as_index=False leading to the loss of timezone information (GH15884)
  • Bug in DatetimeIndex.resample() when downsampling across a DST boundary (GH8531)
  • Bug where a ValueError was wrongly raised when calling the count() method of a SeriesGroupBy when the grouping variable contained only NaNs and the numpy version was < 1.13 (GH21956).
  • Multiple bugs in pandas.core.Rolling.min() with closed='left' and a datetime-like index leading to incorrect results and also a segfault (GH21704).
  • Bug in Resampler.apply() when passing positional arguments to the applied func (GH14615).
  • Bug in Series.resample() when passing numpy.timedelta64 to loffset kwarg (GH7687).
  • Bug in Resampler.asfreq() when frequency of TimedeltaIndex is a subperiod of a new frequency (GH13022).
  • Bug in SeriesGroupBy.mean() when values were integral but could not fit inside of int64, overflowing instead. (GH22487)
  • RollingGroupby.agg() and ExpandingGroupby.agg() now support multiple aggregation functions as parameters (GH15072)
  • Bug in DataFrame.resample() and Series.resample() when resampling by a weekly offset ('W') across a DST transition (GH9119, GH21459)
  • Bug in DataFrame.expanding() in which the axis argument was not being respected during aggregations (GH23372)
  • Bug in pandas.core.groupby.DataFrameGroupBy.transform() which caused missing values when the input function can accept a DataFrame but renames it (GH23455).
  • Bug in pandas.core.groupby.GroupBy.nth() where column order was not always preserved (GH20760)
  • Bug in pandas.core.groupby.DataFrameGroupBy.rank() with method='dense' and pct=True raising a ZeroDivisionError when a group has only one member (GH23666).
  • Calling DataFrameGroupBy.rank() and SeriesGroupBy.rank() with empty groups and pct=True raised a ZeroDivisionError due to c1068d9 (GH22519)
  • Bug in DataFrame.resample() when resampling NaT in a TimedeltaIndex (GH13223).
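A minimal sketch of the groupby rank() fix above, with made-up data: a group containing a single member no longer raises ZeroDivisionError when method='dense' and pct=True:

```python
import pandas as pd

# Group 'a' has only one member; previously this combination of
# method='dense' and pct=True raised ZeroDivisionError (GH23666)
df = pd.DataFrame({'key': ['a', 'b', 'b'], 'val': [1, 2, 3]})
ranks = df.groupby('key')['val'].rank(method='dense', pct=True)
print(ranks.tolist())
```

The single-member group now yields a percentile rank of 1.0, consistent with the multi-member groups.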

Reshaping

Sparse

  • Updating a boolean, datetime, or timedelta column to be Sparse now works (GH22367)
  • Bug in Series.to_sparse() with Series already holding sparse data not constructing properly (GH22389)
  • Providing a sparse_index to the SparseArray constructor no longer defaults the na-value to np.nan for all dtypes. The correct na_value for data.dtype is now used.
  • Bug in SparseArray.nbytes under-reporting its memory usage by not including the size of its sparse index.
  • Improved performance of Series.shift() for non-NA fill_value, as values are no longer converted to a dense array.
  • Bug in DataFrame.groupby not including fill_value in the groups for non-NA fill_value when grouping by a sparse column (GH5078)
  • Bug in unary inversion operator (~) on a SparseSeries with boolean values. The performance of this has also been improved (GH22835)
  • Bug in SparseArray.unique() not returning the unique values (GH19595)
  • Bug in SparseArray.nonzero() and SparseDataFrame.dropna() returning shifted/incorrect results (GH21172)
  • Bug in DataFrame.apply() where dtypes would lose sparseness (GH23744)
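A minimal sketch of the SparseArray.unique() fix above, using the pd.arrays.SparseArray constructor that 0.24 introduced (the data is illustrative):

```python
import numpy as np
import pandas as pd

# A sparse array whose fill value (0) also appears in the data
arr = pd.arrays.SparseArray([0, 0, 1, 2, 1])

# unique() now returns the actual distinct values, including the
# fill value, rather than a wrong result (GH19595)
uniques = arr.unique()
print(np.asarray(uniques))
```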

Build Changes

Other

  • background_gradient() now takes a text_color_threshold parameter to automatically lighten the text color based on the luminance of the background color. This improves readability with dark background colors without the need to limit the background colormap range. (GH21258)
  • Require at least version 0.28.2 of Cython to support read-only memoryviews (GH21688)
  • background_gradient() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None (GH15204)
  • DataFrame.nlargest() and DataFrame.nsmallest() now return the correct n values when keep != ‘all’, even when the values are tied on the first columns (GH22752)
  • bar() now also supports tablewise application (in addition to rowwise and columnwise) with axis=None and setting clipping range with vmin and vmax (GH21548 and GH21526). NaN values are also handled properly.
  • Logical operations &, |, ^ between Series and Index will no longer raise ValueError (GH22092)
  • Checking PEP 3141 numbers in is_scalar() function returns True (GH22903)
  • Bug in DataFrame.combine_first() in which column types were unexpectedly converted to float (GH20699)
  • Bug where C variables were declared with external linkage, causing import errors if certain other C libraries were imported before pandas (GH24113)
  • Constructing a DataFrame with an index argument that wasn’t already an instance of Index was broken in 4efb39f (GH22227).
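The is_scalar() change above can be sketched with a PEP 3141 number such as fractions.Fraction (the values are illustrative):

```python
from fractions import Fraction

from pandas.api.types import is_scalar

# Registered numbers.Number subclasses such as Fraction are now
# recognized as scalars (GH22903)
result = is_scalar(Fraction(1, 3))
print(result)
```

Non-scalar inputs such as lists are still rejected, so is_scalar([1, 2]) remains False.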

Contributors
