{"id":143,"date":"2013-03-31T23:46:48","date_gmt":"2013-04-01T04:46:48","guid":{"rendered":"http:\/\/homepages.uc.edu\/~yaozo\/wordpress\/?p=143"},"modified":"2013-03-31T23:46:48","modified_gmt":"2013-04-01T04:46:48","slug":"time-series-date-functionality","status":"publish","type":"post","link":"https:\/\/zhuoyao.net\/index.php\/2013\/03\/31\/time-series-date-functionality\/","title":{"rendered":"Time Series \/ Date functionality"},"content":{"rendered":"<p>pandas has proven very successful as a tool for working with time series data, especially in the financial data analysis space. With the 0.8 release, we have further improved the time series API in pandas by leaps and bounds. Using the new NumPy\u00a0<tt>datetime64<\/tt>\u00a0dtype, we have consolidated a large number of features from other Python libraries like\u00a0<tt>scikits.timeseries<\/tt>\u00a0as well as created a tremendous amount of new functionality for manipulating time series data.<\/p>\n<p>In working with time series data, we will frequently seek to:<\/p>\n<blockquote>\n<ul>\n<li>generate sequences of fixed-frequency dates and time spans<\/li>\n<li>conform or convert time series to a particular frequency<\/li>\n<li>compute \u201crelative\u201d dates based on various non-standard time increments (e.g. 
5 business days before the last business day of the year), or \u201croll\u201d dates forward or backward<\/li>\n<\/ul>\n<\/blockquote>\n<p>pandas provides a relatively compact and self-contained set of tools for performing the above tasks.<\/p>\n<p>Create a range of dates:<\/p>\n<div>\n<div>\n<pre># 72 hours starting with midnight Jan 1st, 2011\nIn [1524]: rng = date_range('1\/1\/2011', periods=72, freq='H')\n\nIn [1525]: rng[:5]\nOut[1525]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-01 00:00:00, ..., 2011-01-01 04:00:00]\nLength: 5, Freq: H, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>Index pandas objects with dates:<\/p>\n<div>\n<div>\n<pre>In [1526]: ts = Series(randn(len(rng)), index=rng)\n\nIn [1527]: ts.head()\nOut[1527]: \n2011-01-01 00:00:00    0.469112\n2011-01-01 01:00:00   -0.282863\n2011-01-01 02:00:00   -1.509059\n2011-01-01 03:00:00   -1.135632\n2011-01-01 04:00:00    1.212112\nFreq: H, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Change frequency and fill gaps:<\/p>\n<div>\n<div>\n<pre># to 45 minute frequency and forward fill\nIn [1528]: converted = ts.asfreq('45Min', method='pad')\n\nIn [1529]: converted.head()\nOut[1529]: \n2011-01-01 00:00:00    0.469112\n2011-01-01 00:45:00    0.469112\n2011-01-01 01:30:00   -0.282863\n2011-01-01 02:15:00   -1.509059\n2011-01-01 03:00:00   -1.135632\nFreq: 45T, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Resample:<\/p>\n<div>\n<div>\n<pre># Daily means\nIn [1530]: ts.resample('D', how='mean')\nOut[1530]: \n2011-01-01   -0.319569\n2011-01-02   -0.337703\n2011-01-03    0.117258\nFreq: D, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<div id=\"time-stamps-vs-time-spans\">\n<h2>Time Stamps vs. Time Spans<\/h2>\n<p>Time-stamped data is the most basic type of timeseries data that associates values with points in time. 
For pandas objects, this means using the points in time to create the index.<\/p>\n<div>\n<div>\n<pre>In [1531]: dates = [datetime(2012, 5, 1), datetime(2012, 5, 2), datetime(2012, 5, 3)]\n\nIn [1532]: ts = Series(np.random.randn(3), dates)\n\nIn [1533]: type(ts.index)\nOut[1533]: pandas.tseries.index.DatetimeIndex\n\nIn [1534]: ts\nOut[1534]: \n2012-05-01   -0.410001\n2012-05-02   -0.078638\n2012-05-03    0.545952\ndtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>However, in many cases it is more natural to associate things like change variables with a time span instead.<\/p>\n<p>For example:<\/p>\n<div>\n<div>\n<pre>In [1535]: periods = PeriodIndex([Period('2012-01'), Period('2012-02'),\n   ......:                        Period('2012-03')])\n   ......:\n\nIn [1536]: ts = Series(np.random.randn(3), periods)\n\nIn [1537]: type(ts.index)\nOut[1537]: pandas.tseries.period.PeriodIndex\n\nIn [1538]: ts\nOut[1538]: \n2012-01   -1.219217\n2012-02   -1.226825\n2012-03    0.769804\nFreq: M, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Starting with 0.8, pandas allows you to capture both representations and convert between them. Under the hood, pandas represents timestamps using instances of\u00a0<tt>Timestamp<\/tt>\u00a0and sequences of timestamps using instances of\u00a0<tt>DatetimeIndex<\/tt>. For regular time spans, pandas uses\u00a0<tt>Period<\/tt>\u00a0objects for scalar values and\u00a0<tt>PeriodIndex<\/tt>\u00a0for sequences of spans. 
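The round trip between the two representations can be sketched briefly. This is a minimal example in present-day pandas import style (the post's examples assume the names are imported into the top-level namespace), using the documented `to_period` and `to_timestamp` methods:

```python
import numpy as np
import pandas as pd

# a timestamp-indexed series: one value per calendar day
rng = pd.date_range('2012-05-01', periods=3, freq='D')
ts = pd.Series(np.random.randn(3), index=rng)

# convert the DatetimeIndex into a PeriodIndex of daily spans...
ps = ts.to_period('D')
print(type(ps.index))          # a PeriodIndex

# ...and back again; each period maps to its starting timestamp
back = ps.to_timestamp()
print(back.index.equals(rng))  # True
```

Monthly or quarterly spans work the same way by passing 'M' or 'Q' to `to_period`.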
Better support for irregular intervals with arbitrary start and end points is forthcoming in future releases.<\/p>\n<\/div>\n<div id=\"generating-ranges-of-timestamps\">\n<h2>Generating Ranges of Timestamps<\/h2>\n<p>To generate an index with time stamps, you can use either the DatetimeIndex or Index constructor and pass in a list of datetime objects:<\/p>\n<div>\n<div>\n<pre>In [1539]: dates = [datetime(2012, 5, 1), datetime(2012, 5, 2), datetime(2012, 5, 3)]\n\nIn [1540]: index = DatetimeIndex(dates)\n\nIn [1541]: index # Note the frequency information\nOut[1541]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2012-05-01 00:00:00, ..., 2012-05-03 00:00:00]\nLength: 3, Freq: None, Timezone: None\n\nIn [1542]: index = Index(dates)\n\nIn [1543]: index # Automatically converted to DatetimeIndex\nOut[1543]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2012-05-01 00:00:00, ..., 2012-05-03 00:00:00]\nLength: 3, Freq: None, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>Practically, this becomes very cumbersome because we often need a very long index with a large number of timestamps. If we need timestamps on a regular frequency, we can use the pandas functions\u00a0<tt>date_range<\/tt>\u00a0and\u00a0<tt>bdate_range<\/tt>\u00a0to create timestamp indexes.<\/p>\n<div>\n<div>\n<pre>In [1544]: index = date_range('2000-1-1', periods=1000, freq='M')\n\nIn [1545]: index\nOut[1545]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2000-01-31 00:00:00, ..., 2083-04-30 00:00:00]\nLength: 1000, Freq: M, Timezone: None\n\nIn [1546]: index = bdate_range('2012-1-1', periods=250)\n\nIn [1547]: index\nOut[1547]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2012-01-02 00:00:00, ..., 2012-12-14 00:00:00]\nLength: 250, Freq: B, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>Convenience functions like\u00a0<tt>date_range<\/tt>\u00a0and\u00a0<tt>bdate_range<\/tt>\u00a0utilize a variety of frequency aliases. 
The default frequency for\u00a0<tt>date_range<\/tt>\u00a0is a\u00a0<strong>calendar day<\/strong>\u00a0while the default for\u00a0<tt>bdate_range<\/tt>\u00a0is a\u00a0<strong>business day<\/strong>.<\/p>\n<div>\n<div>\n<pre>In [1548]: start = datetime(2011, 1, 1)\n\nIn [1549]: end = datetime(2012, 1, 1)\n\nIn [1550]: rng = date_range(start, end)\n\nIn [1551]: rng\nOut[1551]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-01 00:00:00, ..., 2012-01-01 00:00:00]\nLength: 366, Freq: D, Timezone: None\n\nIn [1552]: rng = bdate_range(start, end)\n\nIn [1553]: rng\nOut[1553]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-03 00:00:00, ..., 2011-12-30 00:00:00]\nLength: 260, Freq: B, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p><tt>date_range<\/tt>\u00a0and\u00a0<tt>bdate_range<\/tt>\u00a0make it easy to generate a range of dates using various combinations of parameters like\u00a0<tt>start<\/tt>,\u00a0<tt>end<\/tt>,\u00a0<tt>periods<\/tt>, and\u00a0<tt>freq<\/tt>:<\/p>\n<div>\n<div>\n<pre>In [1554]: date_range(start, end, freq='BM')\nOut[1554]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-31 00:00:00, ..., 2011-12-30 00:00:00]\nLength: 12, Freq: BM, Timezone: None\n\nIn [1555]: date_range(start, end, freq='W')\nOut[1555]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-02 00:00:00, ..., 2012-01-01 00:00:00]\nLength: 53, Freq: W-SUN, Timezone: None\n\nIn [1556]: bdate_range(end=end, periods=20)\nOut[1556]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-12-05 00:00:00, ..., 2011-12-30 00:00:00]\nLength: 20, Freq: B, Timezone: None\n\nIn [1557]: bdate_range(start=start, periods=20)\nOut[1557]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-03 00:00:00, ..., 2011-01-28 00:00:00]\nLength: 20, Freq: B, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>The start and end dates are strictly inclusive. 
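The inclusive endpoints are easy to check directly. A quick sketch in present-day pandas import style, mirroring the session above:

```python
import pandas as pd
from datetime import datetime

start, end = datetime(2011, 1, 1), datetime(2012, 1, 1)
rng = pd.date_range(start, end)  # daily (calendar day) by default

# both endpoints appear in the result
print(rng[0], rng[-1])  # 2011-01-01 00:00:00 2012-01-01 00:00:00

# 365 days of 2011 plus the inclusive end date
print(len(rng))  # 366
```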
No dates outside of the specified endpoints will be generated.<\/p>\n<div id=\"datetimeindex\">\n<h3>DatetimeIndex<\/h3>\n<p>One of the main uses for\u00a0<tt>DatetimeIndex<\/tt>\u00a0is as an index for pandas objects. The\u00a0<tt>DatetimeIndex<\/tt>\u00a0class contains many timeseries-related optimizations:<\/p>\n<blockquote>\n<ul>\n<li>A large range of dates for various offsets are pre-computed and cached under the hood in order to make generating subsequent date ranges very fast (just have to grab a slice)<\/li>\n<li>Fast shifting using the\u00a0<tt>shift<\/tt>\u00a0and\u00a0<tt>tshift<\/tt>\u00a0methods on pandas objects<\/li>\n<li>Unioning of overlapping DatetimeIndex objects with the same frequency is very fast (important for fast data alignment)<\/li>\n<li>Quick access to date fields via properties such as\u00a0<tt>year<\/tt>,\u00a0<tt>month<\/tt>, etc.<\/li>\n<li>Regularization functions like\u00a0<tt>snap<\/tt>\u00a0and very fast\u00a0<tt>asof<\/tt>\u00a0logic<\/li>\n<\/ul>\n<\/blockquote>\n<p><tt>DatetimeIndex<\/tt>\u00a0can be used like a regular index and offers all of its intelligent functionality like selection, slicing, etc.<\/p>\n<div>\n<div>\n<pre>In [1558]: rng = date_range(start, end, freq='BM')\n\nIn [1559]: ts = Series(randn(len(rng)), index=rng)\n\nIn [1560]: ts.index\nOut[1560]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-31 00:00:00, ..., 2011-12-30 00:00:00]\nLength: 12, Freq: BM, Timezone: None\n\nIn [1561]: ts[:5].index\nOut[1561]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-31 00:00:00, ..., 2011-05-31 00:00:00]\nLength: 5, Freq: BM, Timezone: None\n\nIn [1562]: ts[::2].index\nOut[1562]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-31 00:00:00, ..., 2011-11-30 00:00:00]\nLength: 6, Freq: 2BM, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>You can pass in dates and strings that parse to dates as indexing parameters:<\/p>\n<div>\n<div>\n<pre>In [1563]: ts['1\/31\/2011']\nOut[1563]: 
-1.2812473076599531\n\nIn [1564]: ts[datetime(2011, 12, 25):]\nOut[1564]: \n2011-12-30    0.687738\nFreq: BM, dtype: float64\n\nIn [1565]: ts['10\/31\/2011':'12\/31\/2011']\nOut[1565]: \n2011-10-31    0.149748\n2011-11-30   -0.732339\n2011-12-30    0.687738\nFreq: BM, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>A\u00a0<tt>truncate<\/tt>\u00a0convenience function is provided that is equivalent to slicing:<\/p>\n<div>\n<div>\n<pre>In [1566]: ts.truncate(before='10\/31\/2011', after='12\/31\/2011')\nOut[1566]: \n2011-10-31    0.149748\n2011-11-30   -0.732339\n2011-12-30    0.687738\nFreq: BM, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>To provide convenience for accessing longer time series, you can also pass in the year or year and month as strings:<\/p>\n<div>\n<div>\n<pre>In [1567]: ts['2011']\nOut[1567]: \n2011-01-31   -1.281247\n2011-02-28   -0.727707\n2011-03-31   -0.121306\n2011-04-29   -0.097883\n2011-05-31    0.695775\n2011-06-30    0.341734\n2011-07-29    0.959726\n2011-08-31   -1.110336\n2011-09-30   -0.619976\n2011-10-31    0.149748\n2011-11-30   -0.732339\n2011-12-30    0.687738\nFreq: BM, dtype: float64\n\nIn [1568]: ts['2011-6']\nOut[1568]: \n2011-06-30    0.341734\nFreq: BM, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Even complicated fancy indexing that breaks the DatetimeIndex\u2019s frequency regularity will result in a\u00a0<tt>DatetimeIndex<\/tt>\u00a0(but frequency is lost):<\/p>\n<div>\n<div>\n<pre>In [1569]: ts[[0, 2, 6]].index\nOut[1569]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-31 00:00:00, ..., 2011-07-29 00:00:00]\nLength: 3, Freq: None, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>DatetimeIndex objects have all the basic functionality of regular Index objects and a smorgasbord of advanced timeseries-specific methods for easy frequency processing.<\/p>\n<div>\n<p>See also<\/p>\n<p><a href=\"http:\/\/pandas.pydata.org\/pandas-docs\/dev\/basics.html#basics-reindexing\"><em>Reindexing 
methods<\/em><\/a><\/p>\n<\/div>\n<div>\n<p>Note<\/p>\n<p>While pandas does not force you to have a sorted date index, some of these methods may have unexpected or incorrect behavior if the dates are unsorted. So please be careful.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"dateoffset-objects\">\n<h2>DateOffset objects<\/h2>\n<p>In the preceding examples, we created DatetimeIndex objects at various frequencies by passing in frequency strings like \u2018M\u2019, \u2018W\u2019, and \u2018BM\u2019 to the\u00a0<tt>freq<\/tt>\u00a0keyword. Under the hood, these frequency strings are being translated into an instance of pandas\u00a0<tt>DateOffset<\/tt>, which represents a regular frequency increment. Specific offset logic like \u201cmonth\u201d, \u201cbusiness day\u201d, or \u201cone hour\u201d is represented in its various subclasses.<\/p>\n<table border=\"1\">\n<colgroup>\n<col width=\"19%\" \/>\n<col width=\"81%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr>\n<th>Class name<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr>\n<td>DateOffset<\/td>\n<td>Generic offset class, defaults to 1 calendar day<\/td>\n<\/tr>\n<tr>\n<td>BDay<\/td>\n<td>business day (weekday)<\/td>\n<\/tr>\n<tr>\n<td>Week<\/td>\n<td>one week, optionally anchored on a day of the week<\/td>\n<\/tr>\n<tr>\n<td>WeekOfMonth<\/td>\n<td>the x-th day of the y-th week of each month<\/td>\n<\/tr>\n<tr>\n<td>MonthEnd<\/td>\n<td>calendar month end<\/td>\n<\/tr>\n<tr>\n<td>MonthBegin<\/td>\n<td>calendar month begin<\/td>\n<\/tr>\n<tr>\n<td>BMonthEnd<\/td>\n<td>business month end<\/td>\n<\/tr>\n<tr>\n<td>BMonthBegin<\/td>\n<td>business month begin<\/td>\n<\/tr>\n<tr>\n<td>QuarterEnd<\/td>\n<td>calendar quarter end<\/td>\n<\/tr>\n<tr>\n<td>QuarterBegin<\/td>\n<td>calendar quarter begin<\/td>\n<\/tr>\n<tr>\n<td>BQuarterEnd<\/td>\n<td>business quarter end<\/td>\n<\/tr>\n<tr>\n<td>BQuarterBegin<\/td>\n<td>business quarter begin<\/td>\n<\/tr>\n<tr>\n<td>YearEnd<\/td>\n<td>calendar year 
end<\/td>\n<\/tr>\n<tr>\n<td>YearBegin<\/td>\n<td>calendar year begin<\/td>\n<\/tr>\n<tr>\n<td>BYearEnd<\/td>\n<td>business year end<\/td>\n<\/tr>\n<tr>\n<td>BYearBegin<\/td>\n<td>business year begin<\/td>\n<\/tr>\n<tr>\n<td>Hour<\/td>\n<td>one hour<\/td>\n<\/tr>\n<tr>\n<td>Minute<\/td>\n<td>one minute<\/td>\n<\/tr>\n<tr>\n<td>Second<\/td>\n<td>one second<\/td>\n<\/tr>\n<tr>\n<td>Milli<\/td>\n<td>one millisecond<\/td>\n<\/tr>\n<tr>\n<td>Micro<\/td>\n<td>one microsecond<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>The basic\u00a0<tt>DateOffset<\/tt>\u00a0takes the same arguments as\u00a0<tt>dateutil.relativedelta<\/tt>, which works like:<\/p>\n<div>\n<div>\n<pre>In [1570]: d = datetime(2008, 8, 18)\n\nIn [1571]: d + relativedelta(months=4, days=5)\nOut[1571]: datetime.datetime(2008, 12, 23, 0, 0)<\/pre>\n<\/div>\n<\/div>\n<p>We could have done the same thing with\u00a0<tt>DateOffset<\/tt>:<\/p>\n<div>\n<div>\n<pre>In [1572]: from pandas.tseries.offsets import *\n\nIn [1573]: d + DateOffset(months=4, days=5)\nOut[1573]: datetime.datetime(2008, 12, 23, 0, 0)<\/pre>\n<\/div>\n<\/div>\n<p>The key features of a\u00a0<tt>DateOffset<\/tt>\u00a0object are:<\/p>\n<blockquote>\n<ul>\n<li>it can be added \/ subtracted to\/from a datetime object to obtain a shifted date<\/li>\n<li>it can be multiplied by an integer (positive or negative) so that the increment will be applied multiple times<\/li>\n<li>it has\u00a0<tt>rollforward<\/tt>\u00a0and\u00a0<tt>rollback<\/tt>\u00a0methods for moving a date forward or backward to the next or previous \u201coffset date\u201d<\/li>\n<\/ul>\n<\/blockquote>\n<p>Subclasses of\u00a0<tt>DateOffset<\/tt>\u00a0define the\u00a0<tt>apply<\/tt>\u00a0function which dictates custom date increment logic, such as adding business days:<\/p>\n<div>\n<div>\n<pre>class BDay(DateOffset):\n    \"\"\"DateOffset increments between business days\"\"\"\n    def apply(self, other):\n        ...<\/pre>\n<\/div>\n<\/div>\n<div>\n<div>\n<pre>In [1574]: d - 5 * 
BDay()\nOut[1574]: datetime.datetime(2008, 8, 11, 0, 0)\n\nIn [1575]: d + BMonthEnd()\nOut[1575]: datetime.datetime(2008, 8, 29, 0, 0)<\/pre>\n<\/div>\n<\/div>\n<p>The\u00a0<tt>rollforward<\/tt>\u00a0and\u00a0<tt>rollback<\/tt>\u00a0methods do exactly what you would expect:<\/p>\n<div>\n<div>\n<pre>In [1576]: d\nOut[1576]: datetime.datetime(2008, 8, 18, 0, 0)\n\nIn [1577]: offset = BMonthEnd()\n\nIn [1578]: offset.rollforward(d)\nOut[1578]: datetime.datetime(2008, 8, 29, 0, 0)\n\nIn [1579]: offset.rollback(d)\nOut[1579]: datetime.datetime(2008, 7, 31, 0, 0)<\/pre>\n<\/div>\n<\/div>\n<p>It\u2019s definitely worth exploring the\u00a0<tt>pandas.tseries.offsets<\/tt>\u00a0module and the various docstrings for the classes.<\/p>\n<div id=\"parametric-offsets\">\n<h3>Parametric offsets<\/h3>\n<p>Some of the offsets can be \u201cparameterized\u201d when created to result in different behavior. For example, the\u00a0<tt>Week<\/tt>\u00a0offset for generating weekly data accepts a\u00a0<tt>weekday<\/tt>\u00a0parameter which results in the generated dates always lying on a particular day of the week:<\/p>\n<div>\n<div>\n<pre>In [1580]: d + Week()\nOut[1580]: datetime.datetime(2008, 8, 25, 0, 0)\n\nIn [1581]: d + Week(weekday=4)\nOut[1581]: datetime.datetime(2008, 8, 22, 0, 0)\n\nIn [1582]: (d + Week(weekday=4)).weekday()\nOut[1582]: 4<\/pre>\n<\/div>\n<\/div>\n<p>Another example is parameterizing\u00a0<tt>YearEnd<\/tt>\u00a0with the specific ending month:<\/p>\n<div>\n<div>\n<pre>In [1583]: d + YearEnd()\nOut[1583]: datetime.datetime(2008, 12, 31, 0, 0)\n\nIn [1584]: d + YearEnd(month=6)\nOut[1584]: datetime.datetime(2009, 6, 30, 0, 0)<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"offset-aliases\">\n<h3>Offset Aliases<\/h3>\n<p>A number of string aliases are given to useful common time series frequencies. 
We will refer to these aliases as\u00a0<em>offset aliases<\/em>\u00a0(referred to as\u00a0<em>time rules<\/em>\u00a0prior to v0.8.0).<\/p>\n<table border=\"1\">\n<colgroup>\n<col width=\"13%\" \/>\n<col width=\"87%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr>\n<th>Alias<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr>\n<td>B<\/td>\n<td>business day frequency<\/td>\n<\/tr>\n<tr>\n<td>D<\/td>\n<td>calendar day frequency<\/td>\n<\/tr>\n<tr>\n<td>W<\/td>\n<td>weekly frequency<\/td>\n<\/tr>\n<tr>\n<td>M<\/td>\n<td>month end frequency<\/td>\n<\/tr>\n<tr>\n<td>BM<\/td>\n<td>business month end frequency<\/td>\n<\/tr>\n<tr>\n<td>MS<\/td>\n<td>month start frequency<\/td>\n<\/tr>\n<tr>\n<td>BMS<\/td>\n<td>business month start frequency<\/td>\n<\/tr>\n<tr>\n<td>Q<\/td>\n<td>quarter end frequency<\/td>\n<\/tr>\n<tr>\n<td>BQ<\/td>\n<td>business quarter end frequency<\/td>\n<\/tr>\n<tr>\n<td>QS<\/td>\n<td>quarter start frequency<\/td>\n<\/tr>\n<tr>\n<td>BQS<\/td>\n<td>business quarter start frequency<\/td>\n<\/tr>\n<tr>\n<td>A<\/td>\n<td>year end frequency<\/td>\n<\/tr>\n<tr>\n<td>BA<\/td>\n<td>business year end frequency<\/td>\n<\/tr>\n<tr>\n<td>AS<\/td>\n<td>year start frequency<\/td>\n<\/tr>\n<tr>\n<td>BAS<\/td>\n<td>business year start frequency<\/td>\n<\/tr>\n<tr>\n<td>H<\/td>\n<td>hourly frequency<\/td>\n<\/tr>\n<tr>\n<td>T<\/td>\n<td>minutely frequency<\/td>\n<\/tr>\n<tr>\n<td>S<\/td>\n<td>secondly frequency<\/td>\n<\/tr>\n<tr>\n<td>L<\/td>\n<td>milliseconds<\/td>\n<\/tr>\n<tr>\n<td>U<\/td>\n<td>microseconds<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<\/div>\n<div id=\"combining-aliases\">\n<h3>Combining Aliases<\/h3>\n<p>As we have seen previously, the alias and the offset instance are fungible in most functions:<\/p>\n<div>\n<div>\n<pre>In [1585]: date_range(start, periods=5, freq='B')\nOut[1585]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-03 00:00:00, ..., 2011-01-07 00:00:00]\nLength: 5, Freq: B, Timezone: None\n\nIn 
[1586]: date_range(start, periods=5, freq=BDay())\nOut[1586]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-03 00:00:00, ..., 2011-01-07 00:00:00]\nLength: 5, Freq: B, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<p>You can combine together day and intraday offsets:<\/p>\n<div>\n<div>\n<pre>In [1587]: date_range(start, periods=10, freq='2h20min')\nOut[1587]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-01 00:00:00, ..., 2011-01-01 21:00:00]\nLength: 10, Freq: 140T, Timezone: None\n\nIn [1588]: date_range(start, periods=10, freq='1D10U')\nOut[1588]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2011-01-01 00:00:00, ..., 2011-01-10 00:00:00.000090]\nLength: 10, Freq: 86400000010U, Timezone: None<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"anchored-offsets\">\n<h3>Anchored Offsets<\/h3>\n<p>For some frequencies you can specify an anchoring suffix:<\/p>\n<table border=\"1\">\n<colgroup>\n<col width=\"13%\" \/>\n<col width=\"87%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr>\n<th>Alias<\/th>\n<th>Description<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr>\n<td>W-SUN<\/td>\n<td>weekly frequency (sundays). Same as \u2018W\u2019<\/td>\n<\/tr>\n<tr>\n<td>W-MON<\/td>\n<td>weekly frequency (mondays)<\/td>\n<\/tr>\n<tr>\n<td>W-TUE<\/td>\n<td>weekly frequency (tuesdays)<\/td>\n<\/tr>\n<tr>\n<td>W-WED<\/td>\n<td>weekly frequency (wednesdays)<\/td>\n<\/tr>\n<tr>\n<td>W-THU<\/td>\n<td>weekly frequency (thursdays)<\/td>\n<\/tr>\n<tr>\n<td>W-FRI<\/td>\n<td>weekly frequency (fridays)<\/td>\n<\/tr>\n<tr>\n<td>W-SAT<\/td>\n<td>weekly frequency (saturdays)<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-DEC<\/td>\n<td>quarterly frequency, year ends in December. 
Same as \u2018Q\u2019<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-JAN<\/td>\n<td>quarterly frequency, year ends in January<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-FEB<\/td>\n<td>quarterly frequency, year ends in February<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-MAR<\/td>\n<td>quarterly frequency, year ends in March<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-APR<\/td>\n<td>quarterly frequency, year ends in April<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-MAY<\/td>\n<td>quarterly frequency, year ends in May<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-JUN<\/td>\n<td>quarterly frequency, year ends in June<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-JUL<\/td>\n<td>quarterly frequency, year ends in July<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-AUG<\/td>\n<td>quarterly frequency, year ends in August<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-SEP<\/td>\n<td>quarterly frequency, year ends in September<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-OCT<\/td>\n<td>quarterly frequency, year ends in October<\/td>\n<\/tr>\n<tr>\n<td>(B)Q(S)-NOV<\/td>\n<td>quarterly frequency, year ends in November<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-DEC<\/td>\n<td>annual frequency, anchored end of December. 
Same as \u2018A\u2019<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-JAN<\/td>\n<td>annual frequency, anchored end of January<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-FEB<\/td>\n<td>annual frequency, anchored end of February<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-MAR<\/td>\n<td>annual frequency, anchored end of March<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-APR<\/td>\n<td>annual frequency, anchored end of April<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-MAY<\/td>\n<td>annual frequency, anchored end of May<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-JUN<\/td>\n<td>annual frequency, anchored end of June<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-JUL<\/td>\n<td>annual frequency, anchored end of July<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-AUG<\/td>\n<td>annual frequency, anchored end of August<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-SEP<\/td>\n<td>annual frequency, anchored end of September<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-OCT<\/td>\n<td>annual frequency, anchored end of October<\/td>\n<\/tr>\n<tr>\n<td>(B)A(S)-NOV<\/td>\n<td>annual frequency, anchored end of November<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>These can be used as arguments to\u00a0<tt>date_range<\/tt>,\u00a0<tt>bdate_range<\/tt>, constructors for\u00a0<tt>DatetimeIndex<\/tt>, as well as various other timeseries-related functions in pandas.<\/p>\n<\/div>\n<div id=\"legacy-aliases\">\n<h3>Legacy Aliases<\/h3>\n<p>Note that prior to v0.8.0, time rules had a slightly different look. 
Pandas will continue to support the legacy time rules for the time being, but it is strongly recommended that you switch to using the new offset aliases.<\/p>\n<table border=\"1\">\n<colgroup>\n<col width=\"19%\" \/>\n<col width=\"81%\" \/><\/colgroup>\n<thead valign=\"bottom\">\n<tr>\n<th>Legacy Time Rule<\/th>\n<th>Offset Alias<\/th>\n<\/tr>\n<\/thead>\n<tbody valign=\"top\">\n<tr>\n<td>WEEKDAY<\/td>\n<td>B<\/td>\n<\/tr>\n<tr>\n<td>EOM<\/td>\n<td>BM<\/td>\n<\/tr>\n<tr>\n<td>W@MON<\/td>\n<td>W-MON<\/td>\n<\/tr>\n<tr>\n<td>W@TUE<\/td>\n<td>W-TUE<\/td>\n<\/tr>\n<tr>\n<td>W@WED<\/td>\n<td>W-WED<\/td>\n<\/tr>\n<tr>\n<td>W@THU<\/td>\n<td>W-THU<\/td>\n<\/tr>\n<tr>\n<td>W@FRI<\/td>\n<td>W-FRI<\/td>\n<\/tr>\n<tr>\n<td>W@SAT<\/td>\n<td>W-SAT<\/td>\n<\/tr>\n<tr>\n<td>W@SUN<\/td>\n<td>W-SUN<\/td>\n<\/tr>\n<tr>\n<td>Q@JAN<\/td>\n<td>BQ-JAN<\/td>\n<\/tr>\n<tr>\n<td>Q@FEB<\/td>\n<td>BQ-FEB<\/td>\n<\/tr>\n<tr>\n<td>Q@MAR<\/td>\n<td>BQ-MAR<\/td>\n<\/tr>\n<tr>\n<td>A@JAN<\/td>\n<td>BA-JAN<\/td>\n<\/tr>\n<tr>\n<td>A@FEB<\/td>\n<td>BA-FEB<\/td>\n<\/tr>\n<tr>\n<td>A@MAR<\/td>\n<td>BA-MAR<\/td>\n<\/tr>\n<tr>\n<td>A@APR<\/td>\n<td>BA-APR<\/td>\n<\/tr>\n<tr>\n<td>A@MAY<\/td>\n<td>BA-MAY<\/td>\n<\/tr>\n<tr>\n<td>A@JUN<\/td>\n<td>BA-JUN<\/td>\n<\/tr>\n<tr>\n<td>A@JUL<\/td>\n<td>BA-JUL<\/td>\n<\/tr>\n<tr>\n<td>A@AUG<\/td>\n<td>BA-AUG<\/td>\n<\/tr>\n<tr>\n<td>A@SEP<\/td>\n<td>BA-SEP<\/td>\n<\/tr>\n<tr>\n<td>A@OCT<\/td>\n<td>BA-OCT<\/td>\n<\/tr>\n<tr>\n<td>A@NOV<\/td>\n<td>BA-NOV<\/td>\n<\/tr>\n<tr>\n<td>A@DEC<\/td>\n<td>BA-DEC<\/td>\n<\/tr>\n<tr>\n<td>min<\/td>\n<td>T<\/td>\n<\/tr>\n<tr>\n<td>ms<\/td>\n<td>L<\/td>\n<\/tr>\n<tr>\n<td>us<\/td>\n<td>U<\/td>\n<\/tr>\n<\/tbody>\n<\/table>\n<p>As you can see, legacy quarterly and annual frequencies are business quarter and business year ends. Please also note the legacy time rule for milliseconds\u00a0<tt>ms<\/tt>\u00a0versus the new offset alias for month start\u00a0<tt>MS<\/tt>. 
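The ms-versus-MS contrast can be checked directly. A small sketch, assuming a current pandas in which lowercase 'ms' parses as milliseconds and uppercase 'MS' as month start:

```python
import pandas as pd

# 'MS' (upper case) is the month-start alias...
monthly = pd.date_range('2012-01-01', periods=3, freq='MS')

# ...while 'ms' (lower case) means milliseconds
milli = pd.date_range('2012-01-01', periods=3, freq='ms')

print(monthly[1])           # 2012-02-01 00:00:00
print(milli[1] - milli[0])  # 0 days 00:00:00.001000
```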
This means that offset alias parsing is case sensitive.<\/p>\n<\/div>\n<\/div>\n<div id=\"time-series-related-instance-methods\">\n<h2>Time series-related instance methods<\/h2>\n<div id=\"shifting-lagging\">\n<h3>Shifting \/ lagging<\/h3>\n<p>One may want to\u00a0<em>shift<\/em>\u00a0or\u00a0<em>lag<\/em>\u00a0the values in a TimeSeries backward and forward in time. The method for this is\u00a0<tt>shift<\/tt>, which is available on all of the pandas objects. In DataFrame,\u00a0<tt>shift<\/tt>\u00a0will currently only shift along the\u00a0<tt>index<\/tt>\u00a0and in Panel along the\u00a0<tt>major_axis<\/tt>.<\/p>\n<div>\n<div>\n<pre>In [1589]: ts = ts[:5]\n\nIn [1590]: ts.shift(1)\nOut[1590]: \n2011-01-31         NaN\n2011-02-28   -1.281247\n2011-03-31   -0.727707\n2011-04-29   -0.121306\n2011-05-31   -0.097883\nFreq: BM, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>The shift method accepts a\u00a0<tt>freq<\/tt>\u00a0argument which can accept a\u00a0<tt>DateOffset<\/tt>\u00a0class, another\u00a0<tt>timedelta<\/tt>-like object, or an\u00a0<a href=\"http:\/\/pandas.pydata.org\/pandas-docs\/dev\/timeseries.html#timeseries-alias\"><em>offset alias<\/em><\/a>:<\/p>\n<div>\n<div>\n<pre>In [1591]: ts.shift(5, freq=datetools.bday)\nOut[1591]: \n2011-02-07   -1.281247\n2011-03-07   -0.727707\n2011-04-07   -0.121306\n2011-05-06   -0.097883\n2011-06-07    0.695775\ndtype: float64\n\nIn [1592]: ts.shift(5, freq='BM')\nOut[1592]: \n2011-06-30   -1.281247\n2011-07-29   -0.727707\n2011-08-31   -0.121306\n2011-09-30   -0.097883\n2011-10-31    0.695775\nFreq: BM, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Rather than changing the alignment of the data and the index,\u00a0<tt>DataFrame<\/tt>\u00a0and\u00a0<tt>TimeSeries<\/tt>\u00a0objects also have a\u00a0<tt>tshift<\/tt>\u00a0convenience method that changes all the dates in the index by a specified number of offsets:<\/p>\n<div>\n<div>\n<pre>In [1593]: ts.tshift(5, freq='D')\nOut[1593]: \n2011-02-05   -1.281247\n2011-03-05   -0.727707\n2011-04-05   
-0.121306\n2011-05-04   -0.097883\n2011-06-05    0.695775\ndtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Note that with\u00a0<tt>tshift<\/tt>, the leading entry is no longer NaN because the data is not being realigned.<\/p>\n<\/div>\n<div id=\"frequency-conversion\">\n<h3>Frequency conversion<\/h3>\n<p>The primary function for changing frequencies is the\u00a0<tt>asfreq<\/tt>\u00a0function. For a\u00a0<tt>DatetimeIndex<\/tt>, this is basically just a thin but convenient wrapper around\u00a0<tt>reindex<\/tt>\u00a0which generates a\u00a0<tt>date_range<\/tt>\u00a0and calls\u00a0<tt>reindex<\/tt>.<\/p>\n<div>\n<div>\n<pre>In [1594]: dr = date_range('1\/1\/2010', periods=3, freq=3 * datetools.bday)\n\nIn [1595]: ts = Series(randn(3), index=dr)\n\nIn [1596]: ts\nOut[1596]: \n2010-01-01    0.176444\n2010-01-06    0.403310\n2010-01-11   -0.154951\nFreq: 3B, dtype: float64\n\nIn [1597]: ts.asfreq(BDay())\nOut[1597]: \n2010-01-01    0.176444\n2010-01-04         NaN\n2010-01-05         NaN\n2010-01-06    0.403310\n2010-01-07         NaN\n2010-01-08         NaN\n2010-01-11   -0.154951\nFreq: B, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p><tt>asfreq<\/tt>\u00a0provides a further convenience so you can specify an interpolation method for any gaps that may appear after the frequency conversion.<\/p>\n<div>\n<div>\n<pre>In [1598]: ts.asfreq(BDay(), method='pad')\nOut[1598]: \n2010-01-01    0.176444\n2010-01-04    0.176444\n2010-01-05    0.176444\n2010-01-06    0.403310\n2010-01-07    0.403310\n2010-01-08    0.403310\n2010-01-11   -0.154951\nFreq: B, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"filling-forward-backward\">\n<h3>Filling forward \/ backward<\/h3>\n<p>Related to\u00a0<tt>asfreq<\/tt>\u00a0and\u00a0<tt>reindex<\/tt>\u00a0is the\u00a0<tt>fillna<\/tt>\u00a0function documented in the\u00a0<a href=\"http:\/\/pandas.pydata.org\/pandas-docs\/dev\/missing_data.html#missing-data-fillna\"><em>missing data section<\/em><\/a>.<\/p>\n<\/div>\n<div 
id=\"converting-to-python-datetimes\">\n<h3>Converting to Python datetimes<\/h3>\n<p><tt>DatetimeIndex<\/tt>\u00a0can be converted to an array of Python native datetime.datetime objects using the\u00a0<tt>to_pydatetime<\/tt>\u00a0method.<\/p>\n<\/div>\n<\/div>\n<div id=\"up-and-downsampling\">\n<h2>Up- and downsampling<\/h2>\n<p>With 0.8, pandas introduces simple, powerful, and efficient functionality for performing resampling operations during frequency conversion (e.g., converting secondly data into 5-minutely data). This is extremely common in, but not limited to, financial applications.<\/p>\n<p>See some\u00a0<a href=\"http:\/\/pandas.pydata.org\/pandas-docs\/dev\/cookbook.html#cookbook-resample\"><em>cookbook examples<\/em><\/a>\u00a0for some advanced strategies<\/p>\n<div>\n<div>\n<pre>In [1599]: rng = date_range('1\/1\/2012', periods=100, freq='S')\n\nIn [1600]: ts = Series(randint(0, 500, len(rng)), index=rng)\n\nIn [1601]: ts.resample('5Min', how='sum')\nOut[1601]: \n2012-01-01    25792\nFreq: 5T, dtype: int64<\/pre>\n<\/div>\n<\/div>\n<p>The\u00a0<tt>resample<\/tt>\u00a0function is very flexible and allows you to specify many different parameters to control the frequency conversion and resampling operation.<\/p>\n<p>The\u00a0<tt>how<\/tt>\u00a0parameter can be a function name or numpy array function that takes an array and produces aggregated values:<\/p>\n<div>\n<div>\n<pre>In [1602]: ts.resample('5Min') # default is mean\nOut[1602]: \n2012-01-01    257.92\nFreq: 5T, dtype: float64\n\nIn [1603]: ts.resample('5Min', how='ohlc')\nOut[1603]: \n            open  high  low  close\n2012-01-01   230   492    0    214\n\nIn [1604]: ts.resample('5Min', how=np.max)\nOut[1604]: \n2012-01-01   NaN\nFreq: 5T, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Any function available via\u00a0<a href=\"http:\/\/pandas.pydata.org\/pandas-docs\/dev\/groupby.html#groupby-dispatch\"><em>dispatching<\/em><\/a>\u00a0can be given to the\u00a0<tt>how<\/tt>\u00a0parameter by name, 
including\u00a0<tt>sum<\/tt>,\u00a0<tt>mean<\/tt>,\u00a0<tt>std<\/tt>,\u00a0<tt>max<\/tt>,\u00a0<tt>min<\/tt>,\u00a0<tt>median<\/tt>,\u00a0<tt>first<\/tt>,\u00a0<tt>last<\/tt>,\u00a0<tt>ohlc<\/tt>.<\/p>\n<p>For downsampling,\u00a0<tt>closed<\/tt>\u00a0can be set to \u2018left\u2019 or \u2018right\u2019 to specify which end of the interval is closed:<\/p>\n<div>\n<div>\n<pre>In [1605]: ts.resample('5Min', closed='right')\nOut[1605]: \n2011-12-31 23:55:00    230.00000\n2012-01-01 00:00:00    258.20202\nFreq: 5T, dtype: float64\n\nIn [1606]: ts.resample('5Min', closed='left')\nOut[1606]: \n2012-01-01    257.92\nFreq: 5T, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>For upsampling, the\u00a0<tt>fill_method<\/tt>\u00a0and\u00a0<tt>limit<\/tt>\u00a0parameters can be specified to interpolate over the gaps that are created:<\/p>\n<div>\n<div>\n<pre># from secondly to every 250 milliseconds\nIn [1607]: ts[:2].resample('250L')\nOut[1607]: \n2012-01-01 00:00:00           230\n2012-01-01 00:00:00.250000    NaN\n2012-01-01 00:00:00.500000    NaN\n2012-01-01 00:00:00.750000    NaN\n2012-01-01 00:00:01           202\nFreq: 250L, dtype: float64\n\nIn [1608]: ts[:2].resample('250L', fill_method='pad')\nOut[1608]: \n2012-01-01 00:00:00           230\n2012-01-01 00:00:00.250000    230\n2012-01-01 00:00:00.500000    230\n2012-01-01 00:00:00.750000    230\n2012-01-01 00:00:01           202\nFreq: 250L, dtype: int64\n\nIn [1609]: ts[:2].resample('250L', fill_method='pad', limit=2)\nOut[1609]: \n2012-01-01 00:00:00           230\n2012-01-01 00:00:00.250000    230\n2012-01-01 00:00:00.500000    230\n2012-01-01 00:00:00.750000    NaN\n2012-01-01 00:00:01           202\nFreq: 250L, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Parameters like\u00a0<tt>label<\/tt>\u00a0and\u00a0<tt>loffset<\/tt>\u00a0are used to manipulate the resulting labels.\u00a0<tt>label<\/tt>\u00a0specifies whether the result is labeled with the beginning or the end of the interval.\u00a0<tt>loffset<\/tt>\u00a0performs a time adjustment on 
the output labels.<\/p>\n<div>\n<div>\n<pre>In [1610]: ts.resample('5Min') # by default label='right'\nOut[1610]: \n2012-01-01    257.92\nFreq: 5T, dtype: float64\n\nIn [1611]: ts.resample('5Min', label='left')\nOut[1611]: \n2012-01-01    257.92\nFreq: 5T, dtype: float64\n\nIn [1612]: ts.resample('5Min', label='left', loffset='1s')\nOut[1612]: \n2012-01-01 00:00:01    257.92\ndtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>The\u00a0<tt>axis<\/tt>\u00a0parameter can be set to 0 or 1 and allows you to resample the specified axis for a DataFrame.<\/p>\n<p><tt>kind<\/tt>\u00a0can be set to \u2018timestamp\u2019 or \u2018period\u2019 to convert the resulting index to\/from time-stamp and time-span representations. By default\u00a0<tt>resample<\/tt>\u00a0retains the input representation.<\/p>\n<p><tt>convention<\/tt>\u00a0can be set to \u2018start\u2019 or \u2018end\u2019 when resampling period data (detail below). It specifies how low frequency periods are converted to higher frequency periods.<\/p>\n<p>Note that 0.8 marks a watershed in the timeseries functionality in pandas. In previous versions, resampling had to be done using a combination of\u00a0<tt>date_range<\/tt>,\u00a0<tt>groupby<\/tt>\u00a0with\u00a0<tt>asof<\/tt>, and then calling an aggregation function on the grouped object. This was not nearly as convenient or performant as the new pandas timeseries API.<\/p>\n<\/div>\n<div id=\"time-span-representation\">\n<h2>Time Span Representation<\/h2>\n<p>Regular intervals of time are represented by\u00a0<tt>Period<\/tt>\u00a0objects in pandas while sequences of\u00a0<tt>Period<\/tt>\u00a0objects are collected in a\u00a0<tt>PeriodIndex<\/tt>, which can be created with the convenience function\u00a0<tt>period_range<\/tt>.<\/p>\n<div id=\"period\">\n<h3>Period<\/h3>\n<p>A\u00a0<tt>Period<\/tt>\u00a0represents a span of time (e.g., a day, a month, a quarter, etc). 
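Each such span has a definite start and end instant. A minimal sketch of this (using the `start_time` and `end_time` attributes as spelled in recent pandas; older releases may spell these differently):

```python
import pandas as pd

# A Period covers a whole span of time, not a single instant:
# here, the second calendar quarter of 2012.
p = pd.Period('2012Q2', freq='Q-DEC')

print(p.start_time)  # first instant inside the span (2012-04-01 00:00:00)
print(p.end_time)    # last instant inside the span (end of 2012-06-30)
```

Here `freq='Q-DEC'` means quarters of a calendar year ending in December.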
It can be created using a frequency alias:<\/p>\n<div>\n<div>\n<pre>In [1613]: Period('2012', freq='A-DEC')\nOut[1613]: Period('2012', 'A-DEC')\n\nIn [1614]: Period('2012-1-1', freq='D')\nOut[1614]: Period('2012-01-01', 'D')\n\nIn [1615]: Period('2012-1-1 19:00', freq='H')\nOut[1615]: Period('2012-01-01 19:00', 'H')<\/pre>\n<\/div>\n<\/div>\n<p>Unlike time stamped data, pandas does not support frequencies at multiples of DateOffsets (e.g., \u20183Min\u2019) for periods.<\/p>\n<p>Adding and subtracting integers from periods shifts the period by its own frequency.<\/p>\n<div>\n<div>\n<pre>In [1616]: p = Period('2012', freq='A-DEC')\n\nIn [1617]: p + 1\nOut[1617]: Period('2013', 'A-DEC')\n\nIn [1618]: p - 3\nOut[1618]: Period('2009', 'A-DEC')<\/pre>\n<\/div>\n<\/div>\n<p>Taking the difference of\u00a0<tt>Period<\/tt>\u00a0instances with the same frequency will return the number of frequency units between them:<\/p>\n<div>\n<div>\n<pre>In [1619]: Period('2012', freq='A-DEC') - Period('2002', freq='A-DEC')\nOut[1619]: 10<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"periodindex-and-period-range\">\n<h3>PeriodIndex and period_range<\/h3>\n<p>Regular sequences of\u00a0<tt>Period<\/tt>\u00a0objects can be collected in a\u00a0<tt>PeriodIndex<\/tt>, which can be constructed using the\u00a0<tt>period_range<\/tt>\u00a0convenience function:<\/p>\n<div>\n<div>\n<pre>In [1620]: prng = period_range('1\/1\/2011', '1\/1\/2012', freq='M')\n\nIn [1621]: prng\nOut[1621]: \n&lt;class 'pandas.tseries.period.PeriodIndex'&gt;\nfreq: M\n[2011-01, ..., 2012-01]\nlength: 13<\/pre>\n<\/div>\n<\/div>\n<p>The\u00a0<tt>PeriodIndex<\/tt>\u00a0constructor can also be used directly:<\/p>\n<div>\n<div>\n<pre>In [1622]: PeriodIndex(['2011-1', '2011-2', '2011-3'], freq='M')\nOut[1622]: \n&lt;class 'pandas.tseries.period.PeriodIndex'&gt;\nfreq: M\n[2011-01, ..., 2011-03]\nlength: 3<\/pre>\n<\/div>\n<\/div>\n<p>Just like\u00a0<tt>DatetimeIndex<\/tt>, a\u00a0<tt>PeriodIndex<\/tt>\u00a0can also be used to 
index pandas objects:<\/p>\n<div>\n<div>\n<pre>In [1623]: Series(randn(len(prng)), prng)\nOut[1623]: \n2011-01    0.301624\n2011-02   -1.460489\n2011-03    0.610679\n2011-04    1.195856\n2011-05   -0.008820\n2011-06   -0.045729\n2011-07   -1.051015\n2011-08   -0.422924\n2011-09   -0.028361\n2011-10   -0.782386\n2011-11    0.861980\n2011-12    1.438604\n2012-01   -0.525492\nFreq: M, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"frequency-conversion-and-resampling-with-periodindex\">\n<h3>Frequency Conversion and Resampling with PeriodIndex<\/h3>\n<p>The frequency of Periods and PeriodIndex can be converted via the\u00a0<tt>asfreq<\/tt>\u00a0method. Let\u2019s start with the fiscal year 2011, ending in December:<\/p>\n<div>\n<div>\n<pre>In [1624]: p = Period('2011', freq='A-DEC')\n\nIn [1625]: p\nOut[1625]: Period('2011', 'A-DEC')<\/pre>\n<\/div>\n<\/div>\n<p>We can convert it to a monthly frequency. Using the\u00a0<tt>how<\/tt>\u00a0parameter, we can specify whether to return the starting or ending month:<\/p>\n<div>\n<div>\n<pre>In [1626]: p.asfreq('M', how='start')\nOut[1626]: Period('2011-01', 'M')\n\nIn [1627]: p.asfreq('M', how='end')\nOut[1627]: Period('2011-12', 'M')<\/pre>\n<\/div>\n<\/div>\n<p>The shorthands \u2018s\u2019 and \u2018e\u2019 are provided for convenience:<\/p>\n<div>\n<div>\n<pre>In [1628]: p.asfreq('M', 's')\nOut[1628]: Period('2011-01', 'M')\n\nIn [1629]: p.asfreq('M', 'e')\nOut[1629]: Period('2011-12', 'M')<\/pre>\n<\/div>\n<\/div>\n<p>Converting to a \u201csuper-period\u201d (e.g., annual frequency is a super-period of quarterly frequency) automatically returns the super-period that includes the input period:<\/p>\n<div>\n<div>\n<pre>In [1630]: p = Period('2011-12', freq='M')\n\nIn [1631]: p.asfreq('A-NOV')\nOut[1631]: Period('2012', 'A-NOV')<\/pre>\n<\/div>\n<\/div>\n<p>Note that since we converted to an annual frequency that ends the year in November, the monthly period of December 2011 is actually in the 2012 A-NOV 
period.<\/p>\n<p id=\"timeseries-quarterly\">Period conversions with anchored frequencies are particularly useful for working with various quarterly data common to economics, business, and other fields. Many organizations define quarters relative to the month in which their fiscal year starts and ends. Thus, the first quarter of 2011 could start in 2010 or a few months into 2011. Via anchored frequencies, pandas works with all quarterly frequencies\u00a0<tt>Q-JAN<\/tt>\u00a0through\u00a0<tt>Q-DEC<\/tt>.<\/p>\n<p><tt>Q-DEC<\/tt>\u00a0defines regular calendar quarters:<\/p>\n<div>\n<div>\n<pre>In [1632]: p = Period('2012Q1', freq='Q-DEC')\n\nIn [1633]: p.asfreq('D', 's')\nOut[1633]: Period('2012-01-01', 'D')\n\nIn [1634]: p.asfreq('D', 'e')\nOut[1634]: Period('2012-03-31', 'D')<\/pre>\n<\/div>\n<\/div>\n<p><tt>Q-MAR<\/tt>\u00a0defines fiscal year end in March:<\/p>\n<div>\n<div>\n<pre>In [1635]: p = Period('2011Q4', freq='Q-MAR')\n\nIn [1636]: p.asfreq('D', 's')\nOut[1636]: Period('2011-01-01', 'D')\n\nIn [1637]: p.asfreq('D', 'e')\nOut[1637]: Period('2011-03-31', 'D')<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"converting-between-representations\">\n<h2>Converting between Representations<\/h2>\n<p>Timestamped data can be converted to PeriodIndex-ed data using\u00a0<tt>to_period<\/tt>\u00a0and vice-versa using\u00a0<tt>to_timestamp<\/tt>:<\/p>\n<div>\n<div>\n<pre>In [1638]: rng = date_range('1\/1\/2012', periods=5, freq='M')\n\nIn [1639]: ts = Series(randn(len(rng)), index=rng)\n\nIn [1640]: ts\nOut[1640]: \n2012-01-31   -1.684469\n2012-02-29    0.550605\n2012-03-31    0.091955\n2012-04-30    0.891713\n2012-05-31    0.807078\nFreq: M, dtype: float64\n\nIn [1641]: ps = ts.to_period()\n\nIn [1642]: ps\nOut[1642]: \n2012-01   -1.684469\n2012-02    0.550605\n2012-03    0.091955\n2012-04    0.891713\n2012-05    0.807078\nFreq: M, dtype: float64\n\nIn [1643]: ps.to_timestamp()\nOut[1643]: \n2012-01-01   -1.684469\n2012-02-01    0.550605\n2012-03-01    0.091955\n2012-04-01   
 0.891713\n2012-05-01    0.807078\nFreq: MS, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Remember that \u2018s\u2019 and \u2018e\u2019 can be used to return the timestamps at the start or end of the period:<\/p>\n<div>\n<div>\n<pre>In [1644]: ps.to_timestamp('D', how='s')\nOut[1644]: \n2012-01-01   -1.684469\n2012-02-01    0.550605\n2012-03-01    0.091955\n2012-04-01    0.891713\n2012-05-01    0.807078\nFreq: MS, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Converting between period and timestamp enables some convenient arithmetic functions to be used. In the following example, we convert a quarterly frequency with year ending in November to 9am of the end of the month following the quarter end:<\/p>\n<div>\n<div>\n<pre>In [1645]: prng = period_range('1990Q1', '2000Q4', freq='Q-NOV')\n\nIn [1646]: ts = Series(randn(len(prng)), prng)\n\nIn [1647]: ts.index = (prng.asfreq('M', 'e') + 1).asfreq('H', 's') + 9\n\nIn [1648]: ts.head()\nOut[1648]: \n1990-03-01 09:00    0.221441\n1990-06-01 09:00   -0.113139\n1990-09-01 09:00   -1.812900\n1990-12-01 09:00   -0.053708\n1991-03-01 09:00   -0.114574\nFreq: H, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"time-zone-handling\">\n<h2>Time Zone Handling<\/h2>\n<p>Using\u00a0<tt>pytz<\/tt>, pandas provides rich support for working with timestamps in different time zones. By default, pandas objects are time zone unaware:<\/p>\n<div>\n<div>\n<pre>In [1649]: rng = date_range('3\/6\/2012 00:00', periods=15, freq='D')\n\nIn [1650]: print(rng.tz)\nNone<\/pre>\n<\/div>\n<\/div>\n<p>To supply the time zone, you can use the\u00a0<tt>tz<\/tt>\u00a0keyword to\u00a0<tt>date_range<\/tt>\u00a0and other functions:<\/p>\n<div>\n<div>\n<pre>In [1651]: rng_utc = date_range('3\/6\/2012 00:00', periods=10, freq='D', tz='UTC')\n\nIn [1652]: print(rng_utc.tz)\nUTC<\/pre>\n<\/div>\n<\/div>\n<p>Timestamps, like Python\u2019s\u00a0<tt>datetime.datetime<\/tt>\u00a0object can be either time zone naive or time zone aware. 
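The difference between the two is easy to see on a scalar (a small sketch; `pd.Timestamp` and its `tz_localize` method are assumed available, as in recent pandas):

```python
import pandas as pd

# A naive timestamp carries no time zone information...
naive = pd.Timestamp('2012-03-06 00:00')
print(naive.tz)  # None

# ...while tz_localize attaches a zone, making it time zone aware
# without shifting the wall-clock value.
aware = naive.tz_localize('UTC')
print(aware.tz)  # UTC
```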
Naive time series and DatetimeIndex objects can be\u00a0<em>localized<\/em>\u00a0using\u00a0<tt>tz_localize<\/tt>:<\/p>\n<div>\n<div>\n<pre>In [1653]: ts = Series(randn(len(rng)), rng)\n\nIn [1654]: ts_utc = ts.tz_localize('UTC')\n\nIn [1655]: ts_utc\nOut[1655]: \n2012-03-06 00:00:00+00:00   -0.114722\n2012-03-07 00:00:00+00:00    0.168904\n2012-03-08 00:00:00+00:00   -0.048048\n2012-03-09 00:00:00+00:00    0.801196\n2012-03-10 00:00:00+00:00    1.392071\n2012-03-11 00:00:00+00:00   -0.048788\n2012-03-12 00:00:00+00:00   -0.808838\n2012-03-13 00:00:00+00:00   -1.003677\n2012-03-14 00:00:00+00:00   -0.160766\n2012-03-15 00:00:00+00:00    1.758853\n2012-03-16 00:00:00+00:00    0.729195\n2012-03-17 00:00:00+00:00    1.359732\n2012-03-18 00:00:00+00:00    2.006296\n2012-03-19 00:00:00+00:00    0.870210\n2012-03-20 00:00:00+00:00    0.043464\nFreq: D, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>You can use the\u00a0<tt>tz_convert<\/tt>\u00a0method to convert tz-aware pandas objects to another time zone:<\/p>\n<div>\n<div>\n<pre>In [1656]: ts_utc.tz_convert('US\/Eastern')\nOut[1656]: \n2012-03-05 19:00:00-05:00   -0.114722\n2012-03-06 19:00:00-05:00    0.168904\n2012-03-07 19:00:00-05:00   -0.048048\n2012-03-08 19:00:00-05:00    0.801196\n2012-03-09 19:00:00-05:00    1.392071\n2012-03-10 19:00:00-05:00   -0.048788\n2012-03-11 20:00:00-04:00   -0.808838\n2012-03-12 20:00:00-04:00   -1.003677\n2012-03-13 20:00:00-04:00   -0.160766\n2012-03-14 20:00:00-04:00    1.758853\n2012-03-15 20:00:00-04:00    0.729195\n2012-03-16 20:00:00-04:00    1.359732\n2012-03-17 20:00:00-04:00    2.006296\n2012-03-18 20:00:00-04:00    0.870210\n2012-03-19 20:00:00-04:00    0.043464\nFreq: D, dtype: float64<\/pre>\n<\/div>\n<\/div>\n<p>Under the hood, all timestamps are stored in UTC. Scalar values from a\u00a0<tt>DatetimeIndex<\/tt>\u00a0with a time zone will have their fields (day, hour, minute) localized to the time zone. 
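For example, the same UTC instant viewed through another zone reports different field values (a sketch; `US/Eastern` was on standard time, UTC-5, at this instant):

```python
import pandas as pd

# One instant, stored in UTC, viewed in a different zone:
ts_utc = pd.Timestamp('2012-03-11 01:00', tz='UTC')
ts_east = ts_utc.tz_convert('US/Eastern')

# Fields reflect the local wall clock, not the stored UTC value.
print(ts_utc.hour)   # 1
print(ts_east.hour)  # 20 (the previous evening, Eastern time)
```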
However, timestamps with the same UTC value are still considered to be equal even if they are in different time zones:<\/p>\n<div>\n<div>\n<pre>In [1657]: rng_eastern = rng_utc.tz_convert('US\/Eastern')\n\nIn [1658]: rng_berlin = rng_utc.tz_convert('Europe\/Berlin')\n\nIn [1659]: rng_eastern[5]\nOut[1659]: &lt;Timestamp: 2012-03-10 19:00:00-0500 EST, tz=US\/Eastern&gt;\n\nIn [1660]: rng_berlin[5]\nOut[1660]: &lt;Timestamp: 2012-03-11 01:00:00+0100 CET, tz=Europe\/Berlin&gt;\n\nIn [1661]: rng_eastern[5] == rng_berlin[5]\nOut[1661]: True<\/pre>\n<\/div>\n<\/div>\n<p>Like Series, DataFrame, and DatetimeIndex, Timestamps can be converted to other time zones using\u00a0<tt>tz_convert<\/tt>:<\/p>\n<div>\n<div>\n<pre>In [1662]: rng_eastern[5]\nOut[1662]: &lt;Timestamp: 2012-03-10 19:00:00-0500 EST, tz=US\/Eastern&gt;\n\nIn [1663]: rng_berlin[5]\nOut[1663]: &lt;Timestamp: 2012-03-11 01:00:00+0100 CET, tz=Europe\/Berlin&gt;\n\nIn [1664]: rng_eastern[5].tz_convert('Europe\/Berlin')\nOut[1664]: &lt;Timestamp: 2012-03-11 01:00:00+0100 CET, tz=Europe\/Berlin&gt;<\/pre>\n<\/div>\n<\/div>\n<p>Localization of Timestamps functions just like DatetimeIndex and TimeSeries:<\/p>\n<div>\n<div>\n<pre>In [1665]: rng[5]\nOut[1665]: &lt;Timestamp: 2012-03-11 00:00:00&gt;\n\nIn [1666]: rng[5].tz_localize('Asia\/Shanghai')\nOut[1666]: &lt;Timestamp: 2012-03-11 00:00:00+0800 CST, tz=Asia\/Shanghai&gt;<\/pre>\n<\/div>\n<\/div>\n<p>Operations between TimeSeries in different time zones will yield UTC TimeSeries, aligning the data on the UTC timestamps:<\/p>\n<div>\n<div>\n<pre>In [1667]: eastern = ts_utc.tz_convert('US\/Eastern')\n\nIn [1668]: berlin = ts_utc.tz_convert('Europe\/Berlin')\n\nIn [1669]: result = eastern + berlin\n\nIn [1670]: result\nOut[1670]: \n2012-03-06 00:00:00+00:00   -0.229443\n2012-03-07 00:00:00+00:00    0.337809\n2012-03-08 00:00:00+00:00   -0.096096\n2012-03-09 00:00:00+00:00    1.602392\n2012-03-10 00:00:00+00:00    2.784142\n2012-03-11 00:00:00+00:00   
-0.097575\n2012-03-12 00:00:00+00:00   -1.617677\n2012-03-13 00:00:00+00:00   -2.007353\n2012-03-14 00:00:00+00:00   -0.321532\n2012-03-15 00:00:00+00:00    3.517706\n2012-03-16 00:00:00+00:00    1.458389\n2012-03-17 00:00:00+00:00    2.719465\n2012-03-18 00:00:00+00:00    4.012592\n2012-03-19 00:00:00+00:00    1.740419\n2012-03-20 00:00:00+00:00    0.086928\nFreq: D, dtype: float64\n\nIn [1671]: result.index\nOut[1671]: \n&lt;class 'pandas.tseries.index.DatetimeIndex'&gt;\n[2012-03-06 00:00:00, ..., 2012-03-20 00:00:00]\nLength: 15, Freq: D, Timezone: UTC<\/pre>\n<\/div>\n<\/div>\n<\/div>\n<div id=\"time-deltas\">\n<h2>Time Deltas<\/h2>\n<p>Timedeltas are differences in times, expressed in different units, e.g. days, hours, minutes, seconds. They can be both positive and negative.<\/p>\n<div>\n<div>\n<pre>In [1672]: from datetime import datetime, timedelta\n\nIn [1673]: s  = Series(date_range('2012-1-1', periods=3, freq='D'))\n\nIn [1674]: td = Series([ timedelta(days=i) for i in range(3) ])\n\nIn [1675]: df = DataFrame(dict(A = s, B = td))\n\nIn [1676]: df\nOut[1676]: \n                    A                B\n0 2012-01-01 00:00:00         00:00:00\n1 2012-01-02 00:00:00 1 days, 00:00:00\n2 2012-01-03 00:00:00 2 days, 00:00:00\n\nIn [1677]: df['C'] = df['A'] + df['B']\n\nIn [1678]: df\nOut[1678]: \n                    A                B                   C\n0 2012-01-01 00:00:00         00:00:00 2012-01-01 00:00:00\n1 2012-01-02 00:00:00 1 days, 00:00:00 2012-01-03 00:00:00\n2 2012-01-03 00:00:00 2 days, 00:00:00 2012-01-05 00:00:00\n\nIn [1679]: df.dtypes\nOut[1679]: \nA     datetime64[ns]\nB    timedelta64[ns]\nC     datetime64[ns]\ndtype: object\n\nIn [1680]: s - s.max()\nOut[1680]: \n0   -2 days, 00:00:00\n1   -1 days, 00:00:00\n2            00:00:00\ndtype: timedelta64[ns]\n\nIn [1681]: s - datetime(2011,1,1,3,5)\nOut[1681]: \n0   364 days, 20:55:00\n1   365 days, 20:55:00\n2   366 days, 20:55:00\ndtype: timedelta64[ns]\n\nIn [1682]: s + 
timedelta(minutes=5)\nOut[1682]: \n0   2012-01-01 00:05:00\n1   2012-01-02 00:05:00\n2   2012-01-03 00:05:00\ndtype: datetime64[ns]<\/pre>\n<\/div>\n<\/div>\n<p>Series of timedeltas with\u00a0<tt>NaT<\/tt>\u00a0values are supported.<\/p>\n<div>\n<div>\n<pre>In [1683]: y = s - s.shift()\n\nIn [1684]: y\nOut[1684]: \n0                NaT\n1   1 days, 00:00:00\n2   1 days, 00:00:00\ndtype: timedelta64[ns]<\/pre>\n<\/div>\n<\/div>\n<p>They can be set to\u00a0<tt>NaT<\/tt>\u00a0using\u00a0<tt>np.nan<\/tt>, analogously to datetimes.<\/p>\n<div>\n<div>\n<pre>In [1685]: y[1] = np.nan\n\nIn [1686]: y\nOut[1686]: \n0                NaT\n1                NaT\n2   1 days, 00:00:00\ndtype: timedelta64[ns]<\/pre>\n<\/div>\n<\/div>\n<p>Operands can also appear in a reversed order (a singular object operated with a Series).<\/p>\n<div>\n<div>\n<pre>In [1687]: s.max() - s\nOut[1687]: \n0   2 days, 00:00:00\n1   1 days, 00:00:00\n2           00:00:00\ndtype: timedelta64[ns]\n\nIn [1688]: datetime(2011,1,1,3,5) - s\nOut[1688]: \n0   -364 days, 20:55:00\n1   -365 days, 20:55:00\n2   -366 days, 20:55:00\ndtype: timedelta64[ns]\n\nIn [1689]: timedelta(minutes=5) + s\nOut[1689]: \n0   2012-01-01 00:05:00\n1   2012-01-02 00:05:00\n2   2012-01-03 00:05:00\ndtype: datetime64[ns]<\/pre>\n<\/div>\n<\/div>\n<p>Some timedelta numeric-like operations are supported.<\/p>\n<div>\n<div>\n<pre>In [1690]: td - timedelta(minutes=5,seconds=5,microseconds=5)\nOut[1690]: \n0          -00:05:05.000005\n1           23:54:54.999995\n2   1 days, 23:54:54.999995\ndtype: timedelta64[ns]<\/pre>\n<\/div>\n<\/div>\n<p><tt>min,\u00a0max<\/tt>\u00a0and the corresponding\u00a0<tt>idxmin,\u00a0idxmax<\/tt>\u00a0operations are supported on frames.<\/p>\n<div>\n<div>\n<pre>In [1691]: df = DataFrame(dict(A = s - 
Timestamp('20120101')-timedelta(minutes=5,seconds=5),\n   ......:                     B = s - Series(date_range('2012-1-2', periods=3, freq='D'))))\n   ......:\n\nIn [1692]: df\nOut[1692]: \n                 A                 B\n0        -00:05:05 -1 days, 00:00:00\n1         23:54:55 -1 days, 00:00:00\n2 1 days, 23:54:55 -1 days, 00:00:00\n\nIn [1693]: df.min()\nOut[1693]: \nA           -00:05:05\nB   -1 days, 00:00:00\ndtype: timedelta64[ns]\n\nIn [1694]: df.min(axis=1)\nOut[1694]: \n0   -1 days, 00:00:00\n1   -1 days, 00:00:00\n2   -1 days, 00:00:00\ndtype: timedelta64[ns]\n\nIn [1695]: df.idxmin()\nOut[1695]: \nA    0\nB    0\ndtype: int64\n\nIn [1696]: df.idxmax()\nOut[1696]: \nA    2\nB    0\ndtype: int64<\/pre>\n<\/div>\n<\/div>\n<p><tt>min,\u00a0max<\/tt>\u00a0operations are supported on Series; these return a single-element\u00a0<tt>timedelta64[ns]<\/tt>\u00a0Series (this avoids having to deal with numpy timedelta64 issues).\u00a0<tt>idxmin,\u00a0idxmax<\/tt>\u00a0are supported as well.<\/p>\n<div>\n<div>\n<pre>In [1697]: df.min().max()\nOut[1697]: \n0   -00:05:05\ndtype: timedelta64[ns]\n\nIn [1698]: df.min(axis=1).min()\nOut[1698]: \n0   -1 days, 00:00:00\ndtype: timedelta64[ns]\n\nIn [1699]: df.min().idxmax()\nOut[1699]: 'A'\n\nIn [1700]: df.min(axis=1).idxmin()\nOut[1700]: 0<\/pre>\n<\/div>\n<\/div>\n<\/div>\n","protected":false},"excerpt":{"rendered":"<p>pandas has proven very successful as a tool for working with time series data, especially in the financial data analysis space. 
With the 0.8 release,&hellip; <\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[19],"tags":[],"class_list":["post-143","post","type-post","status-publish","format-standard","hentry","category-python"],"_links":{"self":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/143","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/comments?post=143"}],"version-history":[{"count":0,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/posts\/143\/revisions"}],"wp:attachment":[{"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/media?parent=143"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/categories?post=143"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/zhuoyao.net\/index.php\/wp-json\/wp\/v2\/tags?post=143"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}