breaking: remove deprecated behavior #1220

Merged · 9 commits · Dec 19, 2024

Changes from 5 commits
4 changes: 3 additions & 1 deletion nbs/common.base_model.ipynb
@@ -472,7 +472,9 @@
"\n",
" @classmethod\n",
" def load(cls, path, **kwargs):\n",
" with fsspec.open(path, 'rb') as f:\n",
" with fsspec.open(path, 'rb') as f, warnings.catch_warnings():\n",
" # ignore possible warnings about weights_only=False\n",
" warnings.filterwarnings('ignore', category=FutureWarning)\n",
" content = torch.load(f, **kwargs)\n",
" with _disable_torch_init():\n",
" model = cls(**content['hyper_parameters']) \n",
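For context, the change above wraps `torch.load` in a `warnings.catch_warnings()` block so the `FutureWarning` about `weights_only=False` doesn't leak to users. A minimal standalone sketch of the same pattern (the function name and standalone framing are illustrative, not the library's actual API surface):

```python
import warnings

import fsspec
import torch

def load_checkpoint(path, **kwargs):
    # fsspec handles local paths as well as remote ones (s3://, gs://, ...)
    with fsspec.open(path, "rb") as f, warnings.catch_warnings():
        # Recent torch versions emit a FutureWarning when weights_only=False
        # is in effect; the checkpoint is trusted here, so silence it for
        # this call only (the filter expires with the context manager).
        warnings.filterwarnings("ignore", category=FutureWarning)
        return torch.load(f, **kwargs)
```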
156 changes: 27 additions & 129 deletions nbs/core.ipynb

273 changes: 173 additions & 100 deletions nbs/docs/capabilities/03_exogenous_variables.ipynb

708 changes: 384 additions & 324 deletions nbs/docs/capabilities/04_hyperparameter_tuning.ipynb

132 changes: 91 additions & 41 deletions nbs/docs/capabilities/05_predictInsample.ipynb

412 changes: 207 additions & 205 deletions nbs/docs/capabilities/06_save_load_models.ipynb

343 changes: 212 additions & 131 deletions nbs/docs/capabilities/07_time_series_scaling.ipynb

219 changes: 45 additions & 174 deletions nbs/docs/capabilities/08_cross_validation.ipynb

75 changes: 49 additions & 26 deletions nbs/docs/getting-started/01_introduction.ipynb

119 changes: 79 additions & 40 deletions nbs/docs/getting-started/02_quickstart.ipynb
184 changes: 79 additions & 105 deletions nbs/docs/getting-started/05_datarequirements.ipynb
@@ -80,13 +80,15 @@
"name": "stderr",
"output_type": "stream",
"text": [
"100%|██████████| 1.76M/1.76M [00:00<00:00, 5.55MiB/s]\n",
"INFO:datasetsforecast.utils:Successfully downloaded M3C.xls, 1757696, bytes.\n"
"100%|█████████████████████████████████████████████████████████████████████████████████████████████| 1.76M/1.76M [00:00<00:00, 25.2MiB/s]\n",
jmoralez marked this conversation as resolved.
Show resolved Hide resolved
"INFO:datasetsforecast.utils:Successfully downloaded M3C.xls, 1757696, bytes.\n",
"/home/ubuntu/repos/neuralforecast/.venv/lib/python3.10/site-packages/datasetsforecast/m3.py:108: FutureWarning: 'Y' is deprecated and will be removed in a future version, please use 'YE' instead.\n",
" freq = pd.tseries.frequencies.to_offset(class_group.freq)\n"
]
}
],
"source": [
"Y_df, *_ = M3.load('./data', group='Yearly')\n"
"Y_df, *_ = M3.load('./data', group='Yearly')"
]
},
{
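The extra `FutureWarning` in the refreshed output comes from pandas' frequency-alias deprecation ('Y' → 'YE') inside `datasetsforecast`, not from neuralforecast itself. A quick illustration of the alias change (exact behavior depends on the installed pandas version; this reflects pandas ≥ 2.2):

```python
import pandas as pd

# The uppercase alias still works on pandas 2.2 but raises a FutureWarning:
#   "'Y' is deprecated ... please use 'YE' instead."
offset_old = pd.tseries.frequencies.to_offset("Y")   # warns
offset_new = pd.tseries.frequencies.to_offset("YE")  # preferred spelling

print(offset_old == offset_new)  # True: both resolve to a year-end offset
```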
@@ -218,6 +220,27 @@
"Y_df.groupby('unique_id').head(2)"
]
},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"`Y_df` is a dataframe with three columns: `unique_id` with a unique identifier for each time series, a column `ds` with the datestamp and a column `y` with the values of the series."
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"### Single time series"
+]
+},
+{
+"cell_type": "markdown",
+"metadata": {},
+"source": [
+"If you have only one time series, you have to include the `unique_id` column. Consider, for example, the [AirPassengers](https://github.com/Nixtla/transfer-learning-time-series/blob/main/datasets/air_passengers.csv) dataset."
+]
+},
{
"cell_type": "code",
"execution_count": null,
@@ -244,98 +267,86 @@
" <thead>\n",
" <tr style=\"text-align: right;\">\n",
" <th></th>\n",
" <th>unique_id</th>\n",
" <th>ds</th>\n",
" <th>y</th>\n",
" <th>timestamp</th>\n",
" <th>value</th>\n",
" </tr>\n",
" </thead>\n",
" <tbody>\n",
" <tr>\n",
" <th>18</th>\n",
" <td>Y1</td>\n",
" <td>1993-12-31</td>\n",
" <td>8407.84</td>\n",
" <th>0</th>\n",
" <td>1949-01-01</td>\n",
" <td>112</td>\n",
" </tr>\n",
" <tr>\n",
" <th>19</th>\n",
" <td>Y1</td>\n",
" <td>1994-12-31</td>\n",
" <td>9156.01</td>\n",
" <th>1</th>\n",
" <td>1949-02-01</td>\n",
" <td>118</td>\n",
" </tr>\n",
" <tr>\n",
" <th>38</th>\n",
" <td>Y10</td>\n",
" <td>1993-12-31</td>\n",
" <td>3187.00</td>\n",
" <th>2</th>\n",
" <td>1949-03-01</td>\n",
" <td>132</td>\n",
" </tr>\n",
" <tr>\n",
" <th>39</th>\n",
" <td>Y10</td>\n",
" <td>1994-12-31</td>\n",
" <td>3058.00</td>\n",
" <th>3</th>\n",
" <td>1949-04-01</td>\n",
" <td>129</td>\n",
" </tr>\n",
" <tr>\n",
" <th>58</th>\n",
" <td>Y100</td>\n",
" <td>1993-12-31</td>\n",
" <td>3539.00</td>\n",
" <th>4</th>\n",
" <td>1949-05-01</td>\n",
" <td>121</td>\n",
" </tr>\n",
" <tr>\n",
" <th>...</th>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" <td>...</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18278</th>\n",
" <td>Y97</td>\n",
" <td>1994-12-31</td>\n",
" <td>4507.00</td>\n",
" <th>139</th>\n",
" <td>1960-08-01</td>\n",
" <td>606</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18297</th>\n",
" <td>Y98</td>\n",
" <td>1993-12-31</td>\n",
" <td>1801.00</td>\n",
" <th>140</th>\n",
" <td>1960-09-01</td>\n",
" <td>508</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18298</th>\n",
" <td>Y98</td>\n",
" <td>1994-12-31</td>\n",
" <td>1710.00</td>\n",
" <th>141</th>\n",
" <td>1960-10-01</td>\n",
" <td>461</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18317</th>\n",
" <td>Y99</td>\n",
" <td>1993-12-31</td>\n",
" <td>2379.30</td>\n",
" <th>142</th>\n",
" <td>1960-11-01</td>\n",
" <td>390</td>\n",
" </tr>\n",
" <tr>\n",
" <th>18318</th>\n",
" <td>Y99</td>\n",
" <td>1994-12-31</td>\n",
" <td>2723.00</td>\n",
" <th>143</th>\n",
" <td>1960-12-01</td>\n",
" <td>432</td>\n",
" </tr>\n",
" </tbody>\n",
"</table>\n",
"<p>1290 rows × 3 columns</p>\n",
"<p>144 rows × 2 columns</p>\n",
"</div>"
],
"text/plain": [
" unique_id ds y\n",
"18 Y1 1993-12-31 8407.84\n",
"19 Y1 1994-12-31 9156.01\n",
"38 Y10 1993-12-31 3187.00\n",
"39 Y10 1994-12-31 3058.00\n",
"58 Y100 1993-12-31 3539.00\n",
"... ... ... ...\n",
"18278 Y97 1994-12-31 4507.00\n",
"18297 Y98 1993-12-31 1801.00\n",
"18298 Y98 1994-12-31 1710.00\n",
"18317 Y99 1993-12-31 2379.30\n",
"18318 Y99 1994-12-31 2723.00\n",
" timestamp value\n",
"0 1949-01-01 112\n",
"1 1949-02-01 118\n",
"2 1949-03-01 132\n",
"3 1949-04-01 129\n",
"4 1949-05-01 121\n",
".. ... ...\n",
"139 1960-08-01 606\n",
"140 1960-09-01 508\n",
"141 1960-10-01 461\n",
"142 1960-11-01 390\n",
"143 1960-12-01 432\n",
"\n",
"[1290 rows x 3 columns]"
"[144 rows x 2 columns]"
]
},
"execution_count": null,
@@ -344,37 +355,8 @@
}
],
"source": [
"Y_df.groupby('unique_id').tail(2)"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"`Y_df` is a dataframe with three columns: `unique_id` with a unique identifier for each time series, a column `ds` with the datestamp and a column `y` with the values of the series."
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"### Single time series"
]
},
{
"cell_type": "markdown",
"metadata": {},
"source": [
"If you have only one time series, you have to include the `unique_id` column. Consider, for example, the [AirPassengers](https://github.com/Nixtla/transfer-learning-time-series/blob/main/datasets/air_passengers.csv) dataset."
]
},
{
"cell_type": "code",
"execution_count": null,
"metadata": {},
"outputs": [],
"source": [
"Y_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')"
"Y_df = pd.read_csv('https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/main/datasets/air_passengers.csv')\n",
"Y_df"
]
},
{
@@ -384,17 +366,6 @@
"In this example `Y_df` only contains two columns: `timestamp`, and `value`. To use `NeuralForecast` we have to include the `unique_id` column and rename the previuos ones."
]
},
-{
-"cell_type": "code",
-"execution_count": null,
-"metadata": {},
-"outputs": [],
-"source": [
-"Y_df['unique_id'] = 1. # We can add an integer as identifier\n",
-"Y_df = Y_df.rename(columns={'timestamp': 'ds', 'value': 'y'})\n",
-"Y_df = Y_df[['unique_id', 'ds', 'y']]"
-]
-},
{
"cell_type": "code",
"execution_count": null,
@@ -521,6 +492,9 @@
}
],
"source": [
"Y_df['unique_id'] = 1. # We can add an integer as identifier\n",
"Y_df = Y_df.rename(columns={'timestamp': 'ds', 'value': 'y'})\n",
"Y_df = Y_df[['unique_id', 'ds', 'y']]\n",
"Y_df"
]
},
@@ -538,9 +512,9 @@
],
"metadata": {
"kernelspec": {
"display_name": "neuralforecast",
"display_name": "python3",
"language": "python",
"name": "neuralforecast"
"name": "python3"
}
},
"nbformat": 4,
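Taken together, the reshuffled cells above still implement the same recipe for putting a single series into the long format NeuralForecast expects. A condensed sketch (column names are from the notebook; the constant identifier value is arbitrary):

```python
import pandas as pd

url = (
    "https://raw.githubusercontent.com/Nixtla/transfer-learning-time-series/"
    "main/datasets/air_passengers.csv"
)
Y_df = pd.read_csv(url)  # columns: timestamp, value

# NeuralForecast expects unique_id, ds and y; for a single series any
# constant works as the identifier.
Y_df["unique_id"] = 1.0
Y_df = Y_df.rename(columns={"timestamp": "ds", "value": "y"})
Y_df = Y_df[["unique_id", "ds", "y"]]
```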
873 changes: 513 additions & 360 deletions nbs/docs/tutorials/01_getting_started_complete.ipynb

87,776 changes: 623 additions & 87,153 deletions nbs/docs/tutorials/15_comparing_methods.ipynb
10 changes: 5 additions & 5 deletions nbs/docs/use-cases/electricity_peak_forecasting.ipynb
@@ -231,7 +231,7 @@
"# Instantiate StatsForecast class as sf\n",
"nf = NeuralForecast(\n",
" models=models,\n",
" freq='H', \n",
" freq='h', \n",
")"
]
},
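The `freq='H'` → `freq='h'` switch tracks pandas' deprecation of the uppercase hourly alias. A sketch of the updated instantiation (the `AutoNHITS` model list here is a placeholder; the notebook builds its own `models` list earlier):

```python
from neuralforecast import NeuralForecast
from neuralforecast.auto import AutoNHITS

models = [AutoNHITS(h=24)]  # placeholder; the notebook defines its own models
nf = NeuralForecast(
    models=models,
    freq="h",  # lowercase 'h': pandas deprecated the uppercase hourly alias
)
```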
@@ -411,7 +411,7 @@
"metadata": {},
"outputs": [],
"source": [
"crossvalidation_df = crossvalidation_df.reset_index()[['ds','y','AutoNHITS']]\n",
"crossvalidation_df = crossvalidation_df[['ds','y','AutoNHITS']]\n",
"max_day = crossvalidation_df.iloc[crossvalidation_df['y'].argmax()].ds.day # Day with maximum load\n",
"cv_df_day = crossvalidation_df.query('ds.dt.day == @max_day')\n",
"max_hour = cv_df_day['y'].argmax()\n",
@@ -502,11 +502,11 @@
],
"metadata": {
"kernelspec": {
"display_name": "neuralforecast",
"display_name": "python3",
"language": "python",
"name": "neuralforecast"
"name": "python3"
}
},
"nbformat": 4,
"nbformat_minor": 2
"nbformat_minor": 4
}