---
title: "Preprocess your data with recipes"
description: |
Prepare data for modeling with modular preprocessing steps.
---
{{< include _tool-chooser.md >}}
## Introduction
In this article, we'll explore [`Recipe`](/reference/core.html#ibis_ml.Recipe)s, which are designed to help you preprocess your data before training your model. Recipes are built as a series of preprocessing steps, such as:
- converting qualitative predictors to indicator variables (also known as dummy variables),
- transforming data to be on a different scale (e.g., taking the logarithm of a variable),
- transforming whole groups of predictors together,
- extracting key features from raw variables (e.g., getting the day of the week out of a date variable),
and so on. If you are familiar with [scikit-learn's dataset transformations](https://scikit-learn.org/stable/data_transforms.html), much of this will sound familiar: recipes do many of the same things a transformer does, but they can scale your workloads to any [Ibis](https://ibis-project.org/)-supported backend. This article shows how to use recipes for modeling.
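As a toy illustration of the first kind of step, indicator (dummy) variables can be derived from a qualitative column in plain Python. The `indicator_columns` helper below is hypothetical and exists only for intuition; in practice an IbisML step performs the equivalent work on the database backend:

```python
def indicator_columns(values):
    """Hypothetical helper: one-hot encode a qualitative column."""
    levels = sorted(set(values))
    return {level: [1 if v == level else 0 for v in values] for level in levels}

carriers = ["AA", "UA", "AA", "DL"]
encoded = indicator_columns(carriers)
print(encoded["AA"])  # [1, 0, 1, 0]
```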
To use the code in this article, you will need to install the following packages: Ibis, IbisML, and skorch, a high-level library for PyTorch that provides full scikit-learn compatibility.
```bash
pip install 'ibis-framework[duckdb,examples]' ibis-ml skorch torch
```
## The New York City flight data
Let's use the [nycflights13 data](https://github.com/hadley/nycflights13) to predict whether a plane arrives more than 30 minutes late. This dataset contains information on 325,819 flights departing near New York City in 2013. Let's start by loading the data and making a few changes to the variables:
```{python}
#| output: false
import ibis
con = ibis.connect("duckdb://nycflights13.ddb")
con.create_table(
    "flights", ibis.examples.nycflights13_flights.fetch().to_pyarrow(), overwrite=True
)
con.create_table(
    "weather", ibis.examples.nycflights13_weather.fetch().to_pyarrow(), overwrite=True
)
```
You can now see the example tables copied over to the database:
```{python}
con = ibis.connect("duckdb://nycflights13.ddb")
con.list_tables()
```
We'll turn on interactive mode, which partially executes queries to give users a preview of the results.
```{python}
ibis.options.interactive = True
```
```{python}
flights = con.table("flights")
flights = flights.mutate(
    dep_time=(
        flights.dep_time.lpad(4, "0").substr(0, 2)
        + ":"
        + flights.dep_time.substr(-2, 2)
        + ":00"
    ).try_cast("time"),
    arr_delay=flights.arr_delay.try_cast(int),
    air_time=flights.air_time.try_cast(int),
)
flights
```
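For intuition, the `lpad`/`substr` logic above turns the integer-like departure times in the raw data (e.g. `517` for 5:17 a.m.) into proper `HH:MM:SS` strings before casting to a time. The same transformation in plain Python (an illustrative sketch, not part of the pipeline):

```python
def dep_time_to_clock(dep_time: str) -> str:
    """Convert a 3- or 4-digit departure time such as '517' to 'HH:MM:SS'."""
    padded = dep_time.rjust(4, "0")  # mirrors lpad(4, "0")
    return f"{padded[:2]}:{padded[-2:]}:00"

print(dep_time_to_clock("517"))   # 05:17:00
print(dep_time_to_clock("1342"))  # 13:42:00
```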
```{python}
weather = con.table("weather")
weather
```
```{python}
flight_data = (
    flights.mutate(
        # Convert the arrival delay to a binary outcome (late by 30+ minutes or not)
        # By default, PyTorch expects the target to have a Long (int64) dtype
        arr_delay=ibis.ifelse(flights.arr_delay >= 30, 1, 0).cast("int64"),
        # We will use the date (not date-time) in the recipe below
        date=flights.time_hour.date(),
    )
    # Include the weather data
    .inner_join(weather, ["origin", "time_hour"])
    # Only retain the specific columns we will use
    .select(
        "dep_time",
        "flight",
        "origin",
        "dest",
        "air_time",
        "distance",
        "carrier",
        "date",
        "arr_delay",
        "time_hour",
    )
    # Exclude missing data
    .drop_null()
)
flight_data
```
We can see that about 16% of the flights in this dataset arrived more than 30 minutes late.
```{python}
flight_data.arr_delay.value_counts().rename(n="arr_delay_count").mutate(
    prop=ibis._.n / ibis._.n.sum()
)
```
## Data splitting
To get started, let's split this single dataset into two: a _training_ set and a _testing_ set. We'll keep most of the rows in the original dataset (subset chosen randomly) in the _training_ set. The training data will be used to _fit_ the model, and the _testing_ set will be used to measure model performance.
Because the order of rows in an Ibis table is undefined, we need a unique key to split the data reproducibly. [It is permissible for airlines to use the same flight number for different routes, as long as the flights do not operate on the same day. This means that the combination of the flight number and the date of travel is always unique.](https://www.euclaim.com/blog/flight-numbers-explained#:~:text=Can%20flight%20numbers%20be%20reused,of%20travel%20is%20always%20unique.)
```{python}
import ibis_ml as ml

# Create Ibis tables for the two sets:
train_data, test_data = ml.train_test_split(
    flight_data,
    unique_key=["carrier", "flight", "date"],
    # Put 3/4 of the data into the training set
    test_size=0.25,
    num_buckets=4,
    # Fix the random numbers by setting the seed
    # This enables the analysis to be reproducible when random numbers are used
    random_seed=222,
)
```
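The idea behind a key-based split can be sketched in plain Python: hash the composite key together with a seed, assign each row to one of `num_buckets` buckets, and reserve a fraction of the buckets for testing. This is only an illustration of the concept under those assumptions; `ml.train_test_split` is the real implementation:

```python
import hashlib

def bucket(key: tuple, num_buckets: int = 4, seed: int = 222) -> int:
    """Deterministically assign a composite key to one of num_buckets buckets."""
    digest = hashlib.sha256(f"{seed}|{key}".encode()).hexdigest()
    return int(digest, 16) % num_buckets

# With num_buckets=4 and test_size=0.25, one bucket forms the test set
# and the remaining three form the training set.
rows = [("UA", 1545, "2013-01-01"), ("AA", 1141, "2013-01-01")]
train = [r for r in rows if bucket(r) != 0]
test = [r for r in rows if bucket(r) == 0]
```

Because the assignment depends only on the key and the seed, the same row always lands in the same set, regardless of row order.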
## Create features

We'll build a recipe that derives day-of-week and month features from the date, target-encodes the nominal predictors, drops zero-variance columns, and converts times and timestamps to numeric values:
```{python}
flights_rec = ml.Recipe(
    ml.ExpandDate("date", components=["dow", "month"]),
    ml.Drop("date"),
    ml.TargetEncode(ml.nominal()),
    ml.DropZeroVariance(ml.everything()),
    ml.MutateAt("dep_time", ibis._.hour() * 60 + ibis._.minute()),
    ml.MutateAt(ml.timestamp(), ibis._.epoch_seconds()),
    # By default, PyTorch requires that the type of `X` is `np.float32`.
    # https://discuss.pytorch.org/t/mat1-and-mat2-must-have-the-same-dtype-but-got-double-and-float/197555/2
    ml.Cast(ml.numeric(), "float32"),
)
```
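The `dep_time` mutation re-expresses a clock time as minutes since midnight, so the model receives a single numeric feature instead of a time value. In plain Python, the equivalent computation looks like this:

```python
import datetime

def minutes_since_midnight(t: datetime.time) -> int:
    # Mirrors ibis._.hour() * 60 + ibis._.minute()
    return t.hour * 60 + t.minute

print(minutes_since_midnight(datetime.time(5, 17)))  # 317
```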
## Fit a model with a recipe
Let's model the flight data. We can use any scikit-learn-compatible estimator.
We will want to use our recipe across several steps as we train and test our model. We will:
1. **Process the recipe using the training set**: This involves any estimation or calculations based on the training set. For our recipe, the training set is used to compute the target-encoded values for the nominal predictors and to determine which predictors have zero variance in the training set and should be removed.
1. **Apply the recipe to the training set**: We create the final predictor set on the training set.
1. **Apply the recipe to the test set**: We create the final predictor set on the test set. Nothing is recomputed and no information from the test set is used here; the target-encoding and zero-variance results from the training set are applied to the test set.
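The fit-on-train, apply-to-both pattern described above can be sketched with a minimal one-dimensional scaler. This is an illustration of the principle only, with a made-up `StandardScaler1D` class; IbisML recipes and scikit-learn estimators implement it for real:

```python
class StandardScaler1D:
    """Minimal fit/transform example: statistics come only from training data."""

    def fit(self, xs):
        self.mean = sum(xs) / len(xs)
        var = sum((x - self.mean) ** 2 for x in xs) / len(xs)
        self.std = var ** 0.5 or 1.0  # guard against zero variance
        return self

    def transform(self, xs):
        # Uses the training-set mean/std; nothing is recomputed here,
        # so no information leaks from the data being transformed.
        return [(x - self.mean) / self.std for x in xs]

scaler = StandardScaler1D().fit([0.0, 2.0, 4.0])  # mean=2.0
print(scaler.transform([2.0]))  # [0.0]
```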
To simplify this process, we can use a [scikit-learn `Pipeline`](https://scikit-learn.org/stable/modules/generated/sklearn.pipeline.Pipeline.html).
```{python}
from sklearn.pipeline import Pipeline
from skorch import NeuralNetClassifier
from torch import nn
class MyModule(nn.Module):
    def __init__(self, num_units=10, nonlin=nn.ReLU()):
        super().__init__()
        self.dense0 = nn.Linear(10, num_units)
        self.nonlin = nonlin
        self.dropout = nn.Dropout(0.5)
        self.dense1 = nn.Linear(num_units, num_units)
        self.output = nn.Linear(num_units, 2)
        self.softmax = nn.Softmax(dim=-1)

    def forward(self, X, **kwargs):
        X = self.nonlin(self.dense0(X))
        X = self.dropout(X)
        X = self.nonlin(self.dense1(X))
        X = self.softmax(self.output(X))
        return X


net = NeuralNetClassifier(
    MyModule,
    max_epochs=10,
    lr=0.1,
    # Shuffle training data on each epoch
    iterator_train__shuffle=True,
)
pipe = Pipeline([("flights_rec", flights_rec), ("net", net)])
```
Now, there is a single function that can be used to prepare the recipe and train the model from the resulting predictors:
```{python}
X_train = train_data.drop("arr_delay")
y_train = train_data.arr_delay
pipe.fit(X_train, y_train)
```
## Use a trained workflow to predict
With the fitted pipeline, we can evaluate how well it predicts late arrivals on the held-out test set:
```{python}
X_test = test_data.drop("arr_delay")
y_test = test_data.arr_delay
pipe.score(X_test, y_test)
```
{{< include _acknowledgments.md >}}