Description
Hi, I'm trying to implement a couple of torch learners on some custom data sets (tasks).
I'm reading https://mlr3torch.mlr-org.com/articles/pipeop_torch.html which shows how to implement a convolutional network for the pre-defined tiny_imagenet task.
That page is definitely helpful for what I'm trying to do, but I wonder if you could please add another example which shows how to do it with a custom (not pre-defined) data set/task?
I would also appreciate more explanation of how po("torch_ingress_num") works, when po("nn_reshape") is needed, and whether the user can/should define tasks with lazy tensors, like the image feature/input in that example.
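For context on the lazy-tensor part of the question, here is what I would try based on my (possibly wrong) reading of the docs. This is an untested sketch: the array shape, the task id, and the use of as_lazy_tensor() on a torch_tensor followed by po("torch_ingress_ltnsr") are my assumptions, not something I found spelled out in the article.

```r
library(mlr3torch)  # provides lazy_tensor support and as_lazy_tensor()
library(torch)
set.seed(1)
# hypothetical: 200 single-channel 10x10 "images" stored as one 4d tensor
arr <- torch_tensor(array(runif(200 * 1 * 10 * 10), dim = c(200, 1, 10, 10)))
dt <- data.table::data.table(
  y     = factor(rep(1:5, each = 40)),
  image = as_lazy_tensor(arr)  # one lazy_tensor entry per row/image
)
lazy.task <- mlr3::as_task_classif(dt, target = "y", id = "MyLazyTask")
# a lazy_tensor feature would then enter a graph via po("torch_ingress_ltnsr")
# instead of po("torch_ingress_num"), if I understand correctly
```

Is something like this the intended way to build a custom task with a lazy tensor column?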
I seem to have got something working with code similar to the listing below, but I am not sure whether this is an intended use of mlr3torch, because I do not see any such usage in the documentation.
```r
N.pixels   <- 10
N.classes  <- 5
N.features <- 100  # 100 numeric features, later reshaped to a 10x10 "image"
N.images   <- 200
set.seed(1)
my.X.mat <- matrix(runif(N.features * N.images), N.images, N.features)
my.df <- data.frame(y = factor(1:N.classes), my.X.mat)  # labels recycled over the 200 rows
my.task <- mlr3::TaskClassif$new("MyTask", my.df, target = "y")

library(mlr3pipelines)
library(mlr3torch)
graph <- po("select", selector = selector_type(c("numeric", "integer"))) %>>%
  po("torch_ingress_num") %>>%
  po("nn_reshape", shape = c(-1, 1, N.pixels, N.pixels)) %>>%
  po("nn_conv2d_1", out_channels = 20, kernel_size = 3) %>>%
  po("nn_relu_1", inplace = TRUE) %>>%
  po("nn_max_pool2d_1", kernel_size = 2) %>>%
  po("nn_flatten") %>>%
  po("nn_linear", out_features = 100) %>>%
  po("torch_loss", t_loss("cross_entropy")) %>>%
  po("torch_optimizer", t_opt("sgd", lr = 0.01)) %>>%
  po("torch_model_classif", batch_size = 32, epochs = 100L)
graph$train(my.task)
graph$predict(my.task)
```
In the code above I have a custom classification task with 100 numeric features, X1 to X100, which differs from the example in the article, where there is a single feature called image.
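A related question: instead of ending the network with an explicit po("nn_linear", out_features = 100) before the loss, should I be using po("nn_head"), which (if I understand the docs correctly) appends a final linear layer sized automatically to the task's number of classes? An untested sketch of that variant, with everything else as in my code above:

```r
library(mlr3pipelines)
library(mlr3torch)
graph2 <- po("select", selector = selector_type(c("numeric", "integer"))) %>>%
  po("torch_ingress_num") %>>%
  po("nn_reshape", shape = c(-1, 1, 10, 10)) %>>%
  po("nn_conv2d_1", out_channels = 20, kernel_size = 3) %>>%
  po("nn_relu_1", inplace = TRUE) %>>%
  po("nn_max_pool2d_1", kernel_size = 2) %>>%
  po("nn_flatten") %>>%
  po("nn_head") %>>%  # my assumption: infers out_features from the class count
  po("torch_loss", t_loss("cross_entropy")) %>>%
  po("torch_optimizer", t_opt("sgd", lr = 0.01)) %>>%
  po("torch_model_classif", batch_size = 32, epochs = 100L)
```

If po("nn_head") is the recommended ending for classification graphs, it would be great if the docs said so explicitly for custom tasks.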
Thanks for any help / clarification to the docs you can provide.