---
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r, include = FALSE}
knitr::opts_chunk$set(
  collapse = TRUE,
  comment = "#>",
  fig.path = "man/figures/README-",
  out.width = "100%"
)
```
# tsforest
<!-- badges: start -->
<!-- badges: end -->
The goal of tsforest is to provide an R implementation of the Time Series Forest classification algorithm described by Deng et al. (2013) and documented on timeseriesclassification.com. Another R package implements many of the algorithms from that site, but its backend is written in Java, which can cause installation and runtime problems.
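For context, Time Series Forest summarises random intervals of each series with three statistics (mean, standard deviation, and least-squares slope) and trains a tree ensemble on those features. A minimal sketch of the idea, using a hypothetical helper that is *not* part of this package's API:

``` r
# Illustrative only -- `interval_features()` is a made-up name, not a
# tsforest function. It computes the three TSF statistics for one
# interval [t1, t2] of a single series x.
interval_features <- function(x, t1, t2) {
  seg <- x[t1:t2]
  idx <- seq_along(seg)
  c(mean  = mean(seg),
    sd    = sd(seg),
    slope = cov(idx, seg) / var(idx))  # least-squares slope of seg on time
}

interval_features(sin((1:100) / 10), t1 = 10, t2 = 30)
```

Repeating this over many random intervals yields the "bag of features" that the forest (or, as shown later, any other model) is trained on.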
## Installation
You can install the development version from [GitHub](https://github.com/) with:
``` r
# install.packages("devtools")
devtools::install_github("mattsq/tsforest")
```
This is still very much a work in progress! Eventually I'd like to S3-ize the model objects, build out tests and more example data, and generally make it a more fully featured modelling package.
## Usage
The package is pretty easy to use! Here's a simple example:
```{r fit_model_1}
library(tsforest)
data("FreezerRegularTrain_TRAIN")
data("FreezerRegularTrain_TEST")
model <- tsforest(FreezerRegularTrain_TRAIN, target = "target")
print(model)
```
Predictions use the standard S3 predict method, and return a vector of predictions:
```{r predict_model_1}
preds <- predict(model, FreezerRegularTrain_TEST)
table(preds$predictions, FreezerRegularTrain_TEST$target)
```
There's also a more experimental (and not at all theoretically grounded!) function that takes advantage of the fact that the features are (partially) defined over intervals of the series to plot variable importance across the time axis. You can use any summary function, although `sum` seems to work best:
```{r intervalwise_model_1}
model <- tsforest(FreezerRegularTrain_TRAIN,
                  importance = 'permutation',
                  verbose = FALSE)
intervalwise_variable_importance(model, summary_function = sum)
```
You can also overlay an individual example series on the plot; the series is rescaled to match the importance values:
```{r intervalwise_with_example}
intervalwise_variable_importance(model,
                                 summary_function = sum,
                                 optional_example_rownumber = 1)
```
We use random forests here for convenience, but the bag of features used by Time Series Forest can feed other models: `tsforest::new_tsforest()` and `tsforest::featurize_df()` let you build the feature set and fit a model of your choice on it. This interface isn't perfect, so it may change. Here, in a slightly goofy example, we use logistic regression instead of a random forest:
```{r}
data("FreezerRegularTrain_TRAIN")
data("FreezerRegularTrain_TEST")
trained_tsobj <- new_tsforest(FreezerRegularTrain_TRAIN, target = "target", min_length = 2)
featurized_train <- featurize_df(FreezerRegularTrain_TRAIN, trained_tsobj, verbose = FALSE)
glm_model <- glm(target ~ ., data = featurized_train, family = "binomial")
summary(glm_model)
```
```{r}
featurized_test <- featurize_df(FreezerRegularTrain_TEST, trained_tsobj, verbose = FALSE)
preds <- predict(glm_model, featurized_test, type = "response")
preds <- as.numeric(preds > 0.5) + 1  # map predicted probabilities to class labels (1/2)
table(preds, featurized_test$target)
```