---
title: "R/ipriorBVS: Bayesian Variable Selection for Linear Models using I-priors"
output: github_document
---
<!-- README.md is generated from README.Rmd. Please edit that file -->
```{r setup, include = FALSE}
knitr::opts_chunk$set(echo = TRUE, collapse = TRUE)
library(ipriorBVS)
library(rjags)
load.module("lecuyer")
runjags::runjags.options(
silent.runjags = TRUE,
silent.jags = FALSE,
summary.warning = FALSE,
rng.warning = FALSE
)
```
Bayesian variable selection for linear models using I-priors in R.
This work is part of the PhD project entitled *Regression Modelling with Priors using Fisher Information Covariance Kernels (I-priors)*.
Visit [http://phd.haziqj.ml](http://phd.haziqj.ml) for details.
## Benchmark data (Tibshirani, 1996)
A toy data set designed by [Tibshirani (1996)](https://statweb.stanford.edu/~tibs/lasso/lasso.pdf), often used to compare variable selection methods.
`n = 50` data points are generated from a linear model with parameters `beta = c(3, 1.5, 0, 0, 2, 0, 0, 0)` and `sigma = 3`.
The `X` are generated from a normal distribution with mean zero, and the correlation between the `i`th and `j`th variable is `0.5 ^ abs(i - j)`.
This is implemented in the `gen_benchmark()` function included in the package.
```{r, cache = TRUE}
(dat <- gen_benchmark(n = 50, sd = 3, seed = 123))
```
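For readers who want to see the data-generating process spelled out, here is a minimal sketch of the benchmark setup described above. It uses `MASS::mvrnorm()` to draw correlated covariates; the packaged `gen_benchmark()` function is the canonical implementation, and the code below is an illustration of the same recipe, not a copy of it.

```r
# Sketch of the Tibshirani (1996) benchmark data-generating process.
library(MASS)  # for mvrnorm()

set.seed(123)
n     <- 50
beta  <- c(3, 1.5, 0, 0, 2, 0, 0, 0)
p     <- length(beta)
sigma <- 3

# AR(1)-style correlation: cor(X_i, X_j) = 0.5 ^ |i - j|
Sigma <- 0.5 ^ abs(outer(seq_len(p), seq_len(p), "-"))

X <- mvrnorm(n, mu = rep(0, p), Sigma = Sigma)
y <- as.numeric(X %*% beta + rnorm(n, sd = sigma))
```

Only three of the eight coefficients are non-zero, so a good variable selection method should recover the sub-model containing `X1`, `X2`, and `X5`.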
### Model fit
The model can be fitted using either formula or non-formula syntax.
We are then able to obtain posterior inclusion probabilities (PIPs) for each variable, as well as posterior model probabilities (PMPs).
For comparison, Bayes factors and deviances are reported as well.
```{r, cache = TRUE}
runjags::runjags.options(silent.jags = TRUE, silent.runjags = TRUE)
(mod <- ipriorBVS(y ~ ., dat))
```
### Coefficients
The model coefficients are averaged across all probable sub-models, yielding "model-averaged" coefficients.
```{r}
coef(mod)
```
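The averaging above follows the standard Bayesian model averaging identity, weighting each sub-model's estimate by its posterior model probability:

$$
\hat\beta_j = \sum_{m} p(M_m \mid \mathbf{y})\, \hat\beta_j^{(m)},
$$

where $\hat\beta_j^{(m)}$ is the posterior mean of the $j$th coefficient under sub-model $M_m$, taken to be zero whenever variable $j$ is excluded from $M_m$.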
***
Copyright (C) 2017 [Haziq Jamil](http://haziqj.ml).