
Linear Regression

Dr. D’Agostino McGowan

1 / 68

Lab follow-up

  • Knit, commit, and push after every exercise
  • When you are working on labs, homeworks, or application exercises, edit the file I have started for you (01-hello-r.Rmd)
  • Any questions?
2 / 68

Linear Models

3 / 68

Linear Regression Questions

  • Is there a relationship between a response variable and predictors?
  • How strong is the relationship?
  • What is the uncertainty?
  • How accurately can we predict a future outcome?
4 / 68

Simple linear regression

$Y = \beta_0 + \beta_1 X + \epsilon$

  • $\beta_0$: intercept
  • $\beta_1$: slope
    • $\beta_0$ and $\beta_1$ are coefficients, or parameters
  • $\epsilon$: error

5 / 68

Simple linear regression

We estimate this with

$\hat{y} = \hat\beta_0 + \hat\beta_1 x$

  • $\hat{y}$ is the prediction of $Y$ when $X = x$
  • The hat denotes that this is an estimated value

6 / 68

Simple linear regression

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

$\epsilon_i \sim N(0, \sigma^2)$

7 / 68
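
A minimal simulation sketch of this model (the coefficient values here are made up for illustration): generate data with normal errors and check that lm() recovers estimates near the truth.

set.seed(363)
n <- 100
beta_0 <- 1; beta_1 <- 0.5; sigma <- 2          # hypothetical true values
x <- runif(n, 0, 10)
y <- beta_0 + beta_1 * x + rnorm(n, 0, sigma)   # Y = beta_0 + beta_1 X + epsilon
lm(y ~ x)                                       # estimates should land near 1 and 0.5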

Simple linear regression

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

$\epsilon_i \sim N(0, \sigma^2)$

$$\begin{aligned} Y_1 &= \beta_0 + \beta_1 X_1 + \epsilon_1 \\ Y_2 &= \beta_0 + \beta_1 X_2 + \epsilon_2 \\ &\ \vdots \\ Y_n &= \beta_0 + \beta_1 X_n + \epsilon_n \end{aligned}$$

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} \beta_0 + \beta_1 X_1 \\ \beta_0 + \beta_1 X_2 \\ \vdots \\ \beta_0 + \beta_1 X_n \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

8 / 68

Simple linear regression

$Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$

$\epsilon_i \sim N(0, \sigma^2)$

$$\begin{aligned} Y_1 &= \beta_0 + \beta_1 X_1 + \epsilon_1 \\ Y_2 &= \beta_0 + \beta_1 X_2 + \epsilon_2 \\ &\ \vdots \\ Y_n &= \beta_0 + \beta_1 X_n + \epsilon_n \end{aligned}$$

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

9 / 68

Simple linear regression

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

10 / 68

Simple linear regression

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix}}_{\mathbf{X}:\ \textrm{Design Matrix}} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

What are the dimensions of $\mathbf{X}$?

  • $n \times 2$

11 / 68

Simple linear regression

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \underbrace{\begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix}}_{\mathbf{X}:\ \textrm{Design Matrix}} \underbrace{\begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix}}_{\beta:\ \textrm{Vector of parameters}} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

What are the dimensions of $\beta$?

  • $2 \times 1$

12 / 68

Simple linear regression

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \underbrace{\begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}}_{\epsilon:\ \textrm{vector of error terms}}$$

What are the dimensions of $\epsilon$?

  • $n \times 1$

13 / 68

Simple linear regression

$$\underbrace{\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix}}_{\mathbf{Y}:\ \textrm{Vector of responses}} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

What are the dimensions of $\mathbf{Y}$?

  • $n \times 1$

14 / 68

Simple linear regression

$$\begin{bmatrix} Y_1 \\ Y_2 \\ \vdots \\ Y_n \end{bmatrix} = \begin{bmatrix} 1 & X_1 \\ 1 & X_2 \\ \vdots & \vdots \\ 1 & X_n \end{bmatrix} \begin{bmatrix} \beta_0 \\ \beta_1 \end{bmatrix} + \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix}$$

$\mathbf{Y} = \mathbf{X}\beta + \epsilon$

15 / 68

Simple linear regression

$$\begin{bmatrix} \hat{y}_1 \\ \hat{y}_2 \\ \vdots \\ \hat{y}_n \end{bmatrix} = \begin{bmatrix} 1 & x_1 \\ 1 & x_2 \\ \vdots & \vdots \\ 1 & x_n \end{bmatrix} \begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \end{bmatrix}$$

$\hat{y}_i = \hat\beta_0 + \hat\beta_1 x_i$

  • $\epsilon_i = y_i - \hat{y}_i$
  • $\epsilon_i = y_i - (\hat\beta_0 + \hat\beta_1 x_i)$
  • $\epsilon_i$ is known as the residual for observation $i$

16 / 68

Simple linear regression

How are $\hat\beta_0$ and $\hat\beta_1$ chosen? What are we minimizing?

  • Minimize the residual sum of squares
  • RSS = $\sum \epsilon_i^2 = \epsilon_1^2 + \epsilon_2^2 + \dots + \epsilon_n^2$

17 / 68

Simple linear regression

How could we re-write this with $y_i$ and $x_i$?

  • Minimize the residual sum of squares
  • RSS = $\sum \epsilon_i^2 = \epsilon_1^2 + \epsilon_2^2 + \dots + \epsilon_n^2$
  • RSS = $(y_1 - \hat\beta_0 - \hat\beta_1 x_1)^2 + (y_2 - \hat\beta_0 - \hat\beta_1 x_2)^2 + \dots + (y_n - \hat\beta_0 - \hat\beta_1 x_n)^2$

18 / 68

Simple linear regression

Let's put this back in matrix form:

$$\sum \epsilon_i^2 = \begin{bmatrix} \epsilon_1 & \epsilon_2 & \dots & \epsilon_n \end{bmatrix} \begin{bmatrix} \epsilon_1 \\ \epsilon_2 \\ \vdots \\ \epsilon_n \end{bmatrix} = \epsilon^T \epsilon$$

19 / 68
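
A quick numeric check of this identity in R, using a hypothetical residual vector:

e <- c(0.5, -1.2, 0.7)   # hypothetical residuals
sum(e^2)                 # sum of squared residuals
t(e) %*% e               # the same value, as a 1 x 1 matrix product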

Simple linear regression

What can we replace $\epsilon_i$ with? (Hint: look back a few slides)

$$\sum \epsilon_i^2 = (\mathbf{Y} - \mathbf{X}\beta)^T(\mathbf{Y} - \mathbf{X}\beta)$$

20 / 68

Simple linear regression

OKAY! So this is the thing we are trying to minimize with respect to $\beta$:

$$(\mathbf{Y} - \mathbf{X}\beta)^T(\mathbf{Y} - \mathbf{X}\beta)$$

In calculus, how do we minimize things?

  • Take the derivative with respect to $\beta$
  • Set it equal to 0 (or a vector of 0s!)
  • Solve for $\beta$

$$\frac{d}{d\beta}(\mathbf{Y} - \mathbf{X}\beta)^T(\mathbf{Y} - \mathbf{X}\beta) = -2\mathbf{X}^T(\mathbf{Y} - \mathbf{X}\beta)$$

$$-2\mathbf{X}^T(\mathbf{Y} - \mathbf{X}\beta) = 0$$

$$\mathbf{X}^T\mathbf{Y} = \mathbf{X}^T\mathbf{X}\beta$$

21 / 68

Simple linear regression

$$\mathbf{X}^T\mathbf{Y} = \mathbf{X}^T\mathbf{X}\hat\beta$$

$$\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \end{bmatrix} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$$

22 / 68
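
A sketch of this closed-form solution in R, using the built-in mtcars data purely for illustration (not the deck's example): the normal-equations estimate matches what lm() returns.

X <- model.matrix(~ wt, data = mtcars)   # n x 2 design matrix
Y <- mtcars$mpg
solve(t(X) %*% X) %*% t(X) %*% Y         # (X^T X)^{-1} X^T Y
coef(lm(mpg ~ wt, data = mtcars))        # lm() gives the same estimates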

Simple linear regression

$$\hat{\mathbf{Y}} = \mathbf{X}\hat\beta$$

$$\hat{\mathbf{Y}} = \mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$$

23 / 68

Simple linear regression

$$\hat{\mathbf{Y}} = \mathbf{X}\hat\beta$$

$$\hat{\mathbf{Y}} = \mathbf{X}\underbrace{(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}}_{\hat\beta}$$

24 / 68

Simple linear regression

$$\hat{\mathbf{Y}} = \mathbf{X}\hat\beta$$

$$\hat{\mathbf{Y}} = \underbrace{\mathbf{X}(\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T}_{\textrm{hat matrix}}\mathbf{Y}$$

Why do you think this is called the "hat matrix"?

25 / 68
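
A sketch of the hat matrix in R (mtcars again for illustration): multiplying $\mathbf{Y}$ by $\mathbf{H}$ "puts the hat on" $\mathbf{Y}$, returning the fitted values.

X <- model.matrix(~ wt, data = mtcars)
Y <- mtcars$mpg
H <- X %*% solve(t(X) %*% X) %*% t(X)        # H = X (X^T X)^{-1} X^T
head(H %*% Y)                                # Y-hat via the hat matrix
head(fitted(lm(mpg ~ wt, data = mtcars)))    # same fitted values from lm()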

Linear Models

  • Go to the sta-363-s20 GitHub organization and search for appex-01-linear-models
  • Clone this repository into RStudio Cloud
  • Complete the exercises
  • Knit, Commit, Push
  • Leave RStudio Cloud open, we may return at the end of class
26 / 68

Multiple linear regression

We can generalize this beyond just one predictor

$$\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \vdots \\ \hat\beta_p \end{bmatrix} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$$

What are the dimensions of the design matrix, $\mathbf{X}$, now?

  • $\mathbf{X}_{n \times (p+1)}$

27 / 68

Multiple linear regression

We can generalize this beyond just one predictor

$$\begin{bmatrix} \hat\beta_0 \\ \hat\beta_1 \\ \vdots \\ \hat\beta_p \end{bmatrix} = (\mathbf{X}^T\mathbf{X})^{-1}\mathbf{X}^T\mathbf{Y}$$

What are the dimensions of the design matrix, $\mathbf{X}$, now?

$$\mathbf{X} = \begin{bmatrix} 1 & X_{11} & X_{12} & \dots & X_{1p} \\ 1 & X_{21} & X_{22} & \dots & X_{2p} \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 1 & X_{n1} & X_{n2} & \dots & X_{np} \end{bmatrix}$$

28 / 68
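
A sketch of these dimensions in R (mtcars with two illustrative predictors): model.matrix() builds the design matrix with an intercept column plus one column per predictor.

X <- model.matrix(~ wt + hp, data = mtcars)
dim(X)    # n x (p + 1): here 32 x 3
head(X)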

$\hat\beta$ interpretation in multiple linear regression

The coefficient for $x$ is $\hat\beta$ (95% CI: $LB_{\hat\beta}$, $UB_{\hat\beta}$). A one-unit increase in $x$ yields an expected increase in $y$ of $\hat\beta$, holding all other variables constant.

29 / 68

Linear Regression Questions

  • ✔️ Is there a relationship between a response variable and predictors?
  • How strong is the relationship?
  • What is the uncertainty?
  • How accurately can we predict a future outcome?
30 / 68

Linear regression uncertainty

  • The standard error of an estimator reflects how it varies under repeated sampling

$$\textrm{Var}(\hat\beta) = \sigma^2(\mathbf{X}^T\mathbf{X})^{-1}$$

  • $\sigma^2 = \textrm{Var}(\epsilon)$
  • In the case of simple linear regression,

$$SE(\hat\beta_1)^2 = \frac{\sigma^2}{\sum_{i=1}^n (x_i - \bar{x})^2}$$

  • This uncertainty is used in the test statistic $t = \frac{\hat\beta_1}{SE_{\hat\beta_1}}$

32 / 68

Let's look at an example

Let's look at a sample of 116 sparrows from Kent Island. We are interested in the relationship between Weight and Wing Length.

  • The standard error of $\hat\beta_1$ ($SE_{\hat\beta_1}$) is how much we expect the sample slope to vary from one random sample to another.
33 / 68

broom

a quick pause for R

  • You're familiar with the tidyverse:
library(tidyverse)
  • The broom package takes the messy output of built-in functions in R, such as lm, and turns them into tidy data frames.
library(broom)
34 / 68

How does a pipe work?

  • You can think about the following sequence of actions - find key, unlock car, start car, drive to school, park.
  • Expressed as a set of nested functions in R pseudocode this would look like:
park(drive(start_car(find("keys")), to = "campus"))
  • Writing it out using pipes gives it a more natural (and easier to read) structure:
find("keys") %>%
start_car() %>%
drive(to = "campus") %>%
park()
35 / 68

What about other arguments?

To send results to a function argument other than the first one, or to use the previous result for multiple arguments, use .:

starwars %>%
filter(species == "Human") %>%
lm(mass ~ height, data = .)
##
## Call:
## lm(formula = mass ~ height, data = .)
##
## Coefficients:
## (Intercept) height
## -116.58 1.11
36 / 68

Sparrows

How can we quantify how much we'd expect the slope to differ from one random sample to another?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
37 / 68
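
A sketch of the standard-error formula from a few slides back, computed by hand; this assumes Sparrows comes from the Stat2Data package (an assumption, since the deck doesn't say where the data are loaded from).

library(Stat2Data)
data(Sparrows)
fit <- lm(Weight ~ WingLength, data = Sparrows)
n <- nrow(Sparrows)
sigma2_hat <- sum(resid(fit)^2) / (n - 2)   # estimate of sigma^2 = Var(epsilon)
sqrt(sigma2_hat / sum((Sparrows$WingLength - mean(Sparrows$WingLength))^2))
# should match the std.error for WingLength from tidy(), ~0.0347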

Sparrows

How do we interpret this?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
38 / 68

Sparrows

How do we interpret this?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
  • "the sample slope is more than 13 standard errors above a slope of zero"
39 / 68

Sparrows

How do we know what values of this statistic are worth paying attention to?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
40 / 68

Sparrows

How do we know what values of this statistic are worth paying attention to?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
  • confidence intervals
  • p-values
41 / 68

Sparrows

How do we know what values of this statistic are worth paying attention to?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy(conf.int = TRUE)
## # A tibble: 2 x 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1 -0.531 3.26
## 2 WingLength 0.467 0.0347 13.5 2.62e-25 0.399 0.536
  • confidence intervals
  • p-values
42 / 68

Sparrows

How are these statistics distributed under the null hypothesis?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
43 / 68

Sparrows

  • I've generated some data under a null hypothesis where $n = 20$
44 / 68

Sparrows

  • this is a $t$-distribution with $n - p - 1$ degrees of freedom.
45 / 68

Sparrows

The distribution of test statistics we would expect given the null hypothesis is true, $\beta_1 = 0$, is a $t$-distribution with $n - 2$ degrees of freedom.

46 / 68

Sparrows

47 / 68

Sparrows

How can we compare this line to the distribution under the null?

  • p-value

48 / 68

p-value

The probability of getting a statistic as extreme or more extreme than the observed test statistic, given that the null hypothesis is true

49 / 68

Sparrows

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25
50 / 68

Return to generated data, n = 20

  • Let's say we get a statistic of 1.5 in a sample
51 / 68

Let's do it in R!

The proportion of area less than 1.5

pt(1.5, df = 18)
## [1] 0.9245248
52 / 68

Let's do it in R!

The proportion of area greater than 1.5

pt(1.5, df = 18, lower.tail = FALSE)
## [1] 0.07547523
53 / 68

Let's do it in R!

The proportion of area greater than 1.5 or less than -1.5

pt(1.5, df = 18, lower.tail = FALSE) * 2
## [1] 0.1509505

54 / 68

p-value

The probability of getting a statistic as extreme or more extreme than the observed test statistic, given that the null hypothesis is true

55 / 68

Hypothesis test

  • null hypothesis $H_0: \beta_1 = 0$
  • alternative hypothesis $H_A: \beta_1 \neq 0$
  • p-value: 0.15
  • Often, we have an $\alpha$-level cutoff to compare this to, for example 0.05. Since this p-value is greater than 0.05, we fail to reject the null hypothesis.

56 / 68

confidence intervals

If we use the same sampling method to select different samples and compute an interval estimate for each sample, we would expect the true population parameter ($\beta_1$) to fall within the interval estimates 95% of the time.

57 / 68

Confidence interval

$$\hat\beta_1 \pm t^* \times SE_{\hat\beta_1}$$

  • $t^*$ is the critical value for the $t_{n-p-1}$ density curve needed to obtain the desired confidence level
  • Often we want a 95% confidence level.

58 / 68

Let's do it in R!

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy(conf.int = TRUE)
## # A tibble: 2 x 7
## term estimate std.error statistic p.value conf.low conf.high
## <chr> <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1 -0.531 3.26
## 2 WingLength 0.467 0.0347 13.5 2.62e-25 0.399 0.536

  • $t^* = t_{n-p-1} = t_{114} = 1.98$
  • $LB = 0.467 - 1.98 \times 0.0347 = 0.399$
  • $UB = 0.467 + 1.98 \times 0.0347 = 0.536$

59 / 68

confidence intervals

If we use the same sampling method to select different samples and compute an interval estimate for each sample, we would expect the true population parameter ($\beta_1$) to fall within the interval estimates 95% of the time.

60 / 68

Linear Regression Questions

  • ✔️ Is there a relationship between a response variable and predictors?
  • ✔️ How strong is the relationship?
  • ✔️ What is the uncertainty?
  • How accurately can we predict a future outcome?
61 / 68

Sparrows

Using the information here, how could I predict a new sparrow's weight if I knew the wing length was 30?

lm(Weight ~ WingLength, data = Sparrows) %>%
tidy()
## # A tibble: 2 x 5
## term estimate std.error statistic p.value
## <chr> <dbl> <dbl> <dbl> <dbl>
## 1 (Intercept) 1.37 0.957 1.43 1.56e- 1
## 2 WingLength 0.467 0.0347 13.5 2.62e-25

  • $1.37 + 0.467 \times 30 = 15.38$

62 / 68
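
The same prediction via predict(), as a sketch (assuming the Sparrows model fit as above):

fit <- lm(Weight ~ WingLength, data = Sparrows)
predict(fit, newdata = data.frame(WingLength = 30))   # ~15.4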

Linear Regression Accuracy

What is the residual sum of squares again?

  • Note: In previous classes, this may have been referred to as SSE (sum of squared errors); the book uses RSS, so we will stick with that!

$$RSS = \sum (y_i - \hat{y}_i)^2$$

  • The total sum of squares represents the variability of the outcome. It is equivalent to the variability described by the model plus the remaining residual sum of squares.

$$TSS = \sum (y_i - \bar{y})^2$$

63 / 68
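
A sketch computing RSS and TSS directly for the Sparrows model (same data assumption as before):

fit <- lm(Weight ~ WingLength, data = Sparrows)
RSS <- sum(resid(fit)^2)                                  # sum of (y_i - y-hat_i)^2
TSS <- sum((Sparrows$Weight - mean(Sparrows$Weight))^2)   # sum of (y_i - y-bar)^2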

Linear Regression Accuracy

  • There are many ways "model fit" can be assessed. Two common ones are:
    • Residual Standard Error (RSE)
    • $R^2$, the fraction of the variance explained
  • $RSE = \sqrt{\frac{1}{n-p-1} RSS}$
  • $R^2 = 1 - \frac{RSS}{TSS}$

64 / 68
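
Continuing the sketch above, RSE and $R^2$ from RSS and TSS; they should match sigma (~1.40) and r.squared (~0.614) in the glance() output shown two slides ahead.

n <- nrow(Sparrows); p <- 1
sqrt(RSS / (n - p - 1))   # RSE, reported by glance() as sigma
1 - RSS / TSS             # R^2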

Linear Regression Accuracy

What could we use to determine whether at least one predictor is useful?

$$F = \frac{(TSS - RSS)/p}{RSS/(n-p-1)} \sim F_{p,\,n-p-1}$$

We can use an F-statistic!

65 / 68
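
Continuing the same sketch, the F-statistic by hand; it should match the statistic column of glance() (~181).

F_stat <- ((TSS - RSS) / p) / (RSS / (n - p - 1))
F_stat                                                     # ~181
pf(F_stat, df1 = p, df2 = n - p - 1, lower.tail = FALSE)   # its p-value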

Let's do it in R!

lm(Weight ~ WingLength, data = Sparrows) %>%
glance()
## # A tibble: 1 x 11
## r.squared adj.r.squared sigma statistic p.value df logLik AIC BIC
## <dbl> <dbl> <dbl> <dbl> <dbl> <int> <dbl> <dbl> <dbl>
## 1 0.614 0.611 1.40 181. 2.62e-25 2 -203. 411. 419.
## # … with 2 more variables: deviance <dbl>, df.residual <int>
66 / 68

Additional Linear Regression Topics

  • Polynomial terms
  • Interactions
  • Outliers
  • Non-constant variance of error terms
  • High leverage points
  • Collinearity

Refer to Chapter 3 for more details on these topics if you need a refresher.

67 / 68

Linear Models

  • Go back to your Linear Models RStudio Cloud session
  • Load the tidyverse and broom using library(tidyverse) then library(broom)
  • Using the mtcars dataset, fit a model predicting mpg from am
  • Use the tidy() function to see the beta coefficients
  • Use the glance() function to see the model fit statistics
  • Knit, Commit, Push
68 / 68
