Open Stats Lab Additional Analytic Skills in R Using Data from Gino, Kouchaki, and Galinsky (2015)

Kevin P. McIntyre developed this amazing resource for students of psychology. Check out Open Stats Lab for a collection of all activities.

Each activity includes an article from Psychological Science, a data set, and an activity to complete in SPSS. However, if you are an open source fanatic, you could also complete the activity in JASP. For tips on how to use JASP, check out this resource created by Buchanan, Hopke, and Donaldson (2018).

I prefer to get my hands deep into the data. Dr. McIntyre does not yet offer an R activity to accompany the work of Gino, Kouchaki, and Galinsky (2015), so here is one possible solution written in R.

Analysis


I will perform assumption checks for each test prior to running it. We already know that the data meet all assumptions, otherwise the authors would have used a different analytic approach. However, checking the assumptions is helpful because:

  1. reproducibility and accuracy can be verified; and
  2. if you are a student, then you should form the habit of testing assumptions.

This analysis will follow the data science workflow advocated by Garrett Grolemund and Hadley Wickham. First, we will set up our session and import the data. Then, we must clean the data. Next, we will transform, model, and visualize the data to understand it. Finally, we will communicate our findings.

Import


We start by loading the necessary packages.

library(tidyverse) # utility & visualization
library(psych) # Cronbach's alpha
library(knitr) # create tables
library(kableExtra) # style tables
library(broom) # calculate effect size
library(car) # Levene's test
library(gmodels) # crosstabs method for chi-square

Now we can import the data set, reading it directly from the URL.

gino <- read_csv("https://www.cjcascalheira.com/data/osl-gino-kouchaki-galinsky-2015/gino-kouchaki-galinsky-2015-experiment-3.csv")

We can also set the default theme for the exploratory plots we will create as we test the assumptions of the univariate ANOVAs.

theme_set(theme_minimal())

Clean


Unlike some data sets in the Open Stats Lab series, Dr. McIntyre chose to leave all observations in this file. That is, the data set has 291 observations instead of 288.

glimpse(gino)
## Observations: 291
## Variables: 30
## $ instr               <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ filter              <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ CONDITION           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ FAILED_MC           <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ condition_string    <chr> "neutral", "neutral", "neutral", "neutral"...
## $ impurity_1          <dbl> 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, ...
## $ impurity_2          <dbl> 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ impurity_3          <dbl> 1, 1, 1, 1, 1, 2, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ dissonance_1        <dbl> 1, 2, 2, 1, 1, 3, 2, 1, 1, 1, 2, 1, 5, 1, ...
## $ dissonance_2        <dbl> 1, 2, 2, 1, 1, 2, 1, 1, 1, 1, 5, 1, 5, 1, ...
## $ dissonance_3        <dbl> 6, 2, 2, 1, 1, 2, 1, 1, 1, 1, 6, 1, 5, 1, ...
## $ neg_aff1            <dbl> 3, 4, 2, 1, 1, 1, 1, 1, 1, 5, 4, 1, 3, 1, ...
## $ neg_aff2            <dbl> 2, 2, 2, 1, 1, 1, 1, 1, 1, 1, 4, 1, 2, 1, ...
## $ neg_aff3            <dbl> 3, 3, 2, 1, 1, 2, 1, 1, 1, 5, 5, 1, 2, 1, ...
## $ pos_aff1            <dbl> 2, 5, 5, 6, 4, 4, 5, 5, 6, 6, 5, 6, 2, 6, ...
## $ pos_aff2            <dbl> 1, 6, 5, 6, 5, 4, 5, 5, 6, 6, 5, 6, 2, 7, ...
## $ pos_aff3            <dbl> 1, 5, 2, 6, 6, 4, 2, 4, 5, 5, 2, 6, 2, 6, ...
## $ embarrassed         <dbl> 1, 1, 5, 1, 4, 2, 1, 1, 1, 1, 6, 1, 5, 1, ...
## $ ashamed             <dbl> 1, 1, 2, 1, 4, 2, 1, 1, 1, 1, 5, 1, 5, 1, ...
## $ alientation_1       <dbl> 2, 3, 5, 1, 1, 2, 2, 2, 2, 1, 3, 4, 4, 1, ...
## $ alientation_2       <dbl> 1, 3, 2, 1, 1, 2, 2, 2, 1, 1, 2, 1, 3, 1, ...
## $ alientation_3       <dbl> 1, 2, 2, 1, 1, 2, 2, 1, 1, 1, 3, 1, 5, 1, ...
## $ alientation_4       <dbl> 2, 2, 2, 1, 1, 2, 2, 1, 1, 1, 4, 1, 3, 1, ...
## $ MCheck              <dbl> 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, 2, ...
## $ age                 <dbl> 30, 31, 29, 33, 20, 18, 31, 39, 33, 28, 28...
## $ male                <dbl> 2, 1, 1, 2, 1, 1, 1, 1, 1, 1, 2, 1, 1, 2, ...
## $ decided_to_help     <dbl> 0, 0, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, ...
## $ neutralDummy        <dbl> 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
## $ failureDummy        <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...
## $ inauthenticityDummy <dbl> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, ...

It appears that we must filter for participants who passed the manipulation check (FAILED_MC == 0), excluding those who failed it. Once we do that, this variable is useless and we can drop it, along with instr, filter, MCheck, the demographic variables, and the dummy variables. Of the two condition variables, we keep only condition_string and drop the numeric CONDITION.

gino_clean <- gino %>%
  filter(FAILED_MC == 0) %>%
  select(-c(instr, filter, CONDITION, FAILED_MC, MCheck, age, male),
         -ends_with("Dummy")) %>%
  rename(condition = condition_string)

A transformation of condition into a factor is necessary for the one-way ANOVAs.

gino_clean <- within(gino_clean, {
  condition <- factor(condition)
})
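If you prefer to stay inside the pipe-based workflow, a mutate() call achieves the same conversion; a minimal equivalent sketch:

# Equivalent tidyverse-style conversion (same result as the within() call above)
gino_clean <- gino_clean %>%
  mutate(condition = factor(condition))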

Understand


Our first task is to compute aggregate dependent measures. A chain of pipes makes this task relatively simple in R. For each dependent measure, we will mutate a new variable by selecting the relevant item columns and computing the mean for each row. The aggregates are more useful for future analyses, so the individual items for impurity, dissonance, and so on will be dropped.

Always save your results as new R data frames to track changes.

(gino_means <- gino_clean %>%
  mutate(
    feelings_of_impurity = select(gino_clean, starts_with("impurity")) %>% rowMeans(),
    feelings_of_discomfort = select(gino_clean, starts_with("dissonance")) %>% rowMeans(),
    negative_affect = select(gino_clean, starts_with("neg")) %>% rowMeans(),
    positive_affect = select(gino_clean, starts_with("pos")) %>% rowMeans(),
    embarrassment = select(gino_clean, embarrassed, ashamed) %>% rowMeans(),
    self_alienation = select(gino_clean, starts_with("alien")) %>% rowMeans()
  ) %>%
  select(condition, feelings_of_impurity, feelings_of_discomfort, negative_affect,
         positive_affect, embarrassment, self_alienation, decided_to_help))
## # A tibble: 288 x 8
##    condition feelings_of_imp~ feelings_of_dis~ negative_affect
##    <fct>                <dbl>            <dbl>           <dbl>
##  1 neutral                  1             2.67            2.67
##  2 neutral                  1             2               3   
##  3 neutral                  1             2               2   
##  4 neutral                  1             1               1   
##  5 neutral                  1             1               1   
##  6 neutral                  2             2.33            1.33
##  7 neutral                  1             1.33            1   
##  8 neutral                  1             1               1   
##  9 neutral                  1             1               1   
## 10 neutral                  1             1               3.67
## # ... with 278 more rows, and 4 more variables: positive_affect <dbl>,
## #   embarrassment <dbl>, self_alienation <dbl>, decided_to_help <dbl>

Cronbach’s Alpha

Cronbach’s alpha is a measure of internal consistency ranging from 0 to 1. Acceptable values for the coefficient range from 0.70 to 0.95, although it has been noted that values greater than 0.90 may indicate redundancy. Longer measurement instruments increase internal consistency, but may cause redundancy.
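For intuition, raw alpha can also be computed by hand from the item variances and the variance of the total score; here is a minimal sketch using the three impurity items (an illustration, not part of the original activity), which should agree with the raw_alpha reported by psych::alpha() below:

# Manual raw alpha for the impurity items
impurity_items <- select(gino_clean, starts_with("impurity"))
k <- ncol(impurity_items)
(k / (k - 1)) * (1 - sum(sapply(impurity_items, var)) / var(rowSums(impurity_items)))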

A function exists to calculate Cronbach’s alpha in psych::alpha. The function takes a data frame of scale items as its primary argument. Select the individual scale items prior to aggregation to compute alpha for each underlying construct.

# Cronbach's alpha for feelings of impurity
gino_clean %>%
  select(starts_with("impurity")) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N    ase mean  sd median_r
##       0.94      0.94    0.91      0.84  15 0.0062  2.3 1.7     0.84
## 
##  lower alpha upper     95% confidence boundaries
## 0.93 0.94 0.95 
## 
##  Reliability if an item is dropped:
##            raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r med.r
## impurity_1      0.92      0.92    0.84      0.84 10.9    0.010    NA  0.84
## impurity_2      0.91      0.91    0.83      0.83  9.7    0.011    NA  0.83
## impurity_3      0.91      0.91    0.84      0.84 10.4    0.010    NA  0.84
## 
##  Item statistics 
##              n raw.r std.r r.cor r.drop mean  sd
## impurity_1 288  0.94  0.94  0.90   0.87  2.3 1.8
## impurity_2 288  0.95  0.95  0.91   0.88  2.2 1.8
## impurity_3 288  0.95  0.94  0.90   0.87  2.4 1.9
## 
## Non missing response frequency for each item
##               1    2    3    4    5    6    7 miss
## impurity_1 0.59 0.09 0.06 0.07 0.11 0.05 0.02    0
## impurity_2 0.60 0.11 0.06 0.08 0.06 0.07 0.03    0
## impurity_3 0.56 0.10 0.07 0.08 0.09 0.06 0.03    0
# Cronbach's alpha for feelings of discomfort
gino_clean %>%
  select(starts_with("dissonance")) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean sd median_r
##       0.94      0.94    0.93      0.85  17 0.006  4.1  2     0.82
## 
##  lower alpha upper     95% confidence boundaries
## 0.93 0.94 0.95 
## 
##  Reliability if an item is dropped:
##              raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r
## dissonance_1      0.90      0.90    0.82      0.82  9.3   0.0115    NA
## dissonance_2      0.88      0.88    0.79      0.79  7.6   0.0136    NA
## dissonance_3      0.96      0.96    0.92      0.92 24.7   0.0046    NA
##              med.r
## dissonance_1  0.82
## dissonance_2  0.79
## dissonance_3  0.92
## 
##  Item statistics 
##                n raw.r std.r r.cor r.drop mean  sd
## dissonance_1 288  0.96  0.96  0.94   0.90  4.0 2.2
## dissonance_2 288  0.97  0.97  0.96   0.92  4.0 2.1
## dissonance_3 288  0.92  0.92  0.84   0.82  4.3 2.1
## 
## Non missing response frequency for each item
##                 1    2    3    4    5    6    7 miss
## dissonance_1 0.25 0.11 0.05 0.05 0.23 0.20 0.11    0
## dissonance_2 0.24 0.09 0.05 0.06 0.25 0.21 0.10    0
## dissonance_3 0.21 0.06 0.06 0.05 0.25 0.23 0.14    0
# Cronbach's alpha for negative affect
gino_clean %>%
  select(starts_with("neg")) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N    ase mean sd median_r
##       0.93      0.93     0.9      0.81  13 0.0075  3.7  2     0.78
## 
##  lower alpha upper     95% confidence boundaries
## 0.91 0.93 0.94 
## 
##  Reliability if an item is dropped:
##          raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r med.r
## neg_aff1      0.88      0.88    0.78      0.78  7.3   0.0143    NA  0.78
## neg_aff2      0.93      0.93    0.87      0.87 12.9   0.0085    NA  0.87
## neg_aff3      0.87      0.87    0.77      0.77  6.9   0.0150    NA  0.77
## 
##  Item statistics 
##            n raw.r std.r r.cor r.drop mean  sd
## neg_aff1 288  0.94  0.94  0.91   0.87  3.9 2.2
## neg_aff2 288  0.91  0.91  0.83   0.81  3.3 2.1
## neg_aff3 288  0.95  0.95  0.92   0.88  3.9 2.2
## 
## Non missing response frequency for each item
##             1    2    3    4    5    6    7 miss
## neg_aff1 0.28 0.08 0.06 0.09 0.20 0.16 0.13    0
## neg_aff2 0.34 0.13 0.07 0.12 0.14 0.10 0.10    0
## neg_aff3 0.27 0.08 0.09 0.05 0.22 0.16 0.13    0
# Cronbach's alpha for positive affect
gino_clean %>%
  select(starts_with("pos")) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N    ase mean  sd median_r
##       0.95      0.95    0.94      0.86  18 0.0055  2.8 1.8     0.81
## 
##  lower alpha upper     95% confidence boundaries
## 0.94 0.95 0.96 
## 
##  Reliability if an item is dropped:
##          raw_alpha std.alpha G6(smc) average_r  S/N alpha se var.r med.r
## pos_aff1      0.89      0.89    0.81      0.81  8.5   0.0125    NA  0.81
## pos_aff2      0.89      0.89    0.81      0.81  8.5   0.0125    NA  0.81
## pos_aff3      0.98      0.98    0.96      0.96 49.9   0.0023    NA  0.96
## 
##  Item statistics 
##            n raw.r std.r r.cor r.drop mean  sd
## pos_aff1 288  0.97  0.97  0.97   0.93  2.8 1.9
## pos_aff2 288  0.97  0.97  0.97   0.93  2.9 2.0
## pos_aff3 288  0.91  0.92  0.82   0.82  2.7 1.8
## 
## Non missing response frequency for each item
##             1    2    3    4    5    6    7 miss
## pos_aff1 0.35 0.25 0.08 0.09 0.09 0.10 0.04    0
## pos_aff2 0.33 0.26 0.08 0.06 0.11 0.10 0.06    0
## pos_aff3 0.34 0.26 0.11 0.10 0.08 0.06 0.05    0
# Cronbach's alpha for embarrassment
gino_clean %>%
  select(embarrassed, ashamed) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N   ase mean  sd median_r
##        0.9       0.9    0.81      0.81 8.6 0.012  3.7 2.1     0.81
## 
##  lower alpha upper     95% confidence boundaries
## 0.87 0.9 0.92 
## 
##  Reliability if an item is dropped:
##             raw_alpha std.alpha G6(smc) average_r S/N alpha se var.r med.r
## embarrassed      0.81      0.81    0.66      0.81  NA       NA  0.81  0.81
## ashamed          0.66      0.81      NA        NA  NA       NA  0.66  0.81
## 
##  Item statistics 
##               n raw.r std.r r.cor r.drop mean  sd
## embarrassed 288  0.95  0.95  0.86   0.81  3.6 2.2
## ashamed     288  0.95  0.95  0.86   0.81  3.7 2.2
## 
## Non missing response frequency for each item
##                1    2    3    4    5    6    7 miss
## embarrassed 0.28 0.13 0.06 0.07 0.22 0.16 0.09    0
## ashamed     0.28 0.11 0.05 0.11 0.18 0.17 0.09    0
# Cronbach's alpha for self-alienation
gino_clean %>%
  select(starts_with("alien")) %>%
  psych::alpha()
## 
## Reliability analysis   
## Call: psych::alpha(x = .)
## 
##   raw_alpha std.alpha G6(smc) average_r S/N    ase mean  sd median_r
##        0.9       0.9    0.89      0.69   9 0.0095    3 1.7     0.72
## 
##  lower alpha upper     95% confidence boundaries
## 0.88 0.9 0.92 
## 
##  Reliability if an item is dropped:
##               raw_alpha std.alpha G6(smc) average_r  S/N alpha se  var.r
## alientation_1      0.92      0.92    0.89      0.80 12.2   0.0077 0.0014
## alientation_2      0.85      0.84    0.82      0.64  5.4   0.0154 0.0307
## alientation_3      0.85      0.85    0.82      0.66  5.9   0.0153 0.0168
## alientation_4      0.85      0.86    0.82      0.67  6.0   0.0149 0.0122
##               med.r
## alientation_1  0.79
## alientation_2  0.56
## alientation_3  0.67
## alientation_4  0.67
## 
##  Item statistics 
##                 n raw.r std.r r.cor r.drop mean  sd
## alientation_1 288  0.77  0.78  0.66   0.62  2.7 1.7
## alientation_2 288  0.92  0.92  0.89   0.85  2.8 1.7
## alientation_3 288  0.92  0.90  0.88   0.83  3.3 2.1
## alientation_4 288  0.91  0.90  0.88   0.83  3.2 2.0
## 
## Non missing response frequency for each item
##                  1    2    3    4    5    6    7 miss
## alientation_1 0.31 0.31 0.10 0.08 0.13 0.06 0.02    0
## alientation_2 0.33 0.25 0.09 0.09 0.17 0.05 0.02    0
## alientation_3 0.31 0.18 0.06 0.06 0.22 0.11 0.06    0
## alientation_4 0.32 0.18 0.06 0.12 0.17 0.10 0.05    0

After writing their essays, participants answered a questionnaire composed of items on a 7-point scale. The items assessed feelings of impurity (\(\alpha = .94\)), discomfort (\(\alpha = .94\)), negative (\(\alpha = .93\)) and positive (\(\alpha = .95\)) affect, embarrassment (\(\alpha = .90\)), and self-alienation (\(\alpha = .90\)).

Descriptive Statistics

One method for calculating descriptive statistics uses a pipeline chaining group_by() and summarize(). First, we need to transform the data into long format using gather().

Although the activity does not require it, the output includes the upper and lower bounds of the 95% confidence interval for each mean. These intervals differ slightly from those reported by Gino, Kouchaki, and Galinsky (2015), which is to be expected: had we instead bootstrapped the intervals with a call to quantile(), the results would likely differ again. Slight variation in confidence intervals is acceptable. A sketch of the bootstrap alternative appears after the output below.

(gino_summary <- gino_means %>%
  select(-decided_to_help) %>%
  gather(key = item, value = score, -condition) %>%
  group_by(condition, item) %>%
  summarize(
    mean = mean(score),
    sd = sd(score),
    n = n(),
    t_star = qt(p = 0.975, df = n - 1),
    upper = mean + (t_star * (sd/sqrt(n))), 
    lower = mean - (t_star * (sd/sqrt(n)))
  ))
## # A tibble: 18 x 8
## # Groups:   condition [3]
##    condition      item                  mean    sd     n t_star upper lower
##    <fct>          <chr>                <dbl> <dbl> <int>  <dbl> <dbl> <dbl>
##  1 failure        embarrassment         4.69 1.82     97   1.98  5.05  4.32
##  2 failure        feelings_of_discomf~  4.90 1.64     97   1.98  5.23  4.57
##  3 failure        feelings_of_impurity  2.09 1.56     97   1.98  2.40  1.77
##  4 failure        negative_affect       4.61 1.73     97   1.98  4.96  4.26
##  5 failure        positive_affect       1.84 1.01     97   1.98  2.05  1.64
##  6 failure        self_alienation       3.21 1.62     97   1.98  3.54  2.89
##  7 inauthenticity embarrassment         4.40 1.71     92   1.99  4.76  4.05
##  8 inauthenticity feelings_of_discomf~  5.11 1.53     92   1.99  5.43  4.80
##  9 inauthenticity feelings_of_impurity  3.66 1.82     92   1.99  4.03  3.28
## 10 inauthenticity negative_affect       4.63 1.68     92   1.99  4.98  4.28
## 11 inauthenticity positive_affect       1.99 1.11     92   1.99  2.22  1.76
## 12 inauthenticity self_alienation       3.83 1.51     92   1.99  4.14  3.52
## 13 neutral        embarrassment         1.96 1.38     99   1.98  2.24  1.69
## 14 neutral        feelings_of_discomf~  2.41 1.71     99   1.98  2.75  2.07
## 15 neutral        feelings_of_impurity  1.21 0.613    99   1.98  1.33  1.09
## 16 neutral        negative_affect       1.88 1.30     99   1.98  2.13  1.62
## 17 neutral        positive_affect       4.46 1.77     99   1.98  4.82  4.11
## 18 neutral        self_alienation       1.92 1.19     99   1.98  2.16  1.69
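A quick sketch of the bootstrap alternative mentioned above might look like the following (the choice of the neutral-condition self-alienation scores is arbitrary, and the exact bounds will vary with the random seed):

# Bootstrapped 95% CI for mean self-alienation in the neutral condition (illustration only)
set.seed(123)
neutral_scores <- gino_means %>%
  filter(condition == "neutral") %>%
  pull(self_alienation)
boot_means <- replicate(10000, mean(sample(neutral_scores, replace = TRUE)))
quantile(boot_means, probs = c(0.025, 0.975))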

APA Style Table

If we convert item to a factor, we can rearrange the order of its levels and apply labels that match the publication.

(gino_summary <- within(gino_summary, {
  item <- factor(item, 
                 levels = c("self_alienation", "feelings_of_impurity", "feelings_of_discomfort",
                            "negative_affect", "positive_affect", "embarrassment"),
                 labels = c("Self-alienation", "Feelings of impurity", "Discomfort",
                            "Negative affect", "Positive affect", "Embarrassment"))
})) 
## # A tibble: 18 x 8
## # Groups:   condition [3]
##    condition      item                  mean    sd     n t_star upper lower
##    <fct>          <fct>                <dbl> <dbl> <int>  <dbl> <dbl> <dbl>
##  1 failure        Embarrassment         4.69 1.82     97   1.98  5.05  4.32
##  2 failure        Discomfort            4.90 1.64     97   1.98  5.23  4.57
##  3 failure        Feelings of impurity  2.09 1.56     97   1.98  2.40  1.77
##  4 failure        Negative affect       4.61 1.73     97   1.98  4.96  4.26
##  5 failure        Positive affect       1.84 1.01     97   1.98  2.05  1.64
##  6 failure        Self-alienation       3.21 1.62     97   1.98  3.54  2.89
##  7 inauthenticity Embarrassment         4.40 1.71     92   1.99  4.76  4.05
##  8 inauthenticity Discomfort            5.11 1.53     92   1.99  5.43  4.80
##  9 inauthenticity Feelings of impurity  3.66 1.82     92   1.99  4.03  3.28
## 10 inauthenticity Negative affect       4.63 1.68     92   1.99  4.98  4.28
## 11 inauthenticity Positive affect       1.99 1.11     92   1.99  2.22  1.76
## 12 inauthenticity Self-alienation       3.83 1.51     92   1.99  4.14  3.52
## 13 neutral        Embarrassment         1.96 1.38     99   1.98  2.24  1.69
## 14 neutral        Discomfort            2.41 1.71     99   1.98  2.75  2.07
## 15 neutral        Feelings of impurity  1.21 0.613    99   1.98  1.33  1.09
## 16 neutral        Negative affect       1.88 1.30     99   1.98  2.13  1.62
## 17 neutral        Positive affect       4.46 1.77     99   1.98  4.82  4.11
## 18 neutral        Self-alienation       1.92 1.19     99   1.98  2.16  1.69

Isolate the mean and standard deviation, renaming the columns to match APA format.

# Drop confidence interval information
(gino_summary_short <- gino_summary %>%
    select(condition, item, mean, sd))
## # A tibble: 18 x 4
## # Groups:   condition [3]
##    condition      item                  mean    sd
##    <fct>          <fct>                <dbl> <dbl>
##  1 failure        Embarrassment         4.69 1.82 
##  2 failure        Discomfort            4.90 1.64 
##  3 failure        Feelings of impurity  2.09 1.56 
##  4 failure        Negative affect       4.61 1.73 
##  5 failure        Positive affect       1.84 1.01 
##  6 failure        Self-alienation       3.21 1.62 
##  7 inauthenticity Embarrassment         4.40 1.71 
##  8 inauthenticity Discomfort            5.11 1.53 
##  9 inauthenticity Feelings of impurity  3.66 1.82 
## 10 inauthenticity Negative affect       4.63 1.68 
## 11 inauthenticity Positive affect       1.99 1.11 
## 12 inauthenticity Self-alienation       3.83 1.51 
## 13 neutral        Embarrassment         1.96 1.38 
## 14 neutral        Discomfort            2.41 1.71 
## 15 neutral        Feelings of impurity  1.21 0.613
## 16 neutral        Negative affect       1.88 1.30 
## 17 neutral        Positive affect       4.46 1.77 
## 18 neutral        Self-alienation       1.92 1.19
# Rename columns
(gino_summary_short <- gino_summary_short %>%
    rename(
      Variable = item,
      M = mean,
      SD = sd
    ) %>%
    arrange(Variable))
## # A tibble: 18 x 4
## # Groups:   condition [3]
##    condition      Variable                 M    SD
##    <fct>          <fct>                <dbl> <dbl>
##  1 failure        Self-alienation       3.21 1.62 
##  2 inauthenticity Self-alienation       3.83 1.51 
##  3 neutral        Self-alienation       1.92 1.19 
##  4 failure        Feelings of impurity  2.09 1.56 
##  5 inauthenticity Feelings of impurity  3.66 1.82 
##  6 neutral        Feelings of impurity  1.21 0.613
##  7 failure        Discomfort            4.90 1.64 
##  8 inauthenticity Discomfort            5.11 1.53 
##  9 neutral        Discomfort            2.41 1.71 
## 10 failure        Negative affect       4.61 1.73 
## 11 inauthenticity Negative affect       4.63 1.68 
## 12 neutral        Negative affect       1.88 1.30 
## 13 failure        Positive affect       1.84 1.01 
## 14 inauthenticity Positive affect       1.99 1.11 
## 15 neutral        Positive affect       4.46 1.77 
## 16 failure        Embarrassment         4.69 1.82 
## 17 inauthenticity Embarrassment         4.40 1.71 
## 18 neutral        Embarrassment         1.96 1.38

The table is too long in this format. We will split the data frame into three objects, one per condition, and then bind the columns into a single master data frame.

# Separate by condition
(gino_table <- gino_summary_short %>%
  filter(condition == "inauthenticity") %>%
  ungroup() %>%
  select(Variable, M, SD))
## # A tibble: 6 x 3
##   Variable                 M    SD
##   <fct>                <dbl> <dbl>
## 1 Self-alienation       3.83  1.51
## 2 Feelings of impurity  3.66  1.82
## 3 Discomfort            5.11  1.53
## 4 Negative affect       4.63  1.68
## 5 Positive affect       1.99  1.11
## 6 Embarrassment         4.40  1.71
failure <- gino_summary_short %>%
  filter(condition == "failure") %>%
  ungroup() %>%
  select(M, SD)

condition <- gino_summary_short %>%
  filter(condition == "neutral") %>%
  ungroup() %>%
  select(M, SD)

# Prepare table format
(clean_table <- bind_cols(gino_table, failure, condition))
## # A tibble: 6 x 7
##   Variable                 M    SD    M1   SD1    M2   SD2
##   <fct>                <dbl> <dbl> <dbl> <dbl> <dbl> <dbl>
## 1 Self-alienation       3.83  1.51  3.21  1.62  1.92 1.19 
## 2 Feelings of impurity  3.66  1.82  2.09  1.56  1.21 0.613
## 3 Discomfort            5.11  1.53  4.90  1.64  2.41 1.71 
## 4 Negative affect       4.63  1.68  4.61  1.73  1.88 1.30 
## 5 Positive affect       1.99  1.11  1.84  1.01  4.46 1.77 
## 6 Embarrassment         4.40  1.71  4.69  1.82  1.96 1.38

The following code builds the table as a LaTeX kable, using booktabs rules and a spanner header over each condition to approximate APA style.

kable(clean_table, "latex", booktabs = TRUE, digits = 2, align = "lcccccc",
      col.names = c("Variable", "M", "SD", "M", "SD", "M", "SD")) %>%
  add_header_above(header = c(" ", "Inauthenticity" = 2, "Failure" = 2, "Control" = 2))
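The chunk above only builds and styles the LaTeX table. To also write it to disk, kableExtra provides save_kable(); a hedged sketch follows (the file name is a placeholder, and rendering a LaTeX table to .png typically requires a LaTeX installation plus the magick package):

# Optional: save the styled table to disk (file name is a placeholder)
kable(clean_table, "latex", booktabs = TRUE, digits = 2, align = "lcccccc",
      col.names = c("Variable", "M", "SD", "M", "SD", "M", "SD")) %>%
  add_header_above(header = c(" ", "Inauthenticity" = 2, "Failure" = 2, "Control" = 2)) %>%
  save_kable("gino-apa-table.png")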

One-way ANOVA

Now we can conduct a one-way between-groups ANOVA for each construct. The assumptions for this parametric test fall into two groups: those satisfied by the study design and those tested after data collection.

Design Assumptions

  1. Continuous dependent variable at the interval or ratio level.

  2. Categorical independent variable as a factor with three or more levels.

  3. Independence of observations such that participants are randomly assigned to levels of the factor (i.e., groups) with no overlap.

Tested Assumptions

  4. No significant outliers, which can skew the distribution and affect normality; assessed via boxplots.

  5. Normal distribution of dependent measures for each group; assessed with the Shapiro-Wilk test and Q-Q plots.
    • Univariate ANOVAs can tolerate non-normal data.
    • The Shapiro-Wilk test is conducted on the residuals of the ANOVA models.
    • The data points in the Q-Q plot should align with the dotted diagonal line if the distribution of the dependent measure is to be considered normal.
  6. Homogeneity of variances between all groups for each dependent measure; assessed with Levene’s test and a plot of residuals versus fitted values.
    • Violation of this assumption is most problematic when sample sizes are not roughly equal.
    • Homogeneity of variances can be assumed if the red line in the residuals vs. fitted values plot is horizontal.

After each omnibus test, we will conduct pairwise comparisons with a Bonferroni correction to determine which pairs of groups differ significantly. The Bonferroni correction divides the significance threshold by the number of pairwise comparisons (in this case, 3), making it a more conservative test.
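Under the hood, this is equivalent to multiplying each raw p value by the number of comparisons (capped at 1), which is what p.adjust() does; a minimal sketch with made-up p values:

# Bonferroni adjustment of three hypothetical raw p values
raw_p <- c(0.004, 0.020, 0.300)
p.adjust(raw_p, method = "bonferroni") # equivalent to pmin(raw_p * length(raw_p), 1)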

Reporting the direction of comparisons relies on the descriptive statistics calculated above.

The effect size of a one-way ANOVA is partial eta-squared, which we can calculate manually:

\[\eta_p^2 = \frac{SS_\text{condition}}{SS_\text{condition} + SS_\text{residuals}}\]

After creating an aov object, we will extract model-level information with broom::tidy() so the sums of squares can be pulled programmatically. This prevents transcription errors and supports reproducibility.
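Because the same calculation recurs for every model, it can be wrapped in a small helper; a minimal sketch (the function name is mine, not from the original activity, and it assumes the condition term sits in row 1 and the residuals in row 2 of the tidied table, as in the output below):

# Hypothetical helper: partial eta-squared from a tidied one-way aov
partial_eta_sq <- function(tidied_aov) {
  ss <- tidied_aov$sumsq
  ss[1] / (ss[1] + ss[2])
}

For example, partial_eta_sq(alienation_tidied) would reproduce the value computed manually in the next section.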

Self-alienation

# Self-alienation
alienation_aov <- aov(self_alienation ~ condition, data = gino_means)

# Partial eta-squared
(alienation_tidied <- tidy(alienation_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  182.  90.9       43.2  4.00e-17
## 2 Residuals   285  600.   2.10      NA   NA
alienation_tidied$sumsq[1] / (alienation_tidied$sumsq[1] + alienation_tidied$sumsq[2])
## [1] 0.2327635
# Pairwise comparisons
pairwise.t.test(gino_means$self_alienation, gino_means$condition,
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$self_alienation and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 0.012   -             
## neutral        5.2e-09 < 2e-16       
## 
## P value adjustment method: bonferroni

A one-way ANOVA using the self-alienation manipulation check as the dependent measure evinced a main effect of condition, F(2, 285) = 43.23, p < .001, \(\eta_p^2 = .23\). Pairwise t-tests corrected by Bonferroni adjustment revealed that participants in the inauthentic condition (M = 3.83, SD = 1.51) felt a greater distance from the self than participants in either the failure (M = 3.21, SD = 1.62, p = .012) or the control condition (M = 1.92, SD = 1.19, p < .001). Participants reported greater self-alienation when they recalled a failure than when they recalled a recent situation (p < .001).

Outliers?
ggplot(gino_means, aes(x = condition, y = self_alienation)) +
  geom_boxplot()

Outliers are present in the neutral condition.

Normality?
alienation_residuals <- residuals(alienation_aov)
shapiro.test(alienation_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  alienation_residuals
## W = 0.98086, p-value = 0.0006695
plot(alienation_aov, 2)

Since p < .001, we reject the null hypothesis: the data are not normally distributed.

Homoscedasticity?
leveneTest(self_alienation ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value    Pr(>F)    
## group   2  9.0921 0.0001487 ***
##       285                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(alienation_aov, 1)

Since p < .001, we reject the null hypothesis: the variances are heterogeneous.

Feelings of Impurity

# Feelings of Impurity
impurity_aov <- aov(feelings_of_impurity ~ condition, data = gino_means)

# Partial eta-squared
(impurity_tidied <- tidy(impurity_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  291. 146.        72.3  4.04e-26
## 2 Residuals   285  574.   2.01      NA   NA
impurity_tidied$sumsq[1] / (impurity_tidied$sumsq[1] + impurity_tidied$sumsq[2])
## [1] 0.3365678
# Pairwise comparisons
pairwise.t.test(gino_means$feelings_of_impurity, gino_means$condition,
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$feelings_of_impurity and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 1.4e-12 -             
## neutral        5.9e-05 < 2e-16       
## 
## P value adjustment method: bonferroni

A one-way ANOVA using the composite score of impurity as the dependent measure also revealed a main effect of condition, F(2, 285) = 72.29, p < .001, \(\eta_p^2 = .34\). Pairwise comparisons (with Bonferroni adjustment) revealed significant difference across conditions. Participants who wrote essays about an inauthentic experience (M = 3.66, SD = 1.82) reported feeling more impure than those who wrote about a failure (M = 2.09, SD = 1.56, p < .001) or recent memory (M = 1.21, SD = 0.61, p < .001). Reported feelings of impurity were higher among participants in the failure than in the control condition (p < .001).

Outliers?
ggplot(gino_means, aes(x = condition, y = feelings_of_impurity)) +
  geom_boxplot()

Outliers are present.

Normality?
impurity_residuals <- residuals(impurity_aov)
shapiro.test(impurity_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  impurity_residuals
## W = 0.93409, p-value = 5.143e-10
plot(impurity_aov, 2)

The data are not normally distributed.

Homoscedasticity?
leveneTest(feelings_of_impurity ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value    Pr(>F)    
## group   2  34.762 3.097e-14 ***
##       285                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(impurity_aov, 1)

Since p < .001, we reject the null hypothesis that the variances are homogeneous.

Discomfort

# Discomfort
discomfort_aov <- aov(feelings_of_discomfort ~ condition, data = gino_means)

# Partial eta-squared
(discomfort_tidied <- tidy(discomfort_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  440. 220.        82.7  4.85e-29
## 2 Residuals   285  758.   2.66      NA   NA
discomfort_tidied$sumsq[1] / (discomfort_tidied$sumsq[1] + discomfort_tidied$sumsq[2])
## [1] 0.367143
# Pairwise comparisons
pairwise.t.test(gino_means$feelings_of_discomfort, gino_means$condition,
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$feelings_of_discomfort and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 1       -             
## neutral        <2e-16  <2e-16        
## 
## P value adjustment method: bonferroni

Feelings of discomfort varied across conditions, F(2, 285) = 82.67, p < .001, \(\eta_p^2 = .37\). Pairwise comparisons (with Bonferroni adjustment) revealed that participants who wrote about inauthenticity (M = 5.11, SD = 1.53) felt the same amount of psychological discomfort as those who wrote about failure (M = 4.90, SD = 1.64, p = 1.00), but reported higher levels than those who wrote about a recent situation (M = 2.41, SD = 1.71, p < .001). Participants reported more discomfort when recalling a memory involving failure than a neutral memory (p < .001).

Outliers?
ggplot(gino_means, aes(x = condition, y = feelings_of_discomfort)) +
  geom_boxplot()

Outliers are present in the inauthenticity condition.

Normality?
discomfort_residuals <- residuals(discomfort_aov)
shapiro.test(discomfort_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  discomfort_residuals
## W = 0.97762, p-value = 0.0001754
plot(discomfort_aov, 2)

The data are not normally distributed (p < .001).

Homoscedasticity?
leveneTest(feelings_of_discomfort ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value Pr(>F)
## group   2  1.1707 0.3116
##       285
plot(discomfort_aov, 1)

Since p = .312, we fail to reject the null hypothesis; the variances are homogeneous.

Negative Affect

# Negative Affect
negative_aov <- aov(negative_affect ~ condition, data = gino_means)

# Partial eta-squared
(negative_tidied <- tidy(negative_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  489. 245.        98.3  3.46e-33
## 2 Residuals   285  709.   2.49      NA   NA
negative_tidied$sumsq[1] / (negative_tidied$sumsq[1] + negative_tidied$sumsq[2])
## [1] 0.408165
# Pairwise comparisons
pairwise.t.test(gino_means$negative_affect, gino_means$condition,
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$negative_affect and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 1       -             
## neutral        <2e-16  <2e-16        
## 
## P value adjustment method: bonferroni

A main effect of condition was found for reported negative affect, F(2, 285) = 98.28, p < .001, \(\eta_p^2 = .41\). Pairwise t-tests with Bonferroni corrections revealed no difference in reported negative affect among participants in the inauthenticity (M = 4.63, SD = 1.68) and failure (M = 4.61, SD = 1.73, p = 1.00) conditions. Participants who wrote about a recent memory (M = 1.88, SD = 1.30) were less likely to report negative affect than those who either wrote about inauthenticity (p < .001) or failure (p < .001).

Outliers?
ggplot(gino_means, aes(x = condition, y = negative_affect)) +
  geom_boxplot()

Three outliers are present in the control condition.

Normality?
negative_residuals <- residuals(negative_aov)
shapiro.test(negative_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  negative_residuals
## W = 0.96877, p-value = 6.648e-06
plot(negative_aov, 2)

The distribution of reported negative affect violates the assumption of normality.

Homoscedasticity?
leveneTest(negative_affect ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value   Pr(>F)   
## group   2   5.127 0.006493 **
##       285                    
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(negative_aov, 1)

Since p = .006, we reject the null hypothesis; the variances are heterogeneous.

Positive Affect

# Positive Affect
positive_aov <- aov(positive_affect ~ condition, data = gino_means)

# Partial eta-squared
(positive_tidied <- tidy(positive_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  423. 212.        117.  9.17e-38
## 2 Residuals   285  517.   1.81       NA  NA
positive_tidied$sumsq[1] / (positive_tidied$sumsq[1] + positive_tidied$sumsq[2])
## [1] 0.4503483
# Pairwise comparisons
pairwise.t.test(gino_means$positive_affect, gino_means$condition,
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$positive_affect and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 1       -             
## neutral        <2e-16  <2e-16        
## 
## P value adjustment method: bonferroni

Positive affect varied across conditions as well, F(2, 285) = 116.76, p < .001, \(\eta_p^2 = .45\). Pairwise comparisons (with Bonferroni adjustment) revealed that participants who wrote about an experience in which they felt inauthentic (M = 1.99, SD = 1.11) were less likely to report positive affect than those in the control condition (M = 4.46, SD = 1.77, p < .001). Compared to participants who wrote about failure (M = 1.84, SD = 1.01), those assigned to the control group reported greater positive affect (p < .001). Positive affect was the same for participants in both the inauthenticity and failure conditions (p = 1.00).

Outliers?
ggplot(gino_means, aes(x = condition, y = positive_affect)) +
  geom_boxplot()

Outliers are present.

Normality?
positive_residuals <- residuals(positive_aov)
shapiro.test(positive_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  positive_residuals
## W = 0.9622, p-value = 7.839e-07
plot(positive_aov, 2)

The distribution of reported positive affect is non-normal.

Homoscedasticity?
leveneTest(positive_affect ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value    Pr(>F)    
## group   2  21.114 2.812e-09 ***
##       285                      
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(positive_aov, 1)

Since p < .001, we reject the null hypothesis; the data are heteroscedastic.

Embarrassment

# Embarrassment
embarrassment_aov <- aov(embarrassment ~ condition, data = gino_means)

# Partial eta-squared
(embarrassment_tidied <- tidy(embarrassment_aov))
## # A tibble: 2 x 6
##   term         df sumsq meansq statistic   p.value
##   <chr>     <dbl> <dbl>  <dbl>     <dbl>     <dbl>
## 1 condition     2  437. 219.        80.8  1.62e-28
## 2 Residuals   285  771.   2.71      NA   NA
embarrassment_tidied$sumsq[1] / (embarrassment_tidied$sumsq[1] + embarrassment_tidied$sumsq[2])
## [1] 0.3617641
# Pairwise comparisons
pairwise.t.test(gino_means$embarrassment, gino_means$condition, 
                p.adjust.method = "bonferroni")
## 
##  Pairwise comparisons using t tests with pooled SD 
## 
## data:  gino_means$embarrassment and gino_means$condition 
## 
##                failure inauthenticity
## inauthenticity 0.71    -             
## neutral        <2e-16  <2e-16        
## 
## P value adjustment method: bonferroni

Feelings of embarrassment also varied across conditions, F(2, 285) = 80.77, p < .001, \(\eta_p^2 = .36\). Pairwise comparisons (with Bonferroni adjustment) indicated no difference in reported embarrassment between those who wrote an essay about an inauthentic experience (M = 4.40, SD = 1.71) and those who wrote about failure (M = 4.69, SD = 1.82, p = .71). Participants were less likely to feel embarrassed when writing about a recent situation (M = 1.96, SD = 1.38) than when writing about a memory involving inauthenticity (p < .001) or failure (p < .001).

Outliers?
ggplot(gino_means, aes(x = condition, y = embarrassment)) +
  geom_boxplot()

Outliers are present.

Normality?
embarrassment_residuals <- residuals(embarrassment_aov)
shapiro.test(embarrassment_residuals)
## 
##  Shapiro-Wilk normality test
## 
## data:  embarrassment_residuals
## W = 0.97344, p-value = 3.504e-05
plot(embarrassment_aov, 2)

Since p < .001, we reject the null hypothesis; the data violate the assumption of normality.

Homoscedasticity?
leveneTest(embarrassment ~ condition, data = gino_means)
## Levene's Test for Homogeneity of Variance (center = median)
##        Df F value  Pr(>F)  
## group   2  4.1709 0.01639 *
##       285                  
## ---
## Signif. codes:  0 '***' 0.001 '**' 0.01 '*' 0.05 '.' 0.1 ' ' 1
plot(embarrassment_aov, 1)

Since p = .016, we reject the null hypothesis; the variances are heterogeneous.

Chi-square Test

A chi-square test of independence examines whether two categorical variables are associated. It can be conducted in R by calling chisq.test() on a contingency table or directly on two vectors.

\[H_0: \text{the two variables are independent} \\ H_1: \text{the two variables are associated}\]
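As noted above, chisq.test() accepts either a contingency table or two raw vectors; a minimal sketch with hypothetical factors (not from the study data) shows the two equivalent calls:

# Two equivalent ways to call chisq.test() on hypothetical factors
x <- factor(rep(c("treatment", "control"), each = 50))
y <- factor(rep(c("help", "no help", "help", "no help"), times = c(30, 20, 15, 35)))
chisq.test(table(x, y)) # contingency-table form
chisq.test(x, y)        # two-vector form; same result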

Cramer’s V is a measure of association between categorical variables and can be thought of as the effect size of the chi-square test. It is given by the equation:

\[\phi_c = \sqrt{\frac{\chi^2}{N(k - 1)}}\]

This function can be used to calculate Cramer’s V:

cv.test <- function(x, y) {
  # Cramer's V = sqrt(chi-square / (N * (k - 1))), where k is the smaller number of categories
  CV <- sqrt(chisq.test(x, y, correct = FALSE)$statistic /
               (length(x) * (min(length(unique(x)), length(unique(y))) - 1)))
  print.noquote("Cramér V / Phi:")
  return(as.numeric(CV))
}

Is the percentage of participants helping the same across groups?

# Select helping variable
gino_help <- gino_clean %>%
  select(condition, decided_to_help)

# Conduct chi-square test on contingency table
chisq.test(table(gino_help))
## 
##  Pearson's Chi-squared test
## 
## data:  table(gino_help)
## X-squared = 10.349, df = 2, p-value = 0.00566
# Cramer's V for all conditions
cv.test(gino_means$condition, gino_means$decided_to_help)
## [1] Cramér V / Phi:
## [1] 0.1895603

The proportion of participants who decided to help varied across conditions, \(\chi^2(2, N = 288) = 10.35\), p = .006, Cramer’s V = .19.

Now we calculate the proportion of those who helped by condition.

gino_help %>%
  group_by(condition) %>%
  summarize(
    percentage = mean(decided_to_help)
  )
## # A tibble: 3 x 2
##   condition      percentage
##   <fct>               <dbl>
## 1 failure             0.175
## 2 inauthenticity      0.337
## 3 neutral             0.162

Notice that when we compute chi-square tests on 2 x 2 frequency tables, it may be advisable to apply Yates' continuity correction, which chisq.test() does by default for 2 x 2 tables.
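To see how much the correction matters, here is a minimal sketch on a hypothetical 2 x 2 table of counts (made up, not from the study data):

# Toggle Yates' continuity correction on a hypothetical 2 x 2 table
toy_tab <- matrix(c(40, 10, 25, 25), nrow = 2, byrow = TRUE)
chisq.test(toy_tab)                  # corrected (the default for 2 x 2 tables)
chisq.test(toy_tab, correct = FALSE) # uncorrected Pearson chi-square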

# Chi-square of inauthenticity vs. failure
gino_help_fail <- gino_help %>%
  filter(condition != "neutral")

# Drop unused factor level
gino_help_fail$condition <- droplevels(gino_help_fail$condition)

chisq.test(table(gino_help_fail))
## 
##  Pearson's Chi-squared test with Yates' continuity correction
## 
## data:  table(gino_help_fail)
## X-squared = 5.6904, df = 1, p-value = 0.01706
# Chi-square of inauthenticity vs. neutral
gino_help_neutral <- gino_help %>%
  filter(condition != "failure")

# Drop unused factor level
gino_help_neutral$condition <- droplevels(gino_help_neutral$condition)

chisq.test(table(gino_help_neutral))
## 
##  Pearson's Chi-squared test with Yates' continuity correction
## 
## data:  table(gino_help_neutral)
## X-squared = 6.9856, df = 1, p-value = 0.008217

These values do not match those reported by Gino, Kouchaki, and Galinsky (2015). Let us verify our calculations with an alternative method, using a function similar to CROSSTABS in SPSS.

# Crosstabs method: inauthenticity vs. failure
CrossTable(gino_help_fail$condition, gino_help_fail$decided_to_help, 
           format = "SPSS", chisq = TRUE)
## 
##    Cell Contents
## |-------------------------|
## |                   Count |
## | Chi-square contribution |
## |             Row Percent |
## |          Column Percent |
## |           Total Percent |
## |-------------------------|
## 
## Total Observations in Table:  189 
## 
##                          | gino_help_fail$decided_to_help 
## gino_help_fail$condition |        0  |        1  | Row Total | 
## -------------------------|-----------|-----------|-----------|
##                  failure |       80  |       17  |       97  | 
##                          |    0.806  |    2.366  |           | 
##                          |   82.474% |   17.526% |   51.323% | 
##                          |   56.738% |   35.417% |           | 
##                          |   42.328% |    8.995% |           | 
## -------------------------|-----------|-----------|-----------|
##           inauthenticity |       61  |       31  |       92  | 
##                          |    0.849  |    2.495  |           | 
##                          |   66.304% |   33.696% |   48.677% | 
##                          |   43.262% |   64.583% |           | 
##                          |   32.275% |   16.402% |           | 
## -------------------------|-----------|-----------|-----------|
##             Column Total |      141  |       48  |      189  | 
##                          |   74.603% |   25.397% |           | 
## -------------------------|-----------|-----------|-----------|
## 
##  
## Statistics for All Table Factors
## 
## 
## Pearson's Chi-squared test 
## ------------------------------------------------------------
## Chi^2 =  6.515902     d.f. =  1     p =  0.01069141 
## 
## Pearson's Chi-squared test with Yates' continuity correction 
## ------------------------------------------------------------
## Chi^2 =  5.690413     d.f. =  1     p =  0.01705784 
## 
##  
##        Minimum expected frequency: 23.36508
# Crosstabs method: inauthenticity vs. neutral
CrossTable(gino_help_neutral$condition, gino_help_neutral$decided_to_help, 
           format = "SPSS", chisq = TRUE)
## 
##    Cell Contents
## |-------------------------|
## |                   Count |
## | Chi-square contribution |
## |             Row Percent |
## |          Column Percent |
## |           Total Percent |
## |-------------------------|
## 
## Total Observations in Table:  191 
## 
##                             | gino_help_neutral$decided_to_help 
## gino_help_neutral$condition |        0  |        1  | Row Total | 
## ----------------------------|-----------|-----------|-----------|
##              inauthenticity |       61  |       31  |       92  | 
##                             |    1.008  |    3.088  |           | 
##                             |   66.304% |   33.696% |   48.168% | 
##                             |   42.361% |   65.957% |           | 
##                             |   31.937% |   16.230% |           | 
## ----------------------------|-----------|-----------|-----------|
##                     neutral |       83  |       16  |       99  | 
##                             |    0.937  |    2.870  |           | 
##                             |   83.838% |   16.162% |   51.832% | 
##                             |   57.639% |   34.043% |           | 
##                             |   43.455% |    8.377% |           | 
## ----------------------------|-----------|-----------|-----------|
##                Column Total |      144  |       47  |      191  | 
##                             |   75.393% |   24.607% |           | 
## ----------------------------|-----------|-----------|-----------|
## 
##  
## Statistics for All Table Factors
## 
## 
## Pearson's Chi-squared test 
## ------------------------------------------------------------
## Chi^2 =  7.902415     d.f. =  1     p =  0.004936884 
## 
## Pearson's Chi-squared test with Yates' continuity correction 
## ------------------------------------------------------------
## Chi^2 =  6.985551     d.f. =  1     p =  0.008217035 
## 
##  
##        Minimum expected frequency: 22.63874

Participants who wrote an essay about an inauthentic memory were more likely to help (33.7%) than those who wrote about failure (17.5%, \(\chi^2(1, N = 189) = 5.69\), p = .017) or a recent situation (16.2%, \(\chi^2(1, N = 191) = 6.99\), p = .008).

Communicate


After writing their essays, participants answered a questionnaire composed of items on a 7-point scale. The items assessed feelings of impurity (\(\alpha = .94\)), discomfort (\(\alpha = .94\)), negative (\(\alpha = .93\)) and positive (\(\alpha = .95\)) affect, embarrassment (\(\alpha = .90\)), and self-alienation (\(\alpha = .90\)).

Self-alienation. A one-way ANOVA using the self-alienation manipulation check as the dependent measure evinced a main effect of condition, F(2, 285) = 43.23, p < .001, \(\eta_p^2 = .23\). Pairwise t-tests corrected by Bonferroni adjustment revealed that participants in the inauthentic condition (M = 3.83, SD = 1.51) felt a greater distance from the self than participants in either the failure (M = 3.21, SD = 1.62, p = .012) or the control condition (M = 1.92, SD = 1.19, p < .001). Participants reported greater self-alienation when they recalled a failure than when they recalled a recent situation (p < .001).

Feelings of impurity. A one-way ANOVA using the composite score of impurity as the dependent measure also revealed a main effect of condition, F(2, 285) = 72.29, p < .001, \(\eta_p^2 = .34\). Pairwise comparisons (with Bonferroni adjustment) revealed significant difference across conditions. Participants who wrote essays about an inauthentic experience (M = 3.66, SD = 1.82) reported feeling more impure than those who wrote about a failure (M = 2.09, SD = 1.56, p < .001) or recent memory (M = 1.21, SD = 0.61, p < .001). Reported feelings of impurity were higher among participants in the failure than in the control condition (p < .001).

Discomfort. Feelings of discomfort varied across conditions, F(2, 285) = 82.67, p < .001, \(\eta_p^2 = .37\). Pairwise comparisons (with Bonferroni adjustment) revealed that participants who wrote about inauthenticity (M = 5.11, SD = 1.53) felt the same amount of psychological discomfort as those who wrote about failure (M = 4.90, SD = 1.64, p = 1.00), but reported higher levels than those who wrote about a recent situation (M = 2.41, SD = 1.71, p < .001). Participants reported more discomfort when recalling a memory involving failure than a neutral memory (p < .001).

Negative affect. A main effect of condition was found for reported negative affect, F(2, 285) = 98.28, p < .001, \(\eta_p^2 = .41\). Pairwise t-tests with Bonferroni corrections revealed no difference in reported negative affect among participants in the inauthenticity (M = 4.63, SD = 1.68) and failure (M = 4.61, SD = 1.73, p = 1.00) conditions. Participants who wrote about a recent memory (M = 1.88, SD = 1.30) were less likely to report negative affect than those who either wrote about inauthenticity (p < .001) or failure (p < .001).

Positive affect. Positive affect varied across conditions as well, F(2, 285) = 116.76, p < .001, \(\eta_p^2 = .45\). Pairwise comparisons (with Bonferroni adjustment) revealed that participants who wrote about an experience in which they felt inauthentic (M = 1.99, SD = 1.11) were less likely to report positive affect than those in the control condition (M = 4.46, SD = 1.77, p < .001). Compared to participants who wrote about failure (M = 1.84, SD = 1.01), those assigned to the control group reported greater positive affect (p < .001). Positive affect was the same for participants in both the inauthenticity and failure conditions (p = 1.00).

Embarrassment. Feelings of embarrassment also varied across conditions, F(2, 285) = 80.77, p < .001, \(\eta_p^2 = .36\). Pairwise comparisons (with Bonferroni adjustment) indicated no difference in reported embarrassment between those who wrote an essay about an inauthentic experience (M = 4.40, SD = 1.71) and those who wrote about failure (M = 4.69, SD = 1.82, p = .71). Participants were less likely to feel embarrassed when writing about a recent situation (M = 1.96, SD = 1.38) than when writing about a memory involving inauthenticity (p < .001) or failure (p < .001).

Decided to help. The proportion of participants who decided to help varied across conditions, \(\chi^2(2, N = 288) = 10.35\), p = .006, Cramer’s V = .19. Participants who wrote an essay about an inauthentic memory were more likely to help (33.7%) than those who wrote about failure (17.5%, \(\chi^2(1, N = 189) = 5.69\), p = .017) or a recent situation (16.2%, \(\chi^2(1, N = 191) = 6.99\), p = .008).

Acknowledgements


I am thankful to my advisor, Dr. Brandt A. Smith, for introducing me to R, JASP, and OSL. The discipline of psychology is advocating for preregistered studies and open materials. His encouragement to use open data and open source software has positioned me in the middle of the reproducibility movement.

I would still be clicking checkboxes and dropdowns to analyze data if it were not for DataCamp, Alboukadel Kassambara, Jonathan Baron, and the team behind personality-project.

Cory J. Cascalheira
Doctoral Researcher in Counseling Psychology

Research interests include identity, oppression, and resilience among marginalized populations, especially sexual minorities, with attention to addiction and sexual well-being.
