
Discrepancy in calculation of standardized mean differences? #284

Open
rcalinjageman opened this issue Jan 25, 2016 · 0 comments
I have a set of studies that I'm trying to meta-analyze with OpenMeta[Analyst]. I have raw means and SDs but would like to analyze standardized mean differences (Cohen's d) for ease of interpretation (especially because the DV is in squared units).

Here's the data I entered into OpenMeta[Analyst]; the last 3 columns are the standardized mean differences it calculated, with lower and upper CI bounds:

| Study | N1 | M1 | SD1 | N2 | M2 | SD2 | SMD | lower | upper |
|---|---|---|---|---|---|---|---|---|---|
| Online | 212 | 3.172 | 3.038 | 199 | 2.308 | 3.172 | 0.278 | 0.083 | 0.472 |
| College of DuPage | 109 | 2.549 | 2.125 | 119 | 1.650 | 2.549 | 0.380 | 0.118 | 0.642 |
| Dominican University | 93 | 2.311 | 2.602 | 80 | 1.626 | 2.311 | 0.276 | -0.024 | 0.576 |
| Concordia University | 88 | 2.728 | 2.456 | 71 | 2.137 | 2.728 | 0.228 | -0.086 | 0.542 |

When I try analyzing the same data with a different tool (Meta-Essentials), however, I get:

| Study name | Cohen's d | CI lower limit | CI upper limit |
|---|---|---|---|
| Online | 0.34 | 0.14 | 0.53 |
| College of DuPage | 0.49 | 0.23 | 0.76 |
| Dominican University | 0.30 | 0.00 | 0.60 |
| Concordia University | 0.26 | -0.06 | 0.57 |

Notice that this tool gives considerably higher effect size estimates, especially for the first two studies (0.278 vs. 0.34 for the Online study; 0.380 vs. 0.49 for the College of DuPage study).

Why is this? I've checked using a third online calculator, using both the raw means and the t-test results, and my results always agree with the higher effect size estimates from Meta-Essentials. What's strange, though, is that in other data sets I've analyzed with OpenMeta[Analyst] I get no discrepancy... not sure if there is something special happening here in OpenMeta[Analyst] in terms of pooling variance across all studies to estimate effect size? Any hints?
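In case it helps with diagnosis, here is a minimal Python sketch of the textbook two-group pooled-SD formula for Cohen's d (with the usual large-sample 95% CI) that can be used to recompute the SMD for each row of the first table. This is just the standard formula; I don't know which variant either tool actually implements (e.g., whether a small-sample correction or a different SD pooling scheme is applied).

```python
import math

def smd_pooled(n1, m1, sd1, n2, m2, sd2):
    """Cohen's d using the pooled within-group SD, plus an
    approximate 95% CI from the large-sample variance of d."""
    pooled_var = ((n1 - 1) * sd1 ** 2 + (n2 - 1) * sd2 ** 2) / (n1 + n2 - 2)
    d = (m1 - m2) / math.sqrt(pooled_var)
    # Large-sample approximation to the variance of d
    var_d = (n1 + n2) / (n1 * n2) + d ** 2 / (2 * (n1 + n2))
    se = math.sqrt(var_d)
    return d, d - 1.96 * se, d + 1.96 * se

# The "Online" row from the first table above
print(smd_pooled(212, 3.172, 3.038, 199, 2.308, 3.172))
```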

Thanks,

Bob
