@LukasRoeseler in this submission, some correlations are listed as 1. I noticed that you added a workaround for that in your success criteria code. However, I generally cannot quite reproduce the effect sizes, and in particular not the ones listed as 1.
For one, they are not aligned with the correct effects: the infinite odds ratios come from persistent and serious, not from reliable and good-looking. They are infinite because in one condition 99% rated the person as serious and in the other 100%, and odds of 100% (i.e., 100/0) are infinitely higher than any finite odds. Clearly, that is nonsense, as the difference is not meaningful. Should we remove those, or do you know of a better effect size measure we should use instead?
The other odds ratios also do not quite fit (see the table below; a sketch of the computation follows it).
Here is the table from the original paper:

[table image from the original paper]

And here are ChatGPT's odds ratios:
| Trait | Warm (%) | Cold (%) | Odds Ratio |
|---|---|---|---|
| generous | 91 | 8 | 116.278 |
| wise | 65 | 25 | 5.571 |
| happy | 90 | 34 | 17.471 |
| good-natured | 94 | 17 | 76.490 |
| humorous | 77 | 13 | 22.405 |
| sociable | 91 | 38 | 16.497 |
| popular | 84 | 28 | 13.500 |
| reliable | 94 | 99 | 0.158 |
| important | 88 | 99 | 0.074 |
| humane | 86 | 31 | 13.673 |
| good-looking | 77 | 69 | 1.504 |
| persistent | 100 | 97 | Inf |
| serious | 100 | 99 | Inf |
| restrained | 77 | 89 | 0.414 |
| altruistic | 69 | 18 | 10.140 |
| imaginative | 51 | 19 | 4.437 |
| strong | 98 | 95 | 2.579 |
| honest | 98 | 94 | 3.128 |
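For reference, a minimal sketch (not the submission's actual success criteria code) of how these odds ratios were presumably derived, assuming the Warm/Cold percentages are counts out of n = 100 raters per condition. It also shows the Haldane-Anscombe correction (adding 0.5 to every cell) as one common way to keep the OR finite when a cell is zero, as it is for persistent and serious:

```python
# Sketch: 2x2 odds ratio from the Warm/Cold endorsement percentages.
# Assumes n = 100 raters per condition, which may not match the paper.

def odds_ratio(warm_pct: float, cold_pct: float, n: int = 100,
               correction: float = 0.0) -> float:
    a = warm_pct + correction        # warm condition: trait endorsed
    b = n - warm_pct + correction    # warm condition: trait not endorsed
    c = cold_pct + correction        # cold condition: trait endorsed
    d = n - cold_pct + correction    # cold condition: trait not endorsed
    if b == 0 or c == 0:
        return float("inf")          # a zero cell makes the OR infinite
    return (a * d) / (b * c)

print(round(odds_ratio(91, 8), 3))                    # generous: 116.278
print(odds_ratio(100, 99))                            # serious: inf
print(round(odds_ratio(100, 99, correction=0.5), 3))  # serious, corrected: ~3.03
```

Even with the correction, a 100% vs. 99% split yields an OR that is highly sensitive to sample size, so something like the risk difference (here 1 percentage point) might communicate the near-null effect more honestly; that is a suggestion on my part, not something from the original paper.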