How does this pass as science?? There is no actual data on people. They simulated from a multivariate normal and then reported the frequency of observations where all three dimensions were one or more standard deviations above the mean. This has no bearing on the actual number of exceptional people; the results follow purely from the assumed correlations and the assumption of normality (which is probably wrong).
It's interesting to compare with the result without covariance. The probability of a > 2 sigma positive deviation is 0.0228. The probability of getting > 2 sigma in all 3 variables is 0.0228^3 ≈ 0.0000118 = 0.00118%, or about 12 per million. With the correlations they get 85 per million, which is unsurprising (positive correlations push the joint tail up).
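For reference, the independence-only arithmetic in a couple of lines of Python (just the 2-sigma threshold from above, no correlations involved):

    from scipy.stats import norm

    # One-sided tail beyond 2 sigma for a single standard normal variable
    p_one = norm.sf(2)       # ~0.02275

    # All three variables beyond 2 sigma, assuming independence
    p_three = p_one ** 3     # ~1.18e-05, i.e. roughly 12 per million

    print(f"P(> 2 sigma) = {p_one:.4f}")
    print(f"P(all three > 2 sigma, independent) = {p_three:.2e} "
          f"({p_three * 1e6:.1f} per million)")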
Calculating the exact number looks like a nightmare. I'd try Wolfram Alpha and hope it can integrate it numerically. Otherwise, I'd use the Monte Carlo method, which is equivalent to their approach, but their "N = 20 million" is pretty small; I think with 1000x as many samples the calculation would still take less than a second, and since the error scales as 1/sqrt(N) it would be about 30x smaller.
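A minimal Monte Carlo sketch of that idea in Python; the correlation matrix here is a placeholder (not the paper's values), and the point is just that the standard error shrinks as 1/sqrt(N), so 1000x more samples buys roughly 32x less noise:

    import numpy as np

    rng = np.random.default_rng(0)

    # Placeholder correlation matrix -- NOT the paper's numbers, purely illustrative
    corr = np.array([[1.0, 0.5, 0.5],
                     [0.5, 1.0, 0.5],
                     [0.5, 0.5, 1.0]])

    def estimate(n_total, threshold=2.0, batch=5_000_000):
        """Monte Carlo estimate of P(all three correlated normals > threshold sigma),
        generated in batches so memory stays flat even for very large n_total."""
        hits, done = 0, 0
        while done < n_total:
            n = min(batch, n_total - done)
            x = rng.multivariate_normal(np.zeros(3), corr, size=n)
            hits += int(np.all(x > threshold, axis=1).sum())
            done += n
        p = hits / n_total
        se = np.sqrt(p * (1 - p) / n_total)   # binomial standard error, ~1/sqrt(N)
        return p, se

    p, se = estimate(20_000_000)              # the paper's N; scale it up for less noise
    print(f"p = {p:.3e} (+/- {se:.1e})  ->  {p * 1e6:.0f} per million")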
Why would you even need to simulate? If you have the parameters of the normal, couldn't you just solve for the size of the (hyper-)tail?
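At least numerically it can be done without any simulation: SciPy integrates the multivariate normal CDF directly, so the upper orthant follows from the symmetry P(X > a, Y > a, Z > a) = P(X < -a, Y < -a, Z < -a) for a zero-mean normal. A sketch with the same placeholder correlations as above:

    import numpy as np
    from scipy.stats import multivariate_normal

    # Same placeholder correlation matrix as above (not the paper's values)
    corr = np.array([[1.0, 0.5, 0.5],
                     [0.5, 1.0, 0.5],
                     [0.5, 0.5, 1.0]])

    a = 2.0  # threshold in standard deviations

    # For a zero-mean MVN, P(X > a, Y > a, Z > a) = P(X < -a, Y < -a, Z < -a),
    # which is just the CDF evaluated at (-a, -a, -a).
    mvn = multivariate_normal(mean=np.zeros(3), cov=corr)
    p = mvn.cdf(np.array([-a, -a, -a]))

    print(f"P(all three > {a} sigma) = {p:.3e}  ({p * 1e6:.0f} per million)")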
I think it is internally consistent. Normality is definitionally implied by the variables (e.g. IQ), and the correlations are sourced as inputs.
The paper is simply determining the combined frequency computationally. My guess is that somebody needed a citation for this calculation.
Rightly said.
I try not to be pessimistic about research, but this seems like an undergrad stats assignment. The correlations from other research are just plugged into a simulation, yielding no new understanding for anyone who knows what correlation is.
What is the expectation this is being held against?
The number of exceptional people is surpassed only by the number of exceptional publicists.