
  • In English please

    I'm trying to study this experiment.

    Covert observation increases skin conductance in subjects unaware of when they are being observed: a replication | Journal of Parapsychology, The | Find Articles at BNET

    but I'm having a problem understanding the math. I kinda understand that they're supposed to be percentages but I could do with a little help.

    A one-sample, one-tailed t test confirmed the experimental hypothesis predicting greater skin conductance activity during covert observation than during the control condition, t(47) = 2.652, p < .005, ES = .384. The same results were obtained when scores for the six subjects who participated in more than one session were replaced by means for their two or three sessions, t(38) = 2.445, p < .01. Twice as many subjects (26, 66.7% of total) showed greater skin conductance during covert observation vs. the control condition than showed the reverse (13, 33.3% of total).

    The SAD correlated nonsignificantly with skin conductance differences between the two conditions, r(46) = .049. Males and females did not significantly differ in the degree to which they responded to covert observation. However, of possible interest are the results of an unplanned comparison taking into account the gender of the observers and the subjects: Opposite-sex pairs showed a significantly larger experimental effect, t(46) = 2.398, p < .02, than same-sex pairs did. Indeed, opposite-sex pairs accounted almost entirely for the observed effect (same-sex pairs ES = .098, t(23) = 0.480, p < .64; opposite-sex pairs, ES = 0.576, t(23) = 2.827, p < .01). Because we did not predict this result, this interpretation must await confirmation in a future study.


    I don't really understand any of that.

  • #2
    Hookay crash course in statistics

    So basically, when you collect data, you don't know the true value of something ahead of time. That's why you take multiple measurements. You expect the "real" value to be around the average, and depending on how accurate your measurements are and how many you take, you can also put a range around that average (a confidence interval) and say how confident you are that the true value falls inside it.

    Real-world example: you want to know your car's fuel economy, so you record your mileage and how much gas you put in. Say you get values of 22.2, 22.8, and 22.0 miles per gallon. The average is about 22.3 mpg, and with only three measurements you can be 95% sure that the "true" value lies somewhere between roughly 21.3 and 23.4.
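
    Here is a minimal Python sketch of that calculation (using numpy and scipy; the three mpg readings are just the hypothetical ones above):

    Code:
    import numpy as np
    from scipy import stats

    # The three hypothetical fuel-economy readings from the example above (mpg)
    mpg = np.array([22.2, 22.8, 22.0])

    mean = mpg.mean()
    sem = stats.sem(mpg)  # standard error of the mean

    # 95% confidence interval from the t distribution with n - 1 = 2 degrees of freedom
    low, high = stats.t.interval(0.95, df=len(mpg) - 1, loc=mean, scale=sem)

    print(f"mean = {mean:.1f} mpg, 95% CI = ({low:.1f}, {high:.1f})")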

    Let's say you want to find out whether driving with a lead foot has any impact on gas mileage. You would collect data under those conditions and get, say, a mean of 21.2 mpg with a confidence interval of +/- 0.5. How do you know whether these numbers are REALLY different, or whether the gap is just due to measurement fluctuations and chance? To compare them, you use what is called a t-test, which basically asks, "How sure am I that these two sets of numbers are different?" The answer is summarized by a number known as the p-value (the "significance"). Roughly speaking, it tells you the probability of seeing a difference this large just by chance, if there were no real change.

    So if p = 0.3, it is fairly likely (a 30% chance) that a difference like yours could turn up by chance alone, so you can't claim a real effect. If p = 0.01, it is fairly unlikely that your data would vary that much just by chance, and there is good reason to believe you are seeing a real effect.

    Scientists generally, and somewhat arbitrarily, set p < 0.05 as the cutoff for "significant." If you get p > 0.05, the result may be interesting, but it isn't strong enough to rule out chance. If p < 0.05, you get to call the effect significant, which in layman's terms means, "Sweet, the data is not likely due to chance alone; there may be a real effect here."
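
    To make that concrete, here is a small Python sketch of a two-sample t-test on the driving example (the mileage numbers are made up purely for illustration):

    Code:
    import numpy as np
    from scipy import stats

    # Made-up mileage logs (mpg) for the two driving styles; purely illustrative
    normal_driving = np.array([22.2, 22.8, 22.0, 22.5, 22.4, 22.1])
    lead_foot = np.array([21.0, 21.5, 21.2, 20.9, 21.4, 21.3])

    # Two-sample t-test: "how sure am I that these two sets of numbers are different?"
    t_stat, p_value = stats.ttest_ind(normal_driving, lead_foot)

    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
    if p_value < 0.05:
        print("Significant at the 0.05 level: unlikely to be chance alone.")
    else:
        print("Not significant: the difference could plausibly be chance.")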

    Then there is effect size (ES), which measures how big the difference between conditions actually is, separate from whether it is statistically significant. The ES reported in this paper looks like a Cohen's d-style measure: the difference between the condition means divided by the standard deviation of the scores. So ES = .4 means (roughly speaking) that the two conditions differ by about 0.4 standard deviations. As a rough rule of thumb, around 0.2 is considered a small effect, 0.5 medium, and 0.8 large.
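
    Here is a hedged sketch of computing a Cohen's d-style effect size in Python, reusing the made-up mileage data from the t-test sketch above (this uses the standard pooled-standard-deviation formula; the paper may define its ES slightly differently):

    Code:
    import numpy as np

    def cohens_d(a, b):
        """Cohen's d: difference in means divided by the pooled standard deviation."""
        na, nb = len(a), len(b)
        pooled_var = ((na - 1) * np.var(a, ddof=1) + (nb - 1) * np.var(b, ddof=1)) / (na + nb - 2)
        return (np.mean(a) - np.mean(b)) / np.sqrt(pooled_var)

    # Same made-up mileage data as in the t-test sketch above
    normal_driving = np.array([22.2, 22.8, 22.0, 22.5, 22.4, 22.1])
    lead_foot = np.array([21.0, 21.5, 21.2, 20.9, 21.4, 21.3])

    print(f"Cohen's d = {cohens_d(normal_driving, lead_foot):.2f}")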

    t(47) = 2.652, p < .005, ES = .384.


    So this means there is less than a 0.5% chance of getting a t that large if only chance were at work (very significant), and that skin conductance during covert observation was higher than in the control condition by roughly 0.38 standard deviations.
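
    As a sanity check, here is a small Python sketch that recomputes the one-tailed p-value from the reported t and degrees of freedom, plus the effect size under the common convention ES = t / sqrt(n) (that formula is an assumption on my part; the paper doesn't spell out how its ES was computed):

    Code:
    import math
    from scipy import stats

    # The paper's first reported result: t(47) = 2.652, one-tailed test
    t_value, df = 2.652, 47
    n = df + 1  # one-sample t-test, so n = df + 1 = 48

    # One-tailed p-value: chance of a t this large if there were no real effect
    p_one_tailed = stats.t.sf(t_value, df)

    # Assumed convention: ES = t / sqrt(n), a Cohen's d-style measure
    es = t_value / math.sqrt(n)

    print(f"p (one-tailed) ~ {p_one_tailed:.4f}, ES ~ {es:.3f}")  # ES lands near the reported .384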

    The r is the correlation coefficient: it measures how strongly two variables move together, on a scale from -1 to +1, with 0 meaning no linear relationship. Like ES, it describes the size of a relationship rather than its significance, so r(46) = .049 means essentially no relationship between the SAD scores and the skin conductance differences.
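
    For completeness, a quick Python illustration of a correlation coefficient (the numbers below are invented for illustration; they are not from the paper):

    Code:
    import numpy as np
    from scipy import stats

    # Invented paired measurements, purely to show how r is computed
    x = np.array([10, 14, 9, 18, 12, 16, 11, 15])
    y = np.array([0.2, -0.1, 0.3, 0.0, 0.1, -0.2, 0.4, 0.1])

    r, p = stats.pearsonr(x, y)
    print(f"r = {r:.3f}, p = {p:.3f}")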

    Hope that makes SOME sense. Look up Youtube videos on statistics.

    • #3
      Thanks, that did help. How often do scientific experiments have an ES of 0.38? What sorts of experiments have an ES of 0.5 or 0.6? Are there any that are higher?

      • #4
        There really isn't a good way to generalize effect sizes across many types of experiments, because each field has its own sense of what counts as big or small, and individual studies tend to produce a spread of effect sizes clustered around some mean effect size (which is often why a meta-analysis is done). Ganzfeld experiments have an average ES of 0.15 across general-population subjects, but that goes up to 0.25 when using psi-selected individuals.

        In Feeling the Future, Bem's experiments get effect sizes around 0.2 for the effective experiments.

        • #5
          Originally posted by MikeMachina
          In Feeling the Future, Bem's experiments get effect sizes around 0.2 for the effective experiments.
          The effect sizes range from about .15 to .4, inversely related to the number of participants. There is a graph in Wagenmakers et al. that I believe is accurate.

          ~~ Paul
