04-26-2012, 06:54 AM
(04-25-2012, 11:34 PM)zenmaster Wrote:
(04-23-2012, 04:55 AM)Ali Quadir Wrote: So in the very least we know that this article and these experiments are good enough in methodology for the high demands of a journal that we know is generally very critical of the articles it publishes.
That's not much of an argument. You're making assumptions about the motivations of the journal. Also, an experiment like this was actually done before; you could consider Bem's work the replication.
No one has been able to replicate it, and by now it's not in the journal's best interest to publish replications. The criticism has been on the lack of understanding of the statistical methodology involved, and actually required, to provide sufficient support for the claims made, given the type of experiments. Most studies involve a high correlation to hypotheses. However, in the case of 'psi', conducting a proper analysis of the opposite correlation tends to be unfamiliar and affords the author much greater leeway in his methods.
http://www.uniamsterdam.nl/D.J.Bierman/p..._pms97.pdf
(04-25-2012, 11:34 PM)zenmaster Wrote: "By all means....Prove the journal wrong....". myehh, lol
It is a valid argument... I don't know your credentials, you partly know mine, and we're arguing about professionals... This means you're going to have to come up with strong arguments.
Not that "they did not know the limits of the most commonly used statistical test in existence". That's really assuming the journal is run by a bunch of incompetents. And if the argument is based on such an underlying assumption, then you should really be able to guess that you're on the wrong track.
(04-25-2012, 11:34 PM)zenmaster Wrote: Ok, what was the purpose of the paper. To publish experimental evidence for certain psi powers using statistical methods to show (not account for) anomalous behavior. Remember that one can (knowingly or unknowingly) reduce 'p' by increasing the distance between the expected average and the observed average, by reducing the variability of individual scores, and by increasing the number of trials (to a certain point). So there's a lot of room available for dubious methodology. We must trust Bem's integrity that he did not stop the experiments at the point where the anomalies would be considered in the realm of chance. Also, that all tests conducted were accounted for in the paper.
If you're suggesting he might have manipulated the data by selecting his sets, then, unless you have actual evidence for that, we're just going to assume he did not do this. It's not normal scientific procedure to call colleagues frauds unless you have solid evidence.
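For what it's worth, the hazard zenmaster points at, stopping the experiment as soon as the anomaly looks significant, is a well-known statistical problem usually called optional stopping, and it is easy to demonstrate. Here is a minimal simulation of my own (hypothetical batch sizes and trial counts, nothing from Bem's actual procedure): even when pure chance is at work, an experimenter who checks p after every batch and stops at the first p < 0.05 declares "significance" far more often than the nominal 5%.

```python
import math
import random

def z_test_p(hits, n):
    """Two-sided p-value for `hits` successes in n fair-coin trials,
    using the normal approximation to the binomial."""
    z = (hits - n * 0.5) / math.sqrt(n * 0.25)
    return math.erfc(abs(z) / math.sqrt(2))

def run_experiment(max_trials=1000, peek_every=20, optional_stopping=False):
    """Simulate one null experiment (pure chance, 50% hit rate).
    With optional_stopping, p is checked after every batch and the
    experiment stops as soon as p < 0.05."""
    hits = 0
    for n in range(1, max_trials + 1):
        hits += random.random() < 0.5
        if optional_stopping and n % peek_every == 0 and z_test_p(hits, n) < 0.05:
            return True  # "significant" result declared early
    return z_test_p(hits, max_trials) < 0.05

random.seed(1)
runs = 2000
fixed = sum(run_experiment() for _ in range(runs)) / runs
peeking = sum(run_experiment(optional_stopping=True) for _ in range(runs)) / runs
print(f"false-positive rate, fixed n:  {fixed:.3f}")
print(f"false-positive rate, peeking: {peeking:.3f}")
```

With a fixed, pre-committed sample size the false-positive rate stays near the nominal 5%; with peeking it climbs well above that. This is why the trial count should be decided in advance, and why we must trust (or verify) that it was.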
Increasing the number of trials is a valid method of increasing the reliability of your experiment. More trials is never a problem, provided the number is fixed in advance rather than chosen by watching the results. Regular psi research goes into millions of trials. It's what you need to do to demonstrate subtle effects.
Though... Bem used 100 test subjects, and (off the top of my head) about 50 trials each. These are normal amounts for the social sciences.
Adding measurements causes the random noise over the whole set to be reduced, and thus the signal is going to stand out more clearly.
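This averaging-out effect can be made concrete: the standard error of the mean shrinks like 1/√n, so a small signal buried in much larger noise becomes visible once enough trials are pooled. A toy sketch, with a signal size and noise level I picked arbitrarily for illustration:

```python
import random
import statistics

random.seed(0)

def noisy_measurement(signal=0.1, noise_sd=1.0):
    # Each trial: a small true signal buried in much larger random noise.
    return signal + random.gauss(0, noise_sd)

std_errors = {}
for n in (10, 100, 1000, 10000):
    samples = [noisy_measurement() for _ in range(n)]
    mean = statistics.fmean(samples)
    sem = statistics.stdev(samples) / n ** 0.5  # standard error of the mean
    std_errors[n] = sem
    print(f"n={n:6d}  mean={mean:+.3f}  std.error={sem:.3f}")
```

As n grows the standard error drops by roughly a factor of √10 per step, and the 0.1 signal, invisible at n=10, stands well clear of the noise at n=10000.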
(04-25-2012, 11:34 PM)zenmaster Wrote: About the non-reproducibility of his experiments so far (from researchers working in earnest), Bem says "it usually takes several years before enough attempted replications of a reported effect have accumulated to permit an overall analysis (often called a “meta-analysis”) of the evidence—20 years in the example described below. It usually takes busy researchers several months to find the time to design and run an experiment outside their primary research area, and my article was published only a year ago."
He's saying that meta-analysis takes 20 years. This is true, though maybe exaggerated. The point I expect him to be making is that it is incorrect to suggest that an experiment is invalid because the meta-analysis has not been done...
(04-25-2012, 11:34 PM)zenmaster Wrote: So in other words, Bem is claiming that a reasonable time to be sure that a statistically significant set of replications has been conducted will be about 20 years from 1 year ago. To me that's not science and is seriously flawed thinking and demonstrates lack of sufficient evidence.
No, he's countering the argument against his experiment, suggesting that this is not how science is done. The flawed thinking is not his. The flawed thinking belongs to the person who believes a meta-analysis is required before we can draw any conclusions from experiments.
(04-23-2012, 04:55 AM)Ali Quadir Wrote: Statistically significant only given the biased selection criteria used. Basically, he provided weak evidence against the simple null hypothesis of chance (50%) by relying on a small p value alone. What the expert criticism suggests for much stronger evidence would be for Bem to have used at least two hypotheses. For example, in comparison to null and in relation to a distribution of his own probabilities created before the study was undertaken. This provides the explanation with an explicit definition for what is considered to be ordinary or extraordinary.
The null hypothesis is that there is no effect; the main hypothesis is that subjects would be able to identify the position of the erotic stimuli. It's right there in his article. I don't know what expert criticism you refer to. But this is normal methodology. Suggesting that Bem should not use normal methodology is not sound advice without a pretty darn good reason.
I'd be interested to know what other hypothesis you had in mind? I can't actually think of one.
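To make the disputed suggestion concrete: the kind of "second hypothesis" described above would be a pre-specified alternative, compared against the null by how well each predicts the data, i.e. a likelihood ratio. A toy sketch with entirely made-up numbers (530 hits in 1000 trials, and an alternative hit rate of 53% fixed before looking at the data; nothing here is from Bem's paper):

```python
import math

# Hypothetical aggregate data, not Bem's.
hits, trials = 530, 1000

def binom_loglik(p, k, n):
    # Binomial log-likelihood of k successes in n trials at success
    # probability p (the binomial coefficient cancels in the ratio).
    return k * math.log(p) + (n - k) * math.log(1 - p)

null_ll = binom_loglik(0.50, hits, trials)   # chance
alt_ll = binom_loglik(0.53, hits, trials)    # alternative fixed in advance
ratio = math.exp(alt_ll - null_ll)           # > 1 favours the alternative
print(f"likelihood ratio (alternative/null) = {ratio:.2f}")
```

Unlike a lone p value, this says how much better than chance a specific, pre-committed degree of "psi" predicts the data, which is the explicit ordinary/extraordinary yardstick the criticism asks for.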
The experiment is extremely straightforward. He demonstrated that erotic pictures caused an effect that should not exist, while regular pictures did not show such an effect. The t-test he uses is the most ordinary test one can think of. It is regularly used in psychology, in medical science, and also in your own field, computer science.
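For readers unfamiliar with it, a one-sample t-test of this kind takes only a few lines. This sketch uses simulated, hypothetical per-subject hit rates (not Bem's data) and, since n = 100 is large, approximates the p-value with the normal tail:

```python
import math
import random

random.seed(42)

# Hypothetical data: per-subject hit rates on the "erotic" trials.
# Chance performance is 50%; we simulate a small positive shift.
hit_rates = [random.gauss(0.53, 0.10) for _ in range(100)]

n = len(hit_rates)
mean = sum(hit_rates) / n
var = sum((x - mean) ** 2 for x in hit_rates) / (n - 1)
t = (mean - 0.50) / math.sqrt(var / n)   # one-sample t vs. the 50% null

# For n = 100 the t distribution is close to normal, so approximate
# the one-sided p-value with the standard normal tail.
p = 0.5 * math.erfc(t / math.sqrt(2))
print(f"mean hit rate = {mean:.3f}, t = {t:.2f}, p = {p:.4f}")
```

The test simply asks how many standard errors the observed mean sits above the 50% chance level; a small p means a shift that size would rarely appear if only chance were at work.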
There is no reason to suspect he manipulated the data, repeated the experiments, or otherwise selected for an effect, so we're not going to assume that.
There is a 3% chance that an effect this strong would have appeared by chance alone. His conclusions might still be wrong, but his experiment was sound. This level of certainty is well within the limits of normal science. In psychology, and in all science, this is considered regular practice.
And, I remind you again, the effect has been produced in other similar experiments.
(04-23-2012, 04:55 AM)Ali Quadir Wrote: I thought we were talking about adequacy/inadequacy of Bem's evidence.
Quote: Actually, I consider that we are talking about evidence for psychic powers in general, with Bem as a specific example.
(04-22-2012, 05:14 PM)Ali Quadir Wrote: Just as judging "Evidence of psychic powers" on one experiment alone is not really the proper thing to do.
Apples and oranges, no analogy holds. We're talking about Bem having properly demonstrated sufficient rigour in the explanation of (any and all) experiments of that last paper.
I'm not... We're just going into that in detail. The topic title is "Evidence for Psychic Powers".
And your initial argument was not the inadequacy of Bem's evidence. Your initial argument was that the experiment was done wrongly.
Science does not deal in proof. But as it stands, purely from this experiment, there is only a 3% chance that results this strong would have arisen if nothing but chance were at work.
(04-23-2012, 04:55 AM)Ali Quadir Wrote: Can you explain the role experiments and empirical evidence play in math and computer science? And to what degree do you consider these two fields of study comparable to, say, biology, physics, psychology, chemistry?
Quote: Only a small portion of work in those fields involves explanations of empirical evidence, although in some fields that's the primary work. For example, in math and CS there are cybernetics, chaos theory, complex adaptive systems, agent-based modelling, etc.
As a point of interest... How does cybernetics use standard empirical scientific methodology? In what way is agent-based modelling comparable to the psi experiment we're speaking about?
(04-25-2012, 11:34 PM)zenmaster Wrote:
(04-23-2012, 04:55 AM)Ali Quadir Wrote: What's wrong with psychology? If you consider it a valid and proper science, then surely the methodology involved is sound, and if a similar science like parapsychology uses the same methodology, then that is a good sign. Right? Why not use that as an analogy?
That's really a strawman argument. Very simply, if a claim is made in any scientific discipline then the burden is on the researcher to provide strong evidence backing it up. In Bem's case, his experiments were held to the idea that what is to be considered paranormal is, under any and all circumstances, a low p with respect to 50% probability. And, as we know, a low p with respect to 50% probability is what selection bias can create - this is what his paper is trying to show, after all - hence the attempts to replicate.
His job was to isolate the effect of psi and then measure whether it existed. He did this by excluding all other possible causes... Unless you can suggest a way in which people can predict variables in computer memory, we're just going to have to assume he did a good job. He then used a standard test to demonstrate that there was indeed an effect.
He did not do any selection. And the result of the test indicated that there was an effect only for the emotionally affective images, the erotic ones. The neutral images caused no effect.
Not only this, the same effect has been demonstrated in other experiments.
So to summarize: I see no reason to assume his experiment was in any way invalid. I do concede that there is a 3% chance that chance alone produced his results and led him to his conclusions in error. However, that is well within the range of scientific methodology.
And the effect has been demonstrated in other experiments. I have given you one of them. But this is just the tip of the iceberg.