04-25-2012, 11:34 PM
(04-23-2012, 04:55 AM)Ali Quadir Wrote:
(04-22-2012, 08:52 PM)zenmaster Wrote: So in psychology, just anyone can peer review someone's experiments? That's broken and goes against the concept of 'expert'. Perhaps they couldn't find any experts, and that's why the paper passed review and got published in the first place?
In any science, my friend... I'm sorry, but peer review is not as absolute and holy as you consider it. It's just better than nothing.
Consider that new articles are, by definition, about some new insight, idea, or field, big or small. Often the topic is so new that the best peers available have very little prior, relevant knowledge.
Take this case... You send an article to a psychology journal. Who do you think is going to peer review it? A quantum physicist? A bread baker?
This is, incidentally, one of the often-voiced criticisms of peer review.
The fact that this nine-experiment article got through peer review simply means that the reviewers honestly judged the research methodologically sound and the article good enough for the standards of the journal. I doubt that they'd all agree with the existence of psi effects. But their job is not to agree; it is to judge the article and experiments so that obvious junk doesn't make it into the journal.
The outcome of an experiment has very little to do with the peer review process.
So at the very least we know that this article and these experiments are methodologically good enough for the high demands of a journal that we know is generally very critical of the articles it publishes.
No one has been able to replicate the results, and by now it's not in the journal's best interest to publish replications. The criticism has centered on a lack of understanding of the statistical methodology actually required to provide sufficient support for the claims made, given the type of experiments. Most studies look for a high correlation with their hypotheses. In the case of 'psi', however, properly analyzing the opposite correlation tends to be unfamiliar territory, and that affords the author much greater leeway in his methods.
(04-23-2012, 04:55 AM)Ali Quadir Wrote:
(04-22-2012, 08:52 PM)zenmaster Wrote:
(04-22-2012, 05:14 PM)Ali Quadir Wrote: What do you mean with "explained adequately?" Surely not that the mechanism underlying the measurements is adequately explained?
'Explained adequately', meaning an explanation of hypothesis, experiment, and results which has sufficient rigour.
Well, since this journal's peer reviewers allowed the article to be published... They judge a lot of articles. The rejection rates of these journals are generally between 60 and 80 percent. We'd have to look up the rejection rate of this particular journal, but I'd wager it's on the high side. (It's a very high-profile journal. Everyone wants to be published in it... so they can afford, and even need, to be picky.)
At any rate, we can conclude that a critical journal has judged the hypothesis, experiments, and results to be sufficiently sound.
If you disagree, then tell us what's wrong with the article. By all means... prove the journal wrong.
"By all means....Prove the journal wrong....". myehh, lol
Ok, what was the purpose of the paper? To publish experimental evidence for certain psi powers, using statistical methods to show (not account for) anomalous behavior. Remember that one can (knowingly or unknowingly) reduce 'p' by increasing the distance between the expected average and the observed average, by reducing the variability of individual scores, and by increasing the number of trials (up to a point). So there's a lot of room available for dubious methodology. We must trust Bem's integrity that he did not stop an experiment at a point chosen because the anomalies still looked significant, before further trials could pull them back into the realm of chance, and that all tests conducted were accounted for in the paper.
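To see how much room that stopping freedom alone creates, here is a minimal sketch (my own illustration, not anything from Bem's paper; the function names and numbers are made up) of how checking a p-value as data come in, and stopping as soon as it dips below .05, inflates the false-positive rate even when the true hit rate is exactly the 50% chance baseline:

```python
import math
import random

def chance_p(hits, n):
    """Two-sided p-value for a hit rate vs. the 50% chance baseline,
    using the normal approximation to the binomial."""
    z = abs(hits / n - 0.5) / math.sqrt(0.25 / n)
    return math.erfc(z / math.sqrt(2))

def optional_stopping_study(max_trials=1000, check_every=20):
    """A null 'psi' study (pure 50/50 guessing) that stops and declares
    success the moment p < .05 -- the temptation being simulated."""
    hits = 0
    for trial in range(1, max_trials + 1):
        hits += random.random() < 0.5  # true hit rate is exactly chance
        if trial % check_every == 0 and chance_p(hits, trial) < 0.05:
            return True  # 'significant' anomaly reported
    return False

random.seed(1)
positives = sum(optional_stopping_study() for _ in range(2000))
print(f"False-positive rate: {positives / 2000:.0%}")  # well above 5%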
After all, we've all had 'runs of luck' where, for example, we correctly guess a playing card's type before each card is drawn. That's called 'anecdotal evidence'. A result is also merely anecdotal if it can't be reproduced by others. Anecdotal evidence is not scientific evidence, after all, because no understanding whatsoever, about anything, can come from it.
Regarding the non-reproducibility of his experiments so far (by researchers working in earnest), Bem says: "it usually takes several years before enough attempted replications of a reported effect have accumulated to permit an overall analysis (often called a “meta-analysis”) of the evidence—20 years in the example described below. It usually takes busy researchers several months to find the time to design and run an experiment outside their primary research area, and my article was published only a year ago."
So, in other words, Bem is claiming that a reasonable time to be sure a statistically significant set of replications has been conducted will be about 20 years from a year ago. To me, that's not science; it's seriously flawed thinking, and it demonstrates a lack of sufficient evidence. What exactly is the understanding gained from this paper? That's the whole point of a replication process. And yet the journal refused to publish the (negative) replications, which were conducted in strict accordance with the original methods.
(04-23-2012, 04:55 AM)Ali Quadir Wrote:
(04-22-2012, 05:14 PM)Ali Quadir Wrote: Why don't you tell me what's wrong with them? The whole paper consists of 9 experiments. Judging the whole thing on one of them would not be right.
zenmaster Wrote: I disagree, it would be 'right'. This represents his presented evidence, after all.
No, his presented evidence is the whole 9 experiments... not one of them. Besides, I'm making it easy on you... giving you a bigger target to shoot at. But if you want me to take experiments from the article, let's just take the first 2 for demarcation's sake.
What's wrong with either of them?
Statistically significant only given the biased selection criteria used. Basically, he provided weak evidence against the simple null hypothesis of chance (50%) by relying on a small p-value alone. What the expert criticism suggests for much stronger evidence is that Bem should have used at least two hypotheses: a comparison to the null, and a comparison to a distribution over hit probabilities that he himself specified before the study was undertaken. That gives the explanation an explicit definition of what counts as ordinary or extraordinary. This is because, "In Bayesian terms, an extraordinary claim is a hypothesis ("The future made me do it!") that is improbable even before study." (http://www.psychologytoday.com/blog/one-...-inference). To me, that is not only reasonable, but matches the intuitive apprehension one has when thinking about the problem of claiming something is statistically 'paranormal'. What that improved method provides is an adjusted probability for the results.
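As a rough illustration of that two-hypothesis approach (my own sketch; the hit counts and the prior below are made up, not Bem's data), here's the kind of comparison a Bayes factor makes. A result that clears a one-sided p < .05 against the chance null can still be nearly worthless as evidence once you average the likelihood over an explicit prior for what a 'psi effect' would look like:

```python
import math

def log_binom(n, k):
    """Natural log of the binomial coefficient C(n, k)."""
    return (math.lgamma(n + 1) - math.lgamma(k + 1)
            - math.lgamma(n - k + 1))

def bayes_factor(hits, n, prior_points):
    """Bayes factor for H1 (hit rate drawn from a discrete prior)
    versus H0 (hit rate exactly 0.5, i.e. pure chance)."""
    lb = log_binom(n, hits)
    like_h0 = math.exp(lb + n * math.log(0.5))
    # Average the likelihood over the prior (equal weights here).
    like_h1 = sum(math.exp(lb + hits * math.log(th)
                           + (n - hits) * math.log(1 - th))
                  for th in prior_points) / len(prior_points)
    return like_h1 / like_h0

# Made-up example: 530 hits in 1000 trials clears a one-sided p < .05
# against chance, yet the Bayes factor vs. a modest 'small psi effect'
# prior stays close to 1 -- weak evidence either way.
print(bayes_factor(530, 1000, prior_points=[0.51, 0.55, 0.60, 0.70]))
```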
(04-23-2012, 04:55 AM)Ali Quadir Wrote:
(04-22-2012, 05:14 PM)Ali Quadir Wrote: Just as judging "Evidence of psychic powers" on one experiment alone is not really the proper thing to do.
zenmaster Wrote: Apples and oranges, no analogy holds. We're talking about Bem having properly demonstrated sufficient rigour in the explanation of (any and all) experiments of that last paper.
Actually, I consider that we are talking about evidence for psychic powers in general, with Bem as a specific example.
I thought we were talking about the adequacy (or inadequacy) of Bem's evidence.
(04-23-2012, 04:55 AM)Ali Quadir Wrote:
(04-22-2012, 05:14 PM)Ali Quadir Wrote: Also pick one field in science that you consider model science, what you would call "proper, well-done science". We may use it as a reference point so that we can compare parapsychology to it in areas of dispute...
zenmaster Wrote: The field of mathematics. Also computer science.
I suggested you pick a model science, one to compare parapsychology to... Yet you come up with two fields of study which have very little in common with regular empirical science.
Can you explain the role experiments and empirical evidence play in math and computer science? And to what degree do you consider these two fields of study comparable to, say, biology, physics, psychology, or chemistry?
Only a small portion of the work in those fields involves explanations of empirical evidence, although in some fields that is the primary work. For example, in math and CS there are cybernetics, chaos theory, complex adaptive systems, agent-based modelling, etc.
(04-23-2012, 04:55 AM)Ali Quadir Wrote: What's wrong with psychology? If you consider it a valid and proper science, then surely the methodology involved is sound, and if a similar science like parapsychology uses the same methodology, then that is a good sign... Right? Why not use that as an analogy?
That's really a strawman argument. Very simply, if a claim is made in any scientific discipline, then the burden is on the researcher to provide strong evidence backing it up. In Bem's case, his experiments were held to the idea that what is to be considered paranormal is, under any and all circumstances, a low p with respect to 50% probability. And, as we know, a low p with respect to 50% probability is what selection bias can create - this is what his paper is trying to show, after all - hence the attempts to replicate.
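To make the selection-bias point concrete, here's a second minimal sketch (again my own illustration with made-up numbers; the helper names are hypothetical). Even with honest, fixed-size experiments and no optional stopping, reporting only the studies that happen to cross p < .05 will reliably produce 'anomalous' results from pure chance:

```python
import math
import random

def chance_p(hits, n):
    """Two-sided p-value vs. the 50% baseline (normal approximation)."""
    z = abs(hits / n - 0.5) / math.sqrt(0.25 / n)
    return math.erfc(z / math.sqrt(2))

def null_study(n_trials=100):
    """One fixed-size study of pure 50/50 guessing."""
    hits = sum(random.random() < 0.5 for _ in range(n_trials))
    return chance_p(hits, n_trials)

random.seed(2)
pvals = [null_study() for _ in range(100)]
significant = [p for p in pvals if p < 0.05]  # the 'file drawer' filter
print(f"{len(significant)} of {len(pvals)} pure-chance studies look "
      "'significant' when only the best results are reported")
```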