They might at that. I don't deny that possibility. But, and this is the important part, at least they'll do it honestly.
And if they conclude RDI? I only ask out of curiosity; not much can be done about it at this point.
I guess it would be nice for them to post evidence and reasons. I would personally like to see, if they rule out Amy's rapist, how they do so (e.g., they recovered DNA and it was not a match).
I argue they must do so using physical evidence recovered from the scene (e.g., fiber, DNA), since he is one counter-example, given the time (9 months) and the proximity (less than 2 miles away; both West Dance studio), that is materially relevant to the RDI claim that "pedophile rapists do not enter a sleeping parents' home and assault children where their cries may bring the parents in" (which is entirely logical and reasonable, but then criminals aren't necessarily logical and reasonable).
In all seriousness, voynich, it wouldn't bother me so much if they had immediately tried again with someone else. You have to wonder why they didn't. Could it be that they were afraid that it would come out the same way?
"They" as in the R's, or "they" as in the BPD? If by "they" you mean the R's, then why didn't they do so, since Foster's conclusion (RDI) is contrary to their claims?
Darth Gerald the Wise seems "confident" both in his critique of Donald ("a little knowledge is a dangerous thing") and in his own conclusion (which he states as a statistical p-value).
The null hypothesis Darth Gerald states is that the sample variance of PR's provided exemplars is within range of the RN (using 18-20 variables such as sentence length, types of words used, frequency of words, phrases, misspellings, etc.). He finds the range to differ enough to reject the null hypothesis.
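To make the mechanics concrete, here is a minimal Python sketch of that kind of test. Everything in it is an illustrative stand-in, not Gerald's actual method: the feature values are invented, only one variable (mean sentence length) is used instead of 18-20, and the two-sample t-test is just one plausible choice of test.

[code]
# Illustrative only: compare one stylometric variable (mean sentence
# length) between hypothetical exemplar samples and the ransom note.
# All numbers below are invented for the sketch.
from scipy import stats

# Hypothetical mean sentence lengths (in words) from PR's exemplars
exemplars = [11.2, 9.8, 12.5, 10.1, 11.7, 10.9]
# Hypothetical per-section mean sentence lengths from the RN
ransom_note = [15.3, 16.8, 14.9, 17.2]

# H0: both samples are drawn from populations with the same mean.
t_stat, p_value = stats.ttest_ind(exemplars, ransom_note)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
if p_value <= 0.05:
    print("Reject H0 at the 5% level for this variable.")
else:
    print("Fail to reject H0 at the 5% level.")
[/code]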
http://en.wikipedia.org/wiki/Null_hypothesis
In statistical hypothesis testing, the null hypothesis (H0) formally describes some aspect of the statistical behaviour of a set of data; this description is treated as valid unless the actual behaviour of the data contradicts this assumption. Thus, the null hypothesis is contrasted against another hypothesis. Statistical hypothesis testing is used to make a decision about whether the data contradicts the null hypothesis: this is called significance testing. A null hypothesis is never proven by such methods, as the absence of evidence against the null hypothesis does not establish it. In other words, one may either reject, or not reject the null hypothesis; one cannot accept it. Failing to reject it gives no strong reason to change decisions predicated on its truth, but it also allows for the possibility of obtaining further data and then re-examining the same hypothesis.
For example, imagine flipping a coin three times, for three heads; and then forming the opinion that we have used a two-headed trick coin. Clearly this opinion is based on the premise that such a sequence is unlikely to have arisen using a normal coin. In fact, such sequences (three consecutive heads or three consecutive tails) occur a quarter of the time on average when using normal unbiased coins. Therefore the opinion that coin is two-headed has little support. Formally, the hypothesis to be tested in this example is "this is a two-headed coin". One tests it by assessing whether the data contradict the null hypothesis that "this is a normal, unbiased coin". Since the observed data arise reasonably often by chance under the null hypothesis, we cannot reject the null hypothesis as an explanation for the data, and we conclude that we cannot assert our hypothesis on the basis of the observed sequence.
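The quarter figure in that quoted example is easy to check by brute force; a quick Python sketch:

[code]
from itertools import product

# Enumerate all 2**3 = 8 equally likely sequences of three fair flips.
sequences = list(product("HT", repeat=3))
# "Extreme" sequences: all heads (HHH) or all tails (TTT).
extreme = [s for s in sequences if len(set(s)) == 1]
print(len(extreme) / len(sequences))  # 2/8 = 0.25
[/code]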
http://en.wikipedia.org/wiki/P-value
In statistical hypothesis testing, the p-value is the probability of obtaining a result at least as extreme as the one that was actually observed, assuming that the null hypothesis is true. The fact that p-values are based on this assumption is crucial to their correct interpretation.
The lower the p-value, the less likely the result, assuming the null hypothesis, and so the more "significant" the result in the sense of statistical significance. One often uses p-values of 0.05 or 0.01, corresponding to a 5% or 1% chance of an outcome at least that extreme, given the null hypothesis. However, the idea of more or less significance is used here only for illustrative purposes. The result of a test of significance is either "statistically significant" or "not statistically significant"; there are no shades of gray.
More technically, a p-value of an experiment is a random variable defined over the sample space of the experiment such that its distribution under the null hypothesis is uniform on the interval [0,1]. Many p-values can be defined for the same experiment.
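That uniformity claim can be checked by simulation. Here is a short sketch (assuming Python with NumPy and SciPy, and a continuous test statistic; for discrete tests like the coin example below, the uniformity is only approximate):

[code]
import numpy as np
from scipy import stats

# Simulate many experiments where H0 is true (the samples really do
# have mean 0) and collect the p-value of a one-sample t-test each time.
rng = np.random.default_rng(0)
p_values = [stats.ttest_1samp(rng.normal(0.0, 1.0, 30), 0.0).pvalue
            for _ in range(10_000)]

# If p-values are uniform on [0, 1] under H0, about 5% fall below 0.05.
print(sum(p <= 0.05 for p in p_values) / len(p_values))  # ~0.05
[/code]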
Coin flipping example
For example, an experiment is performed to determine whether a coin flip is fair (50% chance of landing heads or tails) or unfairly biased, either toward heads (> 50% chance of landing heads) or toward tails (< 50% chance of landing heads). (A bent coin produces biased results.)
Suppose that the experimental results show the coin turning up heads 14 times out of 20 total flips. The p-value of this result would be the chance of a fair coin landing on heads at least 14 times out of 20 flips. The probability that 20 flips of a fair coin would result in 14 or more heads is 0.0577. Thus, the p-value for the coin turning up heads 14 times out of 20 total flips is 0.0577.
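That 0.0577 can be reproduced exactly from the binomial distribution; a standard-library-only Python sketch:

[code]
from math import comb

# One-tailed p-value: P(X >= 14) for X ~ Binomial(n=20, p=0.5).
n = 20
p_one_tailed = sum(comb(n, k) for k in range(14, n + 1)) / 2**n
print(round(p_one_tailed, 4))  # 0.0577
[/code]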
Interpretation
Generally, one rejects the null hypothesis if the p-value is smaller than or equal to the significance level,[1] often represented by the Greek letter α (alpha). If the level is 0.05, then results that are only 5% likely or less are deemed extraordinary, given that the null hypothesis is true.
In the above example we have:
* null hypothesis (H0): fair coin;
* observation (O): 14 heads out of 20 flips; and
* probability (p-value) of observation (O) given H0: P(O | H0) = 0.0577 × 2 (two-tailed) = 0.1154 (expressed as a percentage, 11.54%).
The calculated p-value exceeds 0.05, so the observation is consistent with the null hypothesis — that the observed result of 14 heads out of 20 flips can be ascribed to chance alone — as it falls within the range of what would happen 95% of the time were this in fact the case. In our example, we fail to reject the null hypothesis at the 5% level. Although the coin did not fall evenly, the deviation from expected outcome is just small enough to be reported as being "not statistically significant at the 5% level".
However, had a single extra head been obtained, the resulting p-value (two-tailed) would be 0.0414 (4.14%). This time the null hypothesis, that the observed result of 15 heads out of 20 flips can be ascribed to chance alone, is rejected. Such a finding would be described as being "statistically significant at the 5% level".
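Both decisions in the quoted example can be reproduced with the same binomial arithmetic; a small sketch:

[code]
from math import comb

def two_tailed_p(heads, n=20):
    """Two-tailed p-value for a fair-coin H0, doubling the upper tail."""
    upper = sum(comb(n, k) for k in range(heads, n + 1)) / 2**n
    return min(1.0, 2 * upper)

p14 = two_tailed_p(14)  # 0.1153; the quoted 0.1154 doubles the rounded 0.0577
p15 = two_tailed_p(15)  # 0.0414
print(f"14 heads: p = {p14:.4f} -> fail to reject H0 at the 5% level")
print(f"15 heads: p = {p15:.4f} -> reject H0 at the 5% level")
[/code]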
Critics of p-values point out that the criterion used to decide "statistical significance" is based on the somewhat arbitrary choice of level (often set at 0.05). It is necessary to use a reasonable null hypothesis to assess the result fairly. The choice of null hypothesis entails assumptions.
http://en.wikipedia.org/wiki/Statistical_hypothesis_testing
A statistical hypothesis test is a method of making statistical decisions using experimental data. It is sometimes called confirmatory data analysis, in contrast to exploratory data analysis. In frequency probability, these decisions are almost always made using null-hypothesis tests; that is, ones that answer the question: "Assuming that the null hypothesis is true, what is the probability of observing a value for the test statistic that is at least as extreme as the value that was actually observed?"[1] One use of hypothesis testing is deciding whether experimental results contain enough information to cast doubt on conventional wisdom.
Statistical hypothesis testing is a key technique of frequentist statistical inference, and is widely used, but also much criticized. The main alternative to statistical hypothesis testing is Bayesian inference.
The critical region of a hypothesis test is the set of all outcomes which, if they occur, cause the null hypothesis to be rejected in favor of the alternative hypothesis. The critical region is usually denoted by C.
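For the 20-flip example, the critical region C at the 5% level can be listed explicitly. A sketch, reusing the binomial arithmetic from above:

[code]
from math import comb

n, alpha = 20, 0.05

def two_tailed_p(heads):
    # By symmetry, measure how far the head count sits from n/2.
    extreme = max(heads, n - heads)
    upper = sum(comb(n, k) for k in range(extreme, n + 1)) / 2**n
    return min(1.0, 2 * upper)

# Critical region C: all outcomes whose p-value is at most alpha.
C = [k for k in range(n + 1) if two_tailed_p(k) <= alpha]
print(C)  # [0, 1, 2, 3, 4, 5, 15, 16, 17, 18, 19, 20]
[/code]

So with 20 flips, only 5 or fewer (or 15 or more) heads would lead to rejecting the fair-coin null at the 5% level, which matches the 14-versus-15 boundary worked through above.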