Multiple comparison tests (MCTs) are statistical tests performed several times on the means of experimental conditions. When the null hypothesis (H0) is rejected by an overall test, MCTs are performed to determine which experimental conditions have a statistically significant mean difference, or whether there is a specific pattern among the group means. A problem arises because the type I error rate increases when multiple hypothesis tests are performed simultaneously; consequently, in an MCT, it is necessary to control the error rate at an appropriate level. In this paper, we discuss how to test multiple hypotheses simultaneously while limiting the type I error rate, which rises through α inflation. To choose an appropriate test, we must maintain a balance between statistical power and the type I error rate. If a test is too conservative, a type I error is unlikely to occur; however, the test may then have insufficient power, resulting in an increased probability of a type II error. Most researchers hope to find the best way of adjusting the type I error rate so as to discriminate real differences in the observed data without sacrificing too much statistical power.

In a statistical hypothesis test, the significance probability, asymptotic significance, or P value (probability value) denotes the probability of observing a result at least as extreme as the one actually observed if H0 is true. The significance of an experiment is a random variable defined on the sample space of the experiment, with a value between 0 and 1. A type I error occurs when H0 is rejected even though it is actually true, whereas a type II error refers to a false negative: H0 is accepted even though it is false (Table 1). It is expected that this paper will help researchers understand the differences between MCTs and apply them appropriately.
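The α inflation mentioned above can be made concrete with a short sketch (not taken from the paper): assuming m independent tests, each run at the same per-comparison level α = 0.05, the familywise error rate (the probability of at least one false rejection) is 1 − (1 − α)^m. The Bonferroni correction, one of the simplest MCT adjustments, tests each hypothesis at α/m instead.

```python
# Sketch under assumed conditions: m independent tests, each at alpha = 0.05.
alpha = 0.05  # per-comparison type I error rate (assumption for illustration)

for m in (1, 3, 5, 10, 20):
    # Familywise error rate: probability of at least one type I error
    # among m independent tests, each performed at level alpha.
    fwer = 1 - (1 - alpha) ** m
    # Bonferroni correction: run each test at alpha / m so that the
    # familywise error rate stays at or below alpha.
    per_test = alpha / m
    print(f"m={m:2d}  FWER={fwer:.3f}  Bonferroni per-test level={per_test:.4f}")
```

For example, with m = 5 comparisons the familywise error rate already exceeds 0.22, illustrating why an unadjusted series of tests is too liberal, while the Bonferroni per-test level of 0.01 restores control at the cost of power.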