One of the most frequently used clinical relevance metrics is the positive predictive value (PPV), the proportion of positive test results that are truly positive. Because PPV depends on disease prevalence, a test that performs well in a population with a high rate of true positive cases may be woefully inadequate in a low-prevalence population. CLSI EP12: User Protocol for Evaluation of Qualitative Test Performance describes the terms positive percent agreement (PPA) and negative percent agreement (NPA). If you have two binary diagnostic tests to compare, an agreement study can be used to calculate these statistics. Because specificity/NPA reflects the ability to correctly identify negative samples, which are more widely available than positive patient samples, confidence intervals (CIs) tend to be narrower for these metrics than for sensitivity/PPA, which reflects the proportion of positive cases a test can detect. In the absence of such a reference standard for COVID-19, serology test developers have reported sensitivity and specificity as positive percent agreement (PPA) or negative percent agreement (NPA) with RT-PCR tests on patients' nasal swabs. These statistics alone do not support the conclusion that one test is better than another. Recently, a British national newspaper published an article on a PCR test developed by Public Health England (PHE), reporting that it disagreed with a new commercial test in 35 of 1,144 samples (3%). For many journalists, this was proof that the PHE test was imprecise. In fact, there is no way to know which test was right and which was wrong in any of these 35 discordant results: in an agreement study, we simply do not know the true state of the subject.
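To illustrate the prevalence dependence described above, here is a minimal sketch (all sensitivity, specificity, and prevalence figures are hypothetical) that computes PPV via Bayes' theorem:

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value: P(disease | positive test), via Bayes' theorem."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# The same test (95% sensitivity, 98% specificity) at two prevalences:
high = ppv(0.95, 0.98, 0.30)   # high-prevalence population: PPV ~ 0.95
low = ppv(0.95, 0.98, 0.01)    # low-prevalence population:  PPV ~ 0.32
```

The test's intrinsic accuracy is identical in both calls; only the population changes, yet most positive results in the low-prevalence setting are false positives.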
Only further investigation of these discrepancies would identify their causes. An example is the microbiological throat swab used in patients with a sore throat. Publications that report the PPV of a throat culture usually report the likelihood that the bacterium is present in the throat, not that the patient is actually infected by the bacterium found. If the presence of this bacterium always resulted in a sore throat, this PPV would be very helpful. However, the bacteria can colonize individuals harmlessly and never cause infection or disease. The sore throat in these people is then caused by another pathogen, such as a virus. In this situation, the gold standard used in the evaluation study reflects only the presence of the bacteria (which may be harmless), not a bacterially caused sore throat. It can be shown that this problem affects the positive predictive value much more than the negative predictive value.
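The asymmetric effect on PPV versus NPV can be demonstrated with a small worked example. All cohort sizes and test characteristics below are assumed for illustration: a fraction of gold-standard positives are harmless carriers, and we compare predictive values against "bacterium present" with predictive values against "bacterium causing the sore throat".

```python
# Hypothetical cohort of 1,000 sore-throat patients (all numbers assumed):
causal = 200      # bacterium present AND causing the sore throat
colonized = 100   # bacterium present but harmless (sore throat is viral)
neither = 700     # bacterium absent
sens, spec = 0.90, 0.95   # test performance against bacterium presence

tp = sens * (causal + colonized)          # positives among all carriers
fp = (1 - spec) * neither
tn = spec * neither
fn = (1 - sens) * (causal + colonized)

ppv_carriage = tp / (tp + fp)             # vs. "bacterium present": ~0.89
ppv_etiologic = sens * causal / (tp + fp) # vs. "bacterium caused illness": ~0.59
npv_carriage = tn / (tn + fn)             # ~0.96
# Negatives whose sore throat is NOT bacterially caused: true negatives plus
# the colonized (harmless-carrier) patients the test happened to miss.
npv_etiologic = (tn + (1 - sens) * colonized) / (tn + fn)  # ~0.97
```

Colonization drags the etiological PPV far below the apparent PPV, while the NPV barely moves: missed carriers rarely have a bacterially caused illness, so negatives stay trustworthy.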
To evaluate diagnostic tests in which the gold standard detects only a possible cause of the disease, an extension of the predictive value called the etiological predictive value can be used. Note that positive and negative predictive values can only be estimated from a cross-sectional study or another population-based study that yields valid prevalence estimates. Sensitivity and specificity, on the other hand, can be estimated from case-control studies. The FDA has issued Emergency Use Authorizations (EUAs) for nine COVID-19 antibody tests. The instructions for use (IFU) for each test report its sensitivity and specificity in the form of positive percent agreement (PPA) and negative percent agreement (NPA) with a reverse transcription polymerase chain reaction (RT-PCR) test, together with 95% confidence intervals (CIs) for each value.
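The PPA/NPA figures in an IFU come from a 2x2 agreement table against the comparator. A minimal sketch of that calculation, using a hypothetical agreement table and the Wilson score method as one common way to form the 95% CI:

```python
import math

def wilson_ci(successes: int, n: int, z: float = 1.96) -> tuple[float, float]:
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    margin = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - margin, center + margin

# Hypothetical agreement table against RT-PCR (the comparator):
#                 RT-PCR +   RT-PCR -
# candidate +         45          3
# candidate -          5        147
ppa = 45 / (45 + 5)        # positive percent agreement: 0.90
npa = 147 / (147 + 3)      # negative percent agreement: 0.98
ppa_ci = wilson_ci(45, 50)
npa_ci = wilson_ci(147, 150)
```

Note that the NPA interval is tighter than the PPA interval simply because the negative group is larger (150 vs. 50 specimens), echoing the earlier point about negative samples being easier to collect.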