The undersigned parties consent to the following conditions regarding corporate-required drug testing; unless these conditions are agreed to, no testing may take place.
Both parties please initial each condition and sign below.
_____ I recognize that, while in some jobs there are legitimate safety considerations that demand a clear mind, drug testing does not guarantee that someone is clear and capable, either at the time of the test or while the job is being performed. It shows only the presence of unauthorized molecules that may or may not have been ingested in the recent past; some up to thirty days in the past. I also understand that simple statistical analysis shows that the chance of a false positive or negative approaches 50%.
_____ I recognize that drug testing is an invasion of privacy in the extreme; a measure imposed by a paranoid Congress under a fanatical administration; and that these tests are given only because they are required by law.
_____ I hereby order the testing laboratory not to reveal the results of said tests to anyone inside or outside this company, except to the subject of the tests; to keep the results private and not to reveal them to any state or federal agency, except under court order, or in the course of investigation of any accidents or criminal violations against the company or agency undersigned.
_____ I agree not to demand or require drug testing of any employee or future employee or contractor unless a direct link to the safety and security of the company/agency can be shown, except where specifically required by law, subject to the conditions stated above.
The undersigned has agreed to all initialed conditions above.
for agency ordering test_____________________
I read your column on random drug tests, and it hit home hard. I am a truck driver. I went for a random drug test and showed a false positive. This cost me my job, and I am now in a drug-rehabilitation program. Neither my boss, the union, the clinic that gave the test, nor the Department of Transportation will even consider the possibility of a false positive. I desperately need to know more to try to save my job and clear my record.
T.M., Chicago, Ill.
As professors of statistics, we found your response to the drug-testing question perplexing and, indeed, incorrect.
(The question was, "Suppose we assume that 5% of people are drug users. A test is 95% accurate, which we'll say means that if a person is a user, the result is positive 95% of the time; and if she or he isn't, it's negative 95% of the time. A randomly chosen person tests positive. Is the individual highly likely to be a drug user?" You said, "Given your conditions, once the person has tested positive, you may as well flip a coin to determine whether she or he is a drug user. The chances are only 50-50. But the assumptions, the makeup of the test group and true accuracy of the tests themselves are additional considerations.")
This is not a 50-50 proposition. We would also argue that the test should be given twice. The likelihood of error on two consecutive tests is only 25 times out of 10,000. We hope you will redress this.
-- Paul A. Susen, Ph.D.
-- Herman Gelbwasser, Ph.D.
Mount Wachusett Community College, Gardner, Mass.
The original "50-50" answer is correct. Also, it often doesn't help to repeat the same test, because the false reports aren't random. Instead, they're more likely to come from analytic sensitivity (the rate of positive test results in people who are actually users) and analytic specificity (the rate of negative test results in people who are actually not users), and these are determined by biochemical factors, not statistical ones. That is, we're not referring to "lab error."
Here's how the "50-50" answer is determined: Suppose the general population consists of 10,000 people. Of those, we assume for this problem that 95% (9500) are nonusers and that 5% (500) are users.
Of the 9500 nonusers, 95% (9025) will test negative. That means 5% (475) will test positive. Of the 500 users, 95% (475) will test positive. That means that 5% (25) will test negative.
These are the totals: 950 of the 10,000 people test positive, 475 of them users and 475 of them nonusers. So when a randomly chosen person tests positive, the chance that she or he is actually a user is 475 out of 950, or 50-50.
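The arithmetic above can be checked with a short script. This is a minimal sketch using the column's assumed numbers (10,000 people, 5% users, and a test that is correct 95% of the time either way); the variable names are mine, not from the article.

```python
# The column's assumptions: 10,000 people, 5% users, 95% accuracy both ways.
population = 10_000
users = int(population * 0.05)           # 500 users
nonusers = population - users            # 9500 nonusers

true_positives = int(users * 0.95)       # 475 users who test positive
false_positives = int(nonusers * 0.05)   # 475 nonusers who test positive

all_positives = true_positives + false_positives           # 950 positives
chance_user_given_positive = true_positives / all_positives
print(chance_user_given_positive)        # 0.5
```

Because the two groups of positives are the same size, a positive result tells you no more than a coin flip would.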
The problem is not a hypothetical one. Here's another letter:
Your answer regarding drug testing is correct. In laboratories certified by the Forensic Urine Drug Testing Program and/or the National Institute on Drug Abuse, specimens that test positive are retested for confirmation by an even more specific method, raising the predictive value of a positive test.
It is important that your readers understand that the situation described in the letter from your reader may exist if the testing is done outside of a certified laboratory, such as by an employer or a noncertified laboratory.
-- David A. Mulkey, M.D.
W. Howard Hoffman, M.D.
David A. Miller, M.D.
Peter A. Scully, M.D.
Desert West Drug Testing Consultants, Las Vegas, Nevada
But suppose a test's overall performance rate goes all the way up to 99%. Is that good enough? No! And AIDS testing is a good example:
The Centers for Disease Control (CDC) states that the two tests used to identify human immunodeficiency virus type 1 (HIV-1) -- the enzyme immunoassay, or "EIA," and the Western blot, or "WB" -- combined, have better than a 99% overall analytic performance accuracy rate, but only if they're taken repeatedly. (The exact rate is unknown.) This rate is the percentage of correct test results in all specimens tested. With a 99% rate, if a population of 10,000 were tested, 9900 would receive correct results, but 100 would receive erroneous results -- either false positives or false negatives -- including indeterminates.
The CDC also states that, of the errors, it has no data on how many are false positives vs. false negatives. However, if we use 99% as an example, then false positives would have to be less than 4/10ths of the erroneous results, because the CDC estimates that .4% of Americans are "HIV-positive." That is, if false positives accounted for fully 4/10ths of the errors (40 of the 100), then all 40 of the "HIV-positive" people in a population of 10,000 would be false positives, and we know that's not the case.
Let's try assuming that the false positives are only 2/10ths of those errors, leaving the false negatives accounting for the remaining 8/10ths. So, of those same 100 people with erroneous results, 20% would be false positives and 80% would be false negatives. While those "false negative" people would be an unwitting threat to sex partners, at least most people are aware that "negative" doesn't mean "safe."
But there's another ramification: Using the CDC estimate that .4% of Americans are "HIV-positive," in a population of 10,000, that's 40. But this number must include all the false positives, which we assumed to be 20, leaving only 20 people actually infected. This leads to an interesting conclusion:
In this 99% scenario, there are as many "false positives" as there are "true positives." Even if the results of both AIDS tests (EIA and WB) are positive, the chances are only 50-50 that this individual actually is infected. That's why people with HIV-positive results must be sure to get tested repeatedly over the following months. The error rate is high with only two tests. (The CDC's Morbidity and Mortality Weekly Report shows an overall performance rate of only 98.4% on the Western blot alone -- far worse than our example.) Even after years, you may not fall ill. You may not have AIDS at all. You may just be a "positive tester."
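The 99% scenario above can be checked the same way. Note that the 20/80 split of errors into false positives and false negatives is the article's illustrative assumption, not a CDC figure, and the variable names are mine.

```python
# The article's 99% scenario: 10,000 people, 99% overall accuracy,
# .4% "HIV-positive" results, and an ASSUMED 20/80 split of the errors.
population = 10_000
errors = int(population * 0.01)               # 100 wrong results at 99%
false_positives = int(errors * 0.20)          # 20 (assumed split)
false_negatives = errors - false_positives    # 80

reported_positive = int(population * 0.004)   # 40 "HIV-positive" results
true_positives = reported_positive - false_positives  # 20 actually infected

chance_infected_given_positive = true_positives / reported_positive
print(chance_infected_given_positive)         # 0.5
```

Under those assumptions, half of the positive results belong to people who are not infected, which is the article's 50-50 conclusion.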
The implications are broad for people tested at random. For example, I was tested during a routine insurance exam, but because I have no risk factors, I had no concern about having AIDS. I was concerned, however, about a "false positive," for which everyone has a risk. (In the 99% example, that risk is 20 out of 10,000.) Such a result might have made me uninsurable and destroyed both my personal and professional life. Fortunately, my test was negative. But anyone might be one of those unlucky false positives, and that's the very serious risk of testing.
There's also the danger that "false positive" people won't feel the need to avoid sex with truly infected people -- a good route to disease.
But there's also room for optimism in these statistics. An individual who is a random "positive" can find hope in them. This doesn't mean that he or she can take chances with other people's lives, of course, so each person must behave as though he or she actually is infected. But, inwardly, the random HIV-positive individual can be cautiously optimistic.
I received email challenging my comment on false positives and negatives being equal. For the sake of accuracy and argument, I reprint it below.
>Parade magazine has already run an article, reprinted on my Web page, that
>shows the chance of a false result, either positive or negative, at 50%.
>This invalidates the entire principle of testing, even if it were
>legitimate in the first place. But does the testing stop? Nope. This is a
>witch-hunt that will take decades to go away, and history will look upon it
>as just as dumb as the hunt for communists (or real witches).
I share your distrust and dislike of testing, but the statement you make here is not completely accurate.
The reason that Marilyn Vos Savant's example results in only a 50% "confidence" factor for a positive result is that she assumed the test was 95% accurate AND that 95% of the population tested was actually negative. Because these two factors in her example are equal, the confidence factor for a positive result is 50%. Regardless of what the two numbers are, if they are equal, the positive confidence factor will be 50%.
Positive Confidence = (True Positives) / (True Positives + False Positives) or
Positive Confidence = True Positive / (ALL Positive)
Marilyn DID NOT say that the confidence factor for a negative result was also 50%. It isn't. The negative confidence factor for her example is 99.7%.
Negative Confidence = (True Negative) / (True Negative + False Negative) or
Negative Confidence = (True Negative) / (ALL Negative)
I had to try this out on a spreadsheet to prove it to myself. Here is a little table of confidence factors for a few assumed accuracies and population profiles. Basically, I started at 95/95 and worked my way up to 99/99.
AC = accuracy; PP = Population Profile; PC= Positive Confidence; NC=Negative Confidence
AC    PP    PC    NC
95    95    50    99.7
95    99    16    99.95
99    95    84    99.95
99    99    50    99.99

The only time I would feel confident of the positive testing results is when the Accuracy far exceeds the Population Profile. As you can see, if the population profile exceeds the accuracy, the positive confidence plummets. In that situation, MOST of the positive results are from False Positives.
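The correspondent's spreadsheet can be reproduced with a few lines of Python. This is a sketch using the two confidence formulas quoted above; the function names are mine, and "population profile" is taken to mean the percentage of the population that is actually negative.

```python
def positive_confidence(acc, pop_neg):
    """True positives / all positives, with accuracy acc and
    fraction pop_neg of the population actually negative."""
    true_pos = acc * (1 - pop_neg)
    false_pos = (1 - acc) * pop_neg
    return true_pos / (true_pos + false_pos)

def negative_confidence(acc, pop_neg):
    """True negatives / all negatives, same parameters."""
    true_neg = acc * pop_neg
    false_neg = (1 - acc) * (1 - pop_neg)
    return true_neg / (true_neg + false_neg)

# Regenerate the table from 95/95 up to 99/99.
print("AC    PP    PC     NC")
for acc, pp in [(0.95, 0.95), (0.95, 0.99), (0.99, 0.95), (0.99, 0.99)]:
    pc = positive_confidence(acc, pp) * 100
    nc = negative_confidence(acc, pp) * 100
    print(f"{acc * 100:<5.0f} {pp * 100:<5.0f} {pc:<6.1f} {nc:.2f}")
```

Running it confirms both 50-50 rows: whenever accuracy equals the population profile, the true and false positives are equal in number, so positive confidence is exactly 50%.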
Like you, I strongly disagree with drug testing. But I wanted to be sure that you were made aware of your error from a sympathetic source instead of having it come from a proponent of testing and cause you potential embarrassment.