In this paper we investigate the issues of independence and diversity among the individual classifiers participating in a multiple classifier fusion scheme. We first present a formal definition of statistically independent classifiers, and then focus on testing the independence of two classifiers. If two classifiers are found to be dependent, it follows that no ensemble in which both participate can be an independent scheme. Previous studies have argued that independence of the classifiers infuses diversity into the multi-classifier system, which is directly related to improved performance. Consequently, we introduce a measure for the degree of diversity, expressed as the agreement among the classifiers' outputs in such an ensemble. A number of examples drawn from diverse domains in pattern recognition are given to illustrate the relation between classifier dependence and diversity estimation. Our results suggest that measuring the agreement among the classifiers' decisions provides an informative measure of the strength of association among dependent classifiers.
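The two ingredients named above can be illustrated concretely. The following sketch (not the paper's exact procedure; the binary correct/incorrect encoding is an assumption) builds a 2x2 contingency table from two classifiers' per-sample correctness, computes the Pearson chi-square statistic for a pairwise independence test, and reports the fraction of samples on which the classifiers agree as a simple agreement-based diversity measure:

```python
# Hypothetical sketch: pairwise independence test and agreement measure
# for two classifiers, each reduced to correct (1) / incorrect (0) per sample.

def contingency_table(a, b):
    """2x2 table of joint correct/incorrect counts for classifiers a and b."""
    table = [[0, 0], [0, 0]]
    for x, y in zip(a, b):
        table[x][y] += 1
    return table

def chi_square(table):
    """Pearson chi-square statistic for independence on a 2x2 table."""
    row = [sum(r) for r in table]          # marginal counts for classifier a
    col = [sum(c) for c in zip(*table)]    # marginal counts for classifier b
    n = sum(row)
    stat = 0.0
    for i in range(2):
        for j in range(2):
            expected = row[i] * col[j] / n  # count expected under independence
            stat += (table[i][j] - expected) ** 2 / expected
    return stat

def agreement(a, b):
    """Fraction of samples on which the two classifiers agree."""
    return sum(x == y for x, y in zip(a, b)) / len(a)

# Toy decisions for two classifiers on eight samples:
a = [1, 1, 0, 1, 0, 1, 1, 0]
b = [1, 0, 0, 1, 1, 1, 0, 0]
print(chi_square(contingency_table(a, b)))  # large values reject independence
print(agreement(a, b))
```

A large chi-square statistic (compared against the chi-square distribution with one degree of freedom) rejects independence, while the agreement score then quantifies how strongly the dependent pair is associated.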