Factor analysis

Factor analysis is a statistical technique used to explain variability among observed random variables in terms of fewer unobserved random variables called factors. The observed variables are modeled as linear combinations of the factors, plus "error" terms. Factor analysis originated in psychometrics, and is used in social sciences, Marketing, product management, operations research, and other applied sciences that deal with large quantities of data.

Example

This oversimplified example should not be taken to be realistic.

Suppose a psychologist proposes a theory that there are two kinds of intelligence, "verbal intelligence" and "mathematical intelligence", neither of which is directly observable. Evidence for the theory is sought in the examination scores of 1000 students in each of 10 different academic fields. If each student is chosen randomly from a large population, then the student's 10 scores are random variables. The psychologist's theory may say that, for each of the 10 subjects, the score averaged over all students who share a common pair of values for the verbal and mathematical "intelligences" is some constant times their level of verbal intelligence plus another constant times their level of mathematical intelligence; that is, it is a linear combination of those two "factors". The numbers by which the two kinds of intelligence are multiplied to obtain the expected score in a particular subject are posited by the theory to be the same for all intelligence-level pairs, and are called the "factor loadings" for that subject. For example, the theory may hold that the average student's aptitude in the field of amphibology is

{ 10 × the student's verbal intelligence } + { 6 × the student's mathematical intelligence }.

The numbers 10 and 6 are the factor loadings associated with amphibology. Other academic subjects may have different factor loadings.
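The loadings above can be applied directly as a small worked example. The function name and the intelligence values below are invented for illustration, and the units of "intelligence" are arbitrary.

```python
# Hypothetical two-factor model for amphibology, using the loadings
# 10 (verbal) and 6 (mathematical) from the example above.
def expected_amphibology_score(verbal, mathematical):
    """Average aptitude predicted by the linear two-factor model."""
    return 10 * verbal + 6 * mathematical

# A made-up student with verbal intelligence 5 and mathematical
# intelligence 3 has expected aptitude 10*5 + 6*3 = 68.
score = expected_amphibology_score(5, 3)
```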

Two students having identical degrees of verbal intelligence and identical degrees of mathematical intelligence may have different aptitudes in amphibology because individual aptitudes differ from average aptitudes. That difference is called the "error" — a statistical term that means the amount by which an individual differs from what is average for his or her levels of intelligence (see errors and residuals in statistics).

The observable data that go into factor analysis would be 10 scores of each of the 1000 students, a total of 10,000 numbers. The factor loadings and levels of the two kinds of intelligence of each student must be inferred from the data. Even the number of factors (two, in this example) must be inferred from the data.
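The setup can be sketched by simulating data from such a model. Everything below (the loadings, factor scores, means, and noise level) is invented for illustration; the point is that the 10 × 1,000 matrix of observed scores is generated by far fewer underlying quantities, and that the model ties the covariance of the observed scores to the loadings.

```python
import numpy as np

rng = np.random.default_rng(0)
n_students, n_subjects, n_factors = 1000, 10, 2

# Hypothetical loadings (one row per subject, one column per factor)
# and latent factor scores (verbal and mathematical intelligence).
L = rng.normal(size=(n_subjects, n_factors))
F = rng.normal(size=(n_factors, n_students))   # unit variance, uncorrelated
mu = rng.normal(size=(n_subjects, 1))          # per-subject mean scores
eps = rng.normal(scale=0.1, size=(n_subjects, n_students))  # independent errors

X = mu + L @ F + eps   # 10 x 1000 matrix: the 10,000 observed scores

# With independent unit-variance factors, the model implies
# cov(X) ~= L L^T + Psi, where Psi is the diagonal error covariance.
sample_cov = np.cov(X)
implied_cov = L @ L.T + np.diag(np.full(n_subjects, 0.1 ** 2))
```

Factor analysis runs this construction in reverse: only `X` is observed, and `L`, `mu`, and the error variances must be inferred from it.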

Mathematical model of the same example

In the example above, for i = 1, ..., 1,000 the ith student's scores are

\[
\begin{matrix}
x_{1,i} & = & \mu_1 & + & \ell_{1,1}v_i & + & \ell_{1,2}m_i & + & \varepsilon_{1,i} \\
\vdots & & \vdots & & \vdots & & \vdots & & \vdots \\
x_{10,i} & = & \mu_{10} & + & \ell_{10,1}v_i & + & \ell_{10,2}m_i & + & \varepsilon_{10,i}
\end{matrix}
\]

where

* x_{k,i} is the ith student's score in the kth subject,
* \mu_k is the mean score in the kth subject,
* v_i is the ith student's verbal intelligence,
* m_i is the ith student's mathematical intelligence,
* \ell_{k,1} and \ell_{k,2} are the factor loadings for the kth subject, and
* \varepsilon_{k,i} is the "error": the difference between the ith student's score in the kth subject and the average score of all students with the same pair of intelligence levels.

In matrix notation, we have

\[ X = \mu + LF + \varepsilon \]

where

* X is the 10 × 1,000 matrix of observed scores, whose (k, i) entry is the ith student's score in the kth subject,
* \mu is the 10 × 1,000 matrix each of whose columns is the vector of subject means,
* L is the 10 × 2 matrix of factor loadings,
* F is the 2 × 1,000 matrix of factor scores, whose ith column holds the ith student's verbal and mathematical intelligence, and
* \varepsilon is the 10 × 1,000 matrix of "errors".

Observe that doubling the scale on which "verbal intelligence" (the first component in each column of F) is measured, while simultaneously halving the factor loadings for verbal intelligence, makes no difference to the model. Thus no generality is lost by assuming that the standard deviation of verbal intelligence is 1; likewise for mathematical intelligence. Moreover, for similar reasons, no generality is lost by assuming the two factors are uncorrelated with each other. (However, since any rotation of a solution is also a solution, this makes interpreting the factors difficult; see the disadvantages below. In this particular example, if we do not know ex ante that the two types of intelligence are uncorrelated, then we cannot interpret the two factors as the two different types of intelligence. Even if they are uncorrelated, we cannot tell which factor corresponds to verbal intelligence and which corresponds to mathematical intelligence without an outside argument.) The "errors" ε are taken to be independent of each other. The variances of the "errors" associated with the 10 different subjects are not assumed to be equal.
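The rotation indeterminacy described above can be checked numerically. The loadings and factor scores below are arbitrary made-up values; the check only uses the algebraic identity (LR)(RᵀF) = LF for any orthogonal matrix R.

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.normal(size=(10, 2))    # hypothetical factor loadings
F = rng.normal(size=(2, 1000))  # factor scores: unit variance, uncorrelated

# Any orthogonal rotation R leaves the fitted part of the model unchanged,
# since (L R)(R^T F) = L (R R^T) F = L F.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

L_rot, F_rot = L @ R, R.T @ F
assert np.allclose(L @ F, L_rot @ F_rot)

# The rotated factors also remain unit-variance and uncorrelated in
# expectation, because cov(R^T F) = R^T I R = I.
```

Both (L, F) and (L_rot, F_rot) fit the data equally well, which is why the factors cannot be labeled "verbal" and "mathematical" from the data alone.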

The values of the loadings L, the averages μ, and the variances of the "errors" ε must be estimated from the observed data X; common estimation methods include principal-axis factoring and maximum likelihood.

Factor analysis in psychometrics

History

Charles Spearman pioneered the use of factor analysis in the field of psychology and is sometimes credited with the invention of factor analysis. He discovered that schoolchildren's scores on a wide variety of seemingly unrelated subjects were positively correlated, which led him to postulate that a general mental ability, or g, underlies and shapes human cognitive performance. His postulate now enjoys broad support in the field of intelligence research, where it is known as g theory.

Raymond Cattell expanded on Spearman's idea of a two-factor theory of intelligence after performing his own tests and factor analysis. He used a multi-factor theory to explain intelligence. Cattell's theory addressed alternate factors in intellectual development, including motivation and psychology. Cattell also developed several mathematical methods for factor-analytic research, such as his "scree" test (for choosing the number of factors) and similarity coefficients. His research led to the development of his theory of fluid and crystallized intelligence, as well as his 16 Personality Factors theory of personality. Cattell was a strong advocate of factor analysis and psychometrics. He believed that theory should be derived from research, and advocated the continued use of empirical observation and objective testing to study human intelligence.

Applications in psychology

Factor analysis has been used in the study of human intelligence and human personality as a method for comparing the outcomes of (hopefully) objective tests, constructing matrices of correlations between these outcomes, and finding the factors underlying these results. The field of psychology that measures human intelligence using quantitative testing in this way is known as psychometrics (psycho = mental, metrics = measurement).

Advantages

Disadvantages

Factor analysis in marketing

The basic steps, described below, are information collection followed by analysis.

Information collection

The data collection stage is usually done by marketing research professionals. Survey questions ask the respondent to rate a product sample or descriptions of product concepts on a range of attributes. Anywhere from five to twenty attributes are chosen. They could include things like: ease of use, weight, accuracy, durability, colourfulness, price, or size. The attributes chosen will vary depending on the product being studied. The same question is asked about all the products in the study. The data for multiple products are coded and input into a statistical program such as SPSS or SAS.

Analysis

The analysis will isolate the underlying factors that explain the data. Factor analysis is an interdependence technique: the complete set of interdependent relationships is examined, with no specification of dependent variables, independent variables, or causality. Factor analysis assumes that all the rating data on different attributes can be reduced to a few important dimensions. This reduction is possible because the attributes are related. The rating given to any one attribute is partially the result of the influence of other attributes. The statistical algorithm deconstructs the rating (called a raw score) into its various components, and reconstructs the partial scores into underlying factor scores. The degree of correlation between the initial raw score and the final factor score is called a factor loading. There are two approaches to factor analysis: "principal components analysis" (the total variance in the data is considered) and "common factor analysis" (only the common variance is considered).

Note that there are very important conceptual differences between the two approaches, an important one being that the common factor model involves a testable model whereas principal components does not. This is because in the common factor model the unique factors are required to be uncorrelated, whereas the residuals in principal components are correlated. Finally, components are not latent variables; they are linear combinations of the input variables, and thus determinate. Factors, on the other hand, are latent variables, which are indeterminate. If your goal is to fit the variances of input variables for the purpose of data reduction, you should carry out principal components analysis. If you want to build a testable model to explain the intercorrelations among input variables, you should carry out a factor analysis.
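The computational difference between the two approaches can be sketched with plain numpy. This is a minimal illustration, not a production implementation: the rating data below are synthetic and invented for the example, and the common factor step uses a simple iterated principal-axis scheme (one of several possible estimation methods).

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical rating data: 200 respondents rate 6 product attributes,
# driven by two underlying dimensions plus independent noise.
true_L = np.array([[0.9, 0.0], [0.8, 0.1], [0.7, 0.0],
                   [0.0, 0.9], [0.1, 0.8], [0.0, 0.7]])
factor_scores = rng.normal(size=(200, 2))
ratings = factor_scores @ true_L.T + 0.3 * rng.normal(size=(200, 6))

R = np.corrcoef(ratings, rowvar=False)      # 6 x 6 correlation matrix

# Principal components: factor the full correlation matrix
# (diagonal of ones, so total variance is considered).
w, V = np.linalg.eigh(R)                    # eigenvalues in ascending order
pc_loadings = V[:, -2:][:, ::-1] * np.sqrt(w[-2:][::-1])

# Principal-axis (common) factor analysis: replace the diagonal with
# communality estimates so only shared variance is factored, then iterate.
comm = 1 - 1 / np.diag(np.linalg.inv(R))    # initial estimates (SMCs)
for _ in range(100):
    R_reduced = R.copy()
    np.fill_diagonal(R_reduced, comm)
    w2, V2 = np.linalg.eigh(R_reduced)
    fa_loadings = V2[:, -2:][:, ::-1] * np.sqrt(np.clip(w2[-2:][::-1], 0, None))
    comm = np.clip((fa_loadings ** 2).sum(axis=1), 0, 1)  # updated communalities
```

Because the reduced matrix carries only the common variance on its diagonal, the common-factor loadings account for less total variance than the principal-component loadings, matching the conceptual distinction above.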

The use of principal components in a semantic space can vary somewhat because the components may only "predict" but not "map" to the vector space. This produces a statistical principal component use where the most salient words or themes represent the preferred basis.

Advantages

Disadvantages

References

See also

Some content on this page may previously have appeared on Citizendium.