Due to the nonuniqueness of factor loadings, the interpretation might be enhanced by rotation.


This is the topic of the next subsection.
Rotation
The constraints (10.11) and (10.12) are given as a matter of mathematical convenience (to create unique solutions) and can therefore complicate the problem of interpretation. The
interpretation of the loadings would be very simple if the variables could be split into disjoint
sets, each being associated with one factor. A well known analytical algorithm to rotate the
loadings is given by the varimax rotation method proposed by Kaiser (1958). In the simplest
case of k = 2 factors, a rotation matrix G is given by
\[
G(\theta) = \begin{pmatrix} \cos\theta & \sin\theta \\ -\sin\theta & \cos\theta \end{pmatrix},
\]
representing a clockwise rotation of the coordinate axes by the angle θ. The corresponding rotation of loadings is calculated via \(\hat{Q}^* = \hat{Q}\, G(\theta)\). The idea of the varimax method is to find
the angle θ that maximizes the sum of the variances of the squared loadings \(\hat{q}^*_{j\ell}\) within each column of \(\hat{Q}^*\). More precisely, defining \(\tilde{q}^*_{j\ell} = \hat{q}^*_{j\ell}/\hat{h}^*_{j}\), the varimax criterion chooses θ so that
\[
V = \frac{1}{p} \sum_{\ell=1}^{k} \left[ \sum_{j=1}^{p} \left(\tilde{q}^*_{j\ell}\right)^4 - \frac{1}{p} \left\{ \sum_{j=1}^{p} \left(\tilde{q}^*_{j\ell}\right)^2 \right\}^2 \right]
\]
is maximized.
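For k = 2 the criterion depends on a single angle, so it can be maximized by a simple grid search. The following is a minimal numerical sketch in Python; the loading matrix `Q` is a hypothetical example, not taken from the text:

```python
import numpy as np

# Hypothetical estimated loading matrix Qhat with p = 5 variables, k = 2 factors
Q = np.array([[0.70, 0.70],
              [0.65, -0.66],
              [0.72, 0.67],
              [0.90, -0.30],
              [0.62, -0.68]])

def varimax_criterion(Q, theta):
    """Varimax criterion V for loadings rotated clockwise by angle theta (k = 2)."""
    G = np.array([[np.cos(theta), np.sin(theta)],
                  [-np.sin(theta), np.cos(theta)]])
    Qstar = Q @ G                                # rotated loadings Q* = Q G(theta)
    h = np.sqrt((Qstar ** 2).sum(axis=1))        # communalities h*_j (row norms)
    Qt = Qstar / h[:, None]                      # scaled loadings q~*_jl
    p = Q.shape[0]
    # V = (1/p) sum_l [ sum_j q~^4 - (1/p)(sum_j q~^2)^2 ]
    return ((Qt ** 4).sum(axis=0) - (Qt ** 2).sum(axis=0) ** 2 / p).sum() / p

# Grid search over theta in [0, pi/2] for the maximizing rotation angle
thetas = np.linspace(0, np.pi / 2, 1000)
best = thetas[np.argmax([varimax_criterion(Q, t) for t in thetas])]
```

Because G(θ) is orthogonal, the communalities are unchanged by the rotation, so the scaling by \(\hat{h}^*_j\) may equivalently be computed before or after rotating.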
EXAMPLE 10.6 Let us return to the marketing example of Johnson and Wichern (1998) (Example 10.5). The factor loadings of the first and the second factor given in Table 10.2 are almost identical, which makes it difficult to interpret the factors. Applying the varimax rotation we obtain the loadings \(\tilde{q}_1 = (0.02, 0.94, 0.13, 0.84, 0.97)^\top\) and \(\tilde{q}_2 = (0.99, -0.01, 0.98, 0.43, -0.02)^\top\). The high loadings show that variables 2, 4 and 5 define factor 1, a nutritional factor, while variables 1 and 3 define factor 2, which might be referred to as a taste factor.
Summary
↪ In practice, Q and Ψ have to be estimated from \(S = \hat{Q}\hat{Q}^\top + \hat{\Psi}\). The number of parameters is \(d = \frac{1}{2}(p-k)^2 - \frac{1}{2}(p+k)\).
↪ If d = 0, then there exists an exact solution. In practice, d is usually greater than 0, thus approximations must be considered.
↪ The maximum-likelihood method assumes a normal distribution for the data. A solution can be found using numerical algorithms.
↪ The method of principal factors is a two-stage method which calculates \(\hat{Q}\) from the reduced correlation matrix \(R - \tilde{\Psi}\), where \(\tilde{\Psi}\) is a pre-estimate of Ψ. The final estimate of Ψ is found by \(\hat{\psi}_{ii} = 1 - \sum_{j=1}^{k} \hat{q}_{ij}^2\).
↪ The principal component method is based on an approximation, \(\hat{Q}\), of Q.
↪ Often a more informative interpretation of the factors can be found by rotating the factors.
↪ The varimax rotation chooses a rotation θ that maximizes \(V = \frac{1}{p} \sum_{\ell=1}^{k} \left[ \sum_{j=1}^{p} (\tilde{q}^*_{j\ell})^4 - \frac{1}{p}\left\{\sum_{j=1}^{p} (\tilde{q}^*_{j\ell})^2\right\}^2 \right]\).
10.3 Factor Scores and Strategies
Up to now strategies have been presented for factor analysis that have concentrated on the
estimation of loadings and communalities and on their interpretations. This was a logical
step since the factors F were considered to be normalized random sources of information
and were explicitly addressed as nonspecific (common factors). The estimated values of the
factors, called the factor scores, may also be useful in the interpretation as well as in the
diagnostic analysis. To be more precise, the factor scores are estimates of the unobserved
random vectors Fl, l = 1, . . . , k, for each individual xi, i = 1, . . . , n. Johnson and Wichern
(1998) describe three methods which in practice yield very similar results. Here, we present
the regression method which has the advantage of being the simplest technique and is easy
to implement.
The idea is to consider the joint distribution of (X − µ) and F, and then to proceed with the regression analysis presented in Chapter 5. Under the factor model (10.4), the joint covariance matrix of (X − µ) and F is:
\[
\operatorname{Var}\begin{pmatrix} X - \mu \\ F \end{pmatrix} = \begin{pmatrix} QQ^\top + \Psi & Q \\ Q^\top & \mathcal{I}_k \end{pmatrix}. \tag{10.18}
\]
Note that the upper left entry of this matrix equals Σ and that the matrix has size (p + k) × (p + k).
Assuming joint normality, the conditional distribution of F | X is multinormal, see Theorem 5.1, with
\[
E(F \mid X = x) = Q^\top \Sigma^{-1} (X - \mu) \tag{10.19}
\]
and using (5.7) the covariance matrix can be calculated:
\[
\operatorname{Var}(F \mid X = x) = \mathcal{I}_k - Q^\top \Sigma^{-1} Q. \tag{10.20}
\]
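To make (10.19) and (10.20) concrete, the conditional moments can be computed numerically for a small hypothetical model; the values of Q, Ψ and x below are illustrative, not taken from any example in the text:

```python
import numpy as np

# Hypothetical loadings Q (p = 4, k = 2) and specific variances Psi
Q = np.array([[0.9, 0.0],
              [0.8, 0.3],
              [0.1, 0.9],
              [0.0, 0.8]])
Psi = np.diag([0.19, 0.27, 0.18, 0.36])
Sigma = Q @ Q.T + Psi                        # joint model: Sigma = QQ' + Psi

mu = np.zeros(4)
x = np.array([1.0, 0.5, -0.2, 0.3])          # one illustrative observation

Sinv = np.linalg.inv(Sigma)
cond_mean = Q.T @ Sinv @ (x - mu)            # E(F | X = x), eq. (10.19)
cond_var = np.eye(2) - Q.T @ Sinv @ Q        # Var(F | X = x), eq. (10.20)
```

Since (10.20) is a Schur complement of the positive semidefinite joint covariance (10.18), the resulting conditional covariance is itself positive semidefinite.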
In practice, we replace the unknown Q, Σ and µ by corresponding estimators, leading to the estimated individual factor scores:
\[
\hat{f}_i = \hat{Q}^\top S^{-1} (x_i - \bar{x}). \tag{10.21}
\]
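A minimal sketch of (10.21) on simulated data; the data and the loading matrix `Qhat` are hypothetical (in practice `Qhat` would come from an actual factor-analysis fit):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p, k = 200, 4, 2
X = rng.normal(size=(n, p))                  # simulated data matrix

# Hypothetical estimated loading matrix
Qhat = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.1, 0.9],
                 [0.2, 0.8]])

xbar = X.mean(axis=0)
S = np.cov(X, rowvar=False)                  # sample covariance matrix
Sinv = np.linalg.inv(S)

# f_i = Qhat' S^{-1} (x_i - xbar), computed for all i at once:
# row i of F equals (x_i - xbar)' Sinv Qhat, since Sinv is symmetric
F = (X - xbar) @ Sinv @ Qhat                 # n x k matrix of factor scores
```

Because the data are centered by \(\bar{x}\), the resulting score columns average to zero over the sample.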
We prefer to use the original sample covariance matrix S as an estimator of Σ, instead of the factor analysis approximation \(\hat{Q}\hat{Q}^\top + \hat{\Psi}\), in order to be more robust against incorrect determination of the number of factors.
The same rule can be followed when using R instead of S. Then (10.18) remains valid when standardized variables, i.e., \(Z = D_\Sigma^{-1/2}(X - \mu)\), are considered if \(D_\Sigma = \operatorname{diag}(\sigma_{11}, \ldots, \sigma_{pp})\). In this case the factors are given by
\[
\hat{f}_i = \hat{Q}^\top R^{-1} z_i, \tag{10.22}
\]
where \(z_i = D_S^{-1/2}(x_i - \bar{x})\), \(\hat{Q}\) is the loading matrix obtained with the matrix R, and \(D_S = \operatorname{diag}(s_{11}, \ldots, s_{pp})\).
If the factors are rotated by the orthogonal matrix G, the factor scores have to be rotated accordingly, that is
\[
\hat{f}^*_i = G^\top \hat{f}_i. \tag{10.23}
\]
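A quick numerical check that this rotation of scores is consistent with the rotation of loadings: since G is orthogonal, the fitted common part \(\hat{Q}\hat{f}_i\) is unchanged. The values below are illustrative only:

```python
import numpy as np

theta = 0.3
G = np.array([[np.cos(theta), np.sin(theta)],
              [-np.sin(theta), np.cos(theta)]])   # rotation matrix G(theta)

# Hypothetical loadings and scores for one individual
Qhat = np.array([[0.9, 0.1],
                 [0.8, 0.2],
                 [0.1, 0.9]])
f = np.array([1.2, -0.5])

Qstar = Qhat @ G          # rotated loadings, Q* = Q G
fstar = G.T @ f           # correspondingly rotated scores, f* = G' f (10.23)

# Q* f* = Q G G' f = Q f because G G' = I
reconstruction_original = Qhat @ f
reconstruction_rotated = Qstar @ fstar
```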