Springer Texts in Statistics

Advisors: George Casella, Stephen Fienberg, Ingram Olkin
Springer New York Berlin Heidelberg Barcelona Hong Kong London Milan Paris Singapore Tokyo
Springer Texts in Statistics
Alfred: Elements of Statistics for the Life and Social Sciences
Berger: An Introduction to Probability and Stochastic Processes
Bilodeau and Brenner: Theory of Multivariate Statistics
Blom: Probability and Statistics: Theory and Applications
Brockwell and Davis: An Introduction to Time Series and Forecasting
Chow and Teicher: Probability Theory: Independence, Interchangeability, Martingales, Third Edition
Christensen: Plane Answers to Complex Questions: The Theory of Linear Models, Second Edition
Christensen: Linear Models for Multivariate, Time Series, and Spatial Data
Christensen: Log-Linear Models and Logistic Regression, Second Edition
Creighton: A First Course in Probability Models and Statistical Inference
Dean and Voss: Design and Analysis of Experiments
du Toit, Steyn, and Stumpf: Graphical Exploratory Data Analysis
Durrett: Essentials of Stochastic Processes
Edwards: Introduction to Graphical Modelling, Second Edition
Finkelstein and Levin: Statistics for Lawyers
Flury: A First Course in Multivariate Statistics
Jobson: Applied Multivariate Data Analysis, Volume I: Regression and Experimental Design
Jobson: Applied Multivariate Data Analysis, Volume II: Categorical and Multivariate Methods
Kalbfleisch: Probability and Statistical Inference, Volume I: Probability, Second Edition
Kalbfleisch: Probability and Statistical Inference, Volume II: Statistical Inference, Second Edition
Karr: Probability
Keyfitz: Applied Mathematical Demography, Second Edition
Kiefer: Introduction to Statistical Inference
Kokoska and Nevison: Statistical Tables and Formulae
Kulkarni: Modeling, Analysis, Design, and Control of Stochastic Systems
Lehmann: Elements of Large-Sample Theory
Lehmann: Testing Statistical Hypotheses, Second Edition
Lehmann and Casella: Theory of Point Estimation, Second Edition
Lindman: Analysis of Variance in Experimental Design
Lindsey: Applying Generalized Linear Models
Madansky: Prescriptions for Working Statisticians
McPherson: Statistics in Scientific Investigation: Its Basis, Application, and Interpretation
Mueller: Basic Principles of Structural Equation Modeling: An Introduction to LISREL and EQS

(continued after index)
David Edwards
Introduction to Graphical Modelling
Second Edition
With 83 Illustrations
Springer
David Edwards
Statistics Department
Novo Nordisk A/S
DK-2880 Bagsvaerd
Denmark
[email protected]
Editorial Board

George Casella
Biometrics Unit
Cornell University
Ithaca, NY 14853-7801
USA

Stephen Fienberg
Department of Statistics
Carnegie Mellon University
Pittsburgh, PA 15213-3890
USA

Ingram Olkin
Department of Statistics
Stanford University
Stanford, CA 94305
USA
Library of Congress Cataloging-in-Publication Data
Edwards, David, 1949-
  Introduction to graphical modelling / David Edwards. 2nd ed.
    p. cm. (Springer texts in statistics)
  Includes bibliographical references and index.
  ISBN 0-387-95054-0 (alk. paper)
  1. Graphical modeling (Statistics) I. Title. II. Series.
QA279 .E34 2000
519.5'35 dc21                                             00-030760

Printed on acid-free paper.

© 1995, 2000 Springer-Verlag New York, Inc.

All rights reserved. This work may not be translated or copied in whole or in part without the written permission of the publisher (Springer-Verlag New York, Inc., 175 Fifth Avenue, New York, NY 10010, USA), except for brief excerpts in connection with reviews or scholarly analysis. Use in connection with any form of information storage and retrieval, electronic adaptation, computer software, or by similar or dissimilar methodology now known or hereafter developed is forbidden. The use of general descriptive names, trade names, trademarks, etc., in this publication, even if the former are not especially identified, is not to be taken as a sign that such names, as understood by the Trade Marks and Merchandise Marks Act, may accordingly be used freely by anyone.

Production managed by A. Orrantia; manufacturing supervised by Jeff Taub. Photocomposed copy prepared from the author's LaTeX files and formatted by The Bartlett Press, Marietta, GA. Printed and bound by Hamilton Printing Co., Rensselaer, NY. Printed in the United States of America.

9 8 7 6 5 4 3 2 1

ISBN 0-387-95054-0 Springer-Verlag New York Berlin Heidelberg
SPIN 10769313
I would rather discover a single causal relationship than be king of Persia.
Democritus
Preface to the Second Edition
In the five years since the first edition of this book was published, the study of graphical models and their application has picked up momentum. New types of graphs have been introduced, so as to capture different types of dependence structure. Application of the methodology to what used to be called expert systems, but are now often called probabilistic networks, has grown explosively. Another active area of study has been the ways in which directed acyclic graphs may contribute to causal inference. To address some of these new developments, two topics have been extended in this edition, each now being given a whole chapter (arguably, each deserves a whole book). Chapter 7 describes the use of directed graphs of various types, and Chapter 8 surveys some work on causal inference, with particular reference to graphical modelling. I have not attempted a description of probabilistic networks, for which many excellent texts are available, for example Cowell et al. (1999).
In addition to the new chapters, there are some lesser additions and revisions: the treatment of mean linearity and CG-regression models has been expanded, the description of MIM has been updated, and an appendix describing various estimation algorithms has been added. A diskette with the program is not included with the book, as it was with the previous edition: instead it can be downloaded from the internet. I am grateful to Don Rubin, Judea Pearl, Mervi Eerola, Thomas Richardson, and Vanessa Didelez for constructive comments on Chapter 8, and to Jan Koster and Elena Stanghellini for helpful advice.

March 20, 2000
David Edwards
Preface to the First Edition
Graphical modelling is a form of multivariate analysis that uses graphs to represent models. Although its roots can be traced back to path analysis (Wright, 1921) and statistical physics (Gibbs, 1902), its modern form is of recent origin. Key papers in the modern development include Darroch, Lauritzen, and Speed (1980), and Lauritzen and Wermuth (1989).
The purpose of this book is to provide a concise, application-oriented introduction to graphical modelling. The theoretical coverage is informal, and should be supplemented by other sources: the book by Whittaker (1990) would be a natural choice. Readers primarily interested in discrete data should consult the introductory-level book by Christensen (1990). Lauritzen (1992) provides a mathematically rigorous treatment: this is the source to consult about results stated here without proof. Applications of graphical modelling in a wide variety of areas are shown. These analyses make use of MIM, a command-driven PC program designed for graphical modelling. A student version of MIM is included with the book, and a reference guide is included as an appendix.
My interest in graphical modelling started in 1978–1980 under the influence of Terry Speed, who held a seminal lecture course on the topic in Copenhagen in 1978. Subsequent participation in a study group on graphical modelling, together with Steffen Lauritzen, Svend Kreiner, Morten Frydenberg, Jens Henrik Badsberg, and Poul Svante Eriksen, has served to stimulate and broaden my interest in the topic.
The first version of MIM was written in 1986–1987 at the Statistical Research Unit, Copenhagen University, when I was supported by a Danish Social Science Research Council grant. I wish to thank all the people who helped in the development of MIM, including: Brian Murphy, for assistance and inspiration, and for kindly supplying the program LOLITA, which served as a starting point for MIM; Morten Frydenberg, for crucial assistance with the modified iterative proportional scaling (MIPS) algorithm; Steffen Lauritzen, Nanny Wermuth, Hanns-Georg Leimer, Svend Kreiner, Jens Henrik Badsberg, and Joe Whittaker for encouraging help; Brian Francis and Joe Whittaker for contributing the SUSPEND and RETURN commands; Egon Hansen for help with the interactive graphics; Tue Tjur for generously letting me use his module for distribution functions; Svend Kreiner for helping me code Patefield's (1981) algorithm; and Marta Horakova for programming the explicit estimation and EH-selection procedures.
Finally, thanks are also due to Peter Smith, Philip Hougaard, and Helle Lynggaard for helpful suggestions.
January 28, 1995
David Edwards
Roskilde
Contents

Preface to the Second Edition
Preface to the First Edition

1 Preliminaries
   1.1 Independence and Conditional Independence
   1.2 Undirected Graphs
   1.3 Data, Models, and Graphs
   1.4 Simpson's Paradox
   1.5 Overview of the Book

2 Discrete Models
   2.1 Three-Way Tables
       2.1.1 Example: Lizard Perching Behaviour
   2.2 Multi-Way Tables
       2.2.1 Likelihood Equations
       2.2.2 Deviance
       2.2.3 Graphs and Formulae
       2.2.4 Example: Risk Factors for Coronary Heart Disease
       2.2.5 Example: Chromosome Mapping
       2.2.6 Example: University Admissions

3 Continuous Models
   3.1 Graphical Gaussian Models
       3.1.1 Likelihood
       3.1.2 Maximum Likelihood Estimation
       3.1.3 Deviance
       3.1.4 Example: Digoxin Clearance
       3.1.5 Example: Anxiety and Anger
       3.1.6 Example: Mathematics Marks
   3.2 Regression Models
       3.2.1 Example: Determinants of Bone Mineral Content

4 Mixed Models
   4.1 Hierarchical Interaction Models
       4.1.1 Models with One Discrete and One Continuous Variable
       4.1.2 A Model with Two Discrete and Two Continuous Variables
       4.1.3 Model Formulae
       4.1.4 Formulae and Graphs
       4.1.5 Maximum Likelihood Estimation
       4.1.6 Deviance
       4.1.7 A Simple Example
       4.1.8 Example: A Drug Trial Using Mice
       4.1.9 Example: Rats' Weights
       4.1.10 Example: Estrogen and Lipid Metabolism
   4.2 Breaking Models into Smaller Ones
   4.3 Mean Linearity
   4.4 Decomposable Models
   4.5 CG-Regression Models
       4.5.1 Example: Health Status Indicators
       4.5.2 Example: Side Effects of an Antiepileptic Drug
   4.6 Incomplete Data
       4.6.1 Assumptions for Missing Data
       4.6.2 Some Latent Variable Models
       4.6.3 Example: The Components of a Normal Mixture
       4.6.4 Example: Mathematics Marks, Revisited
   4.7 Discriminant Analysis
       4.7.1 Example: Breast Cancer

5 Hypothesis Testing
   5.1 An Overview
   5.2 χ²-Tests
   5.3 F-Tests
   5.4 Exact Conditional Tests
   5.5 Deviance-Based Tests
   5.6 Permutation F-Test
   5.7 Pearson χ² Test
   5.8 Fisher's Exact Test
   5.9 Rank Tests
   5.10 Wilcoxon Test
   5.11 Kruskal-Wallis Test
   5.12 Jonckheere-Terpstra Test
   5.13 Tests for Variance Homogeneity
   5.14 Tests for Equality of Means Given Homogeneity
   5.15 Hotelling's T²

6 Model Selection and Criticism
   6.1 Stepwise Selection
       6.1.1 Forward Selection
       6.1.2 Restricting Selection to Decomposable Models
       6.1.3 Using F-Tests
       6.1.4 Coherence
       6.1.5 Other Variants of Stepwise Selection
   6.2 The EH-Procedure
       6.2.1 Example: Estrogen and Lipid Metabolism, Continued
   6.3 Selection Using Information Criteria
   6.4 Comparison of the Methods
   6.5 Box-Cox Transformations
   6.6 Residual Analysis
   6.7 Dichotomization

7 Directed Graphs and Their Models
   7.1 Directed Acyclic Graphs
       7.1.1 Markov Properties of DAGs
       7.1.2 Modelling with DAGs
       7.1.3 Example: Side Effects of Neuroleptics
   7.2 Chain Graphs
       7.2.1 Markov Properties of Chain Graphs
       7.2.2 Modelling with Chain Graphs
       7.2.3 Example: Membership of the "Leading Crowd"
   7.3 Local Independence Graphs
   7.4 Covariance Graphs
   7.5 Chain Graphs with Alternative Markov Properties
   7.6 Reciprocal Graphs

8 Causal Inference
   8.1 Philosophical Aspects
   8.2 Rubin's Causal Model
       8.2.1 Estimating Causal Effects
       8.2.2 Ignorability
       8.2.3 Propensity Score
       8.2.4 Causal Hypothesis Testing
   8.3 Pearl's Causal Graphs
       8.3.1 A Simple Causal Model
       8.3.2 Causal Graphs
       8.3.3 The Back-Door Criterion
       8.3.4 The Front-Door Criterion
   8.4 Discussion
       8.4.1 Comparison of the Two Approaches
       8.4.2 Operational Implications

A The MIM Command Language
   A.1 Introduction
   A.2 Declaring Variables
   A.3 Undirected Models
       A.3.1 Deleting Edges
       A.3.2 Adding Edges
       A.3.3 Other Model-Changing Commands
       A.3.4 Model Properties
   A.4 Block-Recursive Models
       A.4.1 Defining the Block Structure
       A.4.2 Block Mode
       A.4.3 Defining Block-Recursive Models
       A.4.4 Working with Component Models
   A.5 Reading and Manipulating Data
       A.5.1 Reading Casewise Data
       A.5.2 Reading Counts, Means, and Covariances
       A.5.3 Transforming Data
       A.5.4 Restricting Observations
       A.5.5 Generating Raw Data
       A.5.6 Deleting Variables
   A.6 Estimation
       A.6.1 Undirected Models (Complete Data)
       A.6.2 Undirected Models (Missing Data)
       A.6.3 CG-Regression Models
   A.7 Hypothesis Testing
       A.7.1 χ²-Tests
       A.7.2 Test of Homogeneity
       A.7.3 F-Tests
       A.7.4 Edge Deletion Tests
       A.7.5 Edge Deletion F-Tests
       A.7.6 Exact Tests
       A.7.7 Symmetry Tests
       A.7.8 Randomisation Tests
   A.8 Model Selection
       A.8.1 Stepwise Selection
       A.8.2 The EH-Procedure
       A.8.3 Selection Using Information Criteria
   A.9 The Box-Cox Transformation
   A.10 Residuals
   A.11 Discriminant Analysis
   A.12 Utilities
       A.12.1 File Input
       A.12.2 The Workspace
       A.12.3 Printing Information
       A.12.4 Displaying Parameter Estimates
       A.12.5 Displaying Summary Statistics
       A.12.6 Setting the Maximum Model
       A.12.7 Fixing Variables
       A.12.8 Macros

B Implementation Specifics of MIM
   B.1 Calling MIM
   B.2 The Main Menu
   B.3 Entering Commands and Navigating the Work Area
   B.4 The Built-In Editor
   B.5 Interactive Data Entry
   B.6 Independence Graphs
   B.7 Simple Data Graphics
       B.7.1 Scatter Plots
       B.7.2 Histograms
       B.7.3 Box Plots
   B.8 Graphics Export Formats
   B.9 Direct Database Access
   B.10 Program Intercommunication

C On Multivariate Symmetry

D On the Estimation Algorithms
   D.1 The MIPS Algorithm
       D.1.1 Notation
       D.1.2 The Likelihood Equations
       D.1.3 The General Algorithm
       D.1.4 The Δ-Collapsible Variant
       D.1.5 The Mean Linear Variant
       D.1.6 The Q-Equivalent Variant
       D.1.7 The Step-Halving Variant
   D.2 The EM-Algorithm
   D.3 The ME-Algorithm

References

Index
1
Preliminaries
This chapter introduces some of the theory behind graphical modelling. The basic concepts of independence and conditional independence are reviewed, and an explanation of how conditional independence structures can be represented graphically is given. A brief discussion of Simpson's paradox is given to further motivate the graphical modelling approach. The final section gives an overview of the book.
1.1 Independence and Conditional Independence

The concept of independence is fundamental to probability and statistics. Two events A and B are said to be independent if

    Pr(A ∩ B) = Pr(A) Pr(B)

or, equivalently, if

    Pr(A|B) = Pr(A).

In this book we distinguish between two types of variables: continuous variables, whose values lie in the real line ℝ, and discrete variables (often called factors), which can take values from a finite set. For convenience, we label the values in this finite set as {1, 2, ..., #X}, where #X is the number of levels of X.
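The product rule for independent events can be checked mechanically on a toy example. The following sketch (a Python illustration, not part of the text; the two-dice events are invented) uses exact rational arithmetic so that the factorization holds identically:

```python
from fractions import Fraction

# Sample space for two fair dice: 36 equally likely outcomes.
omega = [(i, j) for i in range(1, 7) for j in range(1, 7)]

def pr(event):
    """Probability of an event (a set of outcomes) under the uniform measure."""
    return Fraction(len(event), len(omega))

A = {(i, j) for (i, j) in omega if i == 6}       # first die shows a six
B = {(i, j) for (i, j) in omega if j % 2 == 0}   # second die shows an even number

# Pr(A ∩ B) = Pr(A) Pr(B), so A and B are independent.
assert pr(A & B) == pr(A) * pr(B)
print(pr(A & B))   # 1/12
```

Here Pr(A) = 1/6 and Pr(B) = 1/2, and the intersection indeed has probability 1/12.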
When X is a random variable we write its density or mass function as f_X(x). If X is discrete, we may also write this as Pr(X = j), for level j ∈ {1, 2, ..., #X}.
Two random variables X and Y are said to be independent if their joint density factorizes into the product of their marginal densities:

    f_{X,Y}(x, y) = f_X(x) f_Y(y),

or, equivalently, if the conditional density of, say, Y given X = x is not a function of x, which we can write as

    f_{Y|X}(y|x) = f_Y(y).

The advantage of this characterization is that it does not involve the density of X. For example, if I is a fixed grouping factor, and Y is a response, then it is natural to examine the densities f_{Y|I}(y|i) for each level i of I. If these are constant over i, then we call this homogeneity rather than independence. Similarly, if X is a fixed continuous variable, then we will consider the conditional distributions f_{Y|X}(y|x), for x ∈ ℝ. Here, since x may take an infinity of values, it is necessary to adopt more specific parametric models, for example the simple linear regression model

    f_{Y|X}(y|x) ~ N(a + bx, σ²),

where a, b, and σ are unknown parameters. When b = 0, f_{Y|X}(y|x) is not a function of x, and we have the same situation as before. However, we do not usually refer to this as homogeneity, but instead may use the expression zero regression coefficients or something similar.

We now turn to the concept of conditional independence, which is of central importance in graphical modelling. Consider now three random variables, X, Y, and Z. If, for each value z, X and Y are independent in the conditional distribution given Z = z, then we say that X and Y are conditionally independent given Z, and we write this as X ⊥⊥ Y | Z. This notation is due to Dawid (1979), who discusses alternative characterizations of the property. One of these is that

    f_{X|Y,Z}(x|y, z) does not depend on y.

This is also appropriate when Y and/or Z are fixed.
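The defining property can be verified numerically for a small discrete distribution. The sketch below (Python with numpy, an illustration added here rather than taken from the text, with invented probabilities) constructs a joint distribution that satisfies X ⊥⊥ Y | Z by building it from p(z), p(x|z), and p(y|z), and then checks that within each slice z the conditional joint of (X, Y) factorizes:

```python
import numpy as np

# Conditional tables for two binary variables X, Y and a binary Z;
# rows are indexed by the level of z. (Numbers invented for illustration.)
pz   = np.array([0.4, 0.6])                   # p(z)
px_z = np.array([[0.2, 0.8], [0.7, 0.3]])     # p(x|z)
py_z = np.array([[0.5, 0.5], [0.1, 0.9]])     # p(y|z)

# Joint p(x, y, z) = p(z) p(x|z) p(y|z); stored as p[x, y, z].
p = np.einsum('k,ki,kj->ijk', pz, px_z, py_z)
assert np.isclose(p.sum(), 1.0)

# X ⊥⊥ Y | Z: for each z, p(x, y | z) = p(x|z) p(y|z).
for k in range(2):
    cond = p[:, :, k] / p[:, :, k].sum()
    assert np.allclose(cond, np.outer(cond.sum(axis=1), cond.sum(axis=0)))
```

Any joint distribution built this way satisfies the conditional independence; a distribution that fails the slice-by-slice check does not.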
!x/y,z(xly,z) does not depend on y. This is also appropriate when Y and/or Z are fixed. As illustration, consider some data from a study of health and social characteristics of Danish 70yearolds. Representative samples were taken in 1967 and again on a new cohort of 70yearolds in 1984 (Schult.zLarsen et aI., 1992). Body mass index (BM!) is a simple measure of obesity, de2 fined as weight/height . It is of interest to compare the distribution between males and females, and between the two years of sampling. 2
Figures 1.11.4 show histograms of BMI in kg/ m , broken down by gender and year. We write the true, unknown densities as
    f_{B|G,Y}(b | G = i, Y = j) = f_ij,
FIGURE 1.1. Males, 1967 sample.
FIGURE 1.2. Males, 1984 sample.
FIGURE 1.3. Females, 1967 sample.
FIGURE 1.4. Females, 1984 sample.

[Histograms of BMI by gender and sampling year; axis detail not reproduced.]
say, where i = 1, 2 (corresponding to male and female, respectively), and j = 1, 2 (corresponding to 1967 and 1984, respectively). If the two sexes have differing distributions of BMI, but there has been no change in these from 1967 to 1984, so that f_11 = f_12 and f_21 = f_22, then this is equivalent to

    BMI ⊥⊥ Year | Gender.

If the two sexes have the same distribution of BMI, but this changes from 1967 to 1984, so that f_11 = f_21 and f_12 = f_22, then this is equivalent to

    BMI ⊥⊥ Gender | Year.

If the distribution of BMI is the same over year and sex, so that f_11 = f_12 = f_21 = f_22, then this is equivalent to

    BMI ⊥⊥ (Gender, Year).
Another characterization of X ⊥⊥ Y | Z discussed by Dawid (1979) is that the joint density f_{X,Y,Z}(x, y, z) can be factorized into the product of two factors, one not involving x and the other not involving y, i.e., that

    f_{X,Y,Z}(x, y, z) = h(x, z) k(y, z),                    (1.1)
where h and k are some functions. We use this repeatedly below. It is often helpful to regard conditional independence as expressing the notion of irrelevance, in the sense that we can interpret the statement X ⊥⊥ Y | Z as saying something like:

    If we know Z, information about Y is irrelevant for knowledge of X.

This formulation can be helpful, for example, when eliciting substantive assumptions from subject-matter specialists, or when using graphs to communicate the conclusions of an analysis.
1.2 Undirected Graphs
As we describe in the next section, the key tool in graphical modelling is the independence graph of a model. Before we do this, we briefly introduce some graph-theoretic terms that will be useful later. The definitions are collected here for convenience; they do not need to be absorbed at first reading.

A graph, G = (V, E), is a structure consisting of a finite set V of vertices (also called nodes) and a finite set E of edges (also called arcs) between these vertices. We write vertices using roman letters X, Y, V and so on. In our context, they correspond to the variables in the model. We write an edge as [XY], or equivalently as [YX]. In many of the graphs we consider, each pair of vertices can have either no edge or one edge between them, and the edges are undirected. We represent a graph in a diagram, such as that in Figure 1.6. This graph is called undirected, since all the edges are undirected. We study other types of graphs in Chapter 7. The vertices are drawn as dots or circles: dots represent discrete variables, and circles represent continuous variables. Edges are drawn as straight lines between the vertices. Clearly a given graph can be drawn in an infinite number of ways; this does not affect its essential nature, which is just defined through the vertex set V and the edge set E.
FIGURE 1.5. A complete graph.
FIGURE 1.6. An incomplete graph.
We say that two vertices X, Y ∈ V are adjacent, written X ~ Y, if there is an edge between them, i.e., [XY] ∈ E. For example, in Figure 1.6, X and Y are adjacent but Y and Z are not.
We call a graph complete if there is an edge between every pair of vertices. For example, the graph in Figure 1.5 is complete. Any subset u C V induces a subgraph of Q. This is the graph Qu = (u, F) whose edge set F consists of those edges in E where both endpoints are in u. A subset u ~ V is called compl.ete if it induces a complete subgraph. In other words, if all the vertices in u are mutually adjacent, then it is complete. In Figure 1.6, the subset {X, Y, W} is complete.
A subset u ⊆ V is called a clique if it is maximally complete, i.e., u is complete, and if u ⊂ w, then w is not complete. The concept of a clique is important in graphical modelling, and often one needs to identify the cliques of a given graph. For example, the cliques of the graph shown in Figure 1.6 are {X, Y, W} and {X, Z}. A sequence of vertices X_0, ..., X_n such that X_{i-1} ∼ X_i for i = 1, ..., n is called a path between X_0 and X_n of length n. For example, in Figure 1.6, Z, X, Y, W is a path of length 3 between Z and W. Similarly, Z, X, W is a path of length 2 between Z and W. A graph is said to be connected if there is a path between every pair of vertices.
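Identifying the cliques of a small graph can be done by brute force. The sketch below is a minimal illustration in Python (the helper names are ours, not part of MIM); it recovers the two cliques of the graph in Figure 1.6:

```python
from itertools import combinations

def cliques(vertices, edges):
    """Brute-force clique finder for small undirected graphs.

    A set u is complete if every pair in u is joined by an edge;
    a clique is a maximally complete set."""
    edges = {frozenset(e) for e in edges}
    def complete(u):
        return all(frozenset(p) in edges for p in combinations(u, 2))
    complete_sets = [set(u) for r in range(1, len(vertices) + 1)
                     for u in combinations(vertices, r) if complete(u)]
    # keep only the maximal complete sets
    return [u for u in complete_sets
            if not any(u < v for v in complete_sets)]

# The graph of Figure 1.6: edges [XY], [XW], [YW], [XZ]
figure_1_6 = cliques("XYZW", [("X", "Y"), ("X", "W"), ("Y", "W"), ("X", "Z")])
```

Enumerating all vertex subsets is exponential in |V|, so this is only suitable for the small illustrative graphs of this chapter.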
A path X_1, X_2, ..., X_n, X_1 is called an n-cycle, or a cycle of length n. For example, in Figure 1.5, A, B, D, C, A is a 4-cycle. If the n vertices X_1, X_2, ..., X_n of an n-cycle X_1, X_2, ..., X_n, X_1 are distinct, and if X_j ∼ X_k only if |j - k| = 1 or n - 1, then we call it a chordless cycle. Figure 1.7 shows two graphs, each of which contains a chordless 4-cycle. (The chordless 4-cycle in the first graph can be difficult to spot.)
We call a graph triangulated if it has no chordless cycles of length greater than or equal to four. For example, if the edge [AC] were included in Figure 1.7 (left), the resulting graph would be triangulated. The triangulated property turns out to be closely related to the existence of closed-form maximum likelihood estimates, as we see later.
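The triangulated property can be checked mechanically: a graph is triangulated exactly when its vertices can be eliminated one at a time, each vertex being simplicial (its remaining neighbours mutually adjacent) at the moment of its removal. A minimal Python sketch of this standard test (our own helper, not part of MIM):

```python
from itertools import combinations

def is_triangulated(vertices, edges):
    """Chordality test via greedy simplicial elimination.

    A graph is triangulated (chordal) iff repeatedly removing a
    simplicial vertex empties the graph."""
    edges = {frozenset(e) for e in edges}
    adj = {v: {w for w in vertices if frozenset((v, w)) in edges}
           for v in vertices}
    remaining = set(vertices)
    while remaining:
        simplicial = next(
            (v for v in remaining
             if all(frozenset(p) in edges
                    for p in combinations(adj[v] & remaining, 2))),
            None)
        if simplicial is None:
            return False   # no simplicial vertex left: a chordless cycle exists
        remaining.remove(simplicial)
    return True

# A chordless 4-cycle A-B-D-C-A is not triangulated ...
square = [("A", "B"), ("B", "D"), ("D", "C"), ("C", "A")]
# ... but adding a chord such as [AD] makes it triangulated
```

This greedy strategy is safe because removing a simplicial vertex from a chordal graph leaves a chordal graph, and every non-empty chordal graph has a simplicial vertex.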
FIGURE 1.7. Two graphs with chordless 4-cycles.
1. Preliminaries
For three subsets a, b, and s of V, we say s separates a and b if all paths from a to b intersect s. For example, in Figure 1.6, {X} separates {Y, W} and {Z}. Finally, we define the concept of a boundary. The boundary of a subset u ⊆ V is the set of vertices in V \ u that are adjacent to some vertex in u.

                   A = 1               A = 2
Perch height   narrow   wide      narrow   wide
> 4.75           32      11         61      41
< 4.75           86      35         73      70

TABLE 2.1. Data on the perching behaviour of two species of lizards. Source: Schoener (1968).
diameter (B): 1 = narrow, 2 = wide; and perch height (C): 1 = high, 2 = low. The original data on perch diameter and height were continuous, but were later dichotomized. The data are shown in Table 2.1. We illustrate an analysis of these data, making use of MIM. The data are defined as follows:

MIM>factor A2B2C2; statread ABC
DATA>32 86 11 35 61 73 41 70 !
Reading completed.
The Factor command defines the three binary factors, A, B, and C. The command StatRead reads the data in the form of cell counts.
A sensible analysis strategy is to examine which conditional independence relations, if any, hold. To do this, we examine which edges can be removed from the complete graph. We set the current model to ABC (the full model) and then use the Stepwise command with the O (one step only) option:

MIM>model ABC
MIM>stepwise o
Coherent Backward Selection
Decomposable models, Chi-squared tests.
DFs adjusted for sparsity.
Single step.
Critical value: 0.0500
Initial model: ABC
Model: ABC
Deviance: 0.0000 DF: 0 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AB]          14.0241   2  0.0009 +
[AC]          11.8229   2  0.0027 +
[BC]           2.0256   2  0.3632
No change.
Selected model: ABC
The output presents the deviances of the three models formed by removing an edge from the complete graph, together with the associated chi-squared tests.
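The [BC] line of this output can be reproduced directly, since the model AC,AB obtained by removing [BC] is decomposable and its MLEs have the closed form m = n_{ab+} n_{a+c} / n_{a++}. The following Python sketch (not MIM; it assumes the StatRead cell order, with the last factor varying fastest) computes the deviance against the saturated model:

```python
from math import log

# Cell counts n[(a, b, c)] from Table 2.1, level of C varying fastest
levels = [(a, b, c) for a in (1, 2) for b in (1, 2) for c in (1, 2)]
n = dict(zip(levels, [32, 86, 11, 35, 61, 73, 41, 70]))

def deviance_drop_BC(n):
    """Deviance of AC,AB (B independent of C given A) against ABC.

    For this decomposable model the fitted counts have the closed
    form m = n_ab+ * n_a+c / n_a++."""
    dev = 0.0
    for (a, b, c), nabc in n.items():
        n_ab = sum(x for (a2, b2, _), x in n.items() if (a2, b2) == (a, b))
        n_ac = sum(x for (a2, _, c2), x in n.items() if (a2, c2) == (a, c))
        n_a = sum(x for (a2, _, _), x in n.items() if a2 == a)
        m = n_ab * n_ac / n_a
        dev += 2 * nabc * log(nabc / m)
    return dev

dev = deviance_drop_BC(n)
```

The result agrees with the 2.0256 printed by Stepwise, up to rounding of the cell counts.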
Only one edge, [BC], can be removed. We delete this edge and repeat the process:

MIM>delete BC
MIM>stepwise o
Coherent Backward Selection
Decomposable models, Chi-squared tests.
DFs adjusted for sparsity.
Single step.
Critical value: 0.0500
Initial model: AC,AB
Model: AC,AB
Deviance: 2.0256 DF: 2 P: 0.3632
Edge        Test
Excluded    Statistic  DF       P
[AB]          12.6062   1  0.0004 +
[AC]          10.4049   1  0.0013 +
No change.
Selected model: AC,AB
Deleting the edge [BC] results in the model AC,AB. We then consider the models formed by removing an edge from AC,AB. The output presents the deviance differences, together with the associated chi-squared tests. Neither of the two edges can be removed if we test at, say, the 1% level. The model AC,AB is thus the simplest acceptable model. Its graph is

    Perch height (C) --- Species (A) --- Perch diameter (B)

It states that for each species of lizard considered separately, perch height and perch diameter are unassociated.
2.2 Multi-Way Tables
In this section, we describe loglinear models for multi-way tables. Extending the results of the previous section to multi-way tables is essentially quite straightforward: the main difficulty is notational. Whereas for three-way tables the factor names and levels can be written explicitly, as for example in p_{ijk}, u^{BC}_{jk}, and n_{ij+}, this method becomes very cumbersome when the number of factors is arbitrary, and we need a more general notation.
2. Discrete Models
Let Δ be a set of p discrete variables. We suppose that the data to be analyzed consist of N observations on these p variables, which can be summarized in a p-dimensional table of counts, formed by cross-classifying the p discrete variables. We write a typical cell in the table as the p-tuple

i = (i_1, i_2, \ldots, i_p),    (2.3)

and denote the number of observations in cell i as n_i, the probability of cell i as p_i, and the expected cell count as m_i = N p_i. Let I be the set of all cells in the table. We only consider complete tables, so that the number of cells in I is the product of the numbers of levels of the factors in Δ. Under multinomial sampling, the likelihood of a given table {n_i}_{i∈I} is
\frac{N!}{\prod_{i \in I} n_i!} \prod_{i \in I} p_i^{n_i}.    (2.4)
We also need a notation for marginal cells. For a cell i ∈ I and a subset a ⊆ Δ, let i_a be the corresponding sub p-tuple of i, and let I_a be the set of all possible i_a. We write a general interaction term as u^a_{i_a}, where it is understood that u^a_{i_a} depends on i only through i_a. Thus, the full (saturated) loglinear model for p factors can be written

\ln(p_i) = \sum_{a \subseteq \Delta} u^a_{i_a}.
It is easy to show that for this model, the MLEs are \hat{p}_i = n_i/N.
As before, models are formed by setting interactions and all their higher-order relatives to zero. A model can be specified through a model formula d_1, ..., d_r, where the sets d_j ⊆ Δ are called generators. These identify the maximal interactions that are not set to zero. Thus, we may write the model as

\ln(p_i) = \sum_{a \subseteq \Delta:\; a \subseteq d_j \text{ for some } j} u^a_{i_a}.    (2.5)

2.2.1 Likelihood Equations
Let {n_{i_a}}_{i_a ∈ I_a} be the marginal table of observed counts corresponding to a. Similarly, let {\hat{m}_{i_a}}_{i_a ∈ I_a} be the marginal table of fitted counts for some estimate {\hat{m}_i}_{i∈I} of {m_i}_{i∈I}.
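Forming a marginal table simply sums the full table over the factors not in a; a small Python sketch (our own helper, not part of MIM), applied to the lizard table of Section 2.1:

```python
def marginal(table, delta, a):
    """Collapse a p-dimensional table of counts {n_i} onto the margin a.

    `table` maps full cells i (tuples indexed by the factors in delta)
    to counts; `a` is a subset of the factors in delta."""
    pos = [delta.index(f) for f in a]
    out = {}
    for cell, count in table.items():
        ia = tuple(cell[p] for p in pos)
        out[ia] = out.get(ia, 0) + count
    return out

# three-way lizard table from Section 2.1, factors (A, B, C)
n = {(1, 1, 1): 32, (1, 1, 2): 86, (1, 2, 1): 11, (1, 2, 2): 35,
     (2, 1, 1): 61, (2, 1, 2): 73, (2, 2, 1): 41, (2, 2, 2): 70}
n_ab = marginal(n, "ABC", "AB")
```

The likelihood equations for a loglinear model with generators d_1, ..., d_r equate exactly such observed marginal tables with the corresponding fitted ones.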
pairs, one of each type migrating to opposite poles of the nucleus. Eventually, in stage IV, the nucleus divides into two parts, each containing two chromosomes. Each of these subsequently develops into a spore, in time giving rise to separate mildew progeny. (See, for example, Suzuki et al. (1989), Ch. 5.)
that the two parental isolates have characteristics (1,1, ... ,1) and (2,2, ... ,2), respectively.
To understand these data, we must know more about the process of nuclear division (meiosis). The mildew fungus is a haploid organism; that is to say, each nucleus normally contains only one set of chromosomes. During the reproductive process, hyphae from two parents grow together, and the cells and nuclei fuse. Immediately afterwards, the nuclei undergo division, as shown in Figure 2.5. Put briefly, the chromosome complements of the two parents mix and give rise to two progeny. If crossing-over did not occur, each chromosome of a progeny would simply be chosen at random from the two corresponding parental chromosomes. Now consider the inheritance of individual loci. (We can think of these as blobs sitting on the chromosomes in Figure 2.5.) For a single locus, A, an offspring inherits either of the parental alleles, i.e., either A = 1 or A = 2. These are equiprobable, so that Pr(A = 1) = Pr(A = 2) = 1/2. If two loci, A and B, are on different chromosomes, then they are inherited independently, so that
Pr(A = i, B = j) = 1/4

for i, j = 1, 2. This is termed independent assortment. However, if they are on the same chromosome, then their inheritance is not independent. If crossing-over did not take place, then only parental combinations could
occur, so that we would have

Pr(A = i, B = j) = \begin{cases} 1/2 & \text{if } i = j \\ 0 & \text{otherwise} \end{cases}
for i, j = 1, 2. However, since crossing-over does take place, we have instead

Pr(A = i, B = j) = \begin{cases} (1 - p_r)/2 & \text{if } i = j \\ p_r/2 & \text{otherwise} \end{cases}
for i, j = 1, 2, where p_r is the probability of recombination between A and B, i.e., the occurrence of non-parental combinations. Since loci that are close together have a low probability of recombination, p_r is a measure (albeit nonadditive) of the distance between the loci. Independent assortment between two loci is often tested using the standard chi-squared goodness-of-fit test on the observed 2 x 2 contingency table (although a binomial test of p_r = 1/2 against p_r < 1/2 would have greater power). Similarly, if three loci are in the order A, B, C along a chromosome, then the occurrence of recombination between A and B will be independent of recombination between B and C, at least to a first approximation. If this is not the case, there is said to be interference. In other words, no interference is equivalent to A ⊥⊥ C | B. It follows that if we model the data set using loglinear models, we are interested in models whose graphs consist of variables linked in strings, which correspond to the chromosomes.
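That no interference implies A ⊥⊥ C | B can be verified analytically: under independent recombination events, the joint distribution of three linked loci factorizes along the string. A Python sketch (the recombination fractions 0.1 and 0.3 are purely illustrative):

```python
def joint(r1, r2):
    """Joint distribution of three linked loci A-B-C under no
    interference: recombination between A,B (prob r1) and between
    B,C (prob r2) occur independently."""
    p = {}
    for a in (1, 2):
        for b in (1, 2):
            for c in (1, 2):
                pb = (1 - r1) if b == a else r1   # B given A
                pc = (1 - r2) if c == b else r2   # C given B
                p[a, b, c] = 0.5 * pb * pc
    return p

def ci_A_C_given_B(p, tol=1e-12):
    """Check A _||_ C | B: p(a,b,c) * p(.,b,.) == p(a,b,.) * p(.,b,c)."""
    for (a, b, c) in p:
        pab = sum(p[a, b, cc] for cc in (1, 2))
        pbc = sum(p[aa, b, c] for aa in (1, 2))
        pb = sum(p[aa, b, cc] for aa in (1, 2) for cc in (1, 2))
        if abs(p[a, b, c] * pb - pab * pbc) > tol:
            return False
    return True

ok = ci_A_C_given_B(joint(0.1, 0.3))
```

The two-locus margin also recovers the (1 - p_r)/2 and p_r/2 probabilities displayed above.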
Again we proceed by selecting a parsimonious model using backwards selection, starting from the saturated model. Since the table is very sparse, asymptotic chi-squared tests would be quite inaccurate, and it is necessary to use exact tests; these are described later in Section 5.4:

MIM>fact A2B2C2D2E2F2
MIM>sread ABCDEF
DATA>0 0 0 0 3 0 1 0 0 1 0 0 0 1 0 0
DATA>1 0 1 0 7 1 4 0 0 0 0 2 1 3 0 11
DATA>16 1 4 0 1 0 0 0 1 4 1 4 0 0 0 1
DATA>0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 !
Reading completed.
MIM>satmod
MIM>step e
Coherent Backward Selection
Decomposable models, Chi-squared tests.
Exact tests, exhaustive enumeration.
DFs adjusted for sparsity.
Critical value: 0.0500
Initial model: ABCDEF
Model: ABCDEF
Deviance: 0.0000 DF: 0 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AB]          29.3360   5  0.0000 +
[AC]           0.4027   1  1.0000
[AD]          20.0479   3  0.0002 +
[AE]           4.5529   3  1.0000
[AF]           0.4027   1  1.0000
[BC]           1.1790   2  1.0000
[BD]           3.3078   3  0.6364
[BE]           6.8444   3  0.1881
[BF]           1.1790   2  1.0000
[CD]           0.3684   2  1.0000
[CE]           5.7735   4  0.3624
[CF]          48.7976   6  0.0000 +
[DE]           2.7335   4  1.0000
[DF]           0.7711   3  1.0000
[EF]           4.1392   4  0.6702
Removed edge [AE]
Model: BCDEF,ABCDF
Deviance: 4.5529 DF: 16 P: 0.9976
Edge        Test
Excluded    Statistic  DF       P
[AC]           0.2227   1  1.0000
[AF]           0.4230   1  1.0000
[BE]           4.7105   4  0.5840
[CE]           5.5934   4  0.3920
[DE]           1.0446   4  1.0000
[EF]           4.1595   4  0.6762
Removed edge [AC]
Model: BCDEF,ABDF
Deviance: 4.7756 DF: 24 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AF]           2.9974   2  0.3452
[BC]           1.8815   4  1.0000
[BE]           4.7105   4  0.5840
[CD]           1.4225   4  1.0000
[CE]           5.5934   4  0.3920
[DE]           1.0446   4  1.0000
[EF]           4.1595   4  0.6762
Removed edge [BC]
Model: CDEF,BDEF,ABDF
Deviance: 6.6571 DF: 32 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AF]           2.9974   2  0.3452
[BE]           4.5564   4  0.6289
[CD]           1.4032   3  1.0000
[CE]           5.4393   4  0.5288
Removed edge [CD]
Model: CEF,BDEF,ABDF
Deviance: 8.0603 DF: 36 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AF]           2.9974   2  0.3452
[BE]           4.5564   4  0.6289
[CE]           4.1912   2  0.2072
[DE]           1.4387   4  0.9554
Removed edge [DE]
Model: CEF,BEF,ABDF
Deviance: 9.4990 DF: 40 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AF]           2.9974   2  0.3452
[BD]           2.5611   2  0.2230
[BE]           4.4793   2  0.1323
[CE]           4.1912   2  0.2072
[DF]           3.0386   3  0.6152
Removed edge [DF]
Model: CEF,BEF,ABF,ABD
Deviance: 12.5375 DF: 44 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[AF]           0.0216   1  1.0000
[BD]           0.0581   1  1.0000
[BE]           4.4793   2  0.1323
[CE]           4.1912   2  0.2072
Removed edge [AF]
Model: CEF,BEF,ABD
Deviance: 12.5592 DF: 46 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[BD]           0.0581   1  1.0000
[BE]           4.4793   2  0.1323
[BF]           2.2159   2  0.3396
[CE]           4.1912   2  0.2072
Removed edge [BD]
Model: CEF,BEF,AD,AB
Deviance: 12.6172 DF: 48 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[BE]           4.4793   2  0.1323
[BF]           2.2159   2  0.3396
[CE]           4.1912   2  0.2072
Removed edge [BF]
Model: CEF,BE,AD,AB
Deviance: 14.8331 DF: 50 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[BE]           6.4075   1  0.0154 +
[CE]           4.1912   2  0.2072
[EF]           2.4240   2  0.4644
Removed edge [EF]
Model: CF,CE,BE,AD,AB
Deviance: 17.2571 DF: 52 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[CE]          10.5570   1  0.0016 +
No change.
Selected model: CF,CE,BE,AD,AB
The graph of the selected model, CF,CE,BE,AD,AB, is shown in Figure 2.6. We note that it is in the form of a string, consistent with our expectation that there is no interference. Note also that the edges retained in the graph have associated p-values of less than 0.0016, except [BE], whose p-value is 0.0154, and the edges removed have p-values > 0.3396. This implies that the selection procedure would have given the same result had the critical value been anywhere between 0.0155 and 0.3396. The data thus strongly suggest that the order of the loci is as shown. (See Edwards (1992) for a more detailed discussion of this example.)
FIGURE 2.6. The selected model, a chromosome map.
2.2.6 Example: University Admissions
Our final example with discrete data concerns the analysis of a table showing admissions to Berkeley in autumn 1973, cross-classified by department and sex of applicant (Freedman et al., 1978). These data, shown in Table 2.3, are also analyzed in Agresti (1990, pp. 225-228). A hypothetical example of this kind was described in the illustration of Simpson's paradox in Section 1.4.
If we label the variables admission (A), department (D), and sex (S), then as we saw in Section 1.4, the interesting question is whether A ⊥⊥ S | D. We examine this in the following fragment:

MIM>fact S2A2D6
MIM>label A "Admission" S "Sex" D "Department"
MIM>sread DSA
DATA>512 313 89 19 353 207 17 8
DATA>120 205 202 391 138 279 131 244
DATA>53 138 94 299 22 351 24 317 !
Reading completed.
MIM>model ADS
MIM>testdel AS
Test of HO: DS,AD against H: ADS
LR: 21.7355 DF: 6 P: 0.0014
The hypothesis is strongly rejected; this seems to imply that there is evidence of sexual discrimination in the admission process. To examine this more closely, recall that A ⊥⊥ S | D means that for every department (D),

                           Whether admitted
Department    Sex          Yes      No
I             Male         512     313
              Female        89      19
II            Male         353     207
              Female        17       8
III           Male         120     205
              Female       202     391
IV            Male         138     279
              Female       131     244
V             Male          53     138
              Female        94     299
VI            Male          22     351
              Female        24     317

TABLE 2.3. Admissions to Berkeley in autumn 1973. Source: The Graduate Division, University of California, Berkeley.
admission (A) is independent of sex (S). We can break down the likelihood ratio test for conditional independence into a sum of tests for independence in each department, using

2 \sum_{d=1}^{6} \sum_{s=1}^{2} \sum_{a=1}^{2} n_{asd} \ln\left(\frac{n_{asd}\, n_{++d}}{n_{a+d}\, n_{+sd}}\right) = \sum_{d=1}^{6} \left\{ 2 \sum_{s=1}^{2} \sum_{a=1}^{2} n_{asd} \ln\left(\frac{n_{asd}\, n_{++d}}{n_{a+d}\, n_{+sd}}\right) \right\},

where the expression inside {} is the deviance test for independence for the dth department. To decompose the deviance in this fashion, we employ a special option (Z) with the TestDelete command:

MIM>testdel AS z
Test of HO: DS,AD against H: ADS
Deviance Decomposition into Strata
D    LR-Test   df    P
1     19.054    1    0.000
2      0.259    1    0.611
3      0.751    1    0.386
4      0.298    1    0.585
5      0.990    1    0.320
6      0.384    1    0.536
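The stratum-wise deviances above can be reproduced from Table 2.3; the sketch below (Python, not MIM) computes the per-department G² statistics, which sum exactly to the overall likelihood ratio test of 21.7355:

```python
from math import log

# (dept, sex, admitted) counts in the order read by sread DSA
counts = [512, 313, 89, 19, 353, 207, 17, 8,
          120, 205, 202, 391, 138, 279, 131, 244,
          53, 138, 94, 299, 22, 351, 24, 317]
cells = [(d, s, a) for d in range(1, 7) for s in (1, 2) for a in (1, 2)]
n = dict(zip(cells, counts))

def g2_department(n, d):
    """Likelihood ratio test of A _||_ S within department d:
    G2 = 2 * sum n * log(n * n_total / (row_total * col_total))."""
    tot = sum(x for (dd, _, _), x in n.items() if dd == d)
    g2 = 0.0
    for s in (1, 2):
        for a in (1, 2):
            row = sum(n[d, s, aa] for aa in (1, 2))   # sex margin
            col = sum(n[d, ss, a] for ss in (1, 2))   # admission margin
            g2 += 2 * n[d, s, a] * log(n[d, s, a] * tot / (row * col))
    return g2

dept_tests = [g2_department(n, d) for d in range(1, 7)]
```

Only department I contributes appreciably to the total deviance, in agreement with the printed decomposition.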
Departure from independence is only in evidence for Department I; for none of the other departments is there evidence of discrimination. Thus, the data are best described using two graphs, as shown in Figures 2.7 and 2.8. This example illustrates a phenomenon easily overlooked in graphical modelling: namely, that the dependence structure may well differ for different subsets of the data. It is not difficult to imagine other examples. For instance, different disease subtypes may have different etiologies (causes). Højsgaard (1998) describes a method of generalization to multidimensional tables called split graphical models.
FIGURE 2.7. Department 1.

FIGURE 2.8. Other departments.
Some problems exhibit a related type of hierarchical structure in which some variables are only defined for particular subsets of the data. For example, McCullagh and Nelder (1989, p. 160) describe a study of radiation mortality involving the following variables:
1. exposure (exposed/unexposed),
2. mortality (alive/dead),

3. death due to cancer or other causes,

4. leukemia or other cancers.
Clearly, (3) is only defined if (2) is dead, and (4) is only defined if (3) is cancer. Although the data could be summarized in a 2 x 4 table, with the row factor being exposure (yes/no) and the column factor being outcome (alive, leukemia death, other cancer death, noncancer death), it is more helpful to structure the problem as a sequence of conditional models (see Section 7.1).
Returning to the college admissions example, we may note that for Department I, 89 females (82%) as opposed to 512 males (62%) were admitted, i.e., the discrimination was in favour of females. For all the departments combined, however, the numbers were 557 females (30%) as opposed to 1198 males (44%). This is a clear example of Simpson's paradox.
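The reversal described here is easily checked from Table 2.3; a short Python sketch totalling the admission rates:

```python
# (admitted, rejected) per department, from Table 2.3
male   = [(512, 313), (353, 207), (120, 205), (138, 279), (53, 138), (22, 351)]
female = [(89, 19), (17, 8), (202, 391), (131, 244), (94, 299), (24, 317)]

def rate(pairs):
    """Overall admission rate for a list of (admitted, rejected) pairs."""
    yes = sum(y for y, _ in pairs)
    return yes / (yes + sum(r for _, r in pairs))

# Department I favours females ...
dept1_female, dept1_male = rate(female[:1]), rate(male[:1])
# ... yet the aggregate rate favours males: Simpson's paradox
overall_female, overall_male = rate(female), rate(male)
```

The department-wise and aggregated comparisons point in opposite directions, which is exactly the reversal the text describes.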
3
Continuous Models
In this chapter, we describe models based on the multivariate normal distribution that are analogous to the loglinear models of the previous section. The best introduction to these models is given by Whittaker (1990), who aptly calls them graphical Gaussian models, although they are perhaps more widely known as covariance selection models, following Dempster (1972).
3.1 Graphical Gaussian Models

Suppose Y = (Y_1, ..., Y_q)' is a q-dimensional random variable, with a multivariate normal distribution with mean

\mu = \begin{pmatrix} \mu^1 \\ \vdots \\ \mu^q \end{pmatrix}    (3.1)

and covariance matrix

\Sigma = \begin{pmatrix} \sigma^{11} & \cdots & \sigma^{1q} \\ \vdots & \ddots & \vdots \\ \sigma^{q1} & \cdots & \sigma^{qq} \end{pmatrix}.    (3.2)

(No apologies for the superscripts: these are used instead of the usual subscripts for notational reasons that will become apparent in the next chapter.)
We are very interested in the inverse covariance matrix \Omega = \Sigma^{-1}, written

\Omega = \begin{pmatrix} \omega^{11} & \cdots & \omega^{1q} \\ \vdots & \ddots & \vdots \\ \omega^{q1} & \cdots & \omega^{qq} \end{pmatrix}.    (3.3)

This is often called the precision matrix; some authors prefer the term concentration matrix.
It can be shown that the conditional distribution of (Y_1, Y_2) given (Y_3, ..., Y_q) is a bivariate normal distribution with covariance

\begin{pmatrix} \omega^{11} & \omega^{12} \\ \omega^{21} & \omega^{22} \end{pmatrix}^{-1}.    (3.4)

The correlation coefficient in this bivariate distribution,

\rho^{12 \cdot 3 \cdots q} = \frac{-\omega^{12}}{\sqrt{\omega^{11}\omega^{22}}},    (3.5)

is called the partial correlation coefficient. We see that

\rho^{12 \cdot 3 \cdots q} = 0 \iff \omega^{12} = 0.    (3.6)

In other words, two variables are independent given the remaining variables if and only if the corresponding element of the inverse covariance is zero. Thus, the elements of the inverse covariance matrix play the same role here as two-factor interaction terms in loglinear models.
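Equation (3.6) is easy to verify numerically. In the Python sketch below (the 3 x 3 matrix is an illustrative example, not taken from the text), a precision matrix with a zero in position (1,3) yields a nonzero marginal covariance between variables 1 and 3, but a zero partial correlation:

```python
def inverse(m):
    """Gauss-Jordan inversion with partial pivoting (small matrices)."""
    k = len(m)
    aug = [row[:] + [float(i == j) for j in range(k)]
           for i, row in enumerate(m)]
    for col in range(k):
        piv = max(range(col, k), key=lambda r: abs(aug[r][col]))
        aug[col], aug[piv] = aug[piv], aug[col]
        p = aug[col][col]
        aug[col] = [x / p for x in aug[col]]
        for r in range(k):
            if r != col:
                f = aug[r][col]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[col])]
    return [row[k:] for row in aug]

def partial_corr(omega, j, k):
    # equation (3.5): rho = -omega_jk / sqrt(omega_jj * omega_kk)
    return -omega[j][k] / (omega[j][j] * omega[k][k]) ** 0.5

# a precision matrix with omega_13 = 0: variables 1 and 3 are
# conditionally independent given variable 2
omega = [[2.0, -1.0, 0.0],
         [-1.0, 2.0, -1.0],
         [0.0, -1.0, 2.0]]
sigma = inverse(omega)   # the implied covariance matrix
```

Note that sigma[0][2] is nonzero: variables 1 and 3 are marginally correlated even though they are conditionally independent given variable 2.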
It is instructive to derive this result in another way. The density of Y can be written

f(y) = (2\pi)^{-q/2} |\Sigma|^{-1/2} \exp\{-(y - \mu)'\Sigma^{-1}(y - \mu)/2\}.    (3.7)

Collecting terms inside the exponential brackets, we can rewrite this as

f(y) = \exp(\alpha + \beta'y - \tfrac{1}{2} y'\Omega y),    (3.8)

where \Omega = \Sigma^{-1} as before, \beta = \Sigma^{-1}\mu, and \alpha is the normalizing constant, given by

\alpha = \{-q \ln(2\pi) + \ln|\Omega| - \beta'\Omega^{-1}\beta\}/2.    (3.9)

In exponential family terminology, \beta and \Omega are called canonical parameters. Equation (3.8) can be rewritten as

f(y) = \exp\Big(\alpha + \sum_{j=1}^{q} \beta_j y_j - \tfrac{1}{2} \sum_{j=1}^{q} \sum_{k=1}^{q} \omega^{jk} y_j y_k\Big).    (3.10)
From this we can see, using the factorization criterion (1.1), that

Y_j ⊥⊥ Y_k | (the rest) \iff \omega^{jk} = 0.    (3.11)
Graphical Gaussian models are defined by setting specified elements of the inverse covariance matrix, and hence partial correlation coefficients, to zero. For example, if q = 4, then we could consider a model setting \omega^{13} = \omega^{24} = 0. Thus, the inverse covariance would look like

\Omega = \begin{pmatrix}
\omega^{11} & \omega^{12} & 0 & \omega^{14} \\
\omega^{21} & \omega^{22} & \omega^{23} & 0 \\
0 & \omega^{32} & \omega^{33} & \omega^{34} \\
\omega^{41} & 0 & \omega^{43} & \omega^{44}
\end{pmatrix}.    (3.12)

The graph of this model is formed by connecting two nodes with an edge if the corresponding partial correlations are not set to zero. Notice that we use circles for the continuous variables, while in the previous chapter we used dots for the discrete variables. We now introduce a formula convention for graphical Gaussian models. For this purpose, we now label variables with letters rather than numbers. We call the set of variables Γ.
Just as with discrete graphical models, a model formula consists of a list of variable sets (generators) that are given as the cliques of the graph. For example, consider a graph on the vertices V, W, X, Y, Z in which the missing edges are [WY] and [XZ], so that the model sets \omega^{WY} and \omega^{XZ} to zero. The cliques of the graph are {V, W, X}, {V, W, Z}, {V, Z, Y}, and {V, X, Y}, so the model formula is

//VWX,VWZ,VZY,VXY.
The double slashes are just a convention, necessary in connection with mixed models, as we shall see in the next chapter.
Note that all the models are graphical (hence the name: graphical Gaussian models): in contrast to the models for discrete data described in the
Model Formula              Decomposable
//X,YZ                     yes
//XY,YZ                    yes
//XYZ                      yes
//WX,XY,YZ,WZ              no
//WXZ,XYZ                  yes
//WX,XYZ                   yes
//VWX,VWZ,VXY,VYZ          no

TABLE 3.1. Some graphical Gaussian models and their graphs.
previous chapter, there are no nongraphical models, and there is a one-to-one correspondence between models and graphs. Models with closed-form maximum likelihood estimates are called, as before, decomposable, and again a model is decomposable if and only if its graph is triangulated. Table 3.1 shows some models and their graphs, indicating whether they are decomposable.
3.1.1 Likelihood
Now suppose a sample of N observations y^{(1)}, y^{(2)}, ..., y^{(N)} is taken; let \bar{y} = \sum_{k=1}^{N} y^{(k)}/N be the sample mean vector, and let

S = \frac{1}{N} \sum_{k=1}^{N} (y^{(k)} - \bar{y})(y^{(k)} - \bar{y})'

be the sample covariance matrix. The log density can be written as

\ln f(y) = -\tfrac{q}{2} \ln(2\pi) - \tfrac{1}{2} \ln|\Sigma| - \tfrac{1}{2}(y - \mu)'\Sigma^{-1}(y - \mu),
so the log likelihood of the sample is

\ell(\mu, \Omega) = -\tfrac{Nq}{2} \ln(2\pi) - \tfrac{N}{2} \ln|\Sigma| - \tfrac{1}{2} \sum_{k=1}^{N} (y^{(k)} - \mu)'\Omega(y^{(k)} - \mu).

We can simplify the last term by writing

\sum_{k=1}^{N} (y^{(k)} - \mu)'\Omega(y^{(k)} - \mu) = \sum_{k=1}^{N} (y^{(k)} - \bar{y})'\Omega(y^{(k)} - \bar{y}) + N(\bar{y} - \mu)'\Omega(\bar{y} - \mu).
state (W), anger state (X), anxiety trait (Y), and anger trait (Z). The trait variables are viewed as stable personality characteristics and the state variables as pertaining to behaviour in specific situations. The example is also treated in Wermuth (1991). Psychological theory suggests that W ⊥⊥ Z | (X, Y) and X ⊥⊥ Y | (W, Z). We define the variables and read the data into the program as follows:

MIM>cont WXYZ
MIM>label W "Anxiety st" X "Anger st" Y "Anxiety tr" Z "Anger tr"
MIM>sread WXYZ
DATA>684
DATA>18.8744 15.2265 21.2019 23.4217
DATA>37.1926
DATA>24.9311 44.8472
DATA>21.6056 17.8072 32.2462
DATA>15.6907 21.8565 18.3523 43.1191 !
We examine the independence structure using the Stepwise command:
MIM>model //WXYZ
MIM>stepwise o
Coherent Backward Selection
Decomposable models, chi-squared tests.
Single step.
Critical value: 0.0500
Initial model: //WXYZ
Model: //WXYZ
Deviance: 0.0000 DF: 0 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[WX]         153.8998   1  0.0000 +
[WY]         171.5093   1  0.0000 +
[WZ]           1.2212   1  0.2691
[XY]           0.3313   1  0.5649
[XZ]          78.0384   1  0.0000 +
[YZ]          72.9794   1  0.0000 +
No change.
Selected model: //WXYZ
The tests for the removal of [WZ] and [XY] are not rejected. We delete these edges from the model and fit the resulting model.

MIM>delete WZ,XY
MIM>print
The current model is //WX,WY,XZ,YZ.
MIM>fit
Deviance: 2.1033 DF: 2
MIM>test
Test of HO: //WX,WY,XZ,YZ against H: //WXYZ
LR: 2.1033 DF: 2 P: 0.3494
The chi-squared test indicates that the model fits the data well; the graph is shown in Figure 3.1. We write out the parameter estimates:

MIM>print f
Fitted counts, means and covariances.
W   37.193
X   24.931   44.847
Y   21.606   17.022   32.246
Z   16.886   21.856   18.352   43.119
Means 18.874   15.226   21.202   23.422   684.000
          W        X        Y        Z     Count
MIM>print i
Fitted discrete, linear and partial correlation parameters.
W    1.000
X    0.449    1.000
Y    0.471    0.000    1.000
Z    0.000    0.318    0.307    1.000
Linear 0.179    0.074    0.378    0.350    19.537
           W        X        Y        Z  Discrete
3.1.6 Example: Mathematics Marks

These data are taken from Mardia, Kent and Bibby (1979). The variables are examination marks for 88 students on five different subjects, namely
FIGURE 3.1. The anxiety and anger model. We see that like correlates with like: the pair of variables related to anxiety are connected, as are the pair related to anger, the two state variables, and the two trait variables.
mechanics (V), vectors (W), algebra (X), analysis (Y), and statistics (Z). All are measured on the same scale (0-100). Whittaker (1990) bases his exposition of graphical Gaussian models on this example. The data set is moderately large, and is therefore stored on a file, say \data\mkb, to be read into MIM. The file has the following appearance:

cont VWXYZ
label V "mechanics" W "vectors" X "algebra" Y "analysis"
label Z "statistics"
read VWXYZ
77 82 67 67 81
63 78 80 70 81
75 73 71 66 81
 5 26 15 20 20
 0 40 21  9 14
!
The data are read into MIM, and the parameters for the saturated model are examined:

MIM>input \data\mkb
Reading completed.
MIM>print s
Calculating marginal statistics...
Empirical counts, means and covariances
V  302.293
W  125.777  170.878
X  100.425   84.190  111.603
Y  105.065   93.597  110.839  217.876
Z  116.071   97.887  120.486  153.768  294.372
Means 38.955   50.591   50.602   46.682   42.307    88.000
          V        W        X        Y        Z     Count
The correlation and partial correlation matrices are more informative:
MIM>print uv
Empirical counts, means and correlations
V    1.000
W    0.553    1.000
X    0.547    0.610    1.000
Y    0.409    0.485    0.711    1.000
Z    0.389    0.436    0.665    0.607    1.000
Means 38.955   50.591   50.602   46.682   42.307    88.000
          V        W        X        Y        Z     Count
Empirical discrete, linear and partial correlations
V    1.000
W    0.329    1.000
X    0.230    0.281    1.000
Y    0.002    0.078    0.432    1.000
Z    0.025    0.020    0.357    0.253    1.000
Linear 0.064    0.152    0.497    0.021    0.074    29.882
           V        W        X        Y        Z  Discrete
We note that all the correlations are positive, reflecting the fact that students who do well on one subject are apt to do well on the others. Apart from this, the correlation matrix does not appear to exhibit any particular structure. The partial correlation matrix, however, reveals a block of elements rather close to zero, namely between (Y, Z) and (V, W). To examine the edge removal deviances, we use the Stepwise command:

MIM>satmod; stepwise o
Coherent Backward Selection
Decomposable models, Chi-squared tests.
Single step.
Critical value: 0.0500
Initial model: //VWXYZ
Model: //VWXYZ
Deviance: 0.0000 DF: 0 P: 1.0000
Edge        Test
Excluded    Statistic  DF       P
[VW]          10.0999   1  0.0015 +
[VX]           4.8003   1  0.0285 +
[VY]           0.0002   1  0.9880
[VZ]           0.0532   1  0.8176
[WX]           7.2286   1  0.0012 +
[WY]           0.5384   1  0.4631
[WZ]           0.0361   1  0.8494
[XY]          18.1640   1  0.0000 +
[XZ]          11.9848   1  0.0005 +
[YZ]           5.8118   1  0.0159 +
No change.
FIGURE 3.2. The graph of //VWX,XYZ, which resembles a butterfly.
The edge removal deviances for the four edges are very small. To fit the model without the edges, we specify:
MIM>delete VY,VZ,WY,WZ; fit; test
Deviance: 0.8957 DF: 4
Test of HO: //VWX,XYZ against H: //VWXYZ
LR: 0.8957 DF: 4 P: 0.9252
The model fits the data very well; its graph is shown in Figure 3.2. It states that the marks for analysis and statistics are conditionally independent of mechanics and vectors, given algebra. One implication of the model is that to predict the statistics marks, the marks for algebra and analysis are sufficient. Algebra is evidently of central importance.
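Because //VWX,XYZ is decomposable, its deviance has a closed form: the determinant of the fitted covariance factorizes over the cliques {V,W,X}, {X,Y,Z} and the separator {X}, the Gaussian analogue of the closed-form fitted counts of Chapter 2. A Python sketch checking the 0.8957 printed above from the empirical covariances:

```python
from math import log

def det(m):
    # determinant by cofactor expansion along the first row
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j]
               * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def sub(m, idx):
    # principal submatrix on the rows/columns in idx
    return [[m[i][j] for j in idx] for i in idx]

# empirical covariance matrix S of (V, W, X, Y, Z), N = 88, as printed above
S = [[302.293, 125.777, 100.425, 105.065, 116.071],
     [125.777, 170.878,  84.190,  93.597,  97.887],
     [100.425,  84.190, 111.603, 110.839, 120.486],
     [105.065,  93.597, 110.839, 217.876, 153.768],
     [116.071,  97.887, 120.486, 153.768, 294.372]]
N = 88

# cliques {V,W,X}, {X,Y,Z} and separator {X} of the butterfly graph
VWX, XYZ, SEP = [0, 1, 2], [2, 3, 4], [2]
log_det_fitted = (log(det(sub(S, VWX))) + log(det(sub(S, XYZ)))
                  - log(det(sub(S, SEP))))
deviance = N * (log_det_fitted - log(det(S)))
```

The deviance is N times the log of the ratio of fitted to empirical generalized variances; it reproduces MIM's value up to the rounding of the printed covariances.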
The model parameters can be examined by specifying

MIM>print ihu
Empirical counts, means and correlations
V    1.000
W    0.553    1.000
X    0.547    0.610    1.000
Y    0.409    0.485    0.711    1.000
Z    0.389    0.436    0.665    0.607    1.000
Means 38.955   50.591   50.602   46.682   42.307    88.000
          V        W        X        Y        Z     Count
Fitted counts, means and correlations
V    1.000
W    0.553    1.000
X    0.547    0.610    1.000
Y    0.389    0.433    0.711    1.000
Z    0.363    0.405    0.665    0.607    1.000
Means 38.955   50.591   50.602   46.682   42.307    88.000
          V        W        X        Y        Z     Count
Fitted discrete, linear and partial correlation parameters
V    1.000
W    0.332    1.000
X    0.235    0.327    1.000
Y    0.000    0.000    0.451    1.000
Z    0.000    0.000    0.364    0.256    1.000
Linear 0.066    0.146    0.491    0.010    0.073    29.830
           V        W        X        Y        Z  Discrete
Observe that the fitted and empirical correlations agree on the submatrices corresponding to the cliques of the graph, i.e., {V, W, X} and {X, Y, Z}, as required by the likelihood equations. The following fragment calculates F-tests, first for removal of edges present in the selected model, and then for addition of edges not present.

MIM>stepwise os
Coherent Backward Selection
Decomposable models, F-tests where appropriate.
Single step.
Critical value: 0.0500
Initial model: //VWX,XYZ
Model: //VWX,XYZ
Deviance: 0.8957 DF: 4 P: 0.9252
Edge        Test
Excluded    Statistic   DF         P
[VW]          10.5009   1, 85  0.0017 +
[VX]           9.5037   1, 85  0.0028 +
[WX]          20.4425   1, 85  0.0000 +
[XY]          31.0908   1, 85  0.0000 +
[XZ]          17.9095   1, 85  0.0001 +
[YZ]           5.9756   1, 85  0.0166 +
No change.
Selected model: //VWX,XYZ
MIM>stepwise ofs
Non-coherent Forward Selection
Decomposable models, F-tests where appropriate.
Single step.
Critical value: 0.0500
Initial model: //VWX,XYZ
Model: //VWX,XYZ
Deviance: 0.8957 DF: 4 P: 0.9252
Edge        Test
Added       Statistic   DF         P
[VY]           0.1057   1, 85  0.7459
[VZ]           0.1432   1, 85  0.7061
[WY]           0.7384   1, 85  0.3926
[WZ]           0.2365   1, 85  0.6280
No change.
Selected model: //VWX,XYZ
There is no need to modify the model. We study these data further in Sections 4.6.4, 6.6, and 6.7.
3.2 Regression Models

Up to now in this chapter, we have assumed that the q continuous variables in Γ have a joint multivariate normal distribution. Within the same framework of graphical Gaussian models, we can work with covariates, i.e., we can consider some variables as fixed. For these variables, we need make no distributional assumptions. Suppose that the q variables consist of q_1 response variables and q_2 = q - q_1 explanatory variables, so that we write Y as (Y_1, Y_2), where Y_1 is a q_1-vector of responses and Y_2 is a q_2-vector of explanatory variables. We partition similarly Γ = (Γ_1, Γ_2), μ = (μ_1, μ_2), etc. We assume the multivariate regression framework, which we can write as the model

Y_1 = A + B Y_2 + V.    (3.18)
Here, A is a q_1-vector of intercepts, B is a q_1 x q_2 matrix of regression coefficients, and V is a random q_1-vector with the N(0, Ψ) distribution. To work with these conditional models in the joint framework, we restrict attention to graphical Gaussian models that contain all edges between the explanatory variables. As explained more carefully in Section 4.2 below, it then follows that likelihood ratio tests between nested conditional models are identical to tests between the corresponding joint models, i.e., the models in which the covariates are assumed to be random. Similarly, MLEs for the conditional models are directly obtained from MLEs for the joint models.

What do these conditional models look like? To see this, consider the conditional distribution of Y_1 given Y_2 = y_2. We know from standard results that this is multivariate normal with mean

\mu_{1|2} = \mu_1 + \Sigma_{12}\Sigma_{22}^{-1}(y_2 - \mu_2)    (3.19)

and covariance

\Sigma_{11 \cdot 2} = \Sigma_{11} - \Sigma_{12}\Sigma_{22}^{-1}\Sigma_{21}.    (3.20)

So equating (3.18) with (3.19) and (3.20), we find that the matrix of regression coefficients is B = \Sigma_{12}\Sigma_{22}^{-1}, the intercept is A = \mu_1 - B\mu_2, and the covariance matrix is \Psi = \Sigma_{11 \cdot 2} = \Sigma_{11} - B\Sigma_{21}.

Imposing a graphical model will constrain the A, B, and \Psi parameters in various ways. The most direct way to see this is to calculate the canonical
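The identities B = \Sigma_{12}\Sigma_{22}^{-1}, A = \mu_1 - B\mu_2, and \Psi = \Sigma_{11} - B\Sigma_{21} can be checked on a small numerical example; in the Python sketch below the covariance values are illustrative, not taken from the text (one response, two explanatory variables):

```python
def inv2(m):
    # inverse of a 2x2 matrix
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# joint parameters of (Y1; Y2a, Y2b), partitioned with one response
sigma11 = 4.0                        # var(Y1)
sigma12 = [1.0, 2.0]                 # cov(Y1, Y2)
sigma22 = [[2.0, 1.0], [1.0, 3.0]]   # var(Y2)
mu1, mu2 = 1.0, [0.5, -1.0]

s22inv = inv2(sigma22)
# B = Sigma_12 Sigma_22^{-1},  A = mu_1 - B mu_2,  Psi = Sigma_11 - B Sigma_21
B = [sum(sigma12[k] * s22inv[k][j] for k in range(2)) for j in range(2)]
A = mu1 - sum(B[j] * mu2[j] for j in range(2))
Psi = sigma11 - sum(B[j] * sigma12[j] for j in range(2))
```

Conditioning can only reduce the response variance, so Psi lies strictly between 0 and sigma11 whenever the variables are correlated.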
,.
,
,, •
3.2. Regression Models
51
•
parameters corresponding to JL1.2 and (312 = (~112t1JLI2 = (31 _
,
~112,
namely
n12 Y2,
together with n l1 2
•
= (~112t1 = n11 .
n
12 The expression {31 Y2 represents a model for the linear canonical parameters {31.2. It includes covariates (Y2) and coefficients to these (the
elements of n12). Note that the expressions for {31.2 and n are considerably simpler than those for 111.2 and ~11.2. It is interesting to observe that whereas the moments parameters are invariant under marginalization, as in (JLl, ~ll), the canonical parameters display simi/iar invariance under conditioning, as in 1 12 ll (8  n Y2,n ). 11 . 2
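These identities are easy to verify numerically. The following sketch (Python with numpy; the joint covariance matrix and all numbers are invented for illustration) computes B, A, and Ψ from a partitioned joint distribution, and checks that the canonical parameters of the conditional distribution satisfy Ω11.2 = Ω11 and β1.2 = β1 − Ω12 y2:

```python
import numpy as np

# A hypothetical 4-dimensional joint Gaussian: q1 = 2 responses, q2 = 2 covariates.
L = np.array([[2.0, 0.0, 0.0, 0.0],
              [0.5, 1.5, 0.0, 0.0],
              [0.3, 0.4, 1.0, 0.0],
              [0.2, 0.1, 0.6, 1.2]])
Sigma = L @ L.T                      # positive definite by construction
mu = np.array([1.0, 2.0, 0.5, -0.5])

S11, S12 = Sigma[:2, :2], Sigma[:2, 2:]
S21, S22 = Sigma[2:, :2], Sigma[2:, 2:]
mu1, mu2 = mu[:2], mu[2:]

B = S12 @ np.linalg.inv(S22)         # regression coefficients in (3.18)
A = mu1 - B @ mu2                    # intercepts
Psi = S11 - B @ S21                  # conditional covariance Sigma_{11.2}

Omega = np.linalg.inv(Sigma)         # joint precision (canonical) matrix
beta = Omega @ mu
O11, O12 = Omega[:2, :2], Omega[:2, 2:]

# Omega_{11.2} = (Sigma_{11.2})^{-1} = Omega_{11}: invariance under conditioning
assert np.allclose(np.linalg.inv(Psi), O11)

# beta_{1.2} = (Sigma_{11.2})^{-1} mu_{1.2} = beta_1 - Omega_{12} y2
y2 = np.array([0.8, -1.1])
mu12 = mu1 + B @ (y2 - mu2)
assert np.allclose(np.linalg.inv(Psi) @ mu12, beta[:2] - O12 @ y2)
```

The check exploits the block-inverse identity Ω11 Σ12 + Ω12 Σ22 = 0, which is what makes the canonical parameters so simple under conditioning.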
For some models, the linear structure imposed on the β1.2 induces the same linear structure on the μ1.2. This is the case when all response variables have identical formulae, i.e., when the rows of Ω12 have zeros in the same places.
Conventional multivariate regression models posit the same model for the mean of each response variable and an unrestricted covariance matrix, so these can be handled as graphical Gaussian models in which the rows of Ω12 have zeros in the same places, and Ω11 has no zeros. (See Section 4.3 for further study of these models.) If some elements of Ω11 are set to zero, then this allows the covariance structure between the response variables to be modelled. For example, suppose that Γ1 = {X, Y, Z} and Γ2 = {V, W}, and consider the model //XYVW,YZVW. This specifies that Z ⊥⊥ X | (Y, V, W). The inverse covariance matrix has the form

          X       Y       Z       V       W
  X  [  ω^XX    ω^XY     0      ω^XV    ω^XW  ]
  Y  [  ω^XY    ω^YY    ω^YZ    ω^YV    ω^YW  ]
Ω = Z [   0     ω^YZ    ω^ZZ    ω^ZV    ω^ZW  ]
  V  [  ω^XV    ω^YV    ω^ZV    ω^VV    ω^VW  ]
  W  [  ω^XW    ω^YW    ω^ZW    ω^VW    ω^WW  ]
The linear canonical parameter in the conditional distribution has the form

             [ ω^XV   ω^XW ]
β1.2 = β1 −  [ ω^YV   ω^YW ]  ( v )
             [ ω^ZV   ω^ZW ]  ( w ),
[Figure 3.3: panels (a) and (b) show independence graphs on the response variables X, Y, Z and the boxed covariates V, W.]
FIGURE 3.3. Graphs of two nonstandard multivariate regression models.
so using μ1.2 = (Ω11)⁻¹ β1.2, we see that the means of Y1 conditional on Y2 = y2 are of the form

E(X | V = v, W = w) = a1 + b1 v + c1 w
E(Y | V = v, W = w) = a2 + b2 v + c2 w
E(Z | V = v, W = w) = a3 + b3 v + c3 w,        (3.21)

where the parameters {ai, bi, ci}, i = 1...3, are unconstrained. Since the term ω^XZ is set to zero, the conditional covariance matrix is restricted, so the model is not a conventional multivariate regression model. Its independence graph is shown in Figure 3.3(a), where for clarity a box is drawn around the covariates.
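The effect of setting ω^XZ = 0 can also be seen numerically: conditioning on the covariates leaves the zero in the precision matrix intact, so X and Z remain conditionally independent and the conditional covariance matrix is restricted. A minimal sketch (numpy; the precision matrix is invented, but it has the zero pattern displayed above, with variable order X, Y, Z, V, W):

```python
import numpy as np

# Hypothetical precision matrix over (X, Y, Z, V, W) with omega^XZ = 0,
# as in the model //XYVW,YZVW; values chosen to be positive definite.
Omega = np.array([[1.0, 0.3, 0.0, 0.2, 0.1],
                  [0.3, 1.2, 0.4, 0.1, 0.2],
                  [0.0, 0.4, 1.1, 0.3, 0.1],
                  [0.2, 0.1, 0.3, 1.5, 0.2],
                  [0.1, 0.2, 0.1, 0.2, 1.3]])

Psi = np.linalg.inv(Omega[:3, :3])   # conditional covariance of (X,Y,Z) given (V,W)
K = np.linalg.inv(Psi)               # conditional precision: equals Omega[:3,:3]

# the zero survives conditioning, so Psi is a restricted covariance matrix
assert abs(K[0, 2]) < 1e-9
# the partial correlation of X and Z given (Y, V, W) vanishes
assert abs(-Omega[0, 2] / np.sqrt(Omega[0, 0] * Omega[2, 2])) < 1e-12
```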
If the response variables do not have identical linear structures, i.e., the rows of Ω12 do not have identical zero patterns, then this induces constraints on the parameters of the linear system (3.21). In some cases the model may correspond to a different linear system with unconstrained parameters; for example, if it is decomposable, it will be expressible as a recursive linear system. An example is the model with graph shown in Figure 3.3(b), which is obtained from the previous graph by deleting the edges [ZW] and [YW]. It is expressible as the recursive system
X = a1 + b1 v + c1 w + ε1
Y = a2 + b2 v + c2 X + ε2
Z = a3 + b3 v + c3 Y + ε3,

where εi ~ N(0, σi²), for i = 1...3, are independent error terms. Note that a given model may be expressible as recursive systems in many different ways (see Section 4.2 and Wermuth, 1990).
Finally, we consider a model for time series data. For example, a second order autoregressive, or AR(2), model for data measured at six timepoints can be represented as follows:
[Graph: vertices X1, ..., X6, with X1 joined to X2 and each Xt (t ≥ 3) joined to Xt−1 and Xt−2.]

Here, we condition on {X1, X2}; the remaining variables are linearly dependent on the two previous variables. We note in passing that graphical modelling for time series data is an active research topic. See Lynggaard and Walther (1993), Brillinger (1996), Dahlhaus et al. (1997), and Dahlhaus (2000).
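The AR(2) graph implies that the inverse covariance of (X1, ..., X6) is pentadiagonal. A small numerical check (numpy; the autoregressive coefficients are invented, and X1, X2 are taken standardized and independent — any positive definite start gives the same zero pattern) builds the joint covariance from the recursion Xt = φ1 Xt−1 + φ2 Xt−2 + εt and inspects the precision matrix:

```python
import numpy as np

phi1, phi2, sig2 = 0.5, -0.3, 1.0
T = 6
# z = (X1, X2, eps3, ..., eps6); the matrix M maps z linearly to (X1, ..., X6)
D = np.eye(T) * sig2
D[0, 0] = D[1, 1] = 1.0
M = np.zeros((T, T))
M[0, 0] = M[1, 1] = 1.0
for t in range(2, T):
    M[t] = phi1 * M[t - 1] + phi2 * M[t - 2]   # AR(2) recursion
    M[t, t] += 1.0                             # coefficient of the new error term
Sigma = M @ D @ M.T
Omega = np.linalg.inv(Sigma)

# the precision matrix is pentadiagonal: zeros beyond lag 2
for i in range(T):
    for j in range(T):
        if abs(i - j) > 2:
            assert abs(Omega[i, j]) < 1e-9
```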
3.2.1 Example: Determinants of Bone Mineral Content
As an illustration of the application of graphical Gaussian models in the multivariate regression framework, we consider data arising in a clinical study of the effects of estrogen therapy on bone mineral content in 150 postmenopausal women (Munk-Jensen et al., 1994). There are three explanatory and three response variables. The three explanatory variables are menopausal age in weeks (U), body mass index, i.e., weight/height² (V), and alkaline phosphatase, an enzyme involved in bone metabolism (W). The three response variables correspond to three ways of estimating bone mineral content, namely bone mineral content determination in the upper arm (X), bone mineral density determination in the spine (Y), and bone mineral content determination in the spine (Z). All measurements were taken in a baseline period prior to active treatment. The object of the analysis is to use these cross-sectional data to see whether the explanatory variables are significant determinants of bone mineral content: this could be of use, for example, in identifying risk factors for osteoporosis. The data are defined as follows:
MIM>cont UVWXYZ
MIM>label U "Men Age" V "BMI" W "Alk"
MIM>label X "BMC-arm" Y "BMD-spine" Z "BMC-spine"
MIM>read xyzuvw
The sample count, means, and covariances are displayed using the Print S command:
MIM>print s
Calculating marginal statistics...
Empirical counts, means and covariances.
X       0.034
Y       0.013   0.017
Z       0.066   0.082   0.448
U       0.635   0.465   1.979 724.710
V       0.000   0.000   0.000  -0.001   0.000
W       0.007   0.008   0.031  -1.305   0.000   0.046
Means   1.163   0.929   4.222  92.518   0.002   5.098 139.000
        X       Y       Z       U       V       W     Count
We see that 11 cases have missing values on one or more variables, so only 139 cases contribute to the analysis. We examine the correlation and partial correlation matrices:

MIM>print uv
Empirical counts, means and correlations.
X       1.000
Y       0.528   1.000
Z       0.540   0.930   1.000
U       0.129   0.131   0.110   1.000
V       0.156   0.083   0.097  -0.080   1.000
W       0.176   0.265   0.215  -0.226   0.212   1.000
Means   1.163   0.929   4.222  92.518   0.002   5.098 139.000
        X       Y       Z       U       V       W     Count
Empirical discrete, linear and partial correlation parameters.
X       1.000
Y       0.078   1.000
Z       0.173   0.895   1.000
U       0.058   0.039   0.023   1.000
V       0.239   0.044   0.073   0.029   1.000
W       0.011   0.172   0.072   0.183   0.223   1.000
Linear 29.822 161.232  15.381   0.022 1899.335 133.305 390.486
        X       Y       Z       U       V       W     Discrete
We note that the partial correlations, except between the spinal measurements (Y and Z), are rather small. We can also examine the conditional distribution given the explanatory variables, by using the DisplayData command:

MIM>DisplayData XYZ,UVW
Empirical conditional means and covariances.
X       1.900   0.001   0.112  48.943     0.032
Y       1.743   0.000   0.172  42.829     0.012   0.016
Z       7.564   0.002   0.724 222.151     0.063   0.075   0.416
                U       W       V         X       Y       Z
The first four columns give the sample estimates of the coefficients of the conditional means of X, Y, and Z, and the last three columns show the sample estimate of the conditional covariance matrix (lower triangular part). We note that the coefficients of U are small, and that the conditional variances and covariances are only slightly smaller than the corresponding marginal quantities. Thus, only a small proportion of the variation in the response variables can be explained by the explanatory variables.
To find a simple model consistent with the data, we perform backward selection starting from the full model. We fix the edges between the covariates in the graph using the Fix command to prevent these edges from being removed. We restrict the selection process to decomposable models and use F-tests.
MIM>satmodel; fix UVW; stepwise s
Fixed variables: UVW
Coherent Backward Selection
Decomposable models, F-tests where appropriate.
Critical value: 0.0500
Initial model: //UVWXYZ
Model: //UVWXYZ
Deviance: 0.0000 DF: 0 P: 1.0000
Edge       Test
Excluded   Statistic   DF       P
[UX]         0.4430    1, 133   0.5068
[UY]         0.1995    1, 133   0.6558
[UZ]         0.0734    1, 133   0.7869
[VX]         8.0852    1, 133   0.0052 +
[VY]         0.2579    1, 133   0.6124
[VZ]         0.7198    1, 133   0.3977
[WX]         0.0166    1, 133   0.8977
[WY]         4.0409    1, 133   0.0464 +
[WZ]         0.6955    1, 133   0.4058
[XY]         0.8239    1, 133   0.3657
[XZ]         4.1250    1, 133   0.0442 +
[YZ]       535.9620    1, 133   0.0000 +
Removed edge [WX]
Model: //UVWYZ,UVXYZ
Deviance: 0.0173 DF: 1 P: 0.8953
Edge       Test
Excluded   Statistic   DF       P
[UX]         0.4297    1, 134   0.5133
[WZ]         0.7622    1, 134   0.3842
[XY]         0.8140    1, 134   0.3686
Removed edge [UX]
Model: //UVWYZ,VXYZ
Deviance: 0.4623 DF: 2 P: 0.7936
Edge       Test
Excluded   Statistic   DF       P
[UY]         0.2532    1, 134   0.6156
[UZ]         0.0252    1, 134   0.8741
[WZ]         0.7622    1, 134   0.3842
[XY]         0.9172    1, 135   0.3399
Removed edge [UZ]
Model: //UVWY,VXYZ,VWYZ
Deviance: 0.4885 DF: 3 P: 0.9214
Edge       Test
Excluded   Statistic   DF       P
[UY]         0.8954    1, 135   0.3457
[WZ]         0.8491    1, 135   0.3584
[XY]         0.9172    1, 135   0.3399
Removed edge [WZ]
Model: //UVWY,VXYZ
Deviance: 1.3600 DF: 4 P: 0.8511
Edge       Test
Excluded   Statistic   DF       P
[UY]         0.8954    1, 135   0.3457
[VZ]         1.2316    1, 135   0.2691
[XY]         0.9172    1, 135   0.3399
Removed edge [UY]
Model: //UVW,VXYZ,VWY
Deviance: 2.2789 DF: 5 P: 0.8094
Edge       Test
Excluded   Statistic   DF       P
[VZ]         1.2316    1, 135   0.2691
[XY]         0.9172    1, 135   0.3399
Removed edge [XY]
Selected model: //UVW,VXZ,VWY,VYZ
The graph of the selected model is:
[Graph: the boxed covariates Menop. Age (U), BMI (V), and Alk. Phos. (W), and the responses BMC-arm (X), BMD-spine (Y), and BMC-spine (Z), with edges as given by the selected model //UVW,VXZ,VWY,VYZ.]
A surprising implication of the model selected is that, given body mass index and the alkaline phosphatase level, menopausal age does not appear to influence the bone mineral content measurements. This would suggest
that, in some sense, the influence of menopausal age on bone mineral content is mediated by the level of alkaline phosphatase and the body mass index. An alternative explanation is that these variables are confounded with menopausal age, and that the failure to find association is due to lack of power.
To see whether there are non-decomposable models that provide better fits to the data, we continue the stepwise selection procedure in unrestricted mode using χ²-tests.

MIM>stepwise uo
Coherent Backward Selection
Unrestricted models, chi-squared tests.
Single step.
Critical value: 0.0500
Initial model: //UVW,VXZ,VWY,VYZ
Model: //UVW,VXZ,VWY,VYZ
Deviance: 3.2201 DF: 6 P: 0.7808
Edge       Test
Excluded   Statistic   DF   P
[VX]         8.8999    1    0.0029 +
[VY]         0.0075    1    0.9311
[VZ]         2.3733    1    0.1234
[WY]        12.2502    1    0.0005 +
[XZ]        53.3352    1    0.0000 +
[YZ]       278.1634    1    0.0000 +
No change.
Selected model: //UVW,VXZ,VWY,VYZ
We see that the edges [VY] and [VZ], whose removal in each case leads to a non-decomposable model, yield nonsignificant p-values. To see whether they can both be removed, we first delete [VY] and test whether [VZ] can be removed:
MIM>delete VY; testdelete VZ
Test of HO: //UVW,VX,XZ,WY,YZ
against H: //UVW,VXZ,WY,YZ
LR: 10.0447 DF: 1 P: 0.0015
This is strongly rejected. Similarly, we can replace [VY], remove [VZ], and then test for the removal of [VY]:
MIM>add VY; delete VZ; testdelete VY
Test of HO: //UVW,VX,XZ,WY,YZ
against H: //UVW,VX,XZ,VWY,YZ
LR: 7.6789 DF: 1 P: 0.0056
This is also strongly rejected. Thus, either [VZ] or [VY] must be present, but it is not clear which. This is an example of non-orthogonality, due to multicollinearity between the two spinal measurements.
A reason for preferring the decomposable model first selected to the two non-decomposable submodels is that of interpretation. As we describe later in Section 4.4, a decomposable model is equivalent to a sequence of univariate regressions, whereas a non-decomposable model is not. So a decomposable model suggests a causal explanation for the data (see Cox, 1993). Of course, there is an important distinction between selecting a model that suggests a causal explanation, and claiming to have found evidence for causality. We discuss some related issues in Chapter 8.
4
Mixed Models

This chapter describes a family of models for mixed discrete and continuous variables that combine and generalize the models of the previous two chapters. Graphical models for mixed discrete and continuous variables were introduced by Lauritzen and Wermuth (1989), and both undirected and directed types of models were described. The undirected models (graphical interaction models) were extended to a broader class, the hierarchical interaction models, in Edwards (1990). The latter are constructed by combining loglinear models for discrete variables with graphical Gaussian models for continuous variables, as we now describe.

4.1 Hierarchical Interaction Models
Suppose we have p discrete variables and q continuous variables, and write the sets of variables as Δ and Γ, respectively. We write the corresponding random variables as (I, Y), and a typical observation as (i, y). Here, i is a p-tuple containing the values of the discrete variables, and y is a real vector of length q. We write ℐ for the set of all possible i.
We suppose that the probability that I = i is p_i, and that the distribution of Y given I = i is multivariate normal N(μ_i, Σ_i), so that both the conditional mean and covariance may depend on i. This is called the CG (conditional Gaussian) distribution. The density can be written as
f(i, y) = p_i |2πΣ_i|^(−1/2) exp{−½ (y − μ_i)′ Σ_i⁻¹ (y − μ_i)}.        (4.1)
The parameters {p_i, μ_i, Σ_i}, i ∈ ℐ, are called the moments parameters.
We are often interested in models for which the covariance is constant over i, so that Σ_i = Σ. Such models are called homogeneous. As we shall see later, there are generally two graphical models corresponding to a given graph: a heterogeneous model and a homogeneous model.
We rewrite (4.1) in the more convenient form
f(i, y) = exp{α_i + β_i′ y − ½ y′ Ω_i y},        (4.2)

where α_i is a scalar, β_i is a q-vector, and Ω_i is a q × q symmetric positive definite matrix. These are called the canonical parameters. As in the previous chapter, we can transform between the moments and the canonical parameters using
Ω_i = Σ_i⁻¹,        (4.3)
β_i = Σ_i⁻¹ μ_i,        (4.4)
α_i = ln(p_i) − ½ ln|Σ_i| − ½ μ_i′ Σ_i⁻¹ μ_i − (q/2) ln(2π),        (4.5)

and

Σ_i = Ω_i⁻¹,        (4.6)
μ_i = Ω_i⁻¹ β_i,        (4.7)
p_i = (2π)^(q/2) |Ω_i|^(−1/2) exp{α_i + ½ β_i′ Ω_i⁻¹ β_i}.        (4.8)
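Equations (4.3)–(4.8) can be verified by a round trip between the two parameterizations. A minimal sketch (numpy; the cell probability, mean, and covariance are invented for one cell i with q = 2):

```python
import numpy as np

q = 2
p_i = 0.3
mu_i = np.array([1.0, -2.0])
Sigma_i = np.array([[2.0, 0.5],
                    [0.5, 1.5]])

# moments -> canonical, (4.3)-(4.5)
Omega_i = np.linalg.inv(Sigma_i)
beta_i = Omega_i @ mu_i
alpha_i = (np.log(p_i) - 0.5 * np.log(np.linalg.det(Sigma_i))
           - 0.5 * mu_i @ Omega_i @ mu_i - (q / 2) * np.log(2 * np.pi))

# canonical -> moments, (4.6)-(4.8)
Sigma_back = np.linalg.inv(Omega_i)
mu_back = Sigma_back @ beta_i
p_back = ((2 * np.pi) ** (q / 2) * np.linalg.det(Omega_i) ** -0.5
          * np.exp(alpha_i + 0.5 * beta_i @ Sigma_back @ beta_i))

assert np.allclose(Sigma_back, Sigma_i)
assert np.allclose(mu_back, mu_i)
assert np.isclose(p_back, p_i)
```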
Hierarchical interaction models are constructed by restricting the canonical parameters in a similar fashion to loglinear models. That is to say, the canonical parameters are expanded as sums of interaction terms, and models are defined by setting higherorder interaction terms to zero. To introduce this, we examine some simple examples.
4.1.1 Models with One Discrete and One Continuous Variable

First, let us consider the case where p = q = 1, that is, where there is one discrete and one continuous variable. Let Δ = {A} and Γ = {Y}. The density can be written as
f(i, y) = p_i (2πσ_i²)^(−1/2) exp{−(y − μ_i)²/(2σ_i²)} = exp{α_i + β_i y − ½ ω_i y²}.        (4.9)
Replacing the canonical parameters with interaction term expansions, we rewrite (4.9) as
f(i, y) = exp{(u + u_i^A) + (v + v_i^A) y − ½ (w + w_i^A) y²}.
The quadratic canonical parameter is ω_i = w + w_i^A. Since σ_i² = ω_i⁻¹, we see that the cell variances are constant if w_i^A = 0. (We now see that we used superscripts in the previous chapter so as to reserve subscripts for cell indices.)
The linear canonical parameter is β_i = v + v_i^A. Using μ_i = ω_i⁻¹ β_i, we see that the cell means are constant if w_i^A = v_i^A = 0. Also, using the factorization criterion (1.1), we see that A ⊥⊥ Y if and only if w_i^A = v_i^A = 0.
The discrete canonical parameter is α_i = u + u_i^A; this corresponds to a main effects loglinear model and cannot be further simplified, i.e., we do not consider setting u_i^A = 0. We are led to consider three possible models. The simplest is the model of marginal independence, formed by setting v_i^A = w_i^A = 0. The density is
f(i, y) = p_i (2πσ²)^(−1/2) exp{−(y − μ)²/(2σ²)}, and the model formula is A/Y/Y. The second model, formed by setting w_i^A = 0, allows the cell means to differ but constrains the variances to be homogeneous. The density is
f(i, y) = p_i (2πσ²)^(−1/2) exp{−(y − μ_i)²/(2σ²)}, and the model formula is A/AY/Y. The third model (the full model) has freely varying cell means and variances. The density is
f(i, y) = p_i (2πσ_i²)^(−1/2) exp{−(y − μ_i)²/(2σ_i²)}, and the model formula is A/AY/AY. As these examples show, model formulae for mixed models consist of three parts, separated by slashes (/). The three parts specify the interaction expansions of α_i, β_i, and Ω_i, respectively. Thus, in the second and third models above, the second part of the formula was AY; this means that the element of β_i corresponding to Y has an expansion with formula A, i.e., has the form β_i^Y = v + v_i^A. In the model A/AY/Y, the term Y indicates that the quadratic canonical parameter ω_i^YY has a null formula; in other words, it is constant over the cells: ω_i^YY = w.
These three models are, of course, closely related to familiar one-way ANOVA models: the only difference is that in the present setup, the cell counts are taken to be random. If we regard A as fixed, then we have precisely the one-way ANOVA setup, and the first model denotes homogeneity rather than independence (see the discussion in Section 1.1).
4.1.2 A Model with Two Discrete and Two Continuous Variables

As the next illustration, we consider a model with p = q = 2. Suppose that Δ = {A, B}, Γ = {X, Y}, and that A and B are indexed by j and k,
,, , •
,.
",,.
,,." ' "
62
~
4. Mixed Models
b ,, .
f
respectively. The canonical parameters are
α_jk,   β_jk = ( β_jk^X )
              ( β_jk^Y ),

and

Ω_jk = ( ω_jk^XX   ω_jk^XY )
       ( ω_jk^XY   ω_jk^YY ).
We can constrain the α_jk by requiring that they take the additive structure

α_jk = u + u_j^A + u_k^B,

for some parameters u, u_j^A, and u_k^B. We represent this expansion as A,B: this will be the first part of the model formula.
Similarly, we can constrain β_jk^X and β_jk^Y by requiring, for example,

β_jk^X = v^X + v_j^(X;A) + v_k^(X;B)   and   β_jk^Y = v^Y + v_k^(Y;B).

Thus, β_jk^X has additive A and B effects, and β_jk^Y depends on B only.
To form the second part of the formula, we combine the shorthand formula for the expansion of β_jk^X, namely A,B, with that for β_jk^Y, namely B, to obtain the expansion AX,BX,BY.
Finally, we model the elements of the inverse covariance Ω_jk. The simplest structure it can have is constant diagonal elements and zero off-diagonal elements: ω_jk^XX = ω^XX, ω_jk^YY = ω^YY, and ω_jk^XY = 0. The corresponding formula for Ω_jk is X,Y.
Now we can put together our shorthand formula for the whole model, namely A,B/AX,BX,BY/X,Y.
We can form the graph of this model by joining variables that occur in the same generator:

[Graph: the path A — X — B — Y.]
4.1.3 Model Formulae

We now continue with the general case. Suppose the model formula has the form
d1, ..., dr / l1, ..., ls / g1, ..., gt,        (4.10)

where the dj are the discrete generators, the lj the linear generators, and the gj the quadratic generators.
The three parts have the following functions:
1. The discrete generators specify the expansion for α_i.

2. The linear generators specify the expansion for β_i. Each linear generator contains one continuous variable. The expansion for β_i^γ, for some γ ∈ Γ, is given by the linear generators that contain γ.

3. The quadratic part gives the expansion for the inverse covariance matrix Ω_i. Each quadratic generator must contain at least one continuous variable. The expansion for ω_i^(γζ), for γ, ζ ∈ Γ, is given by the quadratic generators that contain {γ, ζ}.
Two syntax rules restrict the permissible formulae:
1. The linear generators must not be larger than the discrete generators, i.e., for each linear generator lj there must correspond a discrete generator dk such that lj ∩ Δ ⊆ dk. For example, A,B/ABX/AX is not permitted, since there is a linear generator ABX but no discrete generator containing AB.

2. The quadratic generators must not be larger than the corresponding linear generators, i.e., for each quadratic generator gj and each continuous variable γ ∈ gj, there must correspond a linear generator lk such that (gj ∩ Δ) ∪ {γ} ⊆ lk. For example, ABC/AX,BY,CZ/AXY,CZ is not permitted, since there is a quadratic generator AXY but no linear generator containing AY.

To motivate these rules, consider the requirement that a model be invariant under scale and location transformations of the continuous variables. Suppose Ỹ is given by

Ỹ = A(Y + b),        (4.11)
where b is a q-vector and A is a diagonal matrix

A = diag(a1, a2, ..., aq)
with nonzero diagonal elements. Clearly, (I, Ỹ) is CG-distributed, and the moments parameters {p̃_i, μ̃_i, Σ̃_i}, i ∈ ℐ, are given as
p̃_i = p_i,   μ̃_i = A(μ_i + b),   Σ̃_i = A Σ_i A.
Using (4.3)–(4.5), we can derive the corresponding canonical parameters. In particular, we obtain
, ,f i.
,t ,
, ,
,,

(3i =

r I,
1
Li Pi 1
= A ((3i
,
,
+ nib).
(4.12)
If the model is invariant under the transformation, the new linear canonical parameter β̃_i must be subject to the same constraints as the original β_i. In other words, it must have the same range as a function of i. From (4.12), we see that for each γ ∈ Γ, the range of β̃_i^γ as a function of i encompasses the range of the ω_i^(γη) terms for all η ∈ Γ. This is precisely the effect of the second syntax rule above.
Similarly, we obtain

α̃_i = α_i − b′β_i − ½ b′Ω_i b,

so that the range of α̃_i as a function of i encompasses the ranges of the β_i^γ and ω_i^(γη) terms for γ, η ∈ Γ. This is ensured by the two syntax rules, so the rules ensure that the models are invariant under such transformations.
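Since the two syntax rules are purely set-theoretic, they can be checked mechanically. The sketch below (Python; generators represented as sets of single-letter variable names — this checker is illustrative and not part of any existing software) rejects the two counterexamples given above:

```python
# Check the two formula syntax rules; DELTA and GAMMA list the discrete
# and continuous variables of the model.
def valid(discrete, linear, quadratic, DELTA, GAMMA):
    D, G = set(DELTA), set(GAMMA)
    # Rule 1: each linear generator's discrete part lies in some discrete generator.
    for l in linear:
        if not any(l & D <= d for d in discrete):
            return False
    # Rule 2: for each quadratic generator g and each continuous variable c in g,
    # (g & DELTA) | {c} must lie in some linear generator.
    for g in quadratic:
        for c in g & G:
            if not any((g & D) | {c} <= l for l in linear):
                return False
    return True

f = frozenset
# A,B/ABX/... violates rule 1 (no discrete generator contains AB):
assert not valid({f("A"), f("B")}, {f("ABX")}, set(), "AB", "X")
# ABC/AX,BY,CZ/AXY,CZ violates rule 2 (no linear generator contains AY):
assert not valid({f("ABC")}, {f("AX"), f("BY"), f("CZ")},
                 {f("AXY"), f("CZ")}, "ABC", "XYZ")
# AB/ABX,ABY/ABXY satisfies both rules:
assert valid({f("AB")}, {f("ABX"), f("ABY")}, {f("ABXY")}, "AB", "XY")
```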
We note in passing that the model formula syntax implicitly introduces another constraint. For example, if an off-diagonal precision element ω^(γζ) depends on i, so must both corresponding diagonal elements, ω^(γγ) and ω^(ζζ). More generally, the range of ω_i^(γγ) as a function of i must encompass the range of ω_i^(γζ), for all γ, ζ ∈ Γ. As pointed out by several authors (Edwards, 1990; Lauritzen, 1996, Section 6.4.1), the model family could be usefully extended by removing this constraint (which would involve adopting a different model formula syntax). Some simple models fall outside the hierarchical interaction models as we have defined them above, but fall within this extended class: one such arises with Δ = {I} and Γ = {X, Y}. If we wish the regression of Y on X and I to involve nonparallel regression lines but homogeneous variances, then it turns out (see Section 4.3) that for the nonparallel lines we must introduce a term ω_i^XY by including a quadratic generator IXY in the formula. But this forces ω^YY also to depend on i, so the conditional variance becomes heterogeneous.
4.1.4 Formulae and Graphs

To study the correspondence between model formulae and graphs, we can expand (4.2) as
f(i, y) = exp{ α_i + Σ_(γ∈Γ) β_i^γ y^γ − ½ Σ_(γ∈Γ) Σ_(η∈Γ) ω_i^(γη) y^γ y^η }        (4.13)
and then apply the factorization criterion (1.1) to examine the pairwise Markov properties implied by a given model. For two discrete variables in the model, say A and B, A ⊥⊥ B | (the rest) holds whenever all of the interaction terms involving A and B are set to zero. That is to say, none of the expansions for α_i, β_i^γ, or ω_i^(γη), for any γ, η ∈ Γ, may contain an AB interaction. In terms of the model formula, we just require that no discrete generator contains AB, since the syntax rules then imply that no linear or quadratic generator may contain AB either.
If A is discrete and X is continuous, we see that A ⊥⊥ X | (the rest) holds whenever all of the interaction terms involving A and X are set to zero. That is to say, none of the expansions for β_i^X or ω_i^(Xη), for any η ∈ Γ, may contain an interaction term involving A. In terms of the model formula, we just require that no linear generator contains AX, since the syntax rules then imply that no quadratic generator will contain AX either.

For two continuous variables, say X and Y, X ⊥⊥ Y | (the rest) holds whenever ω_i^XY is set to zero. In terms of the model formula, this means that no quadratic generator may include XY.
These results make it easy to derive the independence graph from a model formula. We simply connect vertices that appear in the same generator. For example, the graph of AB/AX,BX,AY,BZ/XY,XZ is shown in Figure 4.1. Note that different models may have the same graph: for example, AB/ABX,AY,BZ/AXY,BXZ and AB/ABX,AY,BZ/XY,XZ both have the graph shown in Figure 4.1.
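The rule "join variables that occur in the same generator" is easy to mechanize. A small sketch (Python; formulae written in the single-letter notation of this section):

```python
from itertools import combinations

# Derive the independence graph of a model formula by joining variables
# that occur in the same generator.
def edges(formula):
    gens = [g for part in formula.split("/") for g in part.split(",")]
    E = set()
    for g in gens:
        for a, b in combinations(sorted(g), 2):
            E.add((a, b))
    return E

g1 = edges("AB/AX,BX,AY,BZ/XY,XZ")
g2 = edges("AB/ABX,AY,BZ/AXY,BXZ")
g3 = edges("AB/ABX,AY,BZ/XY,XZ")
assert g1 == {('A','B'), ('A','X'), ('A','Y'), ('B','X'), ('B','Z'),
              ('X','Y'), ('X','Z')}
# different models, same graph (Figure 4.1):
assert g2 == g1 and g3 == g1
```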
We now consider the reverse operation, that is, finding the formula of the graphical model corresponding to a given graph 𝒢, by identifying the maximal interactions that are consistent with 𝒢. To be more precise, we
[Figure 4.1: vertices A, B, X, Y, Z, with edges A–B, A–X, A–Y, B–X, B–Z, X–Y, X–Z.]
FIGURE 4.1. A graph on five vertices.
[Table 4.1: model formulae such as A/AX/X, A/AX/AX, A/AX,Y/AX,XY, A/AX,AY/X,Y, AB/ABX/X, AB/AX,BY/AX,BY, and A,B/AX,AY,BX,BY/X,Y, with their graphs and their membership of the classes ℋh, ℋc, and ℋD.]

TABLE 4.1. Some hierarchical interaction models. ℋh is the class of heterogeneous graphical models, ℋc is the class of homogeneous graphical models, and ℋD is the class of decomposable models.
associate two graphical models with a given graph: a homogeneous one and a heterogeneous one. Consider again Figure 4.1.
The discrete generators are given as the cliques of 𝒢_Δ, i.e., the subgraph of 𝒢 on the discrete variables. In Figure 4.1, this is just AB, so the first part of the formula is AB. For the linear part of the formula, we need to find the cliques of 𝒢_(Δ∪{γ}) that contain γ, for each γ ∈ Γ. In Figure 4.1, this gives the generators ABX, AY, and BZ, so the linear part is ABX,AY,BZ. For the quadratic part, it depends on which graphical model we are interested in: the homogeneous or the heterogeneous one. For the homogeneous model, we need to identify the cliques of 𝒢_Γ. In the present example, the cliques of 𝒢_Γ are {X, Y} and {X, Z}, so we get the formula

AB/ABX,AY,BZ/XY,XZ.

For the heterogeneous model, we need to find the cliques of 𝒢 that intersect Γ. In Figure 4.1, the cliques are {A,X,Y}, {A,B,X}, and {B,X,Z}, so that we obtain the formula

AB/ABX,AY,BZ/AXY,ABX,BXZ.
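Conversely, reading a formula off a graph reduces to clique-finding. A sketch using the Bron–Kerbosch algorithm (Python; applied to the graph of Figure 4.1) recovers the cliques used above:

```python
# Maximal cliques via Bron-Kerbosch (without pivoting), applied to Figure 4.1.
def cliques(adj):
    out = []
    def bk(R, P, X):
        if not P and not X:
            out.append(R)
            return
        for v in list(P):
            bk(R | {v}, P & adj[v], X & adj[v])
            P = P - {v}
            X = X | {v}
    bk(set(), set(adj), set())
    return out

E = [("A","B"), ("A","X"), ("A","Y"), ("B","X"), ("B","Z"),
     ("X","Y"), ("X","Z")]
adj = {v: set() for e in E for v in e}
for a, b in E:
    adj[a].add(b); adj[b].add(a)

C = {frozenset(c) for c in cliques(adj)}
assert C == {frozenset("AXY"), frozenset("ABX"), frozenset("BXZ")}
```

The discrete part is then given by the cliques of the subgraph on {A, B}, and the homogeneous quadratic part by the cliques of the subgraph on {X, Y, Z}.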
4.1.5 Maximum Likelihood Estimation
Models, by their nature, need data. Suppose we have a sample of N independent, identically distributed observations (i^(k), y^(k)) for k = 1, ..., N, where i is a p-tuple of levels of the discrete variables, and y is a q-vector. Let (n_i, t_i, ȳ_i, SS_i, S_i), i ∈ ℐ, be the observed counts, variate totals, variate means, uncorrected sums of squares and products, and cell variances for cell i, i.e.,

n_i = #{k : i^(k) = i},
t_i = Σ_(k: i^(k) = i) y^(k),
ȳ_i = t_i / n_i,
SS_i = Σ_(k: i^(k) = i) y^(k) (y^(k))′,
S_i = SS_i / n_i − ȳ_i ȳ_i′.
We also need a notation for some corresponding marginal quantities. For a ⊆ Δ, we write the marginal cell corresponding to i as i_a, and likewise for d ⊆ Γ, we write the corresponding subvector of y as y^d. Similarly, we write the marginal cell counts as {n_(i_a)}, the marginal variate totals as {t_(i_a)^d}, and the marginal uncorrected sums of squares and products as {SS_(i_a)^d}, for i_a ∈ ℐ_a.

Consider now a given model with formula d1, ..., dr / l1, ..., ls / q1, ..., qt. From (4.2), it is straightforward to show that a set of minimal sufficient statistics is given by:

1. A set of marginal tables of cell counts {n_(i_a)} corresponding to the discrete generators, i.e., for a = d1, ..., dr.

2. A set of marginal variate totals {t_(i_a)^γ} corresponding to the linear generators, i.e., for a = lj ∩ Δ and γ = lj ∩ Γ, for j = 1, ..., s.

3. A set of marginal tables of uncorrected sums of squares and products {SS_(i_a)^d} corresponding to the quadratic generators, i.e., for a = qj ∩ Δ and d = qj ∩ Γ, for j = 1, ..., t.

As we have seen, models are constructed by constraining the canonical parameters through factorial interaction expansions. Given a set of data, we wish to estimate the model parameters subject to these constraints by maximum likelihood estimation. From exponential family theory, we know that the MLEs can be found by equating the expectations of the minimal sufficient statistics with their observed values. That is, for a = d1, ..., dr,

{m̂_(i_a)}_(i_a ∈ ℐ_a) = {n_(i_a)}_(i_a ∈ ℐ_a).        (4.14)
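The cell statistics defined above are straightforward to compute from raw data. A minimal sketch (numpy; the four observations are invented, with one binary discrete variable and q = 2 continuous variables):

```python
import numpy as np
from collections import defaultdict

# Toy sample: (discrete cell i, continuous y) pairs; data invented.
data = [(0, [1.0, 2.0]), (0, [2.0, 1.0]), (1, [0.5, 0.5]), (1, [1.5, 2.5])]

n = defaultdict(int)                         # cell counts n_i
t = defaultdict(lambda: np.zeros(2))         # variate totals t_i
SS = defaultdict(lambda: np.zeros((2, 2)))   # uncorrected sums of squares SS_i
for i, y in data:
    y = np.asarray(y)
    n[i] += 1
    t[i] += y
    SS[i] += np.outer(y, y)

ybar = {i: t[i] / n[i] for i in n}                            # cell means
S = {i: SS[i] / n[i] - np.outer(ybar[i], ybar[i]) for i in n} # cell variances

assert n[0] == 2 and np.allclose(ybar[0], [1.5, 1.5])
# S_i agrees with the (biased) sample covariance of the cell's observations
assert np.allclose(S[0], np.cov(np.array([[1., 2.], [2., 1.]]).T, bias=True))
```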
MIM>print v
Empirical discrete, linear and partial correlation parameters.
A   1
 X         1.000
 Y         0.480   1.000
 Z         0.179   0.010   1.000
 Linear   22.032  80.001 2206.654  782.338
           X       Y       Z      Discrete
A   2
 X         1.000
 Y         0.690   1.000
 Z         0.100   0.084   1.000
 Linear  178.426  18.476 2899.184 1124.376
           X       Y       Z      Discrete
We observe that the partial correlations between Y and Z, and to a lesser extent between X and Z, are low.
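The partial correlations printed above are obtained by rescaling the inverse covariance: the partial correlation of γ and ζ given the remaining variables is −ω^(γζ)/√(ω^(γγ) ω^(ζζ)). A small sketch (numpy; the correlation matrix is invented):

```python
import numpy as np

# Hypothetical 3-variable correlation matrix.
Sigma = np.array([[1.0, 0.6, 0.5],
                  [0.6, 1.0, 0.4],
                  [0.5, 0.4, 1.0]])
Omega = np.linalg.inv(Sigma)

d = np.sqrt(np.diag(Omega))
P = -Omega / np.outer(d, d)      # partial correlations off the diagonal
np.fill_diagonal(P, 1.0)

assert np.allclose(P, P.T)       # symmetric
assert np.allclose(np.diag(P), 1.0)
assert 0 < P[0, 1] < 1           # marginally correlated variables stay correlated here
```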
To select a model, we can proceed as follows: first we test for variance homogeneity, using Box's test:
MIM>model A/AX,AY,AZ/AXYZ; fit; base
Deviance: 0.0000 DF: 0
MIM>model A/AX,AY,AZ/XYZ; fit; boxtest
Test of HO: A/AX,AY,AZ/XYZ
against H: A/AX,AY,AZ/AXYZ
Box's test: 2.8688 DF: 6 P: 0.8251
There is no evidence of heterogeneity. We next test for zero partial correlation between Y and Z, and then between X and Z. Since the sample size is small, we use F-tests.

MIM>testdelete YZ s
Test of HO: A/AX,AY,AZ/XZ,XY
against H: A/AX,AY,AZ/XYZ
F: 0.1440 DF: 1, 18 P: 0.7087
MIM>delete YZ
MIM>testdelete XZ s
Test of HO: A/AX,AY,AZ/Z,XY
against H: A/AX,AY,AZ/XZ,XY
F: 0.0002 DF: 1, 19 P: 0.9888
MIM>delete XZ
These hypotheses can also be accepted. Next we can test for zero partial correlation between X and Y:
MIM>testdelete XY s
Test of HO: A/AX,AY,AZ/Z,Y,X
against H: A/AX,AY,AZ/Z,XY
F: 10.5016 DF: 1, 19 P: 0.0042
This is rejected, so we proceed by attempting to remove the linear AZ term:

MIM>testdel AZ s
Test of HO: A/Z,AY,AX/Z,XY
against H: A/AX,AY,AZ/Z,XY
F: 0.3433 DF: 1, 20 P: 0.5645
MIM>delete AZ
We thus arrive at the model A/AX,AY,Z/XY,Z. It is not possible to simplify the model further (we omit the details). The independence graph is:

[Graph: edges A–X, A–Y, and X–Y; the vertex Z is isolated.]
The interpretation is clear: the level of the compound Z is independent of the treatment and of the compounds X and Y. These are both affected by the treatment, and are mutually correlated.
4.1.9 Example: Rats' Weights
We next consider a simple example studied in Morrison (1976). Mardia, Kent, and Bibby (1979) also use the example. The data stem from another drug trial, in which the weight losses of male and female rats under three drug treatments are studied. Four rats of each sex are assigned at random to each drug. Weight losses are observed after one and two weeks. There are thus 24 observations on four variables: sex (A), drug (B), and weight loss after one and two weeks (X and Y, respectively). Again, we first examine whether the covariances are homogeneous using Box's test:
MIM>mod AB/ABX,ABY/ABXY; fit; base
Deviance: 0.0000 DF: 0
MIM>mod AB/ABX,ABY/XY; fit; boxtest
Deviance: 27.8073 DF: 15
Test of H0: AB/ABX,ABY/XY against H: AB/ABX,ABY/ABXY
Box's test: 14.9979 DF: 15 P: 0.4516
There is no evidence of heterogeneity. We adopt the homogeneous model, and attempt to simplify the covariance structure by removing the edge [XY]:
4. Mixed Models
[Table 4.4: Sex, Drug, and weight losses after one and two weeks (Wt 1, Wt 2) for the 24 rats; the data values are garbled in the source.]

TABLE 4.4. Data from drug trial on rats. Source: Morrison, Multivariate Statistical Methods, McGraw-Hill (1976). With permission.
MIM>mod AB/ABX,ABY/XY; fit
Deviance: 27.8073 DF: 15
MIM>testdelete XY s
Test of H0: AB/ABX,ABY/Y,X against H: AB/ABX,ABY/XY
F: 20.2181 DF: 1, 17 P: 0.0003
This is strongly rejected. We can further simplify the mean structure:

MIM>testdelete AX s
Test of H0: AB/ABY,BX/XY against H: AB/ABX,ABY/XY
F: 0.0438 DF: 3, 17 P: 0.9874
MIM>delete AX
MIM>testdelete AY s
Test of H0: AB/BY,BX/XY against H: AB/ABY,BX/XY
F: 1.7368 DF: 3, 18 P: 0.1953
MIM>delete AY
MIM>testdelete BX s
Test of H0: AB/BY,X/XY against H: AB/BY,BX/XY
F: 36.1991 DF: 2, 20 P: 0.0000
MIM>testdelete BY s
Test of H0: AB/Y,BX/XY against H: AB/BY,BX/XY
F: 5.3695 DF: 2, 20 P: 0.0136
We arrive at the model AB/BX,BY/XY. It is graphical, with the following graph:

[Graph: Sex (A) joined to Drug (B); Drug (B) joined to Weight loss 1 (X) and Weight loss 2 (Y); X joined to Y.]
It has a simple interpretation. If we write the indices corresponding to factors A and B as j and k, respectively, we can write a cell in the two-way table as i = (j, k). The distribution of X given A and B is clearly N(μ_k, σ^X), i.e., depending on B only. The conditional distribution of Y given X, A, and B is normal with mean

E(Y | I = i, X = x) = (β_i^Y − ω_i^{XY} x)/ω_i^{YY},

and variance

Var(Y | I = i, X = x) = 1/ω_i^{YY}.

For the present model, ω_i^{XY} and ω_i^{YY} do not depend on i, and β_i^Y is a function of k only. We can re-express the model through the recursive equations

X = μ_k + ε^X,        (4.19)
Y = λ_k + ηx + ε^Y,   (4.20)
where ε^X ~ N(0, σ^X) as above, ε^Y ~ N(0, τ^Y), say, and ε^X and ε^Y are independent. In other words, the expected weight loss at the second week is a constant proportion of the previous weight loss, plus a constant that depends on the treatment. Estimates of the regression coefficients in (4.19) and (4.20) can be obtained using the Display command:

MIM>Display Y,AXB
Parameters of the conditional distribution of Y given A,B,X.
A B
1 1    0.953 Y    0.900 X    2.433 Y
1 2    1.753 Y    0.900 X    2.433 Y
1 3    3.018 Y    0.900 X    2.433 Y
2 1    0.953 Y    0.900 X    2.433 Y
2 2    1.753 Y    0.900 X    2.433 Y
2 3    3.018 Y    0.900 X    2.433 Y

We thus obtain the following estimated regression equation:

y = λ̂_k + 0.9x + ε^Y,   ε^Y ~ N(0, 2.433),

where λ̂_k = 0.953, 1.753, and 3.018 for the three levels of B. The estimates for μ_k and σ^X in (4.19) can be obtained similarly.
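The recursive equations (4.19) and (4.20) can also be fitted directly by least squares: (4.20) is an analysis-of-covariance fit with one intercept per drug group and a common slope. A minimal Python sketch with made-up numbers (not the rat data):

```python
def fit_ancova(groups, x, y):
    """Least-squares fit of y = lambda_k + eta * x with a separate
    intercept lambda_k per group and a common slope eta (the usual
    within-group estimator)."""
    keys = sorted(set(groups))
    n = {k: 0 for k in keys}
    xm = {k: 0.0 for k in keys}
    ym = {k: 0.0 for k in keys}
    for g, xi, yi in zip(groups, x, y):
        n[g] += 1
        xm[g] += xi
        ym[g] += yi
    for k in keys:              # group means of x and y
        xm[k] /= n[k]
        ym[k] /= n[k]
    # Common slope from within-group sums of squares and products.
    num = sum((xi - xm[g]) * (yi - ym[g]) for g, xi, yi in zip(groups, x, y))
    den = sum((xi - xm[g]) ** 2 for g, xi in zip(groups, x))
    eta = num / den
    lam = {k: ym[k] - eta * xm[k] for k in keys}
    return lam, eta
```

On exact (noise-free) data of the form y = λ_k + 0.9x, the estimator recovers the slope and group intercepts exactly.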
Plots of the data (not shown here) suggest that the effects of the first two drugs may not differ widely from each other. This can be tested by omitting the third drug from the analysis:
MIM>fact C2; calc C=B; restrict B<3
MIM>mod AC/CX,CY/XY
MIM>fix CA
Fixed variables: AC
MIM>step u
Coherent Backward Selection
Unrestricted models, Chi-squared tests.
Critical value: 0.0500
Initial model: AC/CX,CY/XY
Model: AC/CX,CY/XY
Deviance: 7.7404 DF: 13 P: 0.8601
Edge        Test
Excluded    Statistic   DF   P
[CX]         0.0020      1   0.9646
[CY]         0.6645      1   0.4150
[XY]         9.8134      1   0.0017 +
Removed edge [CX]
Model: AC/CY,X/XY
Deviance: 7.7423 DF: 14 P: 0.9023
Edge        Test
Excluded    Statistic   DF   P
[CY]         1.1549      1   0.2825
Removed edge [CY]
Selected model: AC/Y,X/XY
The results indicate that there is no difference between the first and second drugs with regard to weight loss. The tests shown here are not identical to likelihood ratio tests under the model including all three treatment levels, since some drug-independent parameters are estimated from the complete data.
4.1.10 Example: Estrogen and Lipid Metabolism

This is a more extended example that comes from a clinical trial comparing various estrogen replacement therapies (Munk-Jensen et al., 1994). One hundred thirteen postmenopausal women were randomised to one of three treatment groups, corresponding to cyclic therapy (in which the doses of estrogen and progestin vary cyclically, mimicking the natural menstrual cycle), continuous therapy (fixed dose of estrogen and progestin daily), and placebo. Plasma samples were taken pretreatment and after 18 months' treatment, and the samples were assayed for lipoprotein fractions.
The effect of estrogen replacement therapies on lipid metabolism is of considerable interest, since lipoproteins are believed to represent risk factors for coronary heart disease. Here, we analyze the results for a high-density lipoprotein fraction (HDL), low-density lipoproteins (LDL), and very low-density lipoproteins (VLDL). Note that HDL is believed to be beneficial, whereas LDL and VLDL are believed to be deleterious.
The data consist of seven variables: treatment group (A), the pretreatment values of VLDL, LDL, and HDL (U, V, and W), and the corresponding post-treatment values (X, Y, and Z). We select an undirected model by backward selection, keeping the treatment and pretreatment variables fixed:

MIM>fix AUVW
Fixed variables: AUVW
MIM>stepwise z
MIM>pr
The current model is:
A/Z,X,AY,AW,AV,AU/WXZ,VWXY,AVWY,AUVW
The Fix command fixes edges between variables in {A, U, V, W} in the model, i.e., they are not candidates for removal in the selection process. The Z option suppresses output. The graph of the selected model is shown in Figure 4.2.
The model has two striking implications: first, the treatment only has a direct effect on LDL (Y), since VLDL (X) and HDL (Z) are independent of treatment given LDL and the pretreatment variables. This suggests that the mechanism of action works primarily on LDL; any effect on VLDL and HDL is mediated by LDL. Secondly, the responses LDL, VLDL, and HDL are independent of pretreatment VLDL (U) given the remaining variables. In other words, pretreatment VLDL has no explanatory value and can be omitted from the analysis. We therefore remove it from the model, arriving at
A/AV,AW,X,AY,Z/WXZ,VWXY,AVWY,
whose graph is shown in Figure 4.3. The deviance is 32.1893 on 24 degrees of freedom, corresponding to a p-value of 0.1224. It is a relatively simple and well-fitting model. It also has a straightforward interpretation, in the sense that it can be decomposed into three parts, describing

1. the conditional distribution of Y given A, V, and W,
2. the conditional distribution of X given V, W, and Y, and
3. the conditional distribution of Z given W and X,
[Figure 4.3 graph: vertices Treatment (A), pre LDL (V), pre HDL (W), post VLDL (X), post LDL (Y), and post HDL (Z).]
FIGURE 4.3. The model obtained by omitting pretreatment VLDL from the model shown in Figure 4.2.
corresponding respectively to a one-way ANOVA with two covariates (and variance heterogeneity), and two multiple regressions. Our tentative conclusion is that it appears that the estrogen therapy has a direct effect on LDL only, suggesting a mechanism of action primarily through LDL. Any effect of therapy on HDL and VLDL appears to be mediated by LDL.
This style of analysis is essentially explanatory, seeking to identify possible causal relations. In contrast, if the purpose of the analysis was to address more pragmatic questions like "Does estrogen therapy increase HDL levels?", a univariate analysis of each response variable would be more appropriate.
In the next three sections, we study more systematically the question of when models can be broken down into simpler models. This is one of the more mathematical parts of the book, and may be skipped at first reading. However, the concepts described are important not only for theoretical reasons, but also for practical ones; in particular, they are useful for interpreting complex models.
4.2 Breaking Models into Smaller Ones

Let a be a subset of the model variables, i.e., a ⊂ V, and let b be its complement, i.e., b = V \ a. We want to be able to decompose a model M for V into two models: one (M_a) describing the marginal distribution of a, and the other (M_{b|a}) describing the conditional distribution of b given a. Of course, given any density f on V, we can always decompose this into a marginal and a conditional density, f = f_a f_{b|a}, but it is unclear how these densities correspond to the models we are interested in. So first let us clarify what we mean by the marginal model (M_a) and the conditional model (M_{b|a}).
By M_a we mean simply the hierarchical interaction model for a that contains all the interactions between variables in a that were present in M. If M is (homogeneous) graphical with graph G, then M_a is also (homogeneous) graphical, with graph given as the subgraph G_a. To obtain the model formula for M_a, we delete all variables in b from the formula for M.
For example, if M = AB/ABX,AY,BZ/AXY,ABX,BXZ (see Figure 4.1) and a = {A,B,X}, then Ma = AB/ABX/ABX.
The conditional model M_{b|a} needs more explanation. For some f ∈ M, consider f_{b|a}. We partition Δ into Δ₁ = Δ ∩ b and Δ₂ = Δ ∩ a, indexed by i = (j, k), and Γ into Γ₁ = Γ ∩ b and Γ₂ = Γ ∩ a, writing the corresponding random variable as (Y, Z).
The joint density is given in (4.2). The conditional density is found by renormalizing the joint density:

f_{b|a}(j, y | k, z) = κ_{k,z} exp(α_i + β_i^{1′}y + β_i^{2′}z − ½y′Ω_i^{11}y − y′Ω_i^{12}z − ½z′Ω_i^{22}z),

where β_i and Ω_i have been partitioned commensurately with Γ = Γ₁ ∪ Γ₂, and κ_{k,z} is a normalizing constant. If we write

α_{j|k,z} = α_i + β_i^{2′}z − ½z′Ω_i^{22}z + ln(κ_{k,z}),   (4.21)
β_{j|k,z} = β_i^1 − Ω_i^{12}z,                              (4.22)
Ω_{j|k} = Ω_i^{11},                                         (4.23)

then we can rewrite the conditional density as

f_{b|a}(j, y | k, z) = exp(α_{j|k,z} + β_{j|k,z}′y − ½y′Ω_{j|k}y).

So f_{b|a} follows a CG-distribution whose canonical parameters are functions of k and z. In other words, for given K = k and Z = z, (J, Y) is CG-distributed with parameters given in (4.21–4.23). This is called a CG-regression. The conditional model M_{b|a} consists of all the CG-regressions f_{b|a}(j, y | k, z) that can be generated in this way; in other words, M_{b|a} = {f_{b|a} : f ∈ M}.
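In the all-continuous case, equations (4.22) and (4.23) are ordinary Gaussian conditioning written in canonical form. A small numerical sketch with scalar Y and Z (arbitrary values), checking them against the familiar moment formulas:

```python
def cg_conditional(beta1, o11, o12, z):
    """Mean and variance of Y given Z = z from joint canonical
    parameters, scalar case: beta_{|z} = beta1 - o12 * z (cf. 4.22)
    and the conditional precision o11 (cf. 4.23)."""
    b = beta1 - o12 * z
    return b / o11, 1.0 / o11
```

Starting from a covariance matrix, converting to canonical parameters and conditioning this way reproduces the usual formulas E(Y|Z=z) = μ_Y + (σ_YZ/σ_ZZ)(z − μ_Z) and Var(Y|Z=z) = σ_YY − σ_YZ²/σ_ZZ.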
Example 1: M = //YZ and a = {Z}

Here M_{b|a} consists of a family of univariate normal distributions, one for each z. The canonical parameters of these are given by (4.21–4.23), i.e.,

ω = ω^{YY}   and   β = β^Y − ω^{YZ}z.

Here, α is just a normalizing constant. So we obtain

E(Y | Z = z) = ω^{−1}β = ω^{−1}(β^Y − ω^{YZ}z),   and
Var(Y | Z = z) = ω^{−1}.

The conditional variance is constant over z, and the conditional mean is a linear function of z. If we write, say,

γ₀ = ω^{−1}β^Y,
γ₁ = −ω^{−1}ω^{YZ},   and
τ = ω^{−1},

then we obtain

E(Y | Z = z) = γ₀ + γ₁z   and   Var(Y | Z = z) = τ,

and we see that we are just dealing with an unfamiliar parametrization of a very familiar model, the linear regression of Y on Z. So M_{b|a} is just an ordinary linear regression model.
Example 2: M = A/AX/X and a = {X}

This model was discussed in Section 4.1.1. From (4.21), we know that the conditional distribution of A given X has canonical parameter [...]

[...] conditional models. It follows using (4.28) that the likelihood ratio test [...] i.e., the test can be performed in the two-way marginal A x B table as a test of A ⊥⊥ B. A more subtle example of the same kind is the following. Consider the graph
[Graph omitted in source.] From (4.24), we have that

p_{ij|x} = exp{α_{ij} + β_{ij}x − ½ω_{ij}x²} / Σ_{kl} exp{α_{kl} + β_{kl}x − ½ω_{kl}x²}.   (4.32)

A natural measure of the conditional association between I and J given X is the conditional log-odds ratio given by

Ψ(x) = ln( p_{11|x} p_{22|x} / (p_{12|x} p_{21|x}) ).
This is symmetric in I and J, and zero when these are conditionally independent. We see from (4.32) that

Ψ(x) = (α₁₁ + α₂₂ − α₁₂ − α₂₁) + (β₁₁ + β₂₂ − β₁₂ − β₂₁)x − ½(ω₁₁ + ω₂₂ − ω₁₂ − ω₂₁)x².
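The quadratic term in Ψ(x) vanishes exactly when the contrast ω₁₁ + ω₂₂ − ω₁₂ − ω₂₁ is zero — in particular under homogeneity, where all the ω_{ij} are equal. A quick numerical check (the parameter values are arbitrary):

```python
def psi(alpha, beta, omega, x):
    """Conditional log-odds ratio Psi(x) from 2x2 tables of cell
    parameters, each given as a dict keyed by (i, j)."""
    contrast = lambda p: p[(1, 1)] + p[(2, 2)] - p[(1, 2)] - p[(2, 1)]
    return contrast(alpha) + contrast(beta) * x - 0.5 * contrast(omega) * x * x
```

With equal ω's the second difference of Ψ over equally spaced x values is zero (linearity); with unequal ω's it is not.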
If we restrict attention to homogeneous models, then we see that the ANOVA model with interaction, IJ/IJX/X, corresponds to a CG-regression model in which Ψ(x) is linear in x. Setting the interaction to zero in IJ/IX,JX/X sets Ψ(x) constant.
More generally, the joint model IJ/IJX/IX,JX corresponds to a CG-regression model in which Ψ(x) is linear, and IJ/IX,JX/IX,JX to one in which Ψ(x) is constant. The following program fragments illustrate an analysis using this kind of model. The data are defined using the statements:
MIM>fact i2j2; cont x; read xij
DATA>[data values garbled in source] !
To fit a simple linear conditional log-odds model, we define the corresponding joint model and use the CGFit command:

MIM>satmod; fix x; cgfit
Fixed variables: x
Convergence after 40 iterations.
-2*Conditional Log-likelihood: 199.046 DF: 0
MIM>base; homsat; cgfit; test
Convergence after 5 iterations.
-2*Conditional Log-likelihood: 205.284 DF: 3
Test of H0: ij/ijx/x against H: ij/ijx/ijx
LR: 6.2380 DF: 3 P: 0.1006
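The LR value is the difference 205.284 − 199.046 = 6.238, referred to a χ² distribution on 3 degrees of freedom. For odd degrees of freedom the χ² survival function has a closed form; a sketch for df = 3:

```python
import math

def chi2_sf_df3(x):
    """P(X > x) for X ~ chi-squared on 3 degrees of freedom:
    erfc(sqrt(x/2)) + sqrt(2x/pi) * exp(-x/2)."""
    return (math.erfc(math.sqrt(x / 2.0))
            + math.sqrt(2.0 * x / math.pi) * math.exp(-x / 2.0))
```

chi2_sf_df3(6.2380) is about 0.1006, in agreement with the printed P-value.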
The homogeneous model (corresponding to linear Ψ(x)) fits the data reasonably well. To examine whether Ψ(x) is constant we remove the interaction term:
4.5. CG-Regression Models
MIM>base; model ij/ix,jx/x; cgfit; test
Convergence after 4 iterations.
-2*Conditional Log-likelihood: 209.788 DF: 4
Test of H0: ij/ix,jx/x against H: ij/ijx/x
LR: 4.5040 DF: 1 P: 0.0338
There is moderate evidence that Ψ(x) is not constant over x. We print out the estimates of α_{ij} and β_{ij}:

MIM>backtobase
MIM>prf 12 6
Printing format: 12,6
MIM>disp ij,x
Linear predictors for i,j given x.
i j     Constant           x
1 1     0.000000    0.000000
1 2     6.370484    0.085128
2 1    16.809679    0.304779
2 2     2.610482    0.024100
We see that Ψ(x) is estimated as Ψ̂(x) = 13.05 − 0.24x: that is, an increasingly negative association with age.

4.5.2 Example: Side Effects of an Antiepileptic Drug
Epilepsy is a common neurological disorder, characterized by unprovoked seizures. For some patients, medical treatment is only partially effective, so their seizures are inadequately controlled. This example uses data from a double-blind, parallel-group clinical trial studying an antiepileptic drug (Kalviainen et al., 1998). This included 154 patients with refractory epilepsy, randomised equally to receive in addition either tiagabine or placebo. Their previous drug regimen was held constant throughout the study. We here focus on the occurrence of three side effects of the drug: headache, tiredness, and dizziness. In the analysis we also include the following variables: age, sex, treatment group, the number of years with epilepsy, and the number of concomitant antiepileptic drugs. The last variable can be regarded as a proxy for how refractory the disorder is.
We illustrate an exploratory analysis of these data.

MIM>sh v
Var  Label          Type  Levels  In Data  In Model  Fixed  Block
s    Sex            disc  2       X
d    Dizziness      disc  2       X
t    Tiredness      disc  2       X
h    Headache       disc  2       X
g    Group          disc  2       X
a    Age            cont          X
y    Yrs epilepsy   cont          X
n    No of AEDs     cont          X
MIM>fix sgayn
Fixed variables: agnsy
MIM>homsat
MIM>step z
MIM>pr
The current model is:
dt,ghs,dgs/ghsy,ghns,ags/any
We have three binary responses and five covariates, two of which are discrete. As a first step, we choose a preliminary, undirected model using stepwise selection starting from the saturated homogeneous model. The Z option suppresses the output. The graph of the selected model is shown in Figure 4.7. It suggests that Dizziness and Tiredness are closely related, that the frequency of their occurrence differs between men and women, and that this is also affected by treatment. The same is true of Headache, but this is also related to the duration of the illness and how refractory it is. For given sex and treatment group, the occurrence of Headache is independent of the occurrence of Dizziness and Tiredness. We now turn to CG-regression models and continue the selection process, starting out from this preliminary model. We do this by using the Stepwise command again, this time with the G option, which searches by fitting and comparing CG-regression models.
[Figure 4.7 graph: vertices Sex, Group, Age, No of AEDs, Yrs Epilepsy, Dizziness, Tiredness, and Headache.]

FIGURE 4.7. Side effects of an antiepileptic drug: the preliminary model.
MIM>step g
Coherent Backward Selection.
CG-regression models.
Unrestricted mode, Chi-squared tests.
Critical value: 0.0500
Initial model: dt,ghs,dgs/ghsy,ghns,ags/any
Model: dt,ghs,dgs/ghsy,ghns,ags/any
-2*LogLikelihood: 459.5590 DF: 262
Edge        Test
Excluded    Statistic   DF   P
[dg]         6.8130      2   0.0332 +
[ds]        12.1744      2   0.0023 +
[dt]         3.8869      1   0.0487 +
[gh]        11.0734      6   0.0861
[hn]        11.6532      4   0.0201 +
[hs]        24.8656      6   0.0004 +
[hy]        10.3847      4   0.0344 +
Removed edge [gh]
Selected model: hs,dt,dgs/hsy,gsy,hns,gns,ags/any
We see that the evidence that the treatment causes headache is weak, since the test for the removal of the edge [gh] gives a p-value of 0.0861. As we see, the selection procedure removes this edge. Apart from this, the model is unchanged: the preliminary analysis using the undirected models was not too far from the mark.
In problems involving high-dimensional CG-regression models this may often be a sensible strategy: first select a preliminary undirected model, and then, as it were, fine-tune the analysis with CG-regression models, using the undirected model as a point of departure.
4.6 Incomplete Data

Up to now, we have assumed that all data are available. In practice, it often happens, for a variety of reasons, that some data are lost. Test tubes can be dropped, apples can be scrumped, and patients can withdraw from studies. Furthermore, in some applications it may be appropriate to include completely unobserved or latent variables in the analysis. In this section we show how the mixed models can be applied to incomplete data. A wide variety of latent variable problems, including mixture models, latent class models, factor analysis-type models, and other novel models can be handled in this manner. To do this requires that the estimation algorithms previously described are augmented by means of the EM-algorithm (Dempster et al., 1977). (Appendix D.2 describes the computations in some detail.) The advantages of this algorithm are computational simplicity and stable convergence. There are, however, disadvantages. Although convergence is stable, it is often very slow, and moreover there is no guarantee that it arrives at the global maximum likelihood estimate; for many incomplete data problems, the likelihood has multiple local maxima. It is often difficult to know whether it is the global maximum that has been found. Furthermore, inference based on hypothesis testing for latent variable models is made difficult by the fact that the χ² approximation to the distribution of the likelihood ratio test statistic may be inaccurate or invalid: see Shapiro (1986) for some asymptotic results. In line with much literature on latent variable models, we emphasize estimation rather than hypothesis testing.
Incomplete data problems can be divided into two broad types: missing data problems and latent variable models. In the former, data are partially missing, i.e., values are observed for some cases. For such problems, the validity of the approach depends upon assumptions about the process whereby data become missing. As a rule, these assumptions are difficult to verify in practice. Nevertheless, their plausibility should be examined carefully, since estimation may be strongly biased if they do not hold.
In the next section, we study these assumptions more closely. We then describe some latent variable models that fall within the current framework. The following section describes in detail the application of the EM-algorithm to the mixed models. Finally, we apply the methods to some examples.
4.6.1 Assumptions for Missing Data

To examine the assumptions required, suppose that Z, M, and Y are vector random variables such that Z is the (hypothetical) complete observation; M is the configuration of missing values, i.e., a vector of 1's and 0's indicating whether the corresponding element of Z is observed or missing; and Y is the (incomplete) observation. This is illustrated in Figure 4.8. The missing value process is encapsulated in the conditional probabilities Pr(m|z). A given configuration of missing values m partitions z and y into two subvectors, y_obs = z_obs and y_mis = *, where z_mis are the values that are missing.

If we observed the complete data so that Y = Z, then we could work with the ordinary likelihood function for Y, i.e.,

L(θ; Y) = ∏_{k=1}^{N} f(y^{(k)} | θ),   (4.33)
[Figure: the complete data Z and the missing value configuration M (with probabilities Pr(m|z)) together determine the observed data Y.]

FIGURE 4.8. The missing data framework.
but since the data are incomplete, this is not possible. If we ignore the process giving rise to the missing values, we can instead use the likelihood

L(θ; y_obs) = ∏_{k=1}^{N} f(y_obs^{(k)} | θ),   (4.34)

where f(y_obs^{(k)} | θ) is shorthand for the marginal density of y_obs^{(k)}. This likelihood is the quantity maximized by the EM-algorithm. It is not always valid to ignore the missing value process in this way. For example, if high values of a variable are apt to be missing, then clearly estimation of its mean using observed values only will be heavily biased. Rubin (1976) formulated a key condition for the validity of inference based on the likelihood (4.34), the so-called missing at random (MAR) property, which is defined as follows. For each configuration of missing values m, we require that the conditional probability Pr(m|z) is constant over all z' such that z'_obs = z_obs. For example, one outcome is "all data are missing"; here, z_obs is null so the probability of this outcome must be constant for all z. Another outcome is "no data are missing"; here, z_obs = z so the probability of this pattern of missing values can vary freely for different z. An alternative way of formulating the MAR property is in terms of a binary random variable, say S_m, indicating whether the configuration is m. Then the MAR property is that for each m, S_m ⊥⊥ Z_mis | Z_obs.
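A toy calculation with hypothetical numbers makes the point concrete: missingness that depends only on an always-observed variable (here, a group label) biases the naive complete-case mean, but not an analysis that conditions on that variable.

```python
# Hypothetical survey: values by group; half of group 1 is missing,
# with missingness depending only on the (always observed) group label.
group1 = [180.0, 180.0, 190.0, 190.0]   # true mean 185
group2 = [160.0, 160.0, 170.0, 170.0]   # true mean 165
group1_obs = [180.0, 190.0]             # the observed half of group 1

full_mean = (sum(group1) + sum(group2)) / 8.0        # 175.0

# Naive complete-case mean: biased towards group 2.
naive_mean = (sum(group1_obs) + sum(group2)) / 6.0   # about 171.7

# Conditioning on the observed group label removes the bias.
stratified_mean = 0.5 * (sum(group1_obs) / 2.0) + 0.5 * (sum(group2) / 4.0)
```

The within-group observed means are unbiased here, so the stratified estimate recovers the full-data mean exactly, while the pooled complete-case mean does not.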
For example, consider a survey in which height and gender are recorded. There may be a tendency for men's heights to be missing; as long as this increased probability does not depend on the heights of the men in question (only their gender), then the values are missing at random.
As a more realistic example, consider a clinical study comparing two antiepileptic medicines in which the endpoint is weekly seizure rate. Suppose the patients in the study visit the clinician weekly and report any seizures they have experienced during the previous week. Patients may withdraw from the study for a variety of reasons, but suppose that the major reason is lack of treatment efficacy. During a visit, a patient may inform the clinician that he or she wishes to withdraw and report any seizures experienced during the previous week. Since the information is complete at the time of withdrawal, the decision cannot depend on the subsequent missing values. In this situation, it is reasonable to assume that the MAR property is satisfied. On the other hand, if the patient withdraws in such a fashion that any seizures occurring during the previous week remain unreported, then the MAR property is probably violated. This example illustrates that it is possible to evaluate the plausibility of the MAR property, even without detailed knowledge of the missing data process. When there is a substantial proportion of missing data and these are clearly not missing at random, then an analysis based on the likelihood (4.34) will be heavily biased and should not be attempted. On the other hand, an analysis based on complete cases only will also be heavily biased. The only viable approach will be to model the missing data process explicitly, and this will often be very speculative.
We now examine some latent variable models. Note that for these, the MAR property is trivially fulfilled, provided no data are missing for the manifest (i.e., non-latent) variables.
4.6.2 Some Latent Variable Models

The simplest latent variable model we can consider is

[Graph: L joined to X.]

where L is a latent variable and X is a manifest (i.e., observed) variable. (Here and elsewhere, filled square vertices denote discrete latent variables and hollow squares denote continuous latent variables.) The model states that the distribution of X is a mixture of several Gaussian distributions. We use this model in Section 4.6.3. Note that the following three bivariate models are not sensible: [...] Markov models have proven useful in a wide range of applications, including speech recognition, ion-channel kinetics, and amino-acid sequence analysis. See Farewell (1998) for an overview and bibliography.
4.6.3 Example: The Components of a Normal Mixture

To illustrate some general features of the use of the EM-algorithm, we consider here the problem of separating a distribution into two components. We suppose that the following values

1.2, 1.3, 1.2, 1.5, 1.2, 2.3, 2.7, 1.2, 1.8, 3.2, 3.5, 3.7

have been drawn from a density of the form

f(x) = p₁ g(x; μ₁, σ) + (1 − p₁) g(x; μ₂, σ),

where p₁ is the proportion of the first component, and the components g are normal densities with means μ₁ and μ₂ and common variance σ. This is known as the two-component mixture model. We wish to estimate these parameters and, if possible, to test whether μ₁ = μ₂, i.e., whether the data are adequately described with one component. Day (1969) gives a careful treatment of this problem. We define the data and fit the one-component model first:
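For orientation, the EM iteration for this model alternates between computing each observation's responsibility for component 1 and re-estimating (p₁, μ₁, μ₂, σ). A minimal Python sketch (hand-picked starting values rather than MIM's random starts; illustrative, not MIM's implementation):

```python
import math

def em_mixture(xs, p1, mu1, mu2, var, steps=200):
    """EM for a two-component normal mixture with common variance.
    Returns (p1, mu1, mu2, var, -2 * log-likelihood)."""
    n = len(xs)
    def phi(x, m):
        # Normal density with mean m and the current common variance.
        return math.exp(-(x - m) ** 2 / (2.0 * var)) / math.sqrt(2.0 * math.pi * var)
    for _ in range(steps):
        # E-step: responsibility of component 1 for each observation.
        r = [p1 * phi(x, mu1) / (p1 * phi(x, mu1) + (1.0 - p1) * phi(x, mu2))
             for x in xs]
        # M-step: update the mixing proportion, means and variance.
        s1 = sum(r)
        p1 = s1 / n
        mu1 = sum(ri * x for ri, x in zip(r, xs)) / s1
        mu2 = sum((1.0 - ri) * x for ri, x in zip(r, xs)) / (n - s1)
        var = sum(ri * (x - mu1) ** 2 + (1.0 - ri) * (x - mu2) ** 2
                  for ri, x in zip(r, xs)) / n
    m2ll = -2.0 * sum(math.log(p1 * phi(x, mu1) + (1.0 - p1) * phi(x, mu2))
                      for x in xs)
    return p1, mu1, mu2, var, m2ll
```

For these twelve values the one-component model has −2·log-likelihood of about 32.39, so a two-component fit that drops well below that represents a substantial improvement.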
MIM>cont X; read X
DATA>1.2 1.3 1.2 1.5 1.2 2.3 2.7 1.2 1.8 3.2 3.5 3.7 !
Reading completed.
MIM>mod //X; fit
Calculating marginal statistics ...
Deviance: 0.0000 DF: 0
MIM>pr y
-2*Log-Likelihood of //X: 32.3910 DF: 0
We see that minus twice the log-likelihood is 32.3910. To fit the two-component model, we declare a discrete variable A, set its values to missing, and then fit the model A/AX/X:

MIM>fact A2; calc A=-1; model A/AX/X
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3910
   2      32.3910    0.000001
Successful convergence.
We see that the log-likelihood has not changed, and we therefore suspect that it was very flat at the starting point. We try some other starting points by repeating the command a few times:
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3910
   2      32.3910    0.000000
Successful convergence.
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3912
   2      32.3912    0.000002
Successful convergence.
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3910
   2      32.3910    0.000000
Successful convergence.
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3891
   2      32.3889    0.000157
   3      32.3887    0.000175
   4      32.3885    0.000195
   5      32.3883    0.000218
   6      32.3881    0.000246
   7      32.3878    0.000277
   8      32.3875    0.000314
   9      32.3871    0.000358
  10      32.3867    0.000410
  11      32.3862    0.000472
  12      32.3857    0.000547
  13      32.3851    0.000637
  14      32.3843    0.000748
  15      32.3834    0.000884
  16      32.3824    0.001054
  17      32.3811    0.001268
  18      32.3796    0.001542
  19      32.3777    0.001897
  20      32.3753    0.002364
  21      32.3723    0.002990
  22      32.3685    0.003846
  23      32.3634    0.005045
  24      32.3567    0.006769
  25      32.3473    0.009328
  26      32.3341    0.013271
  27      32.3144    0.019623
  28      32.2840    0.030418
  29      32.2340    0.050009
  30      32.1455    0.088541
  31      31.9734    0.172084
  32      31.5993    0.374114
  33      30.6963    0.903001
  34      28.6080    2.088266
  35      25.9970    2.610971
  36      25.2835    0.713516
  37      25.2397    0.043777
  38      25.2356    0.004159
  39      25.2346    0.001010
  40      25.2343    0.000275
  41      25.2342    0.000076
  42      25.2342    0.000021
  43      25.2342    0.000006
Successful convergence.

Success! We look at the maximum likelihood estimates:
MIM>pr f
Fitted counts, means and covariances.
A
1    X       0.147
     Means   3.208 X
     Count   4.285
2    X       0.147
     Means   1.433 X
     Count   7.715
So p̂₁, the estimated proportion of the first component, is 4.285/12 = 0.357, and μ̂₁, μ̂₂, and σ̂ are as shown. The estimated probabilities for each observation of deriving from the first or second component can be obtained using the Summary command (see Section A.6.2). To test whether one component is adequate, we may be tempted to fit the model A/X/X:
MIM>base; delete AX; emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1      32.3910
   2      32.3910    0.000000
Successful convergence.
and to perform a likelihood ratio test:

MIM>test
Test of H0: A/X/X against H: A/AX/X
LR: 7.1568 DF: 1 P: 0.0075
However, this inference is wrong in two respects. First, there are two fewer parameters in the simpler model, not one: the parameter p₁ is inestimable, but this is not detected by MIM, so the degrees of freedom should be two, not one. Secondly, the likelihood ratio test does not have an asymptotic χ² distribution (Ghosh and Sen, 1985). McLachlan and Basford (1987) give a careful treatment of this and suggest use of bootstrapping methods. The two-component normal mixture problem described here is the simplest case of a broad class of models called mixture models. McLachlan and Basford describe many applications of these, primarily in cluster analysis.
4.6.4 Example: Mathematics Marks, Revisited

Our second illustration of the use of the EM-algorithm is in an exploratory analysis of the mathematics marks data (Section 3.1.6). We suppose that there is a latent binary variable A and that the following model holds.
[Graph: latent binary A joined to each of V, W, X, Y, and Z, which are mutually non-adjacent.]
We fit this as follows:
MIM>fact A2; calc A=-1
MIM>model A/AV,AW,AX,AY,AZ/V,W,X,Y,Z
MIM>convcrit 0.001
Convergence Criterion: 0.00100000
MIM>emfit
EM algorithm: random start values.
Cycle   -2*Loglikelihood   Change
   1     3586.6322
   2     3544.4148   42.217466
   3     3489.8524   54.562365
   4     3475.6076   14.244835
   5     3473.6885    1.919064
   6     3473.1887    0.499796
   7     3472.9763    0.212403
   8     3472.8779    0.098388
   9     3472.8326    0.045326
  10     3472.8119    0.020637
  11     3472.8026    0.009319
  12     3472.7984    0.004185
  13     3472.7966    0.001874
  14     3472.7957    0.000837
Successful convergence.
Convergence occurs after the 14th cycle. We save the predicted values of A and then plot these against the observation number, calculated as follows:
MIM>impute
MIM>calc O=obs
The plot is as follows:
[Index plot: the predicted values of A (level 1 or 2) plotted against observation number.]
Curiously, the first 52 observations (except for number 45) all belong to the first group, and the remainder belong to the second group. This result is quite stable, since repeated use of EMFit gives the same result. This casts doubt on the idea of the data as a pristine random sample from some ideal population. The data have been processed in some way prior to presentation. Are there confounding variables that explain the variation but are not reported, or have the data been subjected to a factor analysis and then sorted by factor score? Certainly they have been mistreated in some way, doubtless by a statistician.
To examine whether the data have been sorted by some criterion, for example, factor score, we try a factor analysis model.

MIM>cont S; calc S=ln(1)
MIM>mod //SV,SW,SX,SY,SZ
MIM>convcrit 0.001
Convergence Criterion: 0.00100000
MIM>emfit
EM algorithm: random start values.
Cycle  -2*Loglikelihood      Change
  1        3592.5363
  2        3588.1598       -4.376522
  3        3556.1523      -32.007540
  4        3465.1367      -91.015609
  5        3414.1564      -50.980273
  6        3403.9184      -10.237988
  7        3401.3168       -2.601604
  8        3400.3137       -1.003101
  9        3399.8492       -0.464440
 10        3399.6055       -0.243702
 11        3399.4648       -0.140698
 12        3399.3783       -0.086549
 13        3399.3229       -0.055428
 14        3399.2864       -0.036469
 15        3399.2619       -0.024470
 16        3399.2453       -0.016669
 17        3399.2338       -0.011492
 18        3399.2258       -0.008001
 19        3399.2202       -0.005615
 20        3399.2162       -0.003966
 21        3399.2134       -0.002817
 22        3399.2114       -0.002010
 23        3399.2099       -0.001440
 24        3399.2089       -0.001035
 25        3399.2081       -0.000746
Successful convergence.
MIM>Disp VWXYZ,S
Fitted conditional means
 V   38.955   -10.420
 W   50.591    -8.729
 X   50.602    -9.679
 Y   46.682   -11.409
 Z   42.307   -12.425
                    S
and covariances.
 V  193.723
 W    0.000   94.684
 X    0.000    0.000   17.929
 Y    0.000    0.000    0.000   87.704
 Z    0.000    0.000    0.000    0.000  140.001
         V        W        X        Y        Z
MIM>Display S,VWXYZ
Fitted conditional means
 S    4.314   -0.005   -0.009   -0.053   -0.013   -0.009
                   V        W        X        Y        Z
and covariances.
 S    0.098
MIM>impute
MIM>calc O=obs; label S "Score" O "Obs"
The algorithm converges after 25 iterations. By fixing S, we can examine the estimates of the conditional distributions of the manifest variables given the latent variable. For example, we see that E(X | S = s) = 50.602 − 9.679s. We note that the conditional variance of X (Algebra) given S = s is estimated to be 17.929: this is considerably smaller than that of the other manifest variables, and is consistent with the central role of Algebra noted in Section 3.1.6. Similarly, we can examine the estimates of the conditional distribution of S given the manifest variables. In factor analysis terminology, these are known as factor loadings. It is seen that the largest loading is that of Algebra (X). The index plot of the latent variable is shown in Figure 4.9. It appears to confirm that the data have been sorted by some criterion.
The discrete latent model we used above resembles the mixture model of Section 4.6.3 in that the joint density is posited to be a mixture of two distributions, say N(μ1, Σ) and N(μ2, Σ), where Σ is diagonal. Given the observed variables, each observation has a posterior probability of deriving from the first or second component. The parameters μ1 and μ2 are estimated as means of the observed values weighted by the respective posterior probabilities.
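The EM scheme for this two-component model can be sketched as follows. This is an illustrative sketch only: the starting rule (the two extreme observations on the first variable) is ours, and MIM's own implementation, which uses random start values, differs.

```python
import numpy as np

def em_two_component(Y, n_iter=200):
    """EM for the mixture p N(mu1, S) + (1 - p) N(mu2, S), S diagonal.

    Y is an (n, q) data matrix.  Returns the mixing weight, component
    means, pooled diagonal variances, and posterior probabilities.
    """
    n, q = Y.shape
    p = 0.5
    mu = Y[[Y[:, 0].argmin(), Y[:, 0].argmax()]].astype(float)
    var = Y.var(axis=0)
    for _ in range(n_iter):
        # E-step: posterior probability that each row is from component 1
        logd = np.stack([-0.5 * (np.log(2 * np.pi * var)
                                 + (Y - m) ** 2 / var).sum(axis=1)
                         for m in mu])
        w1, w2 = p * np.exp(logd[0]), (1 - p) * np.exp(logd[1])
        r = w1 / (w1 + w2)
        # M-step: mixing weight, weighted means, pooled diagonal variances
        p = r.mean()
        mu[0] = (r[:, None] * Y).sum(axis=0) / r.sum()
        mu[1] = ((1 - r)[:, None] * Y).sum(axis=0) / (1 - r).sum()
        var = ((r[:, None] * (Y - mu[0]) ** 2
                + (1 - r)[:, None] * (Y - mu[1]) ** 2).sum(axis=0) / n)
    return p, mu, var, r
```

The M-step shows the point made in the text: the component means are posterior-weighted averages of the observed values.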
FIGURE 4.9. An index plot of the continuous latent variable.
If the primary purpose of the analysis is to classify the observations into two groups, another technique can be used. Suppose each observation is assigned to the group with the largest posterior probability, and that the parameters μ1 and μ2 are estimated on the basis of the observations assigned to the respective groups. This is a cluster-analytic approach, closely related to clustering by minimum Mahalanobis distance. The following example illustrates how this may be performed in MIM:
MIM>fact A2; calc A=1+uniform
MIM>mod A/AV,AW,AX,AY,AZ/VWXYZ
MIM>fit; classify AB; calc A=B
Deviance: 19.6597 DF: 15
MIM>fit; classify AB; calc A=B
Deviance: 18.2707 DF: 15
MIM>fit; classify AB; calc A=B
Deviance: 18.2707 DF: 15
MIM>fit
Deviance: 18.2707 DF: 15
MIM>pr f
Fitted counts, means and covariances.
A  1
 V  145.593
 W  103.767  167.787
 X   83.589   81.825  109.794
 Y   75.499   89.444  107.663  212.298
 Z   94.143   94.807  118.130  149.631  291.303
         V        W        X        Y        Z
Means
    46.182   51.606   51.379   48.045   43.318   Count   66.000
         V        W        X        Y        Z
A  2
 V  145.593
 W  103.767  167.787
 X   83.589   81.825  109.794
 Y   75.499   89.444  107.663  212.298
 Z   94.143   94.807  118.130  149.631  291.303
         V        W        X        Y        Z
Means
    17.273   47.545   48.273   42.591   39.273   Count   22.000
         V        W        X        Y        Z
First, a binary grouping factor A is declared and its values are set at random. The line

fit; classify AB; calc A=B

does three things: the model is fitted, a factor B containing the group with the maximum posterior probability for each observation is calculated, and the grouping factor A is reset to the new grouping. These three steps are then repeated until convergence. In the example shown, convergence occurs rapidly, but whether the method will always lead to convergence is unclear.
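The fit/classify/reset iteration can be sketched as follows. This is illustrative only, assuming a homogeneous covariance matrix as in the example; the reassignment step makes explicit the connection to minimum Mahalanobis distance clustering.

```python
import numpy as np

def classify_iterate(Y, labels, n_iter=20):
    """Alternate "fit" and "classify", as in fit; classify AB; calc A=B.

    Each observation is reassigned to the group with the larger
    estimated joint density, i.e. the smaller Mahalanobis distance
    penalized by the log group proportion.
    """
    labels = np.asarray(labels).copy()
    n, q = Y.shape
    for _ in range(n_iter):
        groups = np.unique(labels)
        # "fit": group proportions and means, pooled covariance matrix
        props = np.array([(labels == g).mean() for g in groups])
        means = np.array([Y[labels == g].mean(axis=0) for g in groups])
        resid = Y - means[np.searchsorted(groups, labels)]
        prec = np.linalg.inv(resid.T @ resid / n)
        # "classify": maximum estimated density over the groups
        new = np.empty(n, dtype=labels.dtype)
        best = np.full(n, -np.inf)
        for k, g in enumerate(groups):
            d = Y - means[k]
            score = (-0.5 * np.einsum('ij,jk,ik->i', d, prec, d)
                     + np.log(props[k]))
            new[score > best] = g
            best = np.maximum(best, score)
        if np.array_equal(new, labels):   # converged
            break
        labels = new
    return labels
```

As in the text, there is no general guarantee that this hard-assignment iteration converges, though in practice it usually stabilizes quickly.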
4.7 Discriminant Analysis

In this section, we discuss the application of the mixed models to discriminant analysis. The approach is closely related to work on mixed model discriminant analysis by Krzanowski (1975, 1980, 1988) and, in particular, Krusinska (1992).

Suppose that there are k groups, π1,...,πk, that we wish to discriminate between on the basis of p − 1 discrete and q continuous variables. We write a typical cell as i = (g, j), where g is a level of the grouping factor G, and j is a (p − 1)-tuple of discrete measurement variables. The q-vector y contains the continuous measurement variables. We wish to allocate an individual to a group on the basis of measurements (j, y). An optimum rule (see, for example, Krzanowski, 1988, p. 336) allocates the individual to the group πg with maximum value of f(j, y | G = g)qg, where qg is the prior probability of an individual belonging to group πg. We assume the density (4.1) holds, i.e.,

f(i, y) = pi |2πΣi|^(−1/2) exp{−(y − μi)ᵀΣi^(−1)(y − μi)/2},   (4.36)

with i = (g, j). Often we may use the sample proportions, ng/N, as prior probabilities. In this case, we allocate the individual to the group with the maximum value of the joint density f(g, j, y). Following the graphical modelling approach, we select a parsimonious graphical model for the data and use this as a basis for allocation. Note that the allocation rule depends on the relative values of the f(g, j, y) over g. This only involves variables adjacent to G in the independence graph, corresponding to, say, (J1, Y1). To see this, observe that these variables separate G from (J2, Y2), where J = (J1, J2) and Y = (Y1, Y2). Thus, we obtain that
G ⊥⊥ (J2, Y2) | (J1, Y1), so for some functions a and b, we can write f(g, j, y) = a(g, j1, y1)b(j, y). It follows that

f(1, j, y)/f(2, j, y) = a(1, j1, y1)/a(2, j1, y1).
The command Classify can be used to compute predicted classifications using the maximum likelihood discriminant analysis method. Each observation is assigned to the level g with the largest estimated density f̂(g, j, y). The density estimate can either use all available observations or use the leave-one-out method; that is, the density for each observation is estimated using all available observations except the one in question. This method is computationally intensive (see Section A.11 for details).

We now compare the graphical modelling approach to more conventional methods of discriminant analysis. Three widely used approaches are (i) classical linear discriminant analysis, (ii) quadratic discriminant analysis, and (iii) the independence model. These involve continuous measurement variables only. Method (i) assumes an unrestricted homogeneous covariance matrix, and method (ii) assumes unrestricted heterogeneous covariance matrices. Method (iii) assumes that the measurement variables are independent given the grouping variable, corresponding to a star graph as in Section 4.6.4. So (i) and (ii) can be thought of as unstructured approaches, while (iii) is highly structured. All three methods are special cases of hierarchical interaction models.

A fourth, related approach (iv) is the location model (Krzanowski, 1975, 1980, 1988). This involves both discrete and continuous measurement variables in a two-tiered (chain model) structure. First, the distribution of the discrete measurement variables given G is modelled using a loglinear model. Then the conditional distribution of the continuous measurement variables given the discrete is modelled using a N(μi, Σ) distribution. A MANOVA-type model, with the same linear model for each continuous measurement variable and an unrestricted, homogeneous covariance matrix, is assumed. This approach might perhaps be described as being partially structured.
Note that hierarchical interaction models that are collapsible onto Δ, are mean linear, and have unrestricted homogeneous covariance matrices are location models.
In practice, method (ii) often performs poorly due to overfitting. In other words, the model involves an excessive number of parameters that are estimated poorly, leading to poor prediction. At the other extreme, method (iii) is often patently unrealistic, since measurement variables often show considerable correlation.
The rationale underlying the use of the hierarchical interaction models is to base discrimination on a small but accurate model that captures the important interrelationships between the variables. This would appear to be a promising approach, at least for those applications where the models describe the data well.
4.7.1 Example: Breast Cancer

As an illustration of the approach to discriminant analysis sketched in the last section, we consider the fourth data set described in Krzanowski (1975). This summarizes the results of ablative surgery for advanced breast cancer. The grouping factor is treatment success, classified as successful or intermediate (G = 1), or failure (G = 2). The data set consists of 186 observations on ten variables, these being comprised of six continuous variables (U-Z) and four binary variables (A-C and G).
Initially, we examine the counts in the 16 cells:
MIM>mod ABCG; pr s
Calculating marginal statistics ...
Empirical counts, means and covariances.
A B C G   Count
1 1 1 1   3.000
1 1 1 2  10.000
1 1 2 1   5.000
1 1 2 2  17.000
1 2 1 1  12.000
1 2 1 2   6.000
1 2 2 1   7.000
1 2 2 2   4.000
2 1 1 1  28.000
2 1 1 2  17.000
2 1 2 1  14.000
2 1 2 2   7.000
2 2 1 1  21.000
2 2 1 2  18.000
2 2 2 1   9.000
2 2 2 2   8.000
Since there are six continuous variables, the empirical cell covariance matrix must be singular for those cells with six or fewer observations. It follows that the MLE of the heterogeneous saturated model cannot exist (see Section 5.2).

FIGURE 4.10. The model initially selected.

We proceed therefore by performing backward selection from the homogeneous saturated model, suppressing the output by using the Z option:
MIM>mod ABCG/ABCGU,ABCGV,ABCGW,ABCGX,ABCGY,ABCGZ/UVWXYZ
MIM>fit
Likelihood: 11870.2091 DF: 315
MIM>maxmodel
MIM>stepwise z
The following model is selected:
ACG,ABG/X,W,AY,AU,ACGZ,ABV/YZ,XZ,WY,UV,UY
whose graph is shown in Figure 4.10. This model is rather complex, but the important thing to note is that only the variables A, B, C, and Z are adjacent to G, so a substantial dimension reduction, from ten to five dimensions, is possible. We therefore consider the marginal distribution of these five variables. We are now able to test for variance homogeneity:
MIM>mod ABCG/ABCGZ/ABCGZ
MIM>fit
Deviance: 0.0000 DF: 0
MIM>base
MIM>mod ABCG/ABCGZ/ABCZ; fit; test
Deviance: 16.7532 DF: 8
Test of HO: ABCG/ABCGZ/ABCZ against H: ABCG/ABCGZ/ABCGZ
LR: 16.7532 DF: 8 P: 0.0328
MIM>mod ABCG/ABCGZ/ABGZ; fit; test
Deviance: 16.2615 DF: 8
Test of HO: ABCG/ABCGZ/ABGZ against H: ABCG/ABCGZ/ABCGZ
LR: 16.2615 DF: 8 P: 0.0388
MIM>mod ABCG/ABCGZ/ACGZ; fit; test
Deviance: 8.9392 DF: 8
Test of HO: ABCG/ABCGZ/ACGZ against H: ABCG/ABCGZ/ABCGZ
LR: 8.9392 DF: 8 P: 0.3475
MIM>mod ABCG/ABCGZ/BCGZ; fit; test
Deviance: 14.2978 DF: 8
Test of HO: ABCG/ABCGZ/BCGZ against H: ABCG/ABCGZ/ABCGZ
LR: 14.2978 DF: 8 P: 0.0743
MIM>mod ABCG/ABCGZ/CGZ; fit; test
Deviance: 20.1567 DF: 12
Test of HO: ABCG/ABCGZ/CGZ against H: ABCG/ABCGZ/ABCGZ
LR: 20.1567 DF: 12 P: 0.0642
There is evidence of variance heterogeneity with respect to C and G. We now perform backwards selection, starting from the last model tested:
MIM>stepwise u
Coherent Backward Selection
Unrestricted models, Chi-squared tests.
Critical value: 0.0500
Initial model: ABCG/ABCGZ/CGZ
Model: ABCG/ABCGZ/CGZ
Deviance: 20.1570 DF: 12 P: 0.0642
Edge        Test
Excluded    Statistic  DF      P
[AB]          15.8959   8 0.0439 +
[AC]          15.3270   8 0.0531
[AG]          17.9454   8 0.0216 +
[AZ]           9.2774   8 0.3194
[BC]           5.2809   8 0.7272
[BG]          11.6537   8 0.1673
[BZ]           6.1698   8 0.6282
[CG]          13.8980   9 0.1260
[CZ]          29.1471  10 0.0012 +
[GZ]          29.8069  10 0.0009 +
Removed edge [BC]
Model: ACG,ABG/ACGZ,ABGZ/CGZ
Deviance: 25.4379 DF: 20 P: 0.1852
Edge        Test
Excluded    Statistic  DF      P
[AC]          13.1668   4 0.0105 +
[AZ]           8.7295   6 0.1894
[BG]          16.0391   4 0.0030 +
[BZ]           4.0573   4 0.3983
[CG]          20.9183   5 0.0008 +
Removed edge [BZ]
Model: ACG,ABG/ACGZ/CGZ
Deviance: 29.4952 DF: 24 P: 0.2021
Edge        Test
Excluded    Statistic  DF      P
[AZ]           6.7912   4 0.1473
Removed edge [AZ]
Selected model: ACG,ABG/CGZ/CGZ
The graph of the selected model is as follows:

[Graph of ACG,ABG/CGZ/CGZ: G is adjacent to A, B, C, and Z; A is adjacent to B and C; Z is also adjacent to C.]
The allocation rule discussed in Section 4.7 has the form

ln f(1, j, y) − ln f(2, j, y) > 0.

It is convenient to use the canonical form (4.2), so that this becomes

(α1j − α2j) + (β1j − β2j)y − (ω1j − ω2j)y²/2 > 0.

The parameter estimates are as follows:
A  B  C    α1j − α2j    β1j − β2j      ω1j − ω2j
1  1  1       1.51     1.53 × 10⁻³   5.00 × 10⁻⁵
1  1  2       1.81     0.52 × 10⁻³   0.22 × 10⁻⁵
1  2  1       0.35     1.53 × 10⁻³   5.00 × 10⁻⁵
1  2  2       0.05     0.52 × 10⁻³   0.22 × 10⁻⁵
2  1  1       0.01     1.53 × 10⁻³   5.00 × 10⁻⁵
2  1  2       0.27     0.52 × 10⁻³   0.22 × 10⁻⁵
2  2  1       0.43     1.53 × 10⁻³   5.00 × 10⁻⁵
2  2  2       0.15     0.52 × 10⁻³   0.22 × 10⁻⁵
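Applying the allocation rule is then a matter of evaluating the quadratic score for the observed cell. The following is a minimal sketch; the numbers are hypothetical, for illustration only, not the tabulated estimates.

```python
def allocate(da, db, dw, y):
    """Discriminant score ln f(1,j,y) - ln f(2,j,y) in canonical form.

    da, db, dw are the differences alpha_1j - alpha_2j,
    beta_1j - beta_2j and omega_1j - omega_2j for the observed cell j;
    the individual is allocated to group 1 when the score is positive.
    """
    return da + db * y - 0.5 * dw * y * y

# Hypothetical values for one cell j:
score = allocate(da=1.5, db=0.5e-3, dw=0.2e-5, y=40.0)
group = 1 if score > 0 else 2
```

Note that since β1j − β2j and ω1j − ω2j depend only on C under the selected model ACG,ABG/CGZ/CGZ, the linear and quadratic terms of the rule are the same for all cells sharing a level of C.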
To calculate the error rates, we use the Classify command:

MIM>class GM
MIM>class GN
Leave-one-out method.
MIM>mod GMN
MIM>pr s
Calculating marginal statistics ...
Empirical counts, means and covariances.
G M N   Count
1 1 1  68.000
1 1 2   1.000
1 2 1   0.000
1 2 2  30.000
2 1 1  33.000
2 1 2   0.000
2 2 1   6.000
2 2 2  48.000
From the output, we obtain the following apparent and leave-one-out error rates:

Apparent error rates
            Predicted
Observed      1    2
    1        69   30
    2        33   54

Leave-one-out error rates
            Predicted
Observed      1    2
    1        68   31
    2        39   48

As expected, the latter give a more conservative assessment of the performance of the allocation rule.
5 Hypothesis Testing

5.1 An Overview

This chapter describes and compares various significance tests that can be used in the framework of hierarchical interaction models. The overall structure is as follows.

First, the asymptotic χ² test is studied in more detail. This is a very useful test that forms, as it were, a backbone to inference in the model class. However, it does require that the sample is large. In small or moderate samples, the χ² test will often be inaccurate, and small-sample tests, when available, should be preferred.
The next test described, the F-test, is one such small-sample test. It is available in connection with models with continuous variables.

In the subsequent sections, a variety of exact conditional tests are described. In certain circumstances, tests can be constructed whose null distribution is completely known. These tests, known as permutation or randomisation tests, enjoy various advantages. First, the method of construction allows the exact conditional distribution of any test statistic to be calculated. This enables tests sensitive to specific alternatives to be constructed: for example, contingency table tests appropriate for ordinal discrete variables. Secondly, the tests are valid under weakened distributional assumptions, which is why they are widely known as distribution-free or nonparametric tests. Thirdly, large samples are not required. This is particularly useful in the analysis of high-dimensional contingency tables, for which the asymptotic tests are often unreliable. A disadvantage of permutation tests is that they are computation-intensive.

The final three sections of the chapter describe some tests arising in classical multivariate analysis that can be formulated in the current framework.
5.2 χ² Tests

A general property of the likelihood ratio test is that under suitable regularity conditions, it has an asymptotic χ² distribution under the null hypothesis as the sample size tends to infinity. In the present context, this means that any pair of nested models can be compared by referring the deviance difference to a χ² distribution with the appropriate degrees of freedom. The following example illustrates a test of M0: B,AC versus M1: AB,BC,AC in a three-way contingency table.
MIM>fact a2b2c2; sread abc
DATA>2 5 4 7 6 8 9 5 !
Reading completed.
MIM>model ab,bc,ac; fit; base
Deviance: 0.1573 DF: 1
MIM>del ab,bc; fit
Deviance: 1.9689 DF: 3
MIM>test
Test of HO: b,ac against H: ab,bc,ac
LR: 1.8116 DF: 2 P: 0.4042
First, the model of no three-way interaction is fitted; it is then set as the base model (that is, the alternative hypothesis) using the Base command. A submodel is then fitted. The Test command tests this against the base model. The likelihood ratio test (deviance difference), degrees of freedom, and p-value are displayed.

The degrees of freedom are calculated as the difference in the number of free parameters in the two models, under the assumption that all parameters are estimable. If all cells are non-empty, and all cell SSP matrices are nonsingular, then all parameters are certainly estimable. For homogeneous models, it is sufficient that all cells are non-empty and that the overall SSP matrix is nonsingular. For simple models, less stringent requirements will suffice, but precise conditions are difficult to determine. This is true even in the purely discrete case (Glonek et al., 1988). If there are not enough data for all parameters to be estimable, then the degrees of freedom as calculated by the Test command may be incorrect.
Problems of moderate to high dimension typically involve large numbers of cells with zero counts and/or singular SSP matrices. In such circumstances, it will generally be difficult to determine whether the standard degrees of freedom calculations are correct.
When one or more of the marginal tables of counts corresponding to the discrete generators contain empty cells, a question mark is printed after the degrees of freedom in the output from Fit. For example,
MIM>fact a3b3; sread ab
DATA>2 7 4
DATA>0 0 0
DATA>4 2 7 !
MIM>mod a,b; fit
Deviance: 4.4502 DF: 4 ?
In this example, it is clear that the correct number is two (using the formula (R − 1)(C − 1) for an R × C table), but for more complex models, the correct number must be calculated from first principles; this calculation can be very difficult.
Fortunately, however, in some situations the calculations are straightforward. One of these is the test for variance homogeneity (see Section 5.13). Another is when both the alternative model M1 and the null model M0 are decomposable, and M0 is formed by removing an edge from M1. We call such a test a decomposable edge deletion test. Such tests have a particularly tractable structure, and we will encounter them repeatedly in this chapter.
It is useful to be able to characterize these tests precisely. In other words, given a decomposable model M1, which edges can be removed if the resulting model, M0, is to be decomposable? From Section 4.4, we know that M1 is decomposable if and only if its graph is triangulated and contains no forbidden paths. We require that the same be true of M0. It is not difficult to see that it is necessary and sufficient that (i) the edge is in one clique only, and (ii) if both endpoints of the edge are discrete, then the whole clique is discrete. Condition (i) is necessary since its violation would give rise to a chordless 4-cycle, and condition (ii) because its violation would give rise to a forbidden path.
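Conditions (i) and (ii) are easy to check given the clique list of the graph. The following is a sketch, assuming the clique list of a decomposable model is supplied correctly.

```python
def removable(cliques, discrete, u, v):
    """Check conditions (i) and (ii) for a decomposable edge deletion test.

    cliques: list of sets of vertices (the cliques of the model's graph);
    discrete: set of discrete vertices; (u, v): the edge to remove.
    """
    containing = [c for c in cliques if u in c and v in c]
    if len(containing) != 1:                  # condition (i)
        return False
    if u in discrete and v in discrete:       # condition (ii)
        return containing[0] <= discrete
    return True

# Figure 5.1, model ABC, BCD: [BC] lies in both cliques, [AB] in one
cliques = [{'A', 'B', 'C'}, {'B', 'C', 'D'}]
print(removable(cliques, {'A', 'B', 'C', 'D'}, 'B', 'C'))  # False
print(removable(cliques, {'A', 'B', 'C', 'D'}, 'A', 'B'))  # True
```

For the mixed model of Figure 5.2, with cliques {A, B, X} and {A, X, Y} and discrete vertices A and B, the same check shows that [AX] (in two cliques) and [AB] (a discrete edge in a clique containing the continuous X) are not removable.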
For example, the edge [BC] cannot be removed from the model shown in Figure 5.1, ABC, BCD, but the others can. Similarly, the edges [AX] and [AB] cannot be removed from the model AB/ABX,AY/XY (Figure 5.2), but the other edges can.

FIGURE 5.1. ABC, BCD.
FIGURE 5.2. AB/ABX,AY/XY.

It can be shown that a decomposable edge deletion test is equivalent to a
test for the removal of an edge from a saturated (marginal) model; see Frydenberg and Lauritzen (1989). For example, the test for the removal of [AY] from the model AB/ABX,AY/XY (see Figure 5.2) is equivalent to a test for the removal of [AY] from the saturated marginal model A/AX,AY/XY. Sometimes these tests are available for non-decomposable models. For example, the test for the removal of [AE] from AB,BC,CD,ADE (see Figure A.1) can be performed in the marginal {A, D, E} table as a decomposable edge deletion test.

Edge deletion tests are performed using the TestDelete command. For example,
MIM>mod ABCOEF MIM>testdelete AB Test of HO: BCDEF', ACOEF against H: ABCDEF LR: 22.6518 OF: 16
P: 0.1234
Note that TestDelete calculates the correct degrees of freedom whenever the test corresponds to a decomposable edge deletion test. Further aspects of these tests are described later in this chapter.
5.3 F-Tests

To derive the F-test, suppose we have one continuous response variable X, so that we are conditioning on a = V \ {X}. Consider first a model M that is collapsible onto a such that the conditional distribution of X given a is variance homogeneous. The conditional density is

(2πσ²_{x|a})^(−1/2) exp{−(x − μ_{x|a})²/(2σ²_{x|a})},

where μ_{x|a} and σ²_{x|a} are the conditional mean and variance. Thus, the conditional log likelihood is

−(N/2)ln(2π) − (N/2)ln(σ²_{x|a}) − RSS/(2σ²_{x|a}),
where RSS is the residual sum of squares. It can be shown that σ̂²_{x|a} = RSS/N, so the maximized log likelihood can be written as

−(N/2)ln(2π) − (N/2)ln(RSS/N) − N/2.
Consider now a test of two such models, that is, a test of M0 ⊆ M1, where both models are collapsible onto a, induce the same marginal model Ma, and are such that the conditional distribution of X given a is variance homogeneous. Then the deviance difference between M0 and M1 is

d = 2(ℓ̂1 − ℓ̂0) = N ln(RSS0/RSS1).   (5.1)

If we let r0 be the difference in the number of free parameters between M0 and M1 (and hence between the conditional models they induce for X given a), and let r1 be the number of free parameters in the conditional model induced by M1, then we can write the F-test for M0 versus M1 as

F = ((RSS0 − RSS1)/r0) / (RSS1/(N − r1)).

Using (5.1), we obtain

F = (e^(d/N) − 1)(N − r1)/r0,

which under M0 is F-distributed with r0 and N − r1 degrees of freedom.
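The equivalence of the two expressions for F can be checked numerically; the following sketch uses made-up values for the residual sums of squares and parameter counts.

```python
import math

def f_from_rss(rss0, rss1, n, r0, r1):
    """F statistic computed directly from residual sums of squares."""
    return ((rss0 - rss1) / r0) / (rss1 / (n - r1))

def f_from_deviance(d, n, r0, r1):
    """The same statistic recovered from the deviance difference (5.1)."""
    return (math.exp(d / n) - 1) * (n - r1) / r0

# Hypothetical values: both routes give the same answer
rss0, rss1, n, r0, r1 = 130.0, 100.0, 25, 2, 5
d = n * math.log(rss0 / rss1)                  # deviance difference
assert abs(f_from_rss(rss0, rss1, n, r0, r1)
           - f_from_deviance(d, n, r0, r1)) < 1e-12
```

This is exactly the identity e^(d/N) = RSS0/RSS1 used in the derivation above.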
The following fragment illustrates an F-test of no interaction in the two-way layout:
MIM>mod AB/ABX/X; fit; base
Deviance: 8.1337 DF: 5
MIM>mod AB/AX,BX/X; fix ab; fit
Fixed variables: AB
Deviance: 11.5229 DF: 7
MIM>ftest
Test of HO: AB/AX,BX/X against H: AB/ABX/X
F: 1.3651 DF: 2, 18 P: 0.2806
Like the Test command, FTest compares the current model to the base model. The Fix command is used to identify the continuous response variable (i.e., all the variables in the model except one continuous variable must be fixed). The output shows the test statistic, the degrees of freedom, and the p-value.
As with the Test command, the calculated degrees of freedom may be incorrect if the data are not sufficient to estimate all the parameters. In the same way as before, a way to get around this problem when it arises is to restrict attention to decomposable models and use the TestDelete command instead. We can use the following result (Frydenberg and Lauritzen, 1989). Suppose that a test of M0 versus M1 is a decomposable edge deletion test between homogeneous models such that one or both of the endpoints of the edge are continuous. Then the test can be performed as an F-test. This is illustrated in the following example:

MIM>model AB/AX,ABY/XY
MIM>testdelete AX s
Test of HO: AB/X,ABY/XY against H: AB/AX,ABY/XY
F: 0.0003 DF: 1, 21 P: 0.9856
The initial model, AB/AX,ABY/XY, is homogeneous and decomposable, and the removal of the edge [AX] satisfies the above criteria.

For large samples, the p-values from the F-test and from the χ² test are almost identical. For small samples, the reference χ² distribution will be unreliable: generally it will be too liberal, i.e., it will tend to reject the null hypothesis too often (Porteous, 1989). For this reason, F-tests should be preferred when available in small samples.
5.4 Exact Conditional Tests

In the previous sections, we described tests based on the hierarchical interaction model family, for which the distribution of the variables is specified except for a number of unknown parameters. We now describe some tests of more general validity in the sense that they do not require specific distributional assumptions. In this section, we describe a general rationale for the tests and some computational aspects, and in the following sections we review some particular tests.

To introduce the topic, consider a simple two-sample problem involving a continuous variable. Suppose we have N independent and identically distributed observations on two variables: a binary factor A and a continuous response Y. The null hypothesis, H0, is that of independence (homogeneity), i.e., that
f_{Y|A}(y|1) = f_{Y|A}(y|2) = f_Y(y).
Under this model, the order statistics y(1), y(2),..., y(N) are sufficient for f(y), and so similar tests can be constructed by conditioning on the observed order statistics, i.e., that Y(·) = y(·). Under H0, all N! permutations of y are equally likely. We can represent the observed data in a simple table, as follows:
A        y(1)   y(2)   ...   y(N)   Total
1          1      0    ...     0      n1
2          0      1    ...     1      n2
Total      1      1    ...     1       N
The columns represent the values of y in ascending order, and for each column, there is a 1 in the row indicating membership of the first or second sample. Each possible permutation of y corresponds to a similar 2 x N table of zeros and ones, with the same row and column sums.
For any test statistic, for example the difference in sample means t_obs = ȳ1 − ȳ2, we can compute the permutation significance level as

p_obs = Pr{|ȳ1 − ȳ2| ≥ t_obs | Y(·) = y(·), H0} = K*/N!,

where K* is the number of permutations giving values of the test statistic greater than or equal to t_obs.

If the observed responses contain ties, then these will appear in the above table as columns with identical y's. We can represent the observed table more compactly by combining such columns so as to arrive at a table of the form

A        y(1)   y(2)   ...   y(C)   Total
1         n11    n12   ...    n1C     n1+
2         n21    n22   ...    n2C     n2+
Total     n+1    n+2   ...    n+C       N
where y(1), y(2),..., y(C) are the distinct order statistics, and nij is the number of observations in sample i with y = y(j). Similarly, instead of generating all permutations, we can generate all the 2 × C tables of nonnegative integers with fixed column margins n+1, n+2,..., n+C and row margins n1+ and n2+. It can be shown that the probability of such a table, M = {mij}, is given by the hypergeometric distribution
Pr(M | H0) = n1+! n2+! ∏_{j=1}^{C} n+j! / (N! ∏_{i=1}^{2} ∏_{j=1}^{C} mij!).   (5.2)
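Formula (5.2) can be checked numerically by enumerating all tables with the given margins: the probabilities must sum to one. The following sketch uses an arbitrary small example.

```python
from itertools import product
from math import factorial, prod

def table_prob(m, row, col, n):
    """Formula (5.2): hypergeometric probability of a 2 x C table m."""
    num = (prod(factorial(r) for r in row)
           * prod(factorial(c) for c in col))
    den = factorial(n) * prod(factorial(x) for r in m for x in r)
    return num / den

# All 2 x 3 tables with row margins (4, 3) and column margins (2, 2, 3)
row, col = (4, 3), (2, 2, 3)
n = sum(row)
total = 0.0
for top in product(*(range(c + 1) for c in col)):
    if sum(top) == row[0]:
        bottom = tuple(c - t for c, t in zip(col, top))
        total += table_prob((top, bottom), row, col, n)
assert abs(total - 1.0) < 1e-12
```

The enumeration loop is exactly the "exhaustive enumeration" approach described below, here on a scale small enough to be feasible.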
For any test statistic T, we can compute the permutation significance level

p_obs = Pr(T ≥ t_obs | H0) = Σ_{M : T(M) ≥ t_obs} Pr(M | H0).
This method of constructing tests is easily generalized in several respects. First, the response variable does not have to be continuous, but can be discrete with, say, C levels. The above description applies precisely, with the distinct order statistics y(1), y(2),..., y(C) being simply the factor levels 1, 2,..., C. Secondly, the row variable A does not have to be binary but can be discrete with, say, R levels. Then the process involves generating all R × C tables of nonnegative integers with fixed column margins n+1, n+2,..., n+C and row margins n1+, n2+,..., nR+. The conditional probability of such a table M is then
Pr(M | H0) = ∏_{i=1}^{R} ni+! ∏_{j=1}^{C} n+j! / (N! ∏_{i=1}^{R} ∏_{j=1}^{C} mij!).   (5.3)

Similarly, the row variable could be continuous, but we do not use this possibility in the sequel. Finally, we can extend the approach to test conditional independence given a third, discrete variable. Suppose the hypothesis is A ⊥⊥ Y | S, where we call S the stratum variable and suppose that it has L levels. The order statistics in stratum k, y(1,k), y(2,k),..., y(n++k,k) say, are sufficient for f_{Y|S=k}(y), and so similar tests can be constructed by conditioning on the observed order statistics, i.e., that Y(·,k) = y(·,k) for each k = 1,..., L.
Under H0, all ∏_{k=1}^{L} n++k! permutations of y within strata are equally likely. We summarize the data in an R × C × L table, N = {nijk}, where R is the number of levels of A, and C is the number of distinct values of Y. The exact tests are constructed by conditioning on the marginal totals {ni+k} and {n+jk}. We can think of the three-way table as a stack of R × C tables, one for each stratum k of S. The kth slice has fixed row and column totals {ni+k} and {n+jk}. The sample space, which consists of all R × C × L contingency tables with these fixed margins, is denoted Y. Under the null hypothesis H0, the probability of a given table M = {mijk} is
Pr(M | H0) = ∏_{k=1}^{L} [ ∏_{i=1}^{R} ni+k! ∏_{j=1}^{C} n+jk! / (n++k! ∏_{i=1}^{R} ∏_{j=1}^{C} mijk!) ].   (5.4)
In this fashion, we can in principle compute the exact conditional tests for A ⊥⊥ B | S, where A and S are discrete and where B can be either discrete or continuous. Before we describe ways of computing the tests in practice, let us first consider how they fit into the graphical modelling framework. Recall that a decomposable edge deletion test is equivalent to a test for edge removal from a saturated marginal model (see Section 5.2). That is, if the edge to be removed is [UV], then the test is equivalent to a test for U ⊥⊥ V | w for some w ⊆ Δ ∪ Γ. When w consists of discrete variables only, and either U or V (or both) are discrete, then the test can be performed as an exact conditional test of the type we have just described. We do this by constructing (conceptually) a new factor S by "crossing" all the factors in w, i.e., with a factor level for each combination of levels for the factors in w.
We now consider three alternative ways of computing the tests.
The direct method is exhaustive enumeration. For each table M E Y, the probability Pr(MIHo) and the statistic T(M) are calculated, enabling exact calculation of Pobs. Unfortunately, since the number of tables in Y may be ast.ronomicaL exhalLstin' Cl1IiLl1Cration is often infeasible.
We mention in passing that Mehta and Patel (1983) have developed a network algorithm for these calculations that can be used for the computation of stratified linear rank tests. It is implemented in the program StatXact (Mehta and Patel, 1991). This algorithm is considerably faster than the one used by MIM.
As an alternative when exhaustive enumeration is not feasible, Monte Carlo sampling can be used. Using an algorithm of Patefield (1981), a large fixed number, say K, of random tables is sampled from Y in such a way that for any M ∈ Y, the probability of sampling M is Pr(M | H0). For the table M_k, define

    Z_k = 1 if T(M_k) ≥ t_obs, and Z_k = 0 otherwise.
We estimate p_obs as p̂_obs = Σ_{k=1}^K Z_k / K. It is easy to show that p̂_obs is an unbiased estimate of p_obs. An approximate 99% confidence interval can be calculated as

    p̂_obs ± 2.576 √( p̂_obs (1 − p̂_obs) / K ).

By taking K large enough, p_obs can be estimated to any desired precision, although greater precision than four decimals is rarely necessary.
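A minimal sketch of this crude Monte Carlo scheme (our own illustration, not MIM code): given a sampler that draws tables from Y with probability Pr(M | H0) and a statistic T, it returns the estimate and the 99% half-width.

```python
from math import sqrt

def monte_carlo_p(sample_table, T, t_obs, K=1000):
    # Z_k = 1 when T(M_k) >= t_obs; p_hat is the mean of the Z_k
    exceed = sum(1 for _ in range(K) if T(sample_table()) >= t_obs)
    p_hat = exceed / K
    # half-width of the approximate 99% confidence interval
    half = 2.576 * sqrt(p_hat * (1 - p_hat) / K)
    return p_hat, half
```

With a real table sampler (e.g., one based on Patefield's algorithm) this corresponds to the "Estimated P" lines in the MIM output shown later.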
The use of Monte Carlo sampling to compute exact tests in multidimensional contingency tables, particularly in connection with ordinal variables, has been strongly advocated by Kreiner (1987). The third way to estimate p_obs is by using sequential Monte Carlo sampling. The purpose of this is to stop sampling if it is clear early on that there is little or no evidence against the null hypothesis. Sampling is continued until a prescribed number, say h, of tables M_k with T(M_k) ≥ t_obs are sampled, or a maximum number of samples have been taken, whichever comes first. The estimate of p_obs is, as usual, p̂_obs = Σ_{k=1}^K Z_k / K, where K
is the number of tables sampled. This stopping rule was proposed by Besag and Clifford (1991). It has the attractive property that p-values close to zero are estimated accurately, whereas in the less interesting region where p-values are large, sampling stops quickly. We now turn to a detailed description of some specific tests for conditional independence, which can be computed as exact conditional tests. The first type uses test statistics we are already familiar with, since these are based on the standard deviance statistic.
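The Besag and Clifford stopping rule just described can be sketched as follows (again our own illustration, with hypothetical sampler and statistic arguments):

```python
def sequential_p(sample_table, T, t_obs, h=20, max_k=1000):
    # stop once h tables with T(M) >= t_obs have been seen,
    # or after max_k tables have been sampled, whichever comes first
    exceed = k = 0
    while exceed < h and k < max_k:
        k += 1
        if T(sample_table()) >= t_obs:
            exceed += 1
    return exceed / k, k   # p estimate and number of tables sampled
```

Small p-values require the full max_k samples; large p-values reach h quickly, so sampling stops early, exactly the behaviour described above.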
5.5 Deviance-Based Tests

To reiterate, we are considering exact tests for A ⊥⊥ B | S, where A and S are discrete, and B can be either discrete or continuous. In the mixed model framework, there are three cases to consider. The first applies when the column variable (B) is discrete. Then the test corresponds to the test of M0: AS, BS versus M1: ABS, for which the deviance has the form

    G² = 2 Σ_{k=1}^L Σ_{j=1}^C Σ_{i=1}^R n_ijk ln( n_ijk n_{++k} / (n_{i+k} n_{+jk}) ).
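As a numerical cross-check (our own sketch, not MIM code), the deviance above can be computed directly from the cell counts; applied to the oral lesion data of Table 5.1 analyzed below, it reproduces the value LR = 23.2967 reported by MIM.

```python
from math import log

def deviance(strata):
    """G2 for A indep B | S: 2 * sum of n_ijk * ln(n_ijk * n_++k /
    (n_i+k * n_+jk)), with empty cells contributing zero."""
    g2 = 0.0
    for table in strata:
        row = [sum(r) for r in table]
        col = [sum(c) for c in zip(*table)]
        n = sum(row)
        for i, r in enumerate(table):
            for j, x in enumerate(r):
                if x > 0:
                    g2 += 2 * x * log(x * n / (row[i] * col[j]))
    return g2
```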
As an example of the use of this test, consider the data shown in Table 5.1, taken from Mehta and Patel (1991). Data were obtained on the site of oral lesions in house-to-house surveys in three geographic regions in India. There are two variables: site of oral lesion (A) and region (B).

Site            Kerala  Gujarat  Andhra
Labial Mucosa      0       1       0
Buccal Mucosa      8       1       8
Commisure          0       1       0
Gingiva            0       1       0
Hard Palate        0       1       0
Soft Palate        0       1       0
Tongue             0       1       0
Floor of Mouth     1       0       1
Alveolar Ridge     1       0       1

TABLE 5.1. Oral lesion data from three regions in India. Source: Mehta and Patel, 1991.

We are interested
in determining whether the distribution of lesion sites differs between the three regions. The table is very sparse, so the asymptotic test of A ⊥⊥ B is unlikely to be reliable. In the following fragment, we first define the data and then estimate the size of the reference set Y using the approximation given in Gail and Mantel (1977) by using TestDelete with the C option.
MIM>fact A9B3
MIM>sread AB
DATA>0 1 0 8 1 8 0 1 0 0 1 0 0 1 0 0 1 0 0 1 0 1 0 1 1 0 1 !
Reading completed.
MIM>model AB
MIM>testdelete AB c
Test of HO: B,A against H: AB
Estimated number of tables: 35200
Likelihood Ratio Test.
LR: 23.2967 DF: 16 Asymptotic P: 0.1060
The size of Y is estimated as 35,200 and the asymptotic test gives a nonsignificant result. It is feasible to generate 35,200 tables, so we proceed to calculate the exact test by using the E option:
MIM>testdelete AB e
Test of HO: B,A against H: AB
Exact test - exhaustive calculations.
No. of tables: 26109
Likelihood Ratio Test.
LR: 23.2967 DF: 16 Asymptotic P: 0.1060
Exact P: 0.0356
The exact p-value of 0.0356 implies that the distribution of lesion sites differs significantly between regions. For this table, the reference set Y contained 26,109 tables. We could also have chosen to use sequential Monte Carlo sampling, which is rather more efficient:

MIM>testdel AB q
Test of HO: B,A against H: AB
Exact test - monte carlo with sequential stopping rule.
Maximum no of more extreme tables: 20
Maximum no of random tables: 1000
No. of tables: 719
Likelihood Ratio Test.
LR: 23.2967 DF: 16 Asymptotic P: 0.1060
Estimated P: 0.0278
The prescribed maximum number of sampled tables with T(M_k) ≥ t_obs is 20 by default, and the maximum number sampled is 1000. These settings can be changed using the commands SetMaxExcess and SetMaxSim, respectively. Another example, using the lizard perching habits data (Section 2.1.1), is as follows. First we define the data, and then we perform an asymptotic χ² test for B ⊥⊥ C | A.

MIM>fact A2B2C2; sread ABC 32 86 11 35 61 73 41 70 !
MIM>model ABC
MIM>testdelete BC
Test of HO: AC,AB against H: ABC
Likelihood Ratio Test.
LR: 2.0256 DF: 2 Asymptotic P: 0.3632
To evaluate the feasibility of calculating the exact test by exhaustive enumeration, we again use the C option:

MIM>testdel BC c
Test of HO: AC,AB against H: ABC
Estimated number of tables: 4067
Likelihood Ratio Test.
LR: 2.0256 DF: 2 Asymptotic P: 0.3632
It is feasible to generate 4067 tables, so we calculate the exact test by using the E option:

MIM>testdel BC e
Test of HO: AC,AB against H: ABC
Exact test - exhaustive calculations.
No. of tables: 4532
Likelihood Ratio Test.
LR: 2.0256 DF: 2 Asymptotic P: 0.3632
Exact P: 0.3648
We see that there were precisely 4532 tables in Y. In the example shown, the asymptotic test and the exact test were almost identical. Now suppose that the response is a continuous variable, say X, and that the initial model is homogeneous. A test of A ⊥⊥ X | S corresponds to

    M0: AS/SX/X versus M1: AS/ASX/X,
for which the deviance difference takes the form

    d = N ln( RSS0 / RSS1 ),    (5.5)

where RSS0 is the sum of squares about the stratum (S) means, and RSS1 is the sum of squares about the cell means (see Section 4.1.7).
When the initial model is heterogeneous, the test corresponds to

    M0: AS/SX/SX versus M1: AS/ASX/ASX,

and the deviance difference takes the form

    d = Σ_{k=1}^L n_{+k} ln( RSS_k / n_{+k} ) − Σ_{k=1}^L Σ_{i=1}^R n_{ik} ln( RSS_{ik} / n_{ik} ),    (5.6)

where RSS_k is the sum of squares about the mean in stratum k, and RSS_{ik} is the sum of squares about the mean in cell (i, k).
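A small sketch (ours, not MIM) of the homogeneous-case difference (5.5) for an unstratified layout; applied to the toxicity data of Table 5.2 below, it reproduces the deviance LR = 14.6082 obtained by MIM.

```python
from math import log

def rss(xs):
    # residual sum of squares about the mean of xs
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs)

def deviance_diff(groups):
    # (5.5) with an empty stratum variable S: d = N ln(RSS0 / RSS1),
    # RSS0 about the overall mean, RSS1 about the group means
    pooled = [x for g in groups for x in g]
    return len(pooled) * log(rss(pooled) / sum(rss(g) for g in groups))
```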
The data in Table 5.2, taken from Mehta and Patel (1991), serve as an illustration: for 28 patients receiving five different drug regimens, a measure of hematologic toxicity was recorded.

MIM>fact A5; cont X; read AX
DATA>1 0 1 1 1 8 1 10
DATA>2 0 2 0 2 3 2 3 2 8
DATA>3 5 3 6 3 7 3 14 3 14
DATA>4 1 4 1 4 6 4 7 4 7 4 7 4 8 4 8 4 10
DATA>5 7 5 10 5 11 5 12 5 13 !
Reading completed.
MIM>model A/AX/X
MIM>testdel AX c
Test of HO: A/X/X
Drug regimen    Hematologic toxicity
1               0 1 8 10
2               0 0 3 3 8
3               5 6 7 14 14
4               1 1 6 7 7 7 8 8 10
5               7 10 11 12 13

TABLE 5.2. Hematologic toxicity data. Source: Mehta and Patel, 1991.
against H: A/AX/X
Estimated number of tables: 2.928522E+0011
Likelihood Ratio Test.
LR: 14.6082 DF: 4 Asymptotic P: 0.0056
We see that the number of tables in Y is around 3 x 10^11, so exhaustive enumeration is quite infeasible. Instead, we choose to estimate the permutation significance level using crude Monte Carlo sampling, using the M option:
MIM>testdelete AX m
Test of HO: A/X/X against H: A/AX/X
Exact test - monte carlo estimates.
No. of tables: 1000
Likelihood Ratio Test.
LR: 14.6082 DF: 4 Asymptotic P: 0.0056
Estimated P: 0.0228 +/- 0.012147
MIM>model A/AX/AX
MIM>testdelete AX m
Test of HO: A/X/X against H: A/AX/AX
Exact test - monte carlo estimates.
No. of tables: 1000
Likelihood Ratio Test.
LR: 17.7251 DF: 8 Asymptotic P: 0.0234
Estimated P: 0.0469 +/- 0.017227
We show both the homogeneous and the heterogeneous variants. The algorithm samples 1000 tables randomly from Y. In both cases, we see that there is reasonable agreement between the asymptotic results and the exact results.
5.6 Permutation F-Test

We now consider the exact conditional test constructed on the basis of the F-statistic, the so-called permutation F-test. The advantage of this test is that it is valid for arbitrary distributions of the continuous variable. We illustrate the test using the data shown in Table 5.2.

MIM>model A/AX/X
MIM>testdel AX lsm
Test of HO: A/X/X against H: A/AX/X
Exact test - monte carlo estimates.
No. of tables: 1000
Likelihood Ratio Test.
LR: 14.6082 DF: 4 Asymptotic P: 0.0056
Estimated P: 0.0191 +/- 0.011158
Permutation F Test.
F: 3.9383 DF: 4, 23 Asymptotic P: 0.0141
Estimated P: 0.0191 +/- 0.011158
We show computation of the exact deviance test and the permutation F-test. Since the F-statistic is a monotonic transformation of the deviance difference with respect to the homogeneous model, as shown in (5.3), the two tests are equivalent in the conditional distribution. We see that the F-test appears to be rather more reliable than the χ² test, since the p-value for the former is closer to the exact conditional p-values. The next two tests that we examine are only appropriate when the response variable is discrete. They are competitors to the contingency table test G².
5.7 Pearson χ²-Test

The Pearson χ² test has the form

    X² = Σ_{k=1}^L Σ_{j=1}^C Σ_{i=1}^R ( n_ijk − m̂_ijk )² / m̂_ijk,

where m̂_ijk = n_{i+k} n_{+jk} / n_{++k} are the expected cell counts under the null hypothesis. The asymptotic distribution under the null hypothesis is the same as that of the likelihood ratio test G², namely χ² with L(R−1)(C−1) degrees of freedom. The option P calculates the Pearson goodness-of-fit test. Continuing the lizard data example:
MIM>testdel BC P
Test of HO: AC,AB against H: ABC
Pearson Goodness-of-fit Test.
X2: 2.0174 DF: 2 Asymptotic P: 0.3647
The exact test gives an almost identical result:

MIM>testdel BC pe
Test of HO: AC,AB against H: ABC
Exact test - exhaustive calculations.
No. of tables: 4532
Pearson Goodness-of-fit Test.
X2: 2.0174 DF: 2 Asymptotic P: 0.3647
Exact P: 0.3643
The likelihood ratio test gives more of the same:

MIM>testdel BC pl
Test of HO: AC,AB against H: ABC
Likelihood Ratio Test.
LR: 2.0256 DF: 2 Asymptotic P: 0.3632
Pearson Goodness-of-fit Test.
X2: 2.0174 DF: 2 Asymptotic P: 0.3647
There is hardly any difference between the tests in this example.
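As a cross-check of the X² value above, a direct computation from the formula (a sketch of ours, not MIM code) over the two lizard strata gives the same 2.0174:

```python
def pearson_x2(strata):
    """Pearson X2: sum over cells of (n_ijk - m_ijk)^2 / m_ijk with
    expected counts m_ijk = n_i+k * n_+jk / n_++k."""
    x2 = 0.0
    for table in strata:
        row = [sum(r) for r in table]
        col = [sum(c) for c in zip(*table)]
        n = sum(row)
        for i, r in enumerate(table):
            for j, x in enumerate(r):
                m = row[i] * col[j] / n
                x2 += (x - m) ** 2 / m
    return x2
```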
5.8 Fisher's Exact Test

Fisher's exact test is usually associated with 2 x 2 tables but can readily be extended to R x C x L tables. The test statistic is T = Pr(N | H0), the hypergeometric probability of the observed table N under H0. The exact p-value is the probability of observing a table with a Pr(N | H0) at least as small as that of the observed table. Less well-known than the exact test is a closely related asymptotic test (Freeman and Halton, 1951). The statistic is

    FH = −2 ln( γ Pr(N | H0) ),

where γ is a constant given as

    γ = (2π)^{L(R−1)(C−1)/2} ∏_{k=1}^L n_{++k}^{−(RC−1)/2} ∏_{j=1}^C n_{+j+}^{(R−1)/2} ∏_{i=1}^R n_{i++}^{(C−1)/2}.

FH is a monotone transformation of Pr(N | H0). The asymptotic distribution of FH under H0 is the same as that of the X² and G² statistics, i.e., χ² with L(R−1)(C−1) degrees of freedom.
This test is available through use of the F option. Continuing the lizard data example, we obtain

MIM>fact A2B2C2; sread ABC 32 86 11 35 61 73 41 70 !
MIM>model ABC; testdel BC f
Test of HO: AC,AB against H: ABC
Fishers Test.
FH: 1.9882 DF: 2 Asymptotic P: 0.3700
Again, in this example, the results are very close to those given by the χ² and the likelihood ratio tests. Calculating the exact test for the lesion data (Table 5.1), we obtain similar results as before:
MIM>model AB; testdel AB fm
Test of HO: B,A against H: AB
Exact test - monte carlo estimates.
No. of tables: 1000
Fishers Test.
FH: 19.7208 DF: 16 Asymptotic P: 0.2331
Estimated P: 0.0120 +/- 0.008887
5.9 Rank Tests
We saw in Section 5.4 that exact conditional tests for A ⊥⊥ B | S can be computed for any test statistic. In the preceding sections, we considered statistics that are based on parametric models, namely the deviance statistics, the closely related tests due to Pearson and Fisher, and the randomisation F-test. For such statistics, the exact conditional tests can be regarded as a natural part of the modelling process. In the case of discrete data, the exact conditional tests are attractive because their competitors require large samples. When a continuous response is involved and variance homogeneity can be assumed, small-sample tests (F-tests) are available, but the exact tests are still attractive since their validity does not require assumptions of normal errors. With continuous responses with heterogeneous variances, both advantages apply. We now consider a rather different approach, which is based on test statistics that are not directly derived from models: the so-called rank tests. In this section, we introduce the rationale for these tests and discuss some general issues, deferring a detailed description of some specific tests to the following sections. Although most of the following discussion refers to Wilcoxon's two-sample test, it is also applicable in general terms to other rank tests. A good reference for these tests is Lehmann (1975).
As their name implies, these tests use only ranks, that is to say, the serial number of the observations when the observed data are arranged in serial order. For example, in the simple two-sample case that introduced Section 5.4, the data were arranged as follows:

A        y(1)  y(2)  ...  y(N)  Total
1          1     0   ...    0     n1
2          0     1   ...    1     n2
Total      1     1   ...    1     N
In rank tests, the columns are numbered 1 to N and test statistics use these numbers, not the original observations. Thus, for example, the Wilcoxon test statistic W is simply the sum of the ranks in one of the rows, say A = 1. Assuming for the moment that all the observations are distinct, every set of data of size N will induce ranks of 1 to N, so the null distribution of W depends on n1 and n2 only. We say that rank tests are unconditionally distribution-free: unconditionally, in the sense that conditioning on the observed order statistics y() is not necessary. This property was particularly important when computers were not readily available, since it meant that critical values for the exact conditional test could be tabulated in terms of n1 and n2 only. If there are tied observations, ranks are not well-defined. The usual approach is to use the midranks for tied observations. If the data are arranged in tied form as

A        y(1)  y(2)  ...  y(C)  Total
1        n11   n12   ...  n1C    n1+
2        n21   n22   ...  n2C    n2+
Total    n+1   n+2   ...  n+C    N
then data in the jth column would correspond to the ranks τ+1 to τ+n_{+j}, where τ = Σ_{t<j} n_{+t}. The midrank is the average of these values, i.e., τ + (n_{+j} + 1)/2. Ties complicate the tabulation of critical values since in principle every possible configuration of ties should be tabulated. Since the number of possible configurations is very large, this is infeasible for all but the most minute sample sizes. Corrections to the asymptotic approximations to take ties into account have been developed, but their performance in small samples is generally unclear.
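The midrank rule is easy to state in code (a small sketch under the notation above):

```python
def midranks(col_counts):
    """Midrank for each column of a tied-ordinal layout: column j
    occupies ranks tau+1 .. tau+n_+j, whose average is
    tau + (n_+j + 1) / 2."""
    ranks, tau = [], 0
    for n_j in col_counts:
        ranks.append(tau + (n_j + 1) / 2)
        tau += n_j
    return ranks
```

For column counts (2, 3, 1) the occupied ranks are 1-2, 3-5 and 6, giving midranks 1.5, 4 and 6.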
Problems with ties have led to the emphasis of rank tests for continuous data, where ties are hopefully infrequent. It is sometimes argued that for continuous distributions, the probability of ties occurring is zero, and this may serve to allay concerns. However, this is rather like proving that bumblebees cannot fly: ties often do occur, since even continuous variables are measured with finite precision. In any case, it is no longer difficult to compute the exact significance levels, even when ties are present. There is a good case for arguing that in fact rank tests are most appropriate for ordinal categorical data, for which there is no metric relation between categories besides that of ordering, so no information is lost when forming ranks. As we see below, such tests often have very superior power to tests not exploiting ordinality.
A general point of criticism against rank tests is that they contribute little to the modelling process; that is to say, they may enable association to be detected, but they do not imply a parametric form for that association. In many applications, it is desirable to achieve a concise description of the system under study. This may be in terms of a small number of model parameters, or may take the form of a data reduction whose justification is model-based. Compare, for example, the Wilcoxon test and the permutation F-test in connection with a continuous response. The tests are equally valid when normality of errors cannot be assumed. The results of performing the F-test can be summarized in terms of the difference in sample means, but for the rank test no such summaries are available. (There is a widespread misapprehension that somehow medians are involved.) Another point of criticism against rank tests is that, despite their apparent simplicity, they can be difficult to interpret. Again, this is due to the fact that they are not model-based. We illustrate this using the Wilcoxon test. Consider a clinical study to compare two treatments, A and B, with respect to an ordinal endpoint measuring treatment effect (categorized as worsening, no change, and slight, moderate, and great improvement). Suppose the results from the study are as follows:

Treatment  Worse  No change  Slightly better  Moderately better  Much better
A            14       10            8                 8               6
B            12       10            9                11              12
The standard contingency table test treats the variables as nominal. Here, G² = 4.6018 on four degrees of freedom, giving P = 0.3306. Thus, the test does not detect a treatment difference. In contrast, if we use a Wilcoxon test, we obtain W = 2225, with P = 0.0347. So Wilcoxon's test finds a significant treatment effect. So far, so good. Suppose, however, that the study results were not as given above, but instead were as follows:
Treatment  Worse  No change  Slightly better  Moderately better  Much better
A            12       10            6                10              12
B             5       10           20                10               5
We see that treatment B often results in a slight improvement, whereas treatment A appears to lead to improvement in some patients but to no change or worsening in others. Now G² = 13.89 on four degrees of freedom, corresponding to P = 0.0076, which is highly significant. However, the Wilcoxon test gives W = 2525, corresponding to P = 1.0000; in other words, no evidence of difference between treatments. So here the non-ordinal test is more powerful.
Why is this? The Wilcoxon test is sensitive to departures from independence known as stochastic ordering, i.e., that F_A(x) ≤ F_B(x) (or vice versa), where, as usual, F is the distribution function F(x) = Pr(X ≤ x). The first table above was consistent with F_A(x) ≤ F_B(x), hence the greater power of the Wilcoxon test for these data. The second table does not exhibit stochastic ordering, and here the Wilcoxon test has low power.
There is another derivation of the Wilcoxon test that does not appeal to stochastic ordering (this is the Mann-Whitney form). Here it is formulated as a test of Pr(X > Y) = 1/2 against the alternative Pr(X > Y) ≠ 1/2 (see Hand, 1992 for a detailed discussion). Notice that here the null hypothesis is not homogeneity but a type of symmetry, Pr(X > Y) = 1/2. This means that a negative outcome of the test can be interpreted in different ways. If we know, or are prepared to assume, that stochastic ordering does hold, then we can conclude that there is no evidence against homogeneity. If not, then we conclude that there is no evidence against symmetry. The snag is that it is difficult to examine the stochastic ordering property. Asymptotic tests have been proposed (Robertson and Wright, 1981) but are difficult to compute and are apparently seldom used. This section has emphasized problematic aspects of rank tests. In their defense, it must be said that they can be valuable, particularly in the preliminary analysis of problems involving ordinal categorical variables. However, a parametric analysis will often be more informative (Agresti, 1984). Exact rank tests are also used when very cautious group comparisons are to be made, for example, in efficacy comparisons in clinical trials. Here, it may be argued that exact model-based tests, for example, the permutation F-test, may often be preferable on the grounds of ease of summarization and interpretation.
5.10 Wilcoxon Test

As we saw in the preceding section, the Wilcoxon test is used to compare discrete or continuous distributions between two populations. In terms of variables, it presupposes a binary (row) factor and a column variable that may be either discrete (ordinal) or continuous. Suppose the distribution of B given A = i, S = k is F_{i,k}(x), i.e., that F_{i,k}(x) = Pr(B ≤ x | A = i, S = k). As we just discussed, the statistic tests homogeneity,

    H0: F_{1,k}(x) = F_{2,k}(x), ∀x, ∀k,
against the alternative hypothesis that at least one of these distributions is stochastically larger than the other, i.e., that for some k,

    F_{1,k}(x) < F_{2,k}(x), ∀x,

or

    F_{2,k}(x) < F_{1,k}(x), ∀x.

The test statistic is the sum of the ranks of the first population summed over all the strata, i.e., W = Σ_{k=1}^L R_{1k}, where R_{ik} is the rank sum for the ith treatment group in the kth stratum, given as R_{ik} = Σ_{j=1}^C r_{jk} n_{ijk}, where r_{jk} is the midrank of an observation in column j, stratum k. Note that W is appropriate when the difference between the two populations is in the same direction in all strata.

In the conditional distribution, under H0, W has mean

    E(W | H0) = Σ_{k=1}^L ( n_{1+k} / n_{++k} ) Σ_{j=1}^C n_{+jk} r_{jk}

and variance

    Var(W | H0) = Σ_{k=1}^L [ n_{1+k} n_{2+k} / ( n_{++k}(n_{++k} − 1) ) ] { Σ_{j=1}^C n_{+jk} r_{jk}² − ( Σ_{j=1}^C n_{+jk} r_{jk} )² / n_{++k} }.

An asymptotic test of H0 compares

    ( W − E(W | H0) ) / √Var(W | H0)

with the N(0, 1) distribution. The two-sided p-value

    p = Pr( |W − E(W | H0)| ≥ |W_obs − E(W | H0)| | H0 )
is calculated. A simple example with unstratified, continuous data is the following. Diastolic blood pressure (mm Hg) was recorded for four subjects in a treatment group and for 11 subjects in a control group, as shown in Table 5.3. To compare the blood pressure of the two groups, we use the Wilcoxon test:
Group     Blood pressure (mm Hg)
Active    94 108 110 90
Control   80 94 85 90 90 90 108 94 78 105 88

TABLE 5.3. Blood pressure data.
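As a cross-check of the transcript that follows, a direct computation of W and E(W | H0) for Table 5.3 (our own sketch; unstratified, midranks for ties) gives W = 45 and E(W | H0) = 32:

```python
def wilcoxon_w(group1, group2):
    """Rank sum W of group1 using midranks for ties, together with
    its null mean n1 * (N + 1) / 2 (unstratified case)."""
    pooled = sorted(group1 + group2)
    # midrank of a value = average of the positions it occupies
    rank = {v: (pooled.index(v) + 1 + pooled.index(v) + pooled.count(v)) / 2
            for v in set(pooled)}
    w = sum(rank[x] for x in group1)
    n1, n = len(group1), len(pooled)
    return w, n1 * (n + 1) / 2
```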
MIM>fact A2; cont X; read AX
DATA>1 94 1 108 1 110 1 90
DATA>2 80 2 94 2 85 2 90 2 90 2 90 2 108 2 94 2 78 2 105 2 88 !
Reading completed.
MIM>model A/AX/X; testdel AX we
Test of HO: A/X/X against H: A/AX/X
Exact test - exhaustive calculations.
No. of tables: 230
Wilcoxon Test.
W: 45.0000 E(W|HO): 32.0000
Asymptotic P: 0.0853 Exact P: 0.0989
There is good agreement between the asymptotic and the exact tests. For stratified tests, ranks (and midranks, if there are ties) can be calculated in two ways. The observations can be ranked within each stratum, giving the stratum-specific scores, or the ranks can be calculated by sorting the observations for the combined strata, giving the stratum-invariant scores. In MIM, stratum-specific scores are used by default, but stratum-invariant scores are also available. The choice between stratum-specific and stratum-invariant scores is discussed in Lehmann (1975, pp. 137-140). Briefly, it depends both on the nature of the data and on power considerations. If the response distributions differ between the different strata, then stratum-specific scores will be indicated. For example, if different laboratories (strata) use different methods to measure a variable, then this may lead to differences in scale or location or both between the different laboratories, and so stratum-specific scores will be indicated. However, if the stratum sizes are small, then the stratum-specific scores may be associated with reduced power compared to the stratum-invariant scores. The extreme example of this would be a matched pair study where each pair is a stratum: here, the stratum-specific scores give a sign test. Stratified Wilcoxon tests were studied by van Elteren (1960), who proposed two different statistics. The so-called design-free statistic is of the form W' = Σ_{k=1}^L R_{1k} / (n_{1+k} n_{2+k}), and the locally most powerful test is of the form W'' = Σ_{k=1}^L R_{1k} / (1 + n_{++k}). The above expressions for the mean and variance of W apply if the r_{jk} are replaced by r_{jk} / (n_{1+k} n_{2+k}) and r_{jk} / (1 + n_{++k}), respectively. To compare these statistics, note that with equal allocation to the row factor, the expected rank sum for the kth stratum under the null hypothesis is E(R_{1k} | H0) = n_{++k}(n_{++k} + 1)/4.
So for the crude Wilcoxon statistic W = Σ_{k=1}^L R_{1k}, each stratum contributes a quantity roughly proportional to n²_{++k}. For the locally most powerful version, the contribution is roughly proportional to n_{++k}, and for the design-free version, the contributions are roughly equal. The locally most powerful version seems the most judicious in this respect. If there is much variation between strata in sample size and response distribution F_{i,k}, then the different versions may lead to very different conclusions. There does not appear to be any consensus in the literature as to which version is preferable.

TABLE 5.4. Data from a multicentre analgesic trial. Source: Koch et al. (1983).
An example with stratified categorical data is shown in Table 5.4. These show the results of a multicentre clinical trial comparing an analgesic to a placebo for the relief of chronic joint pain, reported by Koch et al. (1983). There are two centres, and the patients were classified prior to treatment into two diagnostic status groups. Response to treatment is rated as poor, moderate, or excellent. Since the response is associated both with centre and diagnostic status, the key hypothesis is

    Response ⊥⊥ Treatment | (Centre, Diagnostic Status).

We illustrate a test for this as follows:
MIM>fact A2B2T2R3
MIM>label A "Centre" B "Diag status" T "Treatment" R "Response"
MIM>sread ABTR
DATA>3 20 5 11 14 8 3 14 12 6 13 5 12 12 0 11 10 0 3 9 4 6 9 3 !
Reading completed.
MIM>model ABRT; testdel TR wc
Test of HO: ABT,ABR against H: ABRT
Estimated number of tables: 98250064
Stratum-specific scores.
Wilcoxon Test.
W: 2668.0000 E(W|HO): 2483.0000
Asymptotic P: 0.0459
MIM>testdel TR lwm
Test of HO: ABT,ABR against H: ABRT
Exact test - monte carlo estimates.
Stratum-specific scores.
No. of tables: 1000
Likelihood Ratio Test.
LR: 10.8369 DF: 7 Asymptotic P: 0.1459
Estimated P: 0.1810 +/- 0.031364
Wilcoxon Test.
W: 2668.00 E(W|HO): 2483.00
Asymptotic P: 0.0459 Estimated P: 0.0550 +/- 0.018579
First, we compute the approximate number of tables in the reference set, and the asymptotic test. On this basis, the treatment effect is just significant at the 5% level. Since W > E(W | H0), the effect of active treatment is larger than that of the placebo, i.e., it has a better response. The number of tables in Y is estimated to be nearly 100 million, so exhaustive enumeration is not possible. We therefore use the Monte Carlo approach. It is seen that the exact p-value is in good agreement with the asymptotic value. For comparison, the ordinary G² is also calculated: this test clearly has much less power than the ordinal test.
5.11 Kruskal-Wallis Test

The Kruskal-Wallis test is used to compare discrete or continuous distributions between k populations. It presupposes a nominal (row) factor and a column variable that may be either discrete or continuous. Suppose the distribution of B given A = i, S = k is F_{i,k}(x), i.e., that F_{i,k}(x) = Pr(B ≤ x | A = i, S = k). The statistic tests homogeneity

    H0: F_{1,k}(x) = F_{2,k}(x) = ... = F_{R,k}(x), ∀x, ∀k,

against the alternative hypothesis that at least one of these distributions is stochastically larger than one of the others, i.e., that F_{i,k}(x) < F_{j,k}(x), ∀x, for some k and i ≠ j. The test statistic is

    KW = Σ_{k=1}^L γ_k Σ_{i=1}^R ( R_{ik} − n_{i+k}(n_{++k} + 1)/2 )² / n_{i+k},

where

    γ_k = 12 { n_{++k}(n_{++k} + 1) [ 1 − Σ_{j=1}^C ( n³_{+jk} − n_{+jk} ) / ( n³_{++k} − n_{++k} ) ] }^{−1}

and R_{ik} is the rank sum for row i, stratum k. Under H0, KW is asymptotically χ² distributed with L(R − 1) degrees of freedom. The test is illustrated using the hematological toxicity data shown in Table 5.2.
MIM>fact A5; cont X; read AX
DATA>1 0 1 1 1 8 1 10
DATA>2 0 2 0 2 3 2 3 2 8
DATA>3 5 3 6 3 7 3 14 3 14
DATA>4 1 4 1 4 6 4 7 4 7 4 7 4 8 4 8 4 10
DATA>5 7 5 10 5 11 5 12 5 13 !
Reading completed.
MIM>model A/AX/X; testdel AX km
Test of HO: A/X/X against H: A/AX/X
Exact test - monte carlo estimates.
No. of tables: 1000
Kruskal-Wallis Test.
KW: 9.4147 DF: 4 Asymptotic P: 0.0515
Estimated P: 0.0471 +/- 0.017255
In this example, the asymptotic and the exact pvalues match closely.
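As a cross-check of the transcript above, the tie-corrected statistic can be computed directly (our own sketch, single stratum, so that γ_k is as defined in the formula); it reproduces KW = 9.4147 for the Table 5.2 data.

```python
def kruskal_wallis(groups):
    """Tie-corrected Kruskal-Wallis statistic for one stratum."""
    pooled = sorted(x for g in groups for x in g)
    n = len(pooled)
    first = {}
    for pos, v in enumerate(pooled):
        first.setdefault(v, pos)          # first 0-based position of v
    # midrank of v = first position + (number of ties + 1) / 2
    rank = {v: first[v] + (pooled.count(v) + 1) / 2 for v in first}
    r = [sum(rank[x] for x in g) for g in groups]
    h = 12 / (n * (n + 1)) * sum(ri ** 2 / len(g)
                                 for ri, g in zip(r, groups)) - 3 * (n + 1)
    ties = sum(c ** 3 - c for c in (pooled.count(v) for v in set(pooled)))
    return h / (1 - ties / (n ** 3 - n))
```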
An example with a discrete response variable is the following (Mehta and Patel, 1991). For 18 patients undergoing chemotherapy, tumour regression was recorded and classified on an ordinal scale: no response, partial response, or complete response. The data are shown in Table 5.5, and are analyzed as follows:

MIM>fact A5B3
MIM>sread AB
DATA>2 0 0 1 1 0 3 0 0 2 2 0 1 1 4 !
Reading completed.
MIM>model AB; testdel AB ke
Test of HO: B,A against H: AB
Exact test - exhaustive calculations.
No. of tables: 2088
Kruskal-Wallis Test.
KW: 8.6824 DF: 4 Asymptotic P: 0.0695
Exact P: 0.0390
The exact p-value of 0.0390 differs somewhat from the asymptotic value.
Drug regimen    None  Partial  Complete
1                 2      0        0
2                 1      1        0
3                 3      0        0
4                 2      2        0
5                 1      1        4

TABLE 5.5. Tumour regression data. Source: Mehta and Patel (1991).
5.12 Jonckheere-Terpstra Test

The Jonckheere-Terpstra test is designed for tables in which both the row and the column variables are ordinal. For 2 x C tables, it is equivalent to the Wilcoxon test. As before, we consider a test of A ⊥⊥ B | S. Suppose the distribution of B given A = i, S = k is F_{i,k}(x), i.e., that F_{i,k}(x) = Pr(B ≤ x | A = i, S = k). The null hypothesis is homogeneity,

    H0: F_{1,k}(x) = F_{2,k}(x) = ... = F_{R,k}(x), ∀x, ∀k,

against the alternative hypothesis that these distributions are stochastically ordered, i.e., that either

    i < j => F_{i,k}(x) ≥ F_{j,k}(x), ∀x, ∀k,

or

    i < j => F_{i,k}(x) ≤ F_{j,k}(x), ∀x, ∀k.

The test statistic is

    Jt = Σ_{k=1}^L Σ_{i=2}^R Σ_{j=1}^{i−1} { Σ_{s=1}^C w_{ijsk} n_{isk} − n_{i+k}(n_{i+k} + 1)/2 },

where the w_{ijsk} are the Wilcoxon scores corresponding to a 2 x C table formed from rows i and j of the table in stratum k, i.e.,

    w_{ijsk} = Σ_{t=1}^{s−1} ( n_{itk} + n_{jtk} ) + ( n_{isk} + n_{jsk} + 1 )/2.

The mean of Jt under H0 is

    E(Jt | H0) = Σ_{k=1}^L ( n²_{++k} − Σ_{i=1}^R n²_{i+k} ) / 4.

The p-value calculated is the two-sided version:

    p = Pr( |Jt − E(Jt | H0)| ≥ |Jt_obs − E(Jt | H0)| | H0 ).

As an asymptotic test, (Jt − E(Jt | H0)) / √Var(Jt | H0) is compared with the N(0, 1) distribution, where the formidable expression cited in Pirie (1983) is used to estimate the asymptotic variance.
We give two illustrations of the test. The first concerns data reported by Norusis (1988), taken from a social survey in the United States. Table 5.6 shows a cross-classification of income and job satisfaction.
Income (US $)    Very          Little        Moderately  Very
                 dissatisfied  dissatisfied  satisfied   satisfied
< 6000                20            24           80          82
6000 - 15000          22            38          104         125
15000 - 25000         13            28           81         113
> 25000                7            18           54          92

TABLE 5.6. Cross-classification of Income and Job Satisfaction. Source: Norusis (1988).
We are interested in testing association between income and job satisfaction. This is shown in the following fragment:
MIM>fact A4B4; label A "Income" B "Job satis"; sread AB
DATA>20 24 80 82
DATA>22 38 104 125
DATA>13 28 81 113
DATA>7 18 54 92 !
Reading completed.
MIM>model AB; testdel AB ljm
Test of HO: B,A against H: AB
Exact test - monte carlo estimates.
No. of tables: 1000
Likelihood Ratio Test.
LR: 12.0369 DF: 9
Asymptotic P: 0.2112
Estimated P: 0.2110 +/- 0.033237
Jonckheere-Terpstra Test.
JT: 162647.0 E(JT|H0): 150344.5
Asymptotic P: 0.0047
Estimated P: 0.0041 +/- 0.005215
We see that the test detects highly significant association. The association is positive since Jt > E(Jt | H0), i.e., increasing satisfaction for increasing income (not surprisingly). For comparison purposes, the likelihood ratio test is also shown. This fails to find association, illustrating the greatly superior power of the ordinal tests.
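The Jt and E(Jt | H0) values in the fragment can be verified directly from Table 5.6. The following Python sketch (the helper function is ours) computes Jt for a single stratum as a sum of pairwise Mann-Whitney counts between rows, with tied pairs counting one half; the stratified statistic of the text is simply this quantity summed over strata.

```python
# Jonckheere-Terpstra statistic for one R x C table of counts, computed
# as a sum of pairwise Mann-Whitney counts between rows i < j (ties
# count one half); the null mean is (N^2 - sum_i n_i+^2) / 4.
# Applied to Table 5.6, this reproduces the MIM output above.
import numpy as np

def jonckheere(table):
    table = np.asarray(table, dtype=float)
    R, C = table.shape
    jt = 0.0
    for i in range(R):
        for j in range(i + 1, R):
            for s in range(C):
                # pairs: row-i observation in column s, row-j observation
                # in a strictly higher column ...
                jt += table[i, s] * table[j, s + 1:].sum()
                # ... and tied pairs (same column) count one half
                jt += 0.5 * table[i, s] * table[j, s]
    n_row = table.sum(axis=1)
    N = table.sum()
    mean = (N**2 - (n_row**2).sum()) / 4.0
    return jt, mean

jt, mean = jonckheere([[20, 24, 80, 82],
                       [22, 38, 104, 125],
                       [13, 28, 81, 113],
                       [7, 18, 54, 92]])
print(jt, mean)  # 162647.0 150344.5
```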
The second example concerns data reported by Everitt (1977) (see also Christensen, 1990, p. 61ff). A sample of 97 children was classified using three factors: risk of home conditions (A), classroom behaviour (B), and adversity of school conditions (C). The data are shown in Table 5.7.
We test whether classroom behaviour is independent of school conditions, given the home conditions, i.e., B ⊥⊥ C | A, using the Jonckheere-Terpstra test. Note that even though B is binary rather than ordinal, the test is still appropriate, since the alternative hypothesis here means that the
                 Adversity of          Classroom behaviour
Home conditions  school conditions     Nondeviant   Deviant
Not at risk      Low                   16           1
                 Medium                15           3
                 High                  5            1
At risk          Low                   7            1
                 Medium                34           8
                 High                  3            3
TABLE 5.7. Data on classroom behaviour. Source: Everitt (1977).
conditional probabilities

q_{i,k} = Pr(deviant behaviour | C = i, A = k)

are monotone with respect to i for each k, or in other words, that

q_{1,k} <= q_{2,k} <= q_{3,k}   (or vice versa),

for k = 1, 2.
MIM>fact A2B3C2
MIM>sread BAC
DATA>16 1 7 1 15 3 34 8 5 1 3 3 !
Reading completed.
MIM>model ABC; testdel CB jle
Test of HO: AC,AB against H: ABC
Exact test - exhaustive calculations.
No. of tables: 1260
Likelihood Ratio Test.
LR: 4.1180 DF: 4
Asymptotic P: 0.3903 Exact P: 0.5317
Jonckheere-Terpstra Test.
JT: 435.0000 E(JT|H0): 354.0000
Asymptotic P: 0.1482 Exact P: 0.0730
For comparison purposes, results from the likelihood ratio test are also shown. We see that this test, which does not use the ordinality of C, detects no evidence of association between classroom behaviour and school conditions. In contrast, the Jonckheere-Terpstra test suggests that there is slight though inconclusive evidence that classroom behaviour depends on school conditions. Since in the example Jt > E(Jt|H0), the direction of association (if any) is positive: that is, the larger the C (the more adverse the school conditions), the larger the B (the more deviant the behaviour). This is consistent with our expectation.

This concludes our description of exact tests. We now turn to some other testing problems. The framework of hierarchical interaction models subsumes some testing situations that have been studied in classical normal-based multivariate analysis. For some of these, sharp distributional results have been derived. In the next three sections, we sketch some of these results and link them to the present framework.
5.13 Tests for Variance Homogeneity

Since variance homogeneity plays an important role in inference, it is natural to focus attention on a test for homogeneity given that the means are unrestricted. An example with p = q = 2 is a test of
M0 : AB/ABX,ABY/XY versus M1 : AB/ABX,ABY/ABXY.

From (4.17) and (4.18), we find that the deviance difference is

d = Σ_j n_j ln( |S| / |S_j| ),
where, as usual, S_j = Σ_{k: i(k)=j} (y(k) − ȳ_j)(y(k) − ȳ_j)'/n_j is the MLE of Σ_j under M1, and S = Σ_j n_j S_j / N is the MLE of Σ_j = Σ under M0. Under M0, d is approximately distributed as χ²_r, where r = q(q + 1)(#I − 1)/2 and #I is the number of cells in the underlying contingency table. This approximation is rather poor for small to moderate sample sizes: generally, the test is too liberal, which is to say that it is apt to reject homogeneity too frequently. Various ways of modifying the test to improve the χ² approximation have been studied. Seber (1984, pp. 448-451) gives a useful summary. The following test is due to Box (1949). The test statistic
is

d' = (1 − c) { (N − #I) ln |S̃| − Σ_j (n_j − 1) ln |S̃_j| },

where

S̃_j = ( n_j / (n_j − 1) ) S_j   and   S̃ = ( N / (N − #I) ) S,

and the constant c is given as

c = (2q² + 3q − 1) / { 6(q + 1)(#I − 1) } × { Σ_j 1/(n_j − 1) − 1/(N − #I) }.

Under M0, d' has an asymptotic χ²_r distribution; for small samples, the approximation is far superior to that of the uncorrected deviance d. This test is the multivariate generalization of Bartlett's test for homogeneity.
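Box's corrected statistic is easy to compute directly from the formulas above. The following Python sketch is our own (the function name and the simulated data are illustrative, not from MIM); note that np.cov's divisor n_j − 1 corresponds exactly to the bias-corrected estimates S̃_j.

```python
# Box's test (Box, 1949) for covariance homogeneity across #I cells.
# The cell estimates use divisor n_j - 1 (np.cov's default), and the
# pooled estimate uses divisor N - #I, as in the formulas above.
import numpy as np
from scipy.stats import chi2

def box_test(groups):
    """groups: one (n_j x q) data array per cell; returns (d', df, p)."""
    q = groups[0].shape[1]
    k = len(groups)                                  # number of cells, #I
    n = np.array([g.shape[0] for g in groups], dtype=float)
    N = n.sum()
    S_tilde = [np.cov(g, rowvar=False) for g in groups]  # divisor n_j - 1
    S_pool = sum((nj - 1) * Sj for nj, Sj in zip(n, S_tilde)) / (N - k)
    M = (N - k) * np.log(np.linalg.det(S_pool)) \
        - sum((nj - 1) * np.log(np.linalg.det(Sj))
              for nj, Sj in zip(n, S_tilde))
    c = ((2 * q**2 + 3 * q - 1) / (6.0 * (q + 1) * (k - 1))) \
        * ((1.0 / (n - 1)).sum() - 1.0 / (N - k))
    df = q * (q + 1) * (k - 1) // 2
    d_corr = (1 - c) * M
    return d_corr, df, chi2.sf(d_corr, df)

# simulated homogeneous data: three cells, q = 2
rng = np.random.default_rng(0)
groups = [rng.standard_normal((m, 2)) for m in (30, 40, 50)]
d_corr, df, p = box_test(groups)
```

With q = 2 and three cells, d' is referred to χ² on q(q + 1)(#I − 1)/2 = 6 degrees of freedom.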
Both tests, that is, both d and d', are sensitive to nonnormality, so that rejection of M0 can be due to kurtosis rather than heterogeneity. The test is illustrated in Sections 4.1.8 and 4.1.9. There are hierarchical interaction models that are intermediate between homogeneity and (unrestricted) heterogeneity. One such model (discussed in Section 4.4) is A/AX,AY/AX,XY. In a sense, these models supply a more refined parametrization of heterogeneity. Note that by exploiting collapsibility properties, Box's test can sometimes be applied to such models. For example, a test of A/AX,AY/XY versus A/AX,AY/AX,XY can be performed as a test of A/AX/X versus A/AX/AX using Box's test.
5.14 Tests for Equality of Means Given Homogeneity

Another hypothesis that has been studied in depth is that of equality of multivariate means assuming homogeneity. An example with p = 1 and q = 3 is
M0 : A/X,Y,Z/XYZ versus M1 : A/AX,AY,AZ/XYZ,

corresponding to the removal of [AX], [AY], and [AZ] from the graph
[Figure: the graph of M1, with A joined to each of X, Y, and Z, and X, Y, Z mutually joined.]
More generally, the test concerns removal of all edges between the discrete and the continuous variables, assuming a homogeneous full model. For this test, a function of the likelihood ratio test has a known small-sample distribution called Wilks' Λ-distribution. We now relate this to the deviance statistic. Under M1, the MLEs of the cell means are the sample cell means μ̂_i = ȳ_i, and the MLE of the common covariance is the sample "within-cell" SSP, i.e.,

Σ̂_1 = Σ_k (y(k) − ȳ_{i(k)})(y(k) − ȳ_{i(k)})' / N.

Under M0, the corresponding estimates are the overall mean μ̂ = ȳ and the "total" SSP, i.e.,

Σ̂_0 = Σ_k (y(k) − ȳ)(y(k) − ȳ)' / N.

The corresponding maximized likelihoods are

ℓ_1 = Σ_i n_i ln(n_i/N) − Nq ln(2π)/2 − N ln|Σ̂_1|/2 − Nq/2

and

ℓ_0 = Σ_i n_i ln(n_i/N) − Nq ln(2π)/2 − N ln|Σ̂_0|/2 − Nq/2.

So the deviance difference is d = 2(ℓ_1 − ℓ_0) = N ln( |Σ̂_0| / |Σ̂_1| ). If we write the corresponding "between-cells" quantity as B, so that

Σ̂_0 = Σ̂_1 + B/N,

we can re-express d as

d = N ln( |Σ̂_1 + B/N| / |Σ̂_1| ) = N ln | I + Σ̂_1⁻¹ B/N |.

The quantity Σ̂_1⁻¹B is a generalization of the variance ratio from ANOVA; it tends to unity under M0.
Wilks' Λ-statistic is defined as

Λ = |Σ̂_1| / |Σ̂_0|,

and under M0 this follows a known distribution, the so-called Wilks' Λ-distribution with parameters (q, N − #A, #A − 1). Wilks' Λ is not available in MIM. The rationale is that rather than test for the simultaneous removal of all edges between the discrete and the continuous variables, it would generally seem preferable to decompose this into single edge removal F-tests.
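Although MIM does not compute Wilks' Λ, it is simple to obtain from the within-cells and total SSP matrices. A sketch in Python (our own helper, on artificial data) that also exhibits the deviance relation d = −N ln Λ:

```python
# Wilks' lambda = |W| / |T| for testing equality of cell means under
# homogeneity, where W is the within-cells SSP and T the total SSP.
# The deviance of the text is d = N * ln(|T|/|W|) = -N * ln(lambda).
import numpy as np

def wilks_lambda(groups):
    """groups: one (n_j x q) array per cell; returns (lambda, deviance)."""
    y = np.vstack(groups)
    N = y.shape[0]
    dev_tot = y - y.mean(axis=0)
    T = dev_tot.T @ dev_tot                              # total SSP
    W = sum((g - g.mean(axis=0)).T @ (g - g.mean(axis=0))
            for g in groups)                             # within-cells SSP
    lam = np.linalg.det(W) / np.linalg.det(T)
    return lam, -N * np.log(lam)

# if every cell mean equals the grand mean, W = T and lambda = 1
g = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1.0], [0.0, -1.0]])
lam0, d0 = wilks_lambda([g, g.copy()])
# shifting one cell's mean drives lambda towards 0 and d upwards
lam1, d1 = wilks_lambda([g, g + np.array([10.0, 0.0])])
print(lam0, lam1)
```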
5.15 Hotelling's T²

This is a special case of the previous section where there is one discrete binary variable. A test statistic due to Hotelling (1931) may be used to test equality of multivariate means assuming equal, unrestricted covariance matrices. This is a generalization of the univariate t-test. Consider a test of
Mo: A/X,Y,Z/XYZ versus Ml : A/AX, AY, AZ/XYZ,
where A has two levels. Under M0, we have that E(ȳ_1 − ȳ_2) = 0 and Var(ȳ_1 − ȳ_2) = Σ(n_1⁻¹ + n_2⁻¹), so we can construct Hotelling's T² statistic as

T² = (ȳ_1 − ȳ_2)' { Σ̂(n_1⁻¹ + n_2⁻¹) }⁻¹ (ȳ_1 − ȳ_2).

From the previous section, since #A = 2, we can rewrite B as

B = (n_1 n_2 / N)(ȳ_1 − ȳ_2)(ȳ_1 − ȳ_2)'.

Hence

d = N ln(1 + T²/N),

or equivalently, T² = N(e^{d/N} − 1). The test rejects the null hypothesis at the α level if

T² > { q(N − 2) / (N − q − 1) } F^α_{q,N−q−1},

where F^α_{q,N−q−1} is the (1 − α) percent point of the F-distribution with q and N − q − 1 degrees of freedom. As before, we comment that in general it seems preferable to test for the removal of the edges separately using F-tests.
6 Model Selection and Criticism
In many applications of statistics, little prior knowledge or relevant theory is available, and so model choice becomes an entirely empirical, exploratory process. Three different approaches to model selection are described in the first three sections of this chapter. The first is a stepwise method, which starts from some initial model and successively adds or removes edges until some criterion is fulfilled. The second is a more global search technique proposed by Edwards and Havranek (1985, 1987), which seeks the simplest models consistent with the data. The third method is to select the model that optimizes one of the so-called information criteria (AIC or BIC). In Section 4, a brief comparison of the three approaches is made. Section 5 describes a method to widen the scope of the CG-distribution by allowing power transformations of the continuous variables (Box and Cox, 1964). The last two sections describe techniques for checking whether the continuous variables satisfy the assumptions of multivariate normality.

We preface the chapter with some introductory remarks about model selection. Perhaps the first thing to be said is that all model selection methods should be used with caution, if not downright scepticism. Any method (or statistician) that takes a complex multivariate dataset and, from it, claims to identify one true model, is both naive and misleading. The techniques described below claim only to identify simple models consistent with the data, as judged by various criteria; this may be inadequate for various reasons. For example, if important variables have been omitted or are unobservable, the models selected may be misleading (for some related issues, see Section 1.4). Some problems seem to require multiple models for an adequate description (see the discussion of split models in Section 2.2.6), and for these, the adoption of one grand, all-embracing supramodel may be unhelpful. Finally, the purpose to which the models will be put and the scientific interpretation and relevance of the models ought to play decisive roles in the evaluation and comparison of different models.

The first two model selection approaches described here are based on significance tests; many tests may be performed in the selection process. This may be regarded as a misuse of significance testing, since the overall error properties are not related in any clear way to the error levels of the individual tests (see Section 6.4, however, for a qualification of this statement). There is also a deeper problem. In statistical modelling, we generally choose a model that fits the data well and then proceed under the assumption that the model is true. The problem with this (the problem of model uncertainty) is that the validity of most model-based inference rests on the assumption that the model has not been chosen on the basis of the data. Typically, estimators that would be unbiased under a true, fixed model are biased when model choice is data-driven. Estimates of variance generally underestimate the true variance, for example. Similarly, hypothesis tests based on models chosen from the data often have supranominal type I error rates. Chatfield (1995) gives an accessible introduction to model uncertainty. With randomised studies, the problem can be circumvented (Edwards, 1999).

To summarize: it is essential to regard model selection techniques as explorative tools rather than as truth-algorithms. In interplay with subject-matter considerations and the careful use of model control and diagnostic techniques, they may make a useful contribution to many analyses.
6.1 Stepwise Selection

This is an incremental search procedure. Starting from some initial model, edges are successively added or removed until some criterion is fulfilled. At each step, the inclusion or exclusion of eligible edges is decided using significance tests. Many variations are possible and are described in this section.

Stepwise selection is performed in MIM using the command Stepwise. The standard operation of this command is backward selection; that is to say, edges are successively removed from the initial model. At each step, the eligible edges are tested for removal using χ²-tests based on the deviance difference between successive models. The edge whose χ²-test has the largest (nonsignificant) p-value is removed. If all p-values are significant

(i.e., all p < α, where α is the critical level), then no edges are removed and the procedure stops. An edge may not be eligible for removal at a step. There can be several reasons for this: first, edges can be fixed in the model using the Fix command. For example,

MIM>mod ABCDE
MIM>fix BCD
Fixed variables: BCD
MIM>stepwise
initiates stepwise selection starting from ABCDE, in which the edges [BC], [CD], and [BD] are fixed in the model.

Secondly, in the default operation we are describing, the principle of coherence is respected. In backward selection, this just means that if the removal of an edge is rejected at one step (the associated p-value is less than the critical level α), then the edge is not subsequently eligible for removal. This rule speeds up the selection process, but this is not the only reason for observing it, as we will explain later.
A third reason for ineligibility comes into play when the procedure runs in decomposable mode. In other words, only decomposable models are considered at each step: thus, at any step, the edges whose exclusion (or inclusion, in forward selection) would result in a nondecomposable model are considered ineligible. Here is an example of the default mode of operation, using the mathematics marks data of Section 3.1.6:
MIM>model //VWXYZ
MIM>stepwise
Coherent Backward Selection
Decomposable models, chi-squared tests.
Critical value: 0.0500
Initial model: //VWXYZ
Model: //VWXYZ
Deviance: 0.0000 DF: 0 P: 1.0000
Edge      Test
Excluded  Statistic  DF  P
[VW]      10.0999    1   0.0015 +
[VX]      4.8003     1   0.0285 +
[VY]      0.0002     1   0.9880
[VZ]      0.0532     1   0.8176
[WX]      7.2286     1   0.0072 +
[WY]      0.5384     1   0.4631
[WZ]      0.0361     1   0.8494
[XY]      18.1640    1   0.0000 +
[XZ]      11.9848    1   0.0005 +
[YZ]      5.8118     1   0.0159 +
Removed edge [VY]
Model: //VWXZ,WXYZ
Deviance: 0.0002 DF: 1 P: 0.9880
Edge      Test
Excluded  Statistic  DF  P
[VZ]      0.0550     1   0.8146
[WY]      0.5960     1   0.4401
Removed edge [VZ]
Model: //VWX,WXYZ
Deviance: 0.0552 DF: 2 P: 0.9728
Edge      Test
Excluded  Statistic  DF  P
[WY]      0.5960     1   0.4401
[WZ]      0.0794     1   0.7782
Removed edge [WZ]
Model: //VWX,WXY,XYZ
Deviance: 0.1346 DF: 3 P: 0.9874
Edge      Test
Excluded  Statistic  DF  P
[WY]      0.7611     1   0.3830
Removed edge [WY]
Selected model: //VWX,XYZ
At the first step, all 10 edges are tested for removal. Of these, six are rejected at the 5% level and are marked with +'s. Since the procedure is in coherent mode, it does not subsequently try to remove these edges. The edge with the largest p-value, [VY], is removed. The formula of the resulting model, //VWXZ,WXYZ, is printed out, together with its deviance, degrees of freedom, and the associated p-value. At the second step, two edges are tested for removal. Note that [WZ], though not removed or fixed in the model at the first step, is not among the two edges tested. This is because the procedure is in decomposable mode and the removal of [WZ] would result in a nondecomposable model. However, after [VZ] is removed at the second step, [WZ] is eligible for removal and is indeed removed at the third step. After the fourth step, no further simplification is possible. The model //VWX,XYZ, whose graph is shown in Figure 3.2, is selected.
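The first step of this backward procedure is easy to emulate for a graphical Gaussian model: for a decomposable single-edge removal from the saturated model, the deviance difference is −N ln(1 − r²), where r is the sample partial correlation of the two variables given all the others. A sketch (our own helper, illustrated on simulated data rather than the mathematics marks):

```python
# One step of backward selection from the saturated graphical Gaussian
# model: each edge [ij] is tested via dev = -N * ln(1 - r^2), where r
# is the partial correlation of i and j given the remaining variables
# (read off the concentration matrix), referred to chi-squared on 1 df.
import numpy as np
from itertools import combinations
from scipy.stats import chi2

def backward_step(data):
    N, p = data.shape
    K = np.linalg.inv(np.cov(data, rowvar=False))   # concentration matrix
    tests = {}
    for i, j in combinations(range(p), 2):
        r = -K[i, j] / np.sqrt(K[i, i] * K[j, j])   # partial correlation
        dev = -N * np.log(1.0 - r**2)
        tests[(i, j)] = (dev, chi2.sf(dev, 1))
    # the removal candidate is the edge with the largest p-value
    edge = max(tests, key=lambda e: tests[e][1])
    return tests, edge

# simulated data: X2 is independent of (X0, X1); X1 depends on X0
rng = np.random.default_rng(0)
x0 = rng.standard_normal(200)
x1 = x0 + 0.5 * rng.standard_normal(200)
x2 = rng.standard_normal(200)
tests, edge = backward_step(np.column_stack([x0, x1, x2]))
```

In coherent decomposable mode, the candidate edge would be removed only if its p-value exceeds the critical level, and edges rejected at one step would not be retried later.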
6.1.1 Forward Selection
Forward selection acts by adding the most significant edges instead of removing the least significant edges. In other words, at each step, the edge with the smallest p-value, as long as this is less than the critical level, is added to the current model.
To compare backward and forward selection, note that backward stepwise methods start with a complex model that is usually consistent with the data and which is then successively simplified. So these methods step through models that are consistent with the data.
In contrast, forward selection methods start with a simple model that is usually inconsistent with the data. This is then successively enlarged until an acceptable model is reached. So forward selection methods step through models that are inconsistent with the data. This implies that in backward selection, the individual significance tests involve comparison between pairs of models where at least the larger model of the pair (the alternative hypothesis) is valid. In contrast, in forward selection, the significance tests involve pairs of models, both of which are invalid. For this reason, backward selection is generally preferred to forward selection. Despite this, the two approaches often give quite similar results. Forward selection is often attractive with sparse data, where the simple models give rise to fewer problems concerning the existence of maximum likelihood estimates and the accuracy of the asymptotic reference distributions.
We illustrate forward selection using the mathematics marks data once more. This time we start from the main effects model.

MIM>model //V,W,X,Y,Z
MIM>stepwise f
Noncoherent Forward Selection
Decomposable models, chi-squared tests.
Critical value: 0.0500
Initial model: //V,W,X,Y,Z
Model: //V,W,X,Y,Z
Deviance: 202.5151 DF: 10 P: 0.0000
Edge   Test
Added  Statistic  DF  P
[VW]   32.1776    1   0.0000 +
[VX]   31.2538    1   0.0000 +
[VY]   16.1431    1   0.0001 +
[VZ]   14.4465    1   0.0001 +
[WX]   40.8922    1   0.0000 +
[WY]   23.6083    1   0.0000 +
[WZ]   18.5964    1   0.0000 +
[XY]   61.9249    1   0.0000 +
[XZ]   51.3189    1   0.0000 +
[YZ]   40.4722    1   0.0000 +
Added edge [XY]
Model: //V,W,XY,Z
Deviance: 140.5901 DF: 9 P: 0.0000
Edge   Test
Added  Statistic  DF  P
[VW]   32.1776    1   0.0000 +
[VX]   31.2538    1   0.0000 +
[VY]   16.1431    1   0.0001 +
[VZ]   14.4465    1   0.0001 +
[WX]   40.8922    1   0.0000 +
[WY]   23.6083    1   0.0000 +
[WZ]   18.5964    1   0.0000 +
[XZ]   51.3189    1   0.0000 +
[YZ]   40.4722    1   0.0000 +
Added edge [XZ]
Model: //V,W,XY,XZ
Deviance: 89.2712 DF: 8 P: 0.0000
Edge   Test
Added  Statistic  DF  P
[VW]   32.1776    1   0.0000 +
[VX]   31.2538    1   0.0000 +
[VY]   16.1431    1   0.0001 +
[VZ]   14.4465    1   0.0001 +
[WX]   40.8922    1   0.0000 +
[WY]   23.6083    1   0.0000 +
[WZ]   18.5964    1   0.0000 +
[YZ]   5.9788     1   0.0145 +
Added edge [WX]
Model: //V,WX,XY,XZ
Deviance: 48.3789 DF: 7 P: 0.0000
Edge   Test
Added  Statistic  DF  P
[VW]   32.1776    1   0.0000 +
[VX]   31.2538    1   0.0000 +
[VY]   16.1431    1   0.0001 +
[VZ]   14.4465    1   0.0001 +
[WY]   0.7611     1   0.3830
[WZ]   0.2445     1   0.6209
[YZ]   5.9788     1   0.0145 +
Added edge [VW]
Model: //VW,WX,XY,XZ
Deviance: 16.2014 DF: 6 P: 0.0127
Edge   Test
Added  Statistic  DF  P


[VX]   9.3269     1   0.0023 +
[WY]   0.7611     1   0.3830
[WZ]   0.2445     1   0.6209
[YZ]   5.9788     1   0.0145 +
Added edge [VX]
Model: //VWX,XY,XZ
Deviance: 6.8745 DF: 5 P: 0.2301
Edge   Test
Added  Statistic  DF  P
[VY]   0.1094     1   0.7408
[VZ]   0.1481     1   0.7003
[WY]   0.7611     1   0.3830
[WZ]   0.2445     1   0.6209
[YZ]   5.9788     1   0.0145 +
Added edge [YZ]
Model: //VWX,XYZ
Deviance: 0.8957 DF: 4 P: 0.9252
Edge   Test
Added  Statistic  DF  P
[VY]   0.1094     1   0.7408
[VZ]   0.1481     1   0.7003
[WY]   0.7611     1   0.3830
[WZ]   0.2445     1   0.6209
No change.
Selected model: //VWX,XYZ
The first step consists of testing the marginal independence of all variable pairs. It is seen that all pairs are highly correlated. The edge [XY] has the smallest associated p-value (highest correlation) and so is added at the first step. At the sixth step, the same model as before, //VWX,XYZ, is selected.
6.1.2 Restricting Selection to Decomposable Models

The two examples above restricted attention to decomposable models only. Stepwise selection restricted to decomposable models was first proposed by Wermuth (1976a). Decomposable models are an attractive subclass for a variety of reasons. Ease of interpretation has been discussed in Section 4.4. Considerable efficiency gains can be achieved by using noniterative estimates and by exploiting collapsibility properties to avoid unnecessary fitting. A final consideration is that exact tests, F-tests, and sparsity-corrected degrees-of-freedom calculations are only available in decomposable mode.

The decomposable models also enjoy a connectedness property in the following sense. For any nested pair of decomposable models, there exists a
path of decomposable models from the larger to the smaller model, formed by removing a sequence of edges. In other words, there are no blind alleys to trap the stepwise process (Edwards, 1984). Consider a model that has been selected using backward selection restricted to decomposable models. Hopefully, most of the edges in the graph are present because their removal was rejected at some stage. However, there may also be edges that are retained because their removal would have led to a nondecomposable model. Edges occurring in more than one generator (see Section 5.2) may be of this type. Sometimes it may be appropriate to check whether this is the case, either by scrutinizing the output or by initiating a stepwise selection from the selected model in unrestricted mode (using the U option).
6.1.3 Using F-Tests

As described in Section 5.3, a decomposable edge deletion test between homogeneous models can be performed as an F-test, provided at least one of the nodes of the edge is continuous. This can also be exploited in stepwise selection using the S option. We illustrate this using, once again, the mathematics marks data.
MIM>mod //VWXYZ
MIM>step s
Coherent Backward Selection
Decomposable models, F-tests where appropriate.
DFs adjusted for sparsity.
Critical value: 0.0500
Initial model: //VWXYZ
Model: //VWXYZ
Deviance: 0.0000 DF: 0 P: 1.0000
Edge      Test
Excluded  Statistic  DF      P
[VW]      10.0943    1, 83   0.0021 +
[VX]      4.6533     1, 83   0.0339 +
[VY]      0.0002     1, 83   0.9883
[VZ]      0.0502     1, 83   0.8233
[WX]      7.1057     1, 83   0.0092 +
[WY]      0.5094     1, 83   0.4774
[WZ]      0.0340     1, 83   0.8541
[XY]      19.0283    1, 83   0.0000 +
[XZ]      12.1098    1, 83   0.0008 +
[YZ]      5.6667     1, 83   0.0196 +
Removed edge [VY]
Model: //VWXZ,WXYZ
Deviance: 0.0002 DF: 1 P: 0.9880
Edge      Test
Excluded  Statistic  DF      P
[VZ]      0.0525     1, 84   0.8193
[WY]      0.5708     1, 84   0.4521
Removed edge [VZ]
Model: //VWX,WXYZ

acceptance of a "false" model. Suppose that (1 − α)-level tests are employed, and consider the upwards-only version of the procedure, which starts by fitting the minimum model in the family and at each stage fits the minimal undetermined models. In other words, it first fits the main effects model, then models with one edge, then models with two edges that both were rejected at the first step, and so on. As noted by Havranek (1987) and Smith (1992), this procedure is closed in the sense of Marcus, Peritz, and Gabriel (1976), and so controls the familywise type I error rate, i.e.,
Pr( #false rejections > 0 ) <= α.

Smith (1992) compared the upwards-only and downwards-only versions of the procedure by simulation, using a variety of graphical Gaussian models and sample sizes. He found that the upwards-only version had smaller type I error than the downwards-only version, and, as expected, the proportion of simulations in which true models were falsely rejected using the upwards-only version was approximately α. Conversely, the downwards-only version had smaller type II error than the upwards-only version. Smith also considered the standard (bidirectional) version and noted that this has properties intermediate between the upwards-only and downwards-only versions. Since it is also the fastest, it seems a good compromise.
That the bidirectional version is the fastest can be seen by considering the six-dimensional contingency table analyzed in Section 2.2.4. Here, there are p(p − 1)/2 = 15 edges and hence 2^15 = 32,768 possible graphical models. The bidirectional version of the search procedure, using α = 0.05, after fitting 28 models, selected two: ACE, ADE, BC, F
and AC, ADE, BC, BE, F. It is simple to calculate that there are 768 w-accepted models and 32,000 w-rejected models, so in this example the downwards-only version would involve fitting approximately 768 models and the upwards-only version approximately 32,000 models. The figures are not precise, since different solutions may be obtained.
6.5 Box-Cox Transformations

Now we leave the topic of model selection and turn to model criticism. An ever-present concern in the application of hierarchical interaction models is the appropriateness of the conditional Gaussian assumption; this is the focus of interest in the remainder of this chapter. In the present section, we extend the usefulness of the models by embedding them in a larger family of distributions. This follows the general approach of Box and Cox (1964).

As usual, we assume we have p discrete variables, Δ, and q continuous variables, Γ, and we write the vector of continuous random variables as Y = (Y_1, ..., Y_q)'. We assume that for one continuous variable (without loss of generality we can take this to be Y_1), it is not Y_1 that is CG-distributed, but rather Z = (g_λ(Y_1), Y_2, ..., Y_q)', where g_λ(·) is the transformation
g_λ(y) = (y^λ − 1)/λ   if λ ≠ 0,
g_λ(y) = ln(y)         if λ = 0,
where λ is an unknown transformation parameter. For g_λ(y) to be well-defined, y must be positive. When λ → 0, g_λ(y) → ln(y), so that g_λ(y) is a continuous function of λ. Thus, we assume that for some unknown λ, the assumptions of the model are satisfied when Y_1 is replaced with g_λ(Y_1). We now describe a technique to find the λ for which this holds.

Since Z is CG-distributed, we know that the density of (I, Z) is given by
f(i, z) = exp( α_i + β_i'z − z'Ω_i z/2 ),    (6.1)

where α_i, β_i, and Ω_i are the canonical parameters, possibly subject to model constraints. It follows that the density of (I, Y) is given as
f(i, y) = y_1^{λ−1} f(i, z).    (6.2)
For each fixed value of λ, we transform the observed values of y_1, i.e., y_1(k) to g_λ(y_1(k)), for k = 1, ..., N, and fit the model by maximum likelihood to the transformed data. Writing the maximized log likelihood thus obtained

as ℓ_z(λ), we compare different values of λ by examining the profile log likelihood

ℓ_y(λ) = ℓ_z(λ) + (λ − 1) Σ_{k=1}^{N} ln( y_1(k) ),    (6.3)

obtained from (6.2). This is essentially a one-parameter log likelihood function that could be handled in the usual way: for example, we could in principle find the estimate of λ that maximizes it. But since we are only interested in simple transformations such as ln(y), y⁻¹, y², or √y, it is sufficient to calculate the profile log likelihood over a grid of suitable λ values. We illustrate this using the digoxin clearance data (Section 3.1.4):
MIM>model //XYZ
MIM>fit
Calculating marginal statistics...
Deviance: 0.0000 DF: 0
MIM>boxcox X -2 2 4
Box-Cox Transformation of X:

Lambda    -2*loglikelihood   -2*loglikelihood   Deviance
          (full model)       (current model)
-2.0000   770.5850           770.5850           0.0000
-1.0000   726.8188           726.8188           0.0000
 0.0000   702.9458           702.9458           0.0000
 1.0000   711.6995           711.6995           0.0000
 2.0000   750.0611           750.0611           0.0000
Values of −2ℓ_y(λ) are displayed for λ = −2, −1, 0, 1, 2. They are calculated for both the full and the current model. In this example, the current model is the full model, so the columns are identical. The minimum value is at λ = 0, indicating that a log transformation should be made.

It is sometimes useful to note that an approximate 100(1 − α)% confidence interval for λ consists of those values of λ for which the profile log likelihood is within ½χ²_{1−α,1} of its maximum. For example, to construct a 95% confidence interval, we need to know which values of λ give rise to values of −2ℓ_y(λ) within 3.8 of its minimum, using χ²_{0.95,1} = 3.8.

It should also be noted that the choice of λ is sensitive to outliers. Extreme values of y_1(k) will dominate the factor (λ − 1)Σ_{k=1}^{N} ln(y_1(k)) in Equation (6.3), often leading to unlikely estimates of λ. So if the technique suggests an implausible value of λ, the first thing to look for is the presence of outliers.
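The grid evaluation of (6.3) is straightforward to reproduce for a single normally modelled variable (the saturated case, where the deviance column vanishes). A sketch in Python, with simulated lognormal data standing in for the digoxin measurements:

```python
# Profile -2 * log-likelihood of the Box-Cox parameter over a grid,
# for a single variable modelled as normal: the normal log likelihood
# of g_lambda(y) plus the Jacobian term (lambda - 1) * sum(ln y), as
# in equation (6.3).
import numpy as np

def boxcox_grid(y, lambdas):
    y = np.asarray(y, dtype=float)
    N = len(y)
    log_y_sum = np.log(y).sum()
    out = {}
    for lam in lambdas:
        z = np.log(y) if lam == 0 else (y**lam - 1.0) / lam
        sigma2 = z.var()                        # MLE of the variance
        ll = -0.5 * N * (np.log(2 * np.pi * sigma2) + 1.0)
        ll += (lam - 1.0) * log_y_sum           # Jacobian of g_lambda
        out[lam] = -2.0 * ll
    return out

# lognormal data, for which the log transform (lambda = 0) is correct
rng = np.random.default_rng(0)
y = np.exp(rng.standard_normal(200))
profile = boxcox_grid(y, [-2, -1, 0, 1, 2])
best = min(profile, key=profile.get)
print(best)
```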
Each column can be used as the basis for choosing a value of λ. To illustrate this, we consider an example described by Box and Cox (1964). This concerns a 3 x 4 factorial experiment studying survival times (X) after treatment (A) and poison (B). The program fragment

mod AB/ABX/X; boxcox X -2 2 8
gives the following output:

Lambda    -2*loglikelihood   -2*loglikelihood   Deviance
          (full model)       (current model)
-2.0000   118.2175           149.7583           31.5409
-1.5000   115.5179           135.3324           19.8146
-1.0000   113.3732           128.1617           14.7885
-0.5000   111.8172           129.2893           17.4726
 0.0000   110.8743           138.7643           27.8900
 0.5000   110.5556           155.5891           45.0335
 1.0000   110.8574           178.2841           67.4267
 1.5000   111.7614           205.4623           93.7008
 2.0000   113.2386           236.0609           122.8224
The full model assumes normality of errors and variance heterogeneity, while the current model constrains the variances to be homogeneous. Thus, a choice of λ based on the first column will attempt to achieve normality within cells, and a choice based on the second column will attempt to achieve both normal errors and constant variances. Finally, choosing λ to minimize the deviance difference will attempt to stabilize the variances only, ignoring normality of errors. In the present example, a value of λ = −1, i.e., the reciprocal transformation, seems indicated (see Box and Cox, 1964 for a thorough discussion).
6.6 Residual Analysis

Another way to check the appropriateness of the conditional Gaussian assumption, along with other aspects of the model and the data, is to examine the residuals, i.e., the deviations between the observations and the predicted values under the model. Although residuals can be defined for discrete variables (see, for example, Christensen, 1990, p. 154 ff), we restrict attention to residuals for continuous variables only.

In this section, we examine some different types of residuals and show some simple techniques for examining residuals from different perspectives, including plots against covariates and quantile-quantile plots of the Mahalanobis distances. In Section 6.7, a rather different form of residual analysis is presented. There the focus of interest is on whether there is evidence of second-order interactions between the continuous variables.
Suppose, as usual, that there are p discrete variables and q continuous variables, and that we partition the continuous variables into q_1 responses and q_2 = q − q_1 covariates, corresponding to Γ = Γ_1 ∪ Γ_2. The covariates can be fixed by design, or we can choose to condition on them in order to focus attention on the remaining q_1 responses.
If we denote the corresponding observed random variables as (I, X, Y), then the conditional distribution of Y given I = i and X = x is multivariate normal with conditional mean E(Y | I = i, X = x) and covariance Var(Y | I = i, X = x), where these are

E(Y | I = i, X = x) = (Ω_i^{11})⁻¹( β_i^1 − Ω_i^{12} x )

and

Var(Y | I = i, X = x) = (Ω_i^{11})⁻¹.

If we define a random variable R, the "true" residual, by

R = Y − (Ω_I^{11})⁻¹( β_I^1 − Ω_I^{12} X ),    (6.4)

then since the conditional distribution of R given I = i and X = x is N(0, (Ω_i^{11})⁻¹), which does not involve x, it follows that R ⊥⊥ X | I. Moreover, if Ω_i^{11} does not depend on i, then R ⊥⊥ (I, X).
It is useful to derive the independence graph of (I, X, R) obtained when Y is transformed to R. We call this the residuals graph. Suppose, for example, that the current model is homogeneous with the graph shown in Figure 6.4. There are three continuous variables, Γ = {Y1, Y2, Y3}, and three discrete variables, Δ = {A, B, C}. Let Γ1 = Γ, i.e., we set Y = (Y1, Y2, Y3)′ and transform to R = Y − E(Y|I, X). Since the model is homogeneous, R ⊥⊥ I
FIGURE 6.4. An independence graph.
6. Model Selection and Criticism
FIGURE 6.5. The residuals graph corresponding to Figure 6.4.
and so in the derived independence graph of (I, R) shown in Figure 6.5, the discrete variables are unconnected with R = (R1, R2, R3)′. The covariance of R is the same as that of Y, so the subgraph G_Γ remains unchanged. Finally, the subgraph G_Δ shows the independence structure of the discrete variables: using collapsibility arguments, we see that we must add the edge [AC] so that the boundary of G_Γ in the original graph is made complete. For a general homogeneous model, the residuals graph is formed by completing the boundary of every connected component of G_Γ1, and removing all edges between Δ ∪ Γ2 and Γ1.

In practice, we cannot observe the true residuals R, but we calculate the observed residuals and we can informally check whether the relations shown in the residuals graph appear to hold. For example, we may plot a residual against a covariate x: if the model is homogeneous, the two should be approximately independent. If, however, the model is misspecified, and in fact the dependence on x is nonlinear, then this may be evident as a tendency to curvature in the plot.

Two types of residual are important. If we write the kth observation as (i(k), x(k), y(k)), then the (ordinary) residual for that observation is

r̂(k) = y(k) − μ̂_Y|i(k),x(k),

where μ̂_Y|i(k),x(k) is the estimate of E(Y|I = i(k), X = x(k)), obtained from fitting the current model to the data. Note that all observations are used in this fit.
If, alternatively, we exclude the kth observation when estimating the conditional mean so as to obtain a modified estimate μ̃_Y|i(k),x(k), we can use this to calculate the deletion residuals, i.e.,

r̃(k) = y(k) − μ̃_Y|i(k),x(k).
These will tend to reflect large deviations from the model more dramatically. If the deletion residual for an observation is substantially greater than the (ordinary) residual, then this suggests that the observation has a large influence on the estimates.
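The two kinds of residual can be computed directly in a simple linear-model setting. The sketch below uses ordinary least squares as a stand-in for the mixed-model fit (all names are ours, and the refit-without-each-observation loop is the naive way to obtain deletion residuals):

```python
import numpy as np

def ordinary_and_deletion_residuals(X, y):
    """Ordinary residuals use all n observations in the fit;
    deletion residuals refit with the kth observation left out."""
    n = len(y)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    ordinary = y - X @ beta
    deletion = np.empty(n)
    for k in range(n):
        keep = np.arange(n) != k        # leave observation k out
        beta_k, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        deletion[k] = y[k] - X[k] @ beta_k
    return ordinary, deletion
```

For least squares the two are related by r̃(k) = r̂(k)/(1 − h_kk), where h_kk is the leverage of observation k, so a deletion residual is never smaller in magnitude than the corresponding ordinary residual; a large ratio between them flags an influential case, as described above.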
FIGURE 6.6. A residual-covariate plot based on the mathematics marks data. The deletion residual of Analysis (Y) given V, W, X, and Z is plotted against Algebra (X). The conditional variance of Y appears to decrease with increasing X.
We can illustrate a covariate plot using the mathematics marks data. The program fragment

MIM>model //VWX,XYZ; fit
Deviance: 0.8957 DF: 4
MIM>fix VWXZ; residual -R
Fixed variables: VWXZ
calculates the deletion residuals of Analysis (Y) and stores them in R. Figure 6.6 shows a plot of these versus Algebra (X). In the fragment, the Residuals command is used to calculate the residuals and store them in R. The minus sign before R specifies that the deletion residuals are to be calculated; otherwise, the (ordinary) residuals are calculated. Variables that are fixed (using the command Fix) are treated as covariates in the calculations. In the fragment, V, W, X, and Z are treated as covariates.
Similarly, if under the model //VWX,XYZ (see Figure 3.2), we want to examine the conditional independence of Mechanics (V) and Analysis (Y) more closely, we can calculate the residuals of V and Y given W, X, and Z.

FIGURE 6.7. The residuals graph of the butterfly graph.

From the residuals graph shown in Figure 6.7, we see that the residuals should be approximately independent. We examine this by first calculating and storing the residuals as variates R and S, as shown in the following fragment:
MIM>model //VWX,XYZ; fit
Deviance: 0.8957 DF: 4
MIM>fix WXZ; resid -RS
Fixed variables: WXZ
Figure 6.8 shows a plot of R versus S: the residuals do appear to be independent. Another useful way to display the residuals is in index plots. Here, residuals are plotted against observation number: this indicates whether there were any trends in the data collection process or similar anomalies.
FIGURE 6.8. A plot of the deletion residuals of Mechanics (V) and Analysis (Y).
FIGURE 6.9. An index plot of residuals based on the mathematics marks data.
There appears to be a change after observation 52 or thereabouts. A similar finding was described in Section 1.6.4.
The following fragment generates data for the index plot shown in Figure 6.9.
MIM>model //VWX,XYZ; fit
Deviance: 0.8957 DF: 4
MIM>fix VWXZ; calc O=obs; label O "Case no"
Fixed variables: VWXZ
MIM>resid R
Another way to examine residuals is to calculate the Mahalanobis distance for each case, i.e.,

d(k) = r̂(k)′ Σ̂⁻¹ r̂(k)

for the ordinary residuals, and

d̃(k) = r̃(k)′ Σ̃⁻¹ r̃(k)

for the deletion residuals, where Σ̂ and Σ̃ are the corresponding estimates of the conditional covariance. If the model is true, then these are approximately χ² distributed with q1 degrees of freedom. The distances can, for example, be plotted against the corresponding quantiles of the χ² distribution (a QQ plot); this should be approximately linear.
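A numerical sketch of the distance computation (plain numpy/scipy, with a simple maximum-likelihood covariance estimate standing in for the model-based Σ̂; this is our own illustration, not MIM's algorithm):

```python
import numpy as np
from scipy import stats

def mahalanobis_distances(residuals):
    """Squared distances d(k) = r(k)' Sigma^{-1} r(k), with matching
    chi-square plotting quantiles for a QQ plot."""
    n, q1 = residuals.shape
    sigma = residuals.T @ residuals / n          # MLE assuming zero-mean residuals
    siginv = np.linalg.inv(sigma)
    d = np.einsum('ki,ij,kj->k', residuals, siginv, residuals)
    p = (np.arange(1, n + 1) - 0.5) / n          # plotting positions
    quantiles = stats.chi2.ppf(p, df=q1)
    return np.sort(d), quantiles
```

With this covariance estimate the distances average exactly q1; under the model, the sorted distances plotted against the χ² quantiles should be roughly linear, as described above.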
Mahalanobis distances are obtained using the command Mahalanobis, which calculates the distances and the corresponding quantiles of the χ² distribution. For example, the fragment
MIM>model //VWX,XYZ; fit; mahal mc
FIGURE 6.10. A QQ plot of Mahalanobis distances based on the mathematics marks data.
calculates the distances and the corresponding quantiles and stores them in two variates, m and c. Figure 6.10 shows a plot of m versus c.
We now turn to heterogeneous mixed models. If Ωⁱ₁₁ in Equation (6.4) does depend on i, then the variances of the residuals will depend on the discrete variables; this may distort plots of residuals against covariates. In this case, it is useful to employ standardized residuals. These are defined for univariate residuals (q1 = 1), in other words, when we focus on one response variable by conditioning on all the remaining ones. Writing ωᵢ = Ωⁱ₁₁ for the conditional precision, if we define Rs = R √ωᵢ, then we see that Rs ~ N(0, 1), so that Rs ⊥⊥ (I, X). The observed quantities are similarly defined in the obvious way as

r̂s(k) = r̂(k) √ω̂ᵢ

and

r̃s(k) = r̃(k) √ω̃ᵢ.
(We could similarly define standardized residuals for q1 > 1, but since this would involve a matrix transformation of r̂(k), they would be less interpretable in terms of the original variables and thus less suitable for the present purpose.)

Since d(k) = (r̂s(k))² and d̃(k) = (r̃s(k))², we can calculate the standardized residuals as

r̂s(k) = √d(k) sign(r̂(k))

and similarly, the deletion standardized residuals as

r̃s(k) = √d̃(k) sign(r̃(k)).
To illustrate this, we consider the lipid data (Section 4.1.10) and the model whose graph is shown in Figure 4.3. The conditional distribution of Y given the remaining variables is heterogeneous. To calculate the standardized partial residuals, we proceed as follows:
MIM>mod A/AV,AW,X,AY,Z/WXZ,VWXY,AVWY; fit; fix VWXZ
Deviance: 32.1893 DF: 24
MIM>resid R; mahal MC
MIM>calc S=sqrt(M)*((R>0)-(R<0))

6.7
Dichotomization

MIM>model ABCDE; symmtest
Test of multivariate symmetry: 40.8424 DF: 15 P: 0.0003
This indicates that the original variables are asymmetrically distributed about the sample means. We can also apply the symmetry test to the marginal distribution of each triad of variables in order to compare with the three-factor interaction approach. Table 6.2 shows the results. We would expect the symmetry test to be more powerful, since the no three-factor interaction models contain main effects and so are overparametrized. Comparing Table 6.2 with Table 6.1 confirms this clearly.

Continuous    Dichotomized    Symmetry
variables     variables       Test        P-value
V,W,X         A,B,C            3.5598     0.4688
V,W,Y         A,B,D            9.4351     0.0511
V,W,Z         A,B,E            5.3941     0.2492
V,X,Y         A,C,D           21.9078     0.0002
V,X,Z         A,C,E            5.1066     0.2765
V,Y,Z         A,D,E           12.3579     0.0149
W,X,Y         B,C,D           12.9615     0.0115
W,X,Z         B,C,E            2.2890     0.6828
W,Y,Z         B,D,E           10.6461     0.0308
X,Y,Z         C,D,E           13.2029     0.0103

TABLE 6.2. Symmetry tests for each triad of variables.

Continuous    Dichotomized    Symmetry
variables     variables       Test       P-value
V             A               2.2368     0.1348
W             B               0.1819     0.6698
X             C               0.1819     0.6698
Y             D               5.5588     0.0184
Z             E               1.6415     0.2001

TABLE 6.3. Symmetry tests for each variable.
Remarkably, each triad containing D exhibits asymmetry. If we calculate the univariate symmetry tests, we obtain Table 6.3.
This suggests that the asymmetries detected above are due to skewness in the marginal distribution of Y. We can check this by applying the symmetry test to the marginal distribution of the remaining variables, i.e., to the joint distribution of A, B, C, and E. The test statistic is 6.6581 on eight degrees of freedom, so there is no evidence of asymmetry. So it does seem that the asymmetries are simply due to skewness in the marginal distribution of Y.
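The conclusion that the asymmetry traces back to marginal skewness can also be probed with an ordinary marginal skewness test. A sketch using scipy's skewtest (the data below are simulated for illustration, not the exam-mark variables):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
skewed = rng.chisquare(df=3, size=88)   # a right-skewed sample of size 88

# H0: the population skewness is that of a normal distribution (zero).
stat, p = stats.skewtest(skewed)
```

A small p-value, as obtained here, points to marginal skewness of the kind that makes the dichotomized triads appear asymmetric.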
We now describe briefly how these methods can be applied to homogeneous mixed models. Consider such a model and suppose we wish to focus on the multivariate normal assumption for a q1-vector of responses Y, given (I, X), where X is a q2-vector of covariates. This is the framework adopted in Section 6.6, where we derived the residuals graph from the original independence graph. If we now dichotomize the variables at their conditional means, or in other words, dichotomize the residuals at zero, we can further derive the graph of (I, X, D), where D is the q1-tuple of dichotomized variables. For example, Figure 6.12 shows the dichotomized graph corresponding to Figure 6.5.
FIGURE 6.12. The dichotomized residuals graph corresponding to Figure 6.5.
To see this, note that whenever we dichotomize a variable in a separating set in a conditional independence relation, the conditional independence is not retained, as illustrated below:
[Diagram: X ⊥⊥ Y | Z in the original graph, but after dichotomizing the separating variable Z, X and Y are no longer conditionally independent given the dichotomized version of Z.]
In other words, an artificial dependence is introduced. In the notation of Section 6.6, we can in general derive the dichotomized residuals graph from the residuals graph by simply completing every connected component of G_Γ1. It is then easy to see how to apply the model checking techniques described above: we simply apply them to every connected component of the dichotomized variables.
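The completion step, making every connected component of G_Γ1 complete, is easy to automate. A small pure-Python sketch (our own helper, not MIM functionality):

```python
def complete_components(vertices, edges):
    """Return the edge set of the graph in which every connected
    component of (vertices, edges) is made complete."""
    adj = {v: set() for v in vertices}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    # find connected components by depth-first search
    seen, comps = set(), []
    for v in vertices:
        if v not in seen:
            comp, stack = set(), [v]
            while stack:
                u = stack.pop()
                if u not in comp:
                    comp.add(u)
                    stack.extend(adj[u] - comp)
            seen |= comp
            comps.append(comp)
    # complete each component: all pairs within it become edges
    completed = set()
    for comp in comps:
        cl = sorted(comp)
        for i in range(len(cl)):
            for j in range(i + 1, len(cl)):
                completed.add((cl[i], cl[j]))
    return completed
```

For the chain R1 - R2 - R3, completion adds the edge [R1 R3], exactly as the butterfly example above requires; an isolated vertex contributes no edges.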
7
Directed Graphs and Their Models
Up to now the focus of this book has been on undirected graphs and models. This chapter describes a variety of other types of independence graph and their associated models. Common to almost all of these is that some or all of the edges are drawn as arrows, indicating direction of influence or, sometimes, causal direction. The first section treats graphs with arrows only, the so-called DAGs: such graphs have a long history, starting in path analysis (Wright, 1921) and extending to categorical data (Goodman, 1973, and Wermuth and Lauritzen, 1983). The next section describes graphs of more recent origin that have both lines (i.e., undirected edges) and arrows, the so-called chain graphs. These are appropriate when the variables can be grouped into blocks, so that the variables within a block are not ordered, but there is a clear ordering between the blocks. In the remaining sections we examine more briefly some other types of graphs. These include local independence graphs, which appear to be useful in analyzing stochastic processes; covariance graphs, in which marginal rather than conditional independences are represented; and reciprocal graphs, which capture the independence properties of simultaneous equation systems.

To introduce these graphs, we can perform a small thought experiment. We suppose that we are market researchers who are studying the prospects of a new instant noodle product. We are interested in finding out who, if anyone, likes noodles, and to do this we interview a representative sample of people, recording their race, gender, and response to the question "Do you like noodles?" Let us suppose that the results are as shown in Table 7.1.
If we apply the joint loglinear models described in Chapter 2, we find that the simplest model consistent with these data is the one shown in Figure 7.1.
7. Directed Graphs and Their Models
Race     Gender    Do you like noodles?
                   No      Yes
Black    Male      86      121
         Female    32       35
White    Male      61       42
         Female    73       70

TABLE 7.1. The fictive noodles data.
But this model is obviously inappropriate. How can we suppose that race and gender are conditionally independent given the response? Surely the respondents' race and gender, characteristics determined decades before, cannot be affected by whether or not they are partial to noodles. Race and gender might be marginally independent, but they can hardly be conditionally independent given the response. The problem arises because we have not taken the ordering of the variables into account. Here race and gender are clearly prior to the response. If we reanalyse the data using directed graphs and the associated models (described below), then we obtain the graph shown in Figure 7.2. This resembles the previous graph, except that the edges are replaced by arrows pointing towards the response. As we shall see, directed graphs have different rules for derivation of conditional independence relations. Now the missing edge between race and gender means that they are marginally independent, not conditionally independent given the response.

The example illustrates another simple but very important point. This is that the arrows need not represent causal links. Here, for example, it would not be appropriate to regard gender and race as causes. They are not manipulable, in the sense that they cannot be regarded as an intervention or treatment. It is not possible to think of a given person with a different race: if "you" were of a different race, "you" would be a completely different person. It may be legitimate to call gender and race determinants of the response, if this term is understood in a purely descriptive sense (for example, indicating merely that males and females have, in general, different responses). But the term cause seems to imply a type of relation that is not appropriate here. See Chapter 8 for further discussion.

FIGURE 7.1. The undirected graph showing G ⊥⊥ R | A.
FIGURE 7.2. The directed graph showing G ⊥⊥ R.
The graph shown in Figure 7.2 has only directed edges (arrows), and so falls within the class of directed graphs described in the following section. But if we supposed that gender and race were not marginally independent, then we would be hard put to say which is prior to which. It would be natural to put them on an equal footing, by connecting them with a line, not an arrow. This would be a simple chain graph, as described in Section 7.2. Removing the line between gender and race in the chain graph setting would also lead to Figure 7.2, so this graph can also be considered a simple chain graph: the two classes of graphs intersect, and the graph shown in Figure 7.2 lies in the intersection.
7.1
Directed Acyclic Graphs

The graphs we first consider are directed, that is, contain only directed edges (drawn as arrows, not as lines). So we can again write a graph as a pair G = (V, E), where V is a set of vertices and E is a set of edges, but where now we identify edges with ordered pairs of vertices. If there is an arrow from v to w, then we write this as v → w, or equivalently as [vw] ∈ E. Note that [vw] is not the same as [wv]. If v → w or w → v, we say that v and w are adjacent and write v ∼ w. By a path we mean a sequence of vertices {v1, ..., vk} such that vi ∼ vi+1 for each i = 1, ..., k − 1. In contrast, for a directed path from v1 to vk we require that vi → vi+1 for each i = 1, ..., k − 1. When the first and last vertices coincide, i.e., v1 = vk, the directed path is called a directed cycle.
We restrict attention to directed graphs with no directed cycles. These are usually known as directed acyclic graphs, or DAGs for short. (As Andersen et al. (1997) point out, it would be more logical to call them acyclic directed graphs; but then the acronym would be difficult to pronounce. Here we keep to the conventional expression.) Figure 7.3 shows two directed graphs. The first is acyclic, but the second is not.
FIGURE 7.3. Two directed graphs. The first is a DAG, the second is not.
If v → w, then v is called a parent of w and w is called a child of v. The set of parents of w is denoted pa(w) and the set of children ch(w).
If there is a directed path from v to w, then v is called an ancestor of w and w is called a descendent of v. The set of ancestors of w is denoted an(w) and the set of descendents de(w).

These four definitions (of parents, children, ancestors, and descendents) can easily be extended to apply to sets of nodes. For example, for a set S ⊆ V we define pa(S) = {∪v∈S pa(v)} \ S, that is to say, as the set of nodes not in S that are parent to a node in S. The other definitions are extended similarly. Furthermore, we define an+(S) = S ∪ an(S) to be the ancestral set of S.

It is not difficult to show that the absence of any directed cycles is equivalent to the existence of an ordering of the nodes {v1, ..., vn} such that vi → vj only when i < j. In other words, there exists a numbering of the nodes so that arrows point only from lower-numbered nodes to higher-numbered nodes. Of course, the numbering is not necessarily unique. A DAG with n nodes and no edges is compatible with all n! orderings, and a complete DAG is compatible with only one. The first DAG in Figure 7.3 is compatible with one ordering, namely A → B → C → D. Figure 7.2 is compatible with two orderings, G → R → A and R → G → A.

Although from a graph-theoretic point of view the natural perspective is to consider which orderings are compatible with a given DAG, from the perspective of an applied modeller the natural starting point is an a priori ordering of the variables. So we assume that subject-matter knowledge tells us that the variables can be labelled v1, ..., vn such that vi is prior to vi+1 for i = 1, ..., n − 1. Corresponding to this ordering, we can factorize the joint density of {v1, ..., vn} as
f(v1, ..., vn) = f(v1) f(v2|v1) ··· f(vn|v1, ..., vn−1).    (7.1)

In constructing a DAG, an arrow is drawn from vi to vj, where i < j, unless f(vj|vj−1 ... v1) does not depend on vi, in other words, unless

vi ⊥⊥ vj | {v1, ..., vj} \ {vi, vj}.    (7.2)
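The equivalence between acyclicity and the existence of such a numbering can be checked constructively: Kahn's algorithm either produces a compatible ordering or certifies that a directed cycle exists. A sketch (our own helper, with hypothetical vertex names in the example):

```python
from collections import deque

def topological_order(vertices, arrows):
    """Return an ordering in which every arrow points from an earlier
    to a later node, or None if the graph contains a directed cycle."""
    indegree = {v: 0 for v in vertices}
    children = {v: [] for v in vertices}
    for u, v in arrows:               # arrow u -> v
        indegree[v] += 1
        children[u].append(v)
    queue = deque(sorted(v for v in vertices if indegree[v] == 0))
    order = []
    while queue:
        u = queue.popleft()
        order.append(u)
        for w in children[u]:
            indegree[w] -= 1
            if indegree[w] == 0:
                queue.append(w)
    # if some nodes were never freed, a directed cycle remains
    return order if len(order) == len(vertices) else None
```

A complete DAG on A, B, C, D with all arrows pointing "forwards" yields the single ordering A, B, C, D, matching the remark about Figure 7.3 above.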
This is the key difference between DAGs and undirected graphs. In both types of graph a missing edge between vi and vj is equivalent to a conditional independence relation between vi and vj; in undirected graphs, they are conditionally independent given all the remaining variables, whereas in DAGs, they are conditionally independent given all prior variables. Thus in Figure 7.2 the missing arrow between G and R means that G ⊥⊥ R, not that G ⊥⊥ R | A.
Having constructed the DAG from (7.2), we can rewrite the joint density (7.1) more elegantly as

∏v∈V f(v | pa(v))    (7.3)

and the pairwise conditional independence relations corresponding to a missing arrow between vi and vj as

vi ⊥⊥ vj | an({vi, vj}).    (7.4)

These expressions do not make use of any specific vertex ordering.
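The factorization (7.3) is directly computable once each f(v | pa(v)) is tabulated. A sketch for the noodles DAG G → A ← R of Figure 7.2 (the probability tables below are invented purely for illustration):

```python
# Hypothetical tables: P(gender), P(race), and P(answer | gender, race).
p_gender = {"male": 0.5, "female": 0.5}
p_race = {"black": 0.4, "white": 0.6}
p_answer = {
    ("male", "black"): {"yes": 0.7, "no": 0.3},
    ("male", "white"): {"yes": 0.4, "no": 0.6},
    ("female", "black"): {"yes": 0.5, "no": 0.5},
    ("female", "white"): {"yes": 0.6, "no": 0.4},
}

def joint(g, r, a):
    """Factorization (7.3): the product of f(v | pa(v)) over all vertices.
    Gender and race have no parents; the answer has parents {G, R}."""
    return p_gender[g] * p_race[r] * p_answer[(g, r)][a]
```

Because each conditional table sums to one, the joint probabilities automatically sum to one over all configurations, which is a quick sanity check on any such specification.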
7.1.1
Markov Properties of DAGs

Markov properties of directed acyclic graphs have been the subject of much recent research, including Kiiveri et al. (1984), Pearl and Paz (1986), Pearl and Verma (1987), Smith (1989), Geiger and Pearl (1988, 1993), and Lauritzen et al. (1990).
Up to now we have only used DAGs to represent pairwise independences, as in (7.2): this is the DAG version of the pairwise Markov property. The natural question arises: Can we deduce any stronger conditional independence relations from a DAG? In other words, is there an equivalent of the global Markov property for DAGs? For example, in Figure 7.4 there is no arrow from B to D. The pairwise Markov property states that B ⊥⊥ D | {A, C}; but does it also hold that B ⊥⊥ D | C? Intuitively, this would seem likely. For undirected graphs, we saw that a simple criterion of separation in the graph-theoretic sense was equivalent to conditional independence in the statistical sense. A similar result is true of DAGs, though the graph-theoretic property, usually called d-separation, is alas somewhat more difficult to grasp.
There are actually two different formulations of the criterion. The original formulation is due to Pearl (1986a, 1986b) and Verma and Pearl (1990a, 1990b); shortly after, Lauritzen et al. (1990) gave an alternative formulation. We give both here, since each has its own advantages. The later one is conceptually simpler, but the original one builds on concepts that are useful in other contexts (for example, see Section 8.3.3).

FIGURE 7.4. A simple DAG.

FIGURE 7.5. (a) shows a DAG G and (b) shows its moral graph G^m, which is formed by marrying parents in G and then deleting directions. In G we see that pa(C) = {A, B} and pa(F) = {B, D, E}.
We first look at the later version of the criterion. To do this, we need to define something called a moral graph. Given a DAG G = (V, E), we construct an undirected graph G^m by marrying parents and deleting directions, that is,
1. For each v ∈ V, we connect all vertices in pa(v) with lines.
2. We replace all arrows in E with lines.
We call G^m the moral graph corresponding to G. Figure 7.5 shows a DAG and its moral graph. Now suppose that we want to check whether vi ⊥⊥ vj | S for some set S ⊆ V. We do this in two steps. The first step is to consider the ancestral set of {vi, vj} ∪ S (see Section 7.1), that is, an+({vi, vj} ∪ S) = A, say. From (7.3), since for v ∈ A, pa(v) ⊆ A, we know that the joint distribution of A is given by
∏v∈A f(v | pa(v)),    (7.5)

which corresponds to the subgraph G_A of G. This is a product of factors f(v | pa(v)), that is, involving the variables v ∪ pa(v) only; each such set is complete in the moral graph G_A^m. So the density factorizes according to G_A^m, and thus the global Markov properties for undirected graphs (see Section 1.3) apply. So, if S separates vi and vj in G_A^m, then vi ⊥⊥ vj | S. This is the required criterion.

We illustrate its application to Figure 7.5(a). Suppose we want to know whether C ⊥⊥ F | D under this graph. To do this, we first form in Figure 7.6(a) the subgraph G_A corresponding to A = {A, B, C, D, E, F}, and then in Figure 7.6(b) its moral graph. In (b), D does not separate C from F, so we cannot conclude that C ⊥⊥ F | D.
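The two-step procedure, ancestral set followed by moralization and ordinary separation, can be coded compactly. A sketch (our own helper; the small DAGs in the checks at the end are hypothetical examples, not the figures in the text):

```python
def d_separated(vertices, arrows, x, y, S):
    """Does the DAG imply x independent of y given S? Checked via the
    moralization criterion: restrict to an+({x, y} + S), marry parents,
    drop directions, and test whether S separates x from y."""
    parents = {v: set() for v in vertices}
    for u, v in arrows:                       # arrow u -> v
        parents[v].add(u)
    # 1. ancestral set A = an+({x, y} union S)
    A, stack = set(), [x, y, *S]
    while stack:
        v = stack.pop()
        if v not in A:
            A.add(v)
            stack.extend(parents[v])
    # 2. moral graph on A: child-parent lines plus marriages between parents
    adj = {v: set() for v in A}
    for v in A:
        for p in parents[v]:
            adj[v].add(p)
            adj[p].add(v)
        for p in parents[v]:
            for q in parents[v]:
                if p != q:
                    adj[p].add(q)
    # 3. ordinary separation: can x reach y while avoiding S?
    seen, stack = set(S), [x]
    while stack:
        v = stack.pop()
        if v == y:
            return False
        if v not in seen:
            seen.add(v)
            stack.extend(adj[v] - seen)
    return True
```

On the collider X → Z ← Y this reproduces the behaviour described in the next subsection: X and Y are d-separated marginally, but not given Z, nor given a descendent of Z.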
FIGURE 7.6. Applying the d-separation criterion.
The criterion is easily extended to sets of variables, in the following sense. The directed version of the global Markov property states that for three disjoint sets S1, S2, and S3, S1 ⊥⊥ S2 | S3 whenever S3 separates S1 and S2 in G_A^m, where A = an+(S1 ∪ S2 ∪ S3).
The pairwise and global Markov properties are equivalent under very general conditions (see Lauritzen, 1996, p. 51). In other words, when we construct a DAG using the pairwise properties, the criterion can be used to derive stronger conditional independence relations. Furthermore, all such conditional independencies can be derived. That is to say, all conditional independencies that hold for all densities in the model can be derived using the criterion. (There may be specific densities that obey additional conditional independencies not implied by the graph.)

We now turn to the original version of the criterion. This focusses on individual paths between vertices. In undirected graphs, the existence of a path between V and W, say, indicates that they are marginally dependent. If we are interested in the conditional dependence of V and W given a set S, then if the path does not contain a node in S, it (still) indicates conditional dependence. If it does contain such a node, then it is not clear whether conditional independence applies or not. However, if all paths between V and W contain a node in S, then S separates V and W, so that these are conditionally independent given S.

A similar argument applies to DAGs, though here it is crucial to distinguish a certain type of configuration on a path. We call a node on a path a collider if it has converging arrows. Consider the DAGs shown in Figure 7.7: both have paths from V to W. We examine various (in)dependence relations between V and W that are associated with these paths, keeping in mind that when these graphs are imbedded in larger graphs, the independences we find here may vanish, but the dependences will still hold. In Figure 7.7(a), the path contains no colliders, and we have that V and W are marginally dependent, but that V ⊥⊥ W | X and V ⊥⊥ W | Y. We can say that the path indicates that V and W are marginally dependent, but that the path can be blocked by conditioning on the noncolliders X or Y.
FIGURE 7.7. Two DAGs showing a path between V and W. In (a), there are no colliders; in (b), there is one collider (X).
In Figure 7.7(b) the opposite is true. The path contains a collider, and we have that V ⊥⊥ W, but that V and W are dependent given X and given Y. So the path does not indicate marginal dependence, since it contains a collider; however, if we condition on the collider or on a descendent of the collider, the path does indicate dependence between V and W.

Putting these ideas together, we say that a path between V and W can be active or blocked. Being active means that it indicates a dependence between V and W. A path is blocked if either (i) it has a noncollider that is conditioned on, or (ii) it has a collider that is not conditioned on (and none of its descendents are conditioned on either).

We are now ready to state the d-separation criterion in its original formulation (Pearl, 1986a, 1986b; Verma and Pearl, 1990a, 1990b). We seek to define the d-separation of sets S1 and S2 by S3. We consider paths between a vertex in S1 and a vertex in S2. We say that S3 blocks such a path if either (i) the path has a noncollider, say x, such that x ∈ S3, or (ii) the path has a collider, say y, such that y ∉ S3 and de(y) ∩ S3 = ∅. The criterion states that S3 d-separates S1 and S2 if S3 blocks all paths between S1 and S2.
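The behaviour at a collider can be seen numerically: simulate V and W independently, let X be nearly their sum, and compare the marginal correlation of V and W with their partial correlation given X. (A simulation sketch; nothing here comes from the data sets in this chapter.)

```python
import numpy as np

rng = np.random.default_rng(4)
v = rng.normal(size=5000)
w = rng.normal(size=5000)
x = v + w + 0.1 * rng.normal(size=5000)    # X is a collider: V -> X <- W

r_marginal = np.corrcoef(v, w)[0, 1]       # near zero: V, W independent

# Residualize V and W on X, then correlate: the partial correlation given X.
res_v = v - np.polyval(np.polyfit(x, v, 1), x)
res_w = w - np.polyval(np.polyfit(x, w, 1), x)
r_partial = np.corrcoef(res_v, res_w)[0, 1]
```

Conditioning on the collider induces a strong negative association between V and W, exactly the "activation" of the path described above.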
7.1.2
Modelling with DAGs

In a sense, modelling with DAGs is quite straightforward. Since the conditional densities f(vj|vj−1 ... v1) can be specified freely, any appropriate univariate response models can be used. That is to say, for each j, we can model the dependence of vj on the prior variables v1 ... vj−1 using any model in which v1 ... vj−1 are included as covariates; if only a subset are included, then vj depends only on this subset. There is, of course, a huge variety of univariate response models that may be applied: for example, generalized linear models (McCullagh and Nelder, 1989) and regression models for ordinal response variables (Agresti, 1984). Different types of model may be used for each step. Furthermore, standard techniques from univariate models for model criticism, residual analysis, and so on can be applied. This makes for great flexibility of modelling, but it also makes it impossible to cover the possibilities in a book such as this: the class of possible models is huge.
One aspect deserves emphasis. This is that the choice of model at each step is quite independent (logically and statistically) of the choice of model at all other steps. In contrast to undirected graphs, in which the decision whether or not to include an edge depends on which other edges are present or absent, with DAGs the ordering of the variables introduces a great simplification. Here the decision to include an arrow v → w depends only on the presence or absence of other edges pointing at w; the presence or absence of all other edges in the DAG is quite immaterial. In the following section we illustrate the use of DAGs to model a study involving longitudinal measurements of a discrete, ordinal response.
7.1.3
Example: Side Effects of Neuroleptics

Lingjærde et al. (1987) describe a randomised, double-blind parallel study comparing two antidepressant drugs. Fifty patients were treated with each drug. The patients were assessed at the end of a placebo washout period prior to active treatment, and again after one, two, and three weeks on the active drug. The present analysis concerns a single rating scale item relating to severity of a side effect. The severity levels are: not present, mild, moderate, and severe. The data form a contingency table with 2 × 4⁴ = 512 cells.
This example illustrates several aspects that occur quite frequently in contingency table analysis. Firstly, there is the clear temporal sequence of the measurements, which we address by adopting the DAG modelling approach just described. Secondly, the underlying contingency table is very sparse, with only 100 observations distributed over 512 cells, so asymptotic tests cannot be used. Thirdly, the response is ordinal rather than nominal; that is to say, there is a natural ordering of the categories: not present, mild, moderate, and severe, so we should use tests sensitive to ordinal alternatives.
Since only discrete variables are involved, joint loglinear models can be used to model the conditional distributions. This is explained in more detail in Section 4.2. In effect, to condition on a set of variables in a loglinear model, all interactions between these variables must be included in any model considered. So for f(vj|vj−1 ... v1) we apply a model to the marginal table for {v1, ..., vj} that includes all interactions between {v1, ..., vj−1}. We do this first for j = 2, then for j = 3, and so on.
We return to the example to illustrate this process. The variables are labelled as follows: treatment group (G), baseline score (A), score after one
7. Directed Graphs and Their Models
week (B), score after two weeks (C), and score after three weeks (D). Since patients were allocated at random to treatment group after the baseline period, A is prior to G, so the order of the variables is {A, G, B, C, D}. First we define the variables and read in the data:
MIM>fact A4B4C4D4G2
MIM>label G "Drug" A "Baseline" B "Week 1" C "Week 2" D "Week 3"
MIM>setblocks A|G|B|C|D
MIM>read GDCBA
1 1 1 1 111 122 1 1 1 111 1 1 1 1 1 1 111 1 112 1 1 1 1 1 1 1 112 1 1 2 122 1
1 1 1 1 1 1 1 1 1 1 223 1 1 1 1 1 1 1 1 1 1 1 122 1 1 1 1 1 1 111 111 142 1 1
1 1 1 1 1 1 1 1 1 1 1 1 1 211 1 3 1 1 1 1 1 1 1 1 1 1 1 1 131 1 1 123 1 1 1 2
3 3 1 1 3 1 1 1 1 1 2 3 2 1 1 1 1 1 1 2 2 2 1 1 1 1 1 1 1 1 1 1 2 111 111 1 1
1 2 1 2 121 1 1 111 111 1 1 1 1 1 1 1 1 1 1 211 1 111 1 1 1 1 1 1 122 1 1 1 1
1 2 1 2 1 1 1 1 1 1 1 1 1 121 1 133 2 1 1 1 1 1 1 1 1 1 1 122 2 3 2 2 1 1 1
121 2 3 2 2 3 3 3 1 2 2 2 2 1 2 122 1 2 2 3 3 1 2 1 2 2 2 233 3 3 2 1 3 2 122
1 1 121 1 2 2 2 2 121 2 1 1 1 122 1 1 1 2 1 1 1 1 2 2 2 2 123 3 2 1 2 2 222 2
1 1 1 121 1 1 121 2 1 1 2 1 3 1 2 2 3 3 3 1 2 1 1 1 121 1 1 1 2 2 331 2 1 3 1
122 1 2 1 2 222 1 223 1 122 1 332 331 121 1 1 123 3 232 1 1 1 12112 1 2 2 3 2
2 2 1 2 2 122 2 4 1 2 122 1 222 2 1 2 2 2 2 1 2 233 223 1 1 223 1 1 123 2 3 2
2 3 2 222 1 1 1 1 22111 !
MIM>satmod
The SetBlocks command defines the order of the variables, and turns block mode on. The SatModel command, in block mode, sets the current model to the full DAG shown in Figure 7.8. The first step involves modelling the dependence of the treatment allocation G on the pretreatment score A, but since we know that the allocation was random, we know that G ⊥⊥ A, and so we can delete the edge [AG]:

MIM>Delete AG
We proceed, therefore, to analyze the dependence of the first week score (B) on G and A. Since the table is very sparse and the response is ordinal, we
FIGURE 7.8. The full model.

7.1. Directed Acyclic Graphs
base our analysis on exact tests for conditional independence that are sensitive to ordinal alternatives. Two tests in particular are used. For tests of the type X ⊥⊥ Y | Z, where X is binary and Y ordinal, we use the stratified Wilcoxon test, and where both X and Y are ordinal, we use the stratified Jonckheere-Terpstra test (Section 5.12). For comparison purposes, the standard likelihood ratio tests are also computed. Both asymptotic and exact conditional tests are shown.
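In outline, such a stratified rank test with a Monte Carlo p-value can be sketched as a within-stratum permutation test (an illustrative approximation, not MIM's exact conditional algorithm; all names and data are invented):

```python
import numpy as np

def stratified_wilcoxon_mc(y, group, stratum, n_tables=1000, seed=0):
    """Monte Carlo p-value for a stratified Wilcoxon test of X _||_ Y | Z.

    group (X) is binary (0/1), y (Y) is ordinal, stratum (Z) is discrete.
    The statistic is the sum over strata of the midrank sum of the group-1
    observations; group labels are permuted within strata only, which
    respects the conditioning on Z.
    """
    rng = np.random.default_rng(seed)
    y, group, stratum = map(np.asarray, (y, group, stratum))
    strata = [np.flatnonzero(stratum == s) for s in np.unique(stratum)]

    def statistic(g):
        total = 0.0
        for idx in strata:
            vals = y[idx]
            ranks = np.empty(len(idx))
            for v in np.unique(vals):           # midranks, to handle the heavy
                tie = vals == v                 # ties typical of ordinal data
                ranks[tie] = np.sum(vals < v) + 1 + (tie.sum() - 1) / 2.0
            total += ranks[g[idx] == 1].sum()
        return total

    obs = statistic(group)
    sims = np.empty(n_tables)
    for b in range(n_tables):
        g = group.copy()
        for idx in strata:
            g[idx] = rng.permutation(g[idx])
        sims[b] = statistic(g)
    centre = sims.mean()                        # estimates E(W | H0)
    p = np.mean(np.abs(sims - centre) >= np.abs(obs - centre))
    return obs, p

# Group 1 scores higher in both strata, so the association is clear:
y       = [1, 2, 3, 4, 5, 6,  1, 2, 3, 4, 5, 6]
group   = [0, 0, 0, 1, 1, 1,  0, 0, 0, 1, 1, 1]
stratum = [0, 0, 0, 0, 0, 0,  1, 1, 1, 1, 1, 1]
w, p = stratified_wilcoxon_mc(y, group, stratum, n_tables=200)
print(w, p)  # W = 30.0, with a small Monte Carlo p-value
```

The same skeleton gives a stratified Jonckheere-Terpstra test by swapping in the Mann-Whitney-style statistic summed over ordered group pairs.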
MIM>TestDelete AB ljm
Test of H0: BG,AG against H: ABG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 12.6779 DF: 8 Asymptotic P: 0.1234
Estimated P: 0.1358 +/- 0.021234
Jonckheere-Terpstra Test.
JT: 555.5000 E(JT|H0): 400.5000 Asymptotic P: 0.0022
Estimated P: 0.0016 +/- 0.002503
MIM>TestDelete GB lwm
Test of H0: AG,AB against H: ABG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 9.4349 DF: 5 Asymptotic P: 0.0929
Estimated P: 0.1442 +/- 0.021770
Wilcoxon Test.
W: 1600.0000 E(W|H0): 1836.0000 Asymptotic P: 0.0070
Estimated P: 0.0102 +/- 0.006222
First, the deletion of edge [AB], corresponding to a test of A ⊥⊥ B | G, is examined. Since both A and B are ordinal, a stratified Jonckheere-Terpstra test is used. We see that this test rejects the null hypothesis very decisively, with an asymptotic p-value of 0.0022. In contrast, the standard likelihood ratio test, which does not exploit ordinality, does not detect any association.
Next, the deletion of edge [GB] is examined. Since G is binary and B is ordinal, we use the stratified Wilcoxon test. Similar results are obtained. Neither [AB] nor [GB] can be deleted.
In the next stage of the analysis, we examine the dependence of the score after two weeks' treatment (C) on G, A, and B:
MIM>TestDelete BC ljm
Test of H0: ACG,ABG against H: ABCG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 48.3514 DF: 17 Asymptotic P: 0.0001
Estimated P: 0.0000 +/- 0.000000
Jonckheere-Terpstra Test.
JT: 579.0000 E(JT|H0): 400.0000 Asymptotic P: 0.0000
Estimated P: 0.0000 +/- 0.000000
MIM>TestDelete AC ljm
Test of H0: BCG,ABG against H: ABCG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 18.8328 DF: 15 Asymptotic P: 0.2214
Estimated P: 0.3988 +/- 0.030348
Jonckheere-Terpstra Test.
JT: 131.0000 E(JT|H0): 135.5000 Asymptotic P: 0.7898
Estimated P: 0.8310 +/- 0.023227
MIM>Delete AC
MIM>TestDelete GC lwm
Test of H0: BC,ABG against H: BCG,ABG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 11.2178 DF: 6 Asymptotic P: 0.0819
Estimated P: 0.1527 +/- 0.022293
Wilcoxon Test.
W: 1148.5000 E(W|H0): 1244.0000 Asymptotic P: 0.0327
Estimated P: 0.0384 +/- 0.011913
The deletion of the edge [BC] is tested first and is rejected strongly by both the likelihood ratio and the ordinal test. The deletion of the next edge, [AC], is then tested and accepted by both tests. The edge [AC] is then deleted, and the deletion of the third edge, [GC], is tested. The ordinal test rejects the deletion with an estimated p-value of 0.0384. The order in which the edges are examined is not arbitrary; the possible effect of treatment on the response is of primary interest. By choosing to test for the deletion of [GC] last, we seek to maximize the power for this
FIGURE 7.9. After the second stage.
test. After [AC] is deleted, the test for the deletion of [GC] becomes a test of G ⊥⊥ C | B rather than of G ⊥⊥ C | (A, B), with a consequent increase in power. We have now arrived at the model shown in Figure 7.9.
In the third stage, we examine the dependence of the score after three weeks' treatment (D) on the treatment group G, and the previous scores A, B, and C.
MIM>TestDelete CD ljm
Test of H0: ABDG,ABCG against H: ABCDG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 38.0929 DF: 26 Asymptotic P: 0.0594
Estimated P: 0.1498 +/- 0.022122
Jonckheere-Terpstra Test.
JT: 193.5000 E(JT|H0): 143.0000 Asymptotic P: 0.0037
Estimated P: 0.0093 +/- 0.005955
MIM>TestDelete BD ljm
Test of H0: ACDG,ABCG against H: ABCDG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 28.6759 DF: 24 Asymptotic P: 0.2326
Estimated P: 0.9370 +/- 0.015059
Jonckheere-Terpstra Test.
JT: 156.0000 E(JT|H0): 133.0000 Asymptotic P: 0.1198
Estimated P: 0.1224 +/- 0.020317
MIM>Delete BD
MIM>TestDelete AD ljm
Test of H0: CDG,ABCG against H: ACDG,ABCG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 20.3345 DF: 14 Asymptotic P: 0.1199
Estimated P: 0.1940 +/- 0.024508
Jonckheere-Terpstra Test.
JT: 194.5000 E(JT|H0): 170.0000 Asymptotic P: 0.2111
Estimated P: 0.2132 +/- 0.025385
MIM>Delete AD
MIM>TestDelete GD lwm
Test of H0: CD,ABCG against H: CDG,ABCG
Exact test - monte carlo estimates.
Stratum-specific scores. No. of tables: 1000
Likelihood Ratio Test.
LR: 10.1274 DF: 7 Asymptotic P: 0.1815
Estimated P: 0.3129 +/- 0.028739
Wilcoxon Test.
W: 1183.5000 E(W|H0): 1292.0000 Asymptotic P: 0.0237
Estimated P: 0.0195 +/- 0.008563
So [BD] and [AD] are deleted, and [CD] and [GD] are retained. We have arrived at the DAG shown in Figure 7.10. This graph has a particularly simple structure. If we rename A, B, C, and D as R_0, R_1, R_2, and R_3, then the DAG can be characterized by (i) the marginal independence of G and R_0, and (ii) the transition probabilities Pr{R_t = r | (R_{t-1} = s, G = g)} for t = 1, 2, 3. An obvious question is whether these transition probabilities are constant over time. The present form of the data is (g^(i), r_0^(i), r_1^(i), r_2^(i), r_3^(i))
FIGURE 7.10. The final DAG.


7.2. Chain Graphs
FIGURE 7.11. Time-homogeneity as a graph.
for i = 1, ..., 100. To test the time-homogeneity hypothesis, we need to transform the data to the form (g^(i), r_{t-1}^(i), r_t^(i), t)
for t = 1, 2, 3 and for i = 1, ..., 100. Suppose we have done this and the variables are denoted G (treatment group, as before), Q (i.e., R_{t-1}), R (i.e., R_t), and T (i.e., time = 1, 2, 3). In this framework, time-homogeneity can also be formulated ...

MIM>fact A2B2C2D2
MIM>label A "Member 1" B "Attitude 1" C "Member 2" D "Attitude 2"
MIM>sread ABCD
MIM>458 140 110 49 171 182 56 87 184 75 531 281 85 97 338 554 !
MIM>setblocks AB|CD; satmod; step
shows how the block structure is set up using the SetBlocks command. Stepwise model selection is then initiated, starting from the saturated model. The selected model is shown in Figure 7.16. The interpretation of the model is that
1. Membership and attitude at the first interview are associated.
2. Membership at the second interview is affected both by membership at the first interview and by attitude at the first interview.
FIGURE 7.16. The Leading Crowd.
3. Attitude at the second interview is affected by concurrent membership and previous attitude.

Further examples of modelling with chain graphs are to be found in Cox and Wermuth (1996), Mohamed et al. (1998), Neil-Dwyer et al. (1998), Caputo et al. (1999), Ruggeri et al. (1998), and Stanghellini et al. (1999).
7.3
Local Independence Graphs

Broadly speaking, the undirected graphs, DAGs, and chain graphs described above are suitable for different kinds of data. Undirected graphs are suitable for cross-sectional data, and represent patterns of symmetric associations. In contrast, DAGs represent directed associations. So when variables represent short, non-overlapping events, it is natural to assume that the direction of influence flows from the earlier events to the later. Chain graphs combine these types of association, and similar remarks apply. Variables are regarded as either concurrent or ordered; the former are connected with lines, and the latter with arrows. So DAGs and chain graphs can incorporate the time dimension, but only when this is discretized. It is unclear whether graphs are able to represent continuous time systems, in which several processes may be in interplay through time; in such a system, one process may both influence and be influenced by another process. In this section we give a brief sketch of some recent work by Didelez (1999) that sheds some light on this question. This work, based on previous results of Schweder (1970) and Aalen (1987), applies the ideas of graphical modelling to multivariate stochastic processes. To avoid introducing the formidable mathematical machinery necessary for a proper treatment, we restrict ourselves to a simple case. We are interested in modelling a Markov process Y(t) = (Y_1(t), ..., Y_K(t)), where t is time and where the Y_i(t) take on a finite number of values: here
FIGURE 7.17. A two-dimensional binary Markov process.
we suppose that their state spaces are {0, 1}, i.e., Y_i(t) ∈ {0, 1} for all i and t. Figure 7.17 represents a realisation of a two-dimensional binary Markov process. This could represent, for example, the time-course of a recurrent illness and the presence of a virus implicated in the illness.
We assume the process is governed by transition intensities
h(y; y*; t) = lim_{δt→0} Pr(Y(t + δt) = y* | Y(t) = y) / δt

for y ≠ y*: these represent, as it were, the instantaneous probabilities of changes of state. We need to assume that the probability of simultaneous transitions is zero: that is, when y and y* differ in more than one component, h(y; y*; t) = 0 for all t. Such a process is called composable (Schweder, 1970).
Composable Markov processes are governed by the component transition intensities
h_j(y; y*_j; t) = h(y; y*; t)

for j = 1, ..., K, where y* is y with the jth component replaced by y*_j (i.e., y*_i = y_i for i ≠ j). For example, a two-dimensional composable process has possible states {(0,0), (0,1), (1,0), (1,1)}, and is governed by the eight transition intensities
(0,0) → (1,0): h_1(0,0; 1; t)
(0,0) → (0,1): h_2(0,0; 1; t)
(1,0) → (0,0): h_1(1,0; 0; t)
(1,0) → (1,1): h_2(1,0; 1; t)
(0,1) → (1,1): h_1(0,1; 1; t)
(0,1) → (0,0): h_2(0,1; 0; t)
(1,1) → (0,1): h_1(1,1; 0; t)
(1,1) → (1,0): h_2(1,1; 0; t)
since all the other transition intensities are zero.
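A composable process of this kind is easy to simulate from its component intensities; the following sketch (hypothetical code, not from the text) uses the standard competing-exponential-clocks construction, so that exactly one component flips at each jump:

```python
import random

def simulate_composable(h1, h2, y0=(0, 0), t_end=10.0, seed=1):
    """Simulate a two-component binary composable Markov process.

    h1(y, t) and h2(y, t) are the intensities for flipping component 1 and 2
    in state y.  Because simultaneous transitions have intensity zero, each
    jump flips exactly one component.
    """
    rng = random.Random(seed)
    t, y = 0.0, tuple(y0)
    path = [(t, y)]
    while True:
        r1, r2 = h1(y, t), h2(y, t)
        total = r1 + r2
        if total <= 0:                  # absorbing state: no further jumps
            break
        t += rng.expovariate(total)     # waiting time to the next jump
        if t >= t_end:
            break
        flip = 0 if rng.random() < r1 / total else 1
        y = tuple(1 - v if i == flip else v for i, v in enumerate(y))
        path.append((t, y))
    return path

# Constant intensities, e.g. illness recurring at rate 1, virus at rate 2:
path = simulate_composable(lambda y, t: 1.0, lambda y, t: 2.0, t_end=5.0)
print(len(path), path[0])
```

State-dependent intensities — say, the illness flaring up faster while the virus is present — are obtained simply by letting h1 depend on y[1].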
It is easy to state the condition for independence of Y_1(t) and Y_2(t): this is that
h_1(0,0; 1; t) = h_1(0,1; 1; t),
h_1(1,0; 0; t) = h_1(1,1; 0; t),
h_2(0,0; 1; t) = h_2(1,0; 1; t), and h_2(0,1; 0; t) = h_2(1,1; 0; t).
In words, this says that the transition intensity in the Y_1 subprocess does not depend on Y_2(t), and that the transition intensity in the Y_2 subprocess does not depend on Y_1(t). But notice that these conditions can be separated. It is quite feasible that h_1(0,0; 1; t) = h_1(0,1; 1; t),
h_1(1,0; 0; t) = h_1(1,1; 0; t) holds, but not
h_2(0,0; 1; t) = h_2(1,0; 1; t), h_2(0,1; 0; t) = h_2(1,1; 0; t); in this case, we say that Y_1 is locally independent of Y_2, but Y_2 is not locally independent of Y_1. More formally, we say that Y_i is locally independent of Y_j with respect to (Y_1, ..., Y_K) when h_i(y; y*_i; t) is a constant function of y_j for every y_s, where s = {1, ..., K} \ {i, j}. That is, for every fixed value y_s of the remaining subprocesses in (Y_1, ..., Y_K), the transition intensities h_i(y; y*_i; t) of the ith component do not depend on y_j. We use the notation Y_i ⊥⊥_l Y_j | (Y_1, ..., Y_K) to represent that Y_i is locally independent of Y_j with respect to (Y_1, ..., Y_K). This concept of local independence, due to Schweder (1970), plays the same fundamental role in the present framework as conditional independence does in standard graphical modelling. We notice immediately that the relation is asymmetrical: that is, Y_i ⊥⊥_l Y_j | (Y_1, ..., Y_K) does not imply Y_j ⊥⊥_l Y_i | (Y_1, ..., Y_K). Thus graphs representing the local independence structure must have directed edges, i.e., both single-headed (unidirectional) and double-headed (bidirectional) arrows are allowed. An example of a local independence graph is shown in Figure 7.18. It represents seven local independence relations, namely Y_1 ⊥⊥_l Y_3 | (Y_1, ..., Y_4), Y_1 ⊥⊥_l Y_4 | (Y_1, ..., Y_4), Y_2 ⊥⊥_l Y_1 | (Y_1, ..., Y_4), Y_2 ⊥⊥_l Y_4 | (Y_1, ..., Y_4), Y_3 ⊥⊥_l Y_1 | (Y_1, ..., Y_4), Y_3 ⊥⊥_l Y_4 | (Y_1, ..., Y_4), and Y_4 ⊥⊥_l Y_1 | (Y_1, ..., Y_4).
FIGURE 7.18. A local independence graph.
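The definition can be checked mechanically: in the time-homogeneous case, Y_i is locally independent of Y_j precisely when Y_i's flip intensity is unchanged by flipping the jth coordinate. A small sketch (hypothetical code) also exhibits the asymmetry noted above:

```python
from itertools import product

def locally_independent(h_own, j_other, K=2):
    """Decide whether the component with flip intensity h_own is locally
    independent of coordinate j_other (time-homogeneous case).

    h_own(y) gives the component's flip intensity in state y; local
    independence requires it to be a constant function of y[j_other]
    with all other coordinates held fixed.
    """
    for y in product((0, 1), repeat=K):
        y_flip = tuple(1 - v if i == j_other else v for i, v in enumerate(y))
        if h_own(y) != h_own(y_flip):
            return False
    return True

# Y1's flip rate ignores Y2, but Y2's flip rate depends on Y1 -- asymmetry:
h1 = lambda y: 0.5
h2 = lambda y: 0.1 + 0.9 * y[0]
print(locally_independent(h1, j_other=1))  # True
print(locally_independent(h2, j_other=0))  # False
```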
Didelez (1999) studied the Markov properties of local independence graphs: in particular, pairwise, local, and global properties framed in terms of local as well as conditional independence. A modified concept of separation is involved; we refrain from explaining this here. Applied to Figure 7.18, for example, we obtain that Y_1 ⊥⊥_l Y_3 | (Y_1, Y_2, Y_3), but not Y_2 ⊥⊥_l Y_4 | (Y_1, Y_2, Y_4).
Armed with these results, the graphs appear to provide a powerful tool for understanding complex multivariate processes. It is interesting to note that continuous time systems can be represented by graphs, but that these are not DAGs or chain graphs.
7.4
Covariance Graphs

Cox and Wermuth (1993, 1996) suggested the use of undirected graphs to display the marginal independence structure of a set of variables, by connecting two vertices by an edge whenever the two variables are marginally dependent. By convention, edges are drawn as dashed lines, and the graphs are called covariance graphs. Their treatment was informal; the Markov interpretation of the graphs was unclear in the sense that local and global Markov properties were not given.
A formal treatment was provided by Kauermann (1996), who studied pairwise, local, and global Markov properties for these graphs and obtained general conditions for their equivalence. A family of models corresponding to the graphs is available for the Gaussian case only. These models are dual to graphical Gaussian models, in that they constrain a set of elements of the covariance matrix Σ to be zero, rather than a set of elements of the precision matrix Ω. Kauermann shows that the maximum likelihood estimates have the dual property to those of graphical Gaussian models, namely that the fitted precision ω̂^{xy} only differs from its counterpart in the sample precision matrix for those x, y for which σ^{xy} is constrained to be zero. Fitting these models in MIM is straightforward, using the following trick: use StatRead to read the sample precision matrix instead of the covariance matrix, and interpret the fitted covariances as fitted precisions and vice versa. The quantity shown as the deviance has the same asymptotic properties as the true deviance, and so can be used for this purpose. In illustration we supplement the analysis of the anxiety and anger data shown in Section 3.1.5. We wish to fit the dual model shown in Figure 7.19; this represents the hypothesis that X ⊥⊥ Y and W ⊥⊥ Z. First we obtain the sample precision matrix, using the original data:
FIGURE 7.19. A covariance graph, expressing that X ⊥⊥ Y and W ⊥⊥ Z (vertices: Anxiety state (W), Anxiety trait (Y), Anger state (X), Anger trait (Z)).
MIM>pr t
Empirical discrete, linear and precision parameters.
W 0.05600
X 0.02142 0.04065
Y 0.02676 0.00107 0.05764
Z 0.00187 0.01236 0.01426 0.03484
Linear 0.20728 0.09730 0.36695 0.36090 19.64269
W X Y Z Discrete
Then we enter it using StatRead:

MIM>StatRead WXYZ
684 18.8744 15.2265 21.2019 23.4217
0.05600 0.02142 0.04065 0.02676 0.00107 0.05764
0.00187 0.01236 0.01426 0.03484 !
Reading completed.
MIM>satmod; delete XY,WZ
MIM>fit; print fg
Deviance: 214.7895 DF: 2
Fitted counts, means and covariances.
W 0.056
X 0.021 0.041
Y 0.027 0.013 0.058
Z 0.010 0.012 0.014 0.035
Means 18.874 15.226 21.202 23.422 684.000
W X Y Z Count
Fitted discrete, linear and precision parameters.
W 26.897
X 11.052 32.976
Y 10.072 0.000 23.678
Z 0.000 8.394 6.680 34.414
Linear 889.491 907.301 848.554 1075.472 36889.176
W X Y Z Discrete
The model fits very poorly. The fitted covariances are shown as precisions and vice versa.
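The sample precision matrix needed for this trick is simply the inverse of the maximum likelihood sample covariance matrix, which can be computed in any matrix language; for instance (illustrative values, not the anxiety/anger data):

```python
import numpy as np

# A hypothetical 3x3 ML sample covariance matrix (divisor N, not N - 1).
S = np.array([[2.0, 0.6, 0.3],
              [0.6, 1.5, 0.4],
              [0.3, 0.4, 1.0]])

K = np.linalg.inv(S)   # the sample precision matrix: this is what StatRead is fed
print(np.round(K, 4))

# Inverting once more recovers the covariances, which is why the fitted
# "covariances" printed by MIM can be read as fitted precisions and vice versa.
assert np.allclose(np.linalg.inv(K), S)
```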
7.5
Chain Graphs with Alternative Markov Properties

Cox and Wermuth (1993, 1996) suggest the use of chain graphs with a different Markov interpretation to that described in Section 7.2. To motivate this, consider the recursive linear system
X_1 = ε_1,
X_2 = ε_2,
X_3 = bX_1 + ε_3,
X_4 = cX_2 + ε_4,
where b and c are fixed constants, and ε_1, ε_2, and (ε_3, ε_4)' are independent stochastic ...

MIM>model ab/abx,aby/xy
MIM>show p
MIM>fit
(which specifies a model, describes its properties, and then fits it to the data) may be shortened to

MIM>mod ab/abx,aby/xy;sh p;fit
Appendix A. The MIM Command Language
if so desired. Some commands have operands, which normally follow on the same line. If necessary, ampersand (&) may be used as a continuation symbol; this enables the operands to be given on the following line(s). All text following the ampersand on the same line is ignored. For example,

MIM>model ab/a &
MIM>bx,ay & Hoho
MIM>/xy
is equivalent to

MIM>model ab/abx,ay/xy
Two characters, hash (#) and percent (%), are useful for commenting command lines. Any text following the hash sign (#) on the same line is completely ignored. Any text following the percent sign (%) on the same line is stored internally, but otherwise ignored. The comment lines stored may be displayed using the Show D command. For example,

MIM>% We begin by declaring three variables: a, b and c.
MIM>fact a2b2c2
MIM>show d
We begin by declaring three variables: a, b and c.
These features are particularly useful on input files (Section A.12.1).
A.2
Declaring Variables

Variable names consist of single letters from A to Z and a to z. Note that these are distinct; for example, X and x refer to different variables. This very short naming convention is convenient in connection with model formulae, but it can be inconvenient in practical work. To remedy this, labels can be attached to variables, as we describe shortly. Discrete variables are declared using the command Factor. The number of levels they can assume must be specified as an integer following the variable name; for example,

MIM>fact a 2 b 2 c 3
The blanks between the variable names and the integers can be omitted. Continuous variables are declared using the command Continuous. For example,

MIM>cont x y z
Again, the blanks between the variable names can be omitted. The command Variate is synonymous. The Label command is used to define variable labels with up to 12 characters. The labels should be enclosed in quotes ("). For example,
MIM>label w "Sepal Length" x "Sepal width"
The labels can be shown on independence graphs. Information on declared variables can be obtained using the command Show, as illustrated below.
The command ValLabel is used to label the levels of factors. The syntax is

ValLabel var level "label" level "label" ...
where var is a factor name. For example,

Factor A2
ValLabel A 1 "Level One Label" 2 "Level Two Label"
defines a binary factor A, with "Level One Label" and "Level Two Label" as labels. Value labels may have at most 255 characters. They may be displayed using the Show L command. Factors with more than two levels can be defined as being ordinal (i.e., having ordered categories) using the command Ordinal, or nominal (i.e., with unordered categories) with the command Nominal. The syntax of the commands is

Ordinal fset
Nominal fset
where fset is a list of factors. If fset is blank, a list of the ordinal or nominal factors is written out. Per default, factors are nominal. These commands are useful in connection with the analysis of contingency tables with ordinal classifying factors.
A.3
Undirected Models

Formulae for undirected models are entered using the Model command. Some examples:

MIM>fact a2b2c2d2
MIM>model abc,bcd
MIM>cont wxyz
MIM>model //wxy,xyz
MIM>fact a2b2c2; cont x
MIM>model abc/abx/x
MIM>fact a2b2; cont wxyz
MIM>model a,b/aw,bw,x,y,z/aw,bw,wxyz
The first two examples illustrate pure models (i.e., those involving only discrete or only continuous variables).
The syntax of the formula is checked and the formula is stored in a concise form. For example,

MIM>cont wxyz
MIM>model //wx,xy,wy,xy,xz,yz; print

results in the following output:

The current model is //wxy,xyz.
As shown here, generators are normally separated by commas; however, plus signs (+) are also allowed. For example, the following is valid:

MIM>fact a2b2c2d2
MIM>model abc+bcd
A.3.1
Deleting Edges
The command DeleteEdge removes edges from the current model. That is to say, it changes the current model to a new model, defined as the maximal submodel of the current model without the specified two-factor interactions. The edges to be removed are separated by commas. For example,
MIM>factor a2b2c2; cont xyz; model ab,bc/abx,cy,az,cz/yz,xy,bx
MIM>delete ab,bx
MIM>print
The current model is: bc,a/cz,cy,az,ax/yz,xy.
illustrates deletion of the edges [ab] and [bx] from the previous model. Note that if larger variable subsets are specified, edges corresponding to all variable pairs in the subsets are deleted. For example,
MIM>delete abc,de
removes the edges [ab], [bc], [ac], and [de]. Note also that if the current model is graphical, the model obtained by deleting an edge will also be graphical.
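The effect of DeleteEdge on a model's generating class can be sketched directly (an illustrative reimplementation, not MIM's internal code): each generator containing both endpoints of the edge is split into the two subsets obtained by dropping one endpoint, and the result is reduced to its maximal sets.

```python
def delete_edge(generators, edge):
    """Return the generating class of the maximal sub-model without the
    two-factor interaction `edge`.

    Generators are written as strings of single-letter variables; the
    result is returned as sorted strings, duplicates and non-maximal
    sets removed.
    """
    a, b = edge
    out = []
    for gen in generators:
        s = frozenset(gen)
        if a in s and b in s:
            out += [s - {a}, s - {b}]
        else:
            out.append(s)
    maximal = [s for s in out if not any(s < t for t in out)]
    return sorted({"".join(sorted(s)) for s in maximal})

print(delete_edge(["abc", "bcd"], "bc"))  # ['ab', 'ac', 'bd', 'cd']
```

Deleting [bc] from abc,bcd thus yields ab,ac,bd,cd — the cliques left after removing the edge from the graph.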
A.3.2
Adding Edges
The command AddEdge adds edges to the current model. To be precise, the new model contains the additional two-factor interactions together with all the higher-order relatives for which the lower-order terms were in the current model. The edges to be added are separated by commas. For example,

MIM>fact a2b2c2; cont xyz; mod bc,a/cz,cy,az,ax/yz,xy
MIM>add aby,bx; pr
The current model is: bc,ab/cz,bcy,az,aby,abx/yz,bxy,aby.
illustrates addition of the edges [ab], [ay], [by], and [bx] to the previous model.
Note that if the current model is heterogeneous graphical, then the model obtained by adding an edge will also be heterogeneous graphical. However, if an edge between a discrete and a continuous variable is added to a homogeneous graphical model, the resulting model will be heterogeneous; for example,

MIM>mod a/ax,y/xy
MIM>add ay
MIM>pr
The current model is: a/ay,ax/xy,ay.
The command HAddEdge (homogeneous add edge) works like AddEdge except that it adds only the higher-order relatives that do not lead to variance heterogeneity:

MIM>mod a/ax,y/xy
MIM>hadd ay
MIM>pr
The current model is: a/ay,ax/xy.
A.3.3
Other Model-Changing Commands
The command SatModel changes the current model to the saturated model (or maximum model, if this has been set; see Section A.12.6). Similarly, the HomSatModel command changes to the homogeneous saturated model, and the MainEffects command changes the current model to the main effects model. These commands take no operands. For example,

MIM>fact a2b2c2; cont wxyz
MIM>satmod; print
The current model is: abc/abcw,abcx,abcy,abcz/abcwxyz.
MIM>homsat; print
The current model is: abc/abcw,abcx,abcy,abcz/wxyz.
MIM>maineff; print
The current model is: a,b,c/w,x,y,z/z,y,x,w.
The command BackToBase (which takes no operands) changes the current model to the base (alternative) model, if one such model has been set using the Base command (Section A.7.1).
A.3.4
Model Properties

Some properties of the current model can be obtained by using the command Show. For mixed models (i.e., those involving both discrete and continuous variables), the following information is given: whether the model is collapsible onto the discrete variables, whether it is mean linear, whether it is graphical or homogeneous graphical, and whether it is decomposable. For example,
MIM>mod AB,AC,BC; show
The current model is AB,AC,BC.
It is not graphical.
It is not decomposable.
MIM>mod AB,BC,CD,AD; show
The current model is AB,BC,CD,AD.
It is graphical.
It is not decomposable.
MIM>mod //WX,XY,YZ,ZW; show
The current model is //YZ,XY,WZ,WX.
It is graphical.

It is not decomposable.
MIM>mod AB/ABX,BY/XY; show
The current model is AB/ABX,BY/XY.
It is collapsible onto the discrete variables.
It is not mean linear.
It is homogeneous graphical.
It is decomposable.
MIM>mod AB/AX,BX,BY/BXY; show
The current model is AB/AX,BX,BY/BXY.
It is collapsible onto the discrete variables.
It is not mean linear.
It is not graphical.
It is not decomposable.
The command Collapse, which has syntax

COLLAPSE varlist
determines whether the current model is collapsible onto the variable set specified. For example,

MIM>mod ab,bc,ac/ax,bx,cx/x; collapse abc
The model is collapsible onto abc.
A.4
Block-Recursive Models

Block-recursive (or chain graph) models can also be used. A current block-recursive model may be defined, which may coexist with a current undirected model.
The first step in using block-recursive models is to define the block structure.
A.4.1
Defining the Block Structure

This is done using the SetBlocks command, which takes the form:

SetBlocks v1 | v2 <| v3 <| v4>>
where v1, v2, etc. are sets of variables. For example,

SetBlocks abcx | dy | ez
The variables in v1 are prior to those in v2, which are prior to those in v3, etc. The command should be used prior to working with block-recursive models. Note that changing the block structure (by repeating the command) destroys any current block-recursive information, so this should be done with caution. The current block structure is displayed using the command Show B. Note also that when SetBlocks is invoked, block mode is turned on.
A.4.2
Block Mode
The BlockMode command switches between block mode and ordinary mode. In block mode, certain operations (see Table A.1) act upon the current block-recursive model, whereas in ordinary mode they act upon the current undirected model. It is useful to be able to switch back and forth between these modes. The syntax of the command is

BlockMode
where + turns block mode on, - turns it off, and blank shows the current mode. In block mode, the graph window displays the current block-recursive model, rather than the current undirected model.
Defining BlockRecursive Models
Corresponding to the Model command, the BRModel command is used to define a blockrecursive model. This must be consistent with the block structure set previously by SetBlocks. The syntax is BRModel mfl 1 mf2 < 1 mf3 < Imf4 »
where mf i, mf2 etc are model formulae. For example, MIM>Fact a2b2c2; Cont xyz MIM>SetBlock axlbylcz Block structure set. MIM>BRModel a/ax/x 1 ab/ax,by 1 ab,bc/x,y,bz/bxyz MIM>pr The current blockrecursive model 1S: 1 a/ax/x 2 ab/ax,by/y,ax 3 ab,bc/bz,abx,aby/bxyz,abxy •
Each component model specifies the model for the variables in that block conditional on the variables in the prior blocks. Note that interaction terms
Command         Modified Action
HomSatModel,    set the current block-recursive model.
SatModel,
MainEffects
AddEdge,        act on the appropriate component model.
DeleteEdge,
TestDelete
Print           (without operands) displays the formula of the block-recursive
                model when in block mode. Note that Print m shows the current
                undirected model, and Print b the current block-recursive
                model, whatever the mode.
Fit             fits all component models by using Fit on each component
                undirected model in turn. Note that if some component models
                have discrete response(s) and continuous covariate(s), then
                the parameter estimates obtained will not maximize the
                conditional likelihood.
CGFit           fits all component models by using Fit on the first component
                and CGFit on the remaining components. (For large models this
                can be very time-consuming.)
Stepwise        selects a block-recursive model by means of a stepwise
                selection in each block in turn.
DeleteLSEdge    deletes the least significant edge from the current
                block-recursive model after a one-step stepwise selection.
DeleteNSEdge    deletes the non-significant edges from the current
                block-recursive model after a one-step stepwise selection.

TABLE A.1. The modified actions of commands in block mode.
between the conditioned variables are added to each component model (see Section 4.5).
A.4.4
Working with Component Models

This is done by means of the PutBlock and GetBlock commands. The PutBlock command stores the current (undirected) model as a component in the current (block-recursive) model. The reverse operation is performed by GetBlock. The syntax is

PutBlock k
GetBlock k
where k is a block number.
A validity check is performed before the current model is stored when PutBlock is used: to be valid, the model must contain all interactions between variables in the prior blocks. In addition to retrieving a model, GetBlock also redefines the set of fixed variables to be the variables in the prior blocks.
Note that the commands store and retrieve the fitted values, likelihood, etc. for the component models, when these are available.
A.5
Reading and Manipulating Data

Data can be read either (i) as raw (case-by-case) data using the Read command, or (ii) in summary form, i.e., as counts, means, and observed covariances using StatRead. If data are read in case-by-case form, they can subsequently be transformed, new variables can be calculated, and observations can be restricted using the Restrict command (to be described later). Various commands also require raw data; for example, BoxCox, EMFit and CGFit. If data are read in summary form, transformations and restrictions are not possible. (However, if purely discrete data are entered in contingency table form, they may be converted to case-by-case form using the command Generate).
Only one set of data can be handled at a time. Existing data are deleted when Read or StatRead is called. To add variables to a dataset, see Section B.5.
A.5.1 Reading Casewise Data

The command Read is used for reading casewise data, particularly data stored on files (for example, data transferred from other programs). (For interactive data entry, refer to the EnterData command in Section B.5.) The syntax of Read is

Read varlist

where varlist is a list of variables, i.e., single letters. The letters in the list can be separated by blanks; they must have been declared in advance as variable names. The data should be entered on the subsequent lines, each number being separated by one or more blanks, commas, and/or tabs. The data for one case need not be on one line. The data should be terminated by an exclamation mark (!). For example,
MIM>fact a2b3; cont wxyz; read awbx
DATA>1 3.456 2 5.67654
DATA>2 3.656 3 2.53644
DATA>1 3.5354 1 2.4352 !
reads three cases. Missing values can be given as asterisks (*). Factors are entered as integers between 1 and k, where k is the number of levels.

The use of input files is described below (Section A.12.1).
A.5.2 Reading Counts, Means, and Covariances

Data can also be entered in the form of counts, means, and covariances. These are just the sufficient statistics for the full model. The command StatRead reads the statistics in standard cell order. The syntax is:

StatRead varlist

where varlist is a variable list, first the discrete and then the continuous variables.
For example, for a purely discrete model, the sufficient statistics are just the counts of the contingency table. Thus, for a three-way table we write:

MIM>fact a2b2c2; statread abc
DATA>12 32 34 23 34 4 12 19 !
The order of the cells is as follows: (1,1,1), (1,1,2), (1,2,1), ..., (2,2,2). In other words, the last index changes fastest.

For purely continuous data (graphical Gaussian models), there is only one "cell" and the sufficient statistics are as follows: the number of observations (N), followed by the q empirical means, followed by the q(q+1)/2 empirical covariances, where q is the number of variables. (The maximum likelihood estimates, i.e., with divisor N, not N-1, should be used for the covariances.) For example, with q = 2, we might have

N = 47,   ȳ = (2.36, 9.20)',   S = ( 0.0735  0.1937 )
                                   ( 0.1937  1.8040 )

as the observed count, mean vector, and covariance matrix. These can be entered as follows:

MIM>Cont xy
MIM>StatRead xy
DATA>47 2.36 9.20 0.0735 0.1937 1.8040 !
After the count and the sample means, the elements of the lower triangle of the empirical covariance matrix are entered, row by row.
To check that the data have been entered correctly, it is wise to give the command Print S (i.e., print the sufficient statistics for the full model). In the current example, we would get:

Empirical counts, means and covariances
X      0.074
Y      0.194   1.804
Means  2.360   9.200  47.000
         X       Y     Count
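The summary form that StatRead expects can also be produced from raw data outside MIM. The following Python sketch (not MIM code; the function name is illustrative) computes the count, the means, and the lower-triangle maximum likelihood covariances in the order described above:

```python
# Sketch: build the summary line StatRead expects from raw (case-by-case)
# data on q continuous variables: N, then the q means, then the q(q+1)/2
# covariances of the lower triangle, row by row, with ML divisor N (not N-1).

def statread_line(rows):
    """rows: list of q-tuples, one tuple per observation."""
    n = len(rows)
    q = len(rows[0])
    means = [sum(r[j] for r in rows) / n for j in range(q)]
    out = [float(n)] + means
    # lower triangle, row by row: s11, s21, s22, s31, s32, s33, ...
    for j in range(q):
        for k in range(j + 1):
            s = sum((r[j] - means[j]) * (r[k] - means[k]) for r in rows) / n
            out.append(s)
    return out
```

The resulting list can be pasted after a DATA> prompt, terminated by an exclamation mark.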
For mixed models, the sufficient statistics for the full model are the cell count, empirical cell means, and empirical cell covariances for each cell in the underlying table. The StatRead command expects these to be entered in standard cell order. For example, we might have
MIM>fact a2; cont xy
MIM>statread axy
DATA>47 2.36 9.20 0.0735 0.1937 1.8040
DATA>54 2.87 9.41 0.0837 0.1822 1.9423 !
We mention in passing a trick that is useful in the analysis of mixed data. Often published sources do not report the sample cell covariance matrices {S_i}, but instead just the overall covariance matrix S. Now we cannot enter {n_i, ȳ_i, S_i} as we would like. However, if we enter {n_i, ȳ_i, S} and set the maximum model to the saturated homogeneous model (see subsection A.12.6), then as long as we fit only homogeneous models, the results are correct. This can be verified from the likelihood equations.
A.5.3 Transforming Data

The Calculate command is used to transform existing variables or to calculate new ones. The functions SQRT, SQR, SIN, COS, ARCTAN, LN, EXP, FACT, and the operators +, -, *, /, and ^ can be used. For example,

Calculate x=sin((x+y)^2)
The identifier on the left-hand side can be either a variate or a factor. New variates need not be declared in advance. The right-hand side can contain variates, factors, or constants. Invalid operations, e.g., ln(-1), result in missing values. Similarly, if any value of the variables on the right-hand side is missing, the result is also missing. Expressions can also include comparison operators; these return 1 when the expression is true and 0 when it is false. For example,
MIM>Fact f2
MIM>Calc f = 1 + (v>4.5)

discretizes v, i.e., calculates a factor f such that

f = 1 if v < 4.5, and f = 2 otherwise.

Recoding a factor is illustrated in the following fragment:

MIM>fact f3
<set values to f>
MIM>fact g2
MIM>calc g = 1*(f=1) + 2*(f=2) + 2*(f=3)
Five special functions are also available. OBS, UNIFORM, and NORMAL (which do not take operands) return the observation number, a uniform (0,1) random variable, and a standard normal random variable, respectively. PNORMAL(x) returns the right tail probability in the standard normal distribution, and PNORMALINV(p) is its inverse, i.e., returns the x for which PNORMAL(x) = p.
A.5.4 Restricting Observations

The Restrict command is used to analyze data subgroups by restricting the observations used subsequently. The syntax is:

Restrict expression
where expression follows the same rules as with Calculate. The observations are restricted to those for which the result equals 1. For example,

Restrict v<w

restricts to observations for which v is less than w. Similarly,

Restrict (v<w)*(w<x)

restricts to observations for which both v<w and w<x. Consider the following fragment:

MIM>fact a3; cont x; read ax
MIM>mod a/ax/x; fit
MIM>rest 1-(a=1); fit

A is a factor with three levels. Restrict here has the effect of omitting level one from the following analysis; however, as described in Section 5.2, MIM usually requires that no levels are empty when fitting models or calculating degrees of freedom. Thus, the above program fragment will result in an error. To avoid this, a new factor B with two levels could be created, as illustrated in the following fragment:

fact b2; calc b=a-1; model b/bx/x; fit
A.5.5 Generating Raw Data

The command Generate generates casewise data given data in contingency table form. For example,

MIM>Fact a2b3
MIM>StatRead ab
DATA>1 2 3 2 1 2 !
Reading completed.
MIM>Generate
MIM>Print d
Obs A B
  1 1 1
  2 1 2
  3 1 2
  4 1 3
  5 1 3
  6 1 3
  7 2 1
  8 2 1
  9 2 2
 10 2 3
 11 2 3
The command takes no operands.
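The expansion that Generate performs is easily sketched outside MIM. In the Python fragment below (illustrative, not MIM code), cells are enumerated in standard cell order, last index fastest, and each cell is repeated according to its count:

```python
# Sketch of what Generate does: expand a table of counts, given in
# standard cell order (last index changes fastest), into case-by-case
# records of factor levels.
from itertools import product

def generate(levels, counts):
    """levels: e.g. (2, 3) for factors A2, B3; counts in standard cell order."""
    cells = list(product(*(range(1, k + 1) for k in levels)))
    cases = []
    for cell, n in zip(cells, counts):
        cases.extend([cell] * n)
    return cases
```

With levels (2, 3) and counts 1 2 3 2 1 2, this reproduces the eleven observations listed above.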
A.5.6 Deleting Variables

Variables in the data can be deleted using the command Erase. This has syntax

Erase vlist

where vlist is a list of variables. The variables are erased from the raw data and are undeclared.
A.6 Estimation

Three commands for parameter estimation are available. The Fit command fits the current undirected model by maximum likelihood, using all complete cases. The EMFit command uses the EM-algorithm to fit the current undirected model by maximum likelihood, including all incomplete cases. The CGFit command fits CG-regression models, using all complete cases.

A.6.1 Undirected Models (Complete Data)

The command Fit is used to fit the current model to the data. Consider, for example,
MIM>Cont xy
MIM>StatRead xy
DATA>47 2.36 9.20 0.0735 0.1937 1.8040 !
Reading completed.
MIM>Model //x,y
MIM>Fit
The Fit command gave the following response:

Deviance:  15.6338  DF: 1
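This figure can be checked by hand: for two continuous variables, the deviance of the independence model //x,y against the saturated model is -N ln(1 - r²), where r is the sample correlation. A Python sketch (not MIM code) reproduces it from the sufficient statistics entered above:

```python
import math

# Deviance of the independence model //x,y against the saturated model:
# dev = -N * ln(1 - r^2), with r the sample correlation computed from
# the ML covariances entered via StatRead.
N = 47
sxx, sxy, syy = 0.0735, 0.1937, 1.8040
r2 = sxy ** 2 / (sxx * syy)
deviance = -N * math.log(1.0 - r2)   # about 15.63 on 1 DF
```

The single degree of freedom corresponds to the one edge, [xy], missing from the model.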
A.7. Hypothesis Testing

Option         Description                      Other requirements
S              F-test                           Homogeneous, one variable continuous
E              Exhaustive enumeration           Discrete separating set
M              Monte Carlo                      Ditto
Q              Sequential Monte Carlo           Ditto
Z              Shows deviance decomposition     Ditto
D              Prints R x C x L table           Ditto
C              Estimates size of reference set  Ditto
I              Use stratum-invariant scores     Ditto
L              Deviance test                    Row variable discrete
P, F           Contingency table tests          Both variables discrete
W, X, Y, K, J  Rank tests                       Row variable discrete

TABLE A.2. TestDelete options. All options require that the test corresponds to a decomposable edge deletion test.
If no options are specified, an asymptotic likelihood ratio test is performed (as with the Test command). This compares the deviance difference with a χ²-distribution. When the test corresponds to a decomposable edge deletion test, the degrees of freedom are adjusted to account for parameter inestimability. Otherwise, the degrees of freedom are calculated in the same way as with the Test command, i.e., as the difference in the number of free parameters between the two models. The options are summarized briefly in Table A.2.
What do we mean by this expression "corresponds to a decomposable edge deletion test"? We mean that either (i) both the initial and the resulting model are decomposable, or (ii) using collapsibility properties, the test is equivalent to a test of type (i). Figure A.1 illustrates the latter type.
A.7.5 Edge Deletion F-Tests

The S option causes an F-test to be performed instead of a χ² test. It requires that (i) both the initial and the resultant model are decomposable and variance homogeneous, and (ii) one or both of the vertices in the edge are continuous (see Section 5.3).
A.7.6 Exact Tests

The TestDelete command can also perform exact conditional tests. It can do so when

1. The test corresponds to a decomposable edge deletion test.
FIGURE A.1. The model is not decomposable, but is collapsible onto {A,D,E}. It follows that the test for the deletion of [AE] corresponds to a decomposable edge deletion test.
2. The conditioning set, that is, the variables adjacent to both endpoints of the edge, is composed entirely of discrete variables.

Table A.3 shows which tests are available and which types of variables they can be used with. Nominal, ordinal, and binary types must be specified as factors. The rank tests and the randomised F-test are sensitive to the order of the variables specified: for example, TestDelete AB J will give different results to TestDelete BA J. The first variable specified is called the row variable, and the second is called the column variable, as described in Section 5.4. In all cases, the row variable must be discrete, while the column variable may be either discrete or continuous (depending on the test). Note that if the column variable is continuous, then raw data must be available.

To specify the method of computation, one of the following options should be given: E (exhaustive enumeration), M (Monte Carlo), Q (sequential Monte Carlo). If none are specified, only the asymptotic test is performed. The maximum number of sampled tables in Monte Carlo sampling is set by the command MaxSim, the default being 1,000.

Option  Test                 Row variable  Column variable
L       LR test              Nominal       Nominal or variate
P       Pearson χ²           Nominal       Nominal
F       Fisher's             Nominal       Nominal
W       Wilcoxon             Binary        Ordinal or variate
X       van Elteren (a)      Binary        Ordinal or variate
Y       van Elteren (b)      Binary        Ordinal or variate
K       Kruskal-Wallis       Nominal       Ordinal or variate
J       Jonckheere-Terpstra  Ordinal       Ordinal or variate
S       Randomised F-test    Nominal       Variate

TABLE A.3. TestDelete options for exact tests. (a) Design-free stratum weights; (b) Locally most powerful stratum weights.

The prescribed maximum
number of tables with T(M_k) ≥ t_obs in sequential Monte Carlo sampling is set by the command MaxExcess. Default is 20. For example, the following fragment sets new values for the two parameters:

MIM>maxsim 5000
Max no of simulations: 5000
MIM>maxexcess 50
Max no of excesses: 50
The exhaustive enumeration method can be used for problems for which R × C × L < 1,900, whereas there are no formal limits to the size of problems to which Monte Carlo sampling can be applied. In practice, however, time will usually be the limiting factor for large problems.
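The stopping rule that MaxSim and MaxExcess control can be sketched generically (this is an illustration of sequential Monte Carlo p-value estimation, not MIM's internal code; the function and parameter names are invented):

```python
import random

# Sketch of a sequential Monte Carlo p-value with the stopping rule the
# MaxSim/MaxExcess parameters control: draw sampled tables until either
# maxsim draws have been made, or maxexcess sampled statistics have
# reached the observed value t_obs.
def sequential_mc_p(t_obs, sample_statistic, maxsim=1000, maxexcess=20):
    excess, n = 0, 0
    while n < maxsim and excess < maxexcess:
        n += 1
        if sample_statistic() >= t_obs:
            excess += 1
    return excess / n   # estimated p-value

random.seed(1)
# toy null distribution: uniform statistic, so P(T >= 0.95) is about 0.05
p = sequential_mc_p(0.95, random.random)
```

Stopping early when the excess count is reached saves most of the simulation effort precisely when the p-value is large, i.e., when high precision is not needed.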
Three further options are available. The option D prints out the underlying three-way contingency table, and the option C prints an estimate of the number of tables in the reference set Y (see below) using the formula in Gail and Mantel (1977). Option I causes stratum-invariant scores to be used for the rank tests instead of the stratum-specific scores, which are the default.

A technical detail: when several tests are being computed and the sequential stopping rule is used, then it can only be applied to one test. Which test this is, is given as the first option appearing in the following list: LPFWXYKJS.

A.7.7 Symmetry Tests

The command SymmTest performs a test of multivariate symmetry (see Appendix C and Section 6.7). It is assumed that all variables in the current model are binary factors. The command takes no operands.
A.7.8 Randomisation Tests

The command RandomTest enables general randomisation tests to be performed, more general than those available using TestDelete. The syntax is

RandomTest Z <B> <letter>

where Z is the randomisation factor, B is an optional blocking factor, and letter can be E (exhaustive enumeration), M (fixed sample Monte Carlo), or Q (sequential Monte Carlo). M is default.

The hypothesis test considered is identical to that defined in the Test command. That is, the current model is tested against the base model, using the deviance as test statistic.
A randomisation test is based on the randomisation distribution of the deviance, that is, its distribution over all possible randomisations. It is assumed that the total numbers receiving each level of Z are fixed. If a blocking factor B is supplied, then this is also true within each level of B. The Monte Carlo process is controlled by parameters set by the commands MaxSim and MaxExcess (for sequential Monte Carlo). The command is inefficient compared with TestDelete and is only intended for hypothesis tests that are not available using TestDelete, that is, that do not correspond to single edge removal tests.
A.8 Model Selection

A.8.1 Stepwise Selection

The syntax of the Stepwise command is

Stepwise options

where the available options are described below (see Table A.4 for a summary). The default operation of Stepwise is backward selection using χ²-tests based on the deviance difference between successive models. Unless specified otherwise, Stepwise runs in decomposable mode if the initial model is decomposable; otherwise, it runs in unrestricted mode. This may be overridden using the U option. The U option specifies unrestricted mode as opposed to decomposable mode.
The F option results in forward selection: instead of the least significant edges being successively removed, the most significant edges are added. In other words, at each step, the edge with the smallest p-value, as long as this is less than the critical level, is added to the current model (see, however, options O and H below). Sparsity-corrected degrees of freedom calculations are used in decomposable mode (see Section 5.2).
The S option performs small-sample tests, namely F-tests, instead of χ²-tests whenever appropriate. This option is only available in decomposable mode (see Section A.7.1).
In backward selection, coherence is the default, whereas in forward selection, non-coherence is the default. These settings can be overridden using the N and C options, standing for non-coherent and coherent, respectively.
Option  Result
F       Forwards selection
U       Unrestricted, as opposed to decomposable, mode
S       Small-sample tests (F-tests)
C       Coherent mode
N       Non-coherent mode
O       One step only
H       Headlong
A       Reduce AIC
B       Reduce BIC
G       Use CG-regression models
I       Initial model as alternative
X       Maximum (saturated) model as alternative
V       Variance-heterogeneous edge addition
E       Exhaustive enumeration
M       Monte Carlo sampling
Q       Sequential Monte Carlo
L       Likelihood ratio test (G²)
P       Pearson χ²
W       Ordinal test

TABLE A.4. Options for stepwise model selection.
The O option causes one-step-only selection to be performed. After execution in backward selection, p-values can be written on the independence graph (until the model changes). Furthermore, the least significant edge can be deleted using the command DeleteLSEdge, and the nonsignificant edges using DeleteNSEdge.
The H option results in headlong model selection.
The I and X options specify the alternative hypotheses used in the significance tests carried out at each step. For example, in backward selection the tests performed will normally be of M0 versus M1, where M1 is the current model and M0 is obtained from M1 by removing an edge. The X option causes the test to be of M0 versus the saturated model, rather than versus M1. (If a maximum model is defined, that will be used instead of the saturated model.) The I option causes the test to be of M0 versus the initial model.
Normally, it will be preferable to use the tests between successive models (the default method). The I and X options have been included mainly to allow comparisons with other selection procedures. Use of the I and X options will give rather silly results in conjunction with forward selection. They cannot be used with the A or S options.
The V option relates to a technical detail concerning edge addition and variance homogeneity. Normally, in forward selection from an initial model that is variance homogeneous, all subsequent models considered will be variance homogeneous (corresponding to adding edges using HAddEdge instead of AddEdge). If the initial model is heterogeneous, then the usual operation (i.e., corresponding to AddEdge) is used. The V option forces the usual operation to be used when the initial model is variance homogeneous. For example,
MIM>Model a,b/x,y,z/x,y,z; Stepwise f

steps through homogeneous graphical models, whereas

MIM>Model a,b/x,y,z/x,y,z; Stepwise fv

steps through heterogeneous graphical models.
The A and B options allow stepwise selection based on the information criteria AIC and BIC (see Section 6.3). For each edge examined, the change in the criterion specified is calculated and displayed, and the edge corresponding to the greatest reduction in the criterion is chosen for addition or removal. This continues until no further reduction is possible. Options F, G, U, O, H, and V can also be used, but the remaining options have no effect.
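One way the bookkeeping behind criterion-based selection might look is sketched below (a Python illustration of the general AIC/BIC edge-scoring idea, not MIM's internal code; the function names and the candidates structure are invented). Removing an edge raises the deviance by some amount and frees some parameters, and the two effects are traded off:

```python
import math

# Sketch of criterion-based edge scoring: removing an edge increases the
# deviance by dev_diff and frees df parameters, so the criterion changes
# by dev_diff - penalty * df, with penalty 2 for AIC and ln(N) for BIC.
def criterion_change(dev_diff, df, n, criterion="BIC"):
    penalty = 2.0 if criterion == "AIC" else math.log(n)
    return dev_diff - penalty * df

def best_edge(candidates, n, criterion="BIC"):
    """candidates: dict edge -> (dev_diff, df). Returns (edge, change) for
    the most negative change, or None if no removal reduces the criterion."""
    edge, (d, df) = min(candidates.items(),
                        key=lambda kv: criterion_change(*kv[1], n, criterion))
    change = criterion_change(d, df, n, criterion)
    return (edge, change) if change < 0 else None
```

Selection stops when best_edge returns None, i.e., when no single-edge change reduces the criterion further.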
The G option is used for stepwise selection among CG-regression models. The covariates are specified using the Fix command. The options A, B, F, U, C, N, O, H, and V can also be used, but the remaining options have no effect. The fitting process is controlled by the same parameters that control CGFit. For high-dimensional problems, this selection method may be very time-consuming.

Finally, the E, M, Q, L, and P options perform exact tests (see Section 5.4 for further details). These are useful for the analysis of sparse contingency tables and are only available in decomposable mode. The W option is useful when there are ordinal factors; the test chosen for the removal or addition of an edge between two factors is sensitive to ordinality. More precisely, the choice is shown in Table A.5.

The default critical level is 0.05. This can be changed using the command CritLevel, as in
         Nominal         Binary    Ordinal
Nominal  G²              G²        Kruskal-Wallis
Binary   G²              G²        Wilcoxon
Ordinal  Kruskal-Wallis  Wilcoxon  Jonckheere-Terpstra

TABLE A.5. Automatic choice of tests sensitive to ordinality.
MIM>CritLev
Critical level: 0.050
MIM>CritLev .001
Critical level: 0.001

If a new value is not given, the current value is written out.
" A.8.2 The EHProcedure ,
The EHprocedure is initialised using the command Ini tSearch. This command sets up internal lists for storing information on models that are fitted. ~l()dels that are reject(;d according to the criterion arc stored in one list and models that an accepted (i.e., not rejected) are stored in allot her list.
I I
,
I
,
Prior to initialisation, the model class should be set using the command EHMode, and if extensive output is to be written during the search process, the command EHReport shoulo be used. In addition, the criterion for deciding the consistency of models with the data can be adjusted using the cOlnmar.d Cri tLevel.
,,
Search is started using the StartSearch command.
I I,
•
I
e ,r ,•
,
•
J.
,.
,
,•
,,
, ,
I ,
,,
,,, ,
l I
The command ShoIN S shows the current status of the procedure, Le., whether it has been initialised, and if it has, the current lists of accepted, rejected, minimal undetermined, and maximal undetermined models. The command Clear S clears the EHprocedure.
l
,
o
)1
We now give a more detailed description of the individual commands mentioned above.
,
v •
,I
,I ,
4 y
Choosing the Model Class ,
The EHMode command is used to specify the model class used by the EHprocedure. This can be either heterogeneous or homogeneous graphical models. Default is heterogeneous. The command has syntax
•
Jl n
e •
EHMode
d where letter can be X (heterogeneous) or H (homogeneous).
,
•
Setting the Reporting Level ,
•
The EHReport command controls the output from the EHprocedure. There are three levels of reporting detail: all, medium, and least. At the all level (option A), all four model lists (minimal accepted, maximal rejected, minimal undetermined, and maximal undetermined) are printed out at each , ,
276
Appendix A. The MIM Command Language
step, together with the results of fitting the individual models. At the medium level (option M), lists of minimal and maximal undetermined models are printed out at each step. The default (corresponding to option L) is to print out only the selected models. The syntax of EHReport is .

•
•
EHReport
where letter is A, M, or L.
Initializing Search

The InitSearch command has the following syntax:

InitSearch <min_model> - <max_model>

The search is restricted to models that contain min_model as a submodel and that are contained in max_model. For example,

InitSearch A,B/AX,BY/AX,BY - AB/ABX,ABY/ABX,ABY

Either or both of min_model and max_model can be omitted: by default, the main effects model and the model corresponding to the complete graph are used. The search algorithm is based on tests against max_model using the χ²-test on the deviance differences. The critical value is controlled by the command CritLevel. Note that the fitted models and min_model and max_model must be in the appropriate model class.

Models fitted in the usual way (e.g., using Fit or Stepwise) after initialisation are added to the lists of accepted or rejected models as appropriate (provided they are in the appropriate model class).
Starting the Search

The StartSearch command starts the EH-procedure after initialisation by InitSearch. The syntax is:

StartSearch <MaxNoModels> <Direction>

where MaxNoModels is a positive integer that controls the maximum number of models to be fitted, and Direction is either D (downwards-only) or U (upwards-only). The default value of MaxNoModels is unlimited, and the default direction is bidirectional. For example,

StartSearch 128
,"C. . ;: "
,,
.,
'
A.S.
Model Selection
277
~~
'
This initiates the model search (bidirectional version), specifying that at most 128 models are to be fitted. Note that the upwards and downwardsonly versions are of primarily theoretical interest: the bidirectional version is much faster. •
A.8.3 Selection Using Information Criteria

The Select command selects a model minimizing either Akaike's Information Criterion (AIC) or the Bayesian Information Criterion (BIC). The operation can be very time-consuming for large classes of models. The syntax is

Select <letters>

where letters can be U, D, A, V, S, and M. The current model must be graphical. The default operation is to search among all graphical submodels of the current model, identify the model with the least BIC, and change to that model. No output is given. If the U option is specified, then the search is upwards, that is, among all graphical models including the current model. If the current model is homogeneous, then the search will be among homogeneous graphical models; otherwise it will be among heterogeneous graphical models. If the D option is specified, then the search will be restricted to decomposable models.

If the A option is specified, then the AIC criterion is used instead of BIC. The V and S options control the amount of output written, V being the most verbose, and S giving some output. The M option implements Monte Carlo search, i.e., search among a number of randomly chosen models in the search space. The number is specified using the MaxSim command.

The command is subject to the Fix command, in that edges fixed are not eligible for addition or removal. That is, all such edges that are present in the current model are present in all models searched, and all such edges absent from the current model are also absent from all models searched. An example:

Model //vwxyz; Select

and

Model //v,w,x,y,z; Select u

have the same effect.
A.9 The Box-Cox Transformation

This command performs the power transformation proposed by Box and Cox (1964), i.e.,

f_λ(x) = (x^λ - 1)/λ   if λ ≠ 0,
f_λ(x) = ln(x)         if λ = 0.

Note that the transformation requires x to be positive. The syntax of the BoxCox command is:

BoxCox var lowlim uplim no

where var is a variate in the data, lowlim and uplim are the lower and upper λ values, and no is the number of intervals between the λ values. For example,

BoxCox X -2 2 4

calculates the transformation for λ = -2, -1, 0, 1, 2. For each value, the following are calculated and displayed: minus twice the profile log-likelihood of the full model, minus twice the profile log-likelihood of the current model, and the deviance difference.
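The transformation and the λ grid are straightforward to sketch in Python (illustrative, not MIM code; boxcox_grid is a hypothetical helper mirroring the lowlim/uplim/no arguments):

```python
import math

# The Box-Cox power transformation as defined above: (x^lam - 1)/lam for
# lam != 0, ln(x) for lam == 0; defined only for x > 0.
def boxcox(x, lam):
    if x <= 0:
        raise ValueError("Box-Cox requires x > 0")
    return math.log(x) if lam == 0 else (x ** lam - 1.0) / lam

# Evaluate the transformation on an evenly spaced lambda grid, mirroring
# "BoxCox var lowlim uplim no": no intervals between lowlim and uplim.
def boxcox_grid(xs, lowlim, uplim, no):
    lams = [lowlim + i * (uplim - lowlim) / no for i in range(no + 1)]
    return {lam: [boxcox(x, lam) for x in xs] for lam in lams}
```

Note that (x^λ - 1)/λ tends to ln(x) as λ tends to 0, so the two branches join continuously.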
A.10 Residuals

The Residuals command calculates and saves residuals. Suppose the current model contains q continuous variables and that q2 < q of these have been fixed (using the Fix command), so that q1 = q - q2 remain unfixed. The syntax of the command is

Residuals varlist

where varlist is a list of q1 variable names. The residuals, i.e.,

y1 - μ̂(Y1 | i, y2),

are calculated and saved in the variables. The variables need not be declared as variates in advance. If the minus sign is specified, the deletion residuals are calculated; otherwise, the (ordinary) residuals are calculated. Note that the deletion residuals require that the model be refitted for each observation: this may require substantial computation and is tolerable only in connection with decomposable models and datasets of moderate size. An example using the lipids data from Section 4.1.10 is as follows:
MIM>fix UVW
Fixed variables: UVW
MIM>show v
Var Label          Type  Levels  In Data  In Model  Fixed  Block
A   Treatment gp   disc  3       X        X
U   pre VLDL       cont          X        X         X
V   pre LDL        cont          X        X         X
W   pre HDL        cont          X        X         X
X   post VLDL      cont          X        X
Y   post LDL       cont          X        X
Z   post HDL       cont          X        X
MIM>resid LMN
MIM>show v
Var Label          Type  Levels  In Data  In Model  Fixed  Block
A   Treatment gp   disc  3       X        X
U   pre VLDL       cont          X        X         X
V   pre LDL        cont          X        X         X
W   pre HDL        cont          X        X         X
X   post VLDL      cont          X        X
Y   post LDL       cont          X        X
Z   post HDL       cont          X        X
L   X residual     cont          X
M   Y residual     cont          X
N   Z residual     cont          X

Notice how labels are created indicating the correspondence with the response variables.
The Mahalanobis command calculates and saves Mahalanobis distances, together with the corresponding χ² quantiles. If some of the continuous variables in the model have been fixed (using the command Fix), they are treated as covariates in the computation of the distances, e.g., for a case written as (i, y1, y2), with y1 the covariates and y2 the responses, the distance is calculated as

(y2 - μ̂(Y2 | i, y1))' Σ̂(Y2 | i, y1)⁻¹ (y2 - μ̂(Y2 | i, y1)),

where μ̂(Y2 | i, y1) is the conditional mean of Y2 given I = i and Y1 = y1, and Σ̂(Y2 | i, y1)⁻¹ is the inverse of the conditional covariance of Y2 given I = i and Y1 = y1. The command has syntax

Mahalanobis var1 var2
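For a bivariate response, the quadratic form defined above can be computed by hand; the Python sketch below (illustrative, not MIM code) takes the conditional mean and covariance as given and writes out the 2×2 inverse explicitly:

```python
# Sketch of the squared Mahalanobis distance defined above for a
# bivariate response: (y2 - mu)' Sigma^{-1} (y2 - mu), with the
# conditional mean mu and conditional covariance sigma supplied.
def mahalanobis2(y, mu, sigma):
    """y, mu: length-2 sequences; sigma: 2x2 conditional covariance."""
    d0, d1 = y[0] - mu[0], y[1] - mu[1]
    det = sigma[0][0] * sigma[1][1] - sigma[0][1] * sigma[1][0]
    # explicit inverse of a 2x2 matrix
    inv = [[sigma[1][1] / det, -sigma[0][1] / det],
           [-sigma[1][0] / det, sigma[0][0] / det]]
    return (d0 * (inv[0][0] * d0 + inv[0][1] * d1)
            + d1 * (inv[1][0] * d0 + inv[1][1] * d1))
```

Under the model, these squared distances are compared with χ² quantiles on as many degrees of freedom as there are responses, which is the pairing the command saves.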
A.11 Discriminant Analysis

The Classify command is used in connection with discriminant analysis. The syntax is

Classify G C

where G is a factor in the model. The command calculates a new factor C with the same number of levels as G. C contains the predicted classification using the maximum likelihood discriminant analysis method; C need not be declared in advance as a factor. Each observation is assigned to the level g with the largest f̂(g, j, y). The density f is estimated using the current model and, by default, all the available observations. By specifying the minus (-), the leave-one-out method is used, i.e., the density for each observation is estimated using all available observations except the one in question. This option is computationally intensive.
C is computed for all unrestricted observations for which the measurement variables (j and y) are not missing. For example,

MIM>Model G/GX,GY,GZ/XY,XZ
MIM>Classify GQ
MIM>Classify GR-
MIM>Model GQR
MIM>Print s
calculates and writes out information showing both the apparent and the leave-one-out error rates. The values of the log densities can be stored, if so desired, in the variates V1 to Vk, where k is the number of levels of G. This makes it possible, for example, to compute discriminant functions using arbitrary prior probabilities (sample proportions are implicitly used in the above method).
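The assignment rule itself is simple: each observation goes to the level with the largest estimated density. The Python sketch below illustrates the idea with univariate normal densities per group (MIM estimates the densities from the current model; the group parameters here are placeholders):

```python
import math

# Sketch of maximum likelihood discriminant assignment: assign an
# observation y to the level g whose estimated density f(g, y) is
# largest, illustrated with a univariate normal density per group.
def normal_logpdf(y, mu, var):
    return -0.5 * (math.log(2 * math.pi * var) + (y - mu) ** 2 / var)

def classify(y, groups):
    """groups: dict level -> (mu, var); returns the level with the
    largest (log) density at y."""
    return max(groups, key=lambda g: normal_logpdf(y, *groups[g]))
```

Working with log densities also makes it easy to add log prior probabilities, which is the generalization the stored V1, ..., Vk variates permit.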
A.12 Utilities

In this section, we describe diverse general-purpose utilities.
A.12.1 File Input

Input to MIM is normally from the keyboard. However, commands and data can be read from a file by using the Input command. The syntax is

Input filename

where filename is the usual filename (possibly including a path). For example,

MIM>input \data\iris
Subsequent input to MIM is from the file until the end of the file is reached, at which point it reverts to the keyboard. All commands can be used in the file. Normally, the command lines are not echoed to the screen, but the command Echo can change this. The syntax is

Echo <+/->

where + turns echoing on, - turns it off, and blank shows the current mode. The use of comments is described in Section A.1.
Two commands, Suspend and Revert, are used in connection with file input, mainly for constructing interactive examples for teaching purposes. Suspend can be used on input files only, where it has the effect of allowing the user to enter commands from the keyboard. When the command Revert is entered, input is again taken from the input file, starting at the line following the Suspend command. Input files can be nested, with up to nine levels.
A.12.2 The Workspace
The Save command saves the whole workspace on a file. The syntax is

Save filename
The whole of the workspace is saved on the file, with two minor exceptions: model search results and information about input files are not saved. This is a convenient way of storing data, labels, fitted models, the current graph, and other settings in one file.
The Retrieve command retrieves a workspace from a file created using Save. The syntax is

Retrieve filename
Workspace files have a special format that only MIM can read. It is inadvisable to use these files for long-term storage of data, since the format is in general version-specific. That is, the format is not backwards compatible.

The Show command displays the current model, the variable declarations, or information about the workspace. The syntax is:
Show letters
where letters can be:
P (or blank): properties of the current model (see Section A.3.4).
V: the variable declarations, labels, etc. (see the example in Section A.10).
W: the current state of the workspace.
D: comment lines (see Section A.1).
S: the current status of the model search.

The Clear command clears the contents of the workspace or the current results of the model search procedure. The syntax is:

Clear letters

where letters can be:

A (or blank): clears the whole workspace and sets defaults.
S: clears the model search procedure.

A.12.3 Printing Information

This subsection describes the use of the Print command, which is used for printing information about the data or the fitted model. The syntax is Print letters, where letters can be blank or can be one or more letters from the following list: S, T, U, V, F, G, H, I, M, Z. These have the following effect:

M (or blank): prints the current model formula.
The letters F, G, H, and I cause information about the fitted model to be printed. More precisely:

F: prints the fitted count, variate means, and covariance matrix for each cell, i.e., {m_i, μ_i, Σ_i} are printed for each i.
G: prints the corresponding canonical parameters, i.e., {α_i, β_i, Ω_i} for each cell i.
H: prints the fitted count, means, and correlation matrix for each cell.
I: prints the estimated discrete and linear canonical parameters and the matrix of estimated partial correlations for each cell. The relation between the matrix of partial correlations, V_i = {v_i^{γη}}_{γ,η∈Γ}, say, and the inverse covariance matrix Ω_i = {ω_i^{γη}}_{γ,η∈Γ} is almost the same as the relation between the correlation and covariance matrix: for γ ≠ η,

    v_i^{γη} = −ω_i^{γη} / {ω_i^{γγ} ω_i^{ηη}}^{1/2}.
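This relation is easy to check numerically. A small sketch in Python with NumPy (the covariance matrix below is invented for illustration):

```python
import numpy as np

def partial_correlations(sigma):
    """Compute the matrix of partial correlations from a covariance matrix,
    via the inverse covariance (precision) matrix omega."""
    omega = np.linalg.inv(np.asarray(sigma, dtype=float))
    d = np.sqrt(np.diag(omega))
    v = -omega / np.outer(d, d)   # v = -omega^{gh} / (omega^{gg} omega^{hh})^{1/2}
    np.fill_diagonal(v, 1.0)      # set the diagonal to one by convention
    return v

sigma = [[4.0, 1.2, 0.8],
         [1.2, 3.0, 0.5],
         [0.8, 0.5, 2.0]]
v = partial_correlations(sigma)
```

For a diagonal covariance matrix all off-diagonal partial correlations are zero, as expected.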
The letters S, T, U, and V cause corresponding sample quantities to be printed. More precisely:

S: prints the observed count, variate means, and covariance matrix for each cell.
T: prints the corresponding discrete, linear, and quadratic canonical parameters for each cell.
U: prints the count, means, and correlation matrix for each cell.
V: prints the discrete and linear parameters and the matrix of partial correlations for each cell.

Note that the variable set for which the empirical statistics are calculated is taken from the current model (provided the model variables are present in the data). This is illustrated in the following fragment:
MIM>fact a2; cont x
MIM>sread ax
DATA>12 4.5 2.1 24 6.5 3.2 !
Reading completed.
MIM>print s
Calculating marginal statistics ...
Empirical counts, means and covariances.
A: 1
    X        2.100
    Means    4.500
    Count   12.000
A: 2
    X        3.200
    Means    6.500
    Count   24.000
Appendix A. The MIM Command Language
MIM>model //x; print s
Calculating marginal statistics ...
Empirical counts, means and covariances.
    X        3.722
    Means    5.833
    Count   36.000
Similarly, the G, H, and I options give the corresponding canonical parameters, correlations, partial correlations, etc.
The letters D and E cause the raw (case-by-case) data to be printed out when such data is available.
D: prints the raw data. Missing values are printed as *'s. Only data for the unrestricted observations are printed.
E: is the same as D, except that if the results of EMFit are available, then instead of the missing values, their imputed values are printed out. See EMFit. Note that data values are stored as an ordered pair (missing flag, value). PRINT D prints out values for which the missing flag is false, whereas PRINT E prints all values.

Finally, the letters X, Y, and Z cause information about the current and the saturated (full) model to be printed out. More precisely,
X: prints, when this is available, minus twice the log likelihood of the saturated model, or of the maximum model if such has been defined,
Y: prints, when this is available, minus twice the log likelihood of the current model, and
Z: prints, when this is available, the deviance and degrees of freedom of the current model.

The format used in Print and other commands can be controlled by the PrintFormat command. The syntax of this command is
PrintFormat fw d
where fw and d are integers specifying the field width and number of decimals, respectively. For example:

MIM>print u
Empirical counts, means and correlations.
W        1.000
X        0.553   1.000
Y        0.547   0.610   1.000
Z        0.409   0.485   0.711   1.000
Means   38.955  50.591  50.602  46.682
Count   88.000
             W       X       Y       Z
A.12.4 Displaying Parameter Estimates
The Display command prints out parameter estimates, after the current model has been fitted using Fit, CGFit, or EMFit. The syntax is:
Display Rset,Cset options

For example,

MIM>Display AB,XY
displays parameter estimates for the conditional distribution of A and B, given X and Y. Similarly,
MIM>Display AB,X
would display parameter estimates for the conditional distribution of A and B, given X. Note that the variable sets Rset, Cset, and options are each written as generators, i.e., without commas. Both Cset and options may be omitted. If Cset is omitted, the marginal distribution of Rset is shown.
When Rset consists of continuous variables only, the parameters printed out are determined by options. Per default, if options is blank, the moments parameters are printed out. If options=C, then the canonical parameters are shown. If options=S, then counts, means, and correlations are displayed, and if options=SC, then partial correlations are shown. For example,
MIM>model //VWX,XYZ; fit
Deviance:   0.8957 DF: 4
MIM>Display VWYZ,X
Fitted conditional means and covariances.
V     6.579   0.900   211.927
W    12.418   0.754    50.020  107.368
Y     3.574   0.993     0.000    0.000  107.795
Z    12.323   1.080     0.000    0.000   34.107  164.297
              X              V        W        Y        Z
The output shows the conditional means and conditional covariance of V, W, Y, and Z given X. The last four columns show the conditional covariance matrix, while the first two show the conditional means (constant term and coefficient of X). For example, we see that

E(V | X = x) = 6.579 + 0.900x.
When Rset consists of discrete variables only, and Cset contains continuous variables, then the linear predictor for each combination of levels for the variables in Rset is shown. Here options has no effect. It should be noted that only the linear predictors are shown.
When Rset and Cset both contain discrete variables only, then either the conditional probabilities of Rset given Cset, or the corresponding linear predictors, are shown, depending on whether or not option C is specified.
When Rset consists of both discrete and continuous variables, the parameters for the discrete responses given Cset are shown first, followed by the parameters for the continuous responses given Cset and the discrete responses.
A.12.5 Displaying Summary Statistics

The Describe command displays univariate statistics for a variable. The syntax is
Describe letter
where letter is a variable. The raw data must be available. For a discrete variable, the marginal counts are displayed, and for a continuous variable, the maximum, minimum, and the 95%, 75%, 50%, 25%, and 5% fractiles are displayed.

The DisplayData command displays sample statistics; the syntax and operation are the same as for Display.
A.12.6 Setting the Maximum Model
The deviance of a model is normally defined as 2(ℓ_f − ℓ_m), where ℓ_m is the log likelihood of the current model and ℓ_f is the log likelihood of the saturated (unrestricted) model. However, if there are insufficient data to ensure existence of the maximum likelihood estimate under the saturated model, ℓ_f and hence the deviance are undefined.

The command MaxModel sets the current model to be the maximum model, so that the deviance for subsequent models is defined as 2(ℓ_x − ℓ_m), where ℓ_x is the log likelihood of the maximum model. Similarly, the degrees of freedom are calculated with respect to the maximum model. For example, in the two-way ANOVA setup with one observation per cell, the models AB/ABX/ABX and AB/ABX/X do not have MLEs. The maximum model can be set to AB/AX,BX/X. This makes the deviance of submodels, for example, AB/AX/X, well-defined.
This is illustrated in the following program fragment:
MIM>fact A2B2; cont X; read ABX
DATA>1 1 4.3 1 2 5.6 2 1 3.7 2 2 3.6 !
Reading completed.
MIM>mod AB/AX,BX/X; fit
Calculating marginal statistics ...
Warning: the deviance is undefined, probably because there are
insufficient data to fit the full model. Set a maximum model
using the MaxModel command.
Likelihood:   14.0433 DF: 4
MIM>maxmodel; fit
MIM>fit
Deviance:   0.0000 DF: 0
MIM>del AX; fit
Deviance:   5.9707 DF: 1
MIM>fix AB; ftest
Fixed variables: AB
Test of H0: AB/BX/X
against H: AB/AX,BX/X
F:   3.4490 DF: 1, 1 P: 0.3145
Notice that, initially, the deviance of AB/AX,BX/X was undefined since MLEs for the full model do not exist. In this case, Fit writes out minus twice the log likelihood instead of the deviance.
A.12.7 Fixing Variables
The Fix command is used in various contexts to mark variables as fixed. The syntax is
Fix vlist
where vlist is a list of variables. This has an effect on stepwise model selection, CG-estimation, calculating residuals and Mahalanobis distances, and computing F-tests. To remove all fixing, use Fix without operands.
A.12.8 Macros
A simple macro facility is supported. This is useful when sequences of commands are repeated often, for example, in simulation. The facility performs simple text substitution into the command line before parsing. Macros are called as follows:
@filename(parameterlist)
where filename is the name of the file containing the commands and the item parameterlist is a list of parameters, separated by commas. There may be up to nine parameters. A parameter is a sequence of characters, not including comma "," or right parenthesis ")". The file should contain ordinary commands and data, but may also include the symbols &1, ..., &9. When the macro is executed, these are replaced by the corresponding parameters. If no corresponding parameter has been given, a blank is substituted. For example, if there is a file tt in the current directory, containing the line
model &1&2,&1&3,&2&3; fit; test
then it may be called using
@tt(A,B,C)
After substitution, the line becomes

model AB,AC,BC; fit; test
which is then processed in the usual way. While developing macros, it can be useful to switch echoing on using the Echo command. This echoes the command lines after parameter substitution, before they are parsed. Note also that to pass model formulae as parameters, it is necessary to use "+" signs to separate the generators, instead of commas. Macros can be nested, with up to nine levels.
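The substitution step amounts to plain text replacement of &1, ..., &9 before parsing. A sketch of the mechanism in Python (expand_macro is a hypothetical name, not a MIM command):

```python
def expand_macro(template, params):
    """Replace the placeholders &1 ... &9 by the given parameters.
    Missing parameters are replaced by blanks, as described above."""
    out = template
    for k in range(9, 1 - 1, -1):      # highest placeholder first
        value = params[k - 1] if k <= len(params) else ""
        out = out.replace(f"&{k}", value)
    return out

line = expand_macro("model &1&2,&1&3,&2&3; fit; test", ["A", "B", "C"])
```

With parameters A, B, C this yields the line "model AB,AC,BC; fit; test" shown above.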
Appendix B. Implementation Specifics of MIM

This appendix describes various implementation-specific aspects of MIM 3.1, in particular the user interface. For a variety of reasons, it makes sense to separate the command-based numerical engine from the interface. For example, the latter is highly dependent on the operating system, which makes it subject to rapid change. So by the time you read these lines, the user interface may well have changed substantially.
B.1 Calling MIM
MIM is invoked by clicking the MIM icon, and responds by displaying the main window, which looks something like Figure B.1. Several features of the main window are worthy of note. At the top is the main menu, for selecting commands and other operations. Sometimes items may be grayed out, if the features are not currently available. For example, if no variables have been defined, a model cannot be specified.
Below the main menu is the work area where output from MIM is shown, and the prompt MIM> where the user is prompted for input. At the bottom of the window there is a status bar. This is used for displaying status messages about calculations that may take some time. At the bottom right there is a status indicator (coloured circle). This functions as an "interrupt" button; click it to abort any lengthy calculations. To the left of this is a small rectangle containing a block mode indicator: a plus sign is shown here in block mode. When appropriate, a scroll bar appears, with which the work area may be scrolled up and down.
FIGURE B.1. The MIM interface
The key feature of the interface is that it is both menu-driven and command-driven. Many menu items supply easy access to the command language, in the form of pop-up dialogues eliciting options, variables, or other information from the user, and then calling the appropriate command. With experience, some may prefer to enter commands directly at the prompt. Most of the pop-up dialogues are self-explanatory and are not described in detail here; see the online help for further information.

When invoked, MIM searches for a configuration file called mim31.msf in the working directory. This contains information about the interface settings (screen fonts, colours, and the like). If the file is not found or has an invalid format, default values are used. If the interface settings are changed in a session, on exiting the user will be asked whether the new settings should be saved. If the answer is affirmative, the settings will be stored in the mim31.msf file.
It is possible to specify a workspace file (see Section A.12.2) when invoking MIM. The normal way to do this is to use a certain extension for workspace files, for example .mim, and use View|Options|File Types in Windows Explorer to associate this extension with MIM. Then clicking such a file in Windows Explorer will cause MIM to be opened and the workspace retrieved.
B.2 The Main Menu

Here a short overview of the menu is given.
• File

 New: this clears the workspace (corresponding to the Clear command).
 Retrieve; Save; SaveAs: these call standard Windows dialogues to retrieve and save workspace files (see Appendix A.12.2).
 Editor: see Section B.4 below.
 Input: calls a standard Windows dialogue to identify a file, and then calls the Input command for that file (see Section A.12.1). Used to input text files with commands and (often) data.
 Printer Setup: calls a standard Windows printer setup dialogue.
 Print: prints the work area.
 Save Output: calls a standard Windows dialogue to identify a file, and then saves the work area to that file.
 Clear Output: clears the work area.
 Exit: closes MIM.

• Data
 Declare Variate: elicits user input to declare a variate.
 Declare Factor: elicits user input to declare a factor.
 Show Variables: calls the Show W command.
 Enter Data: see below.
 Edit Data: see below.
 Access Database: see below.
 List Data: calls the Print D command.
 Univariate Statistics: elicits user input to choose a variable, then calls the Describe command.
 Show Summary Statistics: calls the Print command to show empirical statistics.
 Erase Variables: elicits user input to choose some variables, then calls the Erase command.
 Calculate Variable: elicits user input for a calculate expression, then calls the Calculate command.
 Restrict Observations: elicits user input for a restrict expression, then calls the Restrict command.
• Model
 Saturated Model: sets the current model to the saturated model.
 Homogeneous Saturated Model: sets the current model to the homogeneous saturated model.
 Main Effects Model: sets the current model to the main effects model.
 Model Formula: elicits user input for a model formula, then calls the Model command.
 Show Properties: calls the Show P command.
 Delete Edge: elicits user input to choose one or more edges, then calls the DeleteEdge command.
 Add Edge: elicits user input to choose one or more edges, then calls the AddEdge command.
• Fit
 Fit: fits the current model to the data, using the Fit command.
 Show Estimates: elicits user input to choose the parameters to be shown, then calls the Display command.
 Set Maximum Model: sets the maximum model as the current model.
 Residuals: elicits user input to calculate residuals (using the Residuals command).
 Mahalanobis: elicits user input to calculate Mahalanobis distances (using the Mahalanobis command).
 EMFit: calls the EMFit command.
 CGFit: calls the CGFit command.
• Test
 Set Base: calls the Base command.
 Test: calls the Test command.
 Delete Edge: elicits user input to perform an edge deletion test using the TestDelete command.

• Select

 Stepwise: elicits user input to specify a stepwise selection using the Stepwise command.
 EH-procedure|Initialize: initializes the EH-procedure.
 EH-procedure|Start: starts the EH-procedure.
 EH-procedure|Show Status: shows the current status of the EH-procedure.
 EH-procedure|Clear: clears the EH-procedure.
 Select|Global: elicits user input to specify model selection by minimum AIC or BIC.
• Graphics
 Independence Graph: shows the independence graph of the current model.
 Scatterplot: elicits user input to specify a simple scatter plot.
 Histogram: elicits user input to specify a simple histogram.
These are then used to update the canonical parameters: for each i ∈ I and γ, η ∈ d,

    α_i := α_i + ᾰ_{i_a} − α̂_{i_a},    (D.8)
    β_i^γ := β_i^γ + β̆_{i_a}^γ − β̂_{i_a}^γ,    (D.9)
    ω_i^{γη} := ω_i^{γη} + ω̆_{i_a}^{γη} − ω̂_{i_a}^{γη}.    (D.10)
The second type of update, corresponding to a set pair (a, d) in list H, is very similar, the quantities in equations (D.7) replacing those from (D.6). That is to say, we calculate the sample statistics from the right-hand sides of (D.2), (D.5), and (D.7), and also the corresponding fitted quantities from the left-hand sides, namely, {m_{i_a}, E y_{i_a}^d, E SS_{i_a}^d}_{i_a ∈ I_a}. As before, we transform each to canonical form, to, say,

    {ᾰ_{i_a}, β̆_{i_a}^d, Ω̆_{i_a}^d}_{i_a ∈ I_a}

and

    {α̂_{i_a}, β̂_{i_a}^d, Ω̂_{i_a}^d}_{i_a ∈ I_a},

respectively. These are then used to update the canonical parameters: for each i ∈ I and γ, η ∈ d,

    α_i := α_i + ᾰ_{i_a} − α̂_{i_a},    (D.11)
    β_i^γ := β_i^γ + β̆_{i_a}^γ − β̂_{i_a}^γ,    (D.12)
    ω_i^{γη} := ω_i^{γη} + ω̆_{i_a}^{γη} − ω̂_{i_a}^{γη}.    (D.13)
The third type of update, corresponding to a set a in list D, consists simply of updating the discrete canonical parameters: for each i ∈ I,

    α_i := α_i + ln(n_{i_a}) − ln(m_{i_a}).    (D.14)
Appendix D. On the Estimation Algorithms
We do not specify any particular order of steps in each cycle, just that an update is performed for every element of the lists Q, H, and D. The algorithm continues until a convergence criterion is satisfied. This is based on the changes in the moments parameters in an updating step, written δm_i, δμ_i^γ, and δσ_i^{γη}. More precisely, the criterion states that
    mdiff = max_{i ∈ I, γ, η ∈ Γ} { |δm_i|, |δμ_i^γ|, |δσ_i^{γη}| }

should be less than some predetermined (small) value for all steps in a cycle.
When q = 0 this algorithm reduces to the IPS algorithm (see Section 2.2.1), and when p = 0 to Speed and Kiiveri's algorithm (see Section 3.1.2). For decomposable models, the algorithm will converge after the first iteration provided the update steps are performed in a particular sequence (see Section 4.4). Note that the algorithm requires much conversion between the natural (moments) form and the canonical form, for both the model parameters and the updating quantities. For some models these computations can be reduced substantially, as we now show.
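The discrete IPS step referred to here scales the fitted cell counts so that each margin in turn matches its observed value. A minimal sketch in Python for a two-way table under the independence model (an illustration only, not MIM's implementation):

```python
def ips_independence(table, iterations=10):
    """Iterative proportional scaling for a two-way table under the model
    with generators {rows}, {cols}: match row margins, then column margins."""
    R, C = len(table), len(table[0])
    total = sum(sum(row) for row in table)
    m = [[total / (R * C)] * C for _ in range(R)]   # start from a uniform table
    for _ in range(iterations):
        for i in range(R):                          # scale to match row margins
            scale = sum(table[i]) / sum(m[i])
            m[i] = [x * scale for x in m[i]]
        for j in range(C):                          # scale to match column margins
            s = sum(m[i][j] for i in range(R))
            scale = sum(table[i][j] for i in range(R)) / s
            for i in range(R):
                m[i][j] *= scale
    return m

m = ips_independence([[10.0, 20.0], [30.0, 40.0]])
```

For this decomposable model the fitted values n_{i.} n_{.j} / n are reached after a single cycle, in line with the remark about decomposable models above.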
D.1.4 The Δ-Collapsible Variant

This applies to models that are collapsible onto the set of discrete variables Δ. For such models, the logarithms of the cell probabilities {p_i}_{i∈I} obey the same factorial constraints as the discrete canonical parameters. In other words, the cell counts follow the loglinear model that is specified by the discrete generators. They can therefore be fitted using the standard IPS algorithm, and the fitted values held constant thereafter. The advantage of this is that in the updates corresponding to the lists Q and H, the discrete canonical parameters need not be updated. By using the mixed parametrisation {p_i, β_i, Ω_i}_{i∈I} rather than the full canonical parametrisation {α_i, β_i, Ω_i}_{i∈I}, we avoid having to recalculate p_i and α_i whenever we transform from moments to canonical form and vice versa. This applies both for the model parameters and for the updating quantities. So, for example, an update corresponding to a set-pair (a, d) in list Q would involve calculating the sample statistics and the corresponding fitted quantities, deriving from these the linear and quadratic canonical parameters, and then using these in updates (D.9) and (D.10).
We can summarize this variant as follows:

1. Fit the cell counts by use of the standard IPS algorithm, using the discrete generators.

2. Repeat until convergence:
   a. Perform the updates (D.9) and (D.10) for the set-pairs in list Q.
   b. Perform the updates (D.12) and (D.13) for the set-pairs in list H.
D.1.5 The Mean Linear Variant

Further computational savings can be made for models that are both collapsible onto Δ and mean linear (see Section 4.3). For such models, the cell means {μ_i}_{i∈I} obey the same factorial constraints as the linear canonical parameters {β_i}_{i∈I}. Here we can use the mixed parametrisation {p_i, μ_i, Ω_i}_{i∈I} to allow replacement of the updating step (D.9) by

    μ_i^γ := μ_i^γ + ȳ_{i_a}^γ − μ̂_{i_a}^γ    (D.15)

for all i ∈ I and γ ∈ d. This process, in which the fitted means are incremented by the difference between the observed and fitted marginal means, is the basis of the sweep algorithm (Wilkinson, 1970) for ANOVA models. For balanced designs it converges after one cycle. To fit the covariance matrices, we utilize (D.4) and calculate
    S_{i_a}^d = SS_{i_a}^d / n_{i_a}    and    Ŝ_{i_a}^d = E SS_{i_a}^d / m_{i_a},

and their inverses, say,

    W_{i_a}^d = (S_{i_a}^d)^{-1}    and    Ŵ_{i_a}^d = (Ŝ_{i_a}^d)^{-1},

and perform the update

    ω_i^{γη} := ω_i^{γη} + w_{i_a}^{γη} − ŵ_{i_a}^{γη}    (D.16)
for all i ∈ I and γ, η ∈ d. We can summarize this variant as follows:

1. Fit the cell counts by use of the standard IPS algorithm, using the discrete generators.

2. Fit the cell means by use of the sweep algorithm (D.15), using the linear generators. For balanced designs this converges after one cycle.

3. Fit the covariance matrices by means of update (D.16), using the quadratic generators. Repeat this until convergence.
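The incremental update for the means can be sketched for a balanced two-way layout with additive means (Python; a toy illustration of the sweep idea, not MIM's code):

```python
def sweep_means(y, cycles=1):
    """Fit additive cell means mu[i][j] for a balanced two-way layout by
    repeatedly adding the difference between the observed and fitted
    marginal means, first over rows, then over columns."""
    R, C = len(y), len(y[0])
    mu = [[0.0] * C for _ in range(R)]
    for _ in range(cycles):
        for i in range(R):                       # row-margin update
            diff = sum(y[i]) / C - sum(mu[i]) / C
            mu[i] = [m + diff for m in mu[i]]
        for j in range(C):                       # column-margin update
            diff = (sum(y[i][j] for i in range(R))
                    - sum(mu[i][j] for i in range(R))) / R
            for i in range(R):
                mu[i][j] += diff
    return mu

mu = sweep_means([[1.0, 2.0], [3.0, 4.0]])
```

For a balanced design, a single cycle already matches both sets of marginal means, as stated in step 2 above.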
D.1.6 The Q-Equivalent Variant

Still further computational savings are available for models that, in addition to being mean linear and collapsible onto Δ, constrain the cell covariances in the same way they constrain the cell precision matrices. A condition for the stated property is that the quadratic generators induce a partition of Γ, i.e., for all pairs of quadratic generators (q_k, q_l) it holds that q_k ∩ q_l ⊆ Δ. For such models we have (D.17), and so we can summarize the variant as follows:
1. Fit the cell counts by use of the standard IPS algorithm, using the discrete generators.

2. Fit the cell means by use of the sweep algorithm, using the linear generators. For balanced designs, this converges after one cycle.

3. Fit the covariance matrices by applying (D.17), using the quadratic generators (no iteration is necessary).
D.1.7 The Step-Halving Variant

Finally, we describe a variant which is not designed to improve computational efficiency, but rather to ensure convergence in all cases. For pure models, the algorithm is known to converge to the maximum likelihood estimate whenever this exists (Ireland and Kullback, 1968; Speed and Kiiveri, 1986), but for mixed models this does not necessarily hold. Occasionally the algorithm does not succeed, either because it leads to covariance matrices that are not positive definite (preventing matrix inversion), or because the likelihood does not increase at each step. One recalcitrant example is given in the following fragment, due to M. Frydenberg (personal comm.):

fact i2j3; cont xy
statread ijxy
10 1 2 1.0 0.9 1
20 3 4 2.0 1.4 1
 5 5 6 1.0 0.0 1
15 7 8 1.0 2.0 7
 5 9 10 2.0 2.0 3
45 11 12 4.5 5.0 6 !
model i,j/ix,iy,jx,jy/ixy,jxy
Frydenberg and Edwards describe a modification to the general algorithm involving a step-halving constant κ. The increments to the model parameters
are multiplied by this constant, so that, for example, (D.8)-(D.10) become

    α_i := α_i + κ(ᾰ_{i_a} − α̂_{i_a}),    (D.18)
    β_i^γ := β_i^γ + κ(β̆_{i_a}^γ − β̂_{i_a}^γ),    (D.19)
    ω_i^{γη} := ω_i^{γη} + κ(ω̆_{i_a}^{γη} − ω̂_{i_a}^{γη}).    (D.20)

Prior to the updating step, κ is set to unity. The update is attempted, but two things are checked: that the likelihood increases, and that the resulting covariance matrices are positive definite. If either condition does not hold, κ is halved and the update attempted again. This modification is believed to ensure convergence to the maximum likelihood estimate whenever this exists. At any rate, no counterexamples have as yet been found.
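The step-halving safeguard can be isolated as a generic update rule: try the full step, and halve κ until the objective increases. A sketch in Python (the quadratic objective is a stand-in for the likelihood; the positive-definiteness check is omitted):

```python
def step_halving_update(theta, step, objective, max_halvings=30):
    """Apply theta := theta + kappa * step, halving kappa (starting at 1)
    until the objective increases. Returns the new theta and the kappa used."""
    base = objective(theta)
    kappa = 1.0
    for _ in range(max_halvings):
        candidate = theta + kappa * step
        if objective(candidate) > base:   # accept only an increasing step
            return candidate, kappa
        kappa /= 2.0
    return theta, 0.0                     # give up: no increasing step found

# A concave "likelihood" with maximum at 0; the full step from -1 overshoots.
f = lambda t: -t * t
theta, kappa = step_halving_update(-1.0, 8.0, f)
```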
D.2 The EM-Algorithm

We here describe the computations required to implement the EM-algorithm together with the MIPS algorithm. The incomplete likelihood (4.34) is complex and would be difficult to maximize directly; however, the power of the EM-algorithm (Dempster et al., 1977) lies in the way that estimation algorithms for the complete likelihood (4.33) can be used to maximize the incomplete likelihood. Each cycle in the algorithm consists of two steps: an E (expectation) step and an M (maximisation) step. In the E-step, expected values of the sufficient statistics given the current parameter estimates and the observed data are calculated. In the M-step, new parameter estimates are calculated on the basis of the expected sufficient statistics using the ordinary algorithms.
We now explain the calculations behind the E-step in connection with a hierarchical interaction model. Suppose the current parameter estimates are {p_i, μ_i, Σ_i}_{i∈I} and that the corresponding canonical parameters are {α_i, β_i, Ω_i}_{i∈I}. Write the cell counts, variate totals, and variate sums of squares and products as {n_i, t_i, SS_i}_{i∈I}. The minimal sufficient statistics for a model are sums of these quantities over margins corresponding to the generators of the model formula (see Section 4.1.5).
The complete observations (if there are any) will contribute to these sufficient statistics. We now consider the expected contribution of the incomplete cases, given the nonmissing values and the current parameter estimates.
Consider then an incomplete case of the form (i_1, *, y_1, *), where the p_1 non-missing discrete variables correspond to the subset a ⊆ Δ and the q_1 non-missing continuous variables correspond to the subset b ⊆ Γ. The marginal distribution of (I_1, Y_1) is given by {p̃_i, μ̃_i, Σ̃_i}_{i∈I_1} with corresponding canonical parameters, say, {α̃_i, β̃_i, Ω̃_i}_{i∈I_1}.
For each i_2, writing i = (i_1, i_2), we calculate the conditional probability

    p(i_2 | i_1, y_1),    (D.21)

and similarly

    μ_i^{2·1} = E(Y_2 | i, y_1) = (Ω_i^{22})^{-1}(β_i^2 − Ω_i^{21} y_1)    (D.22)

and

    E(Y_2 Y_2' | i, y_1) = (Ω_i^{22})^{-1} + μ_i^{2·1}(μ_i^{2·1})'.
The sufficient statistics are incremented for the case at hand as follows:

    n_i := n_i + p(i_2 | i_1, y_1),

and correspondingly for the variate totals and sums of squares, using (y_1, μ_i^{2·1}) and the conditional second moments above, where i = (i_1, i_2) for all i_2. This process is repeated for all incomplete cases. For efficiency, all cases with identical non-missing values are processed in the same step.

In the M-step, new parameter estimates are calculated on the basis of the expected sufficient statistics in the usual way: that is to say, using these expected sufficient statistics in place of the observed ones. For decomposable models, the explicit formulae of Section 4.4 are used. For non-decomposable models, rather than iterating until convergence with the MIPS algorithm, only one iteration may be performed at each M-step. This is the so-called GEM (generalized EM) algorithm, which is generally much more efficient than iterating until convergence at each step. Since likelihood functions of the form (4.34) typically are not convex, the algorithm may converge to a saddle point or to a local maximum. The algorithm can be started from a random point, so this may be used to find the global maximum.

Little and Schluchter (1985) describe application of the EM-algorithm in a context closely related to the present one. Lauritzen (1995), Geng (2000), and Didelez and Pigeot (1998) study methods for efficient calculation of the E-step.
D.3 The ME-Algorithm
This section describes the estimation algorithm used for CG-regression models, the so-called ME-algorithm introduced by Edwards and Lauritzen (1999). For a given hierarchical interaction model M, write the minimal canonical statistics as T = (T_1, ..., T_K). As described in Section 4.1.5 and Appendix D.1, the likelihood equations are constructed by equating the expectation of T under the model to its observed value, t. That is, if we denote the model parameters by θ and the model parameter space by Θ, then the maximum likelihood estimate, when it exists, is the unique solution of E_θ(T) = t such that θ ∈ Θ.
Consider now the CG-regression model M_{b|a}. Write T = (U, V), where V only involves variables in a and so is fixed under the conditional model. Then the maximum likelihood estimate under the conditional model, when it exists, is the unique solution of E_θ(U | a) = u such that θ ∈ Θ_{b|a}. Here Θ_{b|a} is the parameter space for the conditional model and E_θ(U | a) is the conditional expectation of U given the covariates a. The computation of these conditional expectations was described in Appendix D.2.
Let θ̂(u, v) denote the estimate found by the MIPS algorithm as applied to observed statistics (u, v). Then the ME-algorithm is simply described as follows: set u_0 = u and θ̂_0 = θ̂(u, v), then repeat
    u_{n+1} = u_n + u − E_{θ̂_n}(U | a);    θ̂_{n+1} = θ̂(u_{n+1}, v)
until convergence. In words, the algorithm is based on a running set of adjusted statistics u_n. These are incremented at each step by the quantity u − E_{θ̂_n}(U | a), that is, the difference between the observed statistics and their current conditional expectation. Not uncommonly the algorithm as just described diverges; however, a simple modification forces convergence. This uses the increment κ(u − E_{θ̂_n}(U | a)), where κ is a step-halving constant. That is to say, initially at each step the unmodified update is tried (corresponding to κ = 1). If the conditional likelihood does not increase in this step, κ is halved and the update is attempted again. This step-halving process is repeated until a κ is found for which the conditional likelihood increases.
As mentioned, the ME-algorithm resembles the EM-algorithm closely but maximizes the conditional rather than the marginal likelihood. In that and other senses it can be regarded as dual to the EM-algorithm. It is of wide applicability (Edwards and Lauritzen, 1999).