Tuesday, January 28, 2020
Decision Tree for Prognostic Classification
Decision Tree for Prognostic Classification of Multivariate Survival Data and Competing Risks

1. Introduction

A decision tree (DT) is one way to represent the rules underlying data, and it is among the most popular tools for exploring complex data structures. It has also become one of the most flexible, intuitive and powerful data analytic tools for identifying distinct prognostic subgroups, with similar outcomes within each subgroup but different outcomes between subgroups (i.e., prognostic grouping of patients). A DT is a hierarchical, sequential classification structure that recursively partitions the set of observations. Prognostic groups are important in assessing disease heterogeneity and in the design and stratification of future clinical trials. Because patterns of medical treatment are changing so rapidly, it is important that the results of the present analysis be applicable to contemporary patients.

Owing to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazards regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazards regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not yield faithful data descriptions when the underlying assumptions are not satisfied, and model interpretation can be problematic in the presence of high-order interactions among predictors. DT methods have evolved to relax or remove such restrictive assumptions, and in many cases DT is used to explore data structures and to derive parsimonious models.

DT is selected over traditional regression analysis for several reasons. Discovering interactions is difficult in traditional regression because interactions must be specified a priori, whereas DT detects important interactions automatically. Furthermore, unlike traditional regression analysis, DT can uncover variables that are largely operative within a specific patient subgroup but have little or no effect in other subgroups. DT also provides a superior means of prognostic classification: rather than fitting a model to the data, DT sequentially divides the patient group into subgroups based on prognostic factor values (e.g., tumor size).

The landmark work on DT in the statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984); a different approach, C4.5, was proposed by Quinlan (1993). The original DT methods were used for classification and regression with categorical and continuous response variables, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to an event, or some other incomplete (that is, censored) outcome. Therefore, several authors have developed extensions of the original DT methods to censored survival data (Banerjee & Noone, 2008). In science and technology, interest often lies in studying processes that generate events repeatedly over time. Such processes are referred to as recurrent event processes, and the data they provide are called recurrent event data, a form of multivariate survival data.
Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles or of insurance claims by policy holders. Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual may experience multiple events, further extensions of DT have been developed for such data.

In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe myocardial infarction (MI). If by the end of the study each patient was either observed to have had an MI or was alive and well, then the usual survival techniques can be applied. In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation, because death from other causes prevents the occurrence of MI: MI is the event of interest, while death from other causes is a competing risk. Patients who die of other causes cannot be treated as censored, since their observations are not incomplete. The extension of DT can also be employed for competing risks survival data. Together, these extensions allow the technique to be applied to clinical trial data to aid in developing prognostic classifications for chronic diseases.

This chapter covers DT for multivariate and competing risks survival data, as well as their application in the development of medical prognoses. The two kinds of regression model for multivariate survival times, i.e. the marginal and the frailty regression models, each have their own DT extensions, while the extension of DT for competing risks comes in two types of tree: the "single event" DT, built on a splitting function that uses one event only, and the "composite events" tree, which uses all the events jointly.

2. Decision Tree

A DT is a tree-like structure used for classification, decision theory, clustering, and prediction. It depicts rules for dividing data into groups based on regularities in the data. A DT can be used for categorical and continuous response variables: when the response is continuous, the DT is often referred to as a regression tree, and when it is categorical, it is called a classification tree; the same concepts, however, apply to both types of tree. DTs are widely used in computer science for data structures, in the medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire data set, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the subset size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure. A DT model consists of two parts: creating the tree and applying the tree to the data.
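To make the second part concrete, the sketch below represents a small fitted tree as nested splitting rules and routes a single record to a terminal node. It is a minimal illustration of "applying the tree to the data", not any particular algorithm; the variable names and cutoffs anticipate the credit example that follows, and the tree shape itself is hypothetical.

# Minimal sketch: a fitted decision tree as nested split rules, and a
# routine that routes one record to a terminal node.

tree = {
    "split": ("income", 31000),            # split variable and cutoff
    "left": {                              # income < 31000
        "split": ("age", 28),
        "left": {"label": "likely default"},
        "right": {"label": "unlikely default"},
    },
    "right": {                             # income >= 31000
        "split": ("education", {"College"}),
        "left": {"label": "unlikely default"},
        "right": {"label": "possible default"},
    },
}

def apply_tree(node, record):
    """Follow the splitting rules until a terminal node is reached."""
    while "label" not in node:
        var, cutoff = node["split"]
        if isinstance(cutoff, set):        # nominal predictor: set membership
            node = node["left"] if record[var] in cutoff else node["right"]
        else:                              # ordered predictor: threshold
            node = node["left"] if record[var] < cutoff else node["right"]
    return node["label"]

print(apply_tree(tree, {"income": 27000, "age": 25, "education": "High School"}))
# -> "likely default"

Creating such a tree from data, rather than applying it, is where the various algorithms differ.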
To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984), which helped DTs gain credibility and acceptance among statisticians. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The algorithms most widely used by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first versions of C4.5 and C5.0 were limited to categorical predictors; the most recent versions, however, are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical responses (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms take different approaches to splitting variables: CART, CRUISE, GUIDE and QUEST use a statistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node equals the number of possible categories. Another common approach, used by AID, CHAID, and TREEDISC, lets the number of branches off an internal node vary from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables.

Let us illustrate the DT method with a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model for evaluating potential candidates based on its historical customer data. The company's main concern is default of payment by a cardholder, so the model should help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields; the input variables include income, age, education, occupation, and many others, determined by quantitative or qualitative methods. A fragment of such a database is shown in Table 1.

Table 1. Partial records and fields of a database table for credit evaluation

Name     Age   Income   Education     Occupation   Default
Andrew   42    45600    College       Manager      No
Allison  26    29000    High School   Self Owned   Yes
Sabrina  58    36800    High School   Clerk        No
Andy     35    37300    College       Engineer     No
...

The model building process is illustrated by the tree structure in Figure 1. The DT algorithm first selects a variable, income, to split the data set into two subsets. This variable, and the splitting value of $31,000, are chosen by a splitting criterion of the algorithm. Many splitting criteria exist (Mingers, 1989); their basic principle is that they all attempt to divide the data into clusters such that variation within each cluster is minimized and variation between clusters is maximized. The follow-up splits are similar to the first one, and the process continues until an appropriate tree size is reached. Figure 1 shows a segment of the DT. Based on this tree model, a candidate with income of at least $31,000 and at least a college degree is unlikely to default on the payment, but a self-employed candidate whose income is less than $31,000 and whose age is less than 28 is more likely to default.

We begin with a discussion of the general structure of the most popular DT algorithm in the statistical community, the CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables, X = (X1, X2, ..., Xp).
This model has two main components: a tree T with b terminal nodes, and a parameter vector Θ = (θ1, θ2, ..., θb), which associates the parameter value θm with the mth terminal node. A tree model is thus fully specified by the pair (T, Θ). If X lies in the region corresponding to the mth terminal node, then y|X has the distribution f(y|θm), where f denotes a conditional distribution indexed by θm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively.

2.1 Splitting a tree

The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule, which uses a predictor to assign observations to either its left or right child node; the internal node is thus partitioned into two descendant nodes by the splitting rule. For a quantitative predictor, the splitting rule is based on a cutoff value c, and assigns the observations with {xi ≤ c} to the left child node and the rest to the right.

For a regression tree, the conventional algorithm models the response in each region Rm as a constant θm, so the overall tree model can be expressed as (Hastie et al., 2001)

  f(x) = Σ_{m=1}^{b} θm I(x ∈ Rm),   (1)

where Rm, m = 1, 2, ..., b, form a partition of the predictor space corresponding to the b terminal nodes. If we adopt minimization of the sum of squares as the criterion characterizing the best split, it is easy to see that the best estimate θ̂m is just the average of the yi in region Rm:

  θ̂m = ave(yi | xi ∈ Rm),   (2)

where the average is over the Nm observations falling in node m. The (scaled) residual sum of squares

  Qm(T) = (1/Nm) Σ_{xi ∈ Rm} (yi − θ̂m)²   (3)

serves as an impurity measure for regression trees.

If the response is a factor taking values 1, 2, ..., K, the impurity measure Qm(T) defined in (3) is not suitable. Instead, we characterize a region Rm with Nm observations by

  p̂mk = (1/Nm) Σ_{xi ∈ Rm} I(yi = k),   (4)

the proportion of class k (k ∈ {1, 2, ..., K}) observations in node m. We classify the observations in node m to the majority class in node m, k(m) = argmax_k p̂mk. Different measures Qm(T) of node impurity include the following (Hastie et al., 2001):

  Misclassification error: 1 − p̂mk(m);
  Gini index: Σ_k p̂mk (1 − p̂mk);
  Cross-entropy or deviance: −Σ_k p̂mk log p̂mk.   (5)

For binary outcomes, if p is the proportion of the second class, these three measures are 1 − max(p, 1 − p), 2p(1 − p) and −p log p − (1 − p) log(1 − p), respectively. All three impurity measures are concave, with minima at p = 0 and p = 1 and a maximum at p = 0.5. Cross-entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories.

2.2 Pruning a tree

To be consistent with conventional notation, define the impurity of a node h as I(h) ((3) for a regression tree, and any one of the measures in (5) for a classification tree). We then choose the split with maximal impurity reduction

  ΔI = p(h) I(h) − p(hL) I(hL) − p(hR) I(hR),   (6)

where hL and hR are the left and right child nodes of h and p(h) is the proportion of the sample falling in node h.

How large should we grow the tree? Clearly a very large tree might overfit the data, while a small tree may fail to capture important structure. Tree size is a tuning parameter governing the model's complexity, and the optimal tree size should be chosen adaptively from the data. One approach would be to split a node only if the decrease in impurity due to the split exceeds some threshold. This strategy is too short-sighted, however, since a seemingly worthless split might lead to a very good split below it.
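As a sketch of how the impurity measures (4)-(5) drive split selection via (6), the following Python computes the Gini index and cross-entropy for a node and scans all cutoffs of one ordered predictor for the split with maximal impurity reduction. It illustrates the criteria on a toy version of the credit example; it is not CART itself, and the data are made up.

import numpy as np

def gini(y):
    # Gini index: sum_k p_k (1 - p_k), as in (5).
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p)))

def entropy(y):
    # Cross-entropy/deviance: -sum_k p_k log p_k, as in (5).
    _, counts = np.unique(y, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log(p)))

def best_split(x, y, impurity=gini):
    # Scan all cutoffs of one ordered predictor; return the split
    # maximizing the impurity reduction of (6).
    n = len(y)
    best_c, best_gain = None, 0.0
    for c in np.unique(x)[1:]:             # candidate cutoffs
        left, right = y[x < c], y[x >= c]
        gain = impurity(y) - (len(left) / n) * impurity(left) \
                           - (len(right) / n) * impurity(right)
        if gain > best_gain:
            best_c, best_gain = c, gain
    return best_c, best_gain

# Toy version of the credit example: default status vs. income.
income = np.array([45600, 29000, 36800, 37300, 28500, 52000])
default = np.array([0, 1, 0, 0, 1, 0])
print(best_split(income, default))
# Any cutoff between 29000 and 36800 separates the classes here;
# the scan returns 36800 with the maximal gain.

Growing by such greedy gains must then be followed by pruning, as described next.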
The preferred strategy is to grow a large tree T0, stopping the splitting process only when some minimum node size (say 10) is reached. This large tree is then pruned using a pruning algorithm, such as the cost-complexity or the split-complexity pruning algorithm.

To prune a large tree T0 by the cost-complexity algorithm, we define a subtree T ⊆ T0 to be any tree that can be obtained by pruning T0, that is, by collapsing any number of its internal nodes, and define T̃ to be the set of terminal nodes of T. As before, we index terminal nodes by m, with node m representing region Rm. Let |T̃| denote the number of terminal nodes in T (|T̃| = b); we use |T̃| rather than b following conventional notation. The cost of a tree is defined as

  Regression tree: R(T) = Σ_{m ∈ T̃} Nm Qm(T);
  Classification tree: R(T) = Σ_{h ∈ T̃} p(h) r(h),   (7)

where r(h) measures the impurity of node h in a classification tree (any one of the measures in (5)). We then define the cost-complexity criterion (Breiman et al., 1984)

  Rα(T) = R(T) + α|T̃|,   (8)

where α (> 0) is the complexity parameter. The idea is, for each α, to find the subtree Tα ⊆ T0 that minimizes Rα(T). The tuning parameter α governs the tradeoff between tree size and goodness of fit to the data (Hastie et al., 2001): large values of α result in smaller trees Tα, and conversely for smaller values; with α = 0 the solution is the full tree T0. To find Tα we use weakest-link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we reach the single-node (root) tree. This gives a finite sequence of subtrees, and one can show that this sequence must contain Tα; see Breiman et al. (1984) and Ripley (1996) for details. Estimation of α, denoted α̂, is achieved by five- or ten-fold cross-validation, and the final tree is Tα̂.

It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds top-down, the second stage, known as pruning, proceeds bottom-up by successively removing nodes from the initial tree.

Theorem 1 (Breiman et al., 1984, Section 3.3). For any value of the complexity parameter α, there is a unique smallest subtree of T0 that minimizes the cost-complexity.

Theorem 2 (Zhang & Singer, 1999, Section 4.2). If α2 > α1, the optimal subtree corresponding to α2 is a subtree of the optimal subtree corresponding to α1.

More generally, suppose we end up with m thresholds 0 < α1 < α2 < ... < αm and corresponding optimal subtrees

  T0 ⊇ Tα1 ⊇ Tα2 ⊇ ... ⊇ Tαm,   (9)

where T ⊇ T' means that T' is a subtree of T. These are called nested optimal subtrees.
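The following sketch shows the weakest-link mechanics behind (8)-(9): repeatedly collapse the internal node whose removal gives the smallest per-node increase in R(T), recording the nested subtree sequence. The tree encoding and the node risks are made-up numbers chosen only to make the arithmetic visible; this is an illustration of the procedure, not the CART implementation.

# Sketch of weakest-link (cost-complexity) pruning. Each node carries r, the
# node's own risk if it were made terminal (hypothetical values); the risk of
# the branch rooted at h is the summed risk of its leaves.

class Node:
    def __init__(self, r, left=None, right=None):
        self.r, self.left, self.right = r, left, right
    def is_leaf(self):
        return self.left is None

def leaves(h):
    return [h] if h.is_leaf() else leaves(h.left) + leaves(h.right)

def all_nodes(h):
    out = [h]
    if not h.is_leaf():
        out += all_nodes(h.left) + all_nodes(h.right)
    return out

def branch_risk(h):
    return sum(leaf.r for leaf in leaves(h))

def weakest_link(root):
    # Internal node minimizing g(h) = (r(h) - R(T_h)) / (|leaves(T_h)| - 1),
    # i.e. the smallest per-node increase in risk when the branch is collapsed.
    internal = [h for h in all_nodes(root) if not h.is_leaf()]
    return min(internal,
               key=lambda h: (h.r - branch_risk(h)) / (len(leaves(h)) - 1))

def prune_sequence(root):
    # Collapse weakest links until only the root remains; the recorded sizes
    # trace out the nested subtree sequence of (9).
    sizes = [len(leaves(root))]
    while not root.is_leaf():
        h = weakest_link(root)
        h.left = h.right = None            # collapse the branch at h
        sizes.append(len(leaves(root)))
    return sizes

# A toy initial tree T0 with made-up node risks:
T0 = Node(20, Node(6, Node(1), Node(2)), Node(5, Node(2), Node(2)))
print(prune_sequence(T0))                  # [4, 3, 2, 1]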
3. Decision Tree for Censored Survival Data

Survival analysis is the phrase used to describe the analysis of data corresponding to the time from a well-defined origin until the occurrence of some particular event or end-point. It is important to state what the event is and when the period of observation starts and finishes. In medical research, the time origin often corresponds to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse event. Survival data are rarely normally distributed; they are skewed, typically comprising many early events and relatively few late ones. It is these features of the data that necessitate the special methods of survival analysis.

The specific difficulties of survival analysis arise largely from the fact that only some individuals have experienced the event, so survival times are unknown for a subset of the study group. This phenomenon is called censoring, and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times vary from individual to individual. A censored survival time underestimates the true (but unknown) time to event: visualising the survival process of an individual as a time-line, the event (assuming it is to occur) lies beyond the end of the follow-up period. This situation is called right censoring, and most survival data include right-censored observations.

In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. The Cox proportional hazards model (Cox, 1972) has been established as the major framework for the analysis of such survival data over the past three decades. Often in practice, however, a primary goal of survival analysis is to extract meaningful subgroups of patients determined by prognostic factors, such as patient characteristics related to the level of disease. Although the proportional hazards model and its extensions are powerful for studying the association between covariates and survival times, they are usually problematic for prognostic classification. One approach to classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on correct specification of the model, and it is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the results become difficult to interpret for the purpose of prognostic classification. Finally, and more seriously, an invalid prognostic group may be produced if no patient falls within a given covariate profile. In contrast, DT methods do not suffer from these problems.

Owing to the development of fast computers, computer-intensive methods such as DT have become popular. Since they investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts. Recently, a large number of DT methods have been developed for the analysis of survival data, in which the basic concepts of growing and pruning trees remain unchanged but the splitting criterion is modified to accommodate censored survival data. Applications of DT methods to survival data are described by a number of authors (Gordon & Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis & Anderson, 1989; Therneau et al., 1990; LeBlanc & Crowley, 1992; LeBlanc & Crowley, 1993; Ahn & Loh, 1994; Bacchetti & Segal, 1995; Huang et al., 1998; Keleş & Segal, 2002; Jin et al., 2004; Cappelli & Zhang, 2007; Cho & Hong, 2008), including the text by Zhang & Singer (1999).
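Many of the survival-tree methods cited above replace the impurity criterion with a two-sample statistic computed on the censored data at each candidate split; the log-rank statistic is a common choice (e.g., LeBlanc & Crowley, 1993). The sketch below scores candidate cutoffs of one covariate with the log-rank test from the lifelines package; the package choice, the simulated data, and all variable names are our own assumptions, not part of the methods being reviewed.

import numpy as np
from lifelines.statistics import logrank_test

# Hypothetical censored survival data: follow-up time, event indicator
# (1 = event observed, 0 = censored), and one prognostic covariate.
rng = np.random.default_rng(0)
n = 200
age = rng.uniform(30, 80, n)
time = rng.exponential(10 / (1 + (age > 55)), n)   # worse prognosis if age > 55
event = (rng.uniform(size=n) < 0.7).astype(int)    # roughly 30% censoring

def split_statistic(cutoff):
    # Two-sample log-rank statistic for the split {age < cutoff} vs. the rest.
    left = age < cutoff
    res = logrank_test(time[left], time[~left],
                       event_observed_A=event[left],
                       event_observed_B=event[~left])
    return res.test_statistic

# Scan candidate cutoffs; the best split maximizes the statistic.
cutoffs = np.quantile(age, np.linspace(0.1, 0.9, 17))
best = max(cutoffs, key=split_statistic)
print(round(best, 1))                              # expect a cutoff near 55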
4. Decision Tree for Multivariate Censored Survival Data

Multivariate survival data frequently arise when we face the complexity of studies involving multiple treatment centres, family members, or measurements made repeatedly on the same individual. For example, in multi-centre clinical trials the outcomes of groups of patients at several centres are examined, and patients within a centre might exhibit similar responses because of the uniformity of surroundings and procedures within the centre; this results in outcomes correlated at the level of the treatment centre. In studies of family members or litters, correlation in outcomes is likely for genetic reasons, so the outcomes are correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will almost certainly exist among those responses. In the context of correlated data, the observations that are correlated for a group of individuals (within a treatment centre or a family) or for one individual (because of repeated sampling) are referred to as a cluster; from this point on, responses within a cluster are assumed to be correlated.

Analysis of multivariate survival data is complex, owing to the dependence among survival times and the unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual may experience multiple events. A successful treatment of correlated failure times was given by Clayton and Cuzick (1985), who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazards formulation of the marginal hazard function, studied by Wei et al. (1989) and Liang et al. (1993). Prentice et al. (1981) and Andersen & Gill (1982) also suggested two alternative approaches to the analysis of multiple event times.

The extension of tree techniques to multivariate censored data is motivated by the classification issues associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules, and credit risk analysts collect account information to build credit scoring criteria. Frequently the outcomes of ultimate interest in such studies are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools. This section is concerned with the generalization of DT models to multivariate survival data, an extension in which additional difficulties must be circumvented.

4.1 Decision tree for multivariate survival data based on the marginal model

DT methods for multivariate survival data are few. Almost all of them are based on between-node heterogeneity, with the exception of Molinaro et al. (2004), who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su & Fan (2001, 2004) and Gao et al. (2004, 2006) concentrate on between-node heterogeneity and use the results of regression models. Specifically, for recurrent event data and clustered event data, Su & Fan (2004) used likelihood-ratio tests, while Gao et al. (2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity.
Su & Fan (2001) and Fan et al. (2006) used a robust log-rank statistic, while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989).

The generalization of DT to multivariate survival data presented here uses the goodness-of-split approach. A DT grown by goodness of split maximizes a measure of between-node difference, so only internal nodes have associated two-sample statistics. The tree structure differs from CART because, for trees grown by minimizing within-node error, each node, terminal or internal, has an associated impurity measure; this is why the CART pruning procedure is not directly applicable to trees grown by goodness of split. With the split-complexity pruning algorithm of LeBlanc & Crowley (1993), however, trees grown by goodness of split have become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applicability of DT methods in a more general sense: especially in situations where defining a prediction error is relatively difficult, growing trees by a two-sample statistic, together with split-complexity pruning, offers a feasible way of performing tree analysis.

The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees described here, the between-node difference is measured by a robust Wald statistic derived from the marginal approach to multivariate survival data developed by Wei et al. (1989); split-complexity pruning is borrowed from LeBlanc & Crowley (1993), and a test sample is used to determine the right tree size.

4.1.1 The splitting statistic

We consider n independent subjects, each of whom may experience up to K types of failures; if the numbers of failures differ across subjects, K is the maximum. Let Tik = min(Yik, Cik), where Yik is the failure time of the ith subject for the kth type of failure and Cik is the potential censoring time of the ith subject for the kth type of failure, with i = 1, ..., n and k = 1, ..., K. Then δik = I(Yik ≤ Cik) is the failure indicator, and the vector of covariates is denoted Zik = (Z1ik, ..., Zpik)'.

To partition the data, we consider the hazard model for the ith subject and the kth type of failure, using the distinguishable baseline hazards described by Wei et al. (1989), namely

  λik(t) = λ0k(t) exp{b I(Zik ≤ c)},   (10)

where the indicator function I(Zik ≤ c) encodes a candidate split on a covariate Z at cutoff c. The parameter b is estimated by maximizing the partial likelihood. If the observations within the same unit were independent, the partial likelihood function for b under the distinguishable baseline model (10) would be

  L(b) = Π_{k=1}^{K} Π_{i=1}^{n} [ exp{b I(Zik ≤ c)} / Σ_{j ∈ Rk(Tik)} exp{b I(Zjk ≤ c)} ]^{δik},   (11)

where Rk(t) denotes the set of subjects at risk for the kth type of failure at time t. Since the observations within the same unit are not independent for multivariate failure times, we refer to (11) as the pseudo-partial likelihood. The estimator b̂ is obtained by maximizing this likelihood, that is, by solving ∂ log L(b)/∂b = 0. Wei et al. (1989) showed that b̂ − b is asymptotically normally distributed with mean 0. However, the usual estimate A⁻¹(b) of the variance of b̂, where

  A(b) = −∂² log L(b)/∂b²,   (12)

is not valid; we refer to A⁻¹(b) as the naïve estimator. Wei et al. (1989) showed that a correct (robust) estimator of the variance of b̂ is

  D(b) = A⁻¹(b) B(b) A⁻¹(b),   (13)

where B(b) is a weight term based on the score residuals; D(b) is often referred to as the robust or sandwich variance estimator. Hence, the robust Wald statistic for testing the null hypothesis H0: b = 0 is

  W = b̂² / D(b̂).   (14)
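To illustrate the splitting statistic (14) in practice, one can fit a working-independence Cox model with a sandwich variance, as the following hedged sketch does with the lifelines package (its cluster_col argument requests robust standard errors). The simulated data and all names are hypothetical, and for brevity a common baseline hazard is used rather than the distinguishable baselines of (10).

import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Hypothetical clustered survival data: each subject (cluster) contributes
# K = 2 correlated failure times; `split` encodes I(Z <= c) for one
# candidate cutoff c of a covariate Z.
rng = np.random.default_rng(1)
n, K = 100, 2
frailty = rng.gamma(2.0, 0.5, n).repeat(K)     # induces within-subject correlation
z = rng.normal(size=n * K)
time = rng.exponential(1.0 / (frailty * np.exp(0.8 * (z <= 0))))
df = pd.DataFrame({
    "id": np.arange(n).repeat(K),
    "time": time,
    "event": (rng.uniform(size=n * K) < 0.8).astype(int),   # ~20% censoring
    "split": (z <= 0).astype(int),                          # candidate split
})

# Working-independence Cox fit; cluster_col gives the robust (sandwich)
# variance in the spirit of Wei et al.'s marginal approach, not the naive one.
cph = CoxPHFitter().fit(df, duration_col="time", event_col="event",
                        cluster_col="id")
b = float(cph.params_["split"])
se = float(cph.standard_errors_["split"])      # robust standard error
wald = (b / se) ** 2                           # robust Wald statistic, as in (14)
print(round(wald, 2))

In a tree-growing loop, this statistic would be recomputed for every candidate covariate and cutoff, and the split with the largest value retained.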
4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z. The split s could take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form of split involves only one covariate, and depends on whether the covariate is ordered or nominal. The "best split" is defined as the one with the maximum robust Wald statistic, and the data are divided into two groups accordingly. This splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs:

- The node contains fewer than, say, 10 or 20 subjects, if the overall sample size is large enough to permit this. We suggest a larger minimum node size than in CART, where the default value is 5.
- All the observed times in the subset are censored, making the robust Wald statistic unavailable for any split.
- All the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times. In these situations the node is considered pure.

The whole procedure results in a large tree, which can be used for exploring the data structure.

4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes, and let S and T̃ denote the sets of internal and terminal nodes of T, respectively, so that T = S ∪ T̃. Let |·| denote the number of nodes in a set, and let G(h) be the maximal robust Wald statistic at a particular internal node h. To measure the performance of a tree, a split-complexity measure Gα(T) is introduced, as in LeBlanc & Crowley (1993):

  Gα(T) = G(T) − α|S|, where G(T) = Σ_{h ∈ S} G(h),   (15)

in which the number of internal nodes |S| measures complexity, G(T) measures the goodness of split in T, and the complexity parameter α acts as a penalty for each additional split.

Start with the large tree T0 obtained from the splitting procedure. For any internal node h of T0, i.e. h ∈ S0, define

  g(h) = G(Th) / |Sh|,   (16)

where Th denotes the branch with h as its root and Sh is the set of internal nodes of Th. The weakest link in T0 is then the node h* such that g(h*) = min over h ∈ S0 of g(h).
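Once every internal node stores the maximal robust Wald statistic of its split, the split-complexity quantities (15)-(16) can be computed directly. The sketch below uses a hypothetical tree encoding and made-up statistic values to show one weakest-link step in the spirit of the LeBlanc-Crowley procedure; it is an illustration of the bookkeeping, not their implementation.

# Sketch of split-complexity pruning. Each internal node stores G, the
# maximal robust Wald statistic of its split (hypothetical values).

class Node:
    def __init__(self, G=None, left=None, right=None):
        self.G, self.left, self.right = G, left, right
    def is_leaf(self):
        return self.left is None

def internal_nodes(h):
    if h.is_leaf():
        return []
    return [h] + internal_nodes(h.left) + internal_nodes(h.right)

def g(h):
    # g(h) = G(T_h) / |S_h|, eq. (16): average split statistic in branch T_h.
    S_h = internal_nodes(h)
    return sum(node.G for node in S_h) / len(S_h)

def split_complexity(root, a):
    # G_a(T) = G(T) - a|S|, eq. (15).
    S = internal_nodes(root)
    return sum(h.G for h in S) - a * len(S)

def prune_weakest(root):
    # Collapse the weakest link, the internal node minimizing g(h).
    h_star = min(internal_nodes(root), key=g)
    h_star.left = h_star.right = None
    return h_star

# Toy tree: the root split is strong, one lower split is weak.
T0 = Node(G=25.0,
          left=Node(G=3.1, left=Node(), right=Node()),
          right=Node(G=12.4, left=Node(), right=Node()))
print(split_complexity(T0, a=4))   # 40.5 - 12 = 28.5
prune_weakest(T0)                  # collapses the node with G = 3.1
print(split_complexity(T0, a=4))   # 37.4 - 8 = 29.4

Iterating this step generates the subtree sequence to which a test sample is applied for choosing the final tree size.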
Due to their mathematical simplicity, linear regression for continuous data, logistic regression for binary data, proportional hazard regression for censored survival data, marginal and frailty regression for multivariate survival data, and proportional subdistribution hazard regression for competing risks data are among the most commonly used statistical methods. These parametric and semiparametric regression methods, however, may not lead to faithful data descriptions when the underlying assumptions are not satisfied. Sometimes, model interpretation can be problematic in the presence of high-order interactions among predictors. DT has evolved to relax or remove the restrictive assumptions. In many cases, DT is used to explore data structures and to derive parsimonious models. DT is selected to analyze the data rather than the traditional regression analysis for several reasons. Discovery of interactions is difficult using traditional regression, because the interactions must be specified a priori. In contrast, DT automatically detects important interactions. Furthermore, unlike traditional regression analysis, DT is useful in uncovering variables that may be largely operative within a specific patient subgroup but may have minimal effect or none in other patient subgroups. Also, DT provides a superior means for prognostic classification. Rather than fitting a model to the data, DT sequentially divides the patient group into two subgroups based on prognostic factor values (e.g., tumor size The landmark work of DT in statistical community is the Classification and Regression Trees (CART) methodology of Breiman et al. (1984). A different approach was C4.5 proposed by Quinlan (1992). Original DT method was used in classification and regression for categorical and continuous response variable, respectively. In a clinical setting, however, the outcome of primary interest is often duration of survival, time to event, or some other incomplete (that is, censored) outcome. Therefore, several authors have developed extensions of original DT in the setting of censored survival data (Banerjee Noone, 2008). In science and technology, interest often lies in studying processes which generate events repeatedly over time. Such processes are referred to as recurrent event processes and the data they provide are called recurrent event data which includes in multivariate survival data. Such data arise frequently in medical studies, where information is often available on many individuals, each of whom may experience transient clinical events repeatedly over a period of observation. Examples include the occurrence of asthma attacks in respirology trials, epileptic seizures in neurology studies, and fractures in osteoporosis studies. In business, examples include the filing of warranty claims on automobiles, or insurance claims for policy holders. Since multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events, then further extensions of DT are developed for such kind of data. In some studies, patients may be simultaneously exposed to several events, each competing for their mortality or morbidity. For example, suppose that a group of patients diagnosed with heart disease is followed in order to observe a myocardial infarction (MI). If by the end of the study each patient was either observed to have MI or was alive and well, then the usual survival techniques can be applied. 
In real life, however, some patients may die from other causes before experiencing an MI. This is a competing risks situation because death from other causes prohibits the occurrence of MI. MI is considered the event of interest, while death from other causes is considered a competing risk. The group of patients dead of other causes cannot be considered censored, since their observations are not incomplete. The extension of DT can also be employed for competing risks survival time data. These extensions can make one apply the technique to clinical trial data to aid in the development of prognostic classifications for chronic diseases. This chapter will cover DT for multivariate and competing risks survival time data as well as their application in the development of medical prognosis. Two kinds of multivariate survival time regression model, i.e. marginal and frailty regression model, have their own DT extensions. Whereas, the extension of DT for competing risks has two types of tree. First, the ââ¬Å"single eventâ⬠DT is developed based on splitting function using one event only. Second, the ââ¬Å"composite eventsâ⬠tree which use all the events jointly. 2. Decision Tree A DT is a tree-like structure used for classification, decision theory, clustering, and prediction functions. It depicts rules for dividing data into groups based on the regularities in the data. A DT can be used for categorical and continuous response variables. When the response variables are continuous, the DT is often referred to as a regression tree. If the response variables are categorical, it is called a classification tree. However, the same concepts apply to both types of trees. DTs are widely used in computer science for data structures, in medical sciences for diagnosis, in botany for classification, in psychology for decision theory, and in economic analysis for evaluating investment alternatives. DTs learn from data and generate models containing explicit rule-like relationships among the variables. DT algorithms begin with the entire set of data, split the data into two or more subsets by testing the value of a predictor variable, and then repeatedly split each subset into finer subsets until the split size reaches an appropriate level. The entire modeling process can be illustrated in a tree-like structure. A DT model consists of two parts: creating the tree and applying the tree to the data. To achieve this, DTs use several different algorithms. The most popular algorithm in the statistical community is Classification and Regression Trees (CART) (Breiman et al., 1984). This algorithm helps DTs gain credibility and acceptance in the statistics community. It creates binary splits on nominal or interval predictor variables for a nominal, ordinal, or interval response. The most widely-used algorithms by computer scientists are ID3, C4.5, and C5.0 (Quinlan, 1993). The first version of C4.5 and C5.0 were limited to categorical predictors; however, the most recent versions are similar to CART. Other algorithms include Chi-Square Automatic Interaction Detection (CHAID) for categorical response (Kass, 1980), CLS, AID, TREEDISC, Angoss KnowledgeSEEKER, CRUISE, GUIDE and QUEST (Loh, 2008). These algorithms use different approaches for splitting variables. CART, CRUISE, GUIDE and QUEST use the sta tistical approach, while CLS, ID3, and C4.5 use an approach in which the number of branches off an internal node is equal to the number of possible categories. 
Another common approach, used by AID, CHAID, and TREEDISC, is the one in which the number of nodes on an internal node varies from two to the maximum number of possible categories. Angoss KnowledgeSEEKER uses a combination of these approaches. Each algorithm employs different mathematical processes to determine how to group and rank variables. Let us illustrate the DT method in a simplified example of credit evaluation. Suppose a credit card issuer wants to develop a model that can be used for evaluating potential candidates based on its historical customer data. The companys main concern is the default of payment by a cardholder. Therefore, the model should be able to help the company classify a candidate as a possible defaulter or not. The database may contain millions of records and hundreds of fields. A fragment of such a database is shown in Table 1. The input variables include income, age, education, occupation, and many others, determined by some quantitative or qualitative methods. The model building process is illustrated in the tree structure in 1. The DT algorithm first selects a variable, income, to split the dataset into two subsets. This variable, and also the splitting value of $31,000, is selected by a splitting criterion of the algorithm. There exist many splitting criteria (Mingers, 1989). The basic principle of these criteria is that they all attempt to divide the data into clusters such that variations within each cluster are minimized and variations between the clusters are maximized. The follow- Name Age Income Education Occupation Default Andrew 42 45600 College Manager No Allison 26 29000 High School Self Owned Yes Sabrina 58 36800 High School Clerk No Andy 35 37300 College Engineer No â⬠¦ Table 1. Partial records and fields of a database table for credit evaluation up splits are similar to the first one. The process continues until an appropriate tree size is reached. 1 shows a segment of the DT. Based on this tree model, a candidate with income at least $31,000 and at least college degree is unlikely to default the payment; but a self-employed candidate whose income is less than $31,000 and age is less than 28 is more likely to default. We begin with a discussion of the general structure of a popular DT algorithm in statistical community, i.e. CART model. A CART model describes the conditional distribution of y given X, where y is the response variable and X is a set of predictor variables (X = (X1,X2,â⬠¦,Xp)). This model has two main components: a tree T with b terminal nodes, and a parameter Q = (q1,q2,â⬠¦, qb) ÃÅ' Rk which associates the parameter values qm, with the mth terminal node. Thus a tree model is fully specified by the pair (T, Q). If X lies in the region corresponding to the mth terminal node then y|X has the distribution f(y|qm), where we use f to represent a conditional distribution indexed by qm. The model is called a regression tree or a classification tree according to whether the response y is quantitative or qualitative, respectively. 2.1 Splitting a tree The DT T subdivides the predictor variable space as follows. Each internal node has an associated splitting rule which uses a predictor to assign observations to either its left or right child node. The internal nodes are thus partitioned into two subsequent nodes using the splitting rule. 
For quantitative predictors, the splitting rule is based on a split rule c, and assigns observations for which {xi For a regression tree, conventional algorithm models the response in each region Rm as a constant qm. Thus the overall tree model can be expressed as (Hastie et al., 2001): (1) where Rm, m = 1, 2,â⬠¦,b consist of a partition of the predictors space, and therefore representing the space of b terminal nodes. If we adopt the method of minimizing the sum of squares as our criterion to characterize the best split, it is easy to see that the best , is just the average of yi in region Rm: (2) where Nm is the number of observations falling in node m. The residual sum of squares is (3) which will serve as an impurity measure for regression trees. If the response is a factor taking outcomes 1,2, K, the impurity measure Qm(T), defined in (3) is not suitable. Instead, we represent a region Rm with Nm observations with (4) which is the proportion of class k(k ÃŽ {1, 2,â⬠¦,K}) observations in node m. We classify the observations in node m to a class , the majority class in node m. Different measures Qm(T) of node impurity include the following (Hastie et al., 2001): Misclassification error: Gini index: Cross-entropy or deviance: (5) For binary outcomes, if p is the proportion of the second class, these three measures are 1 max(p, 1 p), 2p(1 p) and -p log p (1 p) log(1 p), respectively. All three definitions of impurity are concave, having minimums at p = 0 and p = 1 and a maximum at p = 0.5. Entropy and the Gini index are the most common, and generally give very similar results except when there are two response categories. 2.2 Pruning a tree To be consistent with conventional notations, lets define the impurity of a node h as I(h) ((3) for a regression tree, and any one in (5) for a classification tree). We then choose the split with maximal impurity reduction (6) where hL and hR are the left and right children nodes of h and p(h) is proportion of sample fall in node h. How large should we grow the tree then? Clearly a very large tree might overfit the data, while a small tree may not be able to capture the important structure. Tree size is a tuning parameter governing the models complexity, and the optimal tree size should be adaptively chosen from the data. One approach would be to continue the splitting procedures until the decrease on impurity due to the split exceeds some threshold. This strategy is too short-sighted, however, since a seeming worthless split might lead to a very good split below it. The preferred strategy is to grow a large tree T0, stopping the splitting process when some minimum number of observations in a terminal node (say 10) is reached. Then this large tree is pruned using pruning algorithm, such as cost-complexity or split complexity pruning algorithm. To prune large tree T0 by using cost-complexity algorithm, we define a subtree T T0 to be any tree that can be obtained by pruning T0, and define to be the set of terminal nodes of T. That is, collapsing any number of its terminal nodes. As before, we index terminal nodes by m, with node m representing region Rm. Let denotes the number of terminal nodes in T (= b). We use instead of b following the conventional notation and define the risk of trees and define cost of tree as Regression tree: , Classification tree: , (7) where r(h) measures the impurity of node h in a classification tree (can be any one in (5)). 
We define the cost complexity criterion (Breiman et al., 1984) (8) where a(> 0) is the complexity parameter. The idea is, for each a, find the subtree Ta T0 to minimize Ra(T). The tuning parameter a > 0 governs the tradeoff between tree size and its goodness of fit to the data (Hastie et al., 2001). Large values of a result in smaller tree Ta and conversely for smaller values of a. As the notation suggests, with a = 0 the solution is the full tree T0. To find Ta we use weakest link pruning: we successively collapse the internal node that produces the smallest per-node increase in R(T), and continue until we produce the single-node (root) tree. This gives a (finite) sequence of subtrees, and one can show this sequence must contains Ta. See Brieman et al. (1984) and Ripley (1996) for details. Estimation of a () is achieved by five- or ten-fold cross-validation. Our final tree is then denoted as . It follows that, in CART and related algorithms, classification and regression trees are produced from data in two stages. In the first stage, a large initial tree is produced by splitting one node at a time in an iterative, greedy fashion. In the second stage, a small subtree of the initial tree is selected, using the same data set. Whereas the splitting procedure proceeds in a top-down fashion, the second stage, known as pruning, proceeds from the bottom-up by successively removing nodes from the initial tree. Theorem 1 (Brieman et al., 1984, Section 3.3) For any value of the complexity parameter a, there is a unique smallest subtree of T0 that minimizes the cost-complexity. Theorem 2 (Zhang Singer, 1999, Section 4.2) If a2 > al, the optimal sub-tree corresponding to a2 is a subtree of the optimal subtree corresponding to al. More general, suppose we end up with m thresholds, 0 (9) where means that is a subtree of . These are called nested optimal subtrees. 3. Decision Tree for Censored Survival Data Survival analysis is the phrase used to describe the analysis of data that correspond to the time from a well-defined time origin until the occurrence of some particular events or end-points. It is important to state what the event is and when the period of observation starts and finish. In medical research, the time origin will often correspond to the recruitment of an individual into an experimental study, and the end-point is the death of the patient or the occurrence of some adverse events. Survival data are rarely normally distributed, but are skewed and comprise typically of many early events and relatively few late ones. It is these features of the data that necessitate the special method survival analysis. The specific difficulties relating to survival analysis arise largely from the fact that only some individuals have experienced the event and, subsequently, survival times will be unknown for a subset of the study group. This phenomenon is called censoring and it may arise in the following ways: (a) a patient has not (yet) experienced the relevant outcome, such as relapse or death, by the time the study has to end; (b) a patient is lost to follow-up during the study period; (c) a patient experiences a different event that makes further follow-up impossible. Generally, censoring times may vary from individual to individual. Such censored survival time underestimated the true (but unknown) time to event. Visualising the survival process of an individual as a time-line, the event (assuming it is to occur) is beyond the end of the follow-up period. This situation is often called right censoring. 
Most survival data include right censored observation. In many biomedical and reliability studies, interest focuses on relating the time to event to a set of covariates. Cox proportional hazard model (Cox, 1972) has been established as the major framework for analysis of such survival data over the past three decades. But, often in practices, one primary goal of survival analysis is to extract meaningful subgroups of patients determined by the prognostic factors such as patient characteristics that are related to the level of disease. Although proportional hazard model and its extensions are powerful in studying the association between covariates and survival times, usually they are problematic in prognostic classification. One approach for classification is to compute a risk score based on the estimated coefficients from regression methods (Machin et al., 2006). This approach, however, may be problematic for several reasons. First, the definition of risk groups is arbitrary. Secondly, the risk score depends on the correct specification of the model. It is difficult to check whether the model is correct when many covariates are involved. Thirdly, when there are many interaction terms and the model becomes complicated, the result becomes difficult to interpret for the purpose of prognostic classification. Finally, a more serious problem is that an invalid prognostic group may be produced if no patient is included in a covariate profile. In contrast, DT methods do not suffer from these problems. Owing to the development of fast computers, computer-intensive methods such as DT methods have become popular. Since these investigate the significance of all potential risk factors automatically and provide interpretable models, they offer distinct advantages to analysts. Recently a large amount of DT methods have been developed for the analysis of survival data, where the basic concepts for growing and pruning trees remain unchanged, but the choice of the splitting criterion has been modified to incorporate the censored survival data. The application of DT methods for survival data are described by a number of authors (Gordon Olshen, 1985; Ciampi et al., 1986; Segal, 1988; Davis Anderson, 1989; Therneau et al., 1990; LeBlanc Crowley, 1992; LeBlanc Crowley, 1993; Ahn Loh, 1994; Bacchetti Segal, 1995; Huang et al., 1998; KeleÃâ¦Ã
¸ Segal, 2002; Jin et al., 2004; Cappelli Zhang, 2007; Cho Hong, 2008), including the text by Zhang Singer (1999). 4. Decision Tree for Multivariate Censored Survival Data Multivariate survival data frequently arise when we faced the complexity of studies involving multiple treatment centres, family members and measurements repeatedly made on the same individual. For example, in multi-centre clinical trials, the outcomes for groups of patients at several centres are examined. In some instances, patients in a centre might exhibit similar responses due to uniformity of surroundings and procedures within a centre. This would result in correlated outcomes at the level of the treatment centre. For the situation of studies of family members or litters, correlation in outcome is likely for genetic reasons. In this case, the outcomes would be correlated at the family or litter level. Finally, when one person or animal is measured repeatedly over time, correlation will most definitely exist in those responses. Within the context of correlated data, the observations which are correlated for a group of individuals (within a treatment centre or a family) or for on e individual (because of repeated sampling) are referred to as a cluster, so that from this point on, the responses within a cluster will be assumed to be correlated. Analysis of multivariate survival data is complex due to the presence of dependence among survival times and unknown marginal distributions. Multivariate survival times frequently arise when individuals under observation are naturally clustered or when each individual might experience multiple events. A successful treatment of correlated failure times was made by Clayton and Cuzik (1985) who modelled the dependence structure with a frailty term. Another approach is based on a proportional hazard formulation of the marginal hazard function, which has been studied by Wei et al. (1989) and Liang et al. (1993). Noticeably, Prentice et al. (1981) and Andersen Gill (1982) also suggested two alternative approaches to analyze multiple event times. Extension of tree techniques to multivariate censored data is motivated by the classification issue associated with multivariate survival data. For example, clinical investigators design studies to form prognostic rules. Credit risk analysts collect account information to build up credit scoring criteria. Frequently, in such studies the outcomes of ultimate interest are correlated times to event, such as relapses, late payments, or bankruptcies. Since DT methods recursively partition the predictor space, they are an alternative to conventional regression tools. This section is concerned with the generalization of DT models to multivariate survival data. In attempt to facilitate an extension of DT methods to multivariate survival data, more difficulties need to be circumvented. 4.1 Decision tree for multivariate survival data based on marginal model DT methods for multivariate survival data are not many. Almost all the multivariate DT methods have been based on between-node heterogeneity, with the exception of Molinaro et al. (2004) who proposed a general within-node homogeneity approach for both univariate and multivariate data. The multivariate methods proposed by Su Fan (2001, 2004) and Gao et al. (2004, 2006) concentrated on between-node heterogeneity and used the results of regression models. Specifically, for recurrent event data and clustered event data, Su Fan (2004) used likelihood-ratio tests while Gao et al. 
(2004) used robust Wald tests from a gamma frailty model to maximize the between-node heterogeneity. Su Fan (2001) and Fan et al. (2006) used a robust log-rank statistic while Gao et al. (2006) used a robust Wald test from the marginal failure-time model of Wei et al. (1989). The generalization of DT for multivariate survival data is developed by using goodness of split approach. DT by goodness of split is grown by maximizing a measure of between-node difference. Therefore, only internal nodes have associated two-sample statistics. The tree structure is different from CART because, for trees grown by minimizing within-node error, each node, either terminal or internal, has an associated impurity measure. This is why the CART pruning procedure is not directly applicable to such types of trees. However, the split-complexity pruning algorithm of LeBlanc Crowley (1993) has resulted in trees by goodness of split that has become well-developed tools. This modified tree technique not only provides a convenient way of handling survival data, but also enlarges the applied scope of DT methods in a more general sense. Especially for those situations where defining prediction error terms is relatively difficult, growing trees by a two-sample statistic, together with the split-complexity pruning, offers a feasible way of performing tree analysis. The DT procedure consists of three parts: a method to partition the data recursively into a large tree, a method to prune the large tree into a subtree sequence, and a method to determine the optimal tree size. In the multivariate survival trees, the between-node difference is measured by a robust Wald statistic, which is derived from a marginal approach to multivariate survival data that was developed by Wei et al. (1989). We used split-complexity pruning borrowed from LeBlanc Crowley (1993) and use test sample for determining the right tree size. 4.1.1 The splitting statistic We consider n independent subjects but each subject to have K potential types or number of failures. If there are an unequal number of failures within the subjects, then K is the maximum. We let Tik = min(Yik,Cik ) where Yik = time of the failure in the ith subject for the kth type of failure and Cik = potential censoring time of the ith subject for the kth type of failure with i = 1,â⬠¦,n and k = 1,â⬠¦,K. Then dik = I (Yik âⰠ¤ Cik) is the indicator for failure and the vector of covariates is denoted Zik = (Z1ik,â⬠¦, Zpik)T. To partition the data, we consider the hazard model for the ith unit for the kth type of failure, using the distinguishable baseline hazard as described by Wei et al. (1989), namely where the indicator function I(Zik Parameter b is estimated by maximizing the partial likelihood. If the observations within the same unit are independent, the partial likelihood functions for b for the distinguishable baseline model (10) would be, (11) Since the observations within the same unit are not independent for multivariate failure time, we refer to the above functions as the pseudo-partial likelihood. The estimator can be obtained by maximizing the likelihood by solving . Wei et al. (1989) showed that is normally distributed with mean 0. However the usual estimate, a-1(b), for the variance of , where (12) is not valid. We refer to a-1(b) as the naà ¯ve estimator. Wei et al. (1989) showed that the correct estimated (robust) variance estimator of is (13) where b(b) is weight and d(b) is often referred to as the robust or sandwich variance estimator. 
4.1.2 Tree growing

To grow a tree, the robust Wald statistic is evaluated for every possible binary split of the predictor space Z. The split s can take several forms: splits on a single covariate, splits on linear combinations of predictors, and Boolean combinations of splits. The simplest form involves only one covariate, where the form of the split depends on whether the covariate is ordered or nominal. The best split is defined as the one with the maximum robust Wald statistic, and the data are then divided into two groups according to that split. This splitting scheme is applied recursively to the learning sample until the predictor space is partitioned into many regions. A node is not partitioned further when any of the following occurs: (i) the node contains fewer than, say, 10 or 20 subjects, provided the overall sample size is large enough to permit this (we suggest a larger minimum node size than the CART default of 5); (ii) all the observed times in the subset are censored, which makes the robust Wald statistic unavailable for any split; or (iii) all the subjects have identical covariate vectors, or the node contains only complete observations with identical survival times. In these situations the node is considered pure. The whole procedure results in a large tree, which can be used for the purpose of data structure exploration.

4.1.3 Tree pruning

Let T denote either a particular tree or the set of all its nodes, and let S and \tilde{T} denote the sets of internal and terminal nodes of T, respectively, so that T = S \cup \tilde{T}; also let |\cdot| denote the number of nodes in a set. Let G(h) represent the maximum robust Wald statistic at a particular internal node h. To measure the performance of a tree, a split-complexity measure G_a(T) is introduced as in LeBlanc and Crowley (1993):

    G_a(T) = G(T) - a|S|, where G(T) = \sum_{h \in S} G(h),    (15)

where the number of internal nodes, |S|, measures complexity; G(T) measures the goodness of split in T; and the complexity parameter a acts as a penalty for each additional split. Start with the large tree T_0 obtained from the splitting procedure. For any internal node h of T_0, i.e., h \in S_0, a function g(h) is defined as

    g(h) = G(T_h) / |S_h|,    (16)

where T_h denotes the branch with h as its root and S_h is the set of all internal nodes of T_h. The weakest link in T_0 is then the node \tilde{h} such that g(\tilde{h}) = \min_{h \in S_0} g(h).
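To make the pruning step concrete, here is a small self-contained sketch, again an illustration rather than the chapter's own code, of weakest-link pruning with the split-complexity measure (15) and the branch statistic g(h) of (16). The Node class is a hypothetical representation of a grown tree in which each internal node stores the robust Wald statistic G(h) of its best split.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Node:
    G: float = 0.0                        # best-split robust Wald statistic G(h)
    left: Optional["Node"] = None
    right: Optional["Node"] = None

    @property
    def is_internal(self) -> bool:
        return self.left is not None and self.right is not None

def internal_nodes(t: Node) -> List[Node]:
    """The set S of internal nodes of the (sub)tree rooted at t."""
    if not t.is_internal:
        return []
    return [t] + internal_nodes(t.left) + internal_nodes(t.right)

def g(h: Node) -> float:
    """g(h) = G(T_h) / |S_h|, equation (16)."""
    S_h = internal_nodes(h)
    return sum(n.G for n in S_h) / len(S_h)

def split_complexity(t: Node, a: float) -> float:
    """G_a(T) = G(T) - a|S|, with G(T) summed over internal nodes (15)."""
    S = internal_nodes(t)
    return sum(n.G for n in S) - a * len(S)

def prune_weakest_link(t: Node) -> Node:
    """Collapse the internal node minimizing g(h) into a terminal node."""
    weakest = min(internal_nodes(t), key=g)
    weakest.left = weakest.right = None   # the branch T_h becomes a leaf
    return t
```

Repeatedly collapsing the weakest link yields the nested subtree sequence from which the optimal tree size is then chosen, e.g., with a test sample as mentioned above.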
Monday, January 20, 2020
The Man Who Is Endowed with Magic Essay -- Biography, Walt, Walt Disney
ââ¬Å"You wonââ¬â¢t find anyone who can really explain the magic of Walt,â⬠said Peter Ellenshawââ¬âa Disney artist. ââ¬Å"People see him either in black or white, but he was an extraordinary mixture. He was a common man who was endowed with a touch of magicâ⬠(Greene 6). This magic led Walt on one of the most extraordinary adventures anyone could ever wish to live. Walt grew up traveling from place to place as his father, Elias, worked day and night to make sure his family was fed and comfortable. Elias worked Waltââ¬â¢s two other brothers such ferocity that they ran away with the money they had been saving. Without the two brothers, Waltââ¬â¢s fatherââ¬â¢s dream of the farm life shattered. Soon after, Elias caught typhoid fever and became unable to work on his farm again, so they moved. The move took a toll on Walt. His father had him waking up at 3:30 am to deliver papers, a two-hour task. After delivering, he went to school. Then, he delivered papers for two more hours. Delivering the papers affected Waltââ¬â¢s school performance; he was yet again held back in the second grade, which made him almost two years older than his classmates. This action might have caused a rebellious action from most children but not Walt. His motherââ¬â¢s loving touch kept him in high spirits. As Greene said in The Man Behinf The Magic: The Story of Walt Disney, ââ¬Å"If Walt was endowed with a touch of magic, his mother was the person who wielded the magicianââ¬â¢s wandâ⬠(15). This magic and perseverance would play key roles in Waltââ¬â¢s life. Walt Disney had interests in many areas. One knack he had was recreating cartoons. One classmate recalled, ââ¬Å"He wouldnââ¬â¢t just copy what was in the newspaper, either,â⬠rather he would add his own features with the original as the templat... ...rporation went onto produce Alice in Wonderland, Peter Pan, and Pinocchio to name a few. Although Disneyââ¬â¢s interest might have faltered, his heart never did. His love for his employees drove him to become almost bankrupt. His love for his family led him to be one of the greatest fathers. One of his daughters, Sharon, stated, ââ¬Å"we werenââ¬â¢t raised with the idea that this was a great man. He was Daddy. He went to work every morning. He came home every nightâ⬠(Greene 77). His love for his work led him to become one of the best animators of all not, if not arguably the best. As his best friend, Walter Pfeiffer, said, ââ¬Å"Walt was enthused about everything. He was one hundred percent interested in everything he didâ⬠whether it be his home life or his work. Truly, his magic inspired each animator to have the save magic even after some of his heart left the company.
Saturday, January 11, 2020
Individual players Essay
The purpose of this assignment is to investigate whether or not team cohesion has an impact on an individual player's performance within a rugby team. The assignment will look into whether (if there is an impact at all) the impact is negative or positive, and how it relates to different types of players in different ways. The research will also look into methods of increasing or decreasing cohesion within a rugby team, e.g., nights on the town. Group dynamics is an interactive process concerned with changing patterns of tension, conflict, adjustment, and the building of cohesion within a group. The way a group is affected by its dynamics helps determine how leadership and dominant sub-group patterns emerge. This in turn influences players' behavioural patterns and how attitudes are altered in the short and long term; these changes can be temporary or permanent. An example of this, relating to rugby, would be the emergence of a captain, a vice-captain, the first-team players, and the substitutes. A player's status in the team is a major factor in their power of influence, but the character of a player is also very important. A player who does not attend regularly and has no status can still have a major influence on the team; this depends on the player's ability to impose their ideas, experience, and skill on the team. Shaw described a group as: "....two or more people interacting with one another in such a manner that each person influences and is influenced by each other person." Woods said that one of the defining features of a group is that "....its members having common beliefs, usually unspoken, about what is acceptable and unacceptable, how things are done...." In sport, groups are usually seen in the form of a team. There are many different types of teams within the game of rugby, such as the playing team, the coaching team, and the management team; there is also a team of supporters in the form of the crowd. Cohesion is a term closely associated with group dynamics and is a two-way process. It can be defined as "the degree to which members of the group desire to remain in the group" (Internet 2) or as "...individuals attracted to a group through common motivation and value the relationships within a group" (Galligan). In the case of a rugby team, the common goal would be to win matches, or at a lower level simply to participate in the game. The English national team now has a common goal of trying to win the World Cup; my research would indicate that the national team is very cohesive and does value the relationships within the group. Their success in recent years would be proof of this, as very high cohesion relates to successful results. "One is always hearing about how important it is for a team to gel, bond or have good chemistry. Cohesive teams can achieve dramatic and awesome things. The way players interact has a tremendous impact on the way a team performs." (Internet 1) How well England does in the World Cup will be a good indication of whether success is related to cohesion.
Friday, January 3, 2020
French Expressions Using Monter
The French verb monter literally means "to go up" and is also used in many idiomatic expressions. Learn how to be promoted, rise gently, turn up the volume, and more with this list of expressions with monter.

Possible meanings of monter: to go up, to assemble, to climb up/into, to edit, to mount, to organize or set up, to put up, to rise, to stage, to take up, to turn up, to well up.

Expressions with Monter
monter à: to go up to visit / work in (a city)
monter à bicyclette: to cycle up; to ride a bike
monter à bord (d'un navire): to go on board (a ship)
monter à cheval: to get on a horse; to ride a horse
monter à / jusqu'à: to come up to
monter à / dans sa chambre: to go up to one's room
monter à l'étage: to go upstairs
monter à pied: to walk up
monter aux arbres: to climb trees
monter dans l'estime de quelqu'un: to rise in someone's estimation
monter dans un avion: to board a plane
monter dans un train: to board a train
monter des mailles: to cast on stitches
monter en courant: to run up
monter en flèche: to soar (literally and figuratively)
monter en grade: to be promoted
monter en lacets: to wind upwards
monter en parallèle: to connect in parallel
monter en pente douce: to rise gently
monter en série: to connect in series
monter en titubant: to stagger up
monter en train: to go up by train
monter en voiture: to drive up; to get into a car
monter le bourrichon à quelqu'un (informal): to put ideas in someone's head
monter le coup à quelqu'un (fam): to take someone for a ride
monter l'escalier: to go up the stairs
monter la gamme (music): to go up the scale
monter la garde: to go/be on guard
monter le son: to turn the volume up
monter la tête à quelqu'un: to get someone worked up
monter par l'ascenseur: to take the elevator up
monter prévenir quelqu'un: to go up and tell someone
monter quelque chose en épingle: to blow something all out of proportion
monter quelqu'un contre quelqu'un: to set someone against someone
monter voir quelqu'un: to go up and see someone
monter sur: to climb up on
monter sur le trône: to ascend to the throne
monter sur un arbre: to climb a tree
monter sur une bicyclette: to get on a bicycle
monter sur une colline: to climb a hill
monter sur une échelle: to climb a ladder
monter un cheval: to ride a horse
monter une côte: to go up a hill
monter un coup: to plan a job
monter une histoire pour déshonorer quelqu'un: to invent a scandal to ruin someone's name
Ça fait monter les prix.: It makes prices go up.
C'est l'artiste qui monte.: He's an up-and-coming artist.
C'est l'homme qui monte.: He's on the way up (to fame).
C'est une histoire montée de toutes pièces.: It's a complete fabrication.
être monté contre: to be dead set against
faire monter des blancs en neige: to beat egg whites into stiff peaks
faire monter quelqu'un: to tell someone (e.g., a guest) to come up
faire monter ses valises: to have one's luggage taken up
Je monte la garde (sign): Beware of dog
La moutarde me monte au nez.: I'm losing my temper.
se monter: to come to, amount to
se monter le bourrichon (informal): to get all worked up
se monter la tête (pour un rien): to get all worked up (over nothing)
See also: Monter conjugations