Evan M. Berman
Human Resource Management in Public Service: Paradoxes, Processes, and Problems (published 2000; 33 editions)
Essential Statistics for Public Managers and Policy Analysts (published 2011; 14 editions)
Performance and Productivity in Public and Nonprofit Organizations (published 2006; 7 editions)
Exercising Essential Statistics (published 2001; 5 editions)
Public Administration in Southeast Asia: Thailand, Philippines, Malaysia, Hong Kong, and Macao (published 2009; 8 editions)
Productivity in Public and Non Profit Organizations: Strategies and Techniques (published 1998; 2 editions)
Public Administration in East Asia: Mainland China, Japan, South Korea, Taiwan (published 2010; 5 editions)
People Skills at Work (published 2011; 5 editions)
Public Administration As A Developing Discipline, 2nd Edit (published 2011)

“regression line will have larger standard deviations and, hence, larger standard errors. The computer calculates the slope, intercept, standard error of the slope, and the level at which the slope is statistically significant.

Key Point: The significance of the slope tests the relationship.

Consider the following example. A management analyst with the Department of Defense wishes to evaluate the impact of teamwork on the productivity of naval shipyard repair facilities. Although all shipyards are required to use teamwork management strategies, these strategies are assumed to vary in practice. Coincidentally, a recently implemented employee survey asked about the perceived use and effectiveness of teamwork. These items have been aggregated into a single index variable that measures teamwork. Employees were also asked questions about perceived performance, as measured by productivity, customer orientation, planning and scheduling, and employee motivation. These items were combined into an index measure of work productivity. Both index measures are continuous variables. The analyst wants to know whether a relationship exists between perceived productivity and teamwork.

Table 14.1 shows the computer output obtained from a simple regression. The slope, b, is 0.223; the slope coefficient of teamwork is positive; and the slope is significant at the 1 percent level. Thus, perceptions of teamwork are positively associated with productivity. The t-test statistic, 5.053, is calculated as 0.223/0.044 (rounding errors explain the difference from the printed value of t). Other statistics shown in Table 14.1 are discussed below.

The appropriate notation for this relationship is shown below. Either the t-test statistic or the standard error should be shown in parentheses, directly below the regression coefficient; analysts should state which statistic is shown. Here, we show the t-test statistic:3

The level of significance of the regression coefficient is indicated with asterisks, which conforms to the p-value legend that should also be shown. Typically, two asterisks are used to indicate a 1 percent level of significance, one asterisk for a 5 percent level of significance, and no asterisk for coefficients that are insignificant.4

Table 14.1 Simple Regression Output. Note: SEE = standard error of the estimate; SE = standard error; Sig. = significance.”
― Essential Statistics for Public Managers and Policy Analysts
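
The arithmetic the passage walks through is easy to reproduce. Below is a minimal Python sketch (mine, not the book's) of the same steps: fit a simple regression, recover the slope b and its standard error, form the t statistic as b/SE(b), and apply the asterisk convention described above. The teamwork and productivity data are invented for illustration; the book's Table 14.1 output is not reproduced here.

```python
# Simple regression: slope, standard error of the slope, and the t-test
# of the slope. The data below are synthetic stand-ins for the index
# variables described in the passage.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
teamwork = rng.normal(5.0, 1.0, size=100)        # index of perceived teamwork
productivity = 2.0 + 0.22 * teamwork + rng.normal(0.0, 0.5, size=100)

res = stats.linregress(teamwork, productivity)
t_stat = res.slope / res.stderr                  # t = b / SE(b), as in the text
stars = "**" if res.pvalue < 0.01 else ("*" if res.pvalue < 0.05 else "")

print(f"b = {res.slope:.3f}{stars}  (t = {t_stat:.2f}, p = {res.pvalue:.4f})")
```

With the book's figures (b = 0.223, SE = 0.044), the same division gives b/SE ≈ 5.07; as the passage notes, rounding in the displayed coefficients explains the small difference from the printed t of 5.053.
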
“regression as dummy variables
Explain the importance of the error term plot
Identify assumptions of regression, and know how to test and correct assumption violations

Multiple regression is one of the most widely used multivariate statistical techniques for analyzing three or more variables. This chapter uses multiple regression to examine such relationships, and thereby extends the discussion in Chapter 14. The popularity of multiple regression is due largely to the ease with which it takes control variables (or rival hypotheses) into account. In Chapter 10, we discussed briefly how contingency tables can be used for this purpose, but doing so is often a cumbersome and sometimes inconclusive effort. By contrast, multiple regression easily incorporates multiple independent variables. Another reason for its popularity is that it also takes into account nominal independent variables. However, multiple regression is no substitute for bivariate analysis. Indeed, managers or analysts with an interest in a specific bivariate relationship will conduct a bivariate analysis first, before examining whether the relationship is robust in the presence of numerous control variables. And before conducting bivariate analysis, analysts need to conduct univariate analysis to better understand their variables. Thus, multiple regression is usually one of the last steps of analysis. Indeed, multiple regression is often used to test the robustness of bivariate relationships when control variables are taken into account.

The flexibility with which multiple regression takes control variables into account comes at a price, though. Regression, like the t-test, is based on numerous assumptions. Regression results cannot be assumed to be robust in the face of assumption violations. Testing of assumptions is always part of multiple regression analysis.

Multiple regression is carried out in the following sequence: (1) model specification (that is, identification of dependent and independent variables), (2) testing of regression assumptions, (3) correction of assumption violations, if any, and (4) reporting of the results of the final regression model. This chapter examines these four steps and discusses essential concepts related to simple and multiple regression. Chapters 16 and 17 extend this discussion by examining the use of logistic regression and time series analysis.

MODEL SPECIFICATION

Multiple regression is an extension of simple regression, but an important difference exists between the two methods: multiple regression aims for full model specification. This means that analysts seek to account for all of the variables that affect the dependent variable; by contrast, simple regression examines the effect of only one independent variable. Philosophically, the phrase identifying the key difference—“all of the variables that affect the dependent variable”—is divided into two parts. The first part involves identifying the variables that are of most (theoretical and practical) relevance in explaining the dependent”
― Essential Statistics for Public Managers and Policy Analysts
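
The four-step sequence the passage lists (specify, test assumptions, correct violations, report) maps directly onto code. Here is a hedged Python sketch using statsmodels; the variable names, the unionized dummy, and all data are invented for illustration and are not from the book.

```python
# Multiple regression with control variables and a dummy-coded nominal
# variable, following the specify -> test -> correct -> report sequence.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 200
teamwork = rng.normal(5.0, 1.0, n)
experience = rng.normal(10.0, 3.0, n)        # control variable (rival hypothesis)
unionized = rng.integers(0, 2, n)            # nominal variable coded as a 0/1 dummy
productivity = (1.5 + 0.25 * teamwork + 0.05 * experience
                - 0.10 * unionized + rng.normal(0.0, 0.5, n))

# Step 1: model specification (dependent and independent variables).
X = sm.add_constant(np.column_stack([teamwork, experience, unionized]))
model = sm.OLS(productivity, X).fit()

# Steps 2-3: test assumptions and correct violations. A first check is the
# error term plot: residuals against fitted values should show no pattern.
residuals, fitted = model.resid, model.fittedvalues

# Step 4: report the final regression model.
print(model.summary())
```

In practice the residuals would be plotted (for example, with matplotlib) and inspected for curvature or heteroscedasticity before the final model is reported.
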
“other and distinct from other groups. These techniques usually precede regression and other analyses. Factor analysis is a well-established technique that often aids in creating index variables. Earlier, Chapter 3 discussed the use of Cronbach alpha to empirically justify the selection of variables that make up an index. However, in that approach analysts must still justify that variables used in different index variables are indeed distinct. By contrast, factor analysis analyzes a large number of variables (often 20 to 30) and classifies them into groups based on empirical similarities and dissimilarities. This empirical assessment can aid analysts’ judgments regarding variables that might be grouped together.

Factor analysis uses correlations among variables to identify subgroups. These subgroups (called factors) are characterized by relatively high within-group correlation among variables and low between-group correlation among variables. Most factor analysis consists of roughly four steps: (1) determining that the group of variables has enough correlation to allow for factor analysis, (2) determining how many factors should be used for classifying (or grouping) the variables, (3) improving the interpretation of correlations and factors (through a process called rotation), and (4) naming the factors and, possibly, creating index variables for subsequent analysis. Most factor analysis is used for grouping of variables (R-type factor analysis) rather than observations (Q-type). Often, discriminant analysis, mentioned later in this chapter, is used for grouping of observations.

The terminology of factor analysis differs greatly from that used elsewhere in this book, and the discussion that follows is offered as an aid in understanding tables that might be encountered in research that uses this technique. An important task in factor analysis is determining how many common factors should be identified. Theoretically, there are as many factors as variables, but only a few factors account for most of the variance in the data. The percentage of variation explained by each factor is defined as the eigenvalue divided by the number of variables, whereby the”
― Essential Statistics for Public Managers and Policy Analysts
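
The eigenvalue arithmetic in the passage can be made concrete in a few lines of Python. The sketch below (not from the book) builds six synthetic survey items from two latent factors, computes the correlation matrix, and shows that each candidate factor's share of explained variance is its eigenvalue divided by the number of variables; the data and item structure are invented for illustration.

```python
# Factor analysis groundwork: correlation matrix, eigenvalues, and the
# percentage of variance explained by each candidate factor.
import numpy as np

rng = np.random.default_rng(2)
n_obs, n_vars = 300, 6
f1, f2 = rng.normal(size=(2, n_obs))         # two latent factors
noise = rng.normal(0.0, 0.5, size=(n_obs, n_vars))
# Items 1-3 load on factor 1; items 4-6 load on factor 2.
items = np.column_stack([f1, f1, f1, f2, f2, f2]) + noise

corr = np.corrcoef(items, rowvar=False)      # within/between-group correlations
eigvals = np.linalg.eigvalsh(corr)[::-1]     # eigenvalues, largest first
explained = eigvals / n_vars                 # eigenvalue / number of variables

for i, (ev, share) in enumerate(zip(eigvals, explained), start=1):
    print(f"factor {i}: eigenvalue = {ev:.2f}, variance explained = {share:.1%}")
```

Only the first two eigenvalues come out large here, which is the empirical signal the passage describes for deciding how many factors to retain; rotation and naming of the factors would follow in a full analysis.
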

