AGENDA SETTING AND IMPROVEMENT MONITORING
IN A UNIVERSITY DEPARTMENT
School of Economics and Management
Altai State University
Research Program in Social and Organizational Learning
The George Washington University
Washington, DC 20052 USA
With assistance from Daniel Le and Anna Oshkalo
January 2, 2006
Prepared for the Twelfth Annual Deming Research Seminar
13-14 February 2006 in New York City
AGENDA SETTING AND IMPROVEMENT MONITORING IN A UNIVERSITY DEPARTMENT
Altai State University, Barnaul, Russia, Email: email@example.com
The George Washington University, Email: firstname.lastname@example.org
A Quality Improvement Priority Matrix (QIPM) was used by the members of the Department of Management Science at The George Washington University in the years 2001 to 2005 to consider priorities and to monitor progress. Using the importance/performance ratio (IPR), the authors clustered the features of the Department by their IPR scores into four groups – urgent, high priority, medium priority, and low priority. The paper suggests a number of new ways to interpret the data obtained from a QIPM.
A Quality Improvement Priority Matrix (QIPM) is a method for achieving data-driven decision-making. A QIPM asks customers or employees to rate several features of an organization on two scales – Importance and Performance. That is, how important is that particular feature to them, and how effectively is the organization currently performing on that feature.
A Quality Improvement Priority Matrix was described by specialists from GTE Directories in their February 1995 presentation on how they won the Baldrige Award (Carlson, 1995). A similar matrix, called a "strategic improvement matrix," was used by Armstrong Building Products Operations in their presentation at the February 1996 Baldrige Award conference (Wellendorf, 1996).
A QIPM is usually used in determining priorities and for monitoring performance improvement. The features of greatest interest are those that fall in the SE quadrant defined by high importance and low performance. Those features are considered to have priority for an organization.
The matrix was used in several George Washington University (GWU) student group projects in the late 1990s, and by members of the GWU Department of Management Science in 2001, 2002, 2003, and 2005. The faculty in the Department of Management Science at GWU evaluated various features of the Department and the School of Business, such as Funds to support research, Salaries, Coordination with other departments, Classroom facilities, Travel support, Teaching assistants, and so on (a total of 52 features).
Although the Department is functioning very well, improvement is always possible. We tried to define where improvement is most needed. Thus, we studied how a Quality Improvement Priority Matrix may be used for improvement monitoring in a university department. We wanted to compare the data from 2001 to 2005 to see how feature priorities had changed during this period.
The first questionnaire was distributed in May 2001. The 2001 questionnaire contained 51 features related to the Department and five questions about the matrix itself. Nineteen responses were received from faculty members. The five questions asked whether the members of the Department found the exercise to be useful and whether they thought it would be helpful to other departments in the University. A large majority thought the results were useful and that similar exercises in other departments would be helpful as well. (Umpleby and Melnychenko, 2002)
The second questionnaire was distributed in May 2002; twenty responses were received. The 2002 survey listed 52 features of the Department and included some questions seeking additional information on the features rated high on Importance and low on Performance in 2001. The third and fourth questionnaires were distributed in May 2003 and April 2005 (22 and 13 responses respectively). Both also listed 52 features. We wanted to compare the data from 2001 to 2005 to see how opinions had changed during this period. In 2005 a scale from 1 to 9 was used. Both visual and algebraic analyses were made of the data obtained. We coded the four quadrants as follows: southeast quadrant as 0, northeast quadrant as 1, northwest quadrant as 2, and southwest quadrant as 3. The features of greatest interest are those that fall in the "southeast" (0) quadrant, that is, those features rated high on Importance and low on Performance (see Figure 1). We also calculated the Importance/Performance Ratio (IPR).
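The quadrant coding and IPR calculation described above can be sketched in a few lines of Python. The feature names and ratings below are hypothetical illustrations, not the Department's survey data; the real surveys covered 51-52 features.

```python
# Minimal sketch of the quadrant coding (SE=0, NE=1, NW=2, SW=3) and the
# Importance/Performance Ratio (IPR). Importance is plotted east-west and
# Performance north-south; 5 is the midpoint of the 1-9 rating scale.

def quadrant(importance, performance, midpoint=5.0):
    """Return the quadrant code: 0 = SE (high I, low P), 1 = NE, 2 = NW, 3 = SW."""
    if importance >= midpoint:
        return 0 if performance < midpoint else 1
    return 2 if performance >= midpoint else 3

def ipr(importance, performance):
    """Importance/Performance Ratio: the higher the IPR, the higher the priority."""
    return importance / performance

# Hypothetical (Importance, Performance) ratings for two features
features = {
    "Travel support": (8.2, 3.5),
    "Classroom facilities": (6.1, 6.8),
}
for name, (i, p) in features.items():
    print(name, quadrant(i, p), round(ipr(i, p), 2))
```

Features printed with code 0 would be the "southeast" candidates for priority attention.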
Controversy and Consensus in Evaluation
Standard deviation was calculated as a way of identifying controversial and consensual items for each of the measures I, P, and IPR. Six features were the most controversial in terms of Importance in the years 2002-2005. They were always within the ten most controversial items: Faculty websites; Consulting opportunities in the DC area; Faculty annual reports; Opportunities to meet local businessmen and government managers; Secretarial support; and Help with writing research proposals. There were no items on which there was consensus on Importance (bottom ten in standard deviation) in the years 2002-2005.
No features of the Department were consistently controversial (top ten standard deviation) in terms of Performance in the years 2002-2005. There was consensus on the Performance of only one feature, Library book collection, in the years 2002-2005. Similarly, controversial and consensus features relative to IPR were identified.
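The standard-deviation screen for controversy and consensus can be sketched as follows. The ratings here are hypothetical; the actual surveys collected 13-22 faculty responses per feature.

```python
# Sketch: rank features by the standard deviation of their Importance ratings.
# High standard deviation = controversial; low = consensual. Data hypothetical.
from statistics import stdev

ratings = {  # feature -> individual Importance ratings from respondents
    "Faculty websites": [2, 9, 4, 8, 1, 7],
    "Secretarial support": [9, 3, 8, 2, 7, 4],
    "Library book collection": [6, 6, 7, 6, 6, 7],
}

by_spread = sorted(ratings, key=lambda f: stdev(ratings[f]), reverse=True)
most_controversial = by_spread[0]   # highest standard deviation
most_consensual = by_spread[-1]     # lowest standard deviation
print(most_controversial, most_consensual)
```

In the study, the same ranking was applied separately to Importance, Performance, and IPR, keeping the top ten and bottom ten by standard deviation.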
Analysis of Importance, Performance, and IPR
Table 1. The most stable high Importance features (always in the first 15) from 2001 to 2005
Figure 1. Quality Improvement Priority Matrix for 2005
We compared IPR for the years 2001-2005 and identified features which have always been in the southeast quadrant. These features have a stable high priority for the Department (see Table 5).
Table 5. The features always in the SE quadrant from 2001 to 2005
Approaches to Identifying Priorities
One of the goals of this study was to develop methods to more precisely identify priorities. In particular, we tried to develop an approach for automatically clustering features with different priorities.
In earlier studies only one approach was used for this purpose: a visual analysis of the Quality Improvement Priority Matrix as shown in Figure 1. Features in the southeast quadrant were considered to have a high priority. However, as our study demonstrates, a visual analysis of a QIPM and identifying features in the SE quadrant do not discriminate priorities sufficiently, primarily because up to half of all features routinely fall into this quadrant. For example, 19 of 51 features lie in quadrant 0 in 2001, 17 of 52 in 2002, 23 of 52 in 2003, and 26 of 52 in 2005.
We identified one more problem with this approach: a 'border effect'. For example, a feature with very high Importance (e.g., close to 9) and Performance slightly higher than 5 (e.g., 5.01) will fall in the NE ('successful') quadrant. That seems wrong, given the high priority of this feature (its IPR is close to 2). Conversely, a feature with Importance slightly above 5 (e.g., 5.1) and Performance slightly below 5 (e.g., 4.95) will fall into the SE ('urgent') quadrant although its IPR is close to 1 (i.e., it is not urgent).
We tried another approach to identifying priorities by using average Importance and average Performance, rather than the scale midpoint of 5, as the midpoint of the graph. This way of defining quadrants provides a more even allocation of features among the quadrants, especially in years when average Importance and Performance differ significantly from 5. The allocation of the 2005 features in the 'improved' matrix is shown in the Appendix (the M2 column) and in Figure 2. After applying this approach, 12 of 51 features lie in quadrant 0 in 2001, 12 of 52 in 2002, 8 of 52 in 2003, and 17 of 52 in 2005. But the border effect remains and becomes even more apparent. For example, for 2001, a feature with I=7.61 and P=3.18 falls into quadrant 3 ('low priority'). That seems wrong because the feature has an IPR of 2.40.
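The 'improved' matrix differs from the original one only in where the quadrant boundaries sit. A minimal sketch, using hypothetical ratings rather than the survey data:

```python
# Sketch of the 'improved' matrix: quadrant boundaries at the mean Importance
# and mean Performance of all features, rather than the scale midpoint 5.
from statistics import mean

features = {  # hypothetical feature -> (Importance, Performance)
    "Travel support": (8.4, 3.1),
    "Salaries": (8.9, 4.0),
    "Faculty websites": (5.2, 6.5),
    "Parking": (4.8, 5.9),
}

i_mid = mean(i for i, _ in features.values())
p_mid = mean(p for _, p in features.values())

def quadrant(i, p, i_mid, p_mid):
    # 0 = SE (above-average I, below-average P), 1 = NE, 2 = NW, 3 = SW
    if i >= i_mid:
        return 0 if p < p_mid else 1
    return 2 if p >= p_mid else 3

codes = {f: quadrant(i, p, i_mid, p_mid) for f, (i, p) in features.items()}
print(codes)
```

With data-driven midpoints, fewer features pile up in one quadrant, although, as noted above, the border effect persists.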
Therefore, using only Importance indexes or only Performance indexes does not permit making a single list of priorities. (A feature with high Importance may have low priority and vice versa.) Calculating IPR as a single index of a feature's priority (Prytula et al., 2002) provided a more convenient priority classification. The higher the IPR, the higher priority a feature has.
Comparative analysis of the values of IPR for 4 years and the Quality Improvement Priority Matrixes for the corresponding years demonstrates that the priority of features can be identified using the value of IPR and the position of the feature in the quadrants. Visual analysis of Quality Improvement Priority Matrixes reveals a similar distribution in the data with the same IPR interval for all 4 surveys (e.g., see Figure 3 and Figure 4). This observation served as the basis for developing a new approach to identifying priorities by choosing clusters (ovals) representing different priorities. Comparing the positions of the features on the diagrams and their IPRs led to the idea that the features could be clustered by the IPR interval.
Figure 2. 'Improved' Quality Improvement Priority Matrix for 2005 (Numbers show rank by IPR)
For each year the IPR values >=2, 1.5 – 1.99, 1.25 – 1.49, and < 1.25 were used to identify four clusters. These clusters identified items that were labeled as urgent, high priority, medium priority, and low priority (Figures 3 and 4).
Cluster   Priority          IPR interval
0         urgent            IPR >= 2
1         high priority     1.5 – 1.99
2         medium priority   1.25 – 1.49
3         low priority      IPR < 1.25
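The clustering by IPR interval reduces to a simple threshold test. A sketch, using the thresholds from the table above and hypothetical IPR values:

```python
# Sketch of clustering features into priority groups by IPR interval,
# using the thresholds 2, 1.5, and 1.25 from the table above.

def ipr_cluster(ipr, thresholds=(2.0, 1.5, 1.25)):
    """Return 0 (urgent), 1 (high), 2 (medium), or 3 (low priority)."""
    for cluster, t in enumerate(thresholds):
        if ipr >= t:
            return cluster
    return 3

# Hypothetical feature -> IPR values
iprs = {"Travel support": 2.40, "Salaries": 1.72, "Websites": 1.30, "Parking": 0.95}
clusters = {f: ipr_cluster(v) for f, v in iprs.items()}
print(clusters)
```

Because the thresholds are a parameter, the same function supports experimenting with alternative IPR intervals, as discussed below.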
Figure 3. Priority Clusters in 2005 Figure 4. Priority Clusters in 2003
The next important question in our study was what criterion should be used to select the IPR intervals that specify the clusters. To explore this question, we calculated the correlation coefficient (r) within clusters. For unclustered data, there is a low Performance – Importance correlation: r = 0.32 in 2001, 0.51 in 2002, 0.52 in 2003, and 0.18 in 2005. Intercorrelation within clusters (ovals) is much higher; for example, for 2001 the correlations for the clusters are .96, .88, .85, and .90. Thus, one way to automatically cluster features with different priorities is to choose intervals that create clusters with the highest intercorrelation coefficient.
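The within-cluster correlation criterion needs only a Pearson r over the (Importance, Performance) points assigned to each cluster. A self-contained sketch with hypothetical points from a single cluster:

```python
# Sketch of the cluster-selection criterion: compute the Importance-Performance
# correlation (Pearson r) within a cluster. Points below are hypothetical.
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical (Importance, Performance) points in one IPR cluster
cluster_points = [(8.5, 3.6), (7.9, 3.3), (7.2, 3.0), (6.8, 2.8)]
i_vals = [i for i, _ in cluster_points]
p_vals = [p for _, p in cluster_points]
print(round(pearson_r(i_vals, p_vals), 3))
```

Candidate IPR thresholds would be scored by computing this coefficient for each resulting cluster and preferring the thresholds with the highest within-cluster correlations.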
The second approach we suggest is based on calculating the coefficient of determination (r²) in the following regression equation:

P = b0 + b·I + b1·C1 + b2·C2 + b3·C3

where P is Performance, I is Importance, and C1, C2, and C3 are dummy variables corresponding to clusters 1, 2, and 3. Each dummy variable has the value 1 or 0 depending on whether a point is or is not in the corresponding cluster. Coefficients b1, b2, and b3 represent the increase in Performance for each cluster compared with cluster 0. We suggest that the higher r² is in this regression equation, the more precise the clustering. For example, for 2005 we have the following regression equation for the clusters indicated above:
P = -1.92 + .61I + 1.59C1 + 2.77C2 + 3.72C3 with r² = 0.90
The parallel lines on Figure 5 indicate the levels of medium Performance in the corresponding clusters.
The difference between coefficients b1, b2, and b3 is close to 1. This means that the average cluster Performance changes by 1 from cluster to cluster.
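The dummy-variable regression can be fitted by ordinary least squares via the normal equations. The sketch below uses synthetic data generated exactly from assumed coefficients (not the Department's survey data), so the fit recovers them:

```python
# Sketch: fit P = b0 + b*I + b1*C1 + b2*C2 + b3*C3 by ordinary least squares
# (normal equations, pure Python). The data are synthetic, generated exactly
# from assumed coefficients b = (-2, 0.6, 1.5, 2.8, 3.7).

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        pivot = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[pivot] = M[pivot], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def ols(X, y):
    """Least squares via the normal equations X'X beta = X'y."""
    k = len(X[0])
    XtX = [[sum(row[a] * row[b] for row in X) for b in range(k)] for a in range(k)]
    Xty = [sum(row[a] * yi for row, yi in zip(X, y)) for a in range(k)]
    return solve(XtX, Xty)

raw = [  # (Importance, cluster, Performance), synthetic and noiseless
    (8.0, 0, 2.8), (7.0, 0, 2.2),
    (7.5, 1, 4.0), (6.5, 1, 3.4),
    (6.0, 2, 4.4), (5.0, 2, 3.8),
    (5.5, 3, 5.0), (4.5, 3, 4.4),
]
X = [[1.0, i, float(c == 1), float(c == 2), float(c == 3)] for i, c, _ in raw]
y = [p for _, _, p in raw]
beta = ols(X, y)
print([round(b, 2) for b in beta])
```

With noiseless data the recovered b1, b2, b3 rise by roughly 1 from cluster to cluster, mirroring the pattern observed in the 2005 fit.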
Figure 5. Lines of medium performance in the priority clusters, 2005
In practice, it is more convenient to use an "averaged" regression equation of the following kind:

P = a0 + a1·I + a2·C

where C is a variable giving the number of the cluster: it takes the value 0, 1, 2, or 3 according to the cluster a point falls into. The coefficient a2 represents the average shift in Performance between clusters. For the clusters indicated above, we have the following equation for 2005:
P = -1.56 + .62I + 1.11C with r² = 0.89
The coefficient a2 shows that the average shift in Performance is 1.11 for the selected clusters. The shift in average Performance may also serve as an additional criterion for selecting IPR intervals: thresholds can be chosen so that the clusters exhibit a desired average shift in Performance between clusters (the coefficient a2 in the regression equation). We wrote a Microsoft Excel macro implementing this approach to identifying features with different priorities.
The number of points/features in the clusters may also be a useful additional criterion in practice. Table 6 demonstrates the influence of different IPR intervals (clusters) on the number of features in clusters, r², and the coefficients of the regression equation. As the table demonstrates, it is possible to cluster features, depending on the specifics of the situation, according to several criteria, such as the number of features in clusters, r², and the average shift in Performance. For example, choosing IPR interval thresholds of 1.9, 1.6, and 1.3 for clustering the 2001 features forms a cluster with 10 urgent priority features, a cluster with 13 high priority features, a cluster with 13 medium priority features, and a cluster with 15 low priority features. The average Performance shift between clusters is 1.07 and r² = 0.88. Such criteria may be used for developing software for automatically clustering features with different priorities (IPRs).
Table 6. IPR intervals, coefficients of the regression equation, r², and number of features in clusters
In this way, we formulated an integrated approach to automatically clustering features with different priorities. We are currently developing software that implements this approach. With this software, it is possible to cluster features according to several criteria, such as the number of clusters (for different levels of accuracy), the number of features in clusters, IPR intervals, the intercorrelation coefficient in clusters, the coefficient of determination, and the average shift in Performance between clusters.
Analysis of Dynamics
The correlation matrix (Table 7) demonstrates that the greater the distance in time, the lower the correlation coefficients between indexes (Importance, Performance, and IPR).
Table 7. Correlation matrix for 2001-2005 surveys
To analyze how much the features moved from year to year, we used the following indicators:
1) Difference in IPR: dIPR = IPRt2 – IPRt1
2) The length of a vector in the Importance – Performance coordinate scale:

DI = sqrt((It2 – It1)^2 + (Pt2 – Pt1)^2)
According to the levels of dIPR and DI, and a chosen DI threshold (DIt), all features were classified into three groups with different levels and directions of change:
1: DI >= DIt and dIPR is positive (regress and greater urgency)
2: DI >= DIt and dIPR is negative (progress and less urgency)
3: DI < DIt (change is not significant)
The parameter DI represents the amount of movement. dIPR represents the direction of movement (becoming more urgent or less urgent). DIt is a threshold of significance.
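The two-year classification above can be sketched directly. The coordinates and the threshold DIt = 0.5 below are hypothetical illustrations:

```python
# Sketch of the two-year dynamics indicators: dIPR (direction of change in
# priority) and DI (distance moved in the Importance-Performance plane).
# The DIt threshold of 0.5 is an arbitrary illustrative choice.
import math

def classify(i1, p1, i2, p2, di_threshold=0.5):
    """Group 1: regress/greater urgency; 2: progress/less urgency; 3: no change."""
    d_ipr = i2 / p2 - i1 / p1             # dIPR = IPRt2 - IPRt1
    di = math.hypot(i2 - i1, p2 - p1)     # DI: length of the movement vector
    if di < di_threshold:
        return 3                          # change is not significant
    return 1 if d_ipr > 0 else 2

print(classify(7.0, 5.0, 7.5, 4.0))   # performance fell: greater urgency
print(classify(7.0, 4.0, 7.0, 6.0))   # performance improved: less urgency
print(classify(6.0, 6.0, 6.1, 6.0))   # negligible movement
```

DI gates on whether the feature moved at all; the sign of dIPR then determines whether the movement raised or lowered its priority.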
For multiyear analysis of feature dynamics, we extended the two year analysis and used the following indicators:
1) Difference in IPR: dIPR = abs(IPRt1 – IPRt2) + abs(IPRt2 – IPRt3) + abs(IPRt3 – IPRt4)
2) The sum of the intervals between points in the Importance – Performance coordinate scale:

DI = sqrt((It2 – It1)^2 + (Pt2 – Pt1)^2) + sqrt((It3 – It2)^2 + (Pt3 – Pt2)^2) + sqrt((It4 – It3)^2 + (Pt4 – Pt3)^2)
The first indicator reflects changes between clusters with different priorities, while the second indicator represents only the amount of movement. Table 8 and Figure 6 show the dynamics of the features that changed the most in terms of dIPR. Table 9 shows the dynamics of the features that changed the most in terms of DI. It is interesting to note that features with high scores for DI often have low scores for dIPR and vice versa. The overall correlation coefficient (r) between these parameters is equal to -.01 (see Figure 7).
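The multiyear indicators are straightforward sums over consecutive surveys. A sketch with a hypothetical feature tracked over the four survey years:

```python
# Sketch of the multiyear indicators: dIPR as the sum of absolute year-to-year
# IPR changes, and DI as the total path length in the Importance-Performance
# plane. The trajectory below is hypothetical.
import math

def multiyear_dipr(iprs):
    """Sum of absolute IPR changes between consecutive surveys."""
    return sum(abs(b - a) for a, b in zip(iprs, iprs[1:]))

def multiyear_di(points):
    """Total distance moved: points is [(I, P), ...] for consecutive surveys."""
    return sum(math.hypot(i2 - i1, p2 - p1)
               for (i1, p1), (i2, p2) in zip(points, points[1:]))

# Hypothetical (Importance, Performance) for one feature in 2001, 2002, 2003, 2005
points = [(7.0, 5.0), (7.6, 4.0), (7.0, 4.5), (8.4, 4.0)]
iprs = [i / p for i, p in points]
print(round(multiyear_dipr(iprs), 2), round(multiyear_di(points), 2))
```

A feature can accumulate a large DI while oscillating within one cluster (small dIPR), which is consistent with the near-zero correlation between the two indicators noted above.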
Table 8. The features that changed the most in terms of dIPR (Priority)
Figure 6. The features that changed the most in terms of dIPR
Table 9. The features that changed the most in terms of DI (Movement)
Figure 7. How features changed in time on both dIPR and DI
We compared the distance a feature moved during a time interval with the amount of change in IPR. IPR changes when a feature moves in a perpendicular direction (from cluster to cluster). Although some items moved a significant amount within a cluster, the more important movement was between clusters, even if the distance moved was not as great. Movement between clusters means a change in priority. Therefore, the indicator dIPR is more important for analyzing changes in priorities.
To understand why some features were evaluated with low Performance in the previous year, we also included some additional questions in the survey. For example, for the item Travel support, two additional questions were included: "We need more support for travel to domestic conferences" and "We need more support for travel to international conferences". These questions were answered using a Likert scale: "Strongly agree – Agree – Neutral – Disagree – Strongly disagree". The results obtained from the additional questions indicate, for example, that with regard to building maintenance more people agree with the statement, "More attention should be given to improving the appearance of common work spaces," than agree with the statement, "More attention should be given to cleaning and maintaining buildings." Another result indicates a higher dispersion (interval not cardinal variables) for the statement, "We should coordinate more with other departments on research projects," than for the statement, "We should coordinate more with other departments on curricula."
In this paper, we investigated a number of ways to interpret data obtained from a Quality Improvement Priority Matrix. We used both visual and algebraic analyses of QIPM data. We investigated how choosing different IPR intervals to define clusters changes intercorrelation. We suggested a new, integrated approach for clustering features with different priorities. We compared the distance a feature moved during a time interval with the amount of change in IPR. We used standard deviation as a measure of consensus and controversy. As expected, we found that the correlation between years decreases with the amount of time elapsed.
As we demonstrated, these methods are easy to understand and effective in terms of time and resources. At the same time, this approach provides enough precision for monitoring changes in priorities and performance. The methods described and the results obtained were discussed at a planning meeting of the Department of Management Science, and they helped to formulate a new strategic plan for the Department. QIPM may be used in universities, businesses, government agencies, and non-governmental organizations (NGOs) for selecting priorities and monitoring improvement. Regular and relevant information from employees and customers about the features that most need improvement allows managers to focus attention and resources where they can best contribute to improving employee and customer satisfaction.
Research for this paper was supported in part by the Junior Faculty Development Program, which is funded by the Bureau of Educational and Cultural Affairs (ECA) of the United States Department of State, under authority of the Fulbright-Hays Act of 1961 as amended and administered by American Councils for International Education: ACTR/ACCELS. The opinions expressed herein are those of the authors and do not necessarily express the views of either ECA or American Councils.
The authors wish to thank Daniel Le and Anna Oshkalo for their help with this paper.
Carlson, M. "GTE Directories: Customer Focus and Satisfaction," The Quest for Excellence VII, The Official Conference of the Malcolm Baldrige National Quality Award, February 6-8, 1995, Washington, DC.
Prytula, Y., Cimesa, D., and Umpleby, S. "Improving the Performance of Universities in Transitional Economies." Research Program in Social and Organizational Learning, The George Washington University, Washington, DC, USA, 2004, http://www.gwu.edu/~rpsol/
Umpleby, S. and Melnychenko, O. "Quality Improvement Matrix: A Tool to Improve Customer Service in Academia," in J.A. Edosomwan (ed.) Customer Satisfaction Management Frontiers – VI: Serving the 21st Century Customer, Fairfax, VA: Quality University Press, 2002, pp. 6.1-6.12.
Wellendorf, J.A. "Armstrong Building Products Operations: Information and Analysis," The Quest for Excellence VIII, The Official Conference of the Malcolm Baldrige National Quality Award, February 5-7, 1996, Washington, DC.
Appendix. Values for I, P, and IPR in 2005 (ranked by IPR)