MEASURING THE VALUE OF LESSONS LEARNED PROGRAMS

Colleen M. Gauntner
Project Performance Corporation

Mary McCune
U.S. Department of Energy Office of Environmental Management

R.F. Shangraw, Jr., Ph.D.
Project Performance Corporation

Steve Wujciak
Volpe Transportation Systems Center
U.S. Department of Transportation

ABSTRACT

In theory, an active lessons learned program is highly beneficial to the implementing organization. In practice, however, the benefits of lessons learned programs are poorly documented. The purpose of this research is to develop an approach for evaluating the benefits of lessons learned programs targeting Environmental Management projects across the United States Department of Energy (DOE) complex. This approach includes: 1) the design of a longitudinal study to assess changes in behavior resulting from an issued lesson learned; and, 2) the development of a standardized survey instrument for quantitatively measuring these changes. This research will be of interest to managers currently funding lessons learned activities because it will help them better understand the return on their investment. This study is of importance to the Federal Government community in general because more resources are likely to be devoted to formal lessons learned programs in the upcoming years.

INTRODUCTION

Over the past several years, the Federal Government has become increasingly active in organizational learning programs. These programs are designed to improve the effectiveness and efficiency of government operations and, most recently, have gained further support through the National Performance Review and other reinvention efforts. The U.S. Department of Energy (DOE), for example, has an active Lessons Learned Program and recently issued a Lessons Learned Standard (DOE-STD-7501-95) and Handbook (DOE-HDBK-7502-95) to encourage organizational learning and provide guidance on how to structure lessons learned programs. Many of the DOE field sites and contractors have implemented formal lessons learned programs and are expanding the communication of lessons learned information through the Internet.

The potential benefits of implementing a lessons learned program within an organization include reduced program expenditures, shorter project schedules, more effective technical solutions, and improved organizational communication. With the exception of a small number of qualitative case studies, though, no researcher has systematically documented the benefits accruing from a lessons learned program. In general, the research community has relied upon studies in related fields (e.g., communications, research & development policy, organizational theory) to justify the investment in organizational learning programs. In addition, no studies have attempted to measure the benefits of lessons learned programs specifically targeting Environmental Management activities.

The purpose of this paper is to describe an approach for evaluating the benefits of lessons learned programs targeting Environmental Management projects across the DOE complex. This approach involves: 1) the design of a longitudinal study to assess changes in behavior resulting from an issued lesson learned; and, 2) the development of a standardized survey instrument for quantitatively measuring these changes. The paper summarizes existing literature and research in this area, describes the evaluation approach, discusses challenges specific to this approach, presents preliminary observations, and proposes a path forward for full implementation of the study.

LITERATURE SUMMARY

The problem faced by researchers is the lack of effective measurement practices within organizational learning programs. Although the concept of measuring learning is familiar to social scientists, organizational learning scholars, and managers, the practice of measuring learning has yet to be effectively implemented. To provide a sense of what has been discussed and accomplished to date, related literature is summarized below in three general categories (also depicted in Figure 1): organizational learning literature, measurement literature, and management literature.

Fig. 1. Three areas of literature.

Organizational Learning Literature

The literature devoted to organizational learning theory focuses on defining the concept of organizational learning, how this process differs from individual learning, and what factors affect the capacity of an organization to learn -- all important considerations in developing measures of the concept. Despite its growing popularity, "organizational learning" has multiple interpretations, making it increasingly difficult to establish a general definition. Overall, "a company is a learning organization to the degree that it has purposefully built its capacity to learn as a whole system and woven that capacity into all of its aspects: vision and strategy, leadership and management, culture, structure, systems, and processes" (Redding, 1997). Mahler explains that a distinction in learning theories exists between those who view learning as the result of a rational, information-based system ("rational-analytic learning theory") and those who view learning as a socially constructed process ("interpretive theory") (Mahler, 1997).

Despite definitional differences, Fiol and Lyles claim that scholars of organizational learning agree on three general areas: 1) distinction between individual and organizational learning; 2) relevance of environmental alignment (i.e., forming networks with outside sources); and, 3) presence of the contextual factors of culture, strategy, structure, and environment in the learning process (Fiol and Lyles, 1985). These three areas are explored below.

Much discussion exists regarding the first area, examining how individual and organizational learning differ. Individual learning is an important element in the learning process of an entire organization because organizations ultimately learn through individual members. As Senge notes, "Organizations learn only through individuals who learn. Individual learning does not guarantee organizational learning. But without it no organizational learning occurs" (Senge, 1990). It is necessary to emphasize, however, that individual and organizational learning are separate processes, and that organizational learning is not merely the sum of all individuals learning within an organization. Without the understanding of how to utilize individual knowledge to further organizational goals and progress, no organizational learning is actually taking place. Kim asserts that there "needs to be a way to get beyond the fragmented learning of individuals and spread the learning throughout the organization" (Kim, 1993). Kim suggests "the design and implementation of microworlds or learning laboratories" as a mechanism for transferring learning from the individual to the organization (Kim, 1993). The DOE Lessons Learned Program may be viewed as a learning laboratory of sorts, through which lessons learned information is transferred from the originating individual or organization to the entire DOE complex.

The second area of consensus involves establishing connections with the environment. As evidenced above, individual learning is vital to the organizational learning process because it is through individuals that the organization’s memory, past decisions, and methods of administration are retained. Likewise, the members of the organization who are more exposed to external information sources, and to whom others turn for information, are extremely valuable to the organization (Allen, 1977). Allen terms these individuals "gatekeepers". They are essential not only for their combined memories, but also because they help to complete another part of the organizational learning circle—ensuring that the organization is in alignment with its environment.

This exchange of knowledge via links with outside sources is one of the fundamental premises of the DOE Lessons Learned Program. Through the dissemination of information that highlights both negative and positive aspects of DOE business, individuals and organizations throughout the complex are able to help one another improve their operations. Site Lessons Learned Coordinators act as gatekeepers, ensuring that their organization is tapped into information from internal and external sources.

The third concept agreed upon by most organizational learning scholars involves the influence of the contextual factors of organizational culture, strategy, structure, and environment. These contextual factors are vital because they "create and reinforce learning and are created by learning" (Fiol and Lyles, 1985). Organizational culture and strategy are important because they affect an organization’s capacity for change. As Mahler states, "Culture provides a reservoir of organizational meanings against which results, experience, and performance data are interpreted and inquiries about changes in procedures and program technologies can proceed " (Mahler, 1997). Structure is important because it determines the processes and procedures by which an organization learns. An environment conducive to learning is the final influential factor. Organizations need enough external stimulation to foster learning, but not so much that it becomes difficult for an organization to maintain pace with the environment.

Measurement Literature

The measurement literature focuses less on what constitutes learning, and more on how to capture or quantify learning. This emphasis has important implications for today’s business world, because many learning theorists believe that "if you can’t measure it, you can’t manage it" (Garvin, 1993). Consequently, numerous researchers have attempted to conceptualize methods of measuring the effectiveness of learning initiatives or training programs. Though researchers have spent much time and effort assessing the need for measurement, many have had difficulty in developing and implementing an effective tool. Summarized below are several relevant theories and methods of measuring learning. While none are specifically suited to the purposes of this study, each adds value to the development and improvement of the evaluation approach discussed in this paper.

One of the first attempts to quantify learning was the learning curve. The curve was developed from World War II experiences, when researchers realized that, as the same worker population made more of the same product, the time and resources needed to make the product decreased. Essentially, the learning curve graphically portrays the process of learning: the slope of the curve shows the relationship between the quantified values of time and output. Early researchers found the average rate of learning to be represented in an 80% learning curve (See Figure 2), meaning that when production doubles, costs fall to 80% of their previous level (Boze, 1994). Many theorists have since suggested different shapes and slopes, and have also experimented with variations of the learning curve concept.
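To make the arithmetic behind the 80% curve concrete, the following minimal Python sketch implements the standard log-linear learning-curve model implied by that figure; the function name and the sample first-unit cost are illustrative rather than values drawn from this study.

```python
import math

def learning_curve_cost(first_unit_cost, units_produced, learning_rate=0.80):
    """Cost (or time) of the nth unit under a log-linear learning curve.

    With an 80% learning rate, each doubling of cumulative production
    reduces unit cost to 80% of its previous level.
    """
    b = math.log(learning_rate) / math.log(2)   # slope exponent (negative)
    return first_unit_cost * units_produced ** b

# Illustrative values only: cost of the 1st, 2nd, and 4th unit
for n in (1, 2, 4):
    print(n, round(learning_curve_cost(100.0, n), 1))   # 100.0, 80.0, 64.0
```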

Fig. 2. Basic 80% learning curve.

David Garvin advocates the half-life curve, a variation of the learning curve developed by Analog Devices, as an alternate method of measuring learning. A half-life curve measures the time necessary to achieve a 50% improvement in a specified performance measure. Unlike the traditional learning curve, the half-life curve is not confined to the outputs of cost or price. One drawback to the half-life curve is that it only measures results, and is therefore unlikely to capture incremental learning involved in lengthy systemic changes. To collect data for the half-life curve, Garvin suggests using surveys, interviews, and direct observation. Calculating the half-life curve enables organizations to visualize changes in worker productivity over time resulting from cognitive and behavioral modifications (Garvin, 1993).
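As a rough illustration of the half-life concept, the sketch below assumes a constant exponential rate of improvement and back-calculates how long a 50% improvement in a performance measure would take; the defect-rate figures are hypothetical.

```python
import math

def half_life_months(initial_value, current_value, months_elapsed):
    """Months required for a 50% improvement in a performance measure,
    assuming (as a simplification) a constant exponential rate of improvement.
    """
    rate = math.log(current_value / initial_value) / months_elapsed  # negative if improving
    return math.log(0.5) / rate

# Hypothetical example: a defect rate fell from 40 to 25 per month over 9 months
print(round(half_life_months(40.0, 25.0, 9), 1))  # about 13.3 months to halve
```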

Campbell and Cairns have experimented with another option -- the use of behaviorally anchored rating scales (BARS). BARS, originally developed by Smith and Kendall (Smith and Kendall, 1963), can provide credible and reasonably consistent scales for a variety of behaviors. This tool differs from other measurement scales because, rather than using subjective measures such as "excellent performance" or "average performance," specific units representing different grades of behavior are set as baselines. The results are displayed on a scoring scale to measure the gap between actual and desired performance. This gap analysis quickly highlights areas for improvement. Campbell and Cairns advocate the use of BARS as an effective tool for measuring human performance because it is "both objective and systematic while incorporating the inherent judgmental aspects of such a task" (Campbell and Cairns, 1994).
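The gap analysis at the core of BARS can be sketched as follows; the behavioral anchors, the seven-point scale, and the scores are hypothetical and serve only to illustrate the comparison of actual against desired performance.

```python
# Hypothetical behaviorally anchored ratings (1 = worst anchor, 7 = best anchor)
desired = {"shares lessons with peers": 6,
           "applies lessons to work plans": 6,
           "reports near-misses promptly": 7}
actual  = {"shares lessons with peers": 4,
           "applies lessons to work plans": 3,
           "reports near-misses promptly": 6}

# Gap analysis: larger positive gaps flag behaviors needing improvement
gaps = {behavior: desired[behavior] - actual[behavior] for behavior in desired}
for behavior, gap in sorted(gaps.items(), key=lambda item: -item[1]):
    print(f"{behavior}: gap of {gap} scale point(s)")
```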

Another alternative is group feedback analysis (GFA), a method suggested by Heller and Brown (Heller and Brown, 1995). GFA combines quantitative and anecdotal measures of learning. Although the method is time intensive, Heller and Brown consider the results to be more reliable because of the broader scope of the assessment. GFA uses employee questionnaires followed by managerial observation. The use of both questionnaires and observations allows "managers and other employees the opportunity to check on validity, omissions, and possible misunderstandings as well as to help with interpretation" (Heller and Brown, 1995).

Clearly, there are many different viewpoints on measuring organizational learning. Finding a method that is appropriate and effective for one organization is dependent upon that organization’s specific culture. Developing an approach to measuring the effectiveness of lessons learned programs will likely involve integrating the most relevant pieces of several existing theories and methods.

Management Literature

Literature emerging from the project management sector focuses on management initiatives to help employees improve performance. The theory of organizational learning is of interest to management, as are methods of measurement; yet, the focus of this literature is on changing behavior. These management initiatives commonly include training, productivity improvement, or rewards and incentives. Discussed below are several concepts from management literature related to performance improvement resulting from learning in the workplace.

Much of the management literature discusses the distinction between learning and changing behavior. Employees can receive the information from a lesson learned but may or may not have the opportunity to apply the knowledge to their daily operations. This may be due to personal choice, lack of time, or management opposition to changing the status quo (Kirkpatrick, 1994). For example, successful training interventions require identification of the problem, determination of appropriate training content and delivery, measurement of training results, and connection between training and job performance (Galagan, 1994). For a successful lessons learned intervention, as well, managers must help employees understand the connections between the shared information and the resulting changes in performance.

In addition to the lack of connection between learning and performance, Rossett notes the discontinuity in the workplace between separate learning initiatives (Rossett, 1997). Distinct learning initiatives often take place concurrently, but employees sometimes have difficulty realizing the connections between them. Helping employees form these connections will enable them to better understand how learning can be incorporated into the workplace. To ensure the effectiveness of lessons learned programs, project managers need to demonstrate their commitment to lessons learned activities by encouraging employees to think about how each lesson learned may affect their particular jobs. A good incentive for managers to support and encourage lessons learned activities is the ability to see benefits; an effective measurement tool might succeed in providing this incentive.

Another consideration for management is the existence of different subcultures within an organization that can affect the organization’s ability to learn. Schein asserts that the typical organization consists of three broad subcultures with fundamental differences: (1) operators, (2) engineers, and (3) executives (Schein, 1997). Schein explains that these three groups are frequently misaligned with each other because of the different types of work that each has been trained to do. The operators (e.g., DOE line workers) generally believe that human interaction, communication, trust, and teamwork are all essential for getting the job done. The second group, engineers, thinks more quantitatively and systematically. The last group, executives, focuses on the financial status of the organization and its relationship to external competitors.

Friction within an organization often develops due to the different viewpoints and backgrounds of each of the subcultures. Though internal disagreement within organizations is inevitable, it becomes more problematic in learning initiatives such as training. As Schein says, "It is when organizations attempt to learn in a generative way… that these three cultures collide and we observe frustration, low productivity, and the failure of innovations to survive and diffuse" (Schein, 1997). To facilitate learning, all subcultures need to accept and understand their differences, and identify ways in which they can work together. Schein believes that awareness of the different subcultures in an organization will lead to smoother operations and more opportunities for employees to learn from each other (Schein, 1997).

DESIGNING THE EVALUATION APPROACH

As the literature review suggests, the organizational learning discipline lacks an effective measurement tool. The DOE complex, as well, is in need of an approach to measuring the value of lessons learned programs. Therefore, this research team proposes the development of a two-pronged evaluation approach consisting of: 1) a longitudinal study to observe changes in behavior resulting from an issued lesson learned, and 2) a standardized survey instrument to measure these observed changes.

Designing the Longitudinal Study

Taking into account the difficulty of many past researchers in developing an effective approach to measuring learning, researchers in this study considered many alternative designs based on the work of Campbell and Stanley (Campbell and Stanley, 1963). The following two study designs were evaluated and subsequently rejected:

  1. One-Shot Case Study - An intervention is administered once to a single group, whose behavior is observed at a single point in time thereafter to determine the effects of the intervention. For example, researchers would examine the behavior of an organization after a lesson learned has been issued in the attempt to draw causal inferences about the effects of the lesson learned on organizational performance. These inferences are based upon expectations of how the organization would have performed had the lesson learned not been issued. This design, although very easy to implement, lacks any degree of control; therefore, researchers rejected it in search of a design with stronger internal and external validity.
  2. Static-Group Comparison - A defined group that has been exposed to an intervention is compared with a control group that has not received the same exposure. This comparison is made in the attempt to establish a connection between the intervention and any differences observed in the resulting behavior of the two groups. For example, this design would compare the behavior of two organizations: one in which a lesson learned was distributed, and another that did not receive the lesson learned. Researchers decided to eliminate this design from further consideration because of the ethical issues of withholding a lesson learned from one group but not another. Also, it is difficult to ensure that the control group would not become aware of the information included in the lesson learned through informal communications. Researchers initially concluded that use of a control group would be difficult in the unpredictable environment of the DOE complex.

In light of the decision that a control group would not be feasible for this type of study, researchers selected the One-Group Pretest-Posttest design for its advantages over the one-shot case study. This design observes a single group without a control, measuring behavior before and after the intervention to determine a change over time.

The hypothesis to be tested in this longitudinal study asserts the following: An issued lesson learned will directly result in a measurable change in project performance over time. This study attempts to provide data to reject the following plausible rival hypothesis: The change in behavior subsequent to a lesson learned issued at a site may not be the result of applied information; instead, this change may be due to external factors, such as budget, regulations/directives, management support, etc.

The sample pool will consist of distinct Environmental Management projects within individual sites throughout the DOE complex. The intervention will be a lesson learned issued that is related to the scope of these projects. The tool of measurement used for the pretest and posttest will be a standardized survey administered to project managers. Project performance will be measured prior to issuance of the lesson learned, and again, after dissemination of the lesson, at repeated intervals in time to determine a change in behavior.

The independent variables (i.e., factors which induce change) include the lesson learned, as well as uncontrolled external factors such as budget, workforce, regulations, and management policy. The dependent variables of the study (i.e., what will be measured) will consist of quantifiable aspects of performance, such as efficiency and effectiveness of operations, worker productivity, project cost, risk, health and safety indicators (e.g., number of injuries and fatalities), communication indicators (e.g., number of related lessons learned/success stories issued), and project schedule.
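A minimal sketch of how pretest and posttest scores on one such dependent variable might be compared under the One-Group Pretest-Posttest design appears below; the scores, the summed scale, and the use of a paired t-test from SciPy are illustrative assumptions, not the analysis prescribed by this study.

```python
from scipy import stats

# Hypothetical survey scores for one dependent variable (e.g., a summed
# "lessons learned utilization" scale) from the same project managers,
# measured before (M1) and after (M2) the lesson learned is issued.
pretest  = [12, 15, 11, 14, 13, 16, 12, 15]
posttest = [14, 18, 12, 17, 15, 19, 13, 17]

# Paired comparison: each manager serves as his or her own control.
t_stat, p_value = stats.ttest_rel(posttest, pretest)
mean_change = sum(b - a for a, b in zip(pretest, posttest)) / len(pretest)
print(f"mean change = {mean_change:.2f}, t = {t_stat:.2f}, p = {p_value:.3f}")
```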

Designing the Survey Instrument

The second phase of this evaluation approach involves development of the standardized survey instrument. This tool will be administered to project managers for pretest and posttest measurement of performance. Redding explains that two main factors should be considered when designing a survey instrument -- scope and values (Redding, 1997). According to the "systems thinking" theory, an organization is composed of interrelated parts which contribute to the functioning of the entire system. Thus, an instrument with a broad focus is needed to capture all the distinct parts of an organization involved in learning. Experimenters must also ensure that the values of the organization are in alignment with the goals of the survey instrument in order to carry out an effective assessment (Redding, 1997).

With this knowledge, the research team conceptualized a standardized survey instrument with broad applicability to all DOE sites. In order to generally assess organizational values and culture, researchers administered a preliminary set of case study questions to a randomly selected group of DOE site Lessons Learned Coordinators. From this initial assessment, researchers gathered data on the nature of questions to include in the actual survey instrument.

With the preliminary data gathered from the case studies administered to five Lessons Learned Coordinators, researchers developed a standardized survey instrument for pretest and posttest measurements. The questions composing this survey are grouped into five distinct categories: 1) Organizational Culture; 2) Current Status of Project Activities; 3) Lessons Learned Utilization; 4) External Factors; and, 5) Lessons Learned Program Definition (See Figure 3).

Fig. 3. Standardized survey instrument.

Questions are open-ended in cases where insufficient data exist to provide choices, or when the experimenter does not want to lead the subject. Other questions are closed in order to elicit specific responses from the subject. Some of the closed questions are in Likert scale format, which is a scale obtained by summing the response scores of its constituent items -- a "summative" scale (McIver & Carmines, 1981). In the traditional Likert scale approach, individuals are presented with a list of statements about a single topic and are instructed to respond to each statement in terms of their degree of agreement or disagreement. Responses to specific items are totaled so that subjects with the most favorable attitudes will have the highest scores, and vice versa (McIver & Carmines, 1981). The attempt to construct a standardized survey instrument revealed the many challenges involved in such a task, as explained in the following section.
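A minimal sketch of the summative scoring implied by the Likert approach appears below; the item wording, the five-point scale, and the reverse-coding of a negatively worded item are hypothetical.

```python
# Responses on a 1-5 agreement scale (1 = strongly disagree, 5 = strongly agree).
# Items are hypothetical; negatively worded items are reverse-coded before summing.
responses = {
    "Lessons learned are discussed at project meetings": 4,
    "I rarely have time to read lessons learned bulletins": 2,   # negatively worded
    "Our procedures are updated when a relevant lesson is issued": 5,
}
negatively_worded = {"I rarely have time to read lessons learned bulletins"}

score = sum((6 - value) if item in negatively_worded else value
            for item, value in responses.items())
print(f"Summative Likert score: {score} (possible range 3-15 for these 3 items)")
```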

CHALLENGES TO DEVELOPING A MEASUREMENT APPROACH

Challenges to Designing the Survey

Through use of the standardized survey instrument described above, researchers will attempt to measure learning manifested in behavioral change throughout the DOE community. This is a difficult endeavor because, to start with, there is no consensus on a common definition of "learning" in the research community. Therefore, much room is left for interpretation as to how an organization achieves learning and what exactly should be measured. Measures must be tied to the goals of an organization. Yet, for learning to become a meaningful goal, it must first be understood. A well-grounded definition is necessary to ensure this understanding (Garvin, 1993). From the case study questions administered to Lessons Learned Coordinators, researchers attempted to gather information on what elements the Coordinators considered key to learning within their individual organization.

Researchers determined that, for the survey instrument to be effective in measuring learning at the DOE project level, the content of the survey must be tied to the goals of the organization being assessed. This proved to be a difficult task for several reasons: 1) responses to the preliminary case study questions demonstrated that not all individuals are aware of their organization’s main goals; 2) in developing the survey questions, researchers must somewhat "predict" the answers that will be useful to them; and, 3) specifically tailoring the survey to an organization’s goals might limit the scope of information elicited, as well as the survey’s capacity to be widely applicable to various projects, organizations, and sites across the DOE complex.

Establishing objective, quantifiable measures of learning to be included in the survey instrument is also a challenge because of the theoretical nature of existing organizational learning concepts. Garvin explains that the problem with theoretical descriptions of learning organizations is that they do not provide a framework for action. Concrete changes in behavior are needed (Garvin, 1993). Roth and Senge assert that traditional academic research is unable to link theory and practice because it fails to consider the challenges of change in real life organizations. These challenges include developing practical tools, institutionalizing both individual and collective learning, and addressing barriers to learning posed by distinct organizational cultures which often contradict external cultures (Roth and Senge, 1995).

Challenges to Administering the Survey

Researchers have found the most challenging part of this evaluation approach to be administering the survey instrument. Because the time of the lesson learned intervention cannot be controlled or predicted, it is difficult to determine when the pretest should be administered. In order to ascertain an actual change in behavior, researchers must allow enough time between pretest and posttest measurement. On the other hand, the longer the amount of time that elapses between measurements, the greater the possibility of the confounding effect of history. History is one of many threats to the internal validity of this design (i.e., how well a study actually measures what it intends to) because, with the passage of time, many events other than the assumed lesson learned may have contributed to the change in behavior. A related threat is the potential confound of maturation: changes observed over time might not be due to a specific event (i.e., the lesson learned), but might instead be a result of project managers increasing their project knowledge base and ultimately improving performance.

Also, as mentioned previously, researchers must somewhat predict the answers they hope to obtain through the survey, and must also attempt to select a related lesson learned to elicit the appropriate responses. Ideally, to strengthen the internal validity of the design, the pretest and posttest surveys would be identical. Yet, under these circumstances, researchers might have to use supplemental questions in the posttest to obtain any additional information needed based on the scope of the lesson learned issued. This introduces the potential confound of instrumentation, in which changes in a measuring instrument may account for any differences observed between measurements (Campbell and Stanley, 1963).

Another challenge to the selected approach is the potential confounding effect of testing. The pretest itself may produce a change in behavior that is confused with the results of the lesson learned intervention. DOE project managers taking a test for the second time may alter their responses based on what is considered "socially acceptable". For example, individuals completing the survey for the posttest measurement phase might claim that project performance has improved (e.g., become more productive or cost-effective) only because they know that this would be the desired response to "pass the test".

Roth and Senge propose that studies of this type, in which controls are difficult to establish, are characterized by behavioral and dynamic complexity. Behavioral complexity exists when the goals, values, and mental models of decision makers differ. Dynamic complexity exists when causes and resulting effects are distant in time and space (Roth and Senge, 1995). Within the DOE complex, preliminary research has shown that both behavioral and dynamic complexity exist to a certain degree. Goals and values vary from site to site, which affects external validity (i.e., how well the results from one tested group can be generalized to other groups). For example, if researchers were to tailor the survey questions to the activities of one specific site, then any conclusions drawn about learning within that site would not necessarily apply to other sites. In an attempt to control for this effect in the present study, researchers have constructed a survey instrument broad enough in scope to enable wide applicability. In addition, the selected pretest-posttest design and the lack of control over the lesson learned intervention increase the dynamic complexity of this approach. In any learning environment, however, dynamic complexity is more difficult to remedy because passage of time is necessary for any results of learning to be manifested in behavior.

PRELIMINARY OBSERVATIONS

The initial data gathered from the case studies administered to five Lessons Learned Coordinators provided valuable insight into the development of the standardized survey instrument. The majority of the Coordinators had difficulty answering the first category of questions, "Project Performance". These questions were intended to assess the current status of performance in meeting set goals. Interestingly enough, only one of the five Coordinators was able to provide insight on a perceived gap between goals and performance. This reflects a lack of awareness of "the big picture" -- the larger goals of everyday activities -- and the need for standards against which to measure performance.

The Coordinators found the next section, "Lessons Learned Utilization", much easier to answer. The responses revealed that the most common benefits experienced from utilizing lessons learned in the field include risk reduction and safety & health improvement. No participants, however, were able to give specific examples of hard (i.e., quantifiable) measures for any of these benefits. The information elicited from this section suggests an opportunity and a need for developing specific metrics of performance related to risk and safety & health.

The information gathered on "External Factors" demonstrated that barriers to learning are numerous, varied, and seemingly site specific. Consistent answers, though, revealed budget and organizational culture to be the two factors that most determine lessons learned utilization. These two areas are targeted in the standardized survey instrument in order to gather data on potential confounds.

The questions included in the "Lessons Learned Program Definition" section elicited varied answers from each of the five participants. This was important to note because it reflected the need for a standardized survey instrument with a broad focus. Consistent answers yielded the following information: safety and health is a main area targeted by lessons learned programs; primary customers of lessons learned information are the end users -- the line workers and managers; and, lessons learned are derived from internal as well as external sources.

Overall, the key to obtaining valuable responses from the participants was to use focused questions. In some cases, this specifically involved providing choices from which the respondent could select. In cases where open-ended questions were used, the language was very specific. Feedback received from the five Lessons Learned Coordinators was invaluable in developing the standardized survey instrument.

Experimenters recently conducted a pilot test, administering the survey to a random selection of twenty project managers across DOE to ensure that the selected questions were effective in obtaining relevant information, and that the format was easy to understand and use. The pilot test proved useful as a final assessment of the survey instrument. Certain questions were discarded because they did not add value to the study. Other questions were modified because of ambiguity. Again, certain factors regarding lessons learned activities were consistently present in all subjects’ replies. Yet, preliminary results demonstrate that organizational cultures vary greatly from site to site. These results testify to the difficulty of developing a standardized tool of measurement for DOE complex-wide applicability.

The selected research design has withstood many of the challenges to internal validity thus far. Yet, the small sample size poses a threat to external validity. Because of the numerous external factors affecting lessons learned utilization, and the diversity of organizational cultures across the complex, researchers hope to gain a great deal of information on each lessons learned intervention in the future in order to control for confounding effects. Overall, the research finds that more quantitative methods can be applied to measure the benefits of organizational learning programs and are useful for evaluating the effectiveness of these programs.

NEXT STEPS

Full Study Implementation

Researchers will select a larger sample of Environmental Management project managers for full-scale implementation of the study. Project managers at each selected site will distribute the survey to several key individuals in order to obtain a broad range of information regarding project performance. The sample subjects will be randomly selected from across the DOE complex. This pretest phase of the study (M1) may consume six months.

A specific lesson learned related to Environmental Management activities will be selected as the intervention (X). Researchers will monitor all sources of lessons learned information and activities and will select a lesson learned related to Environmental Management immediately upon its release to the DOE community. Initial posttest measurement (M2) will occur three months after issuance of the lesson learned and may be supplemented with additional questions based on the scope of the lesson learned selected. Repeated measurements (M3, M4, etc.) will be taken at specific intervals in time thereafter (e.g., after six months, one year, etc.). Thus, researchers will be able to observe and evaluate trends over time to determine if change is sustainable. As Garvin states, "In the absence of learning,…change remains cosmetic, and improvements are either fortuitous or short-lived" (Garvin, 1993). Sustainable change is a vital characteristic of a true learning organization. Figure 4 illustrates this longitudinal study design.

Fig. 4. Longitudinal study design.
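The measurement schedule and the notion of sustainable change can be sketched as follows; the measurement months, the performance scores, and the use of a simple least-squares slope over the repeated posttests are illustrative assumptions rather than procedures specified by the study.

```python
from statistics import mean

# Hypothetical measurement schedule: pretest (M1), intervention (X), and
# repeated posttests (M2, M3, M4) in months relative to the lesson learned issue date.
schedule = {"M1": -6, "X": 0, "M2": 3, "M3": 6, "M4": 12}

# Hypothetical performance scores at each measurement point (higher = better).
scores = {"M1": 52.0, "M2": 57.0, "M3": 61.0, "M4": 62.5}

# Simple least-squares slope over the posttests: a sustained positive slope
# (rather than a one-time jump that decays) suggests the change is sustainable.
months = [schedule[m] for m in ("M2", "M3", "M4")]
values = [scores[m] for m in ("M2", "M3", "M4")]
mx, my = mean(months), mean(values)
slope = (sum((x - mx) * (y - my) for x, y in zip(months, values))
         / sum((x - mx) ** 2 for x in months))
print(f"Post-intervention trend: {slope:+.2f} points per month")
```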

Implications for the Future

As Redding explains, there are several secondary purposes of learning organization assessments that are just as important as the primary intentions. These include: 1) educating an organization on what constitutes learning; 2) inspiring innovative ideas and creative solutions; and, 3) encouraging an open dialogue and exchange of different perspectives on the meaning of success (Redding, 1997). The evaluation approach discussed in this paper endeavors to accomplish all of the above for the DOE community.

Once sites are able to track and trend the demonstrated results of learning activities over time, they may be able to compile a core set of performance measures to ensure that changes have actually yielded results, and that these results can be quantified. Even if the results of the longitudinal study demonstrate that most lessons learned programs are not effective in changing the behavior of organizations, the assessment tool will still be of value in identifying underlying causes and areas for improvement.

An ultimate goal of this approach is to promote widespread utilization of the standardized survey instrument at the project level, whenever a lesson learned is issued, to evaluate the effectiveness of the lesson in changing behavior over time. In addition, researchers hope to encourage integration of assessments into a site’s strategic planning process. Hopefully, as a result of this endeavor, managers across the DOE complex will possess an effective tool for assessing the progress and value of their lessons learned programs over time. This ability to see value will encourage managers to provide more support for the continuing development and improvement of lessons learned programs in the future.

REFERENCES

  1. Allen, T.J., Managing the Flow of Technology: Technology Transfer and the Dissemination of Technological Information Within the R&D Organization, Cambridge: Massachusetts Institute of Technology, 1977.
  2. Boze, K.M., "Measuring Learning Costs," Management Accounting, August 1994, pp. 48-52.
  3. Campbell, D.T. and Stanley, J.C., Experimental and Quasi-Experimental Designs for Research, Houghton Mifflin Company, 1963.
  4. Campbell, T. and Cairns, H., "Developing and Measuring the Learning Organization," Industrial and Commercial Training, 1994, vol. 26, no. 7, pp. 10-15.
  5. Fiol, C.M. and Lyles, M.A., "Organizational Learning," Academy of Management Review, 1985, vol. 10, no. 4, pp. 803-813.
  6. Galagan, P., "Reinventing the Profession," Training & Development, Dec. 1994, pp. 20-27.
  7. Garvin, D.A., "Building a Learning Organization," Harvard Business Review, July-Aug. 1993, pp. 78-91.
  8. Heller, F. & Brown, A., "Group Feedback Analysis Applied to Longitudinal Monitoring of the Decision Making Process," Human Relations, July 1995, vol. 48, no. 7, pp. 815-836.
  9. Kim, D.H. "The Link Between Individual and Organizational Learning," Sloan Management Review, Fall 1993, vol. 35, no. 1, pp. 37-50.
  10. Kirkpatrick, D., Evaluating Training Programs: The Four Levels, Berrett-Koehler, 1994.
  11. Mahler, J., "Influences of Organizational Culture on Learning in Public Agencies," Journal of Public Administration Research and Theory, October 1997, pp. 519-540.
  12. McIver, J.P. and Carmines, E.G., Unidimensional Scaling, Quantitative Applications in the Social Sciences series, Sage Publications, 1981.
  13. Redding, J., "Hardwiring the Learning Organization," Training & Development, August 1997, pp. 61-67.
  14. Rossett, A., "That Was a Great Class, But…," Training & Development, July 1997, pp. 19-24.
  15. Roth, G.L. and Senge, P.M., "From Theory to Practice: Research Territory, Processes and Structure at the MIT Center for Organizational Learning," Journal of Organizational Change Management, 1995.
  16. Schein, E.H., Working Paper: "Three Cultures of Management: The Key to Organizational Learning in the 21st Century," Center for Organizational Learning, Massachusetts Institute of Technology Sloan School of Management, 1996.
  17. Senge, P.M., The Fifth Discipline: The Art & Practice of the Learning Organization, New York: Doubleday/Currency, 1990.
  18. Smith, P.C. and Kendall, L.M., "Retranslation of Expectations: An Approach to the Construction of Unambiguous Anchors for Rating Scales," Journal of Applied Psychology, 1963, Vol. 47, No. 2, pp. 149-155.
  19. Yelle, L.E., "The Learning Curve: Historical Review and Comprehensive Study," Decision Sciences, vol. 10, 1979, pp. 302-304.
