


IMPROVING THE ALLOCATION PROCESS



Methods for Evaluating Fundamental Science, DRU-875/2-CTI, Critical Technologies Institute, RAND Corp., for the Office of Science and Technology Policy, October 1994.



45. Cozzens, "Assessment of Fundamental Science Programs," 1995, p. 33. 



46. Research programs should be evaluated at a fairly aggregate level by independent individuals with the requisite scientific and technical expertise, who are capable of judging progress relative to resources invested. By "a fairly aggregate level," the committee means including a fairly large set of projects and over a sufficient period to capture benefits, which are often long delayed; the more basic the science, the longer the gestation period. Scientists must also be allowed to fail occasionally, although not indefinitely or consistently. Evaluation of some applied research and most fundamental technology programs is more straightforward because the objectives are clearer and the causal chains more direct, although even here there are often surprises. For both science and technology, it takes astute and expert observers, and not bean counters, to tell how reasonable the gambles have been and how great the rewards should be, over appropriate periods.



Successful programs should be rewarded for achieving or sustaining world-class leadership. Unsuccessful ones should be eliminated, cut back, or reorganized. All programs should present compelling reasons for continuation or expansion. Criteria for success should suit the particular area of science or technology. Science intended only to advance understanding (e.g., archaeology or cosmology) will have different measures than mission-oriented fields (e.g., pharmacology or materials science) or fundamental technology (e.g., instrumentation or engineering). Individuals working in the fields are best able to judge value and craft appropriate measures.



47. Peter F. Drucker, "Really Reinventing Government," The Atlantic Monthly 275(2): 49, 1995.



48. One persistent theme of most reports on federal laboratories (note 10) is a strong need to free laboratories from "micromanagement" by federal agencies in Washington, DC, and by Congress. This was a major concern of the Packard Report of 1983 (note 11). The Foster Report (1995) documents the number of task orders and NASA employees that oversee the Jet Propulsion Laboratory contract and judges them to be excessive. The Galvin Report (1995) cites this bureaucratic layering as among its top concerns. In the university setting, concerns have centered on the interpretation of Office of Management and Budget circulars A-110 and A-21, which set rules and accounting practices and, in the judgment of many universities, impose rigidities and induce inefficiencies, a concern addressed in the Federal Demonstration Project (see note 49).



49. The Federal Demonstration Project is described in annual reports of the Government-University-Industry Research Roundtable (Washington, DC: National Academy of Sciences): 1993 Annual Report (published April 1994), pp. 12-14, and 1994 Annual Report (published 1995), pp. 9-10; and in the brochure "What Is the Federal Demonstration Project?" (August 1991), available from the Roundtable offices.



50. If functions of programs are shifted from federal responsibility, for example through block grants to states, the necessary R&D capacity must still be sustained. In transportation, state funding is channeled through a private national organization, whereas in public health, drug abuse, and health services most research remains funded by the federal government, with outreach to the states.



