assignments, selecting research personnel, or rating the 
efficiency of the forecaster. The verification scheme 
adopted may apply only to the regular official fore- 
casts made by the individual or to a special practice- 
forecast program. Such a program may involve people 
not assigned to regular forecasting and thus provide a 
basis for discovering hidden talent. In the classroom a 
practice-forecast verification program is used to meas- 
ure the progress of student forecasters and can assist 
the professor in vocational guidance. A student showing 
low forecasting ability but high mathematical ability 
might be encouraged to go into theoretical meteorology 
rather than into practical forecasting. Numerous other 
examples of administrative purposes of verification 
could be given. 
Verification procedures, when applied to the official 
forecasts of a weather station, can make a valuable 
contribution to the control of the quality of the output 
of the station. In industry it has been found that 
scientific sampling procedures are necessary to main- 
tain uniformity in the manufacturing process. If the 
forecasts made at a particular station over a period of 
time appear to fall below the standards of accuracy 
that have previously been attained, administrative 
action to investigate the reasons for the falling stand- 
ards might be desirable. There is also a further ad- 
vantage that the mere existence of a checking scheme, 
even if imperfect, tends to keep the forecasters more 
alert and interested in maintaining the accuracy of 
forecasts. 
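The quality-control check just described can be sketched in a few lines. The following Python fragment is only an illustration under assumed conventions: the text prescribes no particular score or threshold, so the hit-rate score, the baseline value, and the tolerance are all hypothetical choices.

```python
# Illustrative sketch of station quality control (the score and the
# tolerance are assumptions, not a scheme given in the text).
# Forecasts and observations are categorical, e.g. "rain" / "no rain".

def hit_rate(forecasts, observations):
    """Fraction of forecasts that matched the observed category."""
    hits = sum(f == o for f, o in zip(forecasts, observations))
    return hits / len(forecasts)

def below_standard(forecasts, observations, baseline, tolerance=0.05):
    """Flag the station if recent accuracy falls more than
    `tolerance` below the previously attained baseline."""
    return hit_rate(forecasts, observations) < baseline - tolerance

# A recent (hypothetical) sample of forecasts and observations:
recent_f = ["rain", "rain", "no rain", "rain", "no rain", "rain"]
recent_o = ["rain", "no rain", "no rain", "no rain", "no rain", "rain"]
flag = below_standard(recent_f, recent_o, baseline=0.80)
```

An administrator might run such a check on each month's forecasts and investigate only when the flag is raised, in the spirit of the industrial sampling procedures mentioned above.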
Scientific Purposes of Verification. One of the goals 
of scientific meteorology is to be able to predict precisely 
the state of the atmosphere at any time in the future. 
This goal is a difficult one to attain, but considerable 
progress has been made in the past several decades in 
understanding the physics of the atmosphere. Some- 
times the question is raised as to whether there has 
been any increase in the accuracy of weather forecasts 
over a period of time. The question asked may be quite 
general, such as whether temperature or precipitation 
forecasts made by meteorologists in general are more 
accurate than they were fifty years ago. Sometimes the 
question may be directed at a specific group of fore- 
casters using special methods on an experimental basis. 
In both these cases verification statistics can be used to 
provide information on the trend in forecast accuracy, 
although the technical difficulties in obtaining accurate 
statistics may be great. When some new advance in 
meteorological theory which bears on forecasting is 
proposed, it may be possible to compare the verification 
scores of experimental forecasts made using the sup- 
posed advance with forecasts made without the new 
theory. In this case verification is equivalent to testing 
the validity of a scientific hypothesis. 
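The comparison of verification scores described above amounts to a statistical test of hypothesis. A minimal Python sketch follows, assuming categorical forecasts scored by hit rate and a conventional two-proportion z-test; both choices are assumptions for illustration, since the text names no particular score or test.

```python
# Illustrative comparison of two sets of categorical forecasts:
# one made with a supposed theoretical advance, one without.
from math import sqrt, erf

def two_proportion_z(hits_a, n_a, hits_b, n_b):
    """Two-sided z-test for a difference between the hit rates of
    two independent sets of forecasts."""
    p_a, p_b = hits_a / n_a, hits_b / n_b
    p = (hits_a + hits_b) / (n_a + n_b)           # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))  # standard error
    z = (p_a - p_b) / se
    # two-sided p-value from the normal distribution
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical counts: 170 hits in 200 experimental forecasts
# against 150 hits in 200 forecasts made without the new theory.
z, p_value = two_proportion_z(170, 200, 150, 200)
```

A small p-value would suggest that the improvement in score is not merely a sampling accident, which is precisely the sense in which verification tests the validity of a scientific hypothesis.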
Another scientific purpose of forecast verification is 
the analysis of the forecast errors to determine their 
nature and possible cause. This is, in the opinion of 
some, the most important and most fruitful objective 
of verification since it is more susceptible of scientific 
treatment than some of the other purposes. A search 
may be made for indicators of forecasting difficulty 
which will help to locate the synoptic situations under 
which forecasts are most likely to be wrong. It is 
commonly thought, for example, that situations where 
weather changes are taking place rapidly are more diffi- 
cult to forecast than other situations, but it should be 
noted in passing that the literature does not reveal any 
precise studies to support this view. Likewise, although 
this is a widespread belief, it remains to be shown by 
verification figures that the forecasting accuracy for 
some element such as precipitation occurrence is lower 
for cases in which deepening or filling of a low is poorly 
forecast than for cases in which the deepening or filling 
is accurately forecast. Such verification as this can be 
used to discover the weaknesses of forecasting systems 
in order to decide where research emphasis is needed. 
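The search for indicators of forecasting difficulty can be illustrated by stratifying verification results according to the synoptic situation. In the following Python sketch the stratum labels, the records, and the hit-rate score are all hypothetical, chosen only to show the form such a tabulation might take.

```python
# Illustrative stratified verification: accuracy computed separately
# for each class of synoptic situation (labels are hypothetical).
from collections import defaultdict

def stratified_hit_rates(records):
    """records: (stratum, forecast, observed) triples.
    Returns the hit rate within each stratum, to help locate the
    situations in which forecasts most often go wrong."""
    tallies = defaultdict(lambda: [0, 0])  # stratum -> [hits, total]
    for stratum, forecast, observed in records:
        tallies[stratum][0] += forecast == observed
        tallies[stratum][1] += 1
    return {s: hits / total for s, (hits, total) in tallies.items()}

records = [
    ("deepening well forecast", "rain", "rain"),
    ("deepening well forecast", "no rain", "no rain"),
    ("deepening poorly forecast", "rain", "no rain"),
    ("deepening poorly forecast", "no rain", "no rain"),
]
rates = stratified_hit_rates(records)
```

A marked difference between strata, established over an adequate sample, would supply exactly the kind of verification figures the text notes are still missing from the literature.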
FUNDAMENTAL CRITERIA TO BE SATISFIED 
Terminology of Forecasts. One essential for satis- 
factory verification is objectivity, which requires that 
the forecasts be explicitly stated, either numerically or 
categorically, thus permitting no element of judgment 
to enter the comparison of forecast with subsequent 
observation. But the relation between the forecaster 
and the public, which depends upon the forecaster’s 
terminology, is usually subjective in nature, since even 
objective terms and the actual weather are interpreted 
subjectively by many individuals. Bleeker [1] discusses 
this point clearly. If every forecast is accompanied by a 
definition of the terms used, the forecaster sometimes 
objects on the basis that it hinders his freedom to 
express himself adequately. At this point it is easy to 
raise questions about the psychology of forecasters and 
of the public, and many arguments in forecast verifica- 
tion revolve around this point. Although these are 
practical and very important problems to those 
attempting to serve the public with weather informa- 
tion, they are in essence outside the field of forecast 
verification and the goals of verification would be 
reached much sooner if this were more generally recog- 
nized. Only those forecasts which are expressed in ob- 
jective terms can be satisfactorily verified as forecasts. 
The extent to which public forecasts satisfy the public 
is a different question which can be answered, it appears, 
only by public opinion polls; and the answer generally 
will contain little information about the agreement of 
forecast and actual weather. 
Meteorologists themselves sometimes advocate sub- 
jective verification, particularly in the case of prognostic 
charts which attempt to portray the pattern of some 
weather element (such as pressure) rather than to 
specify the weather at individual points. In some cases 
this has led to the use of boards of experts who compare 
the prognostic charts with observed charts. The diffi- 
culty in objective comparison arises because of the 
unsatisfactory state of knowledge as to what are the 
important parameters of the prognostic pattern; or in 
other words, the forecaster is unable to specify ob- 
jectively just what he is trying to represent with a 
prognostic chart. In effect, the forecaster is trying to 
verify something which cannot be observed; hence this 
situation does not meet the definition of verification. 
