Restrepo and Powers: Application of robust regression to tuned stock assessment models 
are optimistic and suggest a continued increase in parental biomass even at the higher level of landings. The projections made after trimming, on the other hand, are less optimistic. These suggest a more modest increase in spawning biomass at the 2,000 t level of landings, or a decline in spawning biomass after 7 years of 2,660 t landings (Fig. 6).
Discussion 
The robust regression methods as applied to tuned population assessment models may be helpful in several ways. The methods can be used as an alternative minimization criterion to obtain estimates of the population parameters. They can also be used to identify outliers for elimination from subsequent fitting. In either case, much of the subjectivity that can enter discussions about individual data points during working group meetings would be eliminated. The latter aspect (identification and elimination of outliers) is especially useful because, after elimination of the outliers, one can then conduct the usual bootstrap (Punt, 1994) or Monte Carlo (Restrepo et al., 1992) analyses used to evaluate uncertainty in the estimates. The robust regression methods could be used to screen the outliers, and then the other methods could be used to estimate variability and to project the population status under different management scenarios. Presently, computation time would preclude incorporating bootstrap or Monte Carlo techniques directly into the LTS search. Removing outliers should also have a moderating effect on the so-called retrospective patterns (Sinclair et al., 1990), some of which are caused by outliers in the indices (ICES, 1995).
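The screening step described above can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: it applies least trimmed squares (LTS) to a simple straight-line index model using repeated "concentration" steps (refit to a subset, then re-select the h observations with the smallest squared residuals). The function name `lts_screen`, the coverage default, and the synthetic data are all assumptions made for the example.

```python
import numpy as np

def lts_screen(x, y, h=None, n_starts=50, n_csteps=20, seed=0):
    """LTS-style outlier screen for a straight-line fit.

    Repeatedly fits to a trial subset, then keeps the h points
    with the smallest squared residuals (a concentration step).
    Returns the fitted coefficients and a mask of retained points.
    """
    rng = np.random.default_rng(seed)
    n = len(x)
    if h is None:
        h = (n + 2) // 2 + 1  # cover just over half the data
    best_obj, best_coef, best_mask = np.inf, None, None
    for _ in range(n_starts):
        idx = rng.choice(n, size=2, replace=False)  # random elemental start
        for _ in range(n_csteps):
            coef = np.polyfit(x[idx], y[idx], 1)
            resid2 = (y - np.polyval(coef, x)) ** 2
            idx = np.argsort(resid2)[:h]  # concentration step
        obj = np.sort(resid2)[:h].sum()  # trimmed sum of squares
        if obj < best_obj:
            mask = np.zeros(n, dtype=bool)
            mask[idx] = True
            best_obj, best_coef, best_mask = obj, coef, mask
    return best_coef, best_mask

# Synthetic abundance-index series with two gross outliers
x = np.arange(20, dtype=float)
y = 2.0 + 0.5 * x + np.random.default_rng(1).normal(0, 0.3, 20)
y[[5, 15]] += 8.0  # contaminate two observations
coef, kept = lts_screen(x, y)
```

Points flagged by `~kept` are the candidates that would then be examined (or removed) before running the conventional bootstrap or Monte Carlo analyses on the trimmed data.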
It is important to keep in mind a point of caution when removing statistical outliers from an assessment. Observations that appear to be outliers are so only in the overall context of the data and model. That is, it is possible that a data point is considered an outlier or not depending on the model formulation, constraints, etc. For example, if the bluefin tuna indices of abundance had been considered to be lognormally distributed instead of normally distributed, the LTS regression might have identified more or fewer observations as outliers. A related point is that we do not advocate rushing to eliminate outliers automatically from stock assessments. Instead, a first step should be to look into reasons why such observations may seem like outliers, e.g. undetected transcription errors or environmental influences that were not accounted for in the analysis. Additionally, the outlier detection would identify candidates for sensitivity analysis in an objective manner. Instead of determining data points that are influential on the results and trying to determine whether those points could be considered outliers, we are advocating the converse.
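The dependence of outlier status on the assumed error model can be illustrated directly: the same index series screened with residuals on the raw scale (normal errors) versus the log scale (lognormal errors) can flag different observations. The series, the two-standard-deviation cutoff, and the helper `flagged` below are hypothetical choices for the example, not taken from the assessment.

```python
import numpy as np

rng = np.random.default_rng(2)
t = np.arange(15, dtype=float)
# Hypothetical index series with multiplicative (lognormal-like) noise
index = 10.0 * np.exp(0.1 * t) * rng.lognormal(0.0, 0.3, 15)

def flagged(y, pred):
    """Flag residuals more than 2 standard deviations from their mean."""
    r = y - pred
    return np.abs(r - r.mean()) > 2.0 * r.std(ddof=1)

# Normal error model: residuals on the raw scale
fit_raw = np.polyfit(t, index, 1)
flags_raw = flagged(index, np.polyval(fit_raw, t))

# Lognormal error model: residuals on the log scale
fit_log = np.polyfit(t, np.log(index), 1)
flags_log = flagged(np.log(index), np.polyval(fit_log, t))
```

Because a multiplicative disturbance produces a raw-scale residual that grows with the predicted level but a roughly constant log-scale residual, the two masks need not agree, which is the point made above about model formulation.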
The outlier detection procedures outlined here inherently assume symmetry in the response surface. Thus, it is expected that the trimmed LS technique will provide results similar to those coming from bias correction procedures used in bootstrapping methods (e.g. Prager, 1994). Both methods assume that the underlying distributions are symmetrical and