For fixed N, FAR, and SNR, there is an optimum N₂ and, therefore, an optimum α₂. The existence of an optimum N₂ has an explanation similar to that for N above: as N₂ is increased while holding N, α_avg, and S constant, α₂ decreases; therefore β₂ increases and P_D decreases, and this loss eventually outweighs the gain from the longer second stage. Figure 10 illustrates this for a typical fixed-sample case (Rice distribution; S = 8 dB/pulse; M = 1; N = 1.2; α_avg = 10"'; optimum K₂ for each N₂). The optimum N₂ tends to increase gradually with N (as can be seen in figs. 8 and 9); however, because of the nature of binomial tests it can conceivably dip, as it did for the M = 2 case in figure 6. For most sequential cases, the N₂ giving the maximum P_D is too large to be practical; although the improvement in P_D with N₂ is noticeable up to about N₂ = 8, further improvement is not enough to compensate for the disadvantages of long second stages (when clutter or many targets are present, etc.).
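The possible dip in the optimum comes from the integer count threshold of a binomial test: as the second-stage length grows, the smallest count K₂ that still meets the false-alarm budget jumps by whole numbers, so the resulting detection probability can momentarily fall. The following sketch illustrates only that quantization effect; the per-pulse probabilities and the budget are made-up illustrative values, not figures from this report.

```python
from math import comb

def binom_tail(n, k, p):
    """P(at least k successes out of n independent Bernoulli(p) trials)."""
    return sum(comb(n, j) * p**j * (1 - p)**(n - j) for j in range(k, n + 1))

# Hypothetical illustrative values (not taken from the report):
P_FA   = 0.1    # per-pulse false-alarm probability at the video threshold
P_D1   = 0.5    # per-pulse detection probability
ALPHA2 = 2e-3   # second-stage false-alarm budget

print(" N2  K2  stage-2 Pd")
for n2 in range(1, 17):
    # Smallest integer count threshold K2 whose noise-only tail fits the budget.
    # K2 = n2 + 1 means the budget cannot be met at this N2 (Pd = 0).
    k2 = next(k for k in range(n2 + 2) if binom_tail(n2, k, P_FA) <= ALPHA2)
    pd2 = binom_tail(n2, k2, P_D1)
    print(f"{n2:3d} {k2:3d}  {pd2:.4f}")
```

With these numbers the table is non-monotone: going from N₂ = 3 to N₂ = 4 forces K₂ from 3 up to 4, and the stage-2 detection probability drops before recovering at larger N₂.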



The value 1.2 is used for N in many of the figures primarily because it is small enough that rough comparisons with the N = 1 single-stage system are possible, and also because the higher P_D that would be achieved by using a larger N might not compensate for the smaller average number of scans across the target and for the longer second stage required for optimality.



[Figure 10 plot: P_D and the per-stage miss probabilities versus N₂; y-axis from 0 to 1.0, x-axis ticks at 12, 16, 20, 24.]

Figure 10. For fixed M, N, FAR, and SNR, the manner in which the per-stage miss probabilities change with N₂ produces an optimum N₂.






