Force Pulse Testing of Ship Models 



model testing this may use up a significant proportion of the available test time. The limited amount of usable test time may give rise to estimates having large sampling variability. The arithmetic processes involved here, though lengthy, are well understood and present no difficulty.



One of the most important contributions of statistics to experimentation is the formalization of the notion, and the provision of methodologies, for having an experiment provide its own measure of its errors. This is done by setting the problem in a probabilistic framework, either by assuming it into existence or by manufacturing it through so-called "randomization" operations.



Those of you who have read books or attended lectures on experimental design, that branch of statistics concerned with these problems, will recall these points. This attribute of an experiment does not come free: the price for putting it into this framework is a blunting of the precision of the results.



All of these attributes and this philosophy of statistical experimental design have their analogs in random test functions. If one uses a deterministic test function, a transfer function can be calculated whether or not it exists, or whether the calculated one has any relationship to the true one. Verification is required from some other source, e.g., previous experiments, before one can have confidence in the calculations.
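The point can be made concrete: given sampled input and output records, a cross-spectral estimate of the transfer function can always be formed, whether or not any meaningful linear relationship exists. A minimal sketch, with purely illustrative signals and parameters (none of these values come from the paper):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 100.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 60, 1 / fs)

# Illustrative "test function" input and a noisy first-order response.
x = rng.standard_normal(t.size)
b, a = signal.butter(1, 5.0, fs=fs)          # pretend plant: 5 Hz low-pass
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(t.size)

# H(f) = Sxy / Sxx -- this estimate exists no matter what y contains.
f, Sxx = signal.welch(x, fs=fs, nperseg=1024)
_, Sxy = signal.csd(x, y, fs=fs, nperseg=1024)
H = Sxy / Sxx                    # complex frequency-response estimate
```

Note that the calculation would go through just as readily if `y` were pure noise; the estimate itself carries no warrant of validity, which is exactly the point above.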



If these verifications are available, the experiment can be economical and precise.



On the other hand, if the test function is embedded in a family of test functions in such a way as to make the statistical manipulation allowable, the experiment itself will provide a measure of its own error; and that measure will be the coherency function. It seems to me that this is worth something and, as mentioned above, it does come at a cost.
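The coherency function mentioned here can be estimated directly from the same input and output records; values well below one flag the frequency bands where the linear transfer-function estimate cannot be trusted. A minimal sketch with illustrative signals (the plant, noise level, and parameters are assumptions for the demonstration, not from the paper):

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
fs = 100.0                       # sampling rate, Hz (illustrative)
t = np.arange(0, 120, 1 / fs)

# Input from a random family of test functions, plus a linearly related
# output with additive measurement noise (all illustrative).
x = rng.standard_normal(t.size)
b, a = signal.butter(2, 10.0, fs=fs)         # pretend plant: 10 Hz low-pass
y = signal.lfilter(b, a, x) + 0.3 * rng.standard_normal(t.size)

# Magnitude-squared coherence: near 1 where the linear model explains the
# output, near 0 where noise dominates -- the experiment's own error measure.
f, Cxy = signal.coherence(x, y, fs=fs, nperseg=512)
```

In bands where `Cxy` is close to one the transfer-function estimate is reliable; where it falls toward zero, the estimate in that band is essentially meaningless.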



What all this comes down to is simply that the choice of a test function is 

 not a simple or clear one. 



A point which seems to have been neglected in Dr. Ochi's paper and the discussion following it is why such a simple procedure works at all. After all, slamming is a very complicated process, and yet the simplest of models appears to be producing excellent answers. Being the original instigator of this approach, I think I can throw some light on this problem.



Way back (I guess it is more than 7 years ago by now), when I was wondering whether some model could not be constructed for slamming predictions, I was looking at some destroyer data. I do not quite remember who was with me at the time, although I think it was Martin Bates, then of Bell Aircraft, who commented that if you put the data through a low-pass filter you could hardly tell from the resulting record that a slam had occurred. This indicated to me that slamming did not change the gross aspects of the motion, and that a simple model, based on the occurrence of the conditions which induce slamming, might serve to make an average occurrence prediction. This observation appears to have been justified.



* * * 






