initially calculating the constant coefficients in the differential equations can increase the estimate by a factor of two.
In two and three dimensions, these fac- 
tors are not significant. 
2. The total number of source iterations is the product of the number of source iterations needed to converge to a particular ν (the average number of neutrons per fission required to maintain criticality) and the number of ν's required to reach the correct value (i.e., 2.5). An adequately converged ν usually requires three to fifteen source iterations, four being very common. The number of converged ν's required is generally between three and six.
3. The number of group iterations depends upon the number of mesh points for the two- and three-dimensional problems. This relationship is quite complex and is largely a function of the particular iterative technique used.
4. The final product of source iterations and group iterations can often be reduced (sometimes by as much as a factor of 10) by using improved iteration techniques.

Digital Computers of the Future

The two largest computer manufacturers, Sperry-Rand and IBM, have each announced two new large machines. Sperry-Rand is working on the X301-G and the Livermore Automatic Research Computer (LARC), while IBM is working on the 709 and the STRETCH. The X301-G and 709 will not operate significantly faster than the present 1103A and 704, but because of larger memories, more versatile orders, and advantageous input and output equipment, many problems will probably be computed in half the time now required.

The LARC is expected to be faster than the present 1103A and 704 by a factor of about 15. It will read data from its ferrite-core memory at a rate of 2 μsec per word. The initial capacity of the fast memory will be 20,000 ten-digit numbers. This may be substantially increased by adding supplementary fast-memory units. A LARC is to be delivered to Livermore in early 1958, while additional LARC's are on order for a year or two later. The contract price for the construction of the computer is $2,895,000.

IBM's STRETCH, in turn, is expected to be about a factor of 10 faster than the LARC. It will also have a magnetic-core-type fast memory. A data word can be read from the memory within 0.2 μsec. The machine is being designed to accommodate a fast-memory capacity of up to a million words. In addition, external memories (magnetic discs and magnetic tapes) may ultimately provide data in blocks up to a total capacity of 100 million words. STRETCH is being built for Los Alamos and is to be delivered about January, 1960.
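The source-iteration scheme described in the items above can be sketched as a simple power iteration. Everything in this toy example — the one-group slab-diffusion constants, the mesh, the tolerance — is an illustrative assumption, not data or a method from the article; only the pattern (solve for the flux from the latest fission source, update the eigenvalue, repeat until converged) follows the text.

```python
# Toy source (power) iteration on a hypothetical one-group slab.
# All constants below are assumptions for illustration only.

def solve_tridiag(a, b, c, d):
    """Thomas algorithm: solve a tridiagonal system.
    a: sub-diagonal, b: diagonal, c: super-diagonal, d: right side."""
    n = len(d)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

n, h = 50, 1.0                        # 50 mesh points, as in sample (a)
D, sig_a, nu_sig_f = 1.0, 0.05, 0.06  # assumed one-group constants

# Diffusion operator: -D * d2/dx2 + absorption, zero flux at the edges.
sub = [-D / h**2] * n
diag = [2 * D / h**2 + sig_a] * n
sup = [-D / h**2] * n

phi, k = [1.0] * n, 1.0
for _ in range(100):                  # source iterations
    src = [nu_sig_f * p / k for p in phi]          # fission source
    phi_new = solve_tridiag(sub, diag, sup, src)   # new flux
    k_new = k * sum(phi_new) / sum(phi)            # eigenvalue update
    if abs(k_new - k) < 1e-6:
        break
    phi, k = phi_new, k_new
print(round(k, 4))                    # multiplication factor of the toy slab
```

In practice each source iteration hides the inner group iterations the article counts separately; here the one-group tridiagonal solve is done directly, so only the outer loop remains.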
The following numerical examples are representative for criticality calculations:

(a) 1-Dimensional; 3 Groups; 50 Mesh Points; 3 Regions
(2)(20)(30)(50)(10)(4) = 2.4 × 10⁶ op.

(b) 2-Dimensional; 30 Groups; 1,000 Mesh Points; 3 Regions
(20)(30)(1,000)(20)(50) = 6 × 10⁸ op.

(c) 3-Dimensional; 10 Groups; 10,000 Mesh Points; 3 Regions
(20)(10)(10,000)(30)(100) = 6 × 10⁹ op.
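As an arithmetic check, the three products work out as quoted. The factor labels in the comments are my reading of the products (dimensionality coefficient, groups, mesh points, iteration counts), not labels given in the article.

```python
# Recompute the three operation-count products quoted above.
from math import prod

a = prod([2, 20, 30, 50, 10, 4])    # sample (a): 1-D, 3 groups, 50 mesh points
b = prod([20, 30, 1000, 20, 50])    # sample (b): 2-D, 30 groups, 1,000 mesh points
c = prod([20, 10, 10000, 30, 100])  # sample (c): 3-D, 10 groups, 10,000 mesh points

print(f"{a:.1e}")   # 2.4e+06
print(f"{b:.0e}")   # 6e+08
print(f"{c:.0e}")   # 6e+09
```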
The results of these three examples 
can be combined with the information 
in Table 2 to give the final costs for 
the various machines. 
In one-dimensional calculations, a 
complete criticality calculation for 
sample problem (a) costs, for example, 
about $24 on the 704 and $144 on the 
650 or the UNIVAC I. For a single 
converged ν, the amount is about $6 on the 704 and $36 on the 650 or the UNIVAC. These figures are confirmed by the running time of the Eyewash code on the UNIVAC and the 704 and the PROD II code on the 650. The computation of a converged ν averages about 20 min for 30 groups on the UNIVAC, about 2 min for 32 groups on the 704, and about 40 min for 16 groups on the 650.
In two-dimensional calculations, the 
cost for a full criticality calculation, 
such as sample problem (b), becomes 
almost exorbitant. Even on the 
704 or 1103A such a problem would 
cost $6,000, while on the 650 or 
Datatron the cost would be over 
$30,000. A single converged ν, of
course, would reduce the costs to 
about $1,200 and $6,000, respectively. 
Certainly in these problems one must 
attempt to minimize the number of 
groups, mesh points, and iterations. 
A three-group problem is, for example, 
not too expensive on the 704 but is 
expensive on the 650. The running 
times of a two-group code on the 
ORACLE, the Mug II code on the 
UNIVAC, and the Curtiss-Wright 
code on the 701 bear out these costs. 
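The one- and two-dimensional figures quoted above are mutually consistent: the cost per operation implied by sample problem (a) reproduces the two-dimensional costs. A quick check (the variable names are mine):

```python
# Cost per operation implied by sample problem (a): 2.4e6 operations
# at about $24 on the 704 and $144 on the 650 or UNIVAC I.
ops_a = 2.4e6
rate_704 = 24 / ops_a    # $1e-5 per operation
rate_650 = 144 / ops_a   # $6e-5 per operation

# Applying the same rates to sample problem (b), 6e8 operations:
ops_b = 6e8
print(round(rate_704 * ops_b))   # 6000  -> the "$6,000" quoted for the 704
print(round(rate_650 * ops_b))   # 36000 -> "over $30,000" on the 650
```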
The 10-group three-dimensional sam- 
ple problem (c) is very expensive on 
present machines. If only three groups are used and a single converged ν is sought, the 704 price is around $5,000.
Calculations other than multigroup calculations can also involve a large number of operations. Problems concerned, for example, with burnup, endurance, two- and three-dimensional kinetics, temperature distributions, and transport approximations can all involve from 10⁶ to 10¹⁰ operations.
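To put that range of operation counts into machine time: at an effective rate on the order of 10⁴ operations per second (an illustrative figure roughly of the 704 class, not one taken from the article or Table 2), such problems span minutes to weeks:

```python
# Rough running times for 1e6-1e10 operations at an assumed
# effective rate of 1e4 operations per second (illustrative only).
rate = 1e4
for ops in (1e6, 1e8, 1e10):
    hours = ops / rate / 3600
    print(f"{ops:.0e} operations: about {hours:.3g} hours")
```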
At most computing centers, problems frequently arise which are "one-shot" calculations, i.e., likely to occur
only once. Experimental calculations 
and certain exploratory calculations 
often fall in this category. The ques- 
tion often arises as to whether hand 
calculations or a small or medium 
computer should be used rather than a 
large computer. The answer is very 
much a function of the problem, but 
it is even more a function of the 
personnel and subroutines available. 
Given the proper personnel and subroutines, calculations as small as 10⁵ operations can properly be placed on large machines.
The time needed to prepare a code is largely dependent upon the individual programmer and almost independent of the type of machine involved. The time to "debug," i.e., verify that the code is running correctly, is likewise strongly dependent upon the individual programmer and, to a much lesser extent, on the particular machine in use.
In summary, the cost of coding and 
debugging a given program is almost 
independent of machine size, while 
the cost per mathematical operation is 
much smaller for larger machines (see 
Table 2). The size and the repetitive 
nature of most reactor calculations 
therefore indicate the use of the fastest 
(and therefore generally cheapest per 
problem) commonly available com- 
puter. A commonly available com- 
puter is desirable because of the 
economic advantage of interchanging 
codes with other installations which 
are working on similar problems. 
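The economics of this summary can be made concrete. In the sketch below, the per-operation rates are the ones implied by sample problem (a); the $2,000 coding cost and the 20-run survey are hypothetical numbers chosen for illustration.

```python
# Coding and debugging cost is taken as machine-independent (per the
# summary); per-operation rates are those implied by sample problem (a).
coding_cost = 2000.0                  # assumed, same for either machine
per_op = {"704": 1e-5, "650": 6e-5}   # dollars per operation

def total_cost(machine, ops_per_run, runs):
    return coding_cost + per_op[machine] * ops_per_run * runs

# A hypothetical 20-run survey of problem (b)-sized cases (6e8 ops each):
for m in ("704", "650"):
    print(m, round(total_cost(m, 6e8, 20)))
```

Because the fixed coding cost is the same in both rows, the machine cost dominates for repetitive production work, which is exactly why the fastest commonly available computer is generally the cheapest per problem.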
BIBLIOGRAPHY 
1. A. Radkowsky, R. Brodsky. A bibliography 
of available digital computer codes for nuclear 
reactor problems, AECU-3078 (1955) 
2. Nuclear Codes Group Newsletter No. 1 (AEC 
Computing Facility, New York University, 
New York, 1956) 
3. Nuclear Codes Group Newsletter No. 2 
(AEC Computing Facility, New York Univer- 
sity, New York, 1956) 
4. H. Hurwitz, Jr., R. Ehrlich, TID-2009, Vol. 3, 
13 (1953) (Classified) 
5. B. Carlson. Solution of the transport equation by Sn approximation, LA-1891 (1955)