12.0 CAPACITY OF THE MEMORY M. GENERAL PRINCIPLES
12.1
We consider next the third specific part: the memory M. Memory devices were discussed in 7.5, 7.6, since they are needed as parts of the ×, ÷ networks (cf. 7.4, 7.7 for ×, 8.3 for ÷, 10.2 for √ ) and hence of CA itself (cf. the beginning of 11.1). In all these cases the devices considered had a sequential or delay character, which was in most cases made cyclical by suitable terminal organs. More precisely:
The blocks dl of 7.5, 7.6 are essentially delays, which hold a stimulus that enters their input for a time kt, and then emit it. Consequently they can be converted into cyclical memories, which hold a stimulus indefinitely, and make it available at the output at all times which differ from each other by multiples of kt. It suffices for this purpose to feed the output back into the input, either directly or through an E-element. Since the period kt contains k fundamental periods t, the capacity of such a memory device is k stimuli. The above schemes lack the proper input, clearing and output facilities, but these are shown in Figure 6. It should be noted that in Figure 6 the cycle around dl goes through one more E-element, and therefore the period of this device is actually (k + 1)t, and its capacity correspondingly k + 1 stimuli. (The dl of Figure 5 may, of course, be replaced by another variant, cf. 7.6.) Now it is by no means necessary that memory be of this cyclical (or delay) type. We must therefore, before making a decision concerning M, discuss other possible types and the advantages and disadvantages of the cyclical type in comparison with them.
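The arrangement can be sketched in modern terms. The following minimal simulation is ours, not part of the report; the class name DelayLineMemory and the list-of-bits representation are illustrative assumptions. It models a dl of capacity k fed back through one extra E-element, so that a stored pattern of k + 1 stimuli recirculates with period (k + 1)t, as in Figure 6.

```python
from collections import deque

class DelayLineMemory:
    """A delay of k fundamental periods t, fed back through one extra
    E-element, so the recirculating pattern has period (k + 1)t and the
    capacity is k + 1 stimuli (cf. Figure 6)."""

    def __init__(self, k, pattern):
        assert len(pattern) == k + 1   # capacity is k + 1 stimuli
        # k stages of delay plus one stage for the E-element in the loop.
        self.line = deque(pattern, maxlen=k + 1)

    def tick(self):
        """Advance one fundamental period t; return the output stimulus."""
        out = self.line[0]
        self.line.rotate(-1)           # feed the output back into the input
        return out

mem = DelayLineMemory(k=3, pattern=[1, 0, 1, 1])
# The stored pattern reappears at the output every (k + 1)t = 4t.
print([mem.tick() for _ in range(8)])  # -> [1, 0, 1, 1, 1, 0, 1, 1]
```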
12.2
Preceding this discussion, however, we must consider the capacity which we desire in M. It is the number of stimuli which this organ can remember, or more precisely, the number of occasions for which it can remember whether or not a stimulus was present. The presence or absence of a stimulus (at a given occasion, i.e. on a given line in a given moment) can be used to express the value 1 or 0 for a binary digit (in a given position). Hence the capacity of a memory is the number of binary digits the values of which it can retain. In other words: The (capacity) unit of memory is the ability to retain the value of one binary digit. We can now express the “cost” of various types of information in these memory units. Let us consider first the memory capacity required to store a standard (real) number. As indicated in 7.1, we shall fix the size of such a number at 30 binary digits (at least for most uses, cf. {}). This keeps the relative rounding-off errors below 2^−30, which corresponds to 10^−9, i.e. to carrying 9 significant decimal digits. Thus a standard number corresponds to 30 memory units. To this must be added one unit for its sign (cf. the end of 9.2) and it is advisable to add a further unit in lieu of a symbol which characterizes it as a number (to distinguish it from an order, cf. {14.1}). In this way we arrive at 32 = 2^5 units per number. The fact that a number requires 32 memory units makes it advisable to subdivide the entire memory in this way: First, obviously, into units, second into groups of 32 units, to be called minor cycles. (For the major cycles cf. {14.5}.) Each standard (real) number accordingly occupies precisely one minor cycle.
It simplifies the organization of the entire memory, and various synchronization problems of the device along with it, if all other constants of the memory are also made to fit into this subdivision into minor cycles. Recalling the classification (a)–(h) of 2.4 for the presumptive contents of the memory M, we note: (a), according to our present ideas, belongs to CA and not to M (it is handled within CA, cf. the beginning of 11.1); (c)–(g), and probably (h), also consist of standard numbers; (b) on the other hand consists of the operation instructions which govern the functioning of the device, to be called standard orders. It will therefore be necessary to formulate the standard orders in such a manner that each one should also occupy precisely one minor cycle, i.e. 32 units. This will be done in {15.0}.
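As a concrete check of this bookkeeping, here is a minimal sketch (ours; the field order, tag bit first, then sign, then 30 digit bits, is an illustrative assumption, the report fixes only the counts) of packing one standard number into a 32-unit minor cycle.

```python
DIGITS, SIGN, TAG = 30, 1, 1         # 30 digit units + sign + number/order tag
MINOR_CYCLE = DIGITS + SIGN + TAG    # = 32 = 2^5 units

def pack_number(value):
    """Pack a real number |value| < 1 into one 32-unit minor cycle:
    a tag bit (1 = number, 0 = order), a sign bit, and 30 binary digits.
    The field order is an illustrative choice, not the report's."""
    assert abs(value) < 1
    digits = int(abs(value) * 2**DIGITS)   # truncation error below 2^-30 ~ 1e-9
    return (1 << 31) | ((value < 0) << 30) | digits

assert MINOR_CYCLE == 32 == 2**5
print(f"{pack_number(-0.25):032b}")   # tag, sign, then the 30 digit bits
```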
12.3
We are now in a position to estimate the capacity requirements of each memory type (a)–(h) of 2.4. Ad (a): Need not be discussed since it is taken care of in CA (cf. above). Actually, since it requires a few organs, each of which must hold essentially a standard number, i.e. 30 units (with small deviations, cf. {}), this corresponds to ≈ 120 units. Since this is not in M, the organization into minor cycles does not apply here, but we note that ≈ 120 units correspond to ≈ 4 minor cycles. Of course some other parts of CA are memory organs too, usually with capacities of one or a few units: e.g. the discriminators of Figures 8 and 12. The complete CA actually contains {missing text} more organs, corresponding to {missing text} units, i.e. {missing text} minor cycles (cf. {}).
Ad (b): The capacity required for this purpose can only be estimated after the form of all standard orders has been decided upon, and several typical problems have been formulated—“set up”—in that terminology. This will be done in {}. It will then appear that the capacity requirements of (b) are small compared to those of some of (c)–(h), particularly to those of (c). Ad (c): As indicated loc. cit., we count on function tables of 100–200 entries. A function table is primarily a switching problem, and the natural numbers of alternatives for a switching system are the powers of 2 (cf. {}). Hence 128 = 2^7 is a suitable number of entries. Thus the relative precision obtained directly for the variable is 2^−7. Since a relative precision of 2^−30 is desired for the result, and (2^−7)^4 > 2^−30 while (2^−7)^5 < 2^−30, the interpolation error must be fifth order, i.e. the interpolation biquadratic. (One might go to even higher order interpolation, and hence fewer entries in the function table. However, it will appear that the capacity requirements of (c) are, even for 128 entries, small compared e.g. to those of (d)–(h).) With biquadratic interpolation five table values are needed for each interpolation: the entry at the rounded-off variable itself, plus two above and two below it. Hence of the 128 entries only 124 can actually be used, and these correspond to 123 intervals, i.e. a relative precision of 123^−1 for the variable. However even 123^−5 < 2^−30 (by a factor of about 25). Thus a function table consists of 128 numbers, i.e. it requires a capacity of 128 minor cycles. The familiar mathematical problems hardly ever require more than five function tables (very rarely that many), i.e. a capacity of 640 minor cycles seems to be a safe overestimate of the capacity required for (c). Ad (d): These capacities are clearly less than or at most comparable to those required by (e). Indeed the initial values are the same thing as the intermediate values of (e), except that they belong to the first value of t. And in a partial differential equation with n + 1 variables, say x1, …, xn and t, the intermediate values for a given t—to be discussed under (e)—the initial values, and the totality of all boundary values for all t all three correspond to n-dimensional manifolds (in the (n + 1)-dimensional space of x1, …, xn and t); hence they are likely to involve about the same number of data. Another important point is that the initial values and the boundary values are usually given—partly or wholly—by a formula or by a moderate number of formulae. I.e., unlike the intermediate values of (e), they need not be remembered as individual numbers. Ad (e): For a partial differential equation with two variables, say x and t, the number of intermediate values for a given t is determined by the number of x lattice points used in the calculation. This is hardly ever more than 150, and it is unlikely that more than 5 numerical quantities should be associated with each point. In typical hydrodynamical problems, where x is a Lagrangian label-coordinate, 50–100 points are usually a light estimate, and 2 numbers are required at each point: a position-coordinate and a velocity. Returning to the higher estimate of 150 points and 5 numbers at each point gives 750 numbers, i.e. it requires a capacity of 750 minor cycles. Therefore 1,000 minor cycles seem to be a safe overestimate of the capacity required for (e) in two variable (x and t) problems. For a partial differential equation with three variables, say x, y and t, the estimate is harder to make. In hydrodynamical problems, at least, important progress could be made with 30 × 30 or 40 × 20 or similar numbers of x, y lattice points—say 1,000 points. Interpreting x, y again as Lagrangian labels shows that at least 4 numbers are needed at each point: two position coordinates and two velocity components. We take 6 numbers per point to allow for possible other non-hydrodynamical quantities. This gives 6,000 numbers, i.e. it requires a capacity of 6,000 minor cycles for (e) in hydrodynamical three variable (x, y and t) problems. It will be seen (cf. {}) that a memory capacity of 6,000 minor cycles—i.e. of ≈ 200,000 units—is still conveniently feasible, but that essentially higher capacities would be increasingly difficult to control. Even 200,000 units produce somewhat of an unbalance—i.e. they make M bigger than the other parts of the device put together. It seems therefore unwise to go further, and to try to treat four variable (x, y, z and t) problems. It should be noted that two variable (x and t) problems include all linear or circular symmetric plane or spherical symmetric spatial transient problems, also certain general plane or cylinder symmetric spatial stationary problems (they must be hyperbolic, e.g. supersonic; t is replaced by y). Three variable problems (x, y and t) include all spatial transient problems. Comparing this enumeration with the well known situation of fluid dynamics, elasticity, etc., shows how important each one of these successive stages is: complete freedom with two variable problems; extension to three variable problems; finally extension to four variable problems. As we indicated, the possibilities of the practical size for M draw the natural limit for the device contemplated at present between the second and the third alternatives. It will be seen that considerations of duration place the limit in the same place (cf. {}). Ad (f): The memory capacities required by a total differential equation with two variables {missing text}—i.e. to the lower estimate of (e). Ad (g): As pointed out in (g) in 2.4, these problems are very similar to those of (e), except that the variable t now disappears. Hence the lower estimate of (e) (1,000 minor cycles) applies when a system of (at most 5) one-variable functions (of x) is being sought by successive approximation or relaxation methods, while the higher estimate of (e) (6,000 minor cycles) applies when a system of (at most 6) two-variable functions (of x, y) is being sought. Many problems of this type, however, deal with one function only—this cuts the above estimates considerably (to 200 or 1,000 minor cycles).
Problems in which only a system of individual constants is being sought by successive approximations clearly require smaller capacities: they compare to the preceding problems like (f) to (e). Ad (h): These problems are so manifold that it is difficult to plan for them systematically at this stage. In sorting problems any device not based on freely permutable record elements (like punchcards) has certain handicaps (cf. {}); besides, this subject can only be adequately treated after an analysis of the relation of M and of R has been made (cf. 2.9 and {}). It should be noted, however, that the standard punchcard has place for 80 decimal digits, i.e. 9 9-digit decimal numbers, that is 9 numbers in our present sense, i.e. 9 minor cycles. Hence the 6,000 minor cycles considered in (e) correspond to a sorting capacity of ≈ 700 fully used cards. In most sorting problems the 80 columns of the cards are far from fully used—this may increase the equivalent sorting capacity of our device proportionately above 700. This means that the device has a non-negligible, but certainly not impressive, sorting capacity. It is probably only worth using on sorting problems of more than usual mathematical complexity. In statistical experiments the memory requirements are usually small: each individual problem is of moderate complexity; each individual problem is independent of (or dependent only through a few data on) its predecessors; and all that need be remembered through the entire sequence of individual problems are the counts of how many of the problems solved so far had their results in each one of a moderate number of given distinct classes.
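The arithmetic behind these estimates is easily verified mechanically. The following sketch is ours (all variable names are illustrative); it checks the interpolation-precision argument of (c) and reproduces the capacity estimates of (e) and (h).

```python
# Ad (c): 128 = 2^7 entries; biquadratic interpolation needs the rounded-off
# entry plus two above and two below, so only 124 entries (123 intervals) are usable.
entries = 2**7
usable = entries - 4
intervals = usable - 1
assert (2**-7)**4 > 2**-30 > (2**-7)**5   # why a fifth-order interpolation error is needed
assert intervals**-5 < 2**-30             # 123^-5 still beats 2^-30 ...
print(f"margin: {2**-30 / intervals**-5:.0f}x")   # ... by the text's "factor of about 25" (26x)

# Ad (e): lattice points times numbers per point -> minor cycles.
two_var = 150 * 5       # x, t problems: 750, overestimated as 1,000 minor cycles
three_var = 1000 * 6    # x, y, t problems: 6,000 minor cycles
print(two_var, three_var, three_var * 32)   # 192,000, the ~200,000 units of the text

# Ad (h): 6,000 minor cycles vs. punchcards holding ~9 numbers each.
print(f"~{three_var / 9:.0f} fully used cards")   # ~667, the text's ~700
```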
12.4
The estimates of 12.3 can be summarized as follows: The needs of (d)–(h) are alternative, i.e. they cannot occur in the same problem. The highest estimate reached here was one of 6,000 minor cycles, but already 1,000 minor cycles would permit treating many important problems. (a) need not be considered in M. (b) and (c) are cumulative, i.e. they may add to (d)–(h) in the same problem. 1,000 minor cycles for each, i.e. 2,000 together, seem to be a safe overestimate. If the higher value 6,000 is used in (d)–(h), these 2,000 may be added for (b)–(c). If the lower value 1,000 is used in (d)–(h), it seems reasonable to cut the (b)–(c) capacity to 1,000 too. (This amounts to assuming fewer function tables and somewhat less complicated “set ups.” Actually even these estimates are generous, cf. {}.) Thus total capacities of 8,000 or 2,000 minor cycles obtain. It will be seen that it is desirable to have a capacity of minor cycles which is a power of two (cf. {}). This makes the choices of 8,000 or 2,000 minor cycles of a convenient approximate size: they lie very near to powers of two. We consider accordingly these two total memory capacities: 8,192 = 2^13 or 2,048 = 2^11 minor cycles, i.e. 262,144 = 2^18 or 65,536 = 2^16 units. For the purposes of the discussions which follow we will use the first, higher, estimate. This result deserves to be noted. It shows in a most striking way where the real difficulty, the main bottleneck, of an automatic very high speed computing device lies: at the memory. Compared to the relative simplicity of CA (cf. the beginning of 11.1 and {15.6}), and to the simplicity of CC and of its “code” (cf. {14.1} and {15.3}), M is somewhat impressive: the requirements formulated in 12.2, which were considerable but by no means fantastic, necessitate a memory M with a capacity of about a quarter million units! Clearly the practicality of a device such as is contemplated here depends most critically on the possibility of building such an M, and on the question of how simple such an M can be made to be.
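The rounding of these totals to the neighboring powers of two can be reproduced as follows; this sketch is ours, not a procedure from the report.

```python
import math

# Round the totals of 12.4 (in minor cycles) to the nearest power of two,
# then convert to memory units (32 = 2^5 units per minor cycle).
for estimate in (8_000, 2_000):
    exp = round(math.log2(estimate))   # 8,000 -> 13; 2,000 -> 11
    cycles = 2**exp
    print(f"{estimate} -> {cycles} = 2^{exp} minor cycles"
          f" = {cycles * 32} = 2^{exp + 5} units")
# 8000 -> 8192 = 2^13 minor cycles = 262144 = 2^18 units
# 2000 -> 2048 = 2^11 minor cycles = 65536 = 2^16 units
```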
12.5
How can an M of a capacity of 2^18 ≈ 250,000 units be built?
The necessity of introducing delay elements of very great efficiency, as indicated in 7.5, 7.6, and 12.1, becomes now obvious: One E-element, as shown in Figure 4, has a unit memory capacity, hence any direct solution of the problem of constructing M with the help of E-elements would require as many E-elements as the desired capacity of M—indeed, because of the necessity of switching and gating, about four times more (cf. {}). This is manifestly impractical for the desired capacity of ≈ 250,000—or, for that matter, for the lower alternative in 12.4, of ≈ 65,000. We therefore return to the discussion of the cyclical or delay memory, which was touched upon in 12.1. (Another type will be considered in 12.8.) Delays dl can be built with great capacities k, without using any E-elements at all. This was mentioned in 7.6, together with the fact that even linear electric circuits of this type exist. Indeed, the contemplated t of about one microsecond requires a circuit passband of 3–5 megacycles (remember Figure 1!) and then the equipment required for delays of 1–3 microseconds—i.e. k = 1, 2, 3—is simple and cheap, and that for delays up to 30–35 microseconds—i.e. k = 30, …, 35—is available and not unduly expensive or complicated. Beyond this order of k, however, the linear electric circuit approach becomes impractical. This means that the delays of one, two or three periods t, which occur in all E-networks of Figures 3–15, can easily be made with linear circuits. Also, that the various dl organs of CA (cf. Figures 9, 13, 15, and the beginning of 11.1), which should have k values ≈ 30, and of which only a moderate number will be needed (cf. (a) in 12.3), can reasonably be made with linear circuits. For M itself, however, the situation is different. M must be made up of dl organs, of a total capacity ≈ 250,000. If these were linear
circuits, of maximum capacity ≈ 30 (cf. above), then ≈ 8,000 such organs would be required, which is clearly impractical. This is also true for the lower alternative of 12.4, capacity ≈ 65,000, since even then ≈ 2,000 such organs would be necessary. Now it is possible to build dl organs which have an electrical input and output, but not a linear electrical circuit in between, with k values up to several thousand. Their nature is such that a 4 stage amplification is needed at the output, which, apart from its amplifying character, also serves to reshape and resynchronize the output pulse. I.e. the last stage gates the clock pulse (cf. 6.3), using a non-linear part of a vacuum tube characteristic which goes across the cutoff, while all other stages effect ordinary amplifications, using linear parts of vacuum tube characteristics. Thus each one of these dl organs requires 4 vacuum tubes at its output; it also requires 4 E-elements for switching and gating (cf. {}). This gives probably 10 or fewer vacuum tubes per dl organ. The nature of these organs is such that a few hundred of them can be built and incorporated into one device without undue difficulties—although they will then certainly constitute the greater part of the device (cf. {12.4}). Now the M capacity of 250,000 can be achieved with such dl organs, each one having a capacity 1,000–2,000, by using 250–125 of them. Such numbers are still manageable (cf. above), and they require about 10 times more vacuum tubes, i.e. 2,500–1,250. This is a considerable but perfectly practical number of tubes—indeed probably considerably lower than the upper limit of practicality. The fact that they occur in identical groups of 10 is also very advantageous. (For details cf. {}.) It will be seen that the other parts of the device, of which CA and CC are electrically the most complicated, require together ≈ 1,000 vacuum tubes (cf. {}). Thus the vacuum tube requirements of the device are controlled essentially by M, and they are of the order of 2,000–3,000 (cf. loc. cit. above). This confirms the conclusion of 12.4, that the decisive part of the device, determining more than any other part its feasibility, dimensions and cost, is the memory. We must now decide more accurately what the capacity of each dl organ should be—within the limits which were found to be practical. A combination of a few very simple viewpoints leads to such a decision.
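The tube bookkeeping of this section condenses to a few lines. The sketch below is ours; tube_budget is an illustrative name, and the figure of about 10 tubes per dl organ is the one given above.

```python
def tube_budget(total_units, organ_capacity, tubes_per_organ=10):
    """Vacuum tubes needed if M is built of delay organs of the given
    capacity, each with ~10 tubes of amplifying, switching and gating gear."""
    organs = -(-total_units // organ_capacity)   # ceiling division
    return organs, organs * tubes_per_organ

for cap in (1_000, 2_000):
    organs, tubes = tube_budget(250_000, cap)
    print(f"capacity {cap}: {organs} organs, ~{tubes} tubes")
# capacity 1000: 250 organs, ~2500 tubes
# capacity 2000: 125 organs, ~1250 tubes
```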
12.6
We saw above that each dl organ requires about 10 associated vacuum tubes, essentially independently of its length. (A very long dl might require one more stage of amplification, i.e. 11 vacuum tubes.) Thus the number of dl organs, and not the total capacity, determines the number of vacuum tubes in M. This would justify using as few dl organs as possible, i.e. of as high individual capacity as possible. Now it would probably be feasible to develop dl's of the type considered with capacities considerably higher than the few thousand mentioned above.
There are, however, other considerations which set a limit to increases of the capacity k. In the first place, the considerations at the end of 6.3 show that the definition time must be a fraction of t (about 1/5 to 1/2), so that each stimulus emerging from the dl's delay may gate the correct clock pulse for the output. For a capacity k, i.e. a delay kt, this is a relative precision of 1/(5k) to 1/(2k), which is perfectly feasible for the device in question when k ≈ 1,000, but becomes increasingly uncertain when k increases beyond 10,000. However, this argument is limited by the consideration that as the individual capacity increases, correspondingly fewer such organs are needed, and therefore each one can be made with correspondingly more attention and precision.
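Numerically, the tolerance in question is as follows; this sketch is ours and merely evaluates the bound 1/(5k) to 1/(2k) stated above.

```python
# Relative timing precision a delay of k periods must hold so that the
# emerging stimulus still gates the correct clock pulse (definition time
# between t/5 and t/2, cf. 6.3).
for k in (1_000, 10_000):
    tight, loose = 1 / (5 * k), 1 / (2 * k)
    print(f"k = {k}: delay accurate to {tight:.1e} .. {loose:.1e} of kt")
```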
Next there is another more sharply limiting consideration. If each dl has the capacity k, then 250,000/k of them will be needed, and 250,000/k amplifying, switching and gating vacuum tube aggregates are necessary. Without going yet into the details of these circuits, the individual dl and its associated circuits can be shown schematically in Figure 18. Note that Figure 6 showed the block SG in detail, but the block A not at all. The actual arrangement will differ from Figure 6 in some details, even regarding SG, cf. {}. Since dl is to be used as a memory, its output must be fed back—directly or indirectly—into its input. In an aggregate of many dl organs—which M is going to be—we have a choice to feed each dl back into itself, or to have longer cycles of dl's: Figure 19 (a) and (b), respectively.
It should be noted that (b) shows a cycle which has a capacity that is a multiple of the individual dl's capacity—i.e. this is a way to produce a cycle which is free of the individual dl's capacity limitations. This is, of course, due to the reforming of the stimuli traversing this aggregate at each station A. The information contained in the aggregate can be observed from the outside at every station SG, and it is also here that it can be intercepted, cleared, and replaced by other information from the outside. (For details cf. {}.) Both statements apply equally to both schemes (a) and (b) of Figure 19. Thus the entire aggregate has its inputs, outputs, as well as its switching and gating controls at the stations SG—it is here that all outside connections for all these purposes must be made. To omit an SG in the scheme (a) would be unreasonable: it would make the corresponding dl completely inaccessible and useless. In the scheme (b), on the other hand, all SG but one could be omitted (provided that all A are left in place): the aggregate would still have at least one input and output that can be switched and gated, and it would therefore remain organically connected with the other parts of the device—the outside, in the sense used above. We saw in the later part of 12.5 that each A and each SG requires about the same number of vacuum tubes (4), hence the omission of an SG represents a 50% saving on the associated equipment at that junction. Now the number of SG stations required can be estimated. (It is better to think in terms of scheme (b) of Figure 19 in general, and to turn to (a) only if all SG are known to be present, cf. above.) Indeed: Let each dl have a capacity k, and let there be an SG after every l of them. Then the aggregate between any two SG has the capacity k′ = kl. (One can also use scheme (b) with aggregates of l dl's each, and one SG each.) Hence 250,000/(kl) SG's are needed altogether,
and the switching problem of M is a 250,000/(kl)-way one. On the other hand every individual memory unit passes an SG position only at the end of each k′t period, i.e. it becomes accessible to the other parts of the device only then. Hence if the information contained in it is required in any other part of the device, it becomes necessary to wait for it—this waiting time being at most k′t, and averaging k′t/2. This means that obtaining an item of information from M consumes an average time k′t/2. This is, of course, not a time requirement per memory unit: once the first unit has been obtained in this way, all those which follow after it (say one or more minor cycles) consume only their natural duration, t. On the other hand this variable waiting time (maximum k′t, average k′t/2) must be replaced in most cases by a fixed waiting time k′t, since it is usually necessary to return to the point in the process at which the information was desired, after having obtained that information—and this amounts altogether to a precise period k′t. (For details cf. {}.) Finally, this wait k′t is absent if the part of M in which the desired information is contained follows immediately upon the point at which that information is wanted, and the process continues from there. We can therefore say: The average time of transfer from a general position in M is k′t. Hence the value of k′ must be obtained from the general principles of balancing the time requirements of the various operations of the device. The considerations which govern this particular case are simple: In the process of performing the calculations of mathematical problems a number in M will be required in the other parts of the device in order to use it in some arithmetical operations. It is exceptional if all these operations are linear, i.e. +, −; normally ×, and possibly ÷, √, will also occur. It should be noted that substituting a number u into a function f given by a function table, so as to form f(u), usually involves interpolation—i.e. one × if the interpolation is linear, which is usually not sufficient, and two to four ×'s if it is quadratic to biquadratic, which is normal. (Cf. e.g. (c) in 12.3.) A survey of several problems, which are typical for various branches of computing mathematics, shows that an average of two × (including ÷, √) per number obtained from M is certainly not too high. (For examples cf. {}.) Hence every number obtained from M is used for two multiplication times or longer, therefore the waiting time required for obtaining it is not harmful as long as it is a fraction of two multiplication times. A multiplication time is of the order of 30^2 times t (cf. 5.3, 7.1 and 12.2; for ÷, √ cf. 5.5), say 1,000t. Hence our condition is that k′t should be a fraction of 2,000t. Thus k′ ≈ 1,000 seems reasonable. Now a dl with k ≈ 1,000 is perfectly feasible (cf. the second part of 12.5), hence k = k′ ≈ 1,000, l = 1 is a logical choice. In other words: Each dl has a capacity k ≈ 1,000 and has an SG associated with it, as shown in Figures 18, 19. This choice implies that the number of dl's required is ≈ 250,000/1,000 ≈ 250, and the number of vacuum tubes in their associated circuits is about 10 times more (cf. the end of 12.5), i.e. ≈ 2,500.
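The balance argument just given can be restated in a few lines of code. This sketch is ours; fetch_overhead is an illustrative name, and the figures, t ≈ one microsecond, one multiplication ≈ 30^2 t ≈ 1,000t, and two multiplications per number fetched, are those of the text.

```python
t = 1e-6                 # fundamental period: about one microsecond
mult_time = 30**2 * t    # one multiplication takes ~30^2 periods, say 1,000 t

def fetch_overhead(k_prime, mults_per_number=2):
    """Fixed waiting time k't per number fetched from M, as a fraction of
    the useful work (two multiplication times per number, cf. the survey above)."""
    return (k_prime * t) / (mults_per_number * mult_time)

for k_prime in (1_000, 10_000):
    print(f"k' = {k_prime}: wait = {fetch_overhead(k_prime):.0%} of two multiplication times")
# k' = 1,000 keeps the wait at ~56%, a tolerable fraction; k' = 10,000 would dominate.
```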
12.7
The factorization of the capacity ≈ 250,000 into ≈ 250 dl organs of a capacity ≈ 1,000 each can also be interpreted in this manner: The memory capacity 250,000 presents prima facie a 250,000-way switching problem, in order to make all parts of this memory immediately accessible to the other organs of the device. In this form the task is unmanageable for E-elements (e.g. vacuum tubes, cf. however 12.8). The above factorization replaces this by a 250-way switching problem, and replaces, for the remaining factor of 1,000, the (immediate, i.e. synchronous) switching by a temporal succession—i.e. by a wait of 1,000t. This is an important general principle: A c = hk-way switching problem can be replaced by an h-way switching problem and a k-step temporal succession—i.e. a wait of kt. We had c = 250,000 and chose k = 1,000, h = 250. The size of k was determined by the desire to keep h down without letting the waiting time kt grow beyond one multiplication time. This gave k = 1,000, and proved to be compatible with the physical possibilities of a dl of capacity k. It will be seen that it is convenient to have k, h, and hence also c, powers of two. The above values for these quantities are near such powers, and accordingly we choose:
Total capacity of M: c = 262,144 = 2^18
Capacity of a dl organ: k = 1,024 = 2^10
Number of dl organs in M: h = 256 = 2^8

The first two capacities are stated in memory units. In terms of minor cycles of 32 = 2^5 memory units each:

Total capacity of M in minor cycles: c/32 = 8,192 = 2^13
Capacity of a dl organ in minor cycles: k/32 = 32 = 2^5
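In modern terms this factorization amounts to splitting an 18-bit unit address into an 8-bit organ selector, resolved by immediate switching, and a 10-bit position within the organ, resolved by waiting. The sketch below is ours; the high-bits/low-bits layout is an illustrative assumption, not something the report specifies.

```python
K_BITS, H_BITS = 10, 8               # k = 2^10 units per organ, h = 2^8 organs

def locate(address):
    """Split an 18-bit unit address into (organ, slot): the organ index is
    resolved by h-way switching, the slot by waiting up to k periods t.
    The high-bits/low-bits layout is an illustrative choice."""
    assert 0 <= address < 2**(K_BITS + H_BITS)   # c = hk = 2^18 units
    organ = address >> K_BITS          # the 256-way switching problem
    slot = address & (2**K_BITS - 1)   # the temporal wait of at most 1,024 t
    return organ, slot

print(locate(200_000))   # -> (195, 320)
```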
12.8
The discussions up to this point were based entirely on the assumption of a delay memory. It is therefore important to note that this need not be the only practicable solution for the memory problem—indeed, there exists an entirely different approach which may even appear prima facie to be more natural.
The solution to which we allude must be sought along the lines of the iconoscope. This device in its developed form remembers the state of 400 × 500 = 200,000 separate points, indeed it remembers for each point more than one alternative. As is well known, it remembers whether each point has been illuminated or not, but it can distinguish more than two states: Besides light and no light it can also recognize—at each point—several intermediate degrees of illumination. These memories are placed on it by a light beam, and subsequently sensed by an electron beam, but it is easy to see that small changes would make it possible to do the placing of the memories by an electron beam also.
Thus a single iconoscope has a memory capacity of the same order as our desideratum for the entire M (≈250,000), and all memory units are simultaneously accessible for input and output. The situation is very much like the one described at the beginning of 12.5, and there characterized as impracticable with vacuum tube-like E-elements. The iconoscope comes nevertheless close to achieving this: It stores 200,000 memory units by means of one dielectric plate: The plate acts in this case like 200,000 independent memory units—indeed a condenser is a perfectly adequate memory unit, since it can hold a charge if it is properly switched and gated (and it is at this point that vacuum tubes are usually required). The 250,000-way switching and gating is done (not by about twice 250,000 vacuum tubes, which would be the obvious solution, but) by a single electron beam—the switching action proper being the steering (deflecting) of this beam so as to hit the desired point on the plate.
Nevertheless, the iconoscope in its present form is not immediately usable as a memory in our sense. The remarks which follow bring out some of the main viewpoints which will govern the use of equipment of this type for our purposes. (a) The charge deposited at a “point” of the iconoscope plate, or rather in one of the elementary areas, influences the neighboring areas and their charges. Hence the definition of an elementary area is actually not quite sharp. This is within certain limits tolerable in the present use of the iconoscope, which is the production of the visual impression of a certain image. It would, however, be entirely unacceptable in connection with a use as a memory, as we are contemplating it, since this requires perfectly distinct and independent registration and storage of digital or logical symbols. It will probably prove possible to overcome this difficulty after an adequate development—but this development may be not inconsiderable and it may necessitate reducing the number of elementary areas (i.e. the memory capacity) considerably below 250,000. If this happens, a correspondingly greater number of modified iconoscopes will be required in M. (b) If the iconoscope were to be used with 400 × 500 = 200,000 elementary areas (cf. above), then the necessary switching, that is the steering of the electron beam, would have to be done with very considerable precision: Since 500 elementary intervals must be distinguished in both directions of linear deflection, a minimum relative precision of 1/2 × 1/500 = 0.1% will be necessary in each linear direction. This is a considerable precision, which is rarely and only with great difficulties achieved in “electrical analogy” devices, and hence a most inopportune requirement for our digital device.
A more reasonable, but still far from trivial, linear precision of, say, 0.5% would cut the memory capacity to 10,000 (since 100 × 100 = 10,000, 1/2 × 1/100 = 0.5%). There are ways to circumvent such difficulties, at least in part, but they cannot be discussed here. (c) One main virtue of the iconoscope memory is that it permits rapid switching to any desired part of the memory. It is entirely free of the awkward temporal sequence in which adjacent memory units emerge from a delay memory. Now while this is an important advantage in some respects, the automatic temporal sequence is actually desirable in others. Indeed, when there is no such automatic temporal sequence, it is necessary to state in the logical instructions which govern the problem precisely at which location in the memory any particular item of information that is wanted is to be found. However, it would be unbearably wasteful if this statement had to be made separately for each unit of memory. Thus the digits of a number, or more generally all units of a minor cycle, should follow each other automatically. Further, it is usually convenient that the minor cycles expressing the successive steps in a sequence of logical instructions should follow each other automatically. Thus it is probably best to have a standard sequence of the constituent memory units as the basis of switching, which the electron beam follows automatically, unless it receives a special instruction. Such a special instruction may then be able to interrupt this basic sequence, and to switch the electron beam to a different desired memory unit (i.e. point on the iconoscope plate). This basic temporal sequence on the iconoscope plate corresponds, of course, to the usual method of automatic sequential scanning with the electron beam—i.e. to a familiar part of the standard iconoscope equipment. Only the above mentioned exceptional voluntary switches to other points require new equipment. To sum up: It is not the presence of a basic temporal sequence of memory units which constitutes a weakness of a delay memory as compared to an iconoscope memory, but rather the inability of the former to break away from this sequence in exceptional cases (without paying the price of a waiting time, and of the additional equipment required to keep this waiting time within acceptable limits, cf. the last part of 12.6 and the conclusions of 12.7). An iconoscope memory should therefore conserve the basic temporal sequence by providing the usual equipment for automatic sequential scanning with the electron beam, but it should at the same time be capable of rapidly switching (deflecting) the electron beam to any desired point under special instruction. (d) The delay organ dl contains information in the form of transient waves, and needs a feedback in order to become a (cyclical) memory. The iconoscope on the other hand holds information in a static form (charges on a dielectric plate), and is a memory per se. Its reliable storing ability is, however, not unlimited in time—it is a matter of seconds or minutes. What further measures does this necessitate? It should be noted that M's main function is to store information which is required while a problem is being solved, since only such information requires the rapid accessibility which is the main advantage of M over outside storage (i.e. over R, cf. 2.9). Longer range storage—e.g.
of certain function tables like log10, sin, or equations of state, or of standard logical instructions (like interpolation rules) between problems, or of final results until they are printed—should be definitely effected outside (i.e. in R, cf. 2.9 and {}). Hence M should only be used for the duration of one problem, and considering the expected high speed of the device this will in many cases not be long enough to affect the reliability of M. In some problems, however, it will be too long (cf. {}), and then special measures become necessary. The obvious solution is this: Let Nt be a time of reliable storage in the iconoscope. (Since Nt is probably a second to 15 minutes, t = one microsecond gives N ≈ 10^6 – 10^9. For N ≈ 10^9 this situation will hardly ever arise.) Then two iconoscopes should be used instead of one, so that one should always be empty while the other is in use, and after N periods t the latter should transfer its information to the former and then clear, etc. If M consists of a greater number of iconoscopes, say k, this scheme of renewal requires k + 1, and not 2k, iconoscopes. Indeed, let I0, I1, …, Ik be these iconoscopes. Let at a given moment Ii be empty, and I0, …, Ii−1, Ii+1, …, Ik in use. After N/(k + 1) periods t, Ii+1 should transfer its information to Ii and then clear (for i = k replace i + 1 by 0). Thus Ii+1 takes over the role of Ii. Hence if we begin with I0, then this process goes through a complete cycle I1, I2, …, Ik and back to I0 in k + 1 steps of duration Nt/(k + 1) each, i.e. of total duration Nt. Thus all I0, I1, …, Ik are satisfactorily renewed. A more detailed plan of these arrangements would have to be based on a knowledge of the precise orders of magnitude of N and k. We need not do this here. We only wish to emphasize this point: All these considerations bring a dynamical and cyclical element into the use of the intrinsically static iconoscope—they force us to treat the iconoscopes in a manner somewhat comparable to the manner in which a delay (cyclical memory) treats the single memory units. From (a)–(d) we conclude this: It is very probable that in the end the iconoscope memory will prove superior to the delay memory. However this may require some further development in several respects, and for various reasons the actual use of the iconoscope memory will not be as radically different from that of a delay memory as one might at first think. Indeed, (c) and (d) show that the two have a good deal in common. For these reasons it seems reasonable to continue our analysis on the basis of a delay memory, although the importance of the iconoscope memory is fully realized.
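The renewal scheme of (d) can be simulated directly. The sketch below is ours (renewal_schedule is an illustrative name); it rotates k + 1 iconoscopes so that exactly one is always empty and shows that a full cycle closes in k + 1 steps.

```python
def renewal_schedule(k, steps):
    """Rotate k + 1 iconoscopes I0..Ik so that exactly one is always empty:
    at each step, the successor of the empty one transfers its contents into
    it and clears (cf. (d) above). Returns, per step, the newly empty index."""
    empty = 0
    schedule = []
    for _ in range(steps):
        donor = (empty + 1) % (k + 1)   # I_{i+1} transfers into I_i ...
        empty = donor                   # ... and becomes the empty one
        schedule.append(empty)
    return schedule

# With k = 3 (four plates), the empty slot cycles I1, I2, I3, I0, ...:
# a full cycle takes k + 1 steps, so each plate is renewed within Nt
# when each step lasts Nt / (k + 1).
print(renewal_schedule(3, 8))   # -> [1, 2, 3, 0, 1, 2, 3, 0]
```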