The other limitation is 2,147,483,647 (the largest 32-bit signed integer). The effect is that the largest intermediate value in the calculation must be less than this number. When we perform long division, the remainder at each step may be as large as the divisor − 1. Multiplying this by 10,000 (for our 4-digit chunks) gives an upper limit of 214,748 for any divisor. The divisors are q^{2} or 2n. In the former case this places an upper limit on q of 463, which eliminates a number of the formulae above. Of course, we could divide twice by anything up to 214,748, but this would take longer.
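The constraint can be made concrete with a small sketch of chunked long division. The function name, the list representation, and the sample values here are illustrative, not taken from the original program:

```python
# Sketch (assumed representation): a large number stored as base-10000
# "chunks", divided by a small divisor using schoolbook long division.
# The intermediate value remainder*10000 + chunk must stay below
# 2**31 - 1, which caps the divisor at 214,748 (so q <= 463 when
# dividing by q**2).
CHUNK = 10_000  # four decimal digits per array element

def divide_chunks(chunks, divisor):
    """Divide the chunked number by `divisor`, returning new chunks."""
    assert divisor <= (2**31 - 1) // CHUNK, "intermediate would overflow 32 bits"
    result, remainder = [], 0
    for c in chunks:
        value = remainder * CHUNK + c   # the largest intermediate value
        result.append(value // divisor)
        remainder = value % divisor     # at most divisor - 1
    return result

# 0.1234 5678 / 7 -> 0.0176 3668 (final remainder discarded)
print(divide_chunks([1234, 5678], 7))
```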
The other part of the problem is that the slower-converging series, such as q = 5, require so many terms that the division by 2n can run into trouble.
A way around both of these problems is to use only three-digit chunks, at the expense of increased computation time.
                  maximum divisor    maximum q (dividing by q^{2})
4-digit chunks    214,748            463
3-digit chunks    2,147,483          1,465
This problem may occur anywhere from the second term onward.
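The limits in the table follow directly from the 32-bit ceiling: divide 2,147,483,647 by the chunk size to get the maximum divisor, then take its integer square root to get the maximum q. A quick check (variable names are mine):

```python
from math import isqrt

INT_MAX = 2**31 - 1  # largest 32-bit signed integer

for digits in (4, 3):
    chunk = 10**digits
    max_divisor = INT_MAX // chunk   # remainder * chunk must fit in 32 bits
    max_q = isqrt(max_divisor)       # the divisor in the series is q**2
    print(digits, max_divisor, max_q)
```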
The limit on the number of terms (the division by 2n) will only create a problem after a certain number of digits.
series    maximum digits (4-digit chunks)    maximum digits (3-digit chunks)
1/5       150,000                            1,500,000
1/8       193,000                            1,930,000
1/18      269,000                            2,690,000
1/57      377,000                            3,770,000
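These digit limits can be estimated from two facts: the largest safe term index n satisfies 2n × chunk < 2,147,483,647, and each term of the series for arctan(1/q) contributes roughly 2·log10(q) digits. A rough reconstruction (the helper name is mine, and the results match the table only approximately):

```python
from math import log10

INT_MAX = 2**31 - 1

def max_digits(q, chunk_digits):
    """Approximate digit limit before the divisor 2n overflows 32 bits."""
    chunk = 10**chunk_digits
    n_max = INT_MAX // chunk // 2        # largest term index n with 2n*chunk safe
    return int(2 * n_max * log10(q))     # each term adds about 2*log10(q) digits

for q in (5, 8, 18, 57):
    print(q, max_digits(q, 4), max_digits(q, 3))
```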
Using 3-digit chunks will require about 33% more memory and take about 33% longer than using 4-digit chunks.
Using fewer digits per array location illustrates that the practical limitations can always be surmounted with more intricate programming or more memory, generally at the expense of the time required. We could use only 2-digit chunks and extend the maximum digits above by a factor of 10. The calculations would then take twice as long as with 4-digit chunks.
As a rough approximation, the actual computation time depends on the square of the number of digits. If we double the number of digits desired, then each division operation takes twice as long, since each array of digits is twice as long, and it will take twice as many terms to reduce the terms below the required limit. The net result is that it will take four times as long to compute twice the number of digits.
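The square-law estimate can be written as a one-line helper (a hypothetical function, just to make the arithmetic explicit):

```python
def scaled_time(t1, d1, d2):
    """Rough estimate: if d1 digits take t1 seconds, computation time
    grows as the square of the number of digits."""
    return t1 * (d2 / d1) ** 2

# Doubling the digit count quadruples the estimated time.
print(scaled_time(10.0, 100_000, 200_000))  # 40.0
```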
The actual time to perform a calculation will, of course, depend upon the speed of the computer. A computer with a CPU speed of 2 GHz will compute things 10 times faster than one with a speed of 200 MHz.
As a practical matter, even if the implementation might correctly compute 3,700,000 digits, if it took a number of days or weeks, we might not want to try it!
If we had a computer that utilized 64-bit integers, then we could store more digits in each element of the arrays, since the limits on the size of values are greatly increased. Even if the computer performed each arithmetic step in the same time, the entire process would be much faster since more is accomplished with each step. Of course, such computers generally perform individual steps faster as well.
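For instance, with 64-bit integers we might store nine decimal digits per element (the nine-digit chunk size is my assumption, not a figure from the text). Repeating the earlier divisor calculation with the 64-bit ceiling:

```python
from math import isqrt

INT64_MAX = 2**63 - 1  # largest 64-bit signed integer

# Assumed chunk size of nine decimal digits per array element.
chunk = 10**9
max_divisor = INT64_MAX // chunk   # remainder * chunk must fit in 64 bits
max_q = isqrt(max_divisor)         # when the divisor is q**2
print(max_divisor, max_q)
```

Even with more than twice as many digits per element, the divisor limit rises to over nine billion, so the restriction on q all but disappears.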
If we have a computer (or CPU) available for each series, then we can compute all two, three, or four series at the same time and combine the results afterwards.
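This is straightforward because the series are independent until the final combination. A minimal sketch using floating-point arithmetic and Gauss's formula π/4 = 12·arctan(1/18) + 8·arctan(1/57) − 5·arctan(1/239) (the worker-pool structure and function names are illustrative, not the original program):

```python
from concurrent.futures import ThreadPoolExecutor
from math import pi

def arctan_inv(q, terms=50):
    """Taylor series for arctan(1/q), truncated after `terms` terms."""
    total, sign = 0.0, 1.0
    for n in range(terms):
        total += sign / ((2 * n + 1) * q ** (2 * n + 1))
        sign = -sign
    return total

# Gauss: pi/4 = 12*arctan(1/18) + 8*arctan(1/57) - 5*arctan(1/239).
# Each series runs on its own worker; the results are combined afterwards.
coeffs = {18: 12, 57: 8, 239: -5}
with ThreadPoolExecutor() as pool:
    futures = {q: pool.submit(arctan_inv, q) for q in coeffs}
    quarter_pi = sum(c * futures[q].result() for q, c in coeffs.items())

print(abs(4 * quarter_pi - pi) < 1e-12)  # True
```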