In the development of this MSMB method we were forced to employ various approximations due to current computational limitations. The largest numerical concession was made in Section 3.3, where we restricted the calculation of the irreducible vertex to the small cluster (see Eq. 3.14). Ideally, the irreducible vertices would be interpolated and the full reducible vertex evaluated on the large cluster in turn. This calculation, however, would scale prohibitively with both the large-cluster size and the number of time slices in the QMC (2), and would hence provide little advantage over a single-cluster QMC calculation. We therefore restrict the evaluation of the reducible vertex in Eq. 3.18 to the small cluster and interpolate the resulting product to the large cluster.
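The interpolation step can be illustrated schematically. The following is a minimal one-dimensional sketch with hypothetical grids and data, not the actual quantities of Eq. 3.18: a quantity known on the coarse small-cluster momentum grid is linearly interpolated, with periodic boundary conditions, onto the finer large-cluster grid.

```python
# Minimal sketch: periodic linear interpolation of a small-cluster
# quantity onto a large-cluster momentum grid (1-D, hypothetical data).
import math

def interpolate_periodic(coarse, n_fine):
    """Linearly interpolate values given on a uniform periodic coarse
    grid onto a finer uniform grid covering the same Brillouin zone."""
    n_coarse = len(coarse)
    fine = []
    for j in range(n_fine):
        x = j * n_coarse / n_fine        # position in coarse-grid units
        i = int(math.floor(x))
        frac = x - i                     # fractional distance to next point
        left = coarse[i % n_coarse]
        right = coarse[(i + 1) % n_coarse]
        fine.append((1 - frac) * left + frac * right)
    return fine

# Example: a 4-point "small cluster" interpolated onto 8 "large cluster" points.
print(interpolate_periodic([0.0, 1.0, 0.0, -1.0], 8))
```

In the actual method the interpolated object is multi-dimensional in momentum and frequency, but the principle — filling in the fine grid between coarse-grid points while respecting periodicity — is the same.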
With the onset of petascale computing we will be able to make two fundamental improvements to the MSMB approach in the near future. First, we will gain the ability to include the fully momentum- and frequency-dependent vertex in our calculation, thus eliminating the need for the approximation of Eq. 3.9. Inherent in this modification is an explicit account of the correct short-ranged physics, which removes the need for the Ansatz. However, the memory and CPU requirements for this type of calculation scale as the square of the product of the cluster size and the number of retained frequencies. For a large cluster this would require 66 GB of double-precision complex storage, far exceeding the memory associated with a single CPU. These staggering memory requirements can currently be met only by some shared-memory parallel processing (SMP) supercomputers.
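The storage estimate can be made concrete with a back-of-the-envelope calculation. The grid sizes used below (64 cluster momenta and 1000 Matsubara frequencies per particle) are illustrative assumptions that happen to reproduce a figure of the quoted magnitude, not values taken from the text.

```python
# Hedged back-of-the-envelope estimate of two-particle vertex storage.
# A vertex with two independent momentum/frequency arguments, stored as
# double-precision complex numbers, needs (N_c * n_w)**2 * 16 bytes.

BYTES_PER_COMPLEX_DOUBLE = 16  # two 8-byte floats (real + imaginary)

def vertex_storage_gb(n_c: int, n_w: int) -> float:
    """Storage in GB for a vertex on an (N_c * n_w)^2 grid."""
    elements = (n_c * n_w) ** 2
    return elements * BYTES_PER_COMPLEX_DOUBLE / 1e9

# Illustrative (assumed) grid: 64 cluster momenta, 1000 frequencies.
print(round(vertex_storage_gb(64, 1000)))  # ~66 GB
```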
In the second improvement, the approximation of the large-cluster vertex by the small-cluster QMC one can be replaced by one utilizing the fully irreducible vertex (the vertex which is two-particle irreducible in both the horizontal and the vertical plane). This results in a self-consistent renormalization of the irreducible vertices via the Parquet equations (16, 23), and hence in the inclusion of long-ranged correlations in the crossing channel (see Fig. 3.15), which are missing in both approximations described above.


The superiority of the latter approach becomes clear in the high-dimensional limit, where it is the fully irreducible vertex \(\Lambda\), not the irreducible vertex \(\Gamma\), which becomes local. This can be shown by considering the simplest non-local corrections to the respective vertices \(\Lambda\) and \(\Gamma\) in Fig. 3.16. The boxes represent a set of graphs restricted to site \(i\) (local) and site \(j\) (neighboring), respectively. In the limit of high dimensions \(d\), each site has \(2d\) adjacent sites \(j\). The contribution of each Green-function leg connecting the sites in this limit is \(\mathcal{O}(1/\sqrt{d})\) (for details see Ref. (8)). This results in a contribution to the correction of \(\mathcal{O}(1/d)\) for the two legs in \(\Gamma\) and \(\mathcal{O}(1/d^{3/2})\) for the three legs in \(\Lambda\). Thus, the non-local corrections to \(\Lambda\), summed over all \(2d\) neighboring sites, fall off as \(\mathcal{O}(1/\sqrt{d})\), and \(\Lambda\) becomes local in the infinite-dimensional limit. In contrast, the corrections to \(\Gamma\) remain of order one. Therefore, in the high-dimensional limit, \(\Lambda\) is local while \(\Gamma\) has non-local corrections. In finite dimensions, we would expect \(\Lambda\) to be more compact than \(\Gamma\) whenever the single-particle Green function falls off quickly with distance. Then \(\Lambda\) should always be better approximated by a small-cluster calculation than \(\Gamma\). (Despite the fact that \(\Gamma\) has non-local corrections, one can easily show that in the high-dimensional limit all of the methods discussed here will yield the same self-energy and susceptibilities, since the non-local corrections to \(\Gamma\) fall on a set of points of zero measure.)
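The power counting can be summarized in one line. This is a sketch under the standard assumptions that each nearest-neighbor Green-function leg contributes a factor of order \(d^{-1/2}\) and that each site has \(2d\) neighbors:

```latex
\delta\Lambda \;\sim\; \underbrace{2d}_{\text{neighbors}} \times \mathcal{O}\!\left(d^{-3/2}\right)
  = \mathcal{O}\!\left(d^{-1/2}\right) \xrightarrow{\;d\to\infty\;} 0,
\qquad
\delta\Gamma \;\sim\; 2d \times \mathcal{O}\!\left(d^{-1}\right) = \mathcal{O}(1).
```

The extra suppression of \(\delta\Lambda\) relative to \(\delta\Gamma\) comes from the additional Green-function leg required to connect the two sites in a fully irreducible graph.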
In employing the solution of the Parquet equations in a MSMB method, we would be able to resolve two major limitations of the current approach: 1) an implementation considering the fully frequency- and momentum-dependent vertex will be devoid of the causality problems associated with mixing the self-energies of the two cluster sizes; 2) the approach constitutes a conserving approximation for the large-cluster self-energy. Given these potential gains of a future method, we have to stress the extensive computational demands associated with this approach. While in the former, vertex-based implementation a trivial numerical parallelization of the problem leaves the demands manageable, the complex nature of the Parquet approach will require substantial future development.
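The "trivial numerical parallelization" referred to above can be sketched schematically. The point is that the evaluation at each external momentum/frequency point is independent of every other point, so the work can simply be distributed over workers; all names below are illustrative placeholders, not part of the actual method.

```python
# Sketch of an embarrassingly parallel loop over external momentum/
# frequency points, as in a vertex evaluation where each point is
# independent of the others (illustrative placeholder computation).
from concurrent.futures import ThreadPoolExecutor

def evaluate_point(q):
    """Stand-in for an expensive, independent per-point computation."""
    return q * q  # placeholder for the actual many-body kernel

def evaluate_all(points, max_workers=4):
    """Distribute the independent points over a pool of workers."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(evaluate_point, points))

print(evaluate_all(range(8)))
```

The Parquet equations, by contrast, couple all momenta and frequencies self-consistently, which is why they do not parallelize this simply.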