Chips such as the GK110 can sell for a lot more money in a Tesla K20X than inside any GeForce card, including the incredibly expensive Titan, and this is one of the ways Nvidia makes more money off its GPUs. They cope well with data processing, too: the K20X delivers 3.95 TFLOPS peak single precision, 2.9 TFLOPS peak SGEMM and 1.33 TFLOPS peak double precision.
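For context, those figures hang together: a quick back-of-the-envelope check, using the published K20X core count and 732 MHz clock, reproduces the single precision peak and shows the SGEMM number is roughly three quarters of it.

```python
# Back-of-the-envelope check of the K20X figures quoted above.
# Peak single precision = CUDA cores * 2 FLOPs per clock (FMA) * clock.
cores = 2688          # K20X: 14 enabled SMX blocks * 192 cores each
clock_ghz = 0.732     # published K20X core clock
peak_sp = cores * 2 * clock_ghz / 1000   # in TFLOPS
print(f"Peak SP: {peak_sp:.2f} TFLOPS")           # ~3.94 TFLOPS
print(f"SGEMM efficiency: {2.9 / peak_sp:.0%}")   # ~74% of peak
```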
Things will get faster with the GK110's successor, another Kepler-based part dubbed the GK180. It promises more than 4 TFLOPS peak single precision and more than 1.4 TFLOPS peak double precision. Of course, the higher you clock the GK180 the better the performance, but bear in mind that this is a huge chip.
According to Videocardz, the first Tesla card to utilise the new GK180 chip is called the K40 Atlas and it packs 12GB of memory and 288 GB/s of bandwidth with ECC off. It has 2880 CUDA cores, which tells you the chip has more enabled clusters than shipping GK110 parts, which stop at 2688. Of course, the GK110 always had the potential to deliver 2880 cores, but such parts were reserved for pros who wouldn't mind spending big bucks on the Quadro K6000. The total board power is 235W (245W for the SXM version), which fits nicely in servers and, if necessary, in high-end desktops.
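The 288 GB/s figure is consistent with GDDR5 running at a 6 GHz effective data rate on a 384-bit bus; note that the bus width is an assumption carried over from GK110 boards, not something the leak spells out.

```python
# Sketch of where 288 GB/s could come from, assuming a 384-bit bus
# (GK110 heritage, not confirmed by the leak) and 6 GHz effective GDDR5.
bus_bits = 384                # assumed, same as GK110-based Teslas
effective_rate_ghz = 6.0      # assumed GDDR5 effective data rate
bandwidth_gbs = bus_bits / 8 * effective_rate_ghz
print(f"{bandwidth_gbs:.0f} GB/s")   # 288 GB/s, matching the leak
```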
The card supports PCIe 3.0, workload boost clocks for AMBER and ANSYS, and comes in PCIe passive, active & TTP, and SXM variants. This card could easily get its own GeForce version, but only if the Titan is genuinely threatened by the new Radeon R9 290X, something we have yet to see. However, even if AMD's new card doesn't snatch the performance crown, it will definitely offer a lot more bang for the buck.
This is probably the last big 28nm chip from Nvidia, and things should get interesting again with the 20nm transition and Maxwell, which is expected to show up sometime next year. The GK180 has 15 SMX blocks enabled, each with 192 CUDA cores, which is basically what you'd get on a fully enabled GK110. With that in mind, one has to wonder whether a 2880-core consumer part makes financial sense at this point, regardless of the R9 290X.
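As a sanity check on the quoted peaks, here is a minimal sketch tying the 15-SMX layout to the TFLOPS claims. The 745 MHz base clock and Kepler's usual 64 double precision units per SMX are assumptions, not part of the leak.

```python
# Minimal sketch tying the SMX count to the quoted TFLOPS figures.
smx = 15                # fully enabled GK180
sp_cores_per_smx = 192  # Kepler single precision cores per SMX
dp_units_per_smx = 64   # Kepler's usual DP ratio (assumed for GK180)
clock_ghz = 0.745       # assumed base clock, not part of the leak

sp_cores = smx * sp_cores_per_smx            # 2880, matching the K40 listing
peak_sp = sp_cores * 2 * clock_ghz / 1000    # FMA = 2 FLOPs per clock
peak_dp = smx * dp_units_per_smx * 2 * clock_ghz / 1000
print(f"{sp_cores} cores, {peak_sp:.2f} TFLOPS SP, {peak_dp:.2f} TFLOPS DP")
# -> 2880 cores, 4.29 TFLOPS SP, 1.43 TFLOPS DP, in line with the
#    "more than 4" and "more than 1.4" claims above.
```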