Gaming Laptops

If you need a really fast laptop for gaming or for video work, this is probably the most exciting field today. Here are some thoughts. The main one is that we are at a turning point for notebooks, with both the new NVIDIA Maxwell (GTX 980/970) class graphics processors and the arrival of NVMe and really fast SSDs.

So we are at the turn of the crank where you want M.2 with at least two PCI Express lanes and hopefully four. Then wait for NVMe to come to SSDs and vertical NAND.

Most of these machines use the quad-core Intel Core i7-4710HQ.

Notebookcheck has the best overview of the actual underlying technology. For instance, the fastest mobile configuration right now is a pair of GTX 980Ms in SLI mode, so there are actually two GPUs at work. It's interesting to see that a GTX 970M in SLI is about 30% faster than a single GTX 980M. So SLI is exotic, but pretty cool.

GTX 980M. This is the premier graphics chip and there are only a few laptops now shipping with it, including:

  • Asus G751JY-DH71: Intel Core i7-4710HQ, 17.3″, 3.8 kg. This is getting good reviews from Wirecutter and CNET. The Asus G751JY-T7065D (Intel Core i7-4710HQ, 17.3″, 4.8 kg) is a heavier, more powerful configuration. It's cheaper than the MSI and doesn't look as garish. Nine pounds of delight! There are quite a few configurations, but the $1,500 barebones makes some sense, so you can put in your own SSDs (think the Plextor M6e).
  • MSI GT72-2QE32SR311BW. This is part of the huge MSI GT72 Dominator range: Intel Core i7-4710HQ, 17.3″, 3.8 kg, and it comes in a range of RAM and disk configurations up to 32GB, with 4 M.2 slots (?!!) and a 2.5″ SSD slot. It has 8GB of VRAM as well.

If you want a different model you can get a slim and light configuration:

  • Gigabyte P35X v3: Intel Core i7-4710HQ, 15.6″, 2.5 kg. This one actually looks pretty cool. It has a WQHD display and is much smaller than the other monsters. The main issue is that it doesn't cool as well since it is small, so it's about 4% slower on CPU benchmarks and 7% slower on graphics benchmarks, but it is slim and light.
    • HP Omen: thin and light like the Gigabyte, but uses a slower GTX 860M (although apparently a Maxwell variant). Still a nice unit, particularly with the M.2 slot.

4K gaming requires NVIDIA GTX 980 SLI

Well, Cyber Monday is nearly here, and while I haven't finished the post on Haswell processors, given that a Haswell Cube would be all about gaming, it's a good time to ask about monitors and displays. Right now the state of the art is:

  • 1080p (aka 1920×1080, aka HD). Not surprisingly the most mature; you can buy a TN panel that runs at 144Hz. We've had one from ASUS for a while and it works well. There is a new feature called G-Sync so that with new NVIDIA cards, the GPU can manage the frame rate. These monitors are relatively cheap at $400-500.

  • QHD (aka 2560×1440, aka 2.5K). We are just getting the first monitors that can run at a true 120Hz here. The ASUS ROG does this and costs $800-1000. It's a TN panel, but for the first time you can get real high refresh rates at this resolution.

Cooler Gotchas

We’ve been building lots of machines lately and it seems like we have struck out a few times: twice with cooler issues and once with a lack of an SSD:

  • We got a nice case that claims 65mm of cooler clearance; the problem is that the (??!!) people who designed it put the AC power inlet right next to the cooler, so a big cooler like the Scythe Big Shuriken is too wide for it. We could mod the case and move the inlet, but it's probably better to get a less efficient but 34mm-tall cooler. So be warned: this case is nice, but with the AC inlet where it is, the 65mm of clearance over the top won't accommodate a 120mm fan. Instead we got the Noctua NH-L12, which is less efficient but shorter.

  • We also got a file server case and a Supermicro motherboard. The problem is that the board uses the relatively unusual narrow ILM. Normal coolers mount on a square ILM, but because this board has 8 DRAM slots and 7 PCI Express slots, it uses the narrow ILM, so the Prolimatech Genesis won't fit. The good news is that Noctua, which I've used for small builds and big ones, does make narrow-ILM versions: the NH-U12DX i4 and NH-U9DX i4. The NH-U12DX i4 is well rated, so a good choice at about $65.

Building a cool Haswell File Server

I haven’t built a machine in a while and have never built a dedicated file server box, so there were lots of great learnings from the exercise. We’ve used first a Mac Mini with a Drobo and then a Synology (a nice embedded product), but now it’s time to go all the way to Linux with ZFS, so wish me luck. Here are some of the decisions. Note that if you buy today you get 10% off from Newegg with promo code visacheckout using the Visa Checkout system.

For a list:

  • Amazon Wish List. Here’s a quick list of recommendations.
  • Newegg Wish List. I haven’t quite figured out how to get a URL out of this system, but it is there too.


Cases

The case matters more than with desktop machines, where we normally go for the smallest mini-ITX and the coolest processors. Here we need lots of room, and hot swap is a good idea given how often disks fail. The big winner and choice seems to be the Norco RPC-4220 as the armored case for file servers:

  • Norco RPC-4220. Can you imagine 20 hot-swap bays in a single $300 box? That is definitely overkill, and you still need a power supply. These use mini-SAS connectors, whereas the RPC-4020 uses traditional SATA cables, which are less reliable. You can use mini-SAS to fan out to four SATA cables. These things require RL-26 rails. If you need more, the RPC-4224 has 24 drives, and the RPC-4216 has 16 if you need less. The Norco comes stock with loud 90mm fans, but you can get a 120mm fan partition for $11 that lets you put in low-noise 120mm fans.
  • Supermicro CSE-743TQ-865B. This is a 4U and costs more at $390, but includes a power supply and is highly rated.
  • ARK 4U-500. If you want 10 drives and don’t need hot swap. You still need a power supply. $199.
  • Rosewill RSV-L4411 if you do need 12 hot-swap bays (12 x 6TB = 72TB of data 🙂 for $199. Note that if you ever think you’ll use SAS drives, the $249 Rosewill RSV-L4412 supports those. Of course, in the distant future everything will just be PCI Express with flash, so it's not clear you need that. The main issue is that the hot-swap trays are flimsy, so probably not a great long-term choice, but certainly cheap!

For a smaller file server, there are some nice options for mini-ITX:

  • Lian Li PQ-26. If you are building an ultra-small system, this has 10 (?!) 3.5″ bays and is mini-ITX.
  • Corsair Air 540. This is a small cube but has room for a couple of hot swap drives. Nice for small SOHO NAS systems.

Cooling Fans

Most of these stock fans are loud. If the machine is going into a server rack, that doesn’t matter; otherwise you’ll want to find some nice quiet 120mm fans if the noise is too much. Some choices (we’ve used both and they are amazingly quiet):

  • Scythe Gentle Typhoon 120. The 800 rpm unit is expensive at $16-20 each, but whisper quiet and very efficient.
  • Noiseblocker M12-S1. Expensive at $23 but nearly noiseless particularly the low speed S1 model. The S2 is nearly as quiet but with a top speed of 1200 rpm is more versatile when facing heavy loads.

As an aside, you shouldn’t need it, but the Antec QuietCool 140 is the best 140mm fan out there. No one has reviewed 80mm fans in ages, but models from Scythe, Noiseblocker, Noctua, Enermax and NMB should do pretty well.


CPU Coolers

And of course a Xeon is going to generate considerable heat, and servers have been a good place to find highly effective coolers like the Noctua NH-D14. The main issue is that server boards use the narrow ILM rather than the traditional square ILM, and very few coolers are available for it. The best one seems to be the Noctua NH-U12DX i4, based on Noctua’s well-regarded NH-U12 design.

If you decide not to use the Supermicro board, then you can use a square-ILM cooler, in which case the Prolimatech Genesis is amazing:

  • Prolimatech Genesis. $80. Top rated, and supports socket 2011 according to the product page. It needs two 140mm fans as well, so it is more expensive, but has amazing performance. It’s 160mm tall, so it needs a huge case.
  • Thermalright Silver Arrow SB-E. $80. A Thermalright was one of the first coolers I ever bought, so it’s nice to see this one at the top of the charts.
  • Scythe Kotetsu. $35. A smaller single cooler and it is very efficient and cheap. The main problem is that it is very hard to find.

SAS vs SATA motherboard and drives

Another detail is that the 4220 uses SAS connections. If you have a SATA hard drive, it connects directly to the SAS backplane (it is plug compatible). And if your motherboard is SATA-only, you need a reverse breakout cable to connect the motherboard to the SAS backplane. See below, but SAS motherboards are only about $50 more, and that seems like an easy decision: get SAS and you can always use SATA drives if needed.

Also you can connect SATA drives to a SAS backplane and a SAS controller on a motherboard as SAS is backward compatible with SATA.

Adaptec explains it all, but the big difference is that SAS allows a chain of up to 128 devices whereas SATA controllers are point-to-point (so 8 SATA ports is 8 drives, whereas 8 SAS ports can serve 128 drives), and the typical cable allows one SAS port to fan out to four SATA drives.
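The port arithmetic above can be sketched quickly. This is just an illustration of the figures in this post (one drive per SATA port, up to 128 devices reachable through SAS expanders), not a model of any particular controller:

```python
# Drive ceilings implied by point-to-point (SATA) vs. expander (SAS) topologies.
# The 128-device figure is the number quoted above for a SAS chain.

def max_drives(ports, protocol):
    """Rough maximum number of drives reachable from a host controller."""
    per_port = {"sata": 1,    # SATA is point-to-point: one drive per port
                "sas": 128}   # SAS expanders allow deep fan-out per port
    return ports * per_port[protocol]

print(max_drives(8, "sata"))  # 8 drives
print(max_drives(1, "sas"))   # 128 devices off a single SAS port
```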

Systems like the Norco have a SAS backplane, so you can attach the motherboard controller to it and then use a forward breakout cable from the SAS backplane to SATA drives. Another solution is to use nearline SAS drives, which are SATA-class drives with SAS interfaces.

The final issue is that the SATA data channel is much more likely to pass an undetected error: roughly once in 10^17 bits you will get silent data corruption (SDC) on a SATA channel, whereas SAS is much better at error detection, at around 1 in 10^21. Since these errors are silent, you have no way to correct for them. That’s a good reason to use SAS for big arrays. There is an even higher standard for SAS called T10 that increases this to 1 in 10^28.

Additional Enclosures for more drives

Supermicro CSE-M35T-1B. This converts a 3×5.25″ external stack into 5×3.5″ hot-swap bays on any machine for $99, so you can add it to an existing chassis, but it has quality issues and seems slow. The iStarUSA BPU-230SATA-RED does a similar thing with somewhat better reviews for $76, adding 3 hot-swap bays to a 2×5.25″ enclosure.

Power Supplies

The big boys use redundant power supplies, but most small business servers don’t need that. We also don’t have a monster graphics card chewing up lots of power, and the new Haswell-EP parts are really low power, so doing a little math shows the main issue is the startup power required by lots of disk drives. And of course you need lots of power connectors. A quick look at the OuterVision calculator showed that a good 1000-watt power supply is what’s needed.
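The little math referred to above looks roughly like this. All the wattage figures here are rough assumptions for illustration (spin-up draw for a 3.5″ drive varies by model), not measured values:

```python
# Back-of-envelope PSU sizing for a file server. Peak draw happens at
# power-on, when every disk spins up at once. Figures are assumptions.

def psu_estimate(n_drives, cpu_w=85, board_w=60, spinup_w_per_drive=30,
                 headroom=1.3):
    """Estimate required PSU wattage with ~30% headroom for efficiency/aging."""
    peak = cpu_w + board_w + n_drives * spinup_w_per_drive
    return peak * headroom

# A 20-bay Norco build: ~970W peak estimate, so a 1000W unit is about right.
print(psu_estimate(20))
```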

Newegg has a short list of server power supplies: $600 for a dual 1kW unit. The desktop list is much longer, but here’s a list of the most reviewed. The SeaSonic Platinum is supposed to be even more efficient but has some quality problems, while the Kingwin has the best power efficiency of the group.

  • SeaSonic X-1250. Also well rated as an editor’s choice. $220.
  • SeaSonic X-1050. This was highly rated as well. It is completely silent when drawing less than 500 watts and has no electronics noise at all. $169.
  • Kingwin LZP-1000 Platinum. We actually bought this unit and it is truly whisper quiet. 80 Plus Platinum, so about 94% efficient. It is modular, which is very nice, and it is also noiseless below 500 watts. But it seems to be out of stock.

These are the other newer well-rated ones:

  • Rosewill Lightning-1000. $169. 80 Plus Gold.
  • SeaSonic Platinum-1000. $220. Includes 11 SATA connectors as well. 80 Plus Platinum.
  • Rosewill Lightning-1300. $169. 80 Plus Gold.


Processors

Now there is the choice of Xeon or the Core series. The big news on Xeon E5 v3 is that you can get an incredible number of cores (up to 18 Haswell cores), so they support terrific concurrency, whereas the Core line has fewer cores but higher individual performance.

So Xeon is usually one process generation behind the Core line. The idea is that these modern server chips are perfect for virtualization: a single server might run all kinds of virtual machines on real cores (if that makes sense 🙂. The map of processors is really complicated, with CPUs from $4K (?!) down to $200. For instance, here are some good small-system choices.

The naming system for Xeon is as dense as Core’s, but basically we are on Haswell-EP (while Core is moving to Broadwell), and it was just announced, with only Wikipedia really able to keep up.

The E3 parts are the small business server models; they still have integrated graphics and typically four cores and 8 threads. The lower-end ones run on the same socket 1150 as their desktop cousins, use DDR3, and date from May 2014.

  • Xeon E3-1226. $220 from Newegg at 3.3GHz with 2/3/4/4 turbo boost bins.

The E5 parts are the real server chips, running Haswell-EP. Note they don’t have integrated graphics, so you’ll need some other way to get a console. These chips have Turbo Boost, which is a calculation of how many 100MHz bins faster they will run depending on how many cores are active. They all use DDR4, so are theoretically 20-30% faster in RAM access. These were just announced in September 2014.

  • Xeon E5-16xx v3. These are uniprocessor chips with 4 cores, ranging from the E5-1607 at 3GHz for $200 to the E5-1630 at 3.7GHz (with turbo boost at 1/1/1/1) for $372, but not readily available yet.
  • Xeon E5-26xx v3. These support dual-socket systems. Confusingly, the $444 base part (the E5-2623) runs at 3GHz but boosts at 3/3/5/5: with 3 or 4 cores active it boosts by 3 x 100MHz to 3.3GHz, and with 1 or 2 cores active by 5 x 100MHz to 3.5GHz. It boosts more with fewer cores busy.
    • Xeon E5-2603 V3. This is a budget chip at $223, dual-socket capable with six cores at 1.6GHz and no turbo boost, so good for lots of small jobs that don’t need much processing power.
    • Xeon E5-2609 V3. $300. Six cores at 1.9GHz and no turbo boost, so a very budget chip, but great for file servers doing little work.
    • Xeon E5-2620 V3. Another example: six cores at 2.4GHz, dual-socket capable, with 3/3/3/3/4/5 turbo boost, so a good way to run lots of small jobs. $434 at Newegg, $407 at Amazon. This seems like the best choice for a high-performance server.
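The turbo-bin arithmetic above can be sketched as follows. This assumes 100MHz bins (Haswell's actual bin size) and the usual notation where the first number in the list applies with all cores active and the last with one core active; it's a sketch of the arithmetic, not Intel's exact algorithm:

```python
# Decode a per-core turbo bin list like 3/3/5/5, assuming 100MHz bins.

def turbo_ghz(base_ghz, bins, active_cores):
    """Turbo frequency with `active_cores` busy, given the per-core bin list.

    bins[0] applies with all cores active, bins[-1] with one core active.
    """
    bin_steps = bins[len(bins) - active_cores]
    return base_ghz + bin_steps * 0.100  # each bin is 100MHz

# Xeon E5-2623 v3: 3.0GHz base with 3/3/5/5 bins
for cores in (4, 3, 2, 1):
    print(cores, round(turbo_ghz(3.0, [3, 3, 5, 5], cores), 1))
# 4 or 3 cores active -> 3.3GHz; 2 or 1 -> 3.5GHz
```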

Motherboards

Supermicro seems to have the most solid reputation in dedicated server boards. Also, Newegg has a $100-off promotion if you buy a motherboard and a processor together:

  • Supermicro MBD-X10SRH-CLN4F. What a board: it supports 8 SAS and 10 SATA ports as well as 4 Gigabit Ethernet. It is $419 from Amazon and seems quite future-proof; expensive, but nothing compared to all the drives you need.
  • Supermicro MBD-X10SRI-F. Supports the latest socket 2011-3, has dual Gigabit Ethernet plus a dedicated management port and 10 SATA connectors. It also supports DDR4 with LRDIMMs up to 512GB of RAM. $290.
  • Supermicro MBD-X10SRL-F. About $10 cheaper than the X10SRI, and apparently differs in virtualization features, whatever that means.

Dual Socket or not

As ServeTheHome explains, dual-socket motherboards are common in servers. That makes sense, as you want lots of cores, and buying two processors is often cheaper than buying one with twice as many cores. The big tradeoff is that if you populate only one socket for future expandability, you lose half the memory and half the PCI Express slots. It would be interesting to analyze whether it is better to buy two $400 processors and mate them: each processor gets half the memory, half the PCIe bandwidth, half the disks, and so on, so at peak you have half the resources on each socket compared with a single $800 processor. I’m sure Intel’s pricing is non-linear, though, so it might make sense.

However for simpler small business servers, probably that complexity isn’t needed.

DDR4 and LRDIMM Memory

This is the next generation of memory. Compared with DDR3 it has 16 banks instead of 8, and there are very high-capacity modules for big virtualization applications like Big Data that use LRDIMMs at a 20% cost premium, good for very large physical memories of up to 1.5TB.

DDR3 has been around forever; stock modules are typically 1600MT/s but range from 800 to 2133MT/s, while DDR4 starts at 2133MT/s and will eventually reach 4266MT/s! As an aside, MT/s is megatransfers per second: you do two transfers per clock cycle, so the true clock rate is half the MT/s figure. If you can use faster memory, you want to, particularly for big servers that like to cache things in memory. The big boys like Crucial are already up to 3300MT/s, nearly twice stock DDR4. If you are a gamer geek with the latest Haswell-E (Core i7) and the X99 chipset in, say, an ASUS X99 motherboard, you should be really happy.
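The MT/s figures translate to peak bandwidth straightforwardly: each transfer moves 8 bytes (a 64-bit channel), and multi-channel boards multiply that. A quick sketch of the arithmetic:

```python
# MT/s to peak memory bandwidth: 8 bytes per transfer per 64-bit channel.

def bandwidth_gbs(mts, channels=1):
    """Theoretical peak bandwidth in GB/s."""
    return mts * 8 * channels / 1000

print(bandwidth_gbs(1600))     # DDR3-1600, single channel: 12.8 GB/s
print(bandwidth_gbs(2133, 4))  # DDR4-2133, quad channel: ~68.3 GB/s
```

This is peak, not sustained; real workloads see less, which is why the interleave results discussed below matter more than the headline MT/s number.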

Samsung is starting to make 8Gb memory chips on a 20nm process. That density will allow 128GB DIMM modules, if you can believe it. Quite a step up from the common 16GB DDR3 modules, and perfect for servers that can use terabytes of physical memory.

But for which servers does this kind of fast memory actually matter?

TweakTown did a test, and with a monster dual 16-core system they couldn’t even complete their testing with 128GB of RAM! The server we are building is more modest, so 16GB ECC modules like the Crucial CT16G4RFD4213 seem pretty reasonable.

As another aside, because of interleaving you have to be careful to put your memory into the right slots on a particular motherboard. Normally you want at least a pair of modules, which doubles the effective bandwidth. It’s also interesting that if you fully populate a memory system, you can actually slow things down: filling all 16 slots reduced memory speed to 1800MT/s, at least in this test. Scaling-wise, TweakTown found the optimal bandwidth at 4 modules per processor, that is, 4-way interleave. The other good thing their testing shows is that all memory is pretty much the same at this level, so it is mainly a cost question whether you use 8GB or 16GB per slot. If you have relatively smaller memory needs, it makes sense to go with 8GB modules to maximize interleave, since 4-way is fastest (4 x 8GB = 32GB vs. 4 x 16GB = 64GB).

LRDIMM vs RDIMM for more than 16GB per module

Typical desktop memory is unbuffered (and less expensive) as there are not too many modules. With servers, you need a register chip to drive all that memory over all that distance. For really big systems with terabytes of memory, you need load-reduced DIMMs (LRDIMMs) to support big memory arrays. This isn’t that big a deal for our little file server, but the net is that soon we will be able to get 64GB memory modules, which really helps density. You do need BIOS support for LRDIMMs, as Supermicro explains, so make sure your server board has it; most modern server boards do. For instance, on an X10SRH you can have up to 512GB (8 x 64GB) of memory! It will cost you, though: an 8 x 32GB configuration is about $3K worth of memory 🙂

ECC for large memory configurations

The last point is error correction. On desktops this doesn’t matter too much since the amount of memory is small, but in big server systems you are far more likely to see errors, so ECC is worth it even though it drives up the cost.

Buying Recommendations

Newegg carries mainly Crucial memory, and the pricing shows a 16GB sweet spot. Amazon as usual is slightly cheaper, with the Crucial 1x16GB registered ECC server module at $204, so that’s the best choice:

  • 1x16GB. $214 for DDR4-2133.
  • 1x8GB. $113
  • 1x4GB. $74

So you can see the sweet spot in cost per bit is the 16GB module. For a small server we might start with 2x16GB ($428), or if you know you won’t need more than 32GB, the best performance would come from 4x8GB ($452) thanks to 4-way interleave.
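The sweet-spot claim above is just a cost-per-GB division over the quoted prices, which is easy to check:

```python
# Cost-per-GB for the module prices quoted above (size in GB -> price in $).
modules = {4: 74, 8: 113, 16: 214}

for size, price in sorted(modules.items()):
    print(f"{size}GB module: ${price / size:.2f}/GB")

# 16GB modules win on $/GB; 4x8GB buys 4-way interleave for a modest premium
# over 2x16GB at the same 32GB capacity.
best = min(modules, key=lambda size: modules[size] / size)
print(f"cheapest per GB: {best}GB module")
```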

Storage with hard disks

These are still the cheapest option for things like video storage and sheer bulk. The big issue is the decision between SAS, NL-SAS and SATA. SAS drives are enterprise class: higher quality, and expensive. SATA drives fail in droves (I’ve lost about a drive every six months or so in my RAID arrays at home, and that is just 8+4+8+8=28 drives in all!).

Anandtech has done some good roundups of 4TB and 6TB SATA drives. The 6TB drives use helium fill to get seven platters instead of the five in normal air-filled 4TB drives. Finally, although more expensive, the Seagate and HGST drives are enterprise class, with 10x fewer errors over their lives than the consumer-grade WD Red, as well as five-year warranties. Interestingly, while the raw read speeds are quite different, in typical NAS scenarios with low client counts it didn’t make much difference.

  • WD Red 6TB. This uses six platters with higher density. $299, so reasonably cheap, although right now the sweet spot in pricing is still the 4TB at about $169; you pay a big premium for 6TB. 130MBps raw transfer, as it rotates at 5400 rpm.
  • Seagate Enterprise Capacity 3.5″ HDD 6TB. Also air filled. 176MBps raw transfer.
  • HGST Ultrastar He6. This uses the helium fill. Note that since it is sealed, you can dunk it into an aquarium PC if you want :-). $470. It runs 4°C cooler with 49% less power as well. 142MBps raw transfer.

Anandtech also explains and reviews the differences between the various lines; the big differences are warranty and failure rates. The big issue with unrecoverable read errors is that if you run RAID-5 and a drive fails, then if a single read error occurs on the remaining drives during the rebuild, you are cooked and can’t recover.

That is one reason why we run RAID-6 here: even with consumer drives, total loss is much less likely, at the cost of capacity. So in a funny way the more expensive enterprise drives are worth it because they fail less (if cheap drives lose 20% of the array a year, you should be willing to pay up to a 25% premium for drives that don’t), and being able to run RAID-5 instead of RAID-6 gains another 12.5% in capacity (for an 8-drive array, you give up roughly 1 of 8 drives to parity for RAID-5 vs. 2 of 8 for RAID-6; the penalty is smaller for bigger arrays, of course).

One problem is that with consumer drives (10^14 bit error rates), a 10TB read makes it quite likely you will hit a hard error, so if you lose a drive, it is likely you will hit another hard error during the rebuild. That is why you need RAID-6 with consumer drives. The so-called nearline SAS/SATA drives are rated at 10^15, so the same 10TB read has only about a 10% chance of an error and your rebuild risk is much lower. Basically, with big drive arrays (10-40TB), you need to upgrade from consumer to at least nearline SAS/SATA (i.e. 10^15).
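The rebuild-risk arithmetic above can be checked with a naive model that treats bit errors as independent. That's a simplification (real errors cluster, and a full-array rebuild reads much more than 10TB), but it shows why the 10^14 vs. 10^15 rating matters so much:

```python
# Probability of hitting at least one unrecoverable read error (URE)
# while reading `tb_read` terabytes, assuming independent bit errors.

def p_ure(tb_read, ber_exponent):
    """ber_exponent: 14 for consumer drives (1 error per 10^14 bits),
    15 for nearline drives."""
    bits = tb_read * 8e12            # 1TB = 8e12 bits
    p_clean = (1 - 10 ** -ber_exponent) ** bits
    return 1 - p_clean

print(f"{p_ure(10, 14):.0%}")  # 10TB on a 10^14 drive: ~55%
print(f"{p_ure(10, 15):.0%}")  # same read on a 10^15 drive: ~8%
```

So even this generous model says a consumer-grade rebuild is closer to a coin flip than a sure thing, which is the argument for RAID-6 or nearline drives.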

  • WD Red Pro. 7200 rpm. For 8-16 bay NAS systems. 4TB and 6TB versions. SATA. $260. 5-year warranty, but only 10^14.
  • WD Red. For smaller 8-bay NAS. 5400 rpm. $173. 3-year warranty. 10^14.
  • Seagate Constellation ES.3. Traditional enterprise drive with a 128MB cache vs. the normal 64MB. $290. 5-year. 10^15. Actually quite a value, as it is last year’s drive with good performance at $100 less. A target if you can find it.
  • Seagate Enterprise Capacity. Their latest drive. 5-year. 10^15. $400. The winner if performance is the key.
  • Seagate NAS. 5900 rpm. 3-year. $170. 10^14.
  • Seagate Terascale SED. Low-cost drive for nearline enterprise storage. $260. 10^14.
  • WD Re. Western Digital’s enterprise drive. $300. 5-year. 10^15.
  • WD Se. Competitor to the Terascale for low-cost enterprise. 5-year but 10^14. $244.
  • Toshiba MG03AC. Nearline enterprise storage. 7200 rpm. $290 Newegg, $232 Amazon. 5-year. 10^15. Also a value: 50% faster than consumer NAS drives, and at 10^15 you can run RAID-5.
  • HGST Ultrastar 7K4000 SAS. A chance to try a SAS drive. 2M-hour MTBF?! $270-380. 10^15, so a pretty good deal if you can get it for $270; the SATA version is $232 at Amazon.

Performance-wise, if you have 1-25 clients hammering away, you definitely get higher throughput from drives like the Enterprise Capacity. The drives fall into three performance classes:

  • The Seagate Enterprise Capacity, WD Red Pro, and Seagate Constellation lead; the Constellation is particularly impressive at 4MBps total throughput in RAID-5, with the lesser drives at half that.
  • Halfway between the cheap drives and the expensive ones is the Toshiba at about 3MBps. The HGST SAS had an interesting curve: slow with few clients at 2MBps, but reaching 4MBps at 25 clients.
  • The “consumer” drives, the Seagate NAS and the WD Red, are more like 2MBps.

Their summary: the Seagate Enterprise Capacity v4 is the winner if you only care about performance, but it is hard to get. The Constellation is actually a very good value at $100 less. The Toshiba is a good deal if you can get it at $290; it has the 10^15 rating, although performance is less than the Constellation. Interestingly, the HGST Ultrastar isn’t a bad deal, and it is a SAS drive if you can get it for $270, saving connectors etc. Net, net, the non-enterprise WD Red Pro performs like an enterprise drive and is $100 cheaper, but is less reliable at 10^14.

Here is their pricing on Newegg and Amazon right now, which changes the ordering somewhat:

  • Seagate NAS 4TB. $170 on both Newegg and Amazon. A good value drive with decent reviews and decent performance; I’ve been using it in my home build. You need to run RAID-6 though, and they do fail even with little use.
  • WD Red 4TB. $170 at Amazon, and the 5TB is just $226, so a pretty good capacity deal.

These are the mid-tier ones:

  • HGST Ultrastar 7K4000. $270. SATA and SAS the same price for Newegg, $207 SATA from Amazon
  • Seagate Terascale with Instant Secure Erase 4TB. $260 on Newegg and $214 on Amazon
  • WD Red Pro. $408 Newegg, $239 Amazon. Good performance but poor reliability

The high-performance drives are all pretty close on price, and given that, the Seagate Enterprise Capacity seems like the best value:

  • WD RE. $239 Amazon, $399 Newegg SATA. $272 SAS Amazon.
  • WD Se. $240 Amazon
  • Seagate Enterprise Capacity. $255 Amazon SATA and $262 for SAS from Newegg and Amazon (Platinum Micro)
  • Seagate Constellation ES.3. $252 Amazon SATA or $262 SAS; $270 Newegg SATA and $320 SAS. Wow, this is the best deal given the performance figures: last year’s drive and awesome. As an aside, there is an array of model numbers: ST4000NM0023 is SATA ($270), ST4000NM0033 is SAS ($330), ST4000NM0053 is SED ($365), ST4000NM0043 is SED ($390), and ST4000NM0063 is SED-FIPS encrypted ($330). SED (self-encrypting drive) is a cool feature that encrypts the drive at a low level, which is great for enterprises. It is $100 more than the Seagate NAS, but has a 5-year warranty, can run RAID-5, and is twice as fast as the Seagate NAS.
  • Toshiba MG03AC. $405. Not worth it at this price.

The conclusion is that the Seagate Enterprise Capacity 4TB SAS at $262 seems like the best value: 10^15 reliability, SAS for a more reliable connection, and double the performance of the $170 Seagate NAS, plus a five-year warranty.

Storage with SSDs

If you want to be the cool kid on the block, then spend $1K: read Tom’s Hardware and get an Intel SSD DC P3700, which uses PCI Express directly. This blows through the 6Gbps limit of SATA and just uses PCI Express for everything. You can also get an adapter board that takes the new M.2 mobile flash form factor and turns it into PCI Express. SAS drives are enterprise class and have more throughput, but cost a fortune. For a startup, the cusp case is that SATA is cheaper but fails more, and we aren’t doing that many simultaneous accesses.

The future of course is PCI Express direct-attached drives. There isn’t a compelling need yet, but it’s a good experiment for this project. Amazon now has a sea of adapters that let you connect an M.2 SSD directly into PCI Express. This works pretty well with these huge server boards with 7 PCI Express slots, and should be more reliable than having cables and connectors in the path. The bottlenecks at each level:

  • SATA bottleneck (6Gbps for SATA 3.0). Moving any faster would be a big problem for SATA. That is about 560MBps effective according to Anandtech. When we built a RAID-0 configuration with two SATA SSDs, we in fact got to a real 1GBps of transfer on our aquarium PC, so this can actually happen, at least at peak rates.
  • SAS bottleneck (12Gbps). Can you believe we are talking about 12Gbps to a drive being a bottleneck?
  • PCI Express 3.0. There’s lots of math, but basically each lane is about 1GBps of bandwidth, or roughly 8Gbps for an x1 link. Wow. So an x4 link is about 4GBps, an amazing 32Gbps.
  • SATAe. This is an upcoming standard that basically stuffs PCI Express lanes into a SATA-style cable, so you don’t need a PCI Express card adapter. Of course the cables will be less reliable, but it’s a neat idea. New motherboards like the ASUS Z87 line are getting updated with SATAe support.
  • NVMe. There will also be a move from the AHCI software stack to NVMe, which has to be built into the SSD controller (the SandForce SF3700 will have this in the consumer world shortly, so look out for those). This increases IOPS because the latency for accessing data is much lower. The soon-to-ship Intel P3500 is a consumer-priced PCI Express drive (no adapter needed) with NVMe, priced at $600 for 400GB.
  • M.2 B key vs. M key. Current SSDs are either B-keyed or M-keyed. B-keyed means they support x2 PCI Express and M-keyed means x4, so you want M-keyed to get double the bandwidth. Only a few SSDs are M-keyed, like the new Samsung XP941 ($750 for 500GB), which is a 1GBps SSD and so needs x4. Most SSDs are 500MBps and can live with x2. Note that if you have an adapter that carries a pair of SSDs, you need double the lanes to sustain peak performance. This can make sense, as a Crucial M550, for instance, is one third the cost per bit ($250 for 500GB).
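The link rates above can be put on one scale once you account for encoding overhead: SATA and SAS use 8b/10b encoding (20% overhead), while PCIe 3.0 uses 128b/130b (about 1.5%). A rough comparison, using nominal line rates:

```python
# Effective throughput of the interfaces above, normalized to MB/s.
# Nominal line rates; real drives and controllers deliver somewhat less.

def effective_mbs(line_gbps, encoding_overhead):
    """Convert a raw line rate to usable MB/s after encoding overhead."""
    return line_gbps * 1000 / 8 * (1 - encoding_overhead)

links = {
    "SATA 3.0":    effective_mbs(6, 0.20),       # 8b/10b
    "SAS 12Gbps":  effective_mbs(12, 0.20),      # 8b/10b
    "PCIe 3.0 x1": effective_mbs(8, 2 / 130),    # 128b/130b
    "PCIe 3.0 x4": effective_mbs(32, 2 / 130),
}
for name, mbs in links.items():
    print(f"{name}: ~{mbs:.0f} MB/s")
```

This is why an x4 M.2 drive like the XP941 can run past anything SATA can deliver.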

As an aside, if you were thinking about standalone storage boxes rather than an embedded drive array, here’s why it makes sense to go embedded. It’s about speed:

  • USB 2.0. At 480Mbps, roughly 10x slower than USB 3.0.
  • Firewire 800. As the name implies these are 800Mbps throughput.
  • USB 3.0. Theoretically it can handle 5Gbps, or up to about 640MBps raw, which is competitive with SATA.
  • Thunderbolt 1 and 2. The first version is 10Gbps and the second 20Gbps. With real loads, Thunderbolt 1 is about 2x USB 3.0.

So what kind of cards can you get if you just want the advantages of PCI Express now? You can see the speeds are much higher:

  • Generic M.2 to PCI Express card. This is a $25 adapter; it is x2, so it has a 750MBps bandwidth maximum and works with B-keyed SSDs.
  • Lycom DT-120 M.2 to PCI Express x4. $24, and this only supports M-keyed cards, which means it is for M.2 SSDs that support four PCI Express lanes, whereas B-key only supports two. This would be a good choice for a monster boot drive that needs 1GBps.
  • Micro SATA M.2 to PCI Express x4. This has two B-keyed slots, so a good match for, say, a pair of Crucial M550s. It doubles the storage density, so say 1TB, and if you RAID-0 them, a cheap way to get to 1GBps theoretically. Although some folks say RAID-0 doesn’t work as well in the real world, and you may be better off with two logical drives and an easier time accessing the data concurrently.

Net, net, you want SSDs with SATAe and NVMe support in the future, so the plan seems to be:

  • to wait for the Intel PC3500 to ship and get 400GB of glory and incredible boot performance with x4 and NVMe. $600
  • If you can’t wait and want super high bandwidth (but higher latency without NVMe), then get a pair of Crucial and a dual m2. to PCI Express x4. This is less than half the price of PC3500 at $600 for 1TB
  • If you want blazing performance right now (but without NVMe), then it’s the Samsung XP941 on the Lycom x4 for $800.
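Working the math on those three options with the quoted prices, a rough cost-per-gigabyte comparison looks like this (prices are the article's approximate street prices at the time, with the adapter folded in):

```python
# Cost per gigabyte of the three options above, using the quoted prices.
def cost_per_gb(price_usd: float, capacity_gb: float) -> float:
    """Dollars per gigabyte for a given drive option."""
    return price_usd / capacity_gb

options = {
    "Intel P3500, 400GB, NVMe x4": (600, 400),
    "2x Crucial M550, 1TB, dual m.2 x4": (600, 1000),
    "Samsung XP941 500GB + Lycom x4": (800, 500),
}
for name, (price, gb) in options.items():
    print(f"{name}: ${cost_per_gb(price, gb):.2f}/GB")
```

The Crucial pair wins on price per bit by better than 2x, which is the whole argument for the dual-slot adapter.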

The GTX 670 Overclocking Master-Guide

Well, overclocking has gotten much easier. The ASUS P8Z77-V includes a utility that does overclocking for you automagically. And parts like the Intel Core i5-3570K allow overclocking and are even warrantied for it. We got a 16% overclock (3.8GHz to 4.4GHz) on the CPU this way, and the GPUs went from a stock 980MHz to 1050MHz with this trick.

Using the tricks from this, we nearly doubled the frame rates. The first cut of the Heaven Benchmark was 26fps, but with these tweaks we got to 44fps. And that is at 3840 x 1024 pixels (i.e., three 1280x1024 monitors' worth) with everything turned on: 16x anisotropic filtering, high shaders and textures, trilinear filtering, and 8x anti-aliasing. Pretty remarkable what a pair of GTX 670s can do!

But if you want to do a little better, there are geek guides, and this one seems pretty good. It walks you through a quick procedure based on the eVGA tool and CPU-Z, using the Heaven benchmark. You basically run the fans at full speed, set the power target to 112% (meaning 112% of the stock wattage), set the voltage to the GPUs to max, and then go up in increments of 20MHz until it crashes, then back down 5MHz.
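That step-up, back-off procedure can be sketched as a simple search. Here `is_stable` is a hypothetical placeholder for a real stress run (e.g. a Heaven benchmark pass that doesn't crash); the default just pretends instability starts at 1120MHz for illustration.

```python
# Sketch of the guide's overclocking loop: raise the clock in fixed
# increments until a stress run fails, then back off a few MHz.
def find_max_clock(base_mhz: int, step: int = 20, backoff: int = 5,
                   is_stable=lambda mhz: mhz < 1120) -> int:
    """Return a settled clock: last point before a crash, minus `backoff`.

    `is_stable` stands in for an actual stress test; the default is a
    made-up threshold purely so the sketch runs.
    """
    clock = base_mhz
    while is_stable(clock + step):
        clock += step          # still stable at the next step: keep going
    return clock + step - backoff  # first crashing clock, minus the backoff

print(find_max_clock(980))  # 1115
```

In practice each `is_stable` check is minutes of benchmarking, which is why the guide calls this "a ton of trial and error."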

This definitely has an impact. The first Heaven benchmark I ran at the “stock overclock,” if that is a term, was 26fps; by increasing the GPU clock another 61MHz over the automatic overclock, to about 1.11GHz, and the memory clock on the card by 100MHz from 1.5GHz, we got to 44fps.


All of the Kepler-based GPUs (670, 680, and 690’s) are a very unique breed of GPU. Gone are the days of manually increasing voltage to stabilize an otherwise unstable overclock. Now, the user must use a great deal of finesse, and a ton of trial and error, to maximize the potential of their overclock. We now have to worry about dynamic clocking, dynamic volt changes, temperatures, and power draw in-order to reach a maximum stable overclock.

via The GTX 670 Overclocking Master-Guide.

Gamer PC Blu Ray Player Samsung SE-506 or use digital downloads only

OK, I thought I could get away without an optical drive, but it is pretty clear that PC games are still delivered this way, so I still need some way to read them. Not everything is on Steam or downloadable yet (when will that be???). You do have to suffer with things like EA’s Origin, and there is the problem of old games that were never digital. The main solution would seem to be copying them as ISOs, but many games implement copy protection, so you can’t just copy them, or back them up for that matter.

A quick review shows that an external USB 2.0 drive isn’t a bad option. I can move it around and while internals are cheaper at $60, when you are submerging a PC there really isn’t anywhere to put an internal drive 🙂

Looking through reviews, they are few, but PC Magazine seems to like the Samsung SE-506, and the Newegg reviews are decent. It’s $90, and it’s a burner too. I can’t think of when I’d want to spend $10 burning a Blu-ray, but what the heck, it's a small premium. They also like the Pioneer BDR-XD04, which is super slim, but at $120 it doesn’t seem worth it.

Gamer PC Drivers, drivers

Well, the hard part seemed to be putting the hardware together, but it is really getting the software installed, particularly when everything has to go on flash. Here are the steps:

  1. Download the Windows USB Installer
  2. Put in your Windows DVD and make the installer key
  3. Go to the ASUS site and download all the many new drivers
  4. Go to the nVidia site and download their drivers

Slam all that stuff onto a USB key, go to the machine, and start clicking. It's not clear whether any one needs to go before the others, but I like to first do:

  1. Wifi drivers to get connectivity
  2. nVidia drivers to make sure graphics work
  3. Then work your way through the rest

Cyberpower AVR vs LCD

I got a Cyberpower CP1500PFCLCD, which is the pure sine wave version, costing $200. It is confusing, as there are also models called CP1500AVR and CP1500AVRLCD. AVR means the output isn’t an actual sine wave but a stepped approximation, which is theoretically not so good for modern power supplies that use active PFC. If you can get away with it, the AVR model is even cheaper as it doesn’t have an LCD, so the hierarchy is PFCLCD > AVRLCD > AVR.

Gamer PC Fans Noiseblocker M12-S1

I’ve used the Nexus 120 SilentX fans for the last five years, but things always get better, and for quite a bit more money you can get fans that are amazingly quiet and still cool well. The recently updated reviews liked:

Noiseblocker M12-S1. It is expensive at $22 a pop, but unreal quiet, spinning at only 870 rpm yet cooling to 25C at 12dBA (compared with the Nexus Real Silent Case Fan D12SL-12 cooling to 24C at 14dBA; that doesn’t seem like much, but every 3dBA is a doubling of sound power).
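For the curious, that 3dBA rule falls straight out of the decibel definition. This sketch converts a dBA difference into a sound-power ratio (note that *perceived* loudness only roughly doubles around a 10dB difference):

```python
# Convert a decibel difference into the implied sound-power ratio.
# Decibels are 10 * log10(power ratio), so invert that.
def power_ratio(db_difference: float) -> float:
    """Sound-power ratio implied by a dB(A) difference."""
    return 10 ** (db_difference / 10)

# Nexus D12SL-12 at 14 dBA vs Noiseblocker M12-S1 at 12 dBA:
print(f"{power_ratio(14 - 12):.2f}x the sound power")  # 1.58x
```

So a mere 2dBA gap means the Nexus is putting out nearly 60% more sound power, which is why small dBA differences matter more than they look.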

Scythe Gentle Typhoon 120-12. This is sadly discontinued, but is an amazing 24C at just 12dBA, so just slightly worse than the Noiseblocker but cheaper too.

There are some lower cost alternatives like the:

Nexus Basic 120. This is the one I’ve been using; it's still quite decent and about $10 vs. $22.

Yes, it booted! I’m soooo happy!

Ok, so maybe I shook some confidence, but we got the Aquarium PC to boot for the first time. It’s been five years since I’ve built a computer, and this one with its 1000 watt power supply had me a bit worried. Fire extinguisher at the ready, and it worked!

You can see the boot screen and also the PC open at the right. I’d send the actual video of me yelling, “Yes, it booted!” but I’m a little embarrassed.

Next steps are:

  1. Getting a RAID 0 array to work from the BIOS across a pair of Samsung 830 256GB drives.
  2. Installing an aftermarket Noctua NH-D14 cooler, as we just have the stock Intel cooler
  3. Getting the three monitors installed and working
  4. Getting Borderlands 2 running
  5. Doing some overclock testing to make sure the various components overclock correctly and stably to 4.4GHz (:-)
  6. Getting four more fans and some cool looking lights
  7. Getting the fan control to work right across nine fans in the radiator
  8. Finding an external low lip tub so that if the aquarium fails there is containment
  9. Getting the 108 pounds (12 gallons) of horse laxative (mineral oil)
  10. Dunking the whole mess in and seeing if it still works 🙂