Tips, tricks and traps



Ubuntu 14.04 with ZFS and SMB sharing

Wow, the documentation for this is confusing, mainly because most of what Google turns up doesn't cover how the latest release of Ubuntu actually works. The good news is that many things are now automatic. So, to cover the tricks and traps here:

Ubuntu file sharing via SMB

This is actually as easy as on the Mac now that we are up to Trusty Tahr (14.04). The good news is that:

  • Avahi-daemon. This is the thing that gives you local names, and it is installed by default. So if you ping hostname.local, you should see all the Ubuntu machines on your network. The same is true of all Macs; it uses Bonjour underneath, and the trick is that you need to add .local to the hostnames. I’ve found that this takes some time to work, and machines don’t appear reliably in the Mac or Ubuntu network browser. But you can check for their existence with avahi-browse -a, a command-line utility installed by default.
  • Avahi advertising of services. By default, Avahi appears to only advertise the network names for your machine; it doesn’t seem to advertise services. Many old posts talk about adding XML junk into /etc/avahi/services. One note: unless you want lots of different apparent entities, you want to merge all the file-oriented services (samba and afp) into a single file. However, when I did a service avahi-daemon restart, nothing happened, until I realized that the file names must end in exactly .service or Avahi will not recognize them. The avahi-publish-service command does work, and you can advertise http and other services with it, but it leaves a process running to do so.
  • Share file automatically installs Samba. Samba provides SMB sharing (get it? :-). It does require an installation, but if you right-click a folder in the Unity interface, you get the sharing option and it does the install. It also has a check box for guest access. Under the covers, a magic directory /var/lib/samba/usershares/ gets created. All the guides that talk about editing /etc/samba/smb.conf are incorrect, as the user shares have now moved here.
  • The confusing thing is user permissions. If you set a share as guest-allowed, then anyone can access it, but Samba has its own notion of users (as there is no central user directory) which you set with smbpasswd, and these are stored in a separate smbpasswd file. This is unlike the Mac, where the password is the same as your user account’s.
  • In the Ubuntu user interface, when it asks if you want to allow guest access, this writes /var/lib/samba/usershares/<sharename> with an entry guest=y, and if you want other users to be able to make changes, it changes the permissions of the actual data folder to 777, so it looks like permissions are honored by smbd (the Samba server).
  • It is not clear what the default Samba password is. The guides aren’t very clear, but user security is quite different for Samba: there is an smbpasswd command, and it allows you to set local and remote passwords. By default, there are no Samba users (which is strange), so you need to add one. The user must also have a real account on the local machine, so do a sudo smbpasswd -a rich to add an existing user named rich to the smb password database. Then that user can run smbpasswd to set his own password. As for the smbpasswd file itself, it appears to be empty when you first start using Samba; it is supposed to live in /etc/samba/smbpasswd or in a tdb database, it isn’t clear which.
  • It is also unclear how, on the Mac, you forget a saved user name and password once you’ve connected to a server. But there are a host of commands to get rid of the confusing “The server XXX may not exist or is not available at this time”. What is happening is that either the server is really down or, more likely, the Mac failed to log on properly. Argh, what a message. To see if this is the problem, go to the Finder, choose CMD-K, and manually type in smb://yourfileserver.local and see if you get a logon prompt. If you do, the Finder is using the wrong credentials. You can also manually list what the server offers with smbclient -L yourfileserver -U yourusername -d 2 and see what you get.
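To make the Avahi advertising step concrete, here is a minimal sketch of a service file that advertises SMB. The file name, the port, and the single-service layout are illustrative assumptions; adjust for your setup, and remember the file name must end in exactly .service:

```shell
# write a minimal SMB advertisement; the file name MUST end in .service
sudo tee /etc/avahi/services/smb.service >/dev/null <<'EOF'
<?xml version="1.0" standalone='no'?>
<!DOCTYPE service-group SYSTEM "avahi-service.dtd">
<service-group>
  <name replace-wildcards="yes">%h</name>
  <service>
    <type>_smb._tcp</type>
    <port>445</port>
  </service>
</service-group>
EOF
sudo service avahi-daemon restart
# confirm the advertisement shows up on the network
avahi-browse -a -t | grep -i smb
```

If you also run netatalk, you would add a second `<service>` element for `_afpovertcp._tcp` in the same file rather than creating a second apparent entity.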

Unifi UAP-AC Zero handoff

How frustrating. One of the cool things about Unifi and their older product line is how they manage zero handoff. This is needed because most clients are stupid: they hold on to a wifi signal for way too long. What you want in a good environment is enough coverage (without interference) to sectorize your wifi.

With the older UAP-Pro and UAP lines, they used Atheros chips and could do this. They all worked by using the same MAC ID and then managing who connects from the Unifi controller. This is a great solution.

The problem is that the new UAP-AC units use Broadcom and do not support zero handoff. So you get this problem: if you connect to an access point on one side of the building and then move to another room, you will still be connected to the original AP.

They have been promising an update for a year (?!) now and it hasn’t shown up. But there is a workaround: you can change the RSSI minimum for each AP. This means that if the AP sees a low relative signal strength, it will kick the client off. It’s a decent solution for now, as the UAP-AC units are the only ones that offer any hope of getting this kind of handoff at a relatively cheap price.

It involves some heavy hacking of files, but it should work pretty well.
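As a sketch of what that file hacking looks like: per-AP overrides go into the controller's config.properties. The path and the exact key names below come from community guides, not official docs, so treat them as assumptions and verify against your controller version:

```shell
# <unifi_base>/data/sites/<site_id>/config.properties (path varies by install)
# kick clients whose signal on radio 1 drops below -75dBm
config.system_cfg.1=wireless.1.minrssi.enabled=true
config.system_cfg.2=wireless.1.minrssi=-75
```

After editing, force a re-provision of the AP from the controller so the settings are pushed down.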



Makesmith CNC and rotary tools

Got a great kit from Makesmith as part of my Kickstarter addiction. It’s a do-it-yourself CNC machine that can cut nearly any soft material, but you need to supply your own Dremel. Popular Mechanics has a good review of what is technically called a rotary tool:

  • Dremel 4200. $130. 5/5. Expensive but worth it; a variant, the 4000-6/50, is the best seller in rotary power tools. The code means model 4000 with 6 attachments and 50 bits. The 4200 is slightly cheaper and has a holiday promotion at Amazon: $20 off if you buy $125 from Amazon directly. The main difference is that the 4200 has the EZ Change bit system, but Amazon reviewers report it doesn’t work well in practice. And the 8000 series is cordless.
  • Proxxon 38481. $119. 4.5/5. Why not get a Dremel :-)
  • Black & Decker RTX-6. $35. 4/5 stars. Main issue is that it has 3 fixed speeds and isn’t infinitely variable.
  • Chicago Electric 68696. 4/5. $22. Wow now that is cheap and a value leader.
  • Craftsman 32960. 3.5/5. $40.
  • Kawasaki 84059. 3.5/5. $48.

ASRock Z97m motherboard confusion

Motherboard vendors have so many nearly identical motherboards that it is hard to tell what does what. ASRock, for instance, has four nearly identical Z97M motherboards. Only with a detailed comparison can you see the differences:

  • Z97M OC Formula and the Fatal1ty are both $135. There doesn’t seem to be any difference except for a “special USB” port (not sure what that is) and slightly different capacitors, so get the cheaper one.
  • Z97M Anniversary. This doesn’t have an M.2 port.
  • Z97M Pro4. The main difference is that it has only one PCI Express slot, so it is good for non-SLI configurations, aimed at business professionals.

Wifi network buying guide for December 2014

Inspired by reading all those Tom’s Hardware reviews: if you need to upgrade your hardware and can’t wait for full 802.11ac Wave 2, here are your options. One site in particular seems to have the best reviews of these things; most of the big sites don’t cover this in much detail, but for wireless, NAS, and other SOHO gear, it is perfect:

Wifi Access Points: To MU-MIMO or not

Access points fundamentally depend on clients to support their new features. The 802.11ac transition is here, and most clients now support it. The big question is when to migrate to 802.11ac Wave 2 (the feature name is MU-MIMO; neither of these is a consumer-friendly name).

802.11ac background

802.11ac supports what is called 3×3, which means three antennas and three streams going to each client. That is how you get to these amazing network bandwidths. For instance, with the old 802.11n the maximum was 150Mbps per stream in a 40MHz channel, so you could get up to 450Mbps if the client had three antennas in it, with all three streams sharing that 40MHz of bandwidth. That only really works at 5GHz, and range is the big issue when you do this.

In the 802.11ac world, we currently use SU-MIMO, which means single-user multiple input, multiple output. The industry tries to capture the available bandwidth by saying that you are at AC450 if you get 450Mbps in the best, best case. So when you analyze bandwidth, you have to know how many antennas the receiver has and how much spectrum you are using (the more channels, the more interference potential). For a single antenna (1×1, which means 1 transmit and 1 receive), the maximum at 2.4GHz is 150Mbps in 40MHz (and that’s highly theoretical given how noisy 2.4GHz is in most buildings in the real world), but at 5GHz there is much more bandwidth, so you can actually get 433Mbps out of 80MHz.

Here is how the math works:

  • 802.11n in 40 MHz is 150Mbps
  • 802.11ac in 40MHz at less than 10m is 200Mbps by switching to 256-QAM which means each signal carries 8 bits rather than 6 bits (normally there are 64 coding points)
  • 802.11ac in 80MHz at less than 10 meters is 433Mbps. But you need lots of clean bandwidth and it only runs at 5GHz (the extra 33Mbps over a simple doubling comes from the 80MHz channel having proportionally more usable data subcarriers).
  • 802.11ac in 80MHz at less than 10 meters with MIMO. If you have multiple antennas, then you can actually beam-steer, so if you have 3×3 MIMO on both sides, you get to an incredible 433×3≈1300Mbps. If you have 4×4 MIMO, then you get to about 1733Mbps.
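The stream math above can be sanity-checked quickly. This is a sketch in shell integer arithmetic: 433 is the rounded 433.3Mbps per-stream rate, so the products come out slightly under the marketing numbers:

```shell
# per-stream 802.11ac rate in an 80MHz channel (256-QAM), rounded down
per_stream=433
echo "3x3: $(( per_stream * 3 )) Mbps"   # ~1300Mbps class
echo "4x4: $(( per_stream * 4 )) Mbps"   # ~1733Mbps class
```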

The big issue with wifi access points is that only one client is talking to the access point at any one time. What this means is that the maximum throughput is capped by the fastest client that you have, not by your router. So if you only have 1×1 AC433 smartphones, you are not going any faster even if you have an infinitely fast AC3200 router. The terms are really deceiving too, since AC3200 refers to the maximum bandwidth summed across all the bands in the best possible case. No single client will ever see all that bandwidth, and bandwidth decreases with distance anyway.

Given all this, what’s a person to do or think? Well, the main thing is that all this theoretical capacity is, well, theoretical. There are three other big variables:

a) Range. How good the antenna system is matters a lot. For instance, the Unifi Pro is just an 802.11n system with 3×3 antennas (450Mbps maximum), but it has incredible range, which is what matters in many circumstances.
b) Client compatibility. Since you are limited to SU-MIMO clients right now, the move to MU-MIMO won’t happen until the clients change over. Last time, we waited until the clients switched before moving to 802.11ac, and that strategy probably makes sense again now.
c) Zero Handoff. In large installations, the clients decide when they move from one AP to another, so you can ‘hang’ for a long time on an AP that is far away. We had these problems when we were all Apple Extreme. Ubiquiti came up with a zero-handoff scheme where all the APs use the same MAC ID, so they look like one big antenna and roaming is managed at the AP level. The result is much smoother handling and you don’t get these bizarre drop-offs, but unfortunately this doesn’t work for the UAP-AC or the UAP-AC Outdoor. Sad! One of the issues with Zero Handoff is that it requires all APs to be on the same frequency, so you don’t want them too close together as they will cause interference. The UAP-AC also doesn’t support wireless uplink, so all of them need a hard ethernet connection. The main reason is that the UAP-AC uses Broadcom while the older ones used Atheros.

Extending SU-MIMO with Xstream.

The last thing is that there is a standards war going on right now, with Broadcom delaying its move to MU-MIMO and instead doing something called XStream. The two use different techniques for maximizing bandwidth.

So the big move is to MU-MIMO, which allows multiple simultaneous client-to-AP transmissions. For a 4×4 router made by Quantenna or QCA (Qualcomm Atheros), the math gives you a total capacity denoted AC2300:

  • 5GHz: 256-QAM = 433Mbps/stream x 4 streams in 80MHz = 1733Mbps
  • 2.4GHz: 256-QAM = 200Mbps/stream x 3 streams = 600Mbps
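The AC2300 label is just the two bands added together and rounded to a marketing number; checking the arithmetic:

```shell
five_ghz=$(( 433 * 4 ))   # four ~433Mbps streams at 5GHz = 1732 (quoted as 1733)
two_ghz=$(( 200 * 3 ))    # three 200Mbps 256-QAM streams at 2.4GHz = 600
echo "$(( five_ghz + two_ghz )) Mbps"   # 2332 -> marketed as "AC2300"
```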

Broadcom isn’t doing 4×4 MU-MIMO, but is doing something called XStream in the interim. This basically puts two 5GHz radios in a single box, so you can get 3×3 across both 5GHz channel sets. No one client ever sees more than 3×3, but it is like having two access points in one.

However, all of this is kind of moot because there aren’t any 4×4 MU-MIMO clients shipping anyway.

Also, unintuitively, a new 802.11ac router will speed up existing 802.11n clients by 50-200%. I’m not sure why, but those are the results.


No one is doing a lot of testing, and it varies by client and so forth, but performance-wise at least one site has a chart measuring 2.4GHz up- and downlink throughput. I include routers on this list as well; ironically, routers have more functions (which you can disable) but are cheaper, as they are higher volume:

  1. Linksys AC1750 Pro. AC1750. $247. 128.7Mbps for 2.4GHz and 505Mbps for 5GHz. Top rated for AC1750 routers.
  2. Ubiquiti Unifi UAP-AC. AC1750. $250. 131Mbps for 2.4GHz and 475Mbps for 5GHz. Rated #2 overall for AC1750 with about the same speed, but the software definitely works better and it is Linux-based. So even without Zero Handoff, it might be worth it.
  3. Linksys EA9200 AC3200 Router. $400. 157Mbps up/down throughput for 2.4GHz and 462Mbps up/down for 5GHz. So even with the fancy new Broadcom XStream and dual 5GHz radios, it doesn’t show much in single-client performance. That makes sense; the dual radios only help if you have multiple clients. It’s interesting to see that if you just buy two Unifi UAP-ACs and tune them to different frequencies, you get about the same effect, with more flexibility.
  4. Cisco Wireless-N WAP561. $264. An N900 system: 151Mbps for 2.4GHz and 258Mbps for 5GHz. It is interesting to see that an 802.11n system can be faster than most 802.11ac systems and pretty close to the theoretical maximum.

Then if you can resist the bleeding edge, the choices are:

  • The ASUS RT-AC87 uses the Quantenna 4×4 MU-MIMO (multiuser MIMO) chipset; the Netgear R8000 Nighthawk X6 is the Broadcom XStream alternative. But of course there are no clients to take advantage of it yet.

TL;DR, here is what is recommended:

  • Unifi Pro AC. This is “only” a 3×3 system, but for $300, you get zero handoff and support for current clients. The next push to GigWifi is going to take a while, and a few $300 APs are not that much to throw away when the time comes. The main reasons are the reported range of the thing and the Unifi multiple-AP software. Plus these are Linux devices, so we can look into them, and it uses PoE, so it is easy to deploy.

Wifi Client devices

The main issue is that it doesn’t make sense to buy much more capacity than you need. The good news is that most of the modern iPhones, iPads and MacBooks are well equipped for the future. And you can get small USB Adapters to upgrade existing hardware if you really want.

Here’s a list of what Apple supports (there are a zillion hardware vendors, but this is a rough sample):

  • MacBooks before 2013. They had 450Mbps 802.11n, that is, 3-antenna systems.
  • MacBook Pro Retina (2013). Since 2013, the MacBooks use 3-antenna 802.11ac, so they can get 1.3Gbps at 5GHz with 3 streams in an 80MHz channel. Each channel has more capacity because 802.11ac uses more points in the modulation. Even in the best case, real-world bandwidth is much lower, and you have to share the connection, so in tests you get about 615Mbps out of a 1.3Gbps channel. Also, at 30 feet the bandwidth drops significantly to just 290Mbps, so don’t believe everything that you read. Much depends on the antennas in the box and what is in the way.
  • MacBook Air 2013. These use 2×2 802.11ac, so they get a theoretical 867Mbps at 5GHz with 2 streams in an 80MHz channel. Similarly, there is a downgrade in practice, so you get 537Mbps out of 867. Notice that at close range, the 802.11ac results for 3 or 2 streams aren’t that different. As a comparison, 1Gbps ethernet gives you 941Mbps of true throughput. At 30 feet, you get 252Mbps, so a steep drop.
  • iMac 2013 and later. Like the rMBP, you get 3 antennas, but they are larger and much better placed, so in a real-world benchmark, while you get 605Mbps at 10 feet, at 30 feet you are still at 461Mbps.
  • iPad Air. Uses MIMO like the MacBooks and iMacs, in 2×2, so you get double the 1×1 throughput, or 300Mbps at 2.4GHz and 866Mbps at 5GHz in the most optimistic case (80MHz channels, 256-QAM).
  • iPhone 6 and 6 Plus. They use 802.11ac and are theoretically 3x faster and can do about 280Mbps in the real world again with a 1×1 antenna.
  • iPhone 5 and 5S use 802.11n. In the real world, they can do about 100Mbps. They are dual-band 802.11n at 2.4/5GHz with up to 150Mbps available, which means a 1×1 antenna (1 transmit and 1 receive) in a 40MHz channel.
  • AC580 USB Wireless Adapters. Most desktops have wireless already, but if they don’t, getting a simple dongle is not a bad way to go. These are 1×1 802.11ac clients and are rated at AC580. The conclusion is that the Edimax EW-7911UTC and Trendnet TEW-804UB are the winners if you are only at 2.4GHz, with longer range and more throughput, and they are as good as the ASUS USB-AC51. But don’t get the Buffalo WI-U2-433DM or the Linksys AE6000 under any circumstances. For the tough 5GHz transmit test, the ASUS USB-AC51 wins at close range with 160Mbps, but then the Trendnet TEW-804UB catches up and lasts longer. Net-net, the Trendnet TEW-804UB is the winner unless you know you are going to be close to a 5GHz AP, in which case the ASUS USB-AC51 at $40 is going to have higher performance.

Wired Ethernet: 10GB future proof

Gigabit ethernet is pretty standard these days. It is about $10 a port, so you can get an amazing unmanaged 24-port switch for $300 or so. However, for the additional cost of management, you get LACP, which is a link aggregation protocol. While this doesn’t increase the bandwidth available to a single connection, it does give you failover, and if you have a file server with two ports, then you can serve more clients.
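For what an LACP setup looks like on the file-server side, here is a hedged sketch for Ubuntu 14.04 using ifenslave. The interface names and DHCP addressing are assumptions for illustration, and the two switch ports must also be configured as an 802.3ad group:

```shell
# install the bonding helper
sudo apt-get install -y ifenslave
# /etc/network/interfaces fragment: bond eth0+eth1 into an 802.3ad (LACP) group
cat <<'EOF' | sudo tee -a /etc/network/interfaces
auto bond0
iface bond0 inet dhcp
    bond-mode 802.3ad
    bond-miimon 100
    bond-lacp-rate 1
    bond-slaves eth0 eth1
EOF
```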

These things are switches, so every client sees every other client at the full 1Gbps. This is quite different from wifi, where you share with other clients, and if you’ve got lots of other businesses around, the noise floor rises and you get even less. Still, amazingly, roughly 900Mbps of real throughput is actually reasonably slow these days; it equates to about 100MBps of effective transfer speed, and with SSDs you can easily get 500MBps-1GBps (5-10x that performance).

The translation is that it’s time to start thinking about 10Gbps Ethernet even for a small business, particularly if you’ve got lots of video you are moving around. In the old days, you had to get fiber optics or something exotic, but today, for runs of less than 30 meters, you can use Cat 6a cable, which is cheap and easy, and still get this high performance.

Now of course, most client devices don’t have 10Gbps adapters, so this is more about talking server to server or machine to machine, but for video work it makes much more sense to get to 10Gbps. In the old days, hard disks were limited to about 125MBps anyway just because of disk rotational speed, but with the advent of RAID0 and much faster SSDs, we are way past this.

Wired Ethernet client

The vast majority of systems support gigabit ethernet out of the box. To get to 10Gbps ethernet, you need to get a special adapter card. So for servers that means getting a PCI Express card that is 1 lane (10Gbps is a pretty good match).

For laptops that doesn’t make much sense, and USB 3.0 isn’t fast enough for 10Gbps anyway (it is 4.8Gbps in SuperSpeed mode). So for a gigabit adapter you need USB 3.0. That’s theoretical of course, but in the real world with, say, a USB 3.0 flash drive, you can get to 125MBps, or about 1Gbps, so about as fast as gigabit ethernet, since you are mainly limited by the SSD speeds of a USB key.

As a rule of thumb for fast ethernet (which isn’t fast any more) of 100Mbps, you can use a USB 2.0 adapter, for gigabit ethernet you need USB 3.0 (1Gbps using 4.8Gbps) and for 10Gbps, you need PCI Express 1 lane (10Gbps).
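The rule of thumb reduces to dividing the line rate by eight (and then taking a haircut for protocol overhead in practice):

```shell
# best-case bytes-per-second from a link's bits-per-second rating
mbps_to_MBps() { echo $(( $1 / 8 )); }
mbps_to_MBps 100      # Fast Ethernet: 12 MB/s
mbps_to_MBps 1000     # Gigabit: 125 MB/s
mbps_to_MBps 10000    # 10GbE: 1250 MB/s
```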

Home Security Systems

The state of the art is in such transition here. The future (e.g., Nest) seems pretty clear: everything should be IP-based, which gets us out of these incredibly high-fee systems (although Comcast and AT&T are both trying to monetize the space).

This is a great example of how current systems for burglar, smoke and fire, phone, HVAC, water and gas leak detection, home theater, and surveillance don’t mesh super well. And it is too expensive to just buy a whole new system because of the rewiring and other costs. Sam just asked me what I thought of the whole area, and I’ve been meaning to do a post on it:

  1. The first thing is that it is very expensive to rip everything up, so that means a “layered” or migration strategy probably makes the most sense. The most important thing is getting the existing stuff to speak IP so that you can use one broadband pipe and having systems that do the translation.
  2. We want to eventually get to all-IP in the house, broadband internet outside, and then you can monitor by VPN and other internet tunneling techniques. The core infrastructure pieces are good home ethernet and wifi, plus a NAS in your basement, and then broadband to the outside world.

So here is how to deal with existing devices:

  1. Existing burglar and fire alarms. The biggest thing is what to do with existing burglar alarms. These boxes are super expensive, and the system providers are on the old model where they charge a lot for the device, with the sensors embedded. Their main interface is the RJ-11 jack to call someone when there is a problem. Right now, for me, it’s the only reason I am paying $20/month to Comcast for phone service. The solution is to find something that does this translation. I spent quite a bit of time looking at Skype devices that would take a phone and connect it, but these had low ratings and there is the problem of the phone number. Ooma (about $109) seems like the ticket. It has no monthly charge, and you buy a box with a phone number which allows existing phones to dial out. We already have such a thing for our VoIP on Comcast, so this will pay off in less than a year and we are not tethered to Comcast.
  • Existing home phones. While we mainly use mobile phones (thank goodness Verizon coverage is decent in the house), the Ooma or Comcast IP works well so you don’t have to replace existing phone RJ-11 systems.

  • Existing cameras. If you have old surveillance cameras, the key is to find an inexpensive place to store the footage. While Dropcam and others have monitoring services and storage, the best thing seems to be to take advantage of the integration that Synology and other NAS vendors have. Having a local NAS, with the ability to rsync or copy out, makes so much sense for surveillance. The system is free for two cameras, and there is a nominal one-time charge to add more. They support lots of different legacy cameras too, so that’s a great solution. You just need to also have a backup solution; Crashplan is nearly ideal for this purpose, as you normally have 30 days’ worth of video storage and Crashplan lets you look at it.

  • Existing lighting systems. Well, if you like the kind of mood lighting that Lutron has, it is super expensive. This is again a case where they sell IP integration now, but I’m not sure you want it. Probably best to leave this standalone if you have it.

  • Existing home audio and theater. This is the domain of Crestron or ridiculously complicated remote controls. Sonos also sells standalone systems. The perfect system without lock-in doesn’t really exist yet. Since we are mainly an iOS family, we’ve found that the best integration is to use the Apple Airport Express to connect to home audio and then suffer through the remote crisis.

  • And then the idea is that you should layer on new things but not replace the old:

    1. New cameras. Most of these are IP-based, so just make sure they hook into your IP network and then have them connect to your NAS. Managing these things is a bear, but Synology, for instance, allows you to use it as the central user interface; I’ve found just using a web browser works best, though. You want a camera that does video that is at least HD and has wide latitude. Also, most of the time what you really want is time-lapse high resolution at 5MP, as even HD is very grainy. I haven’t found a great camera for this. The Dropcam Pro is nice, but it is tied to their service. The Axis and other professional cameras are super expensive, and for indoor surveillance, the el cheapo cameras really are not very good either. Ironically, I think a Raspberry Pi with its camera module probably has the best image quality right now, although you have to home-brew hack the thing.
  • New burglar alarm detection and fire detection. This is expensive and hard to figure out, but I’d say that if you can, you should wait. Kickstarter has an amazing array of folks working on IP-based systems. I’m sure one will emerge, so hold on if you can.

  • New HVAC. Nest is a way to integrate your systems. I’ve not looked into it too much as the existing things are really complicated to figure out. We have one of those programmable things.

  • New home theater. There are lots of IP-based systems right now; you just have to put emitters on all your devices. Global Cache seems to be the standard for IR-to-IP bridging, which is perfect. I’m putting them on devices when I have time; figuring out the control layer next is harder. There are many systems that let you use your existing tablets and phones now, so the home theater thing is much simpler. The main thing is the programming takes forever, but we are very close to a single remote for everything: it’s your phone.

  • The main gotchas in this are:

    1. If you DIY, it is super frustrating; you are in the world of Linux and trying to make it all work. There isn’t much that just works out of the box, and the things that do have massive monthly charges or lock-in. After all, given Snowden, how good do you really feel about handing over all your home video feeds to Comcast or AT&T?

    2. I’ve tried so many of these dynamic DNS solutions for accessing your NAS directly over the Internet. There are two big problems. The first is of course security: you are allowing inbound connections, so you have to stay on top of everything. And second, most don’t work even if you do configure them. I’ve tried everything from Plex Media to Synology cloud to DynDNS. They are not reliable and not very secure. So the best answer is to push things into a cloud, outbound-only from your home; then things seem to work well. That’s why Crashplan and Bluehost are so important to me: they look just like storage. If you are a big boy, a direct connection to AWS doesn’t sound so bad.

    3. Privacy is the other problem. Anything that gets pushed into the cloud means that someone else can see it, so you want “zero knowledge” vendors. Crashplan works that way, and if you are willing to pay for lots of storage, theoretically you could get an AWS host to do the same: do the encryption on your personal machines and only store encrypted data in the cloud.

Gaming Laptops


If you need a really fast laptop for gaming or for video development, this is probably the most exciting field today. Here are some thoughts, the main one being that we are at a turning point for notebooks with both the new nVidia Maxwell (GTX 980/970) class graphics processors and the arrival of NVMe and really fast SSDs.

So we are at the turn of the crank where you want m.2 with at least 2 lanes and hopefully 4 lanes, and then wait for NVMe to come to SSDs along with vertical NAND.

Most of these laptops use the quad-core i7-4710HQ.

Notebookcheck has the best overview of the actual underlying technology. For instance, the fastest mobile setup right now is a GTX 980M in SLI mode, so there are actually two GPUs at work. It’s interesting to see that a GTX 970M in SLI is about 30% faster than a single GTX 980M. So SLI is exotic, but pretty cool.

GTX 980M. This is the premier graphics chip, and there are only a few laptops now shipping with it, including:

  • Asus G751JY-DH71: Intel Core i7-4710HQ, 17.3″, 3.8 kg. This is getting good reviews from the Wirecutter and CNET. The Asus G751JY-T7065D (Intel Core i7-4710HQ, 17.3″, 4.8 kg) is a heavier and more powerful configuration. It’s cheaper than the MSI and doesn’t look as garish. 9 pounds of delight! There are quite a few configurations, but the $1500 barebones makes some sense, so you can put your own SSDs into it (think the Plextor M6e for the m.2 slots).
  • MSI GT72-2QE32SR311BW. The MSI GT72 Dominator line has a huge range: Intel Core i7-4710HQ, 17.3″, 3.8 kg, and it comes in a range of RAM and disk configurations up to 32GB, with 4 m.2 slots (?!!) and a 2.5" SSD slot. And it has 8GB of VRAM as well.

If you want a different model, you can get a slim and light configuration:

  • Gigabyte P35X v3: Intel Core i7-4710HQ, 15.6″, 2.5 kg. This one actually looks pretty cool. It has a WQHD display and is much smaller than the other monsters. The main issue is that it doesn’t cool as well since it is small, so it is about 4% slower on CPU benchmarks and 7% slower on graphics benchmarks, but it is slim and light.
  • HP Omen. A thin-and-light like the Gigabyte, but it uses a slower GTX 860M (although apparently with Maxwell). Still a nice unit, particularly with the m.2 slot.

ZFS on Ubuntu

Well, we finally got our file server running, and with three SAS drives to practice with, it’s time to learn how to use ZFS. For convenience we are using Ubuntu, and there are some handy instructions for installing it on Trusty Tahr (14.04), annotated with notes below.

The instructions for installation are pretty easy:

# get add-apt-repository
sudo apt-get install software-properties-common
# get the ZFS packages
sudo add-apt-repository ppa:zfs-native/stable
sudo apt-get update
sudo apt-get install -y ubuntu-zfs
# load the module now
sudo modprobe zfs
# and for subsequent reboots
sudo tee -a /etc/modules <<<"zfs"
# check to see that it is installed
lsmod | grep zfs
# see what disks I have
lsblk
# create a mirrored disk set named zfs1 with two drives at sdc and sdd
# (for reference: raidz1 means 1 parity drive, aka RAID5, resistant to a single drive failure;
#  raidz2 is 2 parity drives, aka RAID6, resistant to two failures;
#  raidz3 is 3 parity drives, with no traditional RAID equivalent, resistant to three failures)
# ashift=12 means use 2^12 (4096-byte) sectors rather than the 512-byte default
sudo zpool create -o ashift=12 zfs1 mirror sdc sdd
# raw capacity
sudo zpool list
# capacity after format and parity drives
sudo zfs list
# create a file system called users on zfs1
sudo zfs create zfs1/users
# make it shareable by samba
sudo zfs set sharesmb=on zfs1/users

Trap: ARC maximum memory

There is a big trap here in that ZFS on Linux will chew up available memory, so you need to limit its cache size, typically to half of total system memory, by creating an /etc/modprobe.d/zfs.conf file:

    # /etc/modprobe.d/zfs.conf
    # yes you really DO have to specify zfs_arc_max IN BYTES ONLY!
    # 16GB=17179869184, 8GB=8589934592, 4GB=4294967296, 2GB=2147483648, 1GB=1073741824, 500MB=536870912, 250MB=268435456
    options zfs zfs_arc_max=8589934592
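Rather than hand-typing an eleven-digit byte count (and maybe dropping a digit), you can compute it in the shell. This sketch assumes you want an 8 GB cap, i.e. half of a 16 GB machine; once the module is loaded, /sys/module/zfs/parameters/zfs_arc_max shows the live value:

```shell
# compute 8 GB in bytes so you can't drop a digit
ARC_BYTES=$((8 * 1024 * 1024 * 1024))
echo "ARC max will be ${ARC_BYTES} bytes"
# write the modprobe option (takes effect on the next module load)
echo "options zfs zfs_arc_max=${ARC_BYTES}" | sudo tee /etc/modprobe.d/zfs.conf
# after a reboot, check the live value
cat /sys/module/zfs/parameters/zfs_arc_max
```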

    Trap: vdevs are immutable

    Well, this is an even bigger problem. Individual drives (devices, in Linux speak) are grouped into a larger Virtual DEVice (vdev), which is typically a RAID set. The problem is that once you create a raidz vdev, you can’t add drives to it easily or effectively. So if you have, say, three 4TB drives in a vdev and you run out of space, you can’t just add a fourth drive to that vdev.
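The usual workaround is to add a whole new vdev to the pool rather than trying to grow the existing one. A sketch, using hypothetical device names sde through sdh:

```shell
# you can't grow the existing raidz vdev, but you can add a second
# complete vdev to the pool; the pool then stripes across both
sudo zpool add zfs1 raidz2 sde sdf sdg sdh
# confirm the new vdev shows up alongside the old one
sudo zpool status zfs1
```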

    Trap: zpools fill up last added drives fast

    A zpool is basically a striped array (RAID 0) across its vdevs, and this lets you add multiple vdevs or raw devices. But when you do, ZFS fills the remaining free space of each at the same percentage rate, so if you have 1TB left on one vdev and 10TB on another, it will write 10x more data to the vdev with 10TB free.
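You can watch this happening by breaking the capacity numbers out per vdev:

```shell
# -v lists size, allocated, and free space per vdev, so you can see
# which vdev is absorbing most of the new writes
sudo zpool list -v zfs1
# per-vdev I/O statistics tell the same story in real time
sudo zpool iostat -v zfs1
```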

    Tip: name the drives by the physical labels on them

    This tip is from Ars Technica. I didn’t know this, but you can find those kinds of labels with ls -l /dev/disk/by-id, which shows you the names of each disk

    by their wwn ID, by their model and serial number as connected to the ATA bus, or by their model and serial number as connected to the (virtual, in this case) SCSI bus.
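You can then create the pool with those stable names instead of sdc/sdd, which can change across reboots or port swaps. A sketch with hypothetical serial numbers:

```shell
# the by-id names survive cable and port swaps, unlike sdc/sdd
ls -l /dev/disk/by-id
# create the pool with the stable names (these serials are made up)
sudo zpool create -o ashift=12 zfs1 mirror \
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4EXAMPLE1 \
  ata-WDC_WD40EFRX-68WT0N0_WD-WCC4EXAMPLE2
```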

    Tip: Create lots of filesystems because you can compress and grow and shrink in a single line

    In ZFS, a file system looks like a folder, so create it with the syntax sudo zfs create zfs1/images and then you can set properties on that entire file system. You can’t do that with plain folders you create within a file system.

    # compress documents
    sudo zfs set compression=on zfs1/documents
    # changing the file system size is this easy
    sudo zfs set quota=200G zfs1/documents
    # and you can resize it again just like this
    sudo zfs set quota=1T zfs1/documents
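It's worth knowing that properties are inherited by child file systems, so one `compression=on` covers everything you create underneath. You can check what is set where:

```shell
# show what you set; SOURCE says whether each property was set
# locally or inherited from the parent file system
sudo zfs get compression,quota zfs1/documents
# children inherit unless overridden, so this new file system
# is compressed too
sudo zfs create zfs1/documents/archive
sudo zfs get -s local,inherited compression zfs1/documents/archive
```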

    Tricks: Snapshot and backup your file system

    I can’t believe how simple it is to make a backup. ZFS uses a copy-on-write scheme, so when you snapshot, it keeps the old disk blocks as new ones are written. Cool. The syntax is a little weird, but it is basically pool/filesystem@snapshot-name

    # make a snapshot
    sudo zfs snapshot zfs1/documents@snapped-2014-12-11
    # list all your snapshots
    sudo zfs list -t snapshot
    # roll back to that snapshot whenever you want
    sudo zfs rollback zfs1/documents@snapped-2014-12-11
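Snapshots are cheap but not free: each one pins the blocks that have changed since it was taken. Once you no longer need an old one, you can destroy it (a sketch, reusing the snapshot name above):

```shell
# the USED column on a snapshot shows the space it alone is pinning
sudo zfs list -t snapshot -o name,used,creation
# reclaim that space when the snapshot is no longer needed
sudo zfs destroy zfs1/documents@snapped-2014-12-11
```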

    Tricks: Replication of ZFS to another machine

    OK, this is pretty cool. If you can ssh into another machine, then you can send all your changes from one system to another

    # You just send a snapshot over an ssh tunnel to backup-server;
    # receive needs a target dataset (here, a pool called backup on
    # the remote machine)
    sudo zfs send zfs1/documents@snapped-2014-12-11 | ssh backup-server sudo zfs receive backup/documents
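After the first full send, later backups can be incremental, shipping only the blocks changed between two snapshots. A sketch, assuming backup/documents is the dataset on the receiving pool and the earlier snapshot has already been received there:

```shell
# take a newer snapshot, then send only the delta since the one
# that backup-server already has
sudo zfs snapshot zfs1/documents@snapped-2014-12-18
sudo zfs send -i zfs1/documents@snapped-2014-12-11 \
     zfs1/documents@snapped-2014-12-18 | \
  ssh backup-server sudo zfs receive -F backup/documents
# -F rolls the target back to its last received snapshot first,
# discarding any stray local changes on the backup side
```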

    Converting to PCPartpicker

    Wow, I can’t believe how convenient this is. I’m in the middle of converting all my recent Haswell buying recommendations into PCPartpicker lists so they are easy to replicate. Thanks so much to whoever made this incredible site, which offers:

    • Compatibility checking. You just pick parts and can be much more sure everything will fit. I don’t know if they handle coolers completely, but that would be amazing; they certainly check vertical clearance, though I don’t know if they check horizontal interference. They are also looking at GPU and power supply dimensions!
    • Price checking. They include Amazon, Newegg and a host of others. To do the actual buying you should still go through various reward sites to get maximum bargains, but I’m betting their revenue model is affiliate fees.

    • Build publishing and saving. The most valuable thing for me; it keeps builds up to date with nice links.

    • Price histories so you can see when not to buy on a spike

    The only things missing are:

    • Figure out the actual volume of cases. They give HxWxD, but you can’t sort on it, and small is beautiful. So that’s still some manual math. Handy to know that a liter is almost exactly 61 cubic inches :-)

    • Figure out how noisy or quiet parts are. It would be nice if there were some figures of merit for power supplies, chassis fans and coolers, but other sites can tell you which ones are quiet.

    • Ties to review sites like AnandTech or Tom’s Hardware to see which parts get good ratings. The links to Newegg reviews are nearly as useful; they don’t include performance data, but you get a sense of reliability.

    Shoutout to the Leatherman Squirt PS4

    Well, this is one of the few gizmos that Alex finds useful. We have all manner of Leatherman multi-tools throughout the house, but the Leatherman Squirt PS4 ($26 at Amazon, depending on the store) is the one that gets the most use just because it is tiny.
