Kitting out the rest of your home theater

If you aren’t going to spend a boatload, there are good resources for solid gear at modest prices.

If you have an older system, then really you only need to upgrade your AVR for the video signaling (that is, support for UHD) and to handle digital audio over HDMI. Net net: replace your AVR if you do not have an HDMI 2.0 capable receiver.

  • Yamaha RX-V381. This is the lower-end 5.1 version and seems to be more of an Amazon favorite at $280. It does not upscale, which you typically do not need since the scaler in the TV is usually pretty good.
  • Denon AVR-S510BT is a basic 5.1-channel receiver that costs $200. It does not internally upscale, but that should not be a problem since the television does this as well.

If you want a bigger 7.2 system, then you need to step up to a bigger receiver:

  • Denon AVR-S720W is a 7.2-channel receiver. It is only $480. It has HDMI 2.0 for full UHD. However, note that it has a reliability problem with the headphone jack incorrectly detecting headphones and shutting off the main speakers.
  • Yamaha RX-V679. This is the Amazon best seller at $380. It appears to have roughly the same feature set but to be more reliable; upscaling is the main added feature.
  • Yamaha RX-V681, which is $600 and has Atmos and DTS:X. The main complexity with using Atmos is that you need upward-firing speakers (or ceiling speakers) to get vertical sound.

For the specific cables you need, you can get basic ones from Amazon or Monoprice; don’t fall for paying a zillion dollars for them.

  • Monoprice Ultra Active HDMI with 18Gbps for $11 (order more given the shipping charges) or $12 from Amazon.
  • Amazon Basics for $6 gets you a functional Toslink optical cable.
  • Speaker wire. Do not be deceived, you can get a decent set for $15 from Monoprice.

Then you need speaker sets, and these have gotten much cheaper. Personally, I think sound really makes the experience.

But if you want really high quality, then it is better to put together a system yourself, and the ELAC Debut series comes recommended. Note that if you have an existing system and just want a modest update, make sure that you have a good center speaker; that’s the key dialogue speaker. You want something that is going to sound good, so here is a good set:

  • ELAC C5 Debut ($180 at Amazon). This is a super reasonably priced center. If you already have all the other parts, this is the most important upgrade.
  • ELAC B6 Debut ($280 each at Amazon). These are tower speakers for the left and right. You need two.
  • ELAC S10 Debut ($200). This is a 10-inch subwoofer.
  • ELAC B5 Debut ($200 for the pair). These are bookshelf rear speakers.

For really good sound, they recommend:

  • KEF Q-100. These are bookshelf left and right speakers for $300 a pair. These are right now at a huge discount from their normal $800 price.
  • KEF Q-200C. For $500 a center speaker. The center is generally the most important given that this is where the dialog is.
  • KEF Q-400. This is a great subwoofer for $400. These are at a big discount as well.



Wifi Access Points and a Decoder Ring

Now that Apple is out of the business, it’s time once again to look for a reliable and fast wifi network for home and small business. But it is really hard to figure out what is going on without at least some background in wifi and how it has evolved. So, this is a huge post filled with facts, but the tl;dr will help you.


Some of the criteria for a home router are:

  1. Fast. I’ve had plenty of small ones that will lock up under any real traffic. In the prosumer space, you want a router that can handle 1M packets per second. They cost a little more, but are worth it. My own experience with Linksys WRT-54G routers was that they would lock up under load, which was one reason I switched to the Apple Airport Extreme. Now with Apple out, what’s a good router to look at? I have tried two small business routers that do 1-2M packets per second. It used to be fairly easy: there was 802.11b, then a, g, n and finally ac; now it is really confusing, with MU-MIMO and 2×2, 3×3 and even 4×4 systems showing up.
  2. Broad coverage. There is nothing more frustrating than intermittent wifi in a house. The symptoms are really strange and hard to diagnose. It’s one reason we switched to Unifi: they had great coverage. So for a long time, I recommended the Unifi/Apple combination.
  3. Quality and reliability. The main problem isn’t peak speed, but having stuff that is reliable and just works well for a long time. I’ve had plenty of access points hang and routers crash. It’s one reason I’ve been using business-grade access points like Unifi for a long time. They actually do not cost more, but they do require that you run a management console and understand what is going on.

With all that, here are the recommendations:

  1. If you have a small house (defined to be 2,500 square feet), then you want one powerful signal. If you want something rock solid, get a Unifi AC LR (AC1300) as the access point, connect it to a small business router like the Linksys or Unifi ones, and then to your cable modem. If you are cheap, a Netgear Nighthawk AC1750 should be fine, but beware that you may see both software and hardware failures.
  2. If you have a big house or office (defined to be 3,500 square feet) and you have wired ethernet, you should get a collection of Unifi AC Pro (AC1750) units and spread them around your house. Put them close to the places where you compute (your office, your bedroom).
  3. If you have a big house without wired ethernet, then you need a mesh system. The Amplifi and Netgear Orbi have gotten good reviews (though I’ve not tried them). I would probably tip my hat to Amplifi just because it comes from Ubiquiti, the Unifi folks, but that’s me.

Hardware Decoder Ring: Modem vs. Router vs. Access Points

So now what’s a person to do? First, some terms:

  1. Cable Modem. Many boxes are combinations, but there are three basic pieces. First is the modem, which converts your internet service provider’s signal to what your house uses. It might seem pretty obvious, but if you do not have servers in your house, then no matter how fast your wifi is, you won’t see anything faster than the 20-50Mbps that the cable modem delivers. Translation: you do not need an AC5300 router (see below) if all you will ever need is an N50’s worth of bandwidth. Most of the time you want all this stuff separate so you can upgrade each piece; that is, the cable modem separate from the router separate from the wifi access points, if you are a nerd.
  2. Router vs. Wifi Access Point. The router is what takes all the wired and wireless connections and routes them. Routers are basically rated by how many packets per second they can pass and how much memory they have. The small business ones might have two WAN (wide area network) connections so you can have multiple ISPs. Then there are wifi access points; most consumers end up with a single combination Router+Access Point (or, if they rent from Comcast for instance, they might get a Modem+Router+AP). This works in smaller houses where the router is centrally located. Many houses, though, will do better if they use wired infrastructure and then have APs on multiple floors.

Wifi Decoder Ring

Beyond 802.11ac there is a lot of hype. OK, first some terminology. Instead of the various acronyms, vendors have started talking about total available bandwidth across all the channels. That is not what an individual client can see, but the bandwidth that the wifi access point can handle in aggregate. So for instance, a common 802.11ac device might have 1750Mbps of total bandwidth, that is 450Mbps at 2.4GHz plus 1.3Gbps at 5GHz, and thus be called an AC1750 access point.

One crazy thing about this terminology is that the backhaul from a wifi access point is 1Gbps ethernet, so there is no way to actually get 1.7Gbps of total bandwidth onto the wire. So most of this is actually marketing hype. Also, with interference, when they say you can get 450Mbps at 2.4GHz, that assumes there is no interference at all (which is obviously never true) and that you are very close.

Frequencies to Channels

A quick decoder ring (which I think I posted before) is a little technical, but here is how channels are allocated. At 2.4GHz, there are 11 “channels”, but they overlap. In the US, the non-overlapping channels for 802.11b and for 802.11g/n are 1, 6 and 11. As an aside, when you set up your wifi, if you are a nerd, you will see lots of folks at channel 3 for instance, which of course interferes with both 1 and 6, but most access points “sniff” the environment and try to find the frequencies that are cleanest. Wifi Explorer on the Mac is my favorite app for exploring this.

In the rest of the world, things are a little different since you can use up to channel 13, so the non-overlapping channels are 1, 5 and 9. This gives three 20MHz channels, each 150Mbps maximum. That is how you get to 450Mbps available in the best case at 2.4GHz. This is of course beyond the best case, since it assumes no interference at all from other sources and great signal-to-noise ratios.

You can, by the way, bond these into 40MHz channels as well, so outside of the US you can have two non-overlapping 40MHz channels at channel 3 (sometimes called 1+5) and channel 11 (sometimes called 9+13). Note that in the US you can’t actually use channels 12 and 13, so there is really only one 40MHz channel legally available there, called channel 3 or 1+5.

With 5GHz, things are even more confusing because many bands are allocated to other uses. So in the US, the available 20MHz bands (this time non-overlapping bands) are 36, 40, 44 and 48. These can be bonded into 40MHz bands called 38 (or 36+40) and 46 (or 44+48). You can also bond them all into one gigantic 80MHz band called 42 (or 36+40+44+48). This is also called the “lower 80MHz”.

Then there are some other frequencies, with 20MHz channels called 149, 153, 157, 161 and 165. As with the lower 5GHz, you can bond them into two 40MHz channels, 151 (aka 149+153) and 159 (aka 157+161), plus a leftover 20MHz one. And you can bond the two 40MHz into a single 80MHz channel called 155 (aka 151+159). It is kind of cool: you just take the arithmetic average of each bonded pair to get the new channel number.
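The arithmetic-average rule is easy to sanity-check in a few lines of Python (a sketch; `bonded_channel` is just a made-up helper name, and the channel numbers are the US 5GHz allocations discussed above):

```python
# The bonded channel number is just the arithmetic average of the
# two channels being combined (channel numbers track center frequency).
def bonded_channel(low, high):
    return (low + high) // 2

# 40MHz bonds in the lower 5GHz block
assert bonded_channel(36, 40) == 38
assert bonded_channel(44, 48) == 46
# ...and the 80MHz bond of those two 40MHz channels
assert bonded_channel(38, 46) == 42
# The upper 5GHz block works the same way
assert bonded_channel(149, 153) == 151
assert bonded_channel(157, 161) == 159
assert bonded_channel(151, 159) == 155
```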

There are some other channels that are “shared” with other users, mainly weather radar. You have to use DFS (dynamic frequency selection) to access them, and the world is very fragmented, but in the US these are the channels:

  • Channels 36-48. Completely free but will interfere with other wifi users.
  • Channels 50-64 (directly above the “low” 36-48), which must turn off if there are weather radars operating
  • Channels 100-144, a totally separate group that also must defer to weather radar
  • Channels 151-159, assuming that there are no radars operating there.

802.11b to 802.11g Decoder Ring

So now we can see how much each channel can carry. To really understand this, it’s easiest to look backwards and see how each standard, 802.11b, then g, then 802.11n, increased the width of the channel and improved the modulation techniques. With 802.11b, in a single 20MHz channel, you could get 11Mbps using DSSS modulation. With 802.11g, in a 20MHz channel you could get 54Mbps because they switched to the more efficient OFDM. These are maximums; what really happens is that when the signal degrades, the rate steps down, roughly halving. This is how you end up getting lower performance but longer range.
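That fall-back behavior can be sketched in a few lines of Python. The rate table is the real 802.11g OFDM rate set; the SNR thresholds are made-up numbers purely for illustration:

```python
# The real 802.11g OFDM rate set, fastest first.
G_RATES_MBPS = [54, 48, 36, 24, 18, 12, 9, 6]

def negotiated_rate(snr_db):
    """Illustrative rate adaptation: assume (hypothetically) that the
    top rate needs ~25dB of SNR and each step down needs ~3dB less."""
    for step, rate in enumerate(G_RATES_MBPS):
        if snr_db >= 25 - 3 * step:
            return rate
    return 0  # too far away or too noisy: no usable link

print(negotiated_rate(30))  # clean signal: the full 54Mbps
print(negotiated_rate(10))  # degraded signal: drops to 12Mbps
```

The exact thresholds are radio- and vendor-specific; the point is only that range is bought by stepping down the rate table.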

OFDM is the big trick, and thus 54Mbps for 20MHz of spectrum is kind of the building block.

By the way, as a useless factoid, ISM actually stands for Industrial, Scientific and Medical spectrum, but of course 2.4GHz has been “hijacked” for Bluetooth and Wifi. The 5GHz band is sometimes loosely called ISM too, but it is officially called U-NII.

802.11n MIMO

With 802.11n, launched in 2009, the simple change was to increase coding efficiency, so the basic building block is 72Mbps on a 20MHz channel (technically 1 antenna, 64-QAM, 5/6 coding rate, which means 5 data bits for every 6 total bits, so there is one redundancy bit in the convolutional coding). And in a 40MHz channel, you get a little more efficiency because you do not need as many guard frequencies, which gives you the basic 150Mbps in 40MHz.
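Those building-block numbers fall straight out of the OFDM parameters. A sketch in Python, using the real 802.11n subcarrier counts (52 data subcarriers at 20MHz, 108 at 40MHz) and the 3.6-microsecond short-guard-interval symbol:

```python
def phy_rate_mbps(data_subcarriers, bits_per_subcarrier, coding_rate, symbol_us):
    # Bits per OFDM symbol, times the coding rate, divided by symbol time.
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_us

# MCS 7: 64-QAM carries 6 bits per subcarrier, with 5/6 convolutional
# coding; the short guard interval gives a 3.6 microsecond symbol.
print(phy_rate_mbps(52, 6, 5/6, 3.6))   # 20MHz: ~72.2 Mbps
print(phy_rate_mbps(108, 6, 5/6, 3.6))  # 40MHz: ~150 Mbps
```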

To get more bandwidth, things took a big step forward with the introduction of MIMO (Multiple Input and Multiple Output). This takes advantage of a real-world phenomenon called multipath propagation: different signals bounce around off walls and other objects. You used to see this on old-time analog TV as ghosts on the screen.

But you can also use this to, in effect, transmit more signals over the same spectrum. This is how you get beyond 54Mbps per 20MHz: by using the ghosts, with a modulation called MIMO-OFDM. It requires a separate antenna for each stream, which is why phones actually have multiple antennas built in, and routers sometimes have antennas sprouting out of them. It gives rise to figures that show how many transmit and how many receive antennas there are. So if a system is 2×2 MIMO, it has 2 transmit antennas and 2 receive antennas.

So in a single 20MHz channel, 802.11n can have a maximum of 4 streams, so you get 4×72 = 288Mbps in 20MHz. With a 40MHz channel, you get more efficiency, so that leads to 4×150Mbps = 600Mbps. These numbers should start to look familiar from the specs: when you have an N600 router, what this means is that it can handle 4×150Mbps.
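So the N-number decodes as stream count times the 150Mbps building block; a one-liner makes the familiar labels fall out:

```python
# N ratings are just stream count x the 150Mbps/40MHz building block.
def n_rating(streams):
    return streams * 150

for s in (1, 2, 3, 4):
    print(f"{s} stream(s) -> N{n_rating(s)}")  # N150, N300, N450, N600
```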

Now, practically speaking, there are not many places where you can actually get 40MHz of clean spectrum. This is why most of these specifications are too optimistic to ever be achieved. At 2.4GHz, you need to have the entire spectrum clear to get to 150Mbps × 4 streams = 600Mbps. At 5GHz, you only have two choices, a low and a high band, for 1.2Gbps total if the 5GHz spectrum is totally clean.

802.11ac MU-MIMO and 160MHz channels (aka Wave 2)

802.11ac is the next iteration. At 2.4GHz, you get exactly the same bandwidth as the old 802.11n standard because you are using the same encodings (64-QAM, 5/6 coding). But at 5GHz, you get more with these optimizations:

  • Move from 2×40MHz in 5GHz to a single 80MHz channel (only available at 5GHz of course, and there are only two blocks for it). This improves efficiency, as you do not need as much in terms of guard frequencies.
  • 256-QAM, up from 64-QAM, so more data bits per symbol just from denser coding

Now use the MIMO trick to get more spatial streams to a single client. So if you have a transmitter with 2 antennas and a client with 2 (aka 2×2), you can get up to 2×433 = 866Mbps at 5GHz. The net of this is that whereas you could get 150Mbps per 40MHz with 802.11n, the new “building block” maximum is now 433Mbps per 80MHz with 802.11ac. So the two tricks above improve efficiency by 433/(150×2), a roughly 44% improvement. Of course this is only possible because Moore’s law means we can afford more decoding.
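The same per-stream arithmetic as for 802.11n reproduces the 433Mbps figure, using the real 802.11ac 80MHz parameters (234 data subcarriers, 256-QAM at 8 bits per subcarrier, 5/6 coding, short guard interval):

```python
def phy_rate_mbps(data_subcarriers, bits_per_subcarrier, coding_rate, symbol_us):
    # Bits per OFDM symbol, times the coding rate, divided by symbol time.
    return data_subcarriers * bits_per_subcarrier * coding_rate / symbol_us

ac_80mhz = phy_rate_mbps(234, 8, 5/6, 3.6)  # ~433.3 Mbps per stream
n_40mhz = phy_rate_mbps(108, 6, 5/6, 3.6)   # ~150 Mbps per stream

# 802.11ac packs ~44% more bits into one 80MHz channel than two
# bonded 802.11n 40MHz channels would carry.
print(ac_80mhz / (2 * n_40mhz))  # ~1.44
```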

There are, however, two cutting-edge tricks to get more. The first is 160MHz mode, which relies on additional newly allocated channels. Originally there was just 36-48 and 149-165, but now there is one 160MHz block available, and it is only at 5GHz (so pretty short range), at channels 36-64. The other method is bonded 80+80MHz. This is tricky, so the cutting-edge work is signal processing that can bond two separate 80MHz blocks. As before, this only works with very clean spectrum, and 5GHz is much shorter range than 2.4GHz anyway.

Second is multi-user MIMO. In all the previous systems, the assumption is that the access point speaks to one client at a time. So if you have, say, 10 clients on a single MIMO access point, then time division splits the bandwidth: client 1 takes a turn, then client 2, and so on.

With MU-MIMO, cooperating clients can transmit at the same time, so multiple clients can talk simultaneously. This is a pretty special case, because all the clients and the access point have to support MU-MIMO for it to work. One technique is something called client steering: when a client PC or phone connects, the access point tells it which channel to use, so that MU-MIMO clients are grouped together. If you do that, then with 4-client MU-MIMO you again multiply the bandwidth (so the tricks are: MIMO reuses the same frequencies 4-8x for any one client, and MU-MIMO gets that multiplication across multiple clients).

The main trick to getting MU-MIMO working is something called beamforming. You basically use the multiple antennas to “point” the streams (like a phased-array radar) at different clients.

What do the AC ratings mean?

So the marketing folks came up with a brilliant idea: without having to explain all of this, use a single specification that tells you first the network standard (N for 802.11n, AC for 802.11ac) and then the maximum total bandwidth. That means an “N600” router is an 802.11n device with a 600Mbps maximum; since 802.11n has a basic building block of 150Mbps per 40MHz, this thing can somehow handle 4 channels of 150Mbps each. This could be 1×1 at 2.4GHz and 3×3 at 5GHz.

Now, they take it farther and quote the total bandwidth across the best case, where all the 2.4 and 5GHz spectrum is completely clean. As you can see, it is basically impossible to ever actually get that 600Mbps number.

So for 802.11ac there are some magic numbers that you see quite a bit. The math is complicated, but for the very common AC1750 routers, the derivation is:

  • 3×3 (that is a total of six antennas on the thing)
  • 2.4GHz: assuming the full 40MHz at MCS 7 (64-QAM, 5/6 coding) across 3 streams gives you 450Mbps
  • 5GHz: you get to 1.3Gbps at 80MHz using 3 streams at MCS 9 (256-QAM, 5/6 coding)
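Putting the two bands together reproduces the label; a sketch (the rounding of 3×433 up to 1300 is marketing’s, not the PHY’s):

```python
# AC1750 = best-case 2.4GHz total + best-case 5GHz total.
band_24 = 3 * 150               # 3 streams x 150Mbps (40MHz, MCS 7) = 450
band_5 = round(3 * 433.3, -2)   # 3 streams x ~433Mbps, rounded to 1300
print(int(band_24 + band_5))    # 1750
```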

Now of course this is pretty unrealistic, because these routers are only connected to 1Gbps Ethernet, so unless you are talking about a USB 3.0 disk attached directly to the access point (USB 3.0 is 5Gbps), there is no way to actually use 1.3Gbps, and you have to be really, really close to use the full 80MHz at 5GHz.

And for the so-called AC3200 access points, the math assumes you have MU-MIMO-enabled clients:

  • 2.4GHz: you move to MCS 9 (256-QAM), which gives you a 33% improvement to 600Mbps using the full 40MHz with 3×3
  • 5GHz: there are even more tricks; two clients can talk simultaneously, so 2×1.3Gbps, which as we’ve said before only pays off if you actually have multiple MU-MIMO clients. And there is no way that Gigabit ethernet could support it anyway.

Because antenna design in the real world matters so much, a Unifi AC Pro, an AC1750 device, can outperform a theoretically better AC3200 one if it handles the garbage out there well. And in fact a Unifi AC LR, which is just AC1300, can deliver much longer range with higher bandwidth than a fancy AC5400, particularly if none of the clients are really MU-MIMO anyway.

What does it mean in the real world?

The real net of this is that access points are really moving to massive specmanship without any real benefit. Instead, you have to see how real devices work in spectrum that is anything but noise-free. Looking right now at the laptop I’m using, an “old” MacBook 2010 that supports 802.11n with 3×3 antennas, talking to an old Unifi AC Pro with 2×2 MIMO 802.11ac, I’m seeing, at 25 feet away in a noisy office tower, 300Mbps at 40MHz. Even the newer MacBook Pro 2014 with 3×3 MIMO actually connects to an 80MHz channel at 585Mbps using MCS 7 (64-QAM, 5/6 coding), compared with the 866Mbps that is possible at MCS 9 (256-QAM, 5/6 coding).

Why the fall-off? Well, when the world gets noisy, the router backs off the encoding from 256 constellation points to 16 and adds more error correction, so that 33% of the bits are error correction.
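The cost of that back-off is easy to quantify: information bits per subcarrier are constellation bits times the coding rate. A sketch, using the 256-to-16-point and one-third-redundancy back-off just described:

```python
def info_bits_per_subcarrier(constellation_bits, coding_rate):
    return constellation_bits * coding_rate

clean = info_bits_per_subcarrier(8, 5/6)   # 256-QAM, 5/6 coding: ~6.7 bits
noisy = info_bits_per_subcarrier(4, 2/3)   # 16-QAM, 2/3 coding: ~2.7 bits
print(noisy / clean)  # ~0.4: the link drops to 40% of its clean-air rate
```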

Specific Hardware

So in light of all these reviews, here are some highly opinionated ideas about what to get, in order:

Unifi: Tried and True for Techies or small houses

Unifi. OK, this is really for prosumers, but if you have lots of equipment and really want something trouble-free, then the Unifi small business line is the way to go. It seems to work super well in the six (?!) different homes and small businesses that I support. The main drawback is needing a management console and figuring out which model to pick. Also, if you have a big house, you need to figure out how to get ethernet to the different access points. So either you are a nerd with Ethernet in your walls, or your house is small enough to get away with one access point. Also, this is just a wifi access point, so you still need a cable modem/router, but it makes a nice add-on to an existing installation.

UAP AC LR (Amazon carries it for $96; OK, that’s a terrible name, I know!) for home, because you really want the reach, and setting up lots of wifi access points is a pain. It supports 450Mbps at 2.4GHz and 867Mbps at 5GHz, so spec-wise it doesn’t compare with the consumer products, but the range and reliability are the key factors.

UAP AC Pro (Amazon, $125). If you can afford it, the more expensive UAP AC Pro is indoor/outdoor, uses standard PoE power, and supports 3×3 MIMO to get you to the full 1300Mbps. Remember, this only matters if you have servers in your home; otherwise, you do not need anything like this bandwidth just to use the internet. On the other hand, the cost difference is nominal, so you might as well get the AC Pro.

For big unwired houses, maybe try mesh Orbi or Amplifi

Wirecutter has a good overview of the various systems. They are expensive at $200-400 for a set of 2-3 nodes, and they have limitations since you need wireless backhaul. Wirecutter likes the Netgear Orbi best; they say it is just two units and easier to set up, and it is $380 from Amazon. One interesting note here is that a single powerful access point like the Archer C7 can be much better than these mesh systems. The main reason they didn’t like the Amplifi has to do with its design: the end units are supposed to be plugged directly into the wall, so they sit too low. It’s a strange complaint, since the simple solution is to just get a short extension cord so you can put the unit anywhere.

Anyway, some more about Amplifi:

Amplifi just launched (also from Ubiquiti, the Unifi folks) and it hasn’t gotten many reviews, although my Facebook buddies like it. It builds wifi coverage with “mesh” networking, which means you do not need wires for multiple access points; it uses a separate 5GHz channel to backhaul from the remote access points to the router. It’s a little more expensive at $349 MSRP, but Digital Trends also liked it. It’s most useful for big houses with coverage holes.

The thing looks really cool (not like the UFO design of the Unifi): you get a white box that is the router, and then there are two small antennas that plug into power outlets and that you deploy into the nether regions of your house. Ars Technica does a good job explaining the SKUs, which are a little confusing since all of them have a base plus two mesh points but the radio technology used is different:

  • Amplifi Standard. $199. 2×2 MIMO; max power is 24dBm (base) and 22dBm (mesh), enough for a 10,000 square foot house (egads!)
  • Amplifi LR. $299. Boosts power by 2dBm, to 26dBm base and 24dBm mesh. Claims this increases range to cover a 20,000 square foot (?!!!) house.
  • Amplifi HD. $349. Goes to 3×3 MIMO and 26dBm at both base and mesh, so more throughput.

Looking at this, it looks like a good choice for big houses that do not have ethernet inside, or where a more consumer-friendly look and application make sense.

For the cost sensitive, I’m not sure

Wirecutter likes the TP-Link Archer C7, which is an AC1750 router (450Mbps at 2.4GHz and 1.3Gbps at 5GHz), but Amazon folks have found reliability problems, which isn’t surprising given the very nominal $75 cost.

It looks like the Netgear AC1750 and its family are loved on Amazon, but Amazon gloms together all the reviews of different devices, so it is hard to tell which is better. Also, Amazon reviews are increasingly getting spammed with paid reviews, so I do not take 5-star reviews too seriously; you need to look at the 1-star reviews to really figure out what is going on.

Overall, just reviewing Amazon, it does look like these routers are a nest of reliability and update issues, so caveat emptor.


4K Content

Wow, the world has really improved. Three years ago you had to get a dedicated 4K playback thingy from Sony to watch UltraHD content. Now life is way simpler.

Let’s start with the easy stuff and then move on to the tougher stuff.

Movies (a first start)

There’s a great movie guide out there, but most Samsung and other TVs made in 2014 or later can read 4K content directly. Most services work either with a set-top box like the Fire TV 4K or nVidia Shield, or with Samsung Tizen-based UHD TVs (most of the new ones).

  1. Blu-ray UHD. As with the previous format shift, the easiest thing to do is to get some new Blu-ray disks that are 4K. They do not have the streaming bandwidth problems, but they do cost some money.
  2. Amazon. They have about 33 movies and a few television series in Ultra HD now, with the best being Mozart in the Jungle. Most of these are free with Amazon Prime, so you might as well just use it. I had a chance to watch these over the weekend and it works well. There is a separate 4K tab each for TV shows and for movies.
  3. Youtube 4K. This is mainly user content, but great demo footage to test your displays.
  4. Netflix 4K. If you have a relatively new Samsung TV, there is a built-in player for Netflix. Content is limited and you need a good internet connection. House of Cards and other Netflix in-house series are in 4K. You do need the $13/month UltraHD plan to see them, though.
  5. Google Play Movies. You need a Sony television, Roku, nVidia Shield or other Android box, but they now have 4K movies you can buy.
  6. UltraFlix is a 4K-focused content provider with some great UHD movies like Robocop (1987). But beware that most of their content is upscaled from HD and not native UHD.

Hardware (At least this is here)

UHD Blu-ray players (Sony). These are expensive, and the dedicated players are the highest quality option. Most folks won’t need one, but boy are they cool, so why not get one? Well, the main reason is that the UHD Blu-ray disks themselves cost a fortune 🙂

Gaming and everything else. The new Xbox One S is 4K-capable. For $300, this is really your all-in-one box: you get Xbox games, a UHD Blu-ray player, and the Netflix and Amazon streaming clients. Also, there is a new option: you can get 4K streaming games via GeForce Now. This costs $8/month, but it is way cheaper than buying and supporting a $1,500 gaming machine. It lets you do 4K gaming (I don’t know about ping times).

Streaming only. If you just want streaming, then the choices are the nVidia Shield, Roku 4 and Amazon Fire TV. The best is the nVidia Shield, the only box with 4K HDR content for both Netflix and Amazon, at $200, while the others are just streaming boxes at $89-99. As an aside, do not get the $300 nVidia Shield with the 500GB drive; the boxes have USB ports, so instead go all solid state and get a USB-to-SATA converter (if you care about reliability) or a USB key.

Sports (Upscaling is it)

If you are talking sports in 4K, then you are talking limited and special events for right now. Basically certain marquee events are being done this way, but it isn’t a regular thing at all. That’s one reason you really want a great upscaler in your TV. The big reason for the slow rollout is that sports needs 4Kp60 (60 frames per second), so that is way more bit rate (50Mbps) than you need for HD. It also needs HDMI 2.0, not the older 1.4 set-top boxes, so lots has to change:

  1. DirecTV. You need a new set top box and a premium package, but they will broadcast MLB highlight games this coming summer.
  2. Time Warner. They do not seem to have any 4K plans that I could find.
  3. Comcast. They seem to have stalled out although Samsung 2014 or later TVs have a small selection of content from a dedicated Xfinity application.


Google Fi Group Plan single Pause glitch

OK, we’ve been using Google Fi for a while when we are away traveling, because they have the wonderful T-Mobile feature where they do not charge for roaming.

However, beware if you try to use their new group plan. We put two numbers on a group plan, thus saving $5 on the second line (instead of $20 per line, the 2nd line and above are $15).

But this leads to quite a few strange behaviors:

  1. When you try to resume the service, you get an email saying you are resumed, but the website and the Project Fi application say you are paused.
  2. The phone itself says the SIM fails and directs you to the Google customer support, so it seems as if it doesn’t actually come out of pause mode.
  3. When you go to the individual phones on the plan, they say they are active (even though the main screen says they are not). But you cannot pause the service there; it fails.

I’m guessing that I’m on the cutting edge in wanting multiple phones and pausing them individually. I do have to say the application itself has some cool features:

  1. When you start a chat on the web from your phone and then log on from your computer, the chat session follows you from the phone to the computer. Wow, that is pretty cool.
  2. Also, verification of the phone and of who you are is handled in-band, with a popup on the phone asking: are you really you? That’s pretty cool.
  3. Another way it handles validation is with a “secret code” on the website; if you have logged in on the phone, it works.

The fix by the way is to back out of the group plan. You have to do this carefully:

  1. Do NOT remove a “member” from the main console. This will cause you to lose that phone number!!!
  2. Instead, go to the member phone and say you want to leave the plan. This reestablishes that phone on an individual plan.
  3. In about one minute all the phones will start to work normally and each phone correctly handles the pause/resume.
  4. You will get a strange artifact in that the “minimum” data on the owner phone is set to, say, 2GB if you have two phones. That minimum falls back to 1GB on the next billing cycle.

In any event, this is a reminder to me of how hard billing and provisioning software really is. I’m glad I could get back without having to put in new SIMs; that is, by the way, what the Google chat guy told me I would need. But that’s not true.

Storing high resolution photos: back up on Amazon, share on iCloud

Well Facebook sharing is cool but limited in resolution. What if you want to share the high resolution images?


Amazon Prime now offers unlimited high resolution photo storage for five family members. It’s sort of free if you have Prime anyway.

Two other sort-of-good choices: Apple iCloud (low resolution only, but essentially unlimited) and Yahoo’s Flickr (up to 1TB, for as long as it lasts).

iCloud photo sharing is free and unlimited, but only at limited resolution.

Apple doesn’t advertise it, but you can store original high resolution photos up there with some sort of bizarre limits. The biggest limits are 100 contributors per shared album, only 100 albums per login, and images down-rezed to 4Mp.

Neither of these is too severe, because you can, and probably should, create a dedicated login for sharing from, say, a group (e.g., …) if you have more than 100 albums (that’s a lot, by the way).

For non-Apple users, you can use web link sharing, which allows people to see the photos if they pass a link around. That’s a bit of a security risk, but let’s presume for now that this is for public sharing and small-group interchange.

Flickr gives you 1TB free

The second recommendation is Flickr. It has a 1TB limit per user ID and stores full resolution. It is owned by Yahoo, however, so it is not clear how long that will last.

The biggest problem with Flickr is that they really want you to have a Yahoo login to use and comment on the site, although you can of course pass the links around.

Black Friday is all about 70 inch TVs and 40 inch monitors!

Well, the price of UHD (aka 4K) has just collapsed. Three years ago a 4K monitor cost $5K; now a basic one is more like $500. But if you want a monstrous 70″ one, what’s a person to do? There are sites with really helpful detailed reviews and buying guides. They even tell you the best time of year to buy TVs: basically during Black Friday and in the spring when the model is discontinued, with Black Friday being great because you get 6 months more modern technology.

In terms of things to watch out for:

  • 4K video is, of course, mandatory
  • motion blur if you watch sports (look for a 120 Hz panel)
  • local dimming (for movies, so black is really black)
  • true 4:4:4 chroma (for the same reason of faithful color)
  • 10-bit panel, which means more colors than the mid-range 8-bit panels
  • the nerd features: HDR (high dynamic range) and the Rec. 2020 wide color gamut, which is coming and means even more colors (so everything is really punchy)

9to5Toys and others have great listings of the sales going on, and the most interesting is the 70″ Vizio E3 (wow!) going for $1K. It is always hard to tell the specs of the various models, but there is a great decoder ring for Vizio.

Best for the casual viewer who wants size: Samsung 70″ KU6300 at $1300

They like this one at $1300 from Amazon. It has a 7.5/10 rating, with the low points being no local dimming (to get truer blacks) and pretty poor motion blur, so it’s not good for sports. It’s a good value choice if you want size and decent quality. It is 4:4:4 with a true 10-bit panel, but it is not wide color.

Best for a budget conscious videophile: Samsung 65″ KS8000 at $1500

Sometimes if you move up a little bit you get more of the cool features. This one rates 8.3/10. It is about $200 more than the KU6300, but look, you are going to own this thing for years: $1500 at Amazon. The downsides to this television are that the local dimming is just OK and there is fall-off at wide viewing angles. Compared with its KU6300 brother, motion blur is nonexistent because it is a true 120 hertz panel.

Great value for sports in a home theater: Vizio M series 70″ at $1700

The mid-range M series is a really good quality TV, rating 7.8/10. Its local dimming (movies) and motion blur (sports) work really well, but it is not a bright TV, so you want it in a basement or home theater, and it isn’t true HDR (so true movie lovers will not like it as much).

Not such a great deal: Vizio E series 70″ at $1000

E Series ($1K for 70″, which is $300 off). There is a 1080p/HD version you probably do not want, but the 4K version is an entry-level system. The main problem is that the local dimming doesn’t work super well and the motion blur is not very good, so it’s not great for sports. Also, for nerds, it is not high dynamic range and doesn’t actually display all the colors; it is a 4:2:2 set, where 4:4:4 is the best.

And if money is no object: Vizio P Series 75″ at $3600 or the 75″ Samsung KS9000 at $4000

OK, this is a really nice television, and if you are at this level I’m not sure why you wouldn’t get the very best Samsung for $400 more. Its main issue is that local dimming doesn’t work super well (particularly compared with the Vizio P series), but it has wider color (more punchy for movies), so it’s a little bit of a toss-up.

The Vizio P series scores 8.1/10, but that is only because the sound isn’t great (who cares on a TV this big, you will have a home theater system) and neither are the smart TV features (again, not super important). On the all-important picture quality, its motion blur, local dimming, and 10-bit panel all work. And it is nearly 4:4:4 on output. $3600 at Best Buy.

Kickass Computer Monitor: 43″ Sony X800D for $600 or 49″ for $640 or the 40″ Samsung KU6300 for $400

We had been buying the Philips 40″ monitors, but they stopped shipping them. Now it is clear why: with regular televisions providing 4:4:4 at 60 hertz, there is no need for a special monitor. So you can either get the really amazing LG 27″ 4K at $600 (which doesn’t really show off 4K, by the way) or go to 32″ or even 40″. At 40″, a monitor works really differently; it is much better to have static panes when you are running a development system.

But some good ones according to are:

Sony X800D 43″, $600 at Amazon. This is a nice 43″ monitor but isn’t good in bright light, which could be a problem in an office; most of the time you want a light-controlled room anyway. This is a VA panel, so the viewing angles are not much wider than the cheap TN panels used on low-end 4K displays. The monitor also calibrates very well for photo and video editing. It is also wide gamut, although not HDR Rec. 2020. It is a native 60 Hz display and supports true 4:4:4 chroma sampling, so it works well as a computer monitor. You need to make sure you have a modern HDMI output to drive the thing at 4Kp60.

And for $50 more at Amazon, you can upgrade to a 49″ desktop monitor; now that would be an amazing computer monitor.

Finally, as we previously noted, the KU6300 is a nice choice for a monitor and hard to beat at $400 for a 40″.

MacOS Sierra and Adobe Creative Cloud

OK, this is a little annoying, but the standalone (perpetual license) versions of Adobe Creative Cloud applications like Photoshop do not install properly on macOS Sierra; you have to navigate to Installer/Contents/MacOS/install and run the command-line application manually.
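As a sketch of that workaround, here is a small Python helper; the volume and app name in the example are assumptions, so point it at your actual downloaded installer.

```python
import os
import subprocess

def installer_binary(app_path):
    # Per the workaround above, the command-line installer lives at
    # <Installer>/Contents/MacOS/install inside the installer app bundle.
    return os.path.join(app_path, "Contents", "MacOS", "install")

def run_installer(app_path):
    # Launch the embedded command-line installer directly, since the
    # double-click GUI installer fails on macOS Sierra.
    subprocess.run([installer_binary(app_path)], check=True)

# Example (the volume and app name are hypothetical):
# run_installer("/Volumes/Photoshop/Installer.app")
```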

This seems true for Creative Cloud 2015 and 2014.

Decoder ring… one connector to rule them all: USB/C

Well, with Apple making a huge statement by moving to a single connector for everything, it’s time to look back. So here, in chronological order, is a short, incomplete history of computer cabling and some explanation of how we got here.


The future is that all peripherals, monitors, external disks, and network connectors will fit into a single physical connector called USB/C. It is the highway on which everything will eventually travel. In the meantime, you dongle your way to the future with existing hardware. One confusion is that USB is actually a family of connector standards (USB/A, USB/B, mini-USB, micro-USB, and USB/C) and a family of protocols that run on top (USB 1, 1.1, 2, 3, 3.1). So you can have a USB/A connector that supports USB 3.1, and a USB/C connector that supports USB 3.1, Thunderbolt 3, and DisplayPort 1.3. Confused yet?
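To make the connector-vs-protocol distinction concrete, here is a toy decoder ring in Python; the table is my own simplified summary, not an official compatibility matrix.

```python
# Simplified, illustrative mapping of connector shapes to the protocols
# commonly run over them (not exhaustive; e.g., micro-USB variants exist
# that carry USB 3 as well).
capabilities = {
    "USB/A": ["USB 1.1", "USB 2", "USB 3", "USB 3.1"],
    "USB/B": ["USB 1.1", "USB 2"],
    "micro-USB": ["USB 2"],
    "USB/C": ["USB 2", "USB 3.1", "Thunderbolt 3", "DisplayPort 1.2"],
}

def can_carry(connector, protocol):
    # The connector shape alone never guarantees a protocol; a given
    # cable or port supports only a subset of what the shape allows.
    return protocol in capabilities.get(connector, [])

print(can_carry("USB/C", "Thunderbolt 3"))  # True
print(can_carry("USB/A", "Thunderbolt 3"))  # False
```

The point is that you always have to check both the shape and the spec of a cable, which is exactly the trap the cable list below warns about.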

On that one hardware highway, there will be different protocols. Not unlike the way the Internet works, there is one connection, but web pages, video, email, etc. all use different exchange protocols. Sort of the way one road can have many different kinds of cars and trucks tuned for different uses. Those protocols are going to vary quite a lot based on low cost (USB 2, 3 and 3.1) and performance for specific purposes (Thunderbolt 3 for external disks and graphics cards vs. Displayport 1.2 and soon 1.3 for monitors).

The big shoe to drop will be DisplayPort 1.3 which will allow 5K and 8K video output and wide dynamic range on the same USB/C connector.

So for example even with USB/C you need a collection of cables:

  • USB/C to USB/A supporting USB 3, by Nona ($11) or AUKEY ($3.50 each). Maybe the simplest example: this converts a USB/C port for use with USB/A cables. It is limited to the 5Gbps of USB 3.0, though.
  • USB/C to USB/C supporting Thunderbolt 2 (20Gbps), from Cable Matters. This cable supports 20Gbps using Thunderbolt. At $22, it costs much more than the USB 3 cables.
  • USB/C to USB/C supporting Thunderbolt 3 (40Gbps), from Startech and Cable Matters. The same cost as the Thunderbolt 2 cable, so be careful!
  • USB/C to DisplayPort supporting DisplayPort 1.2, from Cable Matters. For $20 you get a cable that carries DisplayPort video up to 4K at 60 hertz. In contrast, the Apple version only works to 30 hertz because it is DisplayPort 1.1, and it costs $49!

The IBM PC (c. 1980)

In a brief history of time, the original IBM PC had a different hardware connector and a specific protocol for every peripheral. Computers were slow back then, and as today, cost vs. performance was a big driving factor. The slower peripherals used cheaper connectors. Also, back then, size wasn’t as much of an issue and electronics dominated costs, so having a bunch of connectors wasn’t a big deal. So on the back of the original IBM PC you would see:

  • Keyboard and Mice. These were the slowest peripherals and used a slow serial connection on a physical connector called DB-9 (9 pins, right?). IBM later migrated the same nine pins to the cleverly named PS/2 connector with the IBM PS/2, which was the cool round thing.
  • Joystick. This was an analog input for joysticks
  • Video. Back then video was analog; you basically fed the monitor the actual RGB values over a CGA, EGA, and then VGA connector, which had 15 pins and was easy to break, by the way 🙂 Even today many monitors still have this as the fallback connector.
  • Printers. In the day, when you connected a printer or even a terminal to a computer, you typically had either a parallel connector called a Centronics interface or a serial connection with a big DB-25 connector using RS-232.
  • Modems. These were also serial devices using RS-232 and usually a DB-9 connector connecting to the phone systems with RJ-11 jacks.
  • Floppy disk. These also had a dedicated connector.
  • Networking. Ethernet was a huge coax cable, so that’s what you got on the few machines with any kind of networking.
  • Internal disks. OK not really part of the cable story, but part of the larger unification is that disks also had their own protocol and connectors called ATA and then IDE. The big disks used something more expensive called SCSI.
  • Internal cards. These used the IBM bus standard and were completely different from the outside world. They were high speed parallel connectors either 8-bit or 16-bit.
  • Power. All of these peripherals were separately powered with their own connectors.

The main point is that we started with a very diverse collection of hardware and protocols tuned for very different uses: a different wire for each kind of peripheral.

USB Convergence (1990s)

The first big change for cabling came with the Universal Serial Bus. Technology moved forward, and now instead of a dedicated controller board for each peripheral, a single chip could handle it; as costs plummeted, peripherals started to merge at the low end. At the high end, the driver was still performance, so the dedicated cables and physical connectors kept changing.

However, as USB moved forward, the connectors changed significantly, from USB A to B to mini-USB to micro-USB. So you ended up with a huge number of different cables all supporting some version of the USB protocol, although the faster peripherals continued to use dedicated physical connectors and protocols. The big change was the move to serial protocols: at higher data rates, the skew on parallel connectors was a problem, and electronics got cheap enough to handle the additional processing needed for serial connections.

  • Keyboard, Mice, Joystick, Printers, Modems, Floppy Disks. USB 1.1 was the version that really took off, providing 1.5Mbps and 12Mbps high speed in 1996. This led to the so-called legacy-free PCs with much simplified systems. It was a huge simplification of the back of a PC, although the connectors were a mess: there was USB/A on the back of most PCs, but the peripherals themselves had a wide range of connectors, from USB-B to mini-USB, 4-pin USB, and finally micro-USB. This was the first connector that began supplying power as well, at 500mA up to 1 amp.
  • Firewire/400 and 800 external drives or eSATA. Apple of course had their own way of doing things, using Firewire instead for things like external disks. eSATA (external SATA) was the equivalent PC standard, but these too were niche products.
  • DVI and HDMI Video. At this point, video also moved into the digital world as the controllers in CRTs and then flat panels could process digital signals. Still, the computer world (DVI) and the home electronics world (HDMI) were not quite converged, and both used big thick cables that were quite different.
  • SATA internal drives. The disk drives also moved to a serial model, with eSATA as a variant for connecting external drives.
  • PCI bus. The internal bus of PCs became PCI; more on this later, but the emergence of PCI was the start of convergence between the internal and external worlds.
  • Ethernet. The world moved to twisted pair ethernet and RJ-45 connectors for 100Mbps fast ethernet.
  • Power. Nearly all the “real devices” used a wall wart or a higher-power source.

USB/A and USB 3 Rule them all (2000s)

As the beat went on, USB went from being a slow bus for peripherals to having big Intel support and very high speed. The new USB 2.0 ran at 480Mbps, and then USB 3 at 5Gbps effectively killed off all but the fastest dedicated connections, handling even external disks for the first time with speeds of 20MBps possible. So this PC would have:

  • Peripherals and external hard drives. USB/A connector to any of a large number of USB physical connectors. Most of these could be self-powered at 5 to 10 watts (1-2 amps at 5V).
  • USB keys and hard drives. These were a new form of peripheral, as SSDs got cheap enough and USB 3 was fast enough to support them.
  • Dual DVI and HDMI with most displays working fine at 1080p 120 hertz with DVI, but not with HDMI.
  • PCI Express. This was Intel’s big move to a fast serial bus internally.
  • SATA internal drives ruled, although SAS (following SCSI) was used for enterprise systems.
  • Ethernet. RJ-45 continued to rule as speeds moved to 1Gbps.

Internal and external merge (2010s)

In our current decade, as processing got faster, the core PCI Express protocol could now work externally. This was a huge change in technology, as having a single protocol across internal and external was a great simplification. At the same time, as bus speeds moved to 40Gbps (Thunderbolt 3), even the highest-demand peripherals like video and disk could use a single connector. As a result, the latest MacBook Pro could get away with a single external connector and a single internal protocol (PCI Express).

The biggest confusion is that cables can look alike (they both have USB/C connectors), but they are spec’ed to carry different protocols. So you will have cheap USB/C cables that only support USB 3.1.

  • All external peripherals. USB/C connector to USB 3.1 at 10Gbps. This protocol is tuned for loosely coupled devices so there are some specialized protocols for specific purposes (disk and video).
  • Self-powered, even for laptops, via USB/C power. Another big change is that power can be supplied at up to 100 watts, so most laptops can be powered from a USB/C connector.
  • External disks and graphics use Thunderbolt 3 on USB/C. For the first time, high speed internal components like the disk and even the graphics card can move outboard. Thunderbolt 1, 2, and 3 provide 10, 20, and 40Gbps; they are really PCI Express 1x, 2x, and 4x exposed externally. A conventional SATA disk tops out at 600MBps (basically Thunderbolt 1 territory), while the fastest SSDs barely saturate a Thunderbolt 3 connection. Thunderbolt, unlike USB 3.1, is designed for these fast peripherals, so be careful that you are buying a USB/C Thunderbolt 3 cable when you connect them.
  • Monitors use DisplayPort Alt Mode with DisplayPort 1.2 on USB/C. While there remain some transitional monitors using mini-DisplayPort or DisplayPort connectors, these were only used for a short time; the new USB/C connector can carry digital video to monitors efficiently. Monitors in this timeframe had a huge number of transitional connectors: the DisplayPort family started with DisplayPort and mini-DisplayPort connectors and protocols called DisplayPort 1.1, 1.2, and 1.3 (each with higher bandwidth) before everything moved to USB/C connectors. Make sure you get a USB/C cable with DisplayPort 1.2 support, which is part of the Thunderbolt 3 spec. Confused yet? The monitor world has had a huge transition because 4K UHD really blows out the bandwidth requirements. Getting to 4K at over 60 hertz plus wide dynamic range requires moving to DisplayPort 1.3, which isn’t yet in the spec.
  • Disks use PCI Express and m.2. While SATA lives on as a legacy connection, the world moved to SSDs and they use a PCI Express protocol with a new connector called m.2. The m.2 is just a smaller version of the PCI Express slot and comes in 1x, 2x and 4x versions.
  • Graphics cards use PCI Express. As before, graphics cards need 16x lanes, and PCI Express is the only way to provide them. External graphics cards only get 4x lanes over Thunderbolt 3, so it will be interesting to see how they do.
  • Ethernet using a USB/C to RJ-45 converter cable.
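The bandwidth pressure driving the move to DisplayPort 1.3 is easy to see with back-of-envelope arithmetic. The sketch below counts raw pixel data only, ignoring blanking intervals and link-encoding overhead, so real requirements run somewhat higher; the payload figures are the commonly quoted DisplayPort 1.2 (HBR2) and 1.3 (HBR3) data rates.

```python
def video_gbps(width, height, hz, bits_per_channel):
    # Raw RGB pixel data rate in gigabits per second (3 color channels).
    return width * height * hz * bits_per_channel * 3 / 1e9

DP12_PAYLOAD = 17.28  # Gbps, DisplayPort 1.2 (HBR2) after encoding
DP13_PAYLOAD = 25.92  # Gbps, DisplayPort 1.3 (HBR3) after encoding

print(round(video_gbps(3840, 2160, 60, 8), 1))   # 4K60 8-bit: fits DP 1.2
print(round(video_gbps(3840, 2160, 60, 10), 1))  # 4K60 10-bit: just fits DP 1.2
print(round(video_gbps(5120, 2880, 60, 8), 1))   # 5K60 8-bit: needs DP 1.3
```

The point: 4K at 60 hertz already presses against DisplayPort 1.2, and anything beyond it (deeper color, higher refresh, 5K) is exactly why DisplayPort 1.3 matters.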


Restarting Amazon Affiliate links

OK, for the last year I’ve been terrible at Amazon affiliate links. First I missed the deadline to update my website information (there is some regulation about sites), so my old affiliate IDs (tongfamily-20) were invalidated and I had to start again with new IDs.

But how do you change 20 years of links? Well, at first I thought I would just search for all the Amazon links and change them, but it turns out there are WordPress add-ons that help. So off I went to find:

  • Amazon Link. This basically gives you a new markup so you can add `[amazon asin=somenumber&text=sometext]`. It does require that you change everything, but it is useful for new links because it generates them at runtime, which is probably what you want.
  • Amazon Affiliate Tag (aka Amazonify). This is smarter and works across your whole site to do a one-time change of affiliate links. You just tell it your tag, and then you can set nofollow so search engines do not follow your affiliate links. You can also set target=_blank so that Amazon links open in separate tabs and your website stays around.
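For the curious, the bulk rewrite such a plugin performs amounts to a regex replace over post bodies. Here is a hypothetical Python sketch; the URL pattern and the new tag name are illustrative, and a real plugin would also handle shortened and international Amazon URLs.

```python
import re

def retag(html, new_tag):
    # Replace the tag= query parameter on any amazon.com link with the
    # new affiliate ID, leaving the rest of the URL untouched.
    return re.sub(
        r'(https?://(?:www\.)?amazon\.com/[^"\s]*?[?&]tag=)[\w-]+',
        r'\g<1>' + new_tag,
        html,
    )

post = '<a href="https://www.amazon.com/dp/B000EXAMPLE?tag=tongfamily-20">TV</a>'
print(retag(post, "newtag-20"))
```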