Meraki Is Now In The “F’ACK” Game

I live in Colorado, and for the past couple of years, fracking has been a HUGE issue, both physically and politically.  I don't really understand it, but that's OK, I'm not going to talk about that in this post.  It's a rabbit hole I don't want to go down.  I bring this up because during Mobility Field Day 3, #MFD3, Meraki introduced "FAST-ACK" and the first thing I thought of, both in name and potential impact, was fracking.

For those of you that don't know what fracking is or didn't click on the link earlier, fracking is the process of injecting high-pressure liquid into rock deep underground to release natural gas trapped in tiny pockets that make traditional drilling financially unfeasible.  It's easy to see both the pros and cons of this (pro = access to natural resources, con = "you are doing what?!?!") but nonetheless, it's a thing and it is happening.  As I sat in the Meraki presentation during #MFD3 this immediately sprang to mind.

Now that I have covered natural gas extraction using fracking, let me get to the technology of "FAST-ACK," which was quickly reduced to any number of shorter versions, but other than my witty title we are going to stick with FAST-ACK.  FAST-ACK is a patented technology that Meraki introduced at #MFD3 as a way to speed up your wireless network.  To really understand it, you need to remember that in a traditional TCP connection there is an ACK for received data, and that has always been there.  In wireless, there is a Layer 2 ACK to most frames, as wireless professionals are well aware, but there is still a TCP ACK that happens; not to a received frame, but from the client to the remote end to acknowledge that it received the TCP payload.  Over the air, that TCP ACK is not flagged as an ACK; it's a normal data frame.  You would need to dig into the payload to figure out that it's a TCP ACK.
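
If you want to see this for yourself, a capture tool will show it.  Here is a minimal sketch (assuming Python with scapy installed and an already-decrypted capture; the filename is just a placeholder) that flags frames whose TCP header has the ACK bit set:

```python
# Rough sketch: finding TCP ACKs hiding inside ordinary data frames in a capture.
# Assumes scapy is installed and "capture.pcap" is an already-decrypted capture;
# the filename is only a placeholder for this example.
from scapy.all import rdpcap, TCP

for pkt in rdpcap("capture.pcap"):
    if pkt.haslayer(TCP) and int(pkt[TCP].flags) & 0x10:  # 0x10 is the TCP ACK flag bit
        print(f"TCP ACK: seq={pkt[TCP].seq} ack={pkt[TCP].ack}")
```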

Meraki's approach to this is quite simple from a high level.  In order to speed up the delivery of content from a remote device (like a Netflix server), the AP will proxy the TCP ACK from the wireless client to the remote end based on the fact that it received a Layer 2 ACK from the wireless client.  While the wireless client is still processing the received payload so it can send its own TCP ACK, the remote end has already received that proxied TCP ACK and is queuing up the next batch of packets to send to the client.  This means that by the time the wireless client is ready to receive the next batch of traffic, that traffic is already cached in the AP, waiting to be sent.  Over time, those savings add up and get the client the movie they want to download FASTER than the traditional manner.  According to Meraki, TCP FAST-ACK offers up to a 38% improvement in throughput!  If you know me, I am a huge fan of getting people their content faster, so now you have my attention!
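
To get a feel for why proxying the ACK helps, remember the old rule of thumb that steady-state TCP throughput is roughly bounded by window size divided by round-trip time.  FAST-ACK, as described, shrinks the round trip the server sees.  A back-of-the-envelope sketch (all numbers below are made up for illustration; they are not Meraki's):

```python
# Back-of-the-envelope look at why an AP-proxied TCP ACK can raise throughput.
# All numbers are invented for illustration only.

def tcp_throughput_mbps(window_bytes: int, rtt_ms: float) -> float:
    """Rule of thumb: steady-state TCP throughput is bounded by window / RTT."""
    return (window_bytes * 8) / (rtt_ms / 1000) / 1_000_000

window = 64 * 1024   # assume a 64 KB receive window
rtt_normal = 40      # ms: server <-> client, waiting on the client's own TCP ACK
rtt_proxied = 30     # ms: server <-> AP, ACK sent as soon as the Layer 2 ACK arrives

print(f"Client-generated ACKs: {tcp_throughput_mbps(window, rtt_normal):.1f} Mbps")
print(f"AP-proxied ACKs:       {tcp_throughput_mbps(window, rtt_proxied):.1f} Mbps")
```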

Now, if you are anything like the group that was in the room when all this was explained, there are some immediate questions that spring to mind, as well as additional questions that arise the more you think about it, just like with fracking in the oil and gas business.  One of the first questions, and this was covered during the presentation, was what about roaming?  Meraki has thought about that: the AP that processes the original TCP frames will cache the next batch until the client is ready to receive them.  If the client roams, the AP will transfer that TCP data to the next AP so that the new AP is ready to send it the moment the client is ready.  So that is covered, no problem.  My lingering question is what this does to the price of an AP once it is discovered that the AP needs additional storage to cache more and more data in certain venues, like a Large Public Venue (LPV).  In a highly mobile environment with clients downloading a lot AND moving around a lot, I could see there being problems.  I'm afraid that only time will tell on that.  Maybe this means that in certain environments, FAST-ACK shouldn't be turned on.  Does that limit the market for Meraki?  Again, only time will tell.

After the presentation, as the delegates were packing up and talking, the majority of the conversation was focused on FAST-ACK and the potential ramifications of this new technology.  Just like with any wireless-centric conversation, there were multiple opinions, and just like normal, none of them were wrong, per se, they were just different.  What happens if the client doesn't send the TCP ACK to the remote end but asks for that payload again?  Does the AP send the next batch and then wait for the repeated batch?  All of this TCP payload carries sequence numbers, so even if it is received out of order the client can put it back together; that's one of the reasons those numbers are there to begin with.  If that happens, what is the trickle-down effect?  What else suffers?  Is that throughput improvement worth the risk?  Again, only time will tell, because I don't think we know what that risk is in the wild.  Too many times I have been burned by technology that works great in a lab setting or in an environment that was hand-picked for testing, only to find out that in the wild, it's not really worth the time and investment.  I'm not saying that is what is going to happen with FAST-ACK, I'm just being cautiously optimistic for the time being.
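
For what it's worth, those sequence numbers are exactly what lets the receiver straighten things out.  A toy sketch, nothing Meraki-specific, with invented segments:

```python
# Toy illustration of how TCP sequence numbers let a receiver reorder segments.
# Segment boundaries and payloads are invented for the example.
segments = [
    (1016, b"world!"),     # (starting sequence number, payload)
    (1000, b"Hello, "),
    (1007, b"wireless "),
]

# Sort by sequence number and stitch the byte stream back together.
reassembled = b"".join(payload for seq, payload in sorted(segments))
print(reassembled.decode())  # Hello, wireless world!
```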

To watch the full Meraki presentation, go here and see for yourself.  To really understand the impression that FAST-ACK made on me, notice that the FAST-ACK portion doesn't start until the 15 minute mark.  Before that there was a conversation about external antennas for ALL the radios in an AP – client serving, scanning and BLE.  If you know me at all, you know that I love antennas, so the fact that my first post about the Meraki presentation isn't about them addressing one of my pet peeves, but instead about a mechanism to speed up the TCP flow, should tell you something.

I still think that only time will tell on the real impact of Meraki's FAST-ACK, so if you like to be on the bleeding edge of technology, jump in and tell me how the water really is after you have done a couple of laps.  I am really interested in how this plays out.

In the meantime, I think I need to re-think the name of my blog.  The fact that I typed an entire blog post centered around TCP ACKs might mean that maybe I DO know squat about networking.

Damn.

Why the Wi-Fi Alliance Numbering Scheme “Matters”

So yesterday, the 3rd of October, 2018, the Wi-Fi Alliance announced a new numbering scheme to define their certifications instead of the traditional 802.11a/b/g/n/ac/ax terminology everyone has used for the past 20 years.  Had you given me a heads up, I could have predicted how the Wi-Fi professionals I associate with would respond, and I wasn't disappointed.  In grand fashion, the majority of them lambasted the Wi-Fi Alliance for coming up with such a needless and pointless "thing" and questioned why they were wasting their time on it when they could be doing things Wi-Fi professionals have been begging for for years.  If you haven't seen the actual release, you can read about it here.

As the day wore on, there were some voices in the crowd that started to question the outrage and jokes and general negative comments that came out.  Questions like "why do you care, this isn't intended for you" and "why can't you just deal with this?"  After a less restful sleep than I would have hoped for, I want to tackle those questions today with an argument I have been using for years.

What you say actually matters!

Here is my argument, and I use it on people who understand networking much more than wireless, because that's who I deal with on a day-to-day basis.  When someone plugs a network cable into a network switch and a device, generally it will auto-negotiate the connection speed.  The list of speeds it can negotiate is pretty small.  The traditional speed chart you see on a switch looks something like this:

10/100/1000BaseTX

On my newer “M-Gig” switches, it actually looks like this:

100/1000/2.5G/5G/10GBaseTX

Now, counting up the different numbers we see, we can safely say that these devices will negotiate a connection speed of one of six possibilities.  Somewhere between 10BaseTX and 10GBaseTX, but that rate has only SIX possible answers.  Guess what, there is an IEEE standard for each of those, but only the geekiest among us know what they are (802.3ab for 1000BASE-T if you care; I had to Google it).  I bet if I asked some of the route/switch CCIEs in my office they could tell you what each standard was (maybe, I didn't ask) because that is their business.

Since no one in the general public buys network switches, no one cares what 802.3ab is.  There isn’t some alliance out there trying to promote some “certification” using 802.whatever to sell stuff.

Wireless is different.

I mean it's somewhat different, but different enough that it really, REALLY matters.  Every year there are launch parties for new consumer devices that tout the latest and greatest device and how it is so much better than its competitors.  Up until yesterday, the Wi-Fi part of those new devices was relegated to being described as 802.11ac Wave 2, or 802.11ax, or "802.11an" (not a real thing; it's supposed to mean 802.11n that is 5 GHz capable.  See the confusion?)  Now, thanks to the Wi-Fi Alliance, all we have to say is "Wi-Fi 5" or "Wi-Fi 6" to describe the capabilities.  From my mom's point of view, that's easy for her to understand.  "6 is better than 5, so I go with 6."
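
For the record, the new names are just labels over the amendments we already know.  A quick sketch of the lookup (the mapping itself is the Wi-Fi Alliance's; the code is only illustrative):

```python
# Wi-Fi Alliance generational names mapped to the 802.11 amendments underneath them.
WIFI_GENERATIONS = {
    "802.11n":  "Wi-Fi 4",
    "802.11ac": "Wi-Fi 5",
    "802.11ax": "Wi-Fi 6",
}

def marketing_name(amendment: str) -> str:
    """Return the consumer-facing name, or the amendment itself if it never got one."""
    return WIFI_GENERATIONS.get(amendment, amendment)

print(marketing_name("802.11ax"))  # Wi-Fi 6
print(marketing_name("802.11g"))   # 802.11g -- the older amendments keep their alphabet soup
```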

One slight problem with that.  Remember the 802.3ab comment from two paragraphs ago?  How wired switching only negotiates at 6 different speeds (more like 3, but I want to give the wired guys as good a shot as possible)?  Ever really looked at the wireless connection speed negotiation table?  If you haven't, it's the MCS rate table from the Wireless LAN Professionals Custom Field Notebook, and it is so cool that if I really like you, this is what I will be giving you for Christmas this year.  If you don't want to wait to see if I really like you, go buy one here.  I use mine multiple times a week, much more than I originally thought.

Anyways, to save you the trouble of counting the different negotiated speeds, the table lists 232 possible connection speeds, based on 11 different criteria.  232 for wireless, 6 for wired.  If you are a marketing professional, this is the type of table that serves exactly ZERO purpose for you.  If you are a wireless professional, you use this table almost every day.
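
The reason the table is that big is simple combinatorics: the over-the-air rate depends on several independent knobs.  A rough sketch just multiplying out the 802.11ac (VHT) knobs (it overcounts, since some combinations are invalid, but it shows why 6 wired speeds look quaint):

```python
# Why the wireless rate table dwarfs the wired speed chart: the 802.11ac (VHT)
# data rate depends on several independent knobs. This simply multiplies them out;
# the real table is smaller because some combinations are not valid.
mcs_indices     = range(0, 10)         # MCS 0 through 9
spatial_streams = range(1, 9)          # 1 through 8 spatial streams
channel_widths  = [20, 40, 80, 160]    # MHz
guard_intervals = ["long", "short"]    # 800 ns / 400 ns

combinations = (len(mcs_indices) * len(spatial_streams)
                * len(channel_widths) * len(guard_intervals))
print(f"Upper bound on VHT rate combinations: {combinations}")  # 640
print("Auto-negotiated speeds on my wired switches: 6")
```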

Different responsibilities, different use cases, different requirements.

Marketing people can't sell a chart with 232 possibilities based on 11 different criteria.  They needed something much simpler, and the Wi-Fi Alliance delivered.  No, it wasn't for professionals, but this is why we care, and more importantly, why it matters.

As Wi-Fi professionals we have to talk to the people this is intended for.  We have to answer questions when the executive gets back from their latest conference and asks what it will take to get to "Wi-Fi 6," or better yet, "Wi-Fi 7," because they need to outperform their associates on LinkedIn.  Maybe you work for a VAR and your angst comes from having to answer an RFP that lists these in the criteria, but nothing about security.

Bottom line – professionals like to keep things very detailed and technical because that’s how it works.  It’s not a secret, there are books and classes out there for those inclined to learn all the details.  Just like any other profession, you can learn what all of it means if you put in the time and the effort.

We don't ask building architects or structural engineers to dumb down their profession for the average person on the street; we trust them to do their job correctly.  If not, there are major ramifications.  Of course architects and structural engineers don't need to market what they do to the average person on the street, but Wi-Fi professionals have to, albeit in a roundabout sort of way.

Are the new naming conventions going to go away?  Of course not, they are already out there.  Will someone use them?  Of course they will, it shows how smart they are.  Will Wi-Fi professionals complain about it?  Of course we will, we complain about everything.  Bottom line – let us complain because we know at the end of the day, it’s just one more thing we will get to explain to people about the magic that we do.

Aruba Takes The Lead With WPA3

Let's just call a spade a spade.  And by that I mean that Aruba Networks played their trump card at Mobility Field Day 3 (MFD3), and it was the Ace of Spades.

For three days, delegates of MFD3 listened to vendors talk about A.I. and machine learning and analytics and location and BLE and a little on 802.11ax.  What wasn't talked about much is the new WPA3 standard/certification that we have been waiting on since WPA2 was first introduced back in 2004.  WPA2 is now a surly teenager in human years, but in the world of technology it has been likened to a pensioner heading into retirement.  14 years is a LONG time for anything in technology to remain the standard.  In the 802.11 realm, it's just slightly younger than 802.11g!

Let that sink in for a second.

Then along comes this company called Aruba Networks as the last presenter at MFD3.  They started with the standard quick introduction and then talked about a new product that is intended to be used as a Point-To-Point (PTP) link with dual radios, one at 5 GHz (802.11ac) and the other at 60 GHz (802.11ad), that by itself is cool enough to stand on its own.  You can watch that presentation here if you want to learn more about it (which you should, it's cool).  Then came a quick hit on their 802.11ax stuff.  Again, pretty cool, along with a good slide about dates surrounding 802.11ax.  If it wasn't for what came next, this would be my focus, but the next topic changed everything for me, so their hardware will have to wait for a different post.

The next presenter was a gentleman named Chuck Lukaszewski, and he brought the goods.  Chuck presented on Aruba’s efforts in the realm of security, WPA3, and more importantly, Opportunistic Wireless Encryption (OWE).  OWE was recently changed from a requirement by the Wi-Fi Alliance for WPA3 certification to an optional feature, much to the chagrin of wireless professionals everywhere.  The general consensus was if it’s optional then no vendor is going to put any effort into it because why would they?  Chuck changed all of that with this slide:

Aruba WPA3 Summary

I'm sure that I and others will talk about the other new terms, "SAE" and "Suite B/CNSA," which are still part of the required certifications, but I want to focus on OWE, the optional part that we all wanted but had given up hope on.  If you watch the presentation, it's easy to pick up on how excited we all were to learn not only that this wasn't dead, but that Aruba was actually able to demo it in action, live, and in front of a technical audience.  802.11ax might promise crazy QAM rates (1024-QAM to be precise) along with BSS coloring and OFDMA (allowing clients to utilize LESS than a full channel if they don't need it, LOVE that one by the way), but sometimes the improvements that are needed are not the sexy marketing bullet points that C-suite executives want to see on their hit sheets.  1.21 JiggaBytes Per Second (JBPS, I just made that up) is much cooler on a marketing sheet than "hey, we did something that is cool but you will never be able to tell because it is seamless to you," but let me assure you, it's the one thing that we NEED in the wireless industry.

Everyone knows that you don't use the Wi-Fi in public places because you are going to get hacked by the guy sitting a couple of chairs away and your life will be in shambles.  Guess what OWE solves?  Exactly!  This feature is not meant to authenticate the user, nor account for what they are doing.  That will always be left to the Enterprise version of the WPA2 and now WPA3 standard.  This is meant purely for the guest client/user that you want to allow onto your Wi-Fi while ensuring that no one can sniff their traffic while they are onsite.  This doesn't interrupt captive portals (shudder), since those operate further along the network path, so you can still stop users from accessing the internet; no, this is intended as a feature that shores up the bane of the Wi-Fi world – guest Wi-Fi is insecure due to the open nature of the network.

Aruba OWE Protocol Flow

I call this my “Mom” feature.  My mom uses technology, but she doesn’t understand much more than other mothers of her generation.  She doesn’t understand why having a 4 Way Handshake (seen above) right after the association packets is a good thing, but I don’t need her to know why or understand.  All she needs to know is that if she selects a device that supports OWE from the WPA3 certification and is at a location that supports OWE, she can now have some level of assurance that when she surfs the internet and sees the lock symbol in the upper left, she doesn’t have to call me freaking out.  People not calling me freaking out is a good thing by the way.

So what's next?  Good question.  Start by going and watching Chuck's presentation at Mobility Field Day 3.  Watch the reaction from the delegates at what was presented, and then watch the video again to let what you just saw sink in.  GCMP/CCMP-protected data over the air on a guest Wi-Fi network.  The user only has to select the network, and the protocols then take care of the rest.
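
If you want to see the idea in miniature, the heart of OWE is an unauthenticated Diffie-Hellman exchange carried in the association frames, with the result used as the PMK for the normal 4-way handshake.  Here is a minimal, conceptual sketch using Python's cryptography package; it is a generic ECDH-plus-HKDF flow under my own assumptions, not Aruba's code and not the exact RFC 8110 key schedule:

```python
# Conceptual sketch of the key exchange behind OWE: client and AP each contribute
# an ephemeral public key during association, derive a shared secret, and turn it
# into a PMK that seeds the normal 4-way handshake. Generic ECDH + HKDF shown here;
# the real OWE key derivation (RFC 8110) differs in its details.
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

# Each side generates an ephemeral key pair -- no passphrase, no certificates.
client_priv = ec.generate_private_key(ec.SECP256R1())  # NIST P-256 (DH group 19)
ap_priv = ec.generate_private_key(ec.SECP256R1())

# The public keys ride inside the (re)association request/response.
client_pub = client_priv.public_key()
ap_pub = ap_priv.public_key()

# Both ends arrive at the same shared secret without it ever going over the air.
client_secret = client_priv.exchange(ec.ECDH(), ap_pub)
ap_secret = ap_priv.exchange(ec.ECDH(), client_pub)
assert client_secret == ap_secret

# Stretch the secret into a PMK; the 4-way handshake then derives the CCMP/GCMP
# keys from it, just as it would from a pre-shared key.
pmk = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
           info=b"illustrative OWE-style PMK").derive(client_secret)
print(pmk.hex())
```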

Next, start bothering your infrastructure vendor of choice to find out what they are doing in the realm of OWE.  Is it on their roadmap?  When are they going to be releasing something about OWE?  If it's not something they are working on, why not?  Aruba has taken the lead on getting this into the public space; it's now incumbent on us, especially those of us in the Large Public Venue (LPV) realm, to push ALL vendors to support this.

After the infrastructure vendors, start working on the client side.  Remember, you need a client-side device that can do this, or the ability to add it, to make this work.  Ask Apple, Samsung, Motorola, LG, Dell, HP, and all the others what their roadmap is for supporting this.  Predictions are we will see 802.11ax clients next year and I really hope they have the supplicant-side ability to do this.  If not, as an industry we are missing a HUGE opportunity here, and I for one won't sit idly by and watch this opportunity slip away simply because we don't have to.

I am sure I will harp on this subject more in the future; I think that it's just that important.  When and if I come up with anything new I will make sure that I share it, but for now I want to thank Aruba Networks and their engineers for taking the lead in this effort.

It’s not the new features you thought you wanted, it’s the features that you didn’t think you needed!

**Disclaimer – I have not received any financial compensation or consideration from Aruba Networks for my thoughts here.  These are my thoughts and opinions alone, and Aruba Networks was not forewarned about, and is not responsible for, what you read.**

Mobility Field Day 3 – A Delegate

So I just finished my first Mobility Field Day, #MFD3, put on by Tech Field Day (Gestalt IT).  It was one of the most amazing experiences of my life!

At this point, you are probably thinking "well, here comes the book report on each presentation and how I was 'blown away' by all the vendors that presented, so I will stop reading now."  Sorry to disappoint you, but this is not one of those.  Don't worry, I took many notes, and thanks to the way that Tech Field Day is recorded, there are video recordings of all the presentations, so I can go back, review the tape, write extensive novels on every presentation and bore you that way.  War and Peace wasn't written in one weekend (maybe it was, I'm too tired to check right now) so give me some time.

What I want to discuss is the behind the scenes, the “sausage making” that I was completely unaware of, and the rest of the delegates that joined me in our merry jaunt through Silicon Valley (or “Silicone Valley” as one person called it.)

First up is Gestalt IT (@GestaltIT) and the team that was here on the ground with us.  If you don't know, Gestalt IT is the company formed by Stephen Foskett (@SFoskett) to facilitate getting people together and discussing the topic of the day.  The Gestalt IT team for my first MFD was made up of Tom Hollingsworth (@networkingnerd) and Ben Gage (@BenTGage).  With detailed emails before the event, and then from the moment each delegate touched down at the airport, they had everything under control.  They guided the two new people, me and Scott Lester (@theITrebel), through what we needed to do and wrangled the veterans like the herd of wild cats they are.  I'm pretty sure that without their patience, understanding and just the right amount of snarkiness and sass, none of this would work.  At least not with the crowd I was fortunate enough to join.  Luckily it didn't take Scott and me very long to shake the newness and jump right into the herd of wild cats, ensuring that Tom and Ben had a full complement of crazy, irreverent wireless geeks to wrangle through the rough and tumble streets of technology town and not just 10.

You’re welcome Tom and Ben!

For three days they tried to get us to focus on the task at hand and behave like the adults we have fooled people into thinking we are.  All I can say is, if you watched the live stream and the videos posted on YouTube and Vimeo after the fact, it looked like they did a fantastic job with their task; that's because they did.  Off camera we lived up to the wild cat moniker.  Me personally, I loved it.  More than one time the group had me laughing so hard I cried.  To reveal a secret, even Tom Hollingsworth joined the wild cat pack at times.  Luckily Ben Gage was there to keep us in line.  Ben's a musician, so I find it funny that the musician was the adult of the group.  Tell me how many times you get to say that!  Seriously, Ben deserves an award after our group!

Now for the delegates.

You can go read about them on the site but I want to round this up by filling you in on a few surprises I found out by hanging with the rest of the delegates.

  1. Robert Boardman – I've known Rob as the quirky part of the Wi-Fi of Everything duo of him and Rowell Dionicio (I have a man crush on Rowell, and he was here as a delegate as well, but this is about Rob.)  In one of the bigger surprises for me this week, Rob is actually really smart!  My perception of him completely changed when the camera went live and the heat was on.  His questions and understanding of the vendors and the technology were impressive, and I have a newfound respect for him.  Don't get me wrong, he still delivered comic relief while the camera was on, and he can be an even bigger dork off camera, but my whole perception of Rob has changed for the better because of this.
  2. Jennifer Huber – Jennifer worked a project for us one time a while ago, and I always thought of her as a proper wireless expert and very knowledgeable.  I'm sorry Jennifer, but I might even say a little "boring."  She teaches yoga and eats healthy, and I really thought we wouldn't get along much.  Boy was I wrong!  First day we sat next to each other through the Apple product launch, and she can turn on the angry woman in a flash!  The words I heard coming from the professional sitting to my left were impressive!  By the end of the Apple product launch I realized that I could really dig sitting next to her for the next three days and listening to her go off.  Major props Jennifer!
  3. Keith Parsons – Everyone knows Keith, and everyone loves Keith.  I have written many blog posts that talk about Keith and his role in my wireless journey, but this isn’t about that.  I sat next to Keith a couple of times and I don’t know why this surprises me, but this is what I learned about Keith.  That man can multi-task!  I would look over and he would be doing something on his computer that made me think he wasn’t paying attention and then BANG!  Keith would be asking tough questions about the presentation and taking their experts to task on what they said, and never letting them hide.  One more thing to add to his myriad of skills and why it’s good to be friends with him.
  4. Lee Badman – Lee, of #WIFIQ fame, is the crotchety Wi-Fi man you think of when you think of the hero of the sufferers of bug-infested wireless code.  What I didn't realize about Lee until this week is that the man has a dry sense of humor that I find very appealing.  That guy, if you give him the time, has some of the funniest ideas that I heard all week.  I don't want to give them away, but I really hope he follows through with the idea he presented during lunch on the last day.  Lee, please help out the community and move that idea to the top of the list.  It's what the community needs, even if they don't know it.

As for the other delegates, please don't get me wrong, they were all the rock stars you think of when you hear their names.  No disrespect to Amy Arnold, Johnathon Davis, Mitch Dickey, Rowell Dionicio, Sam Clements, Scott Lester, and Stew Goumans for not making my list above.  Every single time there was something that needed to be said, a point to be made, a vendor to be called out for not answering a question, whatever, they were always willing to step up and say what needed to be said.  Sam was the gracious expert that I needed, Rowell was the quiet professional you think of, and Stew was the 802.11eh representative we lacked.  Amy was the star of the wired side when we needed support on that subject, JD was constantly there keeping things moving, Mitch was the SCA champion you know him to be (and the Red Bull champion of the week, hope your heart is ok!), and Scott was always ready with a question or additional guidance and input.

I know that without the full complement of delegates that the team was able to assemble, my week wouldn't have been as great.  In the end, I got just as much from the group that I traveled with as from the vendors that presented.  My hope is that as the group of delegates, we were able to represent the community at large and did our best to ensure that those that didn't attend in person were able to benefit from our work as well.

MFD3 group photo

 

A Story of Three Companies

During Mobility Field Day 3, we were fortunate enough to visit with three different companies that were in different stages of mergers/acquisitions.  To be fair, the third company, NETSCOUT, hadn't announced anything while we were onsite; it was business as usual.  This post is being written with the benefit of hindsight.  Luckily for me, it bookends my thoughts nicely, so winner-winner, chicken dinner for me!  I'll get back to NETSCOUT here in a bit.

In chronological order, we met with Arista first.  I know that some might ask why the Mobility Field Day delegates met with Arista.  Some might know why, and some might ask who Arista is.  For those not in the know, you can get caught up here.  The shortest story is that Arista acquired Mojo last month, so now they are a "cognitive Wi-Fi" company.  I don't know what that means, and honestly, after sitting through a two-hour presentation with mostly Arista folks and not enough Mojo folks, I still don't know what that is supposed to mean.  I get why they presented, but I know that as a group we were mostly confused about what was going on during the first hour.  As a first-time delegate I didn't know if the majority of the presentations were going to be this dry and wandering or not.  (Luckily for me, they weren't.)

My thought after the presentation was that here was a company that wanted to be able to support the full stack across the enterprise with some version of Artificial Intelligence (A.I.) or Machine Learning (M.L.) that I am still not clear about.  Either way, they wanted/needed a wireless product they could slap their name on and go forth and prosper.  Granted, until they get an access layer switch that provides PoE (RIGHT?!?!) that won't happen, but I suspect it won't be too long before that is announced.  At least that is my hope after our feedback.

The next day we met with Fortinet.  Most everyone knows about the Fortinet acquisition of Meru since it did happen back in May of 2015.  What I want to discuss is how their presentation went during #MFD3 and what was learned.  After WLPC 2018 in the US, Mitch Dickey of @Badger_Fi fame wrote an open letter of displeasure to Fortinet asking them to step up and do a better job of explaining what they were as a wireless company, not just a security company.

Boy did they listen!

Fortinet did a great job presenting how their wireless product integrated with the rest of their portfolio and how it was more than something bolted on as an afterthought.  They also announced (OK, it has always been a thing, they just pointed it out) that the Fortinet wireless line is capable of running in both single-channel architecture (SCA) AND multi-channel architecture (MCA) configurations!  I know for some of the delegates in the room, this was a new thing to learn.  I also know from some phone calls since #MFD3 that others didn't know that either.  The message that was delivered by the Fortinet team was smooth and eye opening.  As we left their facility at the end of the day, the general consensus was that Fortinet listened, changed their approach, and delivered a great presentation at #MFD3.  While I agree they did a great job with their presentation and everyone was impressed, I want to point out that they didn't really announce anything new or groundbreaking while we were there.  More on that later.

The next morning we went to NETSCOUT for their presentation.  We didn't know it at the time, but the ENTIRE product line that they presented on was almost two feet out the door.  Think about this: they presented Friday morning at 9 AM PDT, and when I woke up at 6:30 AM PDT on Monday they had already made the announcement.  As far as I know the ink was dry on the deal on Friday and they were just waiting for approval on the press release wording.  Their presenter, Julio Petrovich, did a great job talking about their product line for two hours, all while having to have some inkling, or concrete knowledge, that something was afoot.  You should go back and watch his presentation, I'll add the links at the end, and keep in mind what he might have known or assumed while presenting.  Got to give the guy some props for that!  One other key piece of information is that I know that Julio will be moving with the Handheld Network Test (HNT) product line as it is "carved out" of NETSCOUT.

All of this brings me to my point.  While I know company acquisitions and mergers and such are commonplace in "The Valley" (Hewlett Packard Enterprise bought Aruba in 2015, as a case in point), for me it was interesting to see such different approaches to their presentations, and possibly a lesson to be learned for whatever the HNT line that was spun out of NETSCOUT will be called.

For three years after Fortinet acquired Meru, I would say they languished in misinformation and confusion about what they were as a wireless company and what they could offer to the wireless community.  I would say that between Mitch calling them out after WLPC, other feedback from the community, and the efforts of a gentleman named Christopher Hinsz, Fortinet turned their message around.  I give Chris a lot of credit because he did a nice job while he was at NETSCOUT, and it was readily apparent that he had a big hand in the presentation Fortinet did at #MFD3.

To both the “TBD” handheld network testing company that we just found out about on 17 September 2018, and to the team over at Arista, you are both on the same journey, just a couple of weeks apart.  Take a page from what they did at Fortinet (just don’t take Chris, I like him at Fortinet) and learn that the wireless community is always there to help.  That was what we tried to do at #MFD3, and the community at large is always willing to chime in (some better than others) on the good and the bad that a company is doing.

Lastly, and I hate to say this, this is all about your messaging.  Not marketing, we can smell that out with both eyes tied behind our back, but your actual message.  Fortinet was able to redeem themselves by presenting a concise, cohesive message to the wireless community, and that means something.

Be honest.  Admit when things are still in the works but not ready yet.  As a community we are all used to things not going our way (ever heard of client drivers?) and are generally forgiving, especially early on.  Listen to the feedback we give on social media and at conferences; it will go a long way as you try to navigate the world of people that MIGHT have been exposed to just a little more radiation than is really recommended.

You can find all the videos from #MFD3 on YouTube.

Cost Of Perfection

When asked, pretty much ANYBODY in IT can tell you what ROI means.  It’s the return on your investment.  What you get for what you put in.  When talking money, it’s really easy to calculate and most people are on board with that.  I am too.

What I want to discuss is the same idea but on a more general, abstract scale.  Last week at #MFD3 I spent some time with some pretty smart Wi-Fi folks, and the topic turned to antennas and feedline, Polyphasers (a term that was never used, by the way) and other assorted instruments used in the day-to-day operation of outdoor radio gear.  Sorry to rain on your parade, but all of this stuff was in use long before 802.11 was even a thing, so the topic is applicable to all outdoor gear that is connected to a radio; whether the radio itself is outside or not wasn't the point.  All of this comes into play when the antenna is mounted outdoors.

Disclaimer – I can only think of one scenario where the radio might be mounted outside and the antenna mounted inside, and none of my Wi-Fi peeps should ever do it.  It's a bad idea for 802.11 that I won't even mention.

We talked about outdoor-specific APs, APs designated as a "P" version as opposed to the regular "E" types, and all the other nerdy stuff that came along with it.  It was during this conversation that it started to dawn on me that we were fulfilling the age-old adage that if you put 10 Wi-Fi professionals in a room and ask for a solution you will get 12 answers, and they are all acceptable.

Don’t get me wrong, it was an evening well spent and I fully enjoyed it.

As the group broke up, it made me start to ponder.  As an evening event, it was time well spent.  Had this happened to me during the day when I actually have other things to get done, I might not have the same attitude about how my time was spent.

Wi-Fi is designed to work when things aren't awesome, and in most cases, that is what most people end up using on a day-to-day basis, my customers included.  Every time I look at my system I see all the faults in it, things I want to change and wish I could fix.  Other devices now have APs embedded in them, things that never had them in the past, and they are now causing problems.  What I discovered was that in my pursuit of perfection, I was chasing a dream I was never going to reach, and driving myself insane in the process.

I think at times, as wireless professionals, we chase perfection without any concern about being able to accept the good.  We worry about 1 dB of loss here and a 5 degree difference there, but at what cost?  Does that 1 dB really make that big of a difference?  It might, and in some cases it could be the difference in making a solution work, but is that the only time you chase that 1 dB?  It might be time to pay attention to the work you do to try and achieve that last 1 dB to get from MCS 8 to MCS 9 when, in the end, the only thing the client really needs to operate is MCS 5.
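
To put a rough number on that last push: in 802.11ac, MCS 8 and MCS 9 both use 256-QAM, and the only difference is the coding rate.  A quick sketch of the ceiling of that gain (the real rate tables also depend on channel width, streams and guard interval):

```python
# Relative data-rate gain from pushing a client from VHT MCS 8 to MCS 9.
# Both use 256-QAM (8 coded bits per subcarrier); only the coding rate changes.
bits_per_subcarrier = 8
mcs8_coding_rate = 3 / 4
mcs9_coding_rate = 5 / 6

mcs8 = bits_per_subcarrier * mcs8_coding_rate  # 6.000 data bits per subcarrier
mcs9 = bits_per_subcarrier * mcs9_coding_rate  # ~6.667 data bits per subcarrier

gain = mcs9 / mcs8 - 1
print(f"MCS 8 -> MCS 9 is roughly a {gain:.0%} rate increase")  # ~11%
```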

Remember, Wi-Fi is designed to work when conditions aren’t optimum, and sometimes, good is all we need to get there.

Long Live the Controller!

Lately, it appears that every time I turn around, I read somewhere that everything, and I mean EVERYTHING, is moving to the cloud.  Maybe I am an "old geezer" in this respect, but I believe that not everything belongs in "The Cloud."

In this particular post I want to focus on the heart of WLAN infrastructure, the venerable WLC.  Now granted, there are situations, and the always-present "It Depends," that can call for a controller in the cloud, or an offsite controller, or controller-less, or mesh, or whatever the vendor is calling it this week, but sometimes, in some situations, having a physical, on-site, good old-fashioned controller just can't be beat.

In my current employment, I work at a facility that covers 53 square miles.  Granted, not all of that space is covered in buildings and facilities that have Wi-Fi, or network connectivity (although we have received that request more than once), but we do have facilities that are pretty well spread out.  While I don't want to spell out all the details, we also have a massive fiber infrastructure that allows us to do some pretty cool things all in house, and we don't rely on leased lines, or ISPs, for anything other than our internet connectivity.

Hopefully, at this point, you get the idea of where I am coming from when I say that in an environment like mine, having a centralized, on-premises, good old-fashioned chunk of metal and electronics programmed to be a Wireless LAN Controller is a great thing!

Look, I get it.  Not every customer is going to be in my situation.  Not every customer can provide their own dedicated fiber between buildings miles apart to get sub-millisecond latency between hardware, but I can.  Not every customer benefits from centralized forwarding, and that's fine.  I'm not saying that all of the other solutions are not warranted, or don't have their advantages; they really do.  I can think of a myriad of customers and/or situations where either a fully cloud-based or a hybrid solution is definitely the way to go.  Companies that have a large central office with branch offices spread across the country immediately spring to mind as a situation where either a fully cloud-based or hybrid solution would be, and should be, the solution of choice.

Everybody can agree that when it comes to RF coverage, AP placement and AP count, it all depends on the requirements of the space.  The same thing applies to selecting how the WLAN will be managed and controlled and which type of solution is eventually installed.  Requirements should be the first decision, then cost.  Whether or not your chosen vendor has just rolled out a shiny new cloud-based solution should NEVER factor into that decision-making process.  I get that sometimes cost will override everything, I've been on that side of the fence before, but please don't immediately jump there, give hardware a chance!

Let me give you some examples in my argument for centralized forwarding to an on-site controller.  Sorry, I can't bring myself to call it "on prem" or "on premises" or whatever marketing calls it this year.

  1. Configuration of my access layer switch ports has been standardized to a single configuration (see the sketch after this list).  Since I only need an access port with a single VLAN, the wired network team now knows how to configure a switch port where an AP is being installed without the wireless "team" getting involved.  You would be surprised how confusing WLAN technology can be to wired guys who have never dealt with it in the past.  If I need to do a FlexConnect-type scenario, it's rare enough that I don't mind dealing with it personally.
  2. VLAN segmentation is much, MUCH easier.  I currently have 28 active VLANs off of my WLCs, and only having to deal with them on a couple of switches relieves a lot of stress, questions and misconfigurations from the wired team.
  3. Security is easier to implement.  I run a Cisco WLAN, so there is an encapsulated (not encrypted) CAPWAP tunnel between the AP and the WLC.  In my environment we added an additional routing "feature" around the CAPWAP to keep it locked down.  That was a one-time configuration challenge that we haven't had to go back and touch, no matter how many VLANs I have added to the WLC.
  4. Using the CAPWAP functionality allows me to “get around” network segmentation on the logical network.  In certain circumstances, it can be very advantageous to have 2 devices 10 miles apart but on the same subnet since they both terminate at the same location.  Yes, concentrators can be used to achieve the same thing but if I have to add hardware onsite, why add just that?  A concentrator will add complexity and another point of failure to deal with, so now I need to add in redundancy.
  5. I have full control over when and how my upgrades are done.  Yes, in theory this shouldn’t be an argument since it is your cloud instance, but how many times have you had a service in the cloud have an update or reboot done simply by accident?  As the engineer/architect on record, I am always the first one blamed.  This leads to the next point.
  6. Troubleshooting during outages is frustrating.  Even when things are in the cloud we are blamed for outages, and in our group alone we have spent countless hours trying to show that issues with reaching an offsite service are an ISP problem, not ours or the cloud data center's fault.  What ends up happening is we point the finger at the cloud provider, and the cloud provider points the finger at us.  Eventually we point a finger at an ISP.  Ever try to get two different ISPs working together to solve a problem?  It's bad enough when you are paying them for service and you need them to work for you, let alone work with a different ISP to figure out routing problems between themselves.  It's a nightmare, and as the customer's technical people we are always left holding the bag.
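
To show just how boring (in a good way) that standardized edge configuration from point 1 gets, here is a hypothetical sketch that renders a generic Cisco-style AP access port; the interface name, VLAN number and naming convention are placeholders, not my production values:

```python
# Hypothetical generator for the one standard switch-port config handed to the
# wired team for any AP in a centrally switched (CAPWAP-tunneled) design: a plain
# access port on a single management VLAN. All values below are placeholders.
AP_PORT_TEMPLATE = """\
interface {interface}
 description AP {ap_name}
 switchport mode access
 switchport access vlan {mgmt_vlan}
 spanning-tree portfast
"""

def ap_port_config(interface: str, ap_name: str, mgmt_vlan: int = 100) -> str:
    """Render the standardized access-port snippet for a single AP."""
    return AP_PORT_TEMPLATE.format(interface=interface, ap_name=ap_name,
                                   mgmt_vlan=mgmt_vlan)

print(ap_port_config("GigabitEthernet1/0/12", "BLDG7-AP05"))
```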

I could go on, but I think you get the point.  Keep in mind, I am not here to say that cloud-based controller solutions are the devil or should go away.  On the contrary, I think in the correct situation, cloud-based is 100% the way to go, and all vendors should be able to support that model.  I am just here to argue that, in that same vein of thought, in the correct situation, physical, on-site, metal chassis-based controllers are still very pertinent and need to be considered as a viable, if not the correct, option for some situations.  And just like with cloud-based controllers, all vendors should be able to support that model.  If not, in my mind, they will always be a second-tier vendor since they can't support ALL possible solutions needed for any given customer.

As Lee Badman reminded us in the #WIFIQ for 8/21/18, try and take emotions out of the discussion.  Emotion should never be part of the conversation when designing the correct WLAN solution for any customer.  Define the requirements and design the solution based on those requirements.  The solution will change based on other factors but to say that I won’t recommend a physical controller no matter what just isn’t fair, and isn’t in keeping with the spirit of designing the best Wi-Fi for any given scenario.

Let me know your thoughts on the subject, sometimes 280 characters just isn't enough to make your argument.

P.S. – I also don’t think 2.4 GHz is dead and will argue that one until the end of time!  Maybe I am the old geezer who won’t change!