What I Learned Working for a Rural Wireless ISP: Part 2 - What was it like?
This is Part 2 of a 3-part series. Check out “What I Learned Working for a Rural Wireless ISP: Part 1 - Lessons From The Back 40, The Beginning” and “A Retrospective and The End: Part 3” for more! Part 1 has more technical info, and Part 3 is more personal anecdotes and funny (and not-so-funny) experiences. Thanks and enjoy!
I am available for IT work - if you read this and think “I need this guy,” contact me directly or through Upwork. I also offer a service called My Genius where I will answer one question per month for you to your satisfaction, including up to one hour of combined research and discussion. Additional questions are an extra fee. You can hire me as a consultant for literally anything.
What was it like?
In short, a hoot. It was a great learning experience, probably the best skills-match job I’ve ever had - exciting, maddening, innovative; it felt like ‘The Wild West’ of IT to me. I got to use my IT skills; DIY/construction skills building custom mounts, enclosures, and clever fixes; wiring (one of my favorites); electronics; customer support; scripting; documentation; and of course some boring bureaucratic stuff like FCC filings (which were a yearly requirement). I liked my coworkers and I liked the work. It wasn’t a desk job - I’d spend half my day answering the phone and doing sysadmin stuff and the other half driving around helping customers and doing maintenance and installs. There really wasn’t much I didn’t like until the end. I got to meet (and assist) a lot of people from the community and expand my skills daily.
Occasionally I’d be tasked with being on-call. We used Nagios for network monitoring and had business phones that got text alerts whenever anything went down. Each link and AP was in there individually, so if an entire leg went down, you’d get a barrage of text messages and the phone would go DING DING DING non-stop. We considered setting up dependencies so we’d only get alerts for the primary link that went down, but the amount of dinging kind of helped indicate just how fucked you were and how many people were out of service, so we kept it alerting on everything that was down. Then you’d head down to the office, check the map, and use your knowledge and some pings to determine where the outage actually originated. It could be a downed backhaul link - dead radios, mostly - or a site power outage. Occasionally, when there were storms (always a fraught time of “what’s going to go down now?”), the radios would spontaneously factory-default. We kept backups of all the config files for all the radios (which I had gathered using some BASH scripting and updated when we changed things) and we’d hope we could access the radio from the base. Once I got slick, I’d even SSH into another radio, add a 192.168.1.x IP, tunnel into the defaulted radio, and reprogram it from the office, but this wasn’t always possible. Occasionally a radar would come through and DFS would get snakey; once, a cow chewed through an ethernet cable.
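For flavor, here’s a minimal sketch of the kind of BASH loop I’m talking about for pulling config backups. The IP list file, the ubnt username, and the /tmp/system.cfg path are assumptions for illustration (that’s where the running config lived on the airOS versions I remember, but verify on yours) - this is not my original script.

```bash
#!/bin/bash
# Sketch: back up airOS configs from a list of radio management IPs.
# Assumes SSH key auth is already set up on the radios.
BACKUP_DIR="backups/$(date +%F)"
mkdir -p "$BACKUP_DIR"

while read -r ip; do
    echo "Backing up $ip..."
    ssh -o ConnectTimeout=5 "ubnt@$ip" "cat /tmp/system.cfg" \
        > "$BACKUP_DIR/$ip.cfg" || echo "FAILED: $ip" >> "$BACKUP_DIR/failed.txt"
done < radio_ips.txt   # one management IP per line
```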
You’d roll into the site, sometimes at night, often while the owner, who was usually a farmer, was working, and just get to work. I had a Grand Marquis and a German Polizei jacket, and when I showed up in my personal vehicle I was more than once mistaken for the police by a confused site host. If you had a work van, you’d either bring your laptop to the equipment or unspool the ethernet and patch in if the weather was really bad. I worked in all weather, from 100+ degree days to -40 degree wind chills (and it was always windy at the top). I once took a picture of myself where, in addition to the common beardsicles, I had a frozen tear in my eye. The first time my hands went numb I thought they’d fall off. Later, I learned about The Hunting Reaction and how my hands would go numb for half an hour and then flush with hot blood to prevent frostbite.
First, you’d check the power, then reboot the switch, then do a Ubiquiti Discovery to determine whether there were any defaulted radios, which would show up as 192.168.1.20. You’d compare the list of which devices should be there to which ones actually were and determine what needed to be done. Often you’d have to climb to do more troubleshooting and hope you didn’t forget anything or end up needing something additional. Once in a while the equipment would lose a weatherproofing element and flood, which could cause any number of problems and wasn’t easily fixed. Before I was there they had mostly Tranzeo radios, which were prone to dying in the cold weather, and had put heaters on some of them to prevent this. I’ve seen capillary action draw water into cables and flood the enclosures. Sometimes corrosion in the jacks (the cables were typically terminated in female 8P8C connectors) would cause issues ranging from radio defaulting (defaulting a radio involves shorting some pins, which would occasionally happen spontaneously because of the corrosion) to loss of connection, bouncing connections, or other strange issues.
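That “compare the list” step is easy to script; here’s a rough sketch, assuming you keep a per-site list of expected management IPs (the file name, addresses, and device names are made up for illustration):

```bash
#!/bin/bash
# Sketch: ping every device that should be at a site, flag the ones that
# don't answer, then check whether anything is sitting on the airOS
# factory-default address (192.168.1.20).
while read -r ip name; do
    if ping -c 2 -W 1 "$ip" > /dev/null 2>&1; then
        echo "OK      $name ($ip)"
    else
        echo "MISSING $name ($ip)"
    fi
done < site_devices.txt   # lines like: "10.10.4.12 marathon-ap-north"

if ping -c 2 -W 1 192.168.1.20 > /dev/null 2>&1; then
    echo "Something is answering on 192.168.1.20 - likely a defaulted radio."
fi
```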
You’d often do your troubleshooting suspended in midair by your harness (we used climbing harnesses, which allowed this without cutting off circulation). I often wished I’d had a tablet with an ethernet jack, but it wasn’t in the budget so I used my laptop and intuition. There wasn’t a convenient way of carrying the laptop up and you could usually figure out what the issue was from the bottom. It was a huge help to have someone at the office confirm and assist, especially if you had to reorient an antenna that had become loose, but this wasn’t always an option. By the end of it, between the climbing and farm work I was doing at home, I was in the best shape of my life. Seeing how things fail gave you insight into how things worked, as well, and how to build them better next time. We didn’t always have the benefit of this knowledge, however, and had to support sites that were made in the early days, which failed more often and were harder to fix.
The absolute worst thing was when a cable went bad. Sometimes (usually) there were so many cables jammed into the conduit that you really couldn’t pull a new cable, even if you attached it to the old one and tried to pull it through. The cables, no matter how much you tried to straighten them before the run, had a tendency to twist inside the tubes, preventing movement. If this happened, you might have to run a new cable, attaching it to the side of the conduit as a temporary measure (which often became permanent). You’d be surprised how long some of the temporary fixes would last. I’ve seen things you would not believe would work at all, ever, last for years.
The easiest was when a feed radio failed. You’d program another one, climb up, swap the feed, which snapped into a tube in the center of the parabolic dish, and wait for it to re-link; eazy-peazy…well, usually. They locked in with these tabs, and they were an absolute bear to get out, sometimes. Smart techs had realized this was an issue and, if they were doing it right, would cut down the sharp edge of the locking tab with a knife to make it easy to swap later, but occasionally you’d have to disassemble the entire dish to get the feed out, mangle the tube in the process, then re-orient the dish - which was easy if someone was at the office to monitor the link and a pain in the ass if they weren’t. Then, because the LED indicators were only so accurate, you’d go check the connection from the base, run a speed test, and hope to your higher power you got it right…because if you didn’t, you’d have to climb back up and fix it.
Another big issue was interference (signals other than your own overlapping on the same frequencies). Because WiFi runs on unlicensed spectrum, there’s nothing stopping anyone, including a rival ISP, from stepping on your connection. One provider in particular was known to do this on purpose, then send out flyers to the afflicted customers offering a solution to the internet woes that they were intentionally causing, in an attempt to convert customers. In general, though, there was a gentleman’s agreement not to cause intentional interference, and the providers in the area all knew each other. You could usually tell pretty easily who was causing it and whether it was intentional or perhaps a new link that had gone up and was accidentally causing an issue. You couldn’t force someone to stop, but you could ring them up and try to work something out. Because of this, there was an incentive to get your equipment on the open frequencies first and have the other providers make do with what was left.
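The quickest way to figure out who was on your frequency was a scan from the radio itself. Here’s a hedged sketch: it assumes the radio’s wireless interface is ath0 and that the firmware ships the standard wireless-tools (iwlist), which I remember being true on the Atheros-based M-series gear, but check yours; the IP is a placeholder.

```bash
#!/bin/bash
# Sketch: quick-and-dirty survey of who else is transmitting near a radio.
RADIO="${1:-10.10.4.12}"   # hypothetical management IP

ssh "ubnt@$RADIO" "iwlist ath0 scanning" \
    | grep -E 'ESSID|Frequency|Quality'   # who, where, and how loud
```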
Sites
Marathon Feed - Old
This was the first AP, IIRC, although it was put in before my time. There was infrastructure in the lower portion and a big box up top. It was one of the tallest APs at >100’. It was also the first to get the new treatment, which was to put a PoE (Power over Ethernet) switch at the top, which powered the radios and was in turn fed by a high-amperage 48VDC run. This let you easily see which radios were online or offline by querying the switch. The better PoE switches also had some other neat features I don’t really remember. The earlier solution was to have all the PoE injectors at the bottom. I think this was the only site that had an AC run to the top before we swapped it for the new technique. Eventually we had a rack in the lower building with a Ubiquiti smart switch providing power for everything except the 24GHz link, which had a special power supply.
Upside Down Conduit, Power
We were re-running the conduit going to the top, and all the lines, in preparation for this. This involved assembling a >100’ section of conduit, hooking onto the end, and pulling it up to the top. Once we got it in place and mounted, we realized the conduit was upside-down. Although it’s glued-together PVC, the sections connect with a male and a female end, and the right way was to have the male going up into the female, so if a glue joint ever failed there wouldn’t be any water ingress. We didn’t want to take it out and flip it back around, so in one of the few cases of not following “if you’re going to do something, do it right,” we kept it as-is. The other technician took the fall for this, voluntarily, as he didn’t want to re-run it, but I was the one who had mis-oriented it. I’ve often wondered how it has held up. We glued the shit out of it, so I’d guess it’s fine, but the owner was pretty picky about things like this (rightfully so, because redoing it later was even more of a pain in the ass).
Dorchester Water Tower
This was the highest site at 150’. I didn’t like the water towers as much because you couldn’t see the ground. Unlike some people, who don’t like looking down, I enjoyed the view and the feeling of being so high up. The water towers just sloped off, so there was no ground in view, but you knew if you slid off you were a goner without your harness (and maybe even with it, as hanging in a harness can cut off your circulation and kill you). This was one of the farthest sites and took over half an hour to get to. It was about 4 hops down the line, IIRC.
Fromm Fox Farm
Easily the sketchiest climb, the Fromm Fox Farm was a fox farm with a very interesting history: the Fromm Brothers found a way to farm silver foxes and ginseng, which was otherwise only found in the wild. Wisconsin ginseng is renowned in China as the most energetically spectacular ginseng in the entire world, and people fly to Wisconsin from Asia to import it. It’s a multi-million dollar industry and there are some filthy-rich farmers because of it. You can only do it once, though, as after that a root disease sets in, preventing a second crop. The more you know… There was a water tower there, constructed in the early 1900s, that we had some equipment on, and it was sketchy AF. It was over 100’ up, and the ladder actually tilted away from the structure at the top, so instead of climbing straight up you were climbing at an angle away from the structure, almost hanging. I’m pretty sure if you jumped hard on the wood, it would break and you would fall, as it was old and moss-covered. The water tower itself was wood, surrounded by ~1” steel rings. When water is added, the wood expands, sealing it; when the water is gone, the wood contracts and it starts falling apart, so it became sketchier every day. That being said, it was one of our most important links, with some of the most equipment, so we had to go up there a lot. The view was beautiful, though.
The Jungle Gym
This aptly-named AP was installed on a jungle gym at the top of a hill. Don’t ask me - I guess it was the only place available. Obviously, it was pretty easy to get to, and it did what it said on the tin, but it was one of the odder setups. We would do anything that worked, and it worked. It was all a matter of how many potential customers we could get in the area, and the geography.
The 14th Channel
We shared a location with another WISP at one of the sites and wanted to add a link. He didn’t want us to, as the spectrum was already congested. The owner assured him he’d never see us, and he never did. Some people might speculate that this was because the equipment was purportedly running on the officially unrecognized 14th channel through some sort of trickery, but this has never been confirmed.
Homemade Infrastructure
Power Feeds
Marathon Feed was the only site with AC at the top, and that was only because it was already there. Why? Because we didn’t want to pay an electrical contractor and pull a permit; low-voltage wiring is legal to DIY. We had a couple of sites with a Ubiquiti ToughSwitch at the top doing PoE, but most were fed with PoE from the base.
How Do You Get It Up There?
With a rope. No joke. We used a rope and pulley system, loaded the equipment into duffel bags, and hauled it up. Almost all of the gear was modified climbing equipment, as one of the former employees was an avid climber and that was what he was familiar with. What were you expecting, a helicopter?
Cooler Enclosures
One of the more unusual things was that the equipment was housed in beer coolers. Why? They’re about $50/ea versus up to $500 for a big enough electrical box. They’re insulated, and as I mentioned, some of the equipment was cold-sensitive - the “waste heat” from the power supplies plus the insulation kept things warm. They were waterproof and easy to work with hand tools. Each one had a UPS, PoE injectors, a switch, and a 12V fan hooked up to a 5V power supply to run on low and keep things cool during the summer, unplugged in winter. The exhaust was covered with a screened vent to keep mice out. The conduit was run directly into the cooler and silly-coned in. Easy, cheap, and most people won’t fuck with a beer cooler where they might get curious about an electrical box.
BASH and PowerShell
I was only just learning Python, so most of my automation was done with BASH and PowerShell scripting. The Ubiquiti radios ran busybox and had some really useful built-in utilities, and I learned them inside and out - literally. I took apart broken radios to see if there was any physical failure I could spot (and there often was), look at the chips and see how they worked, and figure out which drivers went with which chipsets. It was pretty easy to manually SSH into them and run utilities, but it was also easy to script things with a list of IP addresses, run commands, and capture stdout to a text file to be parsed. I could program the radios remotely in many instances and was working on an Ansible playbook to automate and manage configs, but it got scrapped before I could finish it. I hear the Ubiquiti gear is less CLI-oriented now and runs as some cloud-managed thing, which is kinda lame. I would have loved to write a Python program to automate the boring stuff and manage the gear, but I just didn’t get there. Unfortunately I no longer have any of these scripts, but I’ll be making some, along with some utilities, once I get a radio.
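In the spirit of those lost scripts, here’s a minimal sketch of that “list of IPs, run a command, capture stdout” pattern. The default command, usernames, and file names are placeholders, not my originals.

```bash
#!/bin/bash
# Sketch: run one command on every radio in a list and log the output.
# Usage: ./run_on_radios.sh "uname -a"
CMD="${1:-uname -a}"                  # placeholder command; swap in whatever you need
OUT="radio_report_$(date +%F).txt"

while read -r ip; do
    echo "=== $ip ===" >> "$OUT"
    ssh -o ConnectTimeout=5 "ubnt@$ip" "$CMD" >> "$OUT" 2>&1
done < radio_ips.txt                  # one management IP per line

echo "Wrote $OUT"
```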
busybox
busybox is hundreds of stripped-down Linux commands compiled into a single binary, which allows for a massively reduced file size. The Ubiquiti gear I was using had 8 MiB of flash storing the OS and (IIRC) 32 MiB of RAM. Since the busybox versions of the executables often didn’t have all the features, it was necessary to get creative, combining or daisy-chaining them to get the results you needed, or sometimes running your local utilities over SSH against the remote files. It was possible to transfer files using cat and tar, capturing stdout and piping it to your localhost, which worked rather elegantly. Ubiquiti also had some really helpful utilities like athstats that provide insight into the functioning of the Atheros chipset used for WiFi, and many other features were exposed via the /proc filesystem. I learned a lot about these little devices, but there was much more to learn. I’m hoping to get my hands on some more hardware, especially the newer AC gear, and would appreciate your help. Please see here if you have old, even non-functional Ubiquiti hardware you’d be willing to part with for the cost of shipping or a small amount of money, as I am poor, currently at least. I went from making $30/hr at the FHA to $17/hr at the WISP, but I learned so much and was a beginner when I started, plus I loved my job and didn’t need the money (until a misappropriation of funds led to my duplex getting behind and its eventual foreclosure, which has cost me tens of thousands of dollars in appreciation).
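The cat/tar trick mentioned above looked something like this in practice. The IP, username, and paths here are placeholders, and you’ll want to verify which tar options your particular busybox build supports - treat it as a sketch, not gospel.

```bash
#!/bin/bash
# Sketch: pull files off a radio without scp by capturing stdout over SSH.

# Single file: cat it on the remote side, redirect locally.
ssh ubnt@10.10.4.12 "cat /tmp/system.cfg" > system.cfg

# Whole directory: tar it to stdout remotely, untar it locally.
mkdir -p radio_backup
ssh ubnt@10.10.4.12 "tar -cf - /etc/persistent" | tar -xf - -C radio_backup
```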
Not That iPad
Have you ever heard of an ipad? No, not the thing you play music on - the earlier DOS-based mail servers. These things are so rare I can’t even find a link for them anymore, but I can assure you they exist. They claimed to be unhackable due to their resistance to memory-resident viruses. They had a web interface and were surprisingly resilient. If you have any information on these or links, please contact me at jackd A@T ethertech.org.
I found a Server and Started Doing Layer-7 DPI
I found an old Dell 1950 1U rackmount server, installed Security Onion, and started doing DPI (Deep Packet Inspection) on all our traffic. Traffic was mirrored to a SPAN port on one of the primary switches and fed to this server, and it was utterly fascinating. It was also super helpful on a couple of occasions when client PCs infected with viruses started layer-2 packet storms, which I had to track down and mitigate.
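Tracking down that kind of storm mostly meant figuring out which MAC address was screaming the loudest. Here’s a sketch of the sort of one-liner that helps - the interface name is a placeholder and the sample size is arbitrary:

```bash
#!/bin/bash
# Sketch: sample 2000 broadcast frames and count them per source MAC.
# The loudest talker at the top of the list is usually your culprit.
tcpdump -i eth0 -e -c 2000 broadcast 2>/dev/null \
    | awk '{print $2}' \
    | sort | uniq -c | sort -rn | head
```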
Ubiquiti Toughswitch
The Ubiquiti ToughSwitch was a semi-managed 4- or 8-port PoE switch. The Ubiquiti devices used non-standard 24V PoE but could be powered by one of these switches, saving the space of a power brick for each device. Additionally, the devices could be located at the top of the site in a box and fed with 2 ethernet feeds for redundancy (only one connected at a time, of course). How were the switches powered? We cut the cords, ran heavy-gauge wire to the top, calculated the DC loss of the run, and upped the input voltage to compensate, then patched in the 12V barrel connector at the top. Like I said, we were big on improvisation, and this worked well. It was way easier to run 2 shielded weatherproof cables than a standard ethernet cable per radio (8 of them).
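That “calculate the DC loss and compensate” step is just Ohm’s law over the round-trip length of the run. Here’s a sketch with made-up numbers - check your actual wire gauge’s resistance per foot and your switch’s acceptable input range before trusting it:

```bash
#!/bin/bash
# Sketch: estimate voltage drop on a DC power run.
# V_drop = current * resistance, and the resistance covers BOTH conductors
# (out and back), hence the factor of 2 on the run length.
LENGTH_FT=120         # one-way run length in feet (made up)
OHMS_PER_FT=0.0016    # ~12 AWG copper, about 1.6 ohms per 1000 ft
CURRENT_A=2.5         # load current in amps (made up)

awk -v l="$LENGTH_FT" -v r="$OHMS_PER_FT" -v i="$CURRENT_A" 'BEGIN {
    drop = 2 * l * r * i
    printf "Estimated drop: %.2f V - feed that much above the target voltage.\n", drop
}'
```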
Alignment: Doing It and Doing It and Do It Again
One of the most important things when trying to squeeze the most performance out of long-range WiFi (our longest link was 10 miles) was proper alignment. This maximized your signal and minimized interference. Most of the radios did not have fine-tuning hardware, however, and had to be very carefully pointed and held in place while they were tightened. I’ve lost more than one connection that was stable right up until I tightened the bolts, and then had to re-orient the dish. Occasionally, especially with the really big parabolics, there was fine-tuning hardware. Our largest dish was 3ft wide and had special hardware that allowed fine tuning.
Every antenna (even “omnis”) has a radiation pattern that shows its sensitivity in 3 dimensions. It’s usually shown as 2 two-dimensional graphs, but can sometimes be provided as a 3-dimensional graph as well (which looks pretty cool and can be seen in this excellent Cisco guide). Getting to know the receptivity of a given antenna is part knowledge gained from reading these graphs and part intuition gained from experience aiming them. It’s a bit of a “dark art” in that it requires experience to do well, especially translating the graphs to practical application. One of the common pitfalls is aligning a radio to a side lobe. Directional antennas have one primary lobe they’re most sensitive along, plus several side lobes they’re less sensitive along - but still more sensitive than a completely off-axis orientation. So proper orientation might give you +12dB, while successive side lobes might give +9dB, +6dB, and +3dB, and completely off-axis might be 0dB.
There’s really no way to know for sure whether you’re aligned with a side lobe except ‘scanning’ back and forth in at least a 30 degree sweep. If the signal is going up, you’re moving toward the center lobe; when it starts going down again, you’re moving away from it and should reverse direction. This needs to be done on both axes without disturbing the other, which can be quite difficult on radios whose mounts pivot on the X and Y axes simultaneously. The PowerBeam M5 radios with the larger dish (they came in two sizes) were a bargain, but notoriously hard to align properly. That being said, you can often get a good usable signal from a side lobe, just not the best possible signal.
The process went like this:
1) Point each dish in roughly the correct orientation using a compass or eyesight, and secure them.
2) Keeping one end stationary, sweep the remote end (if linked) through about 30 degrees to find maximum signal.
3) Center it on the main lobe: keep moving in one direction as long as the signal gets stronger, continue until it gets weaker, then come back to the strongest point; do this on the vertical AND horizontal axes.
4) Re-point the near end for maximum signal.
5) Re-point the far end again for maximum signal.
Step 3 is necessary because the parabolic dishes had side lobes - directions with high (but not the highest) gain. It was easy to get pin-pointed on one of these side lobes and think you had it zeroed in on the main lobe, and there’s really no way to tell for certain except careful, repeated positioning using the procedure above. The absolute worst radios were the Ubiquiti PowerBeam 25 parabolic dishes. They had high gain, but only a single U-bracket for locking in the alignment, and you’d often lose the alignment in the tightening process. Since the techs only had LEDs to monitor the signal (another situation where it would have been nice to have tablets), the best results were obtained with someone at the office to monitor the connection.
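If I’d had a tablet (or someone at a laptop), the monitoring half of this is easy to script. A hedged sketch: it assumes the airOS mca-status utility is present on your firmware (it was on the M-series gear I remember, but I could be mis-remembering the exact output fields), and the IP is a placeholder.

```bash
#!/bin/bash
# Sketch: poll a radio's status every couple of seconds while someone up
# the tower nudges the dish, so they can watch the signal change live.
RADIO="${1:-10.10.4.12}"   # placeholder management IP

while true; do
    ssh -o ConnectTimeout=3 "ubnt@$RADIO" "mca-status" 2>/dev/null \
        | tr ',' '\n' | grep -iE 'signal|ccq|rate'
    echo "---"
    sleep 2
done
```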
General Principles
Even more than learning about the specific hardware and infrastructure I was working with, I learned a lot of general principles through analyzing failures and monitoring the network (which was one of my favorite things to do - mmm…pretty graphs).
Good Enough and Now is better than Perfect and Sometime which often results in Not At All and Never
It’s easy to get trapped by the idea of a perfect implementation, fix, or product, and while it’s good to strive for, it can prevent you from acting. I’ve seen hasty jobs done in the middle of a storm just to get things working again last longer than my job did, and while I will always strive to do the best job I can given my resources, those resources are always limited by something. The most important thing is that it works as intended and is complete - everything else is just details. I’ve also gone years without doing a thing because I couldn’t do it the way I wanted to (i.e. “perfectly”). A working solution is better than an imaginary ideal of perfection any day. Perfection is a scam designed to cheat you out of happiness and a working solution. Get it done first, then get it done right when you can (but if you can, get it done right the first time).
Guesswork Trumps No Work
Not sure how to do something, or whether it’s going to work? Try it. Maybe it will work and maybe it won’t, but you never know unless you try. Will the connection support the data rate you’re looking for? Is the signal strong enough? Will it reach? Try it. If it works, you have your answer. If it doesn’t, you have more knowledge that will help you find your answer. You can wonder about something, theorize about it, and research it for the rest of time if you’d like, but you will never know unless you try.
Theory Trumps Guesswork
You will be able to do things quickly and intuitively once you understand the theory behind what you’re doing. Figure out how to read those radiation patterns. Learn about signal propagation. Learn about information theory (signal vs. noise, fundamentally). Learn how WiFi works at the physical layer and what frames and MCS rates are. Learn about modulation and why SNR is important. Learn the basic principles behind what you’re doing and you will do it better without even realizing it. I guarantee it. Understanding what you’re doing down to the root is going to improve your guesswork and your real work. It will help you guess, and it will help your guesses be more accurate. You will probably never understand anything completely, but try to understand it thoroughly - that is, to the best of your ability. These basic principles inform what you do and how you act, and they guide your intuition. When I know how an antenna theoretically responds in the lab, I’m better able to mentally model how it will respond in the real world, which helps immensely.
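As a concrete taste of that theory, here’s a back-of-the-napkin link budget for a 10-mile 5GHz shot like our longest link. The specific numbers (frequency, TX power, dish gain) are made-up-but-plausible examples, not our actual gear’s specs - the point is the free-space path loss formula.

```bash
#!/bin/bash
# Sketch: rough link budget using the free-space path loss formula:
#   FSPL(dB) = 20*log10(distance_km) + 20*log10(freq_MHz) + 32.44
DIST_KM=16.1      # ~10 miles
FREQ_MHZ=5800     # example frequency
TX_DBM=25         # example transmit power
GAIN_DBI=25       # example dish gain, each end

awk -v d="$DIST_KM" -v f="$FREQ_MHZ" -v tx="$TX_DBM" -v g="$GAIN_DBI" 'BEGIN {
    fspl = 20*log(d)/log(10) + 20*log(f)/log(10) + 32.44
    rx   = tx + 2*g - fspl
    printf "Free-space path loss: %.1f dB\n", fspl
    printf "Expected receive level: %.1f dBm (before cable/misc losses)\n", rx
}'
```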
Reality Trumps Theory
I’ve seen things work that I would have never believed, never tried, and that shouldn’t theoretically work. One of my favorites was a radio pointed at an electrical box. It was mounted on a pole and pointed directly at an electrical box of the same height. It had no LoS. It did 25Mbps bidirectionally, or half of what it was theoretically capable of (it was linked at 108 Mbps, which translates to around 50Mbps when everything is considered). Theoretically speaking (according to what I knew, at least) it shouldn’t have worked at all, but there it was, doing better than some I’d seen that should have worked fine. I’ve seen ethernet runs longer than 328ft (which were usually unreliable and are not a good practice). There’s only one way to find out, as I’m fond of saying - and that’s to try.
I’ve seen APs loaded with dozens of clients, none of whom ever complained about the quality of their connections - if they had all been using them maximally at the same time it would have been garbage, but they didn’t, and it worked fine. I’ve seen radios with solid data rates and a handful of clients barely work, despite theoretically being perfectly fine. Obviously I was missing something and had to figure out what it was. One time someone made a reliable, long-term connection by bouncing a signal off the underside of a water tower. Would I recommend it? No. Was it theoretically sound? Iffy at best. Did it work fine? Absolutely. Does that make it okay? In my mind, yeah. If it’s doing what you want, it’s working, whether it “should” or not.
Where There’s a Will There’s a Way
If someone 40 miles out, with no visible LoS to seemingly anything, can get a connection, then things that seem impossible can be done. It required years of infrastructure build-out, a handful of relay links, and a LoS to something, obviously, but it’s been done. People have put up towers, purchased custom equipment, requested additional surveys, and done all sorts of things I wouldn’t have possibly thought “worth it,” but it was worth it to them. They found a way because it was a necessity for them. The entire network wouldn’t have existed without a need, some smart people, some good equipment, time, money, and cooperation, as well as some other things I’m not mentioning, but they built it because there was a need and the potential for it to be met, and they found a way to meet it. Necessity is the mother of invention. Will is the mother of actuality.
KISS
Keep It Simple, Stupid is great advice, even if you’re not stupid (and all of us are, sometimes, in some ways). Simplicity is a wonderful thing and will allow you an ease you never thought possible. Is there a more technically correct “better way”? Probably. Are you going to be able to understand it while you’re half awake, struggling not to die from cold, and in a hurry? Probably not. The easier it is, the easier it is to understand, and the more you understand it, the better you will be able to work with it. Things will inevitably go wrong that you didn’t expect and that “aren’t supposed to happen.” That’s when it’s important that your implementation/solution be simple - so you can easily understand it, easily fix it, and easily support it. It will make your life simpler, and thus easier, making you happier and more satisfied. You will not “waste precious brain power” trying to understand or remember how you did something if, instead of being at the edge of your understanding, it’s so simple even an idiot would understand it.
This is why things like complex configurations and “technical solutions” often fail. If you can’t remember how you did something or how it works, how are you going to support and repair it? You won’t, is the answer. That’s not even getting into the time spent researching and implementing a more complex solution when the simple one works just as well. Give me a simple solution over a complex one any day, even at a slight performance cost, so long as it fulfills the need. I’ve spent weeks and months struggling to implement a technically elegant solution that I later shit-canned for something simpler because it was easier to implement, easier to support, and often worked better in practice.
If It Ain’t Broke, Don’t Fix It
It took me a long time to learn this one, and it keeps proving true. If something is working well - even if it shouldn’t, even if there’s something cosmetically wrong with it, and especially if it’s “not perfect” - Don’t. Touch. It. It will almost inevitably break and you will be left wondering why you thought you had to “fix” something that was working fine. If you want to improve something, upgrade something, change something, etc. (this is also a good general principle), make sure you can revert the changes. Make a backup, don’t toss the old one immediately (but don’t keep it forever), avoid irreversible changes unless necessary, and learn to live with adequate. I can’t tell you how many times I’ve tried to improve the performance of something and instead went from “ok” to “not working at all” when there was nothing functionally wrong with it. If you’re hitting actual limits, running out of resources, or otherwise have a necessity to change, that’s one thing. If you’re upgrading, that’s another (but test the new product before replacing the old one). But if something is actually working perfectly fine - no complaints, no resource constraints, no dysfunction - and you are tempted to change it or replace it, reconsider. If you have to, go for it.
Know Your Device, Know Your Dealer/Manufacturer, Know Yourself
Similar to “know your substance, know your source, know yourself,” this means that knowing what you’re working with, where it came from, where you’re at with it, and what you can do is of utmost importance. If you don’t understand what you’re working with, you won’t be able to fully utilize its potential. If you don’t know where something came from, or what the typical designs/implementations/solutions are like from a particular vendor or distributor, you won’t know what to expect. If you don’t understand yourself, you won’t know your competence and limitations, and you may be tempted away from the KISS principle by something that seems better but isn’t on your level, preventing you from adequately implementing it. If you know what you know and what you don’t know, you will understand your own capabilities and know when something is over your head and you need help, which is better sought than not. There’s nothing wrong with not knowing something, provided you know that you don’t know, because then you can learn, you won’t waste time on something you aren’t capable of, and you won’t give others or yourself false hope.
Data Rate != Throughput
This is a subset of Reality Trumps Theory and consists of a couple different things. Traffic congestion will destroy performance. This could be over-utilization of a link, a large amount of frame collisions resulting in multiple retransmissions before success, thousands of UDP connections that have a low bandwidth sum, or a large number of other factors that can reduce the performance of a connection to way below its theoretical maximum. Just because your computer says you’re connected at 500Mbps doesn’t mean you’ll get that and doesn’t guarantee performance.
Throughput != Performance
Even if you can push a metric shit-ton of data across a connection, that doesn’t mean it’s a good connection or that it will respond quickly, repeatedly. It doesn’t guarantee low latency (which drives ‘perceived speed’) and it doesn’t mean it’s going to perform reliably day in and day out, again, for a number of reasons. Latency greatly affects the perceived performance of a link and can make something that’s capable of a large amount of data transfer unusable for gaming or remote access, which require responsiveness to work effectively. This is another reason why we tested our connections using the same equipment as the customer.
Why Bridging Can Be Better Than Routing
This is another latency-related issue. When packets go through iptables, they’re checked against a set of rules, possibly mangled and re-checked, and only forwarded if they’re destined for the intended recipient network. When frames go through a bridge interface, they’re forwarded if they’re broadcast, or if the MAC address of the recipient is on the other side of the bridge (to the best of my understanding [know yourself] - it’s been a while, so I could be mis-remembering exactly how bridge forwarding works). What I can guarantee you, though, is that latency was much lower on our bridged interfaces. Sure, 10ms may not be much on a single hop, but consider when you’re traversing 7 hops before you even get to the data center: your 8ms latency is now 78ms, and that’s assuming traffic congestion is low. It adds up quick, especially when things are congested. Additionally, visibility is much better on bridged interfaces. But what about unnecessary broadcast traffic? What about network loops? Most of our switches were “dumb” switches, so they wouldn’t waste time on things like loop detection or making decisions about whether to forward packets, but this also meant you could loop the network. Trust me when I say you will know very quickly if you do: in under a minute, things will start going down and you will start getting alerts from your monitoring system. It’s actually kind of helpful, because unlike in a routed network, where a loop is limited to the one section that would have issues, the problem propagates very quickly.
What other benefits does bridging offer apart from vastly lower latency? Visibility. One of the radios goes down and resets to 192.168.1.20, starts sending ARP announcements, and you know it - and you can reach it from anywhere on the network by giving your computer an IP in that subnet. Otherwise you’d have to SSH to an “internal” radio, create a virtual interface on it, and tunnel through from there, which is possible but takes time and is more complicated.
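That “give your computer an IP in that subnet” trick is basically one line on a Linux box. A sketch - the interface name and the address I pick are placeholders:

```bash
#!/bin/bash
# Sketch: add a temporary secondary address so you can reach a
# factory-defaulted radio sitting at 192.168.1.20 from your workstation.
sudo ip addr add 192.168.1.10/24 dev eth0   # any free address in the subnet

ping -c 3 192.168.1.20 && ssh ubnt@192.168.1.20

# Clean up when you're done:
sudo ip addr del 192.168.1.10/24 dev eth0
```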
What are the downsides? Vulnerability to layer-2 packet storms, and excessive layer-2 broadcast traffic (which is actually way less than you’d think - these are single frames, not thousands of packets like a TCP or UDP transfer). There are probably others, but those are the only ones I can think of right now.
The Best Equipment - A Brief Overview and Request
IMHO the best equipment out there is Ubiquiti, although we also worked with Mikrotik and Tranzeo while I was there. I’m particularly looking for XM- and XW-series equipment, especially international versions. I’m not looking for anything earlier than a NanoStation2. I would like to take a look at their newer equipment as well (I’ve been out of the field for 8 years now, but would jump back in in a second if you offered me remote work). If you have any sitting around that you’re not using and would give it away or sell it cheap, contact me at ubiquiti A@T ethertech.org and we can try to work something out, but I’m really looking for donations. I’m going to be doing a series of articles with autopsies (dead equipment is fine as long as it’s free), exploring the shell, testing various features, and performance testing.
Otterbox
We used Otterbox cases. In addition to being a fun dirty reference if you think about it (my Otterbox is spacious and you can easily fit a large one inside it), they’re really good cases. I once dropped my phone off an 80’ silo and it was undamaged, case included. Granted, it landed on the ground, not pavement, but I never once broke a phone or cracked a screen using the whole-phone protectors with the built-in plastic screen. I did drop one down a chimney once, but managed to retrieve it.
I’d absolutely love any equipment, working or not, that you could donate - contact ubiquiti A@T ethertech.org. I’d also be interested in basically any 802.11 equipment you have, and if you donate something I will post about it, review it, explore it, and test it - so if you’re a foreign manufacturer with new equipment for me to explore, get in touch as well.