SolarNetOne IPv6 Case Study
I am continually impressed by the adaptability of the Internet’s underlying structure. It brings so many disparate groups with different interests and goals together to work toward the common aim of making our systems interoperate correctly. The network’s response to IPv4 depletion by producing IPv6 is a great example of that process. As the founder of SolarNetOne, I have made every service on every host on every public-facing network IPv6-enabled. All of our websites have been dual-stacked since it became possible to do so, you can query our name servers via IPv6, and our mail exchanges all send and accept mail over IPv6.
The early days
SolarNetOne is a small research and development organization that focuses on improving the efficiency of network equipment and power systems, although our efforts at times become more wide-reaching, including the development of a solar flare prediction system, advanced renewable/grid interconnection systems, and low-power single-layer graphene exfoliation.
We primarily provide off-grid networks in areas that are difficult to bring Internet to, in the remote Pacific islands and elsewhere. We worked with the Solar Electric Light Fund, the Network Startup Resource Center, and Dr. Vint Cerf to get started; without any of them, we would not have had the opportunity to make the progress that we have. I have been interested in the core network protocols since 1998, when I started studying them and getting involved in the Internet Engineering Task Force (IETF). When IPv6 started to roll out through the working groups, I took notice and signed up for an original 6bone 3ffe:: address allocation so I could help test it out. We weren’t using “real” routers, just commodity Linux boxes at the time, and we had to hand-write all of the routing tables into the kernel, since no routing daemons were available yet. It was an interesting time, and it gave us some early hands-on experience with IPv6 and an understanding of what it can and can’t do.
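Hand-entering routes on a Linux box, as described above, looked roughly like the following sketch. The prefixes here are documentation placeholders (2001:db8::/32), not our actual 6bone 3ffe:: allocation, and the interface names will vary by system.

```shell
# Assign an address to the LAN-facing interface
ip -6 addr add 2001:db8:1::1/64 dev eth0

# Static route for a downstream subnet via a neighboring router
ip -6 route add 2001:db8:2::/64 via 2001:db8:1::2 dev eth0

# Default route toward the tunnel endpoint
ip -6 route add default dev sit1

# Enable forwarding so the box acts as a router
sysctl -w net.ipv6.conf.all.forwarding=1
```

Every route had to be maintained by hand this way; a routing daemon now does the same job dynamically.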
When we rolled out IPv6 with real addresses years later, I used a cookie-cutter replica of what we had done with the 6bone. Initially, there were some technical obstacles, such as the client-side daemon code not being ready yet. For example, the router advertisement daemon was somewhat buggy, and there were some bugs in the kernel stack when we first rolled it out back in 2003, but those things were resolved quickly, and once the testing network was sunset and the production network was lit up, we have had a very stable system.
My small-scale IPv6 deployment has happened via tunnels throughout the history of building out the SolarNetOne network. After 6bone shut down, we began tunneling through the Internet Systems Consortium (ISC), and then we needed more varied address space to do some research, so we tunneled to XS4ALL. When that closed down, we moved that tunnel over to Hurricane Electric. In each of these cases, our address space came from the operator of the remote tunnel endpoint. With the completion of my routing infrastructure build, I am now announcing the address block we were allocated by ARIN this year without needing to get the addresses from anyone else, still via tunnels until direct peering opportunities present themselves.
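A 6in4 tunnel of the kind these brokers provide can be brought up with a handful of commands. This is a generic sketch, not our actual configuration: 192.0.2.1 stands in for the local IPv4 endpoint, 198.51.100.1 for the broker's server, and 2001:db8::2/64 for our side of the tunnel's point-to-point /64.

```shell
# Create a 6in4 (protocol 41) tunnel to the broker's endpoint
ip tunnel add he6 mode sit remote 198.51.100.1 local 192.0.2.1 ttl 255
ip link set he6 up

# Address our side of the tunnel and send all IPv6 traffic through it
ip -6 addr add 2001:db8::2/64 dev he6
ip -6 route add default dev he6
```

The routed /48 the broker delegates is then carved into /64s for the local networks behind this endpoint.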
From tunnels to my own allocation
Whereas previously we only had /48s through tunnels, we now have a /36 IPv6 allocation from ARIN. To begin with, I had to educate myself thoroughly in BGP. I have also had some problems with the business-class carrier upstream from me that have necessitated putting some hosts off site and building IPsec tunnels between here and there to secure the BGP sessions, to ensure no corruption of the routing data.
We have PoPs in Miami and Las Vegas connected via discrete routes coming from Daytona Beach, Florida. The peers that we have on the remote end allow us to originate our IPv4 addresses. We are announcing the IPv6 addresses through tunnel drops in New York and Miami. Once our IPv4 routes were being propagated globally, I renumbered our networks with both protocol stacks. I preferred to do it in one shot rather than once per protocol, since we were already logging into each host to edit the configurations.
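Originating a prefix over a tunnel drop like this can be sketched in BIRD 2 configuration syntax. Everything here is a placeholder, not our real setup: private ASNs 64512/64513, documentation prefix 2001:db8::/36, and an invented neighbor address standing in for the remote tunnel endpoint.

```shell
cat > /etc/bird/bird.conf <<'EOF'
router id 192.0.2.1;

protocol static announce6 {
    ipv6;
    # Pull our allocation into the routing table so BGP can export it
    route 2001:db8::/36 unreachable;
}

protocol bgp upstream6 {
    local as 64512;
    neighbor 2001:db8:ffff::1 as 64513;   # peer at the tunnel drop
    ipv6 {
        import all;
        export where source = RTS_STATIC; # announce only our own block
    };
}
EOF
```

The export filter matters: announcing only your own statically declared block keeps you from accidentally leaking routes learned from one peer to another.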
It is interesting to have a little more granularity with a real routing daemon rather than the advertisement daemon we’ve been using all these years. In the past, I used the Router Advertisement Daemon (radvd) on Linux hosts to do stateless autoconfiguration, with clients deriving their IP addresses from their MAC addresses. I would set up a tunnel endpoint where a /48 was routed to a host with multiple network interfaces, and then route the packets out on /64 subnets to the various hosts needing IPv6 connectivity.
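That radvd setup amounts to a short configuration file per interface. The interface name and prefix below are placeholders; the pattern is one /64 per downstream interface, carved out of the tunneled /48.

```shell
cat > /etc/radvd.conf <<'EOF'
interface eth0 {
    AdvSendAdvert on;          # emit router advertisements on this link
    prefix 2001:db8:1:1::/64 { # one /64 per downstream interface
        AdvOnLink on;
        AdvAutonomous on;      # clients build their own addresses (SLAAC)
    };
};
EOF
```

With `AdvAutonomous on`, hosts combine the advertised /64 with an interface identifier to form their addresses, so no per-host address assignment is needed.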
The primary reason I have gone through the learning curve, from BGP to IPsec to making sure everything is well secured, is that when we got hit by hurricanes Matthew and Irma, our IPv6 network would go down, since in the old configuration our tunnel was end-pointed on one circuit only. I had redundant circuits, but the packets wouldn’t re-route automatically without BGP and RIR-allocated addresses. If I wanted to bring it back up in the middle of a hurricane, I would have to re-home the tunnel and renumber all of the hosts accordingly. That was a lot of work to go through just to temporarily restore connectivity, and it would all have to be manually undone upon restoration of normal conditions. Now, in the event that one or two of our upstream circuits go down, the packets still get routed, and the rack housing my DNS, web, and mail servers is still reachable. In the past, it was debilitating not to have the full strength of my network in the situations when we needed it the most. With the expansion into an Autonomous System with its own IPv4 and IPv6 assets, this reconfiguration happens automatically, on the fly, leaving me free to handle other aspects of the storm response, confident in the knowledge that my network will remain as fully functional as the off-grid solar power system that keeps our uptime at five nines.
Move your truck
Neither of my upstream circuits “has the ability” to route IPv6. I understand they want to leverage their investment in their IPv4 infrastructure for as long as possible, but it’s holding back progress. I would like to go IPv6-only, but I have to dual-stack or half of my user base couldn’t find me. In the rural, less connected areas of America, it can be a hard sell to get edge carriers to support IPv6 on their networks. In the greater Daytona Beach area, you would think that I could get native IPv6 from any ISP, yet I’ve gotten responses saying that they “don’t support IPv6 over the Layer 1 that we provide you.” To get IPv6, I would have to pay four times more and do a fiber build in order for them to route it for me.
In response, I stuck with tunnels to route those packets, but that increases latency and thereby reduces performance. The way I explained it to the salesmen from the carrier that “could not” support IPv6 (or BGP sessions) was this: “Your house is burning down, and the flames from your house have started to come over to my house, too. Your truck is parked in front of the fire hydrant. Can you please move your truck so I can hook up the hose and save both our houses? There is a problem that needs to be solved, and you are standing in the way.” I put the onus on the last-mile carriers to get their act together. Get with the times and deploy IPv6, so we aren’t left behind the rest of the world, which is actively doing so.
Pros of deploying IPv6
Given the potential of IPv6 and the cost of IPv4, I just decided to do it, and had it running after only a few extra late nights; a few more late nights gave me the time to maintain and expand it as things grew. Since deploying IPv6, we have seen an increase in the flexibility of what we can do with our networks, without the added costs the monetization of IPv4 has brought. In the market I’m in, a /28 costs about $80/month, so having a large number of addresses to experiment with is an expensive proposition. It is also notably less expensive to deploy IPv6 than IPv4 via BGP, unless you are already well seated on the backbone. For the small business looking for redundant multi-homing, it is a far less costly and time-consuming option than deploying IPv4 on the network edge.
A large portion of IPv6 traffic to e-commerce hosts on our rack comes from mobile customers. More often than not, mobile customers come in over IPv6, showing the mobile carriers are doing a reasonably good job of rolling out IPv6. Indeed, the only IPv6 address assigned here that was not of my doing is on the mobile I carry every day.
Anybody deploying IoT devices is going to run into an addressing problem or a Network Address Translation (NAT) problem. When it comes to deploying a wide range of sensors or home-connected devices, IPv6 solves the shortage issue, particularly with 6LoWPAN, a protocol designed specifically to employ IPv6 to improve the capabilities of IoT networks. I think we will soon reach the point where you can’t leverage all of the services your device offers unless it has its own unique identifier. If IoT devices are sitting behind a NAT, you are going to have to do some nasty tunneling to get them to function, whereas they work natively with IPv6, and will do so more and more as time progresses.
Crawl before you walk
My advice is to start small: make a tunnel, learn how the network works, and learn how to secure it. Since IPv6 is a whole new protocol stack, you need to learn what you need to do in terms of firewalling so you can protect your assets. This can all be done for free, for nothing more than the cost of your manpower. Once you are comfortable with the protocol, get your address allocation and route your packets. If you are already routing packets for IPv4, it’s really very simple on almost all modern routers to enable dual-stack operation. If you don’t want to do that (or if doing so is a problem at the license level or comes at additional cost), you can leverage Multiprotocol Label Switching (MPLS) to move IPv6 over your existing IPv4 routing infrastructure. Start small with a test host, put a couple of network interfaces in it, route some IPv6, and see what it looks like. Crawl before you walk, and deploying IPv6 will seem a lot less daunting.
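For the firewalling step, a minimal IPv6 baseline in nftables might look like the sketch below. These rules are illustrative, not a complete policy; the SSH rule is just an example service, and you would open ports to match what you actually run.

```shell
# Default-drop inbound IPv6 policy
nft add table ip6 filter
nft add chain ip6 filter input '{ type filter hook input priority 0; policy drop; }'

# Allow return traffic and loopback
nft add rule ip6 filter input ct state established,related accept
nft add rule ip6 filter input iif lo accept

# ICMPv6 is not optional in IPv6: neighbor discovery and path MTU
# discovery depend on it, so don't blanket-drop it as people did with
# ICMP on IPv4
nft add rule ip6 filter input meta l4proto icmpv6 accept

# Example inbound service: SSH
nft add rule ip6 filter input tcp dport 22 accept
```

The ICMPv6 rule is the part newcomers from IPv4 most often get wrong; dropping it entirely breaks basic connectivity.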