Guest Blog

Build Your Own IPv6 Lab

Get your hands dirty. Playing with IPv6 can be the best way to learn it. Jeffrey L. Carrell lays out how you can build an IPv6 lab from the comfort of your own home for no more than a few dollars.

Guest Blog Post by Jeffrey L. Carrell

IPv6 is called the new Internet protocol. However, it’s been running on the Internet since 1999, so it’s really not so new; it’s just that not a lot of networks have implemented it yet. The challenge is that it is different from what we are all used to working with. It’s a bigger number: 128 bits compared to IPv4’s 32 bits. It has colons instead of periods (ok, dots for us diehard networking folks). It has all-new routing protocol components. And on, and on. But it has WAY MORE possible addresses than IPv4! In theory, we should never run out in our lifetimes! But, it is different.

So, how do you learn about IPv6 if your company is not implementing IPv6? How do you afford the equipment that is capable of running IPv6? More importantly, should you spend your own money and time to learn about IPv6 if there are no other compelling reasons or funding? The answer: YES, you should learn it on your own! A professional technologist should realize that investing in yourself is important and generally does pay off in the future. How much are you willing to invest, money-wise? How about very little (and I mean ‘little’ as in a few bucks)?

All it takes is a computer (which you probably already have), a free virtualization application, a free full-blown routing application, an Internet connection (even free WiFi at the coffee shop will work), a $5.00 USD IPv6 tunnel account, and free or evaluation versions of client operating systems. With that, you can build a sophisticated lab and learn IPv6 just as effectively as if you had invested a lot more money.

The platform I’d recommend consists of a single computer with 8+ GB of RAM, 200 GB of hard disk, a dual-core or better processor, one or more network interfaces, Oracle’s VirtualBox, VyOS (routing software), a Freenet6 account and software (IPv6 tunnel service), client OSes such as a Linux platform and/or Microsoft Windows evaluation versions, and an Internet connection that is IPv4 only. With this as a base system platform, you can also add external equipment and build a larger lab environment.

The purpose here is to “play” with IPv6. What I have found, not only for myself but for many others in my IPv6 training classes, is that merely reading about IPv6 does not provide adequate knowledge or the hands-on experience that leads to actually learning IPv6. You need to see the configuration components; you need to look at the packets with a protocol analyzer; you need to try different configuration scenarios. The doing will drive home the learning!

You can create your own IPv6 lab environment with just about any alternative to what I’ve outlined above. Any VM application will work, many routers and/or routing applications will work, and there are a few choices of IPv6 tunnel provider. My personal goal was to find the combination that didn’t require a lot of money or special hardware, and didn’t require specific types of Internet connectivity (e.g. you’re not required to have a static IPv4 address, which is generally the way home Internet service is provided). Another major aspect of this IPv6 lab system is to have real IPv6 Internet connectivity over an IPv4-only connection, which means you can actually use IPv6 to communicate with the outside world. You can even configure a client VM to have no IPv4 at all! I have tested this system at various WiFi hotspots, on friends’ networks, and even at 37,000 feet while flying on a plane that had WiFi.

I started with an account with Freenet6, which provides a /56 IPv6 prefix, enough for up to 256 /64 IPv6 subnets. I generally break the /56 into 16 /60s, and each /60 then provides 16 /64s. This lets me build multiple networks, and I can then enable different IPv6 routing protocols to really test my configs. A most excellent resource specifically covering IPv6 addressing topics, soon to be published, is “IPv6 Address Planning” by Tom Coffeen (O’Reilly). Another great resource is Rick Graziani’s book “IPv6 Fundamentals: A Straightforward Approach to Understanding IPv6” (Cisco Press), which covers not only IPv6 basics but also routing in an IPv6 network, with a focus on Cisco IOS.
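
Those subnet-splitting numbers are easy to sanity-check in code. Here is a minimal sketch using Python’s standard ipaddress module; the 2001:db8::/32 documentation prefix stands in for whatever /56 your tunnel broker actually assigns:

```python
import ipaddress

# Illustrative /56 from the documentation prefix (your broker assigns the real one)
site = ipaddress.ip_network("2001:db8:f00::/56")

sixties = list(site.subnets(new_prefix=60))            # 16 x /60
sixty_fours = list(sixties[0].subnets(new_prefix=64))  # 16 x /64 within the first /60

print(len(sixties))        # 16
print(len(sixty_fours))    # 16
print(sixty_fours[0])      # 2001:db8:f00::/64
print(sixty_fours[-1])     # 2001:db8:f00:f::/64
```

Each extra bit of prefix length doubles the subnet count, so /56 to /60 is 2^4 = 16 subnets, and likewise /60 to /64.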

So far I’ve made it sound easy to throw all this stuff together in a pot, stir it around a bit, and presto-changeo, you have a way-cool IPv6 lab. Unfortunately, that is not exactly the case. It does take a bit of tweaking and modifying to make the base system work. First you download all the software you need and sign up for your Freenet6 account. Then you install VirtualBox and create a VyOS virtual machine (VM). After getting the VyOS VM going, the real fun begins. You must apply some updates to the Debian base that VyOS runs on and then install the Freenet6 (called gogo6) client software. After getting that all going, there are a few tweaks to make to the gogo6 main configuration file for account info, etc., and to the router config file gogo6 calls within VyOS. It’s a bit more complicated than I have time or space to cover here. After all this, you can then configure one or more client VMs to play with.
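
To give a flavor of those account-info tweaks, the relevant settings in the gogo6 client’s main configuration file look roughly like this (key names are from the gogoc client package; the values are placeholders, so check the comments in your own gogoc.conf):

```ini
# gogoc.conf (excerpt; placeholder values only)
userid=your-freenet6-username
passwd=your-freenet6-password
server=          # Freenet6 tunnel broker
auth_method=any
host_type=router                    # request a routed prefix, not a single address
prefixlen=56                        # ask for the /56 to subnet in the lab
```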

Here is what the IPv6 Lab system could look like:

Network Diagram Screenshot

After configuring the system, I have an IPv6 tunnel up and running, and a Linux client on a different IPv6 subnet, on an IPv4 only connection to the Internet, all in VirtualBox:

VB VyOS Screenshot

If you want to learn more about how you can set up your own IPv6 home lab, I will be facilitating two half-day hands-on workshops on this project at the upcoming 2014 North American IPv6 Summit, September 23-25 in Denver, Colorado. There is still time to register for the workshop and/or the IPv6 Summit.


Jeff Carrell

Jeffrey L. Carrell is a Network Consultant at Network Conversions. Jeff is a frequent industry speaker, freelance writer, blogger, IPv6 Forum Certified Trainer, network instructor and course developer for major networking manufacturers, and technical lead and co-author on two books: Guide to TCP/IP, 4th Edition (contributing IPv6 content) and Fundamentals of Communications and Networking, 2nd Edition. Jeff focuses on IPv6 interoperability, and delivers lectures and IPv6 hands-on labs at technical conferences worldwide. As an IPv6 Forum Certified Trainer, Jeff offers IPv6 Forum Silver and Gold Certified courses and customized IPv6 training courses, and is an IPv6 instructor for both HP Education Services and Nephos6, teaching their respective IPv6 Foundations courses. Jeff is a featured IPv6 instructor for the gogoNET online community, offering webinars and online workshops on IPv6 technologies via the gogoTRAINING initiative. Jeff is also a “Protocol Analysis Workshop” facilitator for Riverbed. Jeff has been involved in the computer industry for 35 years and has concentrated on the internetworking portion of the industry for over 28 of those years. Jeff actively participates on IPv6 topics on Twitter @JeffCarrell_v6.


What do terms like multistakeholderism, Internet governance, and technical community really mean?

Reflecting on the Internet Governance Forum, Suzanne Woolf explains how difficult it can be to come to a common understanding about the terminology used at the IGF and her impressions as a first-time attendee last year.

Guest blog post by Suzanne Woolf

Last year I went to my first Internet Governance Forum in Bali, Indonesia.  I was involved in several workshops and discussions about “the role of the technical community in Internet governance,” including the Regional Internet Registries (RIRs) and Internet Engineering Task Force (IETF); the role of governments; questions of increasing access to communications resources for the next billion users; and reactions to “pervasive monitoring” of Internet communications by US and other intelligence agencies.

I’ve been involved with “Internet policy” for many years now, as a member of ARIN’s AC, on various ICANN Advisory Committees, and as a liaison to the ICANN Board of Directors…which turned out to be a useful perspective, but by no means complete!

Words, Words, Words

From the perspective of someone who is new to the IGF, but familiar with “Internet governance” from experience of other venues, it was striking how much confusion there seems to be about many of the key terms thrown around.

Multistakeholderism.  It’s easy for a techie to listen to 20 minutes of IGF workshops and speeches and conclude the term itself doesn’t actually mean anything. But a few days later I’d concluded it actually means too many things. I think there’s already a partial shared definition, though, in what it *isn’t*. It’s sometimes hard to tell what “multistakeholderism” means, but it does seem to be based on the idea that “Decisions aren’t only made by governments and implemented in treaties.” The problem then becomes figuring out who *does* make decisions, organized by what processes, so the decisions make sense and don’t just represent one or a few interests.

Internet Governance.  “Internet governance” is itself another slippery term. It’s not just about what the RIRs and ICANN do, it also includes topics like spam and child protection online and intellectual property protection and so on. The things RIRs and the Internet Corporation for Assigned Names and Numbers (ICANN) and the IETF oversee are “critical Internet resources” and considered really important, but the technical and operational details of how the Internet actually works are only a small part of what people talk about as “Internet governance”. This by itself can be disorienting for an engineer!

Technical Community.  Another thing that jumped out at me was the phrase “technical community”. This is another term that’s hard to define in the IGF context. It doesn’t mean there what it means to the ARIN community, where people are “technical” if their primary knowledge/skills/work involves things like routers and peering. In the IGF context, people and roles are defined from a starting point of “some kind of stakeholder, not government”. The definition of “technical community” is much broader than what we’re used to, and much less clear.  It’s distinguished from “government,” “business,” and “civil society,” and it includes not only people whose background is technology and engineering, but anyone from an organization oriented on technology, from large ISPs and software companies to the RIRs, the Internet Society (ISOC), and the World Wide Web Consortium (W3C). These categories can overlap, too.

Overall Impressions

Techies who step into the IGF, and meetings like it, should be prepared to be a little disoriented, but willing to listen and persist. IGF participants are people of good will and genuine concern for the future of the Internet. They don’t entirely agree on how to go about it, but they want a future that’s not owned only by governments or special interests, and they’re willing to work for it. The rhetoric can be confusing and the outcomes hard to define, but there’s a lot of positive energy and some real insight to be found as well. And the “technical community” has our own contribution to make, if we’re willing to engage.

I think RIR members should know that the Number Resource Organization (NRO) is doing very good work in just showing up, being visible in venues like the IGF, and answering questions about the mysteries of how the net really works and the nature of “critical Internet resources” like IP addresses. If we don’t explain those things to “the other stakeholders,” it’s going to be even more difficult to make progress on “Internet governance” issues.


Suzanne Woolf has extensive experience in Internet infrastructure technology and management, particularly DNS and routing, and technical policy for names and addresses, including two terms on ARIN’s Advisory Council. She currently serves as co-chair of the DNSOP working group in the IETF and liaison from the Root Server System Advisory Committee to the ICANN Board of Directors. She’s a freelance consultant in Internet infrastructure and policy, based in the northeastern US.




For more information about the NRO’s participation in this year’s Internet Governance Forum, visit the NRO website.


Live Beyond Layer 3

Based on his time at CANTO, Owen DeLong, ARIN Advisory Council member & Senior Backbone Engineer at Black Lotus, encourages fellow Internet technologists to take the time to field questions from senior management and government officials.

Guest Blog Post by Owen DeLong 

I’m a layer three guy, which means that I am a network guy, specifically an Internet guy. I work on routers and connect big networks to other big networks to try and make the Internet work better. For a long time, I, and many people like me, have tried very hard to ignore what we call layers 8/9/10 (the financial, administrative, and governmental entities involved with the Internet). Or worse, sometimes we have been known to sneer at them as “damage to be routed around”. I know that attitude still persists among some, but it really fails to take in the whole story.

ARIN at CANTO 2014

For the last several years, I’ve had the opportunity to work with ARIN doing outreach in the Caribbean at the annual CANTO conference and exhibition. While there are lots of layer 1/2/3 (fiber optics, switches, routers, etc.) products on display, the reality is that most of this show is for senior management and government officials. This year, the opening ceremony included speeches from the Secretary-General of the ITU and the Prime Minister of the Bahamas (where the meeting was held). There was no shortage of senior government officials.

There are several reasons that CANTO attracts so many senior executives and government officials. First, the Caribbean has traditionally had a number of state-run and/or state-owned telecommunications services or monopoly telecommunications services that were licensed by the state(s). That’s been changing, but slowly. CANTO has always been a forum where those groups and other industry representatives can come together to learn about new technologies, see what is happening in other parts of the region, and talk about issues that are unique to the region and/or require coordination among various countries in the region. In recent years, it has come to include not only telecom, but all of ICT and has also served as a forum to help move away from monopoly telecommunications towards more deregulated and diverse provider choices.

The Internet has become important enough that we layer 1/2/3 folks can no longer pretend government isn’t relevant, nor can we pretend that government won’t notice us and will continue to leave us alone. It’s critical that we increase our awareness about how things work in the wider world and start educating regulators and senior management in ways that will allow them to do their jobs without damaging what we’ve built. As nice as it is to live in layer 3 without caring what’s above or below, strict layering simply doesn’t work with human relationships. In the end, networks are about connecting people, and that’s a process that transcends all layers.

When a manager or a regulator approaches you and starts asking questions you don’t think are worth your time, remember, your answers are going to shape how they decide many things that may affect your future. Answer wisely and carefully. Be available for follow-ups. Be courteous, and this experiment that escaped from the laboratory might just be able to remain the most awesome tool ever developed for democratizing communications.


Why Is the Transition To IPv6 Taking So Long?

IPv6 is an essential technology if the Internet is to grow, but adoption has been slow. Graeme Caldwell of Interworx takes a look at why organizations are holding back on IPv6.

Guest blog post by Graeme Caldwell 

We stand on the cusp of an explosion in the number of Internet-connected devices. The mobile revolution was just the beginning. Combined, the burgeoning wearables market and the Internet of Things will potentially create billions of new connected devices over the next few years. Every device will need an IP address and there are far too few available addresses within the IPv4 system to handle the sheer quantity of connections. It’s a problem that’s been predicted and solved for many years, in theory at least. But IPv6 is being adopted at a glacially slow pace.

The reasons for the gradual adoption are simple to understand. It’s expensive. The Internet is made up of tens of millions of servers, routers, and switches that were designed to work with IPv4. Upgrading that infrastructure entails a significant capital investment. As things stand, workarounds like NAT take some of the pressure off — but they are a temporary band-aid solution. In the long-term, transition to IPv6 will have to happen, but, given the level of the required investment, there’s not a compelling business argument to make the transition immediately.

To get the full benefit of IPv6, a significant proportion of the net’s infrastructure has to support it, and, with the exception of a few forward-looking organizations, most don’t want to invest in infrastructure upgrades that don’t have any immediate benefit.

When they were developing IPv6, the Internet Engineering Task Force decided that, in order to implement new features in IPv6, the protocol would not be backward compatible with IPv4. IPv6-native devices cannot straightforwardly communicate with IPv4 devices. That makes incremental updating of systems difficult, because workarounds have to be put in place to ensure that legacy hardware and newer IPv6 hardware have a way of talking to each other; most IPv4 hardware will never be updated.

According to Leslie Daigle, Former Chief Internet Technology Officer for the Internet Society, “The lack of real backwards compatibility for IPv4 was the single critical failure. There were reasons at the time for doing that. But the reality is that nobody wants to go to IPv6 unless they think their friends are doing it, too.”

Forward thinking software companies have already included the necessary functionality to handle IPv6 in their products. At InterWorx, we could have left implementing IPv6 support until we absolutely had to, but the benefits of the transition for us and our users in the web hosting industry were undeniable. We wanted to give clients the option of using IPv6 so they can begin to prepare for the inevitable move and implement IPv6 systems. InterWorx includes a full suite of IPv6 management tools, including IPv6 pools management, IPv6 clustering, and diagnostic tools.

In a February 2014 report, Google revealed that their IPv6 traffic had hit 3 percent, and it’s currently at about 4 percent. That seems unimpressive, but it’s a sign that adoption rates are accelerating: the move from 2 percent to 3 percent took only five months, and from 3 percent to 4 percent even less time. Under pressure from the proliferation of connected devices, we can expect to see organizations adopting IPv6 ever more quickly.


Graeme works as an inbound marketer for InterWorx, a revolutionary web hosting control panel for hosts who need scalability and reliability. Follow InterWorx on Twitter at @interworx, like them on Facebook, and check out their blog.


Caribbean Internet Governance Forum (CIGF) Celebrates 10 Years

CTU Telecommunications Specialist Nigel Cassimire shares what happened at this year’s Caribbean Internet Governance Forum.

Guest blog post by Nigel Cassimire, Telecommunications Specialist, CTU

The 10th edition of the Caribbean Internet Governance Forum (CIGF) was held at the Atlantis, Paradise Island Resort in The Bahamas from 6th to 8th August 2014. The CIGF is a regional, multi-stakeholder forum which was initiated by the Caribbean Telecommunications Union (CTU) and the Caribbean Community (CARICOM) Secretariat in 2005 in order to coordinate a regional approach to Internet Governance issues for the final session of the World Summit on the Information Society (WSIS) in Tunis that year.

The CIGF has since been convened annually by the CTU and lays claim to being the first such regional forum in the world, all others having been convened after the initial global Internet Governance Forum in 2006. The primary product of the work of the CIGF has been the formulation of a Caribbean Internet Governance Policy Framework issued in 2009, and updated in 2013, which:

  • Articulates a vision, mission and guiding principles for Internet Governance (IG) in the Caribbean
  • Identifies current priority areas in IG of greatest relevance to the Caribbean
  • Offers policy recommendations in such priority areas for the attention of all stakeholders

The theme of the 10th CIGF was “Building National Capacity for Global Influence” and specific objectives addressed in the agenda were to:

  • Build regional capacity in the area of ccTLD operation and administration
  • Review and update the Caribbean Internet Governance Framework V 2.0
  • Facilitate open discussion on the Net Mundial Outcomes, and the proposed NTIA transition.
  • Explore and spread awareness on Opportunities for Caribbean Growth through the Internet Economy
  • Develop a mechanism to ensure effective Caribbean representation at Global Internet Governance Fora.

There were over 40 registered participants representing Caribbean stakeholders in government, operating companies and other private sector, academia, civil society and, in particular, Caribbean ccTLDs for whom dedicated content had been included on the agenda. ICANN, ARIN, LACNIC, ISOC and Google all provided financial support as well as valuable agenda content. Agenda information as well as presentation slides are archived on the CTU’s event web page.

The 10th CIGF successfully addressed its objectives through presentations and several vibrant discussion sessions and, when necessary, focussed review of the policy framework document. Suggested refinements were identified for subsequent wider circulation and comment. This is the first step in the current revision cycle towards a third revision of the document for likely issuance in 2016.

Most importantly, the CTU Secretary General, Ms. Bernadette Lewis proposed an approach for fostering capacity building in IG at the national level in order to enhance Caribbean participation and influence globally in IG, consistent with the 2014 theme. This approach is based on mobilising relevant ICT resources and expertise in the Caribbean not currently focussed on IG e.g. computer societies, IT professional associations and the like.

The CTU will continue to foster multi-stakeholder collaboration in the Caribbean region on Internet issues and in particular through the medium of the CIGF. More deliberate efforts will also be taken in the near future to coordinate the work of the CIGF with the wider regional LACIGF and the global IGF. Please plan to attend the 11th CIGF that will be held in Suriname at a date to be fixed in 2015.


Nigel Cassimire has been serving as a Telecommunications Specialist at the Caribbean Telecommunications Union (CTU) since July 2005, when he started independent consultancy. The CTU is a regional organisation with responsibility for the development of ICT policy within the Caribbean region. Its members are drawn from Caribbean governments, private sector and civil society organisations. Nigel has over 30 years of experience in the telecommunications industry. He has extensive knowledge of telecommunications technologies and services and is now working in telecommunications policy development at the Caribbean Telecommunications Union Secretariat.




IETF 90 Part 2: IPv6 reverse DNS

ARIN Advisory Council member, Cathy Aronson, shares some of her thoughts on IPv6 reverse DNS from IETF 90 in Toronto, Ontario, Canada last week.

Guest blog post by Cathy Aronson

Some thoughts on IPv6 reverse DNS.

Lee Howard was speaking in the Sunset4 working group at IETF 90. He mentioned something that got me thinking. In my talks, I have often discussed problems in IPv6 that were unanticipated. A lot of these problems are unintended consequences of very large subnet sizes. Some of them are outlined in RFC 6583.

Lee mentioned another interesting problem: reverse DNS. Best practice [RFC 1033] says that every Internet-reachable host should have a name (per RFC 1912) that is recorded with a PTR record in the .arpa zone. It also says that the PTR and the A record must match.

So in IPv4, for a network block like, the entries would be in the form: IN PTR IN PTR

The corresponding A records would be: IN A IN A

So imagine an IPv6 /48.

A sample entry for 2001:0db8:0f00:0000:0012:34ff:fe56:789a would be:

  a.9.8.7.6.5.e.f.f.f.4.3.2.1.0.0.0.0.0.0.0.0.f.0.8.b.d. IN PTR

“Since 2^80 possible addresses could be configured in the 2001:db8:f00::/48 zone alone, it is impractical to write a zone with every possible address entered. If 1000 entries could be written per second, the zone would still not be complete after 38 trillion years.”
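
That arithmetic is easy to verify with a quick back-of-the-envelope check in Python:

```python
# A /48 leaves 128 - 48 = 80 host bits
addresses = 2 ** 80
seconds = addresses / 1000                 # at 1000 zone entries written per second
years = seconds / (365.25 * 24 * 3600)
print(f"{years:.2e} years")                # about 3.83e+13, i.e. ~38 trillion years
```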

It is also the case that addresses are assigned dynamically out of these huge address ranges and so it may be difficult to determine the address ahead of time.

The document outlines several solutions, all of which have problems. For detailed information about the solutions, please consult the document.

In my opinion, it may be time to take another look at this practice and see if requiring forward and reverse to match is still necessary. There are applications that depend on this, and it’s not entirely clear that it is really needed any more.

I have asked some folks what is being done about this on networks today. I was told that most residential service providers are simply not providing reverse DNS for their IPv6 customers. Other service providers will delegate the reverse zone to the customer upon request, and some provide a web portal for the customer to manage their own reverse. Yet others generate the reverse records on demand: they perform the equivalent of $GENERATE, but instead of storing all the generated responses in memory, they generate the record when a request is received, respond with it, and then discard it. Another provider I talked to is planning on returning NXDomain (non-existent domain) when queried for the reverse.


Internet Governance Affects Us All

Guest blog post by John Sweeting, ARIN Advisory Council Chair & Sr. Director of Network Architecture & Engineering, Time Warner Cable

We recently attended the IGF-USA in Washington, DC, and it got us thinking about why it is important for ARIN community members to be involved with what is happening with the Internet as a whole.

Here are three things that are important to us as users of the Internet and as part of ARIN and the global Internet community. All Internet users should probably put these issues on their radar too.

Evolution of the Internet governance ecosystem is occurring

With the National Telecommunications and Information Administration (NTIA) preparing to turn over its stewardship of the IANA functions to the multistakeholder community, there is a huge effort underway to define a replacement that meets the requirements of the US government and, more importantly, the global Internet community’s needs for a healthy Internet. Currently a coordination group representing 13 communities (including the Number Resource Organization (NRO), which represents ARIN and the other Regional Internet Registries) has been formed to define and guide the transition process. The important thing to note is that discussions occurring now could impact Internet operators and users alike for generations to come.

Conversations regarding increasing accountability are also occurring

One of the sessions at IGF-USA touched on increasing accountability, particularly the accountability of ICANN. One of the key points we took away from this session was that the more transparency the key organizations can provide in managing the Internet infrastructure, the better. Since ARIN is part of that infrastructure, transparency and accountability are important issues for our community as well.

Working together to find solutions to problems is key

The essence of a multistakeholder dialogue is that all parties are present in key forums to make their voices heard: civil society, government, technologists, research scientists, industry, and academia. The ARIN community especially has an interest in making sure the technical realities of how the Internet works are understood and unimpeded. It is important that we involve ourselves where discussions about Internet governance are happening.

Some of the sessions from IGF-USA are available to watch online if you’re interested.  We think it is very important to make yourself aware of what is going on now with Internet governance and always be looking for opportunities to contribute.


Getting Serious About IPv6 – Go Big or Go Home

Ed Horley provides a convincing case for the many reasons why you need to get an IPv6 plan in place now and how to overcome some of the common challenges along the way.

Guest Blog Post by Ed Horley

I gave an Interop presentation titled “Getting Serious About IPv6 – Go Big or Go Home” in Las Vegas on April 3, 2014. Since then, ARIN announced it has moved to Phase 4 of its IPv4 countdown plan (down to its last /8 of IPv4; that happened on April 23, 2014). I think what surprised people the most (based on the feedback I got from the session) was that my argument for adopting IPv6 had little to do with ARIN running out of IPv4. After all, that is what everyone talks about: that there are no more IPv4 addresses. My argument is:

You have already deployed IPv6… you just didn’t know it.

At this point, you may be scratching your head saying, Ed is crazy, what is he talking about? Let me point out that all major OS platforms (and different flavors of those platforms) support IPv6 and have for a while now. It turns out that IPv6 is enabled (on by default) and preferred in almost all cases. On top of that, there are IPv6 transition technologies in Windows, there are zeroconf capabilities in all the OSes, there is support for mDNS or LLMNR, and, to top it all off, IPv6 has several address mechanisms per active interface on a host. If you add this all up, it is highly likely that you have deployed IPv6; you just didn’t do it in a structured and controlled manner the way you did your IPv4 deployment.

If you have deployed IPv6 (congratulations by the way) but didn’t do any planning, what challenges do you now face?

First, do you understand the impact of turning off IPv6? Often when I point out that all the host OSes are running IPv6, many people want to jump immediately to shutting it off. While this is possible (sort of), the question you should ask is, “will this impact my existing services?” Think carefully before you just start shutting off IPv6. Remember, it is enabled and preferred, and if your existing production network is using IPv6 for some of its traffic, you will have a production outage while you disable IPv6. Furthermore, you might not even know all the applications that ARE using IPv6; have fun troubleshooting that one. Even after you think you have turned off IPv6 on your equipment, how often do you actually audit and check to see if it is running? Does it get re-enabled with OS patches and updates? What about third-party equipment that runs on your network or your wireless/wired guest network? How about BYOD and those devices whose networking stack you can’t control? The reality is, even though you think you are simplifying your workload, you aren’t. You will still need to set up sniffers that can detect and capture IPv6 traffic; otherwise, how will you know it is NOT running on your network? You will still have to collect and analyze log files that contain both IPv4 and IPv6. You will still have to write and maintain policy and security rules that include both IPv4 and IPv6.

At this point, it must be obvious, why not just adopt and support IPv6 if you have to do all this work for it anyway?!?

To make matters even more interesting, I argue that if you have industry compliance requirements and you do not have a plan for IPv6 (off, on, whatever), then there is no way you can claim compliance in an audit. Why? Because how do you pass an audit when you have a protocol running on your network that you don't understand, can't get any information from, and aren't even watching?

What challenges do you have once you realize you need to have some sort of IPv6 plan in place?

I have heard repeatedly that education for staff is the biggest issue around IPv6. Does your team know anything about IPv6? Would they even know it if they saw it? ARIN has some great education resources available, along with the IPv6 info center, and if you want specific IPv6 and Windows knowledge, consider picking up my book.

The next common challenge is getting your policies (IT, security, purchasing, etc.) modified to include and account for IPv6. For instance, will you purchase equipment that supports IPv6 the first time, or will you have to buy it all again in one to two years? Adopting newer OS platforms becomes easier because they support IPv6 from the start. But what do you have to do for older systems? Initially, you really won't notice anything until your service provider truly depletes its IPv4 address space. Then providers will be forced to start adopting and deploying IPv6, but in the meantime they will use various methods to extend the life of IPv4, most likely a tool called Carrier Grade NAT (CGN). CGN breaks IPv4 uniqueness at a much larger scale: we used to hide a single household or company behind a common IPv4 address; now we will hide an entire city, county, or larger population. CGN exacerbates IPv4 port exhaustion, compounds stateful NAT issues, and slows things down.

Finally, what problems will you see happen as IPv4 runs out? It is going to get harder and harder for your employees to get public IPv4 at home. This can potentially cause problems for VPN, VoIP, Video, Collaboration and Gaming (depending on how those technologies are deployed). If third parties and employees start getting IPv6 through their service provider and you stay on IPv4 only, then their connection will have to be proxied to you. Because the session is proxied, you lose the ability to have end to end connectivity, something taken for granted in our IPv4 only world.

Lack of IPv6 has real world costs and impacts, and you are simply kicking the can down the road with the potential for even greater pain the longer you wait to adopt.

How do we start down the IPv6 path of enlightenment? What do we need to do next?

Well, as I mentioned earlier, education has been identified as the key thing people need, at all levels. This means you need to invest in educating your staff on how to design, deploy, operate and maintain a network running IPv6, including one running dual-stack. You will need an education plan and resources in place for your company to learn all this. Most importantly, this does not happen overnight; you need to start NOW! Why? Because once your staff is educated it is much easier to build a plan. A plan needs to be tailored to your company's needs and requirements. You need to include testing and validation of the network, operating systems, apps and everything in between to ensure you are on the right path. Oh, and you will need a lab – trust me on this one. You will need people from every team involved in the education and training. Why? Because while IPv6 at first glance appears to be a networking-only function, you will quickly discover that your application, database and help desk teams will need to know, understand and troubleshoot it. You will also need to understand the business impacts of starting the adoption of IPv6. Seriously? Did he just say business impacts? Yes, you may have critical home-grown business applications that do not work with IPv6. You might have partners in the world that only have IPv6 as a protocol option. You likely want to understand the impacts before you run into an unpleasant surprise along the way. If the majority of your business is on, from, or coming across the Internet, then supporting IPv6 is critical to your business.

Let's say I still have not convinced you. You still don't believe you will be using IPv6 anytime soon in your company. Well, the last holdout OS in the market that did not support IPv6 was Windows XP, and Microsoft's end of support for it happened on April 8, 2014. This means if you are deploying a newer OS (Microsoft Windows, Apple iOS and OS X, Android, Linux, FreeBSD, CentOS, etc.) of some kind, guess what? Yes, that is right, you will be dealing with IPv6 regardless of how much you want to avoid or ignore it.

IPv6 is the future and the future is NOW!


Ed Horley is the Practice Manager for Cloud Solutions and Practice Lead for IPv6 at Groupware Technology in the San Francisco Bay Area. Ed is actively involved in IPv6, serving as the co-chair of the California IPv6 Task Force and additionally helping with the North American IPv6 Task Force. He has presented at the Rocky Mountain IPv6 Summit, the North American IPv6 Summit, and the Texas IPv6 Summit, in addition to co-chairing and presenting at the annual gogoNETLive IPv6 conference in Silicon Valley. He has also presented on IPv6 at Microsoft TechEd North America and Europe, at TechMentor in Redmond, Orlando and Las Vegas, at InterOp in Las Vegas, and at Cisco Live in North America and Europe. Ed is the author of Practical IPv6 for Windows Administrators from Apress (2013). He is a former 10-year Microsoft MVP (2004-2013) and has spent the last 18+ years working in networking as an IT professional. Ed enjoys umpiring women's lacrosse when he isn't playing around on IPv6 networks. He maintains a blog where he covers technical topics of interest to him and is on Twitter at @ehorley.

IETF 90 Part 1

ARIN Advisory Council member, Cathy Aronson, is at IETF 90 in Toronto, Ontario, Canada this week. Follow along as she shares her findings with us on TeamARIN!

Guest blog post by Cathy Aronson


Yesterday morning I attended the IEPG (Internet Engineering and Planning Group) meeting here at IETF 90. George Michaelson of APNIC gave an interesting presentation about Teredo (a tunneling technology that allows IPv6-capable hosts to use IPv6 over an IPv4-only connection). George's slides are here. The great thing about his presentation is that he observed Microsoft doing exactly what they said they were going to do: they turned off their Teredo relays, and George's graphs make that clear. The presentations about sunsetting Teredo are linked here.

George talked about how the Microsoft relays continue to cause a lot of zombie tunnels. Microsoft is apparently still sending "who am I" endpoint signaling but not carrying IPv6 data. Further, there are a lot of other autonomous systems serving up Teredo tunnels. George listed them in his presentation and suggested that they stop doing Teredo.


IPv6 Effects on Web Performance

Will IPv6 positively affect web performance in the future? Blake Crosby shares his thoughts on the answer to this question.

Guest Blog Post By Blake Crosby

There are a lot of efforts to improve the speed of the web. The inevitable release of HTTP 2.0 in the near future will address many of the existing web performance bottlenecks.

Will IPv6 increase web performance in the future?

The answer is Yes! IPv6 has many improvements over its v4 counterpart that will help make the web a faster place.

Packet Fragmentation

IPv6 routers do not fragment packets; fragmentation happens at the sending host and reassembly at the receiving endpoint. The router is free to use those extra CPU cycles to move packets faster through the network.

Checksumming Done at Higher Layers

The IPv6 header carries no checksum, so routers don't spend time verifying one. Instead, validating the data happens at a higher layer such as TCP. Less work for the router means moving those packets faster!

Keep It Simple

The IPv6 packet header is much simpler than the IPv4 header, making it much easier to process these packets as they flow through routing equipment.

IPv6 and IPv4 Packet Headers

For example, the Time To Live (TTL) field has been replaced with a Hop Limit field (a simple counter), so routers don't need to calculate the time the packet has spent in queues. One less calculation to make before sending that packet along to the next hop.

Bigger Is Better

Reducing the number of round trips is the best way to improve your web browsing experience. IPv6 can help with that by using jumbograms. The ability to squeeze up to roughly 4 GB into a single packet will reduce the number of round trips required to download data, provided the link layer has a large enough MTU.
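Some rough, illustrative arithmetic shows the scale of the difference. The numbers here are assumptions for the sketch: a typical 1500-byte Ethernet MTU versus a near-maximal jumbogram payload (the protocol limit is 2**32 - 1 bytes):

```python
import math

# How many packets does it take to move 1 GiB of TCP payload?
payload = 1024 ** 3        # 1 GiB to transfer

# Typical Ethernet path: 1500-byte MTU minus 40 bytes of IPv6 header
# and 20 bytes of TCP header leaves 1440 bytes of payload per packet.
standard = 1500 - 40 - 20

# A near-maximal IPv6 jumbogram payload, usable only on link layers
# whose MTU is equally large.
jumbo = 4_000_000_000

print(math.ceil(payload / standard))  # hundreds of thousands of packets
print(math.ceil(payload / jumbo))     # a single packet
```

Of course, few link layers today offer an MTU anywhere near that large, so this remains mostly a future benefit.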

Better Mobile Performance

Due to IPv4 limitations, mobile devices need to use triangular routing in order to send and receive packets to/from the Internet. In triangular routing, the mobile device can send packets directly to the remote host; however, the remote host must route packets back through a "Home Agent," which can be very far away from the actual user.

For example, a particular network may have a limited number of home agents. If the mobile device is located in San Francisco and the mobile carrier's home agent is located in Houston, all packets destined for that San Francisco mobile device must be routed through the home agent in Houston.

Mobile IPv6 eliminates the need for this network architecture. Packets need not be routed through a home agent.

If you are interested in learning more about the challenges of improving web performance, see my analysis of IPv4 versus IPv6. Additionally, I highly recommend "High Performance Browser Networking" by Ilya Grigorik.


Blake Crosby is an Operations Engineer with Fastly, the smartest CDN on the planet.

His intimate knowledge of web performance ensures that Fastly stays ahead of the curve with emerging technologies.

He’s also on the Board of Directors for the Toronto Internet Exchange (Torix).


IPv6 Addressing Tips

Ross Chandler, Principal Network Architect of IP network evolution at Eircom/Meteor, shares a few tips on working with IPv6 from his own experience.  The bottom line? You can do this!

Guest Blog Post by Ross Chandler

The most significant changes with IPv6 are: vastly more addresses and the way the extra bits are used. Here are a few practical tips for when you’re adding IPv6 to your network and connected devices.

Don’t stress about the length of IPv6 addresses

The long ones only occur when they’re generated automatically. Don’t attempt to read out one of these long addresses for another human being. You can assign shorter IPv6 addresses by static configuration or by DHCPv6.

Use the 4-bit nibbles when making an addressing plan

The 4-bit (hexadecimal) character positions make subnetting easy.

e.g. Your assignment might be 2001:db8:1234::/48

This can be subnetted into 16 /52s (prefix length increased by 4).

Each of the /52s can be further subnetted into 16 /56s.

And so on down to the /64s.

Combining contiguous nibbles allows a prefix to be subnetted into a larger number [16^(number of nibbles)] of smaller subnets with prefix length increased by 4 * (number of nibbles).

2001:db8:2014::/48 can be subnetted into 16 /52 prefixes (such as 2001:db8:2014:1000::/52). 16 = 16^1 and 52 = 48 + 4 * 1.

2001:db8:2015::/48 can be subnetted into 256 /56 prefixes (such as 2001:db8:2015:1200::/56). 256 = 16^2 and 56 = 48 + 4 * 2.

2001:db8:2016::/48 can be subnetted into 4,096 /60 prefixes (such as 2001:db8:2016:1230::/60). 4,096 = 16^3 and 60 = 48 + 4 * 3.

IPv4 subnetting is not as simple as that.
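The nibble math above is easy to check with Python's standard ipaddress module. A quick sketch using the example /48 assignment:

```python
import ipaddress

# The example assignment from above.
net = ipaddress.ip_network("2001:db8:1234::/48")

# One nibble (4 bits) deeper: 16 /52s, each starting on a hex-digit
# boundary, so the subnets are easy to read off by eye.
subs_52 = list(net.subnets(new_prefix=52))
print(len(subs_52))            # 16
print(subs_52[0], subs_52[1])  # 2001:db8:1234::/52 2001:db8:1234:1000::/52

# Two nibbles deeper: 256 /56s, and so on down to the /64s.
subs_56 = list(net.subnets(new_prefix=56))
print(len(subs_56))            # 256
```

Notice how each /52 boundary changes exactly one hexadecimal character of the address, which is the whole point of planning on nibbles.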

Odd or even address conventions

If you use a /30 IPv4 subnet on a link then a /126 IPv6 prefix length will allow both the IPv4 and IPv6 address at either end to be odd or even.  Similarly for /31 IPv4 or /127 IPv6 links.
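A short sketch with Python's ipaddress module illustrates the two point-to-point cases; 2001:db8:ffff:: is just a documentation-prefix placeholder:

```python
import ipaddress

# A /127 point-to-point link carries exactly two addresses,
# the IPv6 analogue of an IPv4 /31.
link = ipaddress.ip_network("2001:db8:ffff::/127")
a, b = link                      # iterating a network yields its addresses
print(a, b)                      # 2001:db8:ffff:: 2001:db8:ffff::1

# A /126 gives four addresses, matching the IPv4 /30 convention.
print(ipaddress.ip_network("2001:db8:ffff::/126").num_addresses)  # 4
```

With the /126, the two usable router addresses can end in 1 and 2, mirroring the odd/even habit many operators carry over from IPv4 /30 links.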

You can be liberal with your use of IPv6 /64 prefixes

Don't be afraid to be liberal when assigning /64s. It's often helpful to think of 64-bit prefixes as the smallest unit of address assignment in IPv6. For example, assign a full /64 for each point-to-point link even if you intend to use a /126 or /127 mask. This is all right because whether there are 1 or 1,000 devices on the LAN, compared to the 2^64 possible addresses both are almost equally sparse. Stateless address autoconfiguration (SLAAC) mandates the use of a /64 prefix on LANs. This fact, plus the :: zero-compression notation, allows manually assigned IPv6 addresses to be written in short form with almost half the number of characters of a typical SLAAC-assigned address.

Assigning more specific IPv6 subnets

You can make assignments with larger prefix lengths. For example, you may have two IPv4 DNS server addresses and so decide to use the first two addresses from your IPv6 allocation for your IPv6 DNS server addresses, 2001:db8::1 and 2001:db8::2. The service number (e.g. 53 for DNS) could be the host part of the address.
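As a sketch of that service-number convention, using the documentation prefix (note the hex digits "53" are a visual mnemonic for port 53, not the decimal value):

```python
import ipaddress

# Hypothetical convention: put the service's well-known port number
# in the host bits, e.g. DNS (port 53) at ::53.
dns = ipaddress.ip_address("2001:db8::53")
print(dns.compressed)  # 2001:db8::53
print(dns.exploded)    # 2001:0db8:0000:0000:0000:0000:0000:0053
```

The compressed form is what you would hand out or type; the exploded form shows how much of the 128 bits the :: notation is hiding.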


Ross Chandler
Principal Network Architect – IP network evolution





IPv6 Advice for Service Providers and Home Users

Preparing for IPv6 is easier than you’d think. Chris Phillips, Managing Partner of Aptient Consulting Group gives some useful advice for service providers and home users who are ready to make the effort toward IPv6.

Guest Blog Post by Chris Phillips

Oliver Wendell Holmes once said, “The mode by which the inevitable comes to pass is effort.” With ARIN’s available IPv4 addresses dipping below 1.5 /8s, an IPv6 Internet is inevitable. But the effort bit is underwhelming, at best.

At the current rate at which the old addresses are being acquired, we may see complete IPv4 depletion by the end of 2014. Yet just a slim 12 percent of the Alexa top 1000 most-visited websites on the Internet are reachable through IPv6. An even more minuscule 2.75 percent of Internet users reach Google via IPv6. And according to the Amsterdam Internet Exchange – the world’s largest single Internet exchange point by member ports – a paltry 0.6 percent (IPv4 2,469 Gbps vs. IPv6 15 Gbps) of traffic exchanged is over IPv6.

Based on the disappointing IPv6 adoption numbers, it seems like businesses are avoiding the new protocol because they believe the transition is too difficult. But in my professional experience preparing for IPv6 is actually quite simple.

Service providers

Network service providers who already have an IPv4 allocation from ARIN will find it easy to get IPv6. All you need to do is apply for an IPv6 allocation, and the process is usually far less complicated than applying for an IPv4 allocation because different policy requirements apply. For example, instead of the rigors of justifying each subnet as small as a /29 from your current IPv4 assignments, as you would if you were applying for additional IPv4 addresses, your IPv6 allocation size is determined by the number of geographically diverse locations in your network that serve end users. Once you provide ARIN that information, you can be allocated a minimum of a /32, or 79 octillion (yes, that's actually a real number) IP addresses.
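The arithmetic behind the size of a /32 is simple enough to check in one line: 128 address bits minus a 32-bit prefix leaves 96 bits of host-side address space.

```python
# Addresses in a /32 IPv6 allocation: 2 raised to the 96 remaining bits.
addresses = 2 ** 96
print(f"{addresses:,}")  # 79,228,162,514,264,337,593,543,950,336
```

That is roughly 7.9 x 10^28 addresses, which is why even the minimum IPv6 allocation removes address scarcity as a planning constraint.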

Once you have IPv6 addresses, adding them into your existing network is astonishingly easy in most well-engineered networks. Most tier 1 and tier 2 carriers offer native IPv6 transit services and will have no problem accommodating your BGP announcements. Adding IPv6 into your IGP is also very simple. OSPFv3's configuration syntax is nearly identical to OSPF's on both Cisco and Juniper equipment. Running OSPF and OSPFv3 in parallel is uncomplicated. The case is the same for IS-IS. A lot has been written about how to add IPv6 to your network. IPv6 implementation will vary depending on your infrastructure, so there's no one "right" way, but a quick Google search for "IPv6 IGP" should yield a lot of useful results, including many from popular hardware vendors. Once you receive your IPv6 allocation, it should only be a matter of days before you can start offering IPv6 services.

IPv6 at home

For end users, adoption has been negligible. Understandably, this is the slowest market to adopt IPv6 – even though it's the market that needs it the most. The disappointing statistics with regard to IPv6 adoption on the Alexa Top 1000 seem to bear out the perception among uninformed end users that parts of the Internet will become unreachable to them. In actuality, 6to4 gateways will ensure they can communicate with IPv4-only networks. Fortunately for savvier home consumers, Comcast, the largest eyeball network in the US, has started rolling out IPv6 to those users who want it. Comcast customers can find out whether their areas support IPv6, and how to enable it.

What Can I Do?

Petition your cable and DSL provider to adopt IPv6.  They likely already have a plan in place to adopt IPv6, but it may not be a priority if there doesn’t seem to be much consumer demand.  Inform them that ARIN only has 25 million IPv4 addresses left. That works out to just one IPv4 address for every 7.88 Americans, and doesn’t count Canada and many parts of the Caribbean, which are also served by ARIN. The time to act is now.

Petition your hosting provider to add IPv6 support, or find an alternative provider. Rackspace, for instance, assigns every new VPS an IPv4 and an IPv6 address by default, and their cloud-based hosted DNS service also supports AAAA records.

Finally, you can join me and become an IPv6 evangelist. There are many IPv6 readiness and awareness websites.  Find out how you can become involved in the IPv6 awareness community and help push the Internet’s transition forward. IPv6 is inevitable, but it won’t be pretty without our effort.


Chris Phillips
Managing Partner
Aptient Consulting Group






Any views, positions, statements or opinions of a guest blog post are those of the author alone and do not represent those of ARIN. ARIN does not guarantee the accuracy, completeness or validity of any claims or statements, nor shall ARIN be liable for any representations, omissions or errors contained in a guest blog post.


Enhancing Stakeholder Cooperation – Guest Blog

General Manager of LACTLD, Carolina Aguerre, explains the importance of enhanced cooperation for the many diverse stakeholders in Internet governance discussions.

Guest Blog Post by Carolina Aguerre

One of the legacies of the World Summit on the Information Society (WSIS) process is the principle of "enhanced cooperation". This has been a hotly contested issue in the policy discussions surrounding Internet governance in recent times. The constitution of the Working Group for Enhanced Cooperation at the beginning of 2013 is a landmark in the attempts to move forward.

It is well known that the Internet has no owner, and that its forms of control are much more diffuse and complex than those of other communication networks. The Internet's architecture and design principles (decentralized, distributed, end-to-end, among others) have made coordination and cooperation key governance attributes, both in the relationships among stakeholders and across the diverse layers of technology that make up the Internet. Particularly since the adoption of the Internet at a global scale, cooperation has become a challenge. The Internet pioneers belonging to the scientific communities that promoted the first institutional mechanisms for transnational Internet governance (such as the IETF and the IAB) now coexist with governments, civil society, companies, and academic and technical staff, all of which have different perspectives and capacities to determine Internet policies at a global scale.

With the new millennium, new technologies became much more visible on governmental agendas. The challenges to public policy and regulation after the explosion of the Internet across all social orders made clear the need for mechanisms to build an agenda around a diffuse topic with concrete impact in national domains such as public administration, health and education. It was in this context, with organizations such as ICANN already formed in 1998 for the coordination of critical Internet resources, that Kofi Annan, then Secretary-General of the United Nations, convened the process known as the World Summit on the Information Society (Geneva, 2003 and Tunis, 2005).

The Tunis Agenda for the Information Society established two fundamental concepts for Internet governance:

  • (i) the principle that Internet Governance is a multi-stakeholder effort, with specific roles for the different players that participate in its development, use and application in equal conditions;
  • (ii) the principle of enhanced cooperation to promote mechanisms of participation and involvement of all actors, particularly those of governments. 

Enhanced cooperation articles in the Tunis Agenda for the Information Society 

69. We further recognize the need for enhanced cooperation in the future, to enable governments, on an equal footing, to carry out their roles and responsibilities, in international public policy issues pertaining to the Internet, but not in the day-to-day technical and operational matters, that do not impact on international public policy issues.

71. The process towards enhanced cooperation, to be started by the UN Secretary-General, involving all relevant organizations by the end of the first quarter of 2006, will involve all stakeholders in their respective roles, will proceed as quickly as possible consistent with legal process, and will be responsive to innovation. Relevant organizations should commence a process towards enhanced cooperation involving all stakeholders, proceeding as quickly as possible and responsive to innovation. The same relevant organizations shall be requested to provide annual performance reports.

According to Markus Kummer, Internet Society (ISOC) Global Policy Vice President, enhanced cooperation is "one of the code words in Internet governance discussions and means different things to different people. The term goes back to the second phase of World Summit on the Information Society held in Tunis in 2005. There is no common understanding of what is meant with the term, but it is used by some countries to push for setting up a new UN body to deal with Internet issues."

Between 2006 and 2010 there were several formal attempts to consolidate working principles for this issue. In 2006 Nitin Desai, then special advisor to the United Nations, developed a consultation process which produced no substantive results but which was a starting point. In 2009, a document called “Enhanced cooperation on public policy issues pertaining to the Internet” was published by the UN Social and Economic Council. This report delineated some of the main themes and assessments on the topic and process of enhanced cooperation on the basis of feedback provided by ten organizations. In 2010, another UN initiative with the support of the US government promoted a process of public consultations whereby 98 interventions were received from governments, international organizations, civil society and the private sector.

2015 is fast approaching, and that year is a landmark: a decade after the Tunis Agenda, evaluations of Internet governance will be performed. Enhanced cooperation has thus become one of the most controversial issues due to the ambiguity of its definition, scope and operationalization. This motivated the creation of a working group, a process which began in late 2012 under the institutional umbrella of the United Nations' Commission on Science and Technology for Development (CSTD).

This group, better known as the Working Group for Enhanced Cooperation (WGEC), is formed by 42 members: 22 Member States plus five representatives from each of the following: civil society, the technical community and academia, international organizations, and the private sector. From Latin America and the Caribbean there are participants from the member States of Brazil, the Dominican Republic, Mexico and Peru. The region is also represented by Andrés Piazza of LACNIC and Carlos Afonso of Instituto Nupef of Brazil. Chris Disspain of the .au registry (Australia) is a member from the technical community, representing ccTLDs.

The WGEC convened twice in 2013 with the objective of defining the enhanced cooperation agenda and scope of action. Along these lines, a survey on enhanced cooperation was launched and obtained 69 responses. These were analyzed and disseminated during the second meeting of the group, held during the first days of November in Geneva, and they highlight five main issues:

  • (i) level of implementation of the principles contained in the Tunis Agenda;
  • (ii) public policy issues and possible mechanisms;
  • (iii) the role of the different stakeholders;
  • (iv) the role of developing countries; and
  • (v) the barriers for participation in enhanced cooperation.

The following section develops the main points contained in each of these.

The critical issues of enhanced cooperation

With respect to the degree of implementation of the principles for enhanced cooperation in the Tunis Agenda there are three basic positions. At one extreme are those who consider that it has not been implemented, since no structures have been put in place for governments to develop public policies on the subject (such as the government of Saudi Arabia). Others maintain that enhanced cooperation has been implemented through the processes that promote multi-stakeholder dialogue, notably the IGF. This position is supported by the governments of Japan and Finland, and by ARIN and LACNIC. Middle-ground positions, such as those expressed by the Foreign Affairs Ministry of Brazil or the Association for Progressive Communications (APC), acknowledge that progress has been accomplished since 2005 but that enhanced cooperation has not yet been fully implemented.

Regarding the mechanisms and public policy issues in enhanced cooperation, there is agreement that the most relevant themes identified in the main documents, such as the Tunis Agenda, the report produced by the Working Group on Internet Governance (WGIG) and ITU Council Resolution 1035, are barely sufficient, since they are not comprehensive enough and are outdated. Even though several organizations have developed compendiums of Internet public policy issues, many consider this a moving target, since the field is in permanent evolution.

The report also points out that the majority of responses value the decentralized, open ecosystem of Internet governance, comprising some 150 governmental and non-governmental organizations of the technical community and private sector. Nevertheless, other responses point to the need to develop new mechanisms to face emerging issues. The most radical proposal for enhanced cooperation mechanisms, in line with those already put forward in 2005 during WSIS, calls for an international organization, under the institutional umbrella of the UN, for the supervision of public policies pertaining to the Internet, as well as an international board to supervise ICANN (an initiative proposed by IT for Change, an Indian NGO). The government of Brazil, in line with the actions it has taken in the last months since the surveillance strategies deployed on the Internet came to light, points to the need to develop a new platform to deal with the new problems that today are out of scope of the institutional mechanisms of existing organizations.

One of the most analyzed mechanisms was the IGF. The government of Brazil made an important distinction between the Internet Governance Forum (IGF) and enhanced cooperation as distinct processes. Whereas enhanced cooperation is a “policy–making space”, the IGF is a “policy dialogue space”. The IGF may serve as a ground for future discussions on enhanced cooperation, but according to this stakeholder, this should not be considered as the only forum.

With respect to the role of the different stakeholders, even though paragraph 35 of the Tunis Agenda defines the interests of the main players involved in the process, the survey responses tend to agree that these definitions, and the roles assigned, are not sufficient to cover the interrelatedness of the different issues and current Internet governance mechanisms. The analysis of the different responses distinguishes two positions: some acknowledge a hierarchy between stakeholders, where governments should have a leading position; others on the contrary emphasize the equality of conditions for all. This is a key aspect for the future application of enhanced cooperation.

Under this theme there is a discussion about the mechanisms for the promotion and development of content in national languages, as well as a need to promote national processes, particularly the experiences of national Internet Governance Forums. This last mechanism is one of the most highlighted as an element to promote the participation of developing countries in the Internet governance processes, which is the fourth issue highlighted in the agenda for enhanced cooperation from this survey report.

Regarding the role of developing countries, the generalized assessment of most respondents is that there is an imperative need to incorporate more stakeholders from these countries, inasmuch as they represent the largest volume of Internet users in the world, and the forthcoming billion. Despite this, the identified barriers are not only a training gap or the lack of public visibility of these issues on national agendas, but also the need to explicitly open the dialogue in existing forums to these actors, as the Bulgarian government expressed.

The last issue, barriers to participation in enhanced cooperation, identifies the following obstacles, among others: economic, political, technical and cultural. The Russian government has expressed that there is a participation barrier for governments, since the spaces and mechanisms for governmental participation are not clearly defined. IT for Change (an NGO from India) underlines that every time new institutional spaces are created, they are taken over by the same stakeholders, most of them coming from organizations in the developed North. The International Chamber of Commerce expressed that scarce knowledge of the theme is a central barrier to effective participation.

There is a long list of issues for improvement and problems raised by this report. The WGEC will meet again in February 2014 to elaborate the document in which the main foundations and lines of action for the future enhanced cooperation agenda in Internet governance will be set out.


This post originally appeared in LACTLD Report Year 2, Issue #3


Carolina Aguerre
General Manager






Any views, positions, statements or opinions of a guest blog post are those of the author alone and do not represent those of ARIN. ARIN does not guarantee the accuracy, completeness or validity of any claims or statements, nor shall ARIN be liable for any representations, omissions or errors contained in a guest blog post.


Odds and Ends – IETF 88 Part 4 – Guest Blog by Cathy Aronson

ARIN Advisory Council member, Cathy Aronson, was at IETF 88 in Vancouver, BC, Canada last week. Follow along as she continues to share her findings with us on TeamARIN!

Guest blog post by Cathy Aronson

IRTF – Internet Research Task Force

On the last day of IETF I went to a session of the IRTF: the Network Management Research Group. I have to confess I haven't paid much attention to this group, so I wasn't really sure what they have been working on. When I decided that morning to attend, I thought maybe they would be solving the age-old network management problem. What do I think that is? Well, it is still mostly the case that network management systems (NMS) monitor the network and can usually only tell you when something is down. They can't usually tell you why it is down. To me that is the age-old problem of network management. So off I went to this installment of the IRTF.

The group is not working on my age-old problem with network management.  They are working mostly on autonomous networks: basically, making the network configure itself and adapt to changes.  The theory is that minimizing human futzing with the network will make it more reliable.

One draft was Network Configuration Negotiation Problem Statement and Requirements.  The premise here is that network devices should be plug and play, less hierarchical, and less dependent on humans.  An example they used was that two CGNs might want to negotiate to switch around which IPv4 address blocks they’re using based on traffic and time of day.  The example was horrible because it wasn’t a bit-aligned block of addresses, and it also had the CGNs connected via the Internet instead of being on the same network.  In this case the address block would have been too small to be advertised on the global Internet.  I suggested they fix the diagram.
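The bit-alignment point can be checked mechanically. Here is a small sketch using Python’s `ipaddress` module; the /24 threshold reflects the conventional minimum prefix most operators will accept in global routing tables, not anything from the draft itself:

```python
import ipaddress

def advertisable(network: str) -> bool:
    """Rough check: a block is only advertisable on the global IPv4
    Internet if it is bit-aligned (a valid CIDR block) and no smaller
    than /24, since longer prefixes are commonly filtered."""
    # strict=True raises ValueError if host bits are set, i.e. the
    # block is not aligned on its prefix boundary.
    net = ipaddress.ip_network(network, strict=True)
    return net.prefixlen <= 24

print(advertisable("203.0.113.0/24"))    # True: bit-aligned, exactly /24
print(advertisable("203.0.113.128/25"))  # False: too small to advertise
# advertisable("203.0.113.1/24") raises ValueError: host bits set,
# the block isn't aligned on its /24 boundary.
```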

There were two other documents describing a framework and implementation of these autonomous networks. I explained my personal concerns about managing all these autonomic networks.  I am not sure that the folks doing this work manage large networks.

The other document is Bootstrapping Trust on a Homenet.  This was super interesting.  The idea is that your home network would figure out its boundaries and build a trust model.  So if someone else plugged a device in, it would have to be approved to connect.  The scenario described was one where a home user brings home a router.  They use their phone application to scan the barcode on the box, and it has all the information needed to build a trust model in the home.  The phone application would then notify the user if something new connected and give the home user the opportunity to say yes or no regarding its use on the network.  In this way the home network would know its boundaries and what was inside and outside.
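The approval flow described above can be sketched in a few lines. This is purely hypothetical, illustrating the shape of the idea rather than the draft’s actual mechanism; all names and structure here are invented:

```python
class HomenetTrust:
    """Hypothetical sketch: the router keeps a set of trusted device IDs,
    and anything unknown is held pending until the user's phone app
    approves or rejects it."""

    def __init__(self, router_id: str):
        self.trusted = {router_id}   # seeded e.g. by scanning the box's barcode
        self.pending = set()

    def device_connects(self, device_id: str) -> str:
        if device_id in self.trusted:
            return "allowed"
        self.pending.add(device_id)  # phone app would notify the user here
        return "pending approval"

    def user_decision(self, device_id: str, approve: bool) -> None:
        self.pending.discard(device_id)
        if approve:
            self.trusted.add(device_id)

net = HomenetTrust("router-01")
print(net.device_connects("new-thermostat"))  # pending approval
net.user_decision("new-thermostat", approve=True)
print(net.device_connects("new-thermostat"))  # allowed
```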

Some more odds and ends from IETF

In at least one of my previous ARIN talks about IETF I mentioned Igor’s draft.  The document is now an RFC.

This is the one that describes a problem with the default subnet size in IPv6.  The size is a /64.  That is 18,446,744,073,709,551,616 addresses.  It’s huge.  The average subnet size in IPv4 is a /24, or 254 usable addresses.  There are denial-of-service attacks that could happen if someone scanned an address range that large.  The devices on the LAN have to deal with the requests even though most of the subnet is most likely empty.
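The arithmetic behind those numbers is easy to verify:

```python
# Comparing the default IPv6 subnet (/64) with a typical IPv4 /24.
ipv6_subnet_hosts = 2 ** (128 - 64)     # addresses in a /64
ipv4_subnet_hosts = 2 ** (32 - 24) - 2  # usable addresses in a /24
                                        # (minus network and broadcast)

print(ipv6_subnet_hosts)  # 18446744073709551616
print(ipv4_subnet_hosts)  # 254

# How many entire IPv4 /24s fit inside one IPv6 /64:
print(ipv6_subnet_hosts // 2 ** 8)  # 72057594037927936
```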

In the Benchmarking Methodology group there is now a draft that addresses this issue. This document provides a test methodology to see what happens in this scenario.  At least now it’s possible to see how different networking devices might handle such a scan or just how they may handle a subnet that big.






Please Don’t Push the Buttons Randomly – IETF 88 Part 3 – Guest Blog by Cathy Aronson

ARIN Advisory Council member, Cathy Aronson, is at IETF 88 in Vancouver, BC, Canada this week. Follow along as she shares her findings with us on TeamARIN!

Guest blog post by Cathy Aronson

Well I blinked and it’s already midweek here at IETF.   Here are some highlights as I see them.

On Nov 5th the Internet Society hosted a lunch. The topic was “IPv6: what does success look like?”   I am not sure we determined what success looks like.  Some ideas were these:

  • Usage of IPv4 is trending downwards
  • VPNs also run over IPv6, so corporate networks running IPv6 would work for folks connecting in remotely
  • A large wireless company pushing out v6 only devices perhaps using NAT64
  • Transition technologies are no longer needed
  • In 2020 we still have the Internet and folks can still get to everything
  • Users get IPv6 by default from their ISP
  • Software is IP version agnostic. IP is IP and should not mean IPv4

I think some really great things are happening.  For example, Comcast’s John Brzozowski says that 75% of their broadband network now supports IPv6 and 25% of those are currently using it.  Next year they plan to have 100% of their broadband network supporting IPv6.   Right now, however, when they turn up a home with IPv6, only 20% or so of the traffic is IPv6.  This needs to improve.   As I have mentioned before in my talks, Comcast is also looking for ways to do all their internal traffic over IPv6.   They also have an IPv6-only trial in the works.

Another positive is that Teredo (a transition mechanism) is going to be turned off sometime next year.  Last month at the ARIN meeting I talked about the experiment where they turned off Teredo briefly.  Chris Palmer says that Microsoft will still be using Teredo for peer-to-peer Xbox applications but not for relay service.

Right now 2% of the Internet traffic is IPv6.  They now can show IPv6 and IPv4 on the same graph without the scales being hosed (it’s the simple things).  The trend in the slides was definitely up and to the right.  These are all good signs.

So what else is going on at IETF?  I’ll circle back on some more IPv6 stuff in the upcoming days, but I want to talk a little bit about the “Internet-wide Geo-Networking BOF.”  It is stated to be a “Location aware solution that provides packet delivery using geographical coordinates for packet dissemination over the Internet”.

I am still trying to get my mind around this and to decide if it’s the bad idea fairy or not.  The idea is that there are all these devices out there, for example, cars. So an application may want to tell all the cars in a geographic area where the closest open charging station is located.  They talked some about geographically assigned addresses, and at first it seemed they were talking about IP addresses, but then it seemed as if maybe they meant some huge database of geolocation addresses that map to IP addresses.  I am having a hard time getting my mind around some car speeding down the road at 70 mph and a service trying to track, at any moment in time, where it is and how to contact it.   Further, these cars may be connected to different ISP infrastructures.  Just as an FYI, if you want to read further, there is a vehicular wireless standard, 802.11p.

So next… as you might expect, a lot of time is being dedicated to security and privacy on the Internet in light of the NSA gathering large amounts of data on everyone.  The IETF is starting to formulate how this will be dealt with in Internet standards.

From the technical plenary on November 6th the following summary has been set out.  These are a series of IETF “hums”.  Basically for those of you who haven’t been to IETF the whole process is based on “rough consensus and running code”.  Rough consensus is often determined by asking folks to hum (yes literally).  These are the hums from the plenary.

1.  Is the IETF willing to respond to the pervasive surveillance attack?

    Overwhelming YES.  Silence for NO.

2. Pervasive surveillance is an attack, and the IETF needs to adjust our threat model to consider it when developing standards track specifications.

    Very strong YES.  Silence for NO.

3. The IETF should include encryption, even outside authentication, where practical.

    Strong YES.  Silence for NO.

4.  The IETF should strive for end-to-end encryption, even when there are middleboxes in the path.

    Mixed response, but more YES than NO.

5.  Many insecure protocols are used in the Internet today, and the IETF should create a secure alternative for the popular ones.

    Mostly YES, but some NO.

So we’ll see what happens with this over time.  I suggest that everyone interested in this subject join the IETF perpass mailing list.

Also this draft is really relevant.  I expect more work to be done in this area over the upcoming years.


