Analysis of federal .gov domains, 2015 edition

In 2011 and then again in 2014, I used a small tool that I wrote to crawl every site on the publicly available list of Federal Executive .gov domains to get a better sense of the state of federal IT, at least when it comes to agencies’ public-facing web presence. This weekend I decided to resurrect that effort with the recently updated list of .gov domains and a more finely tuned version of the open source Site Inspector tool, thanks to some contributions from Eric Mill.

You can always compare them to the original 2011 or 2014 crawls, or browse the entire dataset for yourself, but here are some highlights of what I found:

  • 1177 of those domains are live (about 86%, up from 83% last year, and 73% originally)
  • Of those live domains, only 75% are reachable without the www. prefix, down from 83% last year
  • 722 sites return an AAAA record, the first step towards IPv6 compliance (up from 64 last year, and 10 before that, more than a 10x increase); see the DNS sketch after this list
  • 344 sites are reachable via HTTPS (stagnant at about one in four from last year), and like last year, only one in ten enforce it (see the HTTPS sketch below)
  • 87% of sites have no detectable CMS (the same as last year), with Drupal leading the pack at 123 sites, WordPress at 29 sites (double last year’s count), and Joomla powering 8 (up one from last year)
  • Just shy of 40% of sites advertise that they are powered by open source server software (e.g., Apache, Nginx), up from about a third last year, with about one in five sites responding that they are powered by closed source software (e.g., Microsoft, Oracle, Sun)
  • 61 sites are still somehow running IIS 6.0 (down from 74 last year), a server released more than a decade ago
  • HHS is still the biggest perpetrator of domain sprawl with 117 domains (up from 110 last year), followed by GSA (104, down from 105), Treasury (95, up from 92), and Interior (86, down from 89)
  • Only 67 domains have a /developer page, 99 have a /data page, and 74 have a /data.json file, all significantly down from past years due to more accurate means of calculation, which brings us to the final point:
  • 255 domains, or just shy of 20%, don’t properly return “page not found” (404) errors, meaning that if you programmatically request their /data.json file (or any other non-existent URL), the server claims to have found the requested file (returning an HTTP 200 status) but actually responds with a human-readable “page not found” page, making machine readability especially challenging (see the soft-404 sketch below)
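
To make these checks a little more concrete, here’s a minimal sketch of the AAAA-record lookup using only Python’s standard library. This isn’t Site Inspector’s actual implementation, just an illustration of the idea, and example.gov is a placeholder domain:

```python
import socket

def has_aaaa_record(domain):
    """Return True if the domain resolves to at least one IPv6 address."""
    try:
        # Restricting the lookup to AF_INET6 surfaces only IPv6 (AAAA) results.
        return len(socket.getaddrinfo(domain, None, socket.AF_INET6)) > 0
    except socket.gaierror:
        # No AAAA record, or the name doesn't resolve at all.
        return False

print(has_aaaa_record("example.gov"))  # placeholder domain
```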
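
The HTTPS numbers can be approximated the same way. A rough sketch, assuming the third-party requests library, where “enforcing” HTTPS is interpreted as a plain-HTTP request ultimately redirecting to an https:// URL (the real logic is more nuanced; for one thing, this treats a site with an invalid certificate as unreachable):

```python
import requests

def https_status(domain):
    """Return (reachable, enforced): whether https:// works at all,
    and whether plain http:// redirects to HTTPS."""
    try:
        requests.get("https://" + domain + "/", timeout=10)
        reachable = True
    except requests.RequestException:
        reachable = False

    enforced = False
    if reachable:
        try:
            # requests follows redirects by default, so resp.url is the
            # final URL after any redirect chain.
            resp = requests.get("http://" + domain + "/", timeout=10)
            enforced = resp.url.startswith("https://")
        except requests.RequestException:
            pass
    return reachable, enforced
```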
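
And the soft-404 problem from the last bullet can be demonstrated with a single request for a URL that shouldn’t exist: a well-behaved server answers 404, while a misconfigured one answers 200 with a human-readable error page. Again a sketch, with a made-up probe path:

```python
import requests

def soft_404(domain):
    """Return True if a URL that shouldn't exist comes back with
    HTTP 200 instead of the 404 a well-behaved server would send."""
    probe = "http://" + domain + "/this-page-should-not-exist-0f3c2a"
    try:
        resp = requests.get(probe, timeout=10)
    except requests.RequestException:
        return False  # site unreachable; can't tell either way
    return resp.status_code == 200
```

This is exactly why soft 404s hurt machine readability: a client fetching /data.json has no way to distinguish a real dataset from an HTML error page without parsing the response body.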

Edit (5/12/2015): As @konklone properly points out, the list now includes legislative and judicial .gov domains, and thus isn’t limited to just federal executive .govs.

As I’ve said in past years, math’s never been my strong point, so I highly encourage you to check my work. You can browse the full results or check an individual site (.gov or otherwise) for yourself. The source code for all tools used is available on GitHub. If you find an error, I encourage you to open an issue or submit a pull request.


