In 2011 and then again in 2014 I used a small tool that I wrote to crawl every site on the publicly available list of Federal Executive .gov domains to get a better sense of the state of federal IT, at least when it comes to agencies’ public-facing web presence. This weekend, I decided to resurrect that effort, with the recently updated list of .gov domains, and with a more finely tuned version of the open source Site Inspector tool, thanks to some contributions from Eric Mill.
- 1177 of those domains are live (about 86%, up from 83% last year, and 73% originally)
- Of those live domains, only 75% are reachable without the `www.` prefix, down from 83% last year
- 722 sites return an `AAAA` record, the first step towards IPv6 compliance (up from 64 last year, and 10 before that, more than a 10x increase)
- 344 sites are reachable via HTTPS (stagnant at one in four from last year), and like last year, only one in ten enforces it
- 87% of sites have no discernible CMS (the same as last year), with Drupal leading the pack with 123 sites, WordPress with 29 sites (double from last year), and Joomla powering 8 (up one from last year)
- Just shy of 40% of sites advertise that they are powered by open source server software (e.g. Apache, Nginx), up from about a third last year, with about one in five sites responding that they are powered by closed source software (e.g., Microsoft, Oracle, Sun)
- 61 sites are still somehow running IIS 6.0 (down from 74 last year), a more than decade-old server
- HHS is still the biggest perpetrator of domain sprawl with 117 domains (up from 110 last year), followed by GSA (104, down from 105), Treasury (95, up from 92), and Interior (86, down from 89)
- Only 67 domains have a `/developer` page, 99 have a `/data` page, and 74 have a `/data.json` file, all significantly down from past years due to more accurate means of calculation, which brings us to
- 255 domains, or just shy of 20%, don't properly return "page not found" (404) errors, meaning that if you programmatically request their `/data.json` file (or any other non-existent URL), the server claims to have found the requested file but actually responds with a human-readable "page not found" page, making machine readability especially challenging
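These "soft 404s" can be detected with a simple heuristic: request a URL that should not exist, and flag servers that answer with HTTP 200 but an error-page body. The sketch below is just an illustration of that idea, not the actual Site Inspector logic:

```python
import re

def is_soft_404(status_code: int, body: str) -> bool:
    """Heuristic check for a "soft 404": the server answers a request
    for a URL that should not exist with HTTP 200, but the body is a
    human-readable error page rather than a real resource."""
    if status_code == 404:
        return False  # a proper, machine-readable 404 -- as it should be
    if status_code != 200:
        return False  # redirects, 403s, etc. are a separate question
    # A 200 response to a garbage path is suspicious; confirm by
    # looking for error-page language in the body.
    return bool(re.search(r"page\s+not\s+found|404", body, re.IGNORECASE))
```

In practice you'd fetch a deliberately bogus path (say, `/this-should-not-exist-404test`) from each domain and feed the status code and body to a check like this one.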
Edit (5/12/2015): As @konklone properly points out, the list now includes legislative and judicial .gov domains, and thus isn't limited to just the federal executive branch.
As I’ve said in past years, math’s never been my strong point, so I highly encourage you to check my work. You can browse the full results at dotgov-browser.herokuapp.com or check an individual site (.gov or otherwise) at site-inspector.herokuapp.com. The source code for all tools used is available on GitHub. If you find an error, I encourage you to open an issue or submit a pull request.
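Some of the checks above are easy to reproduce yourself. For example, the `AAAA`-record test boils down to asking whether a hostname resolves to any IPv6 address, which the Python standard library can do in a few lines (a simplified sketch, not Site Inspector's implementation):

```python
import socket

def has_aaaa_record(hostname: str) -> bool:
    """Return True if the hostname resolves to at least one IPv6
    address, i.e. the domain publishes an AAAA record."""
    try:
        # Restricting getaddrinfo to AF_INET6 makes it fail unless
        # an IPv6 (AAAA) answer exists for this name.
        infos = socket.getaddrinfo(hostname, None, socket.AF_INET6)
    except socket.gaierror:
        return False
    return len(infos) > 0
```

Running `has_aaaa_record("whitehouse.gov")` (or any domain on the list) gives you the same yes/no answer the crawl recorded, modulo DNS changes since then.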
Ben Balter is a Senior Manager of Product Management at GitHub, the world’s largest software development network, where he oversees the platform’s Community and Safety efforts. Named one of the top 25 most influential people in government and technology, Fed50’s Disruptor of the Year, and winner of the Open Source People’s Choice Award, Ben previously served as GitHub’s Government Evangelist, leading the efforts to encourage government at all levels to adopt open source philosophies for code, data, and policy development.