The three biggest challenges in government IT

TL;DR: Change aversion, lack of technocratic leadership, and heavyweight processes bred by distrust keep government IT 5–10 years behind the private sector.

It’s often said that government IT is 5–10 years behind the private sector, and most places you look in government, that’s probably true. It’s as if the government learned how to “computer” right around the time Windows XP was released, set its comfort level for risk, and has worn purposeful blinders to progress ever since. But why? I’d argue there are three key factors keeping it there: change aversion, lack of technocratic leadership, and processes that don’t scale down.

Change aversion

If there’s one thing that defines government IT, it’s the culture of “no”. If you’re a change agent, a technologist, heck, even someone who wants to use an iPhone instead of your government-mandated BlackBerry, the organizational immune system will release risk-reducing antibodies any time it so much as sniffs something it doesn’t recognize.

This risk reduction comes in the form of process: it’s the procurement process designed to ensure only established firms are qualified to bid. It’s the ATO (Authority to Operate) process designed to ensure only applications that complete a six-month, 150-page security checklist can be brought online. It’s the hiring process that disqualifies applicants without a traditional education. Each process is designed to reduce risk, but almost without exception, it serves to increase it. This happens in three ways:

  1. Process increases batch size, meaning any effort becomes an all-or-nothing wager. That’s why we see the sweeping, enterprise-wide initiatives, the multi-million, multi-year projects that are all but guaranteed to fail. If standing up one server and standing up one hundred servers both incur the same administrative friction, you’re incentivized to maximize your return by betting the farm (a back-of-the-envelope sketch of this incentive follows the list below). Imagine a game of poker where the ante is ten times the minimum bet. You’d be crazy not to go all in on each hand (and extremely lucky to walk away with any chips on the table).

  2. As non-technocratic management is socialized into a system in which change happens only in decade-long increments, the agency grows further and further out of step with the private sector. Industry standards are just that. Standards. They change as technology changes, and private-sector firms, the ones that set those standards, must constantly adapt to survive. If every few years you poke your head up, look around, and adopt whatever’s mainstream wholesale, you spend the vast majority of your time using already-outdated technologies and construct your perception of the IT landscape accordingly.

  3. Even if you can convince the powers that be to pilot a new technology, there’s no support structure in place, all but guaranteeing the new initiative will fail (further reinforcing the “go big or go home” mentality). Want to stand up a ColdFusion server? The agency has ten to spare in its datacenter and an IDIQ contract qualified to support them going forward. Have a Rails app you’d like hosted? Once you convince IT that Rails isn’t a threat to national security, you’ll need to spend a significant amount of time explaining what a Rails console is, how migrations work, and what modern deployment management looks like (hint: it doesn’t involve SSH and a shell script).
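To make the batch-size incentive from the first point concrete, here’s a minimal back-of-the-envelope sketch. The overhead and per-feature figures are illustrative assumptions, not numbers from any real agency; the shape of the math is the point: a large fixed cost per release makes big batches the only rational bet.

```python
# Illustrative only: both numbers are made-up assumptions, not procurement data.
FIXED_OVERHEAD = 500_000   # cost of clearing the process once (ATO, review boards, ...)
COST_PER_FEATURE = 50_000  # marginal cost of actually building one feature

def cost_per_feature_shipped(batch_size: int) -> float:
    """Total cost per feature when batch_size features ship in a single release."""
    return (FIXED_OVERHEAD + COST_PER_FEATURE * batch_size) / batch_size

for batch in (1, 10, 100):
    print(f"{batch:>3} features/release -> ${cost_per_feature_shipped(batch):,.0f} each")

#   1 features/release -> $550,000 each
#  10 features/release -> $100,000 each
# 100 features/release -> $55,000 each
```

The per-feature overhead shrinks as the batch grows, so the process itself rewards the all-or-nothing, multi-year bet, even though the odds of total failure grow with the batch.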

Government should be risk averse. Take a look at DC’s federal architecture, from brutalist concrete to Corinthian columns, and you’ll quickly realize that government constructs buildings very differently than the rest of society. After all, government operates on a multi-century time scale, where private-sector companies focus on quarter-to-quarter earnings. While government agencies certainly shouldn’t adopt the latest fly-by-night, just-posted-to-hacker-news-yesterday framework, there’s a happy medium between that and “what we’ve used since the ’90s”, and a process designed to reduce risk to as close to absolute zero as humanly possible is not the solution.

A system that seeks to reduce risk by instituting process will increase risk in the long run when that very process fails to adapt to the changing environment it seeks to control.

Lack of technocratic leadership

healthcare.gov was the best thing to happen to government IT. Traditionally, there have been two classes of change agents in government: geeks and suits. The geeks are exactly what you’d expect to find in the sub-basement of a government agency, in a dimly lit room strewn with Mountain Dew cans and Doritos crumbs. They’re the ones who understand today’s IT landscape. The suits are exactly what you’d expect to find on the top floor of the agency, in a windowed room strewn with business cards and printed PowerPoint decks. They’re the ones who understand today’s organizational politics. The problem is, only one group has a seat at the table, and it produces exactly the outcomes you’d expect.

healthcare.gov was the first time in recent memory that a policy initiative failed due to our inability to execute technically, but it was far from the first time that geeks in government pushed to rethink how we approach technology. The administration quickly realized that simply throwing more money at “enterprise-grade solutions” wasn’t a defensible strategy, but that lesson hasn’t been learned across government. Those making strategic decisions are still largely career bureaucrats who have spent decades making “risk-averse” investments that contract the bulk of the technical know-how out to outside firms. In a world in which policy initiatives increasingly rely on technical ability, geeks simply lack a seat at the table. That affects agencies in three ways:

  1. The system is rigged for suits and against geeks, which means it will always solve for an effective process over an effective outcome. Enterprise software is a particular breed of software: it’s popular among CIOs because it checks the right boxes, and it’s equally unpopular among end users because checking boxes is often all it does well. On paper, an iPhone and a BlackBerry both send and receive email, browse the internet, and make phone calls. Ask a consumer which they’d prefer, and there’s a world of difference. The same is true of enterprise IT and the stacks it’s built upon. Government IT often prefers the vendor that claims to meet an arbitrary compliance standard, but with some combination of time, money, and effort, compliance is always possible. Given that same trifecta, good, user-centric technology is not a guarantee, and that missed connection translates into lower-quality citizen services. Instead of optimizing for process, optimize for the developer (and thus the end user) experience.

  2. Agencies forgo the fundamentals of a sound technology stack (the tools, systems, and culture that would set the agency up to execute in the long term) in favor of short-term wins and “getting the thing out the door”. Agencies expect 10 to 20 years of planning and forethought from geeks when standing up a new digital system, but rarely plan six months to a year down the line when contracting out the platforms, tools, and human capital that will make that vision a reality. Practically, it’s easier for non-technical leadership to verify that the agency remains compliant with a given government standard than to ensure that it remains responsive to customer needs or attractive to top talent. As a result, even if an agency’s IT stack looks good on paper, by any qualitative standard it’s often held together with little more than duct tape and bubble gum.

  3. There’s something to be said for a geek’s need to scratch an itch. Geeks are problem solvers. Geeks are slaves to doing things better than the status quo. Regardless of role or title, geeks find itches in their day-to-day life that they’re dying to scratch. They think “I could write a script to automate this task”, or “if only there were an API, it’d be so much easier to submit this report” (see the sketch just after this list). Whatever the task, geeks know technology, and they know when there’s a better way to do it. The same can’t be said of suits, at least not in a technical sense. Geeks who serve under suits often don’t have the tools they need because management doesn’t feel the itch. That’s why you end up in the Catch-22 where it’s against agency policy to code in the open, but there’s also no budget to stand up an on-prem version control system, leaving developers to pass around code on thumb drives. Geeks in leadership positions naturally scratch itches, the same itches their developers are asking to have scratched.
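To make that itch concrete, here’s a minimal sketch of the “I could write a script to automate this task” impulse: a weekly report submitted via an API instead of re-keyed into a form. The endpoint, field name, and token variable are hypothetical placeholders (no such government API is implied), and it uses the third-party requests library.

```python
# Hypothetical example: the endpoint, field name, and env var are placeholders.
import os

import requests

REPORT_API = "https://reports.example.gov/api/v1/submissions"

def submit_weekly_report(path: str) -> None:
    """POST a report file to the (hypothetical) submissions API."""
    with open(path, "rb") as f:
        response = requests.post(
            REPORT_API,
            files={"report": f},
            headers={"Authorization": f"Bearer {os.environ['REPORT_TOKEN']}"},
            timeout=30,
        )
    response.raise_for_status()  # fail loudly if the submission was rejected
    print(f"Submitted {path} ({response.status_code})")

if __name__ == "__main__":
    submit_weekly_report("weekly-status.pdf")
```

A dozen lines of code and an afternoon of work, and a recurring chore disappears; that’s the return on investment a suit never sees, because they never feel the itch.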

Non-technical leadership will optimize for easily measurable, non-technical concerns at the expense of solving for long-term technical concerns.

Processes that don’t scale down

Government process tends to be both heavyweight and designed around distrust. And when government designs a process, it designs exactly one: we use the same systems to procure battleships and buildings that we use to procure paperclips and websites. When you’re spending millions of dollars on a multi-year contract, it makes sense to spend months accounting for every possible contingency. When you’re buying a $300 SaaS product on the open market, the obligatory environmental-protection clauses (among a dozen other government-specific requirements), although well intentioned, create a disproportionate burden and exclude many potential mainstream vendors.

All government practices are predicated on the belief that if an actor can do something bad, they will, and given centuries of organizational scars from being burned by such bad actors, the only acceptable process is one that limits, to the fullest extent possible, any actor’s ability to act maliciously. This creates situations where, from a government IT perspective, it would be logical to require a background check before someone could use a government-owned fax machine. There’s rarely, if ever, a concept of de minimis, and where there is (for example, the micro-purchase threshold), it’s rarely respected culturally. This manifests itself in several forms:

  1. The system distrusts government employees. There’s no concept in government of hiring a smart person or establishing a smart team and trusting their judgement or expertise. Beyond the Senior Executive Service (SES), authority to act is rarely delegated, and even among the SES, technical decisions can be brought into question at any time in the form of a very public congressional hearing. As a result, government agencies strongly prefer technical and administrative constraints over cultural ones. A government developer can’t deploy their code because there are several layers of administrative safeguards between them and the server that runs it. Even if they practically could (if, say, they had the proper credentials), they’d still need to go through a monthly change review board before the deployment could be approved. The developer doesn’t deploy because even if they wanted to, they couldn’t, not because the agency has a culture of only deploying thoroughly tested code.

  2. The system distrusts government contractors. Imagine a world in which there were no established brands, and consumers had no access to Amazon product reviews, Yelp, or the ability to ask friends about their experiences. This is roughly how government buys IT. For purposes of fairness, the government can’t rely on anything not in the vendor’s proposal, and in a world of vague, non-technical requirements, one in which nearly all government contractors can qualify for any given contract, agencies are left with two decision-making shortcuts. First, for goods, cost correlates with quality: prefer Microsoft Office, with its expensive license, to its free open source alternatives. Second, for services, age correlates with quality: prefer government contractors who have been doing the same thing for decades over emerging mainstream leaders. To correct for these biases, agencies fill their RFQs with additional requirements for which the vendor must further qualify. Almost without exception, these requirements are government-specific rather than drawn from existing industry standards, further narrowing the potential vendor pool to government-specific vendors and creating a self-reinforcing cycle of distrust.

  3. The system distrusts citizens. The open source community has a mantra: given enough eyeballs, all bugs are shallow. It’s the idea that there’s power in the crowd, that given the right tools and community, the wider something is shared and the more stakeholders are directly involved in solving a problem, the better the outcome will be. In government, the exact opposite is often the case. Working in the open, be it working slightly more openly within the agency itself or opening a process to the world, is seen as one of the ultimate liabilities, a liability no amount of community contribution can overcome. Non-technical stakeholders, apparently familiar only with YouTube comments, often cite a mythological commenter boogeyman who will disproportionately shame the agency for its imperfect work product, despite a lack of empirical evidence that this happens. As a result, any information that leaves the firewall, from press releases to code comments and commit messages, is tightly controlled, scrubbed, and monitored to ensure the agency isn’t subject to criticism. But it’s citizens’ right to scrutinize the agency’s work, productive or unproductive, and that scrutiny makes government fairer, more transparent, and more efficient.

A process that doesn’t scale down will optimize for preventing the worst possible outcome at the cost of achieving the desired one.

Conclusion

There’s a lot that can be said about why government IT looks the way it does today. Procurement, culture, staffing, recruiting, anachronistic regulations, and politics all play significant roles, but they aren’t specific to IT so much as to general management practice. Having spent more time than I’d like to admit trying to get the federal government back in sync with the private sector, to fast-forward through 5–10 years of stagnation, these three challenges keep coming up as the biggest blockers: change aversion, lack of technocratic leadership, and processes that don’t scale down.

Originally published October 18, 2015