Technical Standards

Approaches to technical standards for cutting-edge technologies that illustrate the limitations of traditional IT governance.


I have a variety of interests in how institutions develop technical standards, especially for evolving technologies whose advancement substantially outpaces the development of governance patterns capable of mitigating risk and channeling these technologies toward socially desirable ends.

A rather substantial area of interest within this much broader project relates to the implementation of information security standards like those promulgated by the National Institute of Standards and Technology (NIST). I have about five years of work experience implementing a variety of NIST information security standards. I am particularly fascinated by how NIST standards related to IT system risk management and authority-to-operate (ATO) correspond to relatively recent developments in how IT centers of excellence approach their work, namely, their increasing tendency to employ Agile work practices and cloud-based technologies.

I am also interested in legal standards related to electronic signatures in the United States, and how a rather permissive standard for validating the legitimacy of digital signatures may create secondary effects for institutions - a phenomenon that I refer to as the “audit-trail problem.” In this area of study, I make rather heavy use of Douglass North’s conception of institutions and Albert Hirschman’s Exit, Voice, and Loyalty to make sense of how the permissive standard may create and exacerbate distrust between firms and their employees. I also explore comparative electronic signature regimes and how these may provide insight into how to solve the audit-trail problem. I consider a hypothetical NIST publication that would promulgate a series of open technical standards for electronic signature collection and validation, and consider approaches that allow entities to select signature standards that vary in extensiveness based on costs, risks, and an entity’s particular subject matter. I also discuss the economic and social implications of an over-reliance on commercial offerings by companies like Adobe and DocuSign to implement audit-trail systems, and consider how open electronic signature standards could resolve potential equity concerns arising from this over-reliance.
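To make the audit-trail problem concrete, the following is a minimal, hypothetical sketch of one mechanism an open standard might specify: a tamper-evident log of signature events built as a hash chain, where each entry commits to the digest of the entry before it. Any later alteration of a record breaks the chain and is detectable by anyone, without reliance on a proprietary vendor. All names and event fields here are illustrative, not drawn from any actual NIST publication.

```python
import hashlib
import json

def append_event(log, event):
    """Append a signature event, chaining it to the prior entry's digest."""
    prev_hash = log[-1]["hash"] if log else "0" * 64  # genesis sentinel
    record = {"event": event, "prev": prev_hash}
    record["hash"] = hashlib.sha256(
        json.dumps({"event": event, "prev": prev_hash}, sort_keys=True).encode()
    ).hexdigest()
    log.append(record)
    return record

def verify_chain(log):
    """Recompute every digest; return True only if no entry was altered."""
    prev_hash = "0" * 64
    for record in log:
        expected = hashlib.sha256(
            json.dumps({"event": record["event"], "prev": prev_hash},
                       sort_keys=True).encode()
        ).hexdigest()
        if record["hash"] != expected or record["prev"] != prev_hash:
            return False
        prev_hash = record["hash"]
    return True

# Illustrative usage with hypothetical signers and document IDs:
log = []
append_event(log, {"signer": "alice@example.com", "action": "signed", "doc": "nda-001"})
append_event(log, {"signer": "bob@example.com", "action": "viewed", "doc": "nda-001"})
assert verify_chain(log)

# Tampering with an earlier entry is detectable:
log[0]["event"]["signer"] = "mallory@example.com"
assert not verify_chain(log)
```

The design choice worth noting is that verification requires only the log itself and a published hash algorithm - precisely the kind of openly specified, vendor-neutral primitive that could reduce dependence on commercial audit-trail offerings.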

I am interested in the opportunities presented by generative AI and have been thinking about governance models that seek to channel the government’s use of AI so as to minimize the risk of harm and due process concerns. I am exploring how the use of AI in decision-making roles, or its use in close proximity to certain high-risk tasks, may implicate these concerns. I am also interested in existing frameworks for managing how government does its work, such as the idea of Inherently Governmental Functions, and I am considering the robustness of these frameworks to the issue of AI use by the federal government. I believe that there are substantial social benefits to relying on existing frameworks because doing so can leverage expertise and a robust literature that has already been developed, as well as the legitimacy that comes from settled expectations. I have taken to conceptualizing the underlying due process and harm-risk problems posed by AI as “Schrödinger’s agency,” by which I mean that AI sometimes acts like a tool and sometimes acts like a human - though it is generally understood as unable to explain itself or be held to account in the way that a human can. This idea has been explored rather well in Madeleine Clare Elish’s discussion of complex systems in her 2019 article on the “moral crumple zone.” The quasi-autonomy that generative AI exhibits makes it difficult to strictly categorize it as closer to a tool or a human, and invites a more creative approach to how we think about and address the risks stemming from the use of AI by the federal government.