Artificial intelligence (AI) was a subject of intensive research and discussion among private-sector firms and public-sector advocates long before the release of OpenAI’s ChatGPT. Civil society groups like the AI Now Institute, Data for Black Lives, and the Future of Privacy Forum have led explorations of algorithmic bias, data privacy, and anti-surveillance, providing the foundation for the discourse around AI’s use in daily life that has now reached the mainstream.
On October 30, 2023, President Biden signed the landmark Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence (fact sheet), followed shortly by draft guidance from the Office of Management and Budget (fact sheet). These build on the previously released Blueprint for an AI Bill of Rights and the NIST AI Risk Management Framework, which provided governments with guidance on the responsible and ethical use of AI.
The Beeck Center for Social Impact + Innovation works directly alongside governments and civic technologists using digital innovation to strengthen the pillars of American government at the federal, state, and local levels. As these governments and institutions continue to use AI as a tool for improving digital public infrastructure and the delivery of public services and benefits, many of them are looking to the future to understand what AI can do within its existing privacy and governance constraints. With the private sector taking a forward-leaning role in ramping up the use of facial recognition, surveillance, and other predatory forms of AI, advocates in the field of public-interest technology should stay well informed of both the opportunities and risks posed by AI in all its forms, particularly as it relates to data governance, people-centered technology, procurement, benefits delivery, and hiring tech talent. We collected insights from the Beeck Center’s leaders in each of these areas to give us a glimpse of what to expect next.
Snapshots from Beeck’s leaders
What do policymakers and practitioners need to know about using AI in the public interest?
The present moment is an outstanding opportunity to shine sunlight on systems for benefits delivery that have been historically opaque and potentially encode bias into technical systems. Increasing transparency in how these systems collect, store, and build off of data is going to be critical—methods like rules as code may help bring standardization, visibility, and shared understanding across stakeholders. Additionally, policymakers and practitioners need to be equipped with best practices, like ensuring that test data uses privacy-preserving methods, engaging individuals impacted by systems, and conducting evaluation on impact, risks, and harms. Shared tools like the NIST AI Risk Management Framework and forthcoming federal agency guidance will help set the national path forward, and it will be important for states and local governments to champion those approaches and learn from each other. – Ariel Kennan, Fellow, Digital Benefits Network
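The "rules as code" approach mentioned above treats policy rules as executable, testable logic rather than prose buried in a manual. A minimal sketch of the idea, using an entirely hypothetical income-eligibility rule (the household sizes, income limits, and function names below are invented for illustration and do not reflect any real benefits program):

```python
from dataclasses import dataclass

@dataclass
class Household:
    size: int
    monthly_income: float

# Hypothetical monthly income limits by household size (illustrative only).
INCOME_LIMITS = {1: 1580, 2: 2137, 3: 2694, 4: 3250}
PER_PERSON_INCREMENT = 557  # hypothetical add-on for each member beyond 4

def is_income_eligible(household: Household) -> bool:
    """Return True if the household's income falls at or under its limit.

    Because the rule is code, it can be version-controlled, audited,
    and unit-tested by every stakeholder -- the standardization and
    visibility benefit that "rules as code" aims for.
    """
    limit = INCOME_LIMITS.get(household.size)
    if limit is None:
        # Extend the table for larger households.
        limit = INCOME_LIMITS[4] + PER_PERSON_INCREMENT * (household.size - 4)
    return household.monthly_income <= limit

print(is_income_eligible(Household(size=2, monthly_income=1900)))  # True
```

The point is less the specific thresholds than the form: once a rule lives in code, agencies, advocates, and auditors can all read, test, and compare it against the statute it implements.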
AI represents an unprecedented opportunity for public services. While ensuring the appropriate guardrails are put in place, we must also find ways to test, learn, and adopt AI for good. Following on from approaches elsewhere—such as the UK’s NHS AI Lab—there are design-led approaches to helping government better understand AI and how it might vastly improve efficient service delivery, such as transforming case management in health and human services and elsewhere—traditionally one of the most time-consuming and unproductive uses of public servants’ time. While much of the narrative is currently one of caution, as long as we find ways to use human-centered design approaches to test and shape the use of AI around true needs and in a way that is collaborative and transparent, I have high hopes for AI in government. – Dominic Campbell, Fellow
The need for and purpose of public services—education, health services, social assistance, safety, and help during crises or emergencies—have not changed much over time, but the tools have. Data collected and generated by government agencies is even more important as AI becomes a more commonly used tool. Now is the time to invest in learning more and understanding how AI works, what tools can produce, and what challenges they cannot solve. For example, Estonia has a grand vision to use AI and integrated data to offer Bürokratt—a virtual government assistant that can even renew passports via a short chat and a photo snap on a phone. Policymakers and practitioners have an outstanding moment to recalibrate public services in ways that bring dignity and meaningful support for the public. Continuing to learn from one another by joining networks and sharing lessons learned, practical resources, and substantive training is a prerequisite for such transformation. – Milda Aksamitauskas, Fellow, State Chief Data Officers Network
AI stands as a transformative tool for modernizing government, with the unprecedented promise of empowering government agencies to distill vast datasets into actionable insights. To harness the full potential of AI in data-informed policymaking, we must recognize its integral partnership with robust data governance, a crucial linchpin in maintaining the integrity, privacy, and security of data. By steadfastly upholding these pillars, we can more effectively identify bias, mitigate risks, and foster AI applications that build public trust. – Ali Benson, Data Labs Program Lead
AI presents great opportunities for municipalities to improve the efficiency of municipal services: automating tasks, providing real-time data analysis, enhancing citizen engagement through personalized information and participation in decision making, and simplifying targeting decisions. I especially hope that we can use AI to make data more open—making PDFs and other documents more machine readable, or better integrating applications for resources. AI also could potentially scale harmful or discriminatory systems, invade privacy, reduce transparency and accountability, or leave less tech-savvy people behind. The responsible use of AI is something we all can and should contribute to, focusing on the people who underpin those algorithms. – Harold Moore, The Opportunity Project for Cities Program Lead
Policies, Statements & Resources
- Future of Privacy Forum
- Center for Democracy and Technology
- Federation of American Scientists
- Code for America
- Electronic Privacy Information Center (EPIC)
- Data & Society, “Shaping AI Systems by Shifting Power”
- Data & Society, “Democratizing AI: Principles for Meaningful Public Participation”
- Stanford Institute for Human-Centered Artificial Intelligence (HAI), “The Privacy-Bias Trade-Off”
- AI.gov catalog of government AI use cases
- Electronic Privacy Information Center (EPIC), “Outsourced and Automated”
- National Institute of Standards and Technology (NIST), “AI Risk Management Framework”
- Tech Policy Press, “Five Takeaways from NIST AI Risk Management Framework”
- Federation of American Scientists, “Unlocking American Competitiveness: Understanding The Reshaped Visa Policies Under The AI Executive Order”
- Tech Talent Project, “Building the Foundations of AI”
- Center for Democracy & Technology, “AI Policy Tracker”
- The Coleridge Initiative’s Applied Data Analytics training program