AI’s Next Frontiers and the Policy Gaps Ahead

One of my favorite experiences this year was a visit in March to the William & Mary Law School to deliver the 2025 Stanley H. Mervis Lecture in Intellectual Property. The students were fantastic — curious, engaged, energetic — and I got to have a long and fascinating conversation with the brilliant Prof. Laura Heymann around the growing misalignments between law and AI.

From the law school’s write-up:

In his talk, titled “There Is No Fate But What We Make: AI’s Next Frontiers and the Policy Gaps Ahead,” McLaughlin highlighted that AI “is not an inevitable force shaping humanity.” As the future of AI extends far beyond the large language models with which many are now familiar to the new realm of quantitative AI, its trajectory will be determined by the choices of those who design, build, and use it.

McLaughlin began with a technical overview, describing how AI systems in general are capable of performing tasks that mimic and, indeed, go beyond the cognitive capabilities of humans by training on millions of inputs to “learn” what results to generate rather than being guided by a long series of rules. For example, rather than having a system learn what kinds of e-mail are likely to be spam by creating a list of keywords for it to recognize, the system can be trained on millions of e-mails that programmers have labeled as either “spam” or “not spam” to learn how to assess future e-mails. Generative large language models take this concept a step further: Rather than using human labeling of the inputs up front, such models instead train on millions of inputs from the Internet, learning patterns from those inputs that allow the models to generate text in response to a query, with feedback provided by humans afterward.

The challenge, McLaughlin noted, is that at some point, “we may be hitting a data wall.” Once large language models have trained on all available content, how can they improve or differentiate themselves from one another? How can AI models overcome their epistemological limits to generate new and reliable insights rather than reflecting the scope of their training data, including any biases or misinformation that data contains?

Quantitative AI may provide the answer. As McLaughlin described, quantitative AI models integrate high-level mathematical representations; fundamental equations of quantum mechanics, physics, and related fields; and training on numerical data rather than written texts to generate novel and scientifically reliable results. Thus, unlike LLMs “that predict likely word sequences,” he noted, “quantitative AI models simulate, predict, and discover based on mathematical and scientific principles” at a scale and speed unachievable by human effort.

This development can enable researchers in fields such as healthcare, materials science, and complex systems analysis to accomplish scientific discovery that would otherwise have remained beyond their grasp. For example, quantitative AI can allow scientists working on the development of a therapeutic drug to train a model that can screen more than 100,000 possible solutions in mere days, allowing the process to move much more quickly to patient trials and regulatory approvals. Even more remarkably, such models can become “self-learning,” dramatically accelerating scientific discovery by forming and testing hypotheses without human guidance.

With these new possibilities, McLaughlin cautioned, come important questions about risks and regulation. Quantitative AI is much more likely than large language models to be used for critical functions such as medical decisions, financial market actions, and national security strategy. Where should regulatory oversight lie, and to what extent can model developers create guardrails to prevent undesirable or harmful uses of their technology? Who should own the rights to scientific advances developed with such models? And how should countries prepare for the “tech sovereignty wars” — in which nations compete for AI dominance — and the related cybersecurity issues?

McLaughlin noted that he was leaving his audience with more questions than answers. But, he concluded, whatever the path for AI, “the future is up to us.”
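The spam-filter example in the write-up is a classic case of supervised learning: the system learns patterns from labeled inputs instead of following a hand-written keyword list. Here is a minimal sketch of that idea, assuming scikit-learn and a toy six-message corpus standing in for the "millions of e-mails" (the library, corpus, and model choice are all illustrative, not anything from the talk itself):

```python
# Illustrative sketch of the talk's spam example: learn from labeled
# e-mails rather than hand-coded keyword rules. (scikit-learn and the
# tiny corpus below are my assumptions, not from the lecture.)
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# A toy labeled corpus standing in for the "millions of e-mails"
# that programmers have labeled "spam" or "not spam".
emails = [
    "win a free prize now", "cheap meds limited offer",
    "claim your free reward today", "lunch meeting moved to noon",
    "quarterly report attached", "see you at the conference",
]
labels = ["spam", "spam", "spam", "ham", "ham", "ham"]

# Turn each e-mail into word counts, then learn which word patterns
# co-occur with each label.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
clf = MultinomialNB().fit(X, labels)

# Assess a future e-mail the model has never seen.
print(clf.predict(vectorizer.transform(["free prize offer"]))[0])  # → spam
```

The contrast the talk draws with generative LLMs is where the labels come from: here a human labels every training input up front, while an LLM first learns patterns from unlabeled Internet text and gets human feedback only afterward.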

From 2012: Keynote at the Library Technology Conference

Wild, I just discovered that a video exists of a 2012 keynote talk I did at the Library Technology Conference, which ran annually from 2009 to 2019 at Macalester College. It was a fantastic event — one of the most joyfully nerdy crowds I’ve ever encountered (and I’ve been to numerous IETF meetings, competed in several MIT Mystery Hunts, and even attended one ValleyCon, the annual fan festival hosted by Fargo’s own Red River Science Fiction and Fantasy Club, when I was around 14).

The talk itself now reads as a snapshot of tempered-but-not-yet-cynical Internet optimism circa 2012 — non-delusional, but overly positive about the macro benefits of disintermediation, decentralization, and lightly-moderated speech platforms, and blind to the coming tsunami of rancor, distortion and radicalization powered by attention-monetizing social media. The camera didn’t capture my visuals as I spoke, but the Prezi is still online here.

Keynote by Andrew McLaughlin at 2012 Library Technology Conference, Macalester College.

Internet Policy Wars 3.0: Is the Past a Prologue to the Fight for Web3?

As I get older, it seems I’m becoming less a historian and more a fact witness. Confirming that trend, the Congressional Internet Caucus hosted a fun discussion on emerging Web3 policy battles, with Kevin Werbach and me assigned to draw parallels, contrasts, and lessons from the first wave of Web 1.0 fights in the 1990s - the era of the Communications Decency Act, the codification of Section 230, the first skirmishes over cryptography and key escrow, and so on. Sparking the panel, I suspect, was the clumsy cryptocurrency language in last year’s landmark Infrastructure Bill, suggesting some gaps in Congressional understanding of the latest stack of distributed and decentralized Internet-based technologies that is being referred to as Web3. There’s a lot more — good and bad — coming our way than cryptocurrencies and NFTs.

Our moderator was Danny O’Brien of the Filecoin Foundation, and our new-generation co-panelists were Cleve Mesidor of the Blockchain Association and the National Policy Network of Women of Color in Blockchain and Carlos Acevedo of Brave Software — all terrific thinkers and nimble panelists. Great conversation, part of the Caucus’s 2021 rich lineup of events. Thanks to Tim Lordan for the kind invite!

Governance 3.0

My first big conference post-COVID was the excellent Unfinished Live, held at The Shed in NYC in October 2021. I put together what turned out to be a brisk and (for me, anyway) entertaining discussion on “Governance 3.0” with Kickstarter co-founder, artist and all-around brilliant thinker Perry Chen, Georgetown professor of philosophy and director of its ethics lab Maggie Little, and technologist, decentralization architect, and Unfinished Labs president Braxton Woodham. The video is below; a transcript is here.

The pithy take-away comment was Maggie’s, about complex, morally-anchored, emergent human behavior: “You can’t just engineer it away.”

AI Superpowers: A Conversation with Kai-Fu Lee

Thanks to the Asia Society for hosting a fantastic evening with Kai-Fu Lee, my former colleague and partner at Google. His new book, AI Superpowers, is genuinely terrific, both in explaining artificial intelligence, its possibilities, pitfalls, and implications for the future, and in laying out the relative strengths and weaknesses of China and the United States in technology and innovation.

New America: Who's Afraid of Online Speech?

Characteristically awesome event at the New America Foundation today. I spoke on “How Can Platforms Fix Online Speech?”, which: not simple. Co-panelists were:

  • Caroline Sinders (@carolinesinders), Product Analyst, Wikimedia Foundation

  • Whitney Phillips (@wphillips49), Assistant Professor of Literary Studies and Writing, Mercer University; author, This Is Why We Can't Have Nice Things; co-author, The Ambivalent Internet

  • Dipayan Ghosh (@ghoshd7), Public Interest Technology fellow, New America; Joan Shorenstein Fellow, Harvard Kennedy School; former Technology & Economic Policy Advisor, The White House; former Privacy & Public Policy Advisor, Facebook

  • Moderator: April Glaser (@aprilaser), Staff writer, Slate

Our part starts around 1:10. Other speakers at the event:

REGULATING POLITICAL SPEECH IN THE AGE OF DIGITAL DISINFORMATION

  • Sen. Amy Klobuchar (D-Minn.) (@amyklobuchar), Chair, Senate Democratic Steering Committee; Ranking Member, Rules Committee

  • Dan Gillmor (@dangillmor), Director and co-founder, News Co/Lab at Arizona State University; Professor of Practice, Walter Cronkite School of Journalism and Mass Communication at Arizona State University; author, Mediactive and We the Media: Grassroots Journalism by the People, for the People

  • Moderator: Cecilia Kang (@ceciliakang), National Technology Correspondent, The New York Times

DOES THE INTERNET REQUIRE US TO RETHINK FREE SPEECH?

  • Rep. Ted W. Lieu (D-Calif.) (@reptedlieu), Member, House Committees on the Judiciary and Foreign Affairs

  • Jennifer Daskal (@jendaskal), Associate Professor of Law, Washington College of Law at American University

  • Kate Klonick (@klonick), Future Tense fellow, New America; PhD candidate, Yale Law School; resident fellow, Information Society Project at Yale Law School

  • Moderator: Cecilia Kang (@ceciliakang), National Technology Correspondent, The New York Times

Stanford: Digital Platforms and Democratic Responsibility

Excellent event to mark the launch of Stanford’s new Global Digital Policy Incubator, led by the incredible digital human rights leader Eileen Donahoe.

Co-panelists:

  • Moderator: Larry Kramer, President of the Hewlett Foundation

  • Juniper Downs, Global Head of Public Policy and Government Relations, YouTube

  • Daphne Keller, Director of Intermediary Liability, Center for Internet & Society, Stanford Law School

  • Nick Pickles, Senior Public Policy Manager, Twitter

  • Mike Posner, Director, NYU Stern Center for Business & Human Rights, former U.S. Assistant Secretary of State, Democracy, Human Rights, and Labor

Stanford Law: Law, Borders, and Speech Conference - Big Picture Panel

Really fun conference, excellent and diverse participants, provocative policy brawls. Hosted by Stanford Law’s Center for Internet & Society.

The panel set-up: “Which countries’ laws and values will govern Internet users’ online behavior, including their free expression rights? In 1996, David G. Post and David R. Johnson wrote that “The rise of the global computer network is destroying the link between geographical location and: (1) the power of local governments to assert control over online behavior; (2) the effects of online behavior on individuals or things; (3) the legitimacy of the efforts of a local sovereign to enforce rules applicable to global phenomena; and (4) the ability of physical location to give notice of which sets of rules apply.” They proposed that national law must be reconciled with self-regulatory processes emerging from the network itself. Twenty years on, what have we learned? How are we reconciling differences in national laws governing speech, and how should we be reconciling them? What are the responsibilities of Internet speakers and platforms when faced with diverging rules about what online content is legal? And do users have relevant legal rights when their speech, or the information they are seeking, is legal in their own country?”

Speakers:

  • Bertrand de la Chapelle - Co-Founder and Director, Internet & Jurisdiction Project

  • David Johnson - CEO, argumentz.com; Producer, themoosical.com

  • David Post - Professor of Law (ret.), Temple University Law School; Contributor, Volokh Conspiracy

  • Paul Sieminski - General Counsel, Automattic

  • Nicole Wong - Ex-Obama White House, Twitter, Google

  • Me

NYU: Tyranny of the Algorithm? Predictive Analytics & Human Rights

With Michael Posner (Professor, NYU Stern School of Business, Co-Director, Stern Center for Business and Human Rights) and Sarah Labowitz (Co-Director, NYU Stern Center for Business and Human Rights).

SxSW: How Silicon Valley Looks From Inside the White House, and Vice Versa

A conversation with the most marvelous Nicole Wong, from whom I have learned more than I can measure. We were colleagues in the trenches at Google, and then Nicole succeeded me as Deputy CTO of the US. She is the greatest, and here we were both at SxSW.

Fight for the Future: Libraries, Tech Policy, and the Fate of Human Knowledge

Librarians + technology = a personal nirvana. There is no more awesome set of people doing more important work than the librarians and their nerd allies at the bleeding edge of library tech -- they are engaged in an underappreciated struggle to work out how mankind is going to preserve, extend, share, and democratize the sum of human knowledge in our increasingly digital age. So I was really psyched to go do a talk at the 2012 Library Technology Conference about the technological forces driving the great policy issues of our age, along with an argument about why and where the library community should be engaged. Bonus for me: The event was at Macalester College, where I spent my high school summers taking Russian while trying to look like something other than the huge dork I was.

Here's my keynote, "Fight for the Future: Libraries, Tech Policy, and the Fate of Human Knowledge."

Andrew McLaughlin @ Library Technology Conference 2012 from Library Technology Conference on Vimeo.

The Prezi is here.

Stanford Law: From Public Squares to Platforms: Free Speech in the Networked World

The set-up: “From local issues like the BART protests to national and international movements like Occupy and the Arab Spring, individuals and organizations are increasingly utilizing the Internet, social networking, and mobile devices to communicate and connect. This diverse panel from academia, public interest, and private practice, will discuss the opportunities and challenges for free speech as it increasingly moves from the town square to the networked world. Co-sponsored by the California State Bar Cyberspace Committee and the Stanford Center for Internet and Society.”

Co-panelists:

  • Dorothy Chou, Senior Policy Analyst, Google. Dorothy Chou is a Senior Policy Analyst and leads Google's policy efforts to increase transparency. She manages the day-to-day operations of the Central Public Policy team at Google's headquarters, and handles government relations for Google's Crisis Response/Disaster Relief projects as well as the Data Liberation Front. Dorothy began working for Google in the Washington, D.C. office four years ago, managing issues around China, free expression, and child safety before moving to the San Francisco Bay Area last summer. Dorothy holds a B.S. in International Politics from Georgetown University's Walsh School of Foreign Service.

  • Linda Lye, Staff Attorney, ACLU of Northern California. Linda Lye joined the ACLU-NC as a staff attorney in 2010 after serving 5 years on its Board of Directors and 7 years on its Legal Committee. She was formerly a partner at Altshuler Berzon, a San Francisco law firm specializing in labor and employment law, as well as constitutional, civil rights, and environmental law. Early in her legal career, she clerked for Judge Guido Calabresi of the United States Court of Appeals for the Second Circuit and Justice Ruth Bader Ginsburg of the United States Supreme Court. Prior to law school, she was a policy analyst for the fiscal committees of the Assembly in the California Legislature, and also worked as a death penalty investigator at the California Appellate Project. She has an undergraduate degree from Yale University and a JD from Boalt Hall, at the University of California at Berkeley.

  • Philip Hammer, Of Counsel, Hoge Fenton Jones & Appel. Philip Hammer is Of Counsel to the law firm of Hoge Fenton Jones & Appel in San Jose, California. Mr. Hammer successfully litigated the right to circulate petitions in privately owned shopping centers in the California Supreme Court (1979) and the United States Supreme Court: Pruneyard Shopping Center v. Robins, 447 U.S. 74 (1980).

  • Laurence Pulgram, Partner and Chair of Commercial and Copyright Litigation Group, Fenwick & West LLP. Laurence Pulgram is a Partner in the Litigation and Intellectual Property Groups of Fenwick & West LLP, counsel in intellectual property and complex commercial disputes. His practice emphasizes technology-related litigation and frequently involves novel legal issues generated by cutting-edge information technologies.

  • Moderator: Nicole Ozer, Co-Chair, California State Bar Cyberspace Committee; Technology and Civil Liberties Policy Director, ACLU of Northern California. Nicole A. Ozer is the Technology and Civil Liberties Policy Director at the ACLU of Northern California. She works on the intersection of new technology, privacy, and free speech and spearheads the organization’s online privacy campaign, Demand Your dotRights (www.dotrights.org). Nicole is the co-chair of the California State Bar Cyberspace Committee and a founding board member of the Bay Area Legal Chapter of the American Constitution Society (ACS).

Betaworks Brown Bag: My Days in the White House: The Thrill of Victory, The Agony of Defeat

Here's a lunchtime talk I did at betaworks on my experience working in the White House, why it was awesome, why it was, um, frustrating, why it's hard to achieve large-scale change in the U.S. federal bureaucracy, and more.