AI’s Next Frontiers and the Policy Gaps Ahead

One of my favorite experiences this year was a visit in March to the William & Mary Law School to deliver the 2025 Stanley H. Mervis Lecture in Intellectual Property. The students were fantastic — curious, engaged, energetic — and I got to have a long and fascinating conversation with the brilliant Prof. Laura Heymann around the growing misalignments between law and AI.

From the law school’s write-up:

In his talk, titled “There Is No Fate But What We Make: AI’s Next Frontiers and the Policy Gaps Ahead,” McLaughlin highlighted that AI “is not an inevitable force shaping humanity.” As the future of AI extends far beyond the large language models with which many are now familiar and into the new realm of quantitative AI, its trajectory will be determined by the choices of those who design, build, and use it.

McLaughlin began with a technical overview, describing how AI systems perform tasks that mimic, and indeed go beyond, the cognitive capabilities of humans by training on millions of inputs to “learn” what results to generate, rather than by following a long series of rules. For example, rather than teaching a system to flag spam by giving it a list of keywords to recognize, the system can be trained on millions of e-mails that programmers have labeled as either “spam” or “not spam,” learning from those labels how to assess future e-mails. Generative large language models take this concept a step further: rather than relying on human labeling of the inputs up front, such models train on millions of inputs from the Internet, learning patterns that allow them to generate text in response to a query, with human feedback provided afterward.
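To make that contrast concrete, here is a minimal sketch in Python of the labeled-training approach the write-up describes, using scikit-learn. The four-message dataset is a toy stand-in for the millions of labeled e-mails a production system would need; none of this code comes from the lecture itself.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

# Toy labeled corpus; a real system would train on millions of e-mails.
emails = [
    "win a free prize now",
    "limited offer, claim your reward today",
    "meeting moved to 3pm, see agenda attached",
    "are we still on for lunch tomorrow?",
]
labels = ["spam", "spam", "not spam", "not spam"]

# The model learns which word patterns correlate with each label,
# rather than following a hand-written keyword list.
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)
model = MultinomialNB().fit(X, labels)

# Assess an e-mail the model has never seen.
new_email = vectorizer.transform(["claim your free prize today"])
print(model.predict(new_email))  # -> ['spam']
```

The point is the workflow: no keyword list is ever written down. The words that matter are discovered from the labeled examples.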

The challenge, McLaughlin noted, is that at some point, “we may be hitting a data wall.” Once large language models have trained on all available content, how can they improve or differentiate themselves from one another? How can AI models overcome their epistemological limits to generate new and reliable insights rather than reflecting the scope of their training data, including any biases or misinformation that data contains?

Quantitative AI may provide the answer. As McLaughlin described, quantitative AI models integrate high-level mathematical representations; fundamental equations from quantum mechanics, physics, and related fields; and training on numerical data rather than written text, to generate novel and scientifically reliable results. Thus, unlike LLMs “that predict likely word sequences,” he noted, “quantitative AI models simulate, predict, and discover based on mathematical and scientific principles” at a scale and speed that human effort alone cannot achieve.
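As a rough illustration of what “training on numerical data” against a governing equation can look like, here is a toy NumPy sketch that fits the single physical parameter g in the free-fall law y = ½gt² to noisy numerical observations. This is my own illustrative example, not anything presented in the talk, and real quantitative-AI models operate at vastly greater scale and complexity.

```python
import numpy as np

# Synthetic numerical observations of free fall: (time, distance) pairs.
# True physics: y = 0.5 * g * t^2 with g = 9.81 m/s^2, plus noise.
rng = np.random.default_rng(0)
t = np.linspace(0.1, 2.0, 50)
y = 0.5 * 9.81 * t**2 + rng.normal(0.0, 0.05, t.size)

# The "model" here is the governing equation itself; training means
# fitting its one physical parameter, g, by gradient descent on a
# squared-error loss against the numerical data.
g = 5.0      # deliberately poor initial guess
lr = 0.05    # learning rate
for _ in range(500):
    residual = 0.5 * g * t**2 - y       # mismatch with observations
    grad = np.mean(residual * t**2)     # d(mean squared error)/dg
    g -= lr * grad

print(f"estimated g = {g:.2f} m/s^2")   # close to 9.81
```

The model cannot hallucinate a plausible-sounding answer here: its output is constrained by the equation and judged against the data, which is the reliability property the lecture emphasizes.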

This development can enable researchers in fields such as healthcare, materials science, and complex systems analysis to achieve discoveries that would otherwise remain beyond their grasp. For example, quantitative AI can allow scientists developing a therapeutic drug to train a model that screens more than 100,000 candidate compounds in mere days, moving the process much more quickly toward patient trials and regulatory approval. Even more remarkably, such models can become “self-learning,” dramatically accelerating scientific discovery by forming and testing hypotheses without human guidance.
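The drug-screening example implies a simple computational pattern: score an enormous candidate library with a trained model, then shortlist only the best for expensive follow-up. The sketch below is hypothetical throughout; `predicted_affinity` is a placeholder for a real trained model, and the random feature vectors stand in for molecular descriptors.

```python
import numpy as np

# Hypothetical screening loop. `predicted_affinity` stands in for a
# trained quantitative model; the random features stand in for real
# molecular descriptors. Illustrative only.
rng = np.random.default_rng(1)
candidates = [f"compound_{i:06d}" for i in range(100_000)]
features = rng.normal(size=(len(candidates), 8))

def predicted_affinity(x: np.ndarray) -> np.ndarray:
    # Placeholder linear scorer; a real model would be far richer.
    weights = np.array([0.9, -0.4, 0.3, 0.0, 0.2, -0.1, 0.5, 0.1])
    return x @ weights

# Score the entire library in one vectorized pass, then shortlist the
# ten most promising candidates for expensive lab validation.
scores = predicted_affinity(features)
top = np.argsort(scores)[-10:][::-1]
for i in top:
    print(candidates[i], f"{scores[i]:.3f}")
```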

With these new possibilities, McLaughlin cautioned, come important questions about risks and regulation. Quantitative AI is much more likely than large language models to be used for critical functions such as medical decisions, financial market actions, and national security strategy. Where should regulatory oversight lie, and to what extent can model developers create guardrails to prevent undesirable or harmful uses of their technology? Who should own the rights to scientific advances developed with such models? And how should countries prepare for the “tech sovereignty wars” — in which nations compete for AI dominance — and the related cybersecurity issues?

McLaughlin noted that he was leaving his audience with more questions than answers. But, he concluded, whatever the path for AI, “the future is up to us.”