AI Agents Transform Legal Workflows: Insights from ILTACon
At ILTACon Day Three, the legal technology community delved deep into the transformative potential of artificial intelligence, with sessions primarily focusing on the burgeoning role of AI agents and the strategic imperatives for successful AI adoption within law firms. The discussions highlighted a nuanced approach to integrating these advanced tools, balancing innovation with pragmatic implementation and robust governance.
One key session, “Orchestrating Intelligence: AI Agents in the Legal Space,” explored the definition, impact, and future of AI agents. Speakers Lisa Erickson, Matt Zerweck, Adam Ryan, and Joel Hron clarified that AI agents distinguish themselves from traditional AI by functioning as “goal-oriented systems” capable of understanding context, planning actions, and executing tasks autonomously. Unlike static tools, agents operate more like “a good co-worker,” grasping objectives, leveraging available resources, and seeking guidance when necessary. The fundamental difference lies in directing agents on what to achieve rather than dictating how to achieve it.
The strategic impact of these agents is profound. Matt Zerweck noted their capacity to “enable people just to do much more than they were ever able to do before… at a higher quality,” while Joel Hron emphasized their ability to “amplify the most human parts of the job and certainly the most difficult parts.” As agents gain autonomy, human oversight becomes increasingly critical, shifting the focus from manual workflows to optimizing verification speed through transparent citation and source tracking.
Current applications of AI agents are already yielding significant benefits. In email processing, agents proactively understand context and execute tasks like responding to inquiries or generating pitch materials. For document drafting, users report impressive 50-70% time savings in reaching early drafts, with improved consistency achieved by embedding firm and client preferences. Legal research, described by Joel Hron as “the most profound example of agents,” shows over 60% time savings, even discovering new arguments in complex cross-jurisdictional litigation. Agents also excel in contract analysis, identifying standard terms, flagging non-standard provisions, and proactively identifying risks across portfolios. Training these agents involves a three-pillar approach: developing core logical processes for planning and reasoning, creating purpose-built APIs for agent use, and providing comprehensive context through access to both proprietary and third-party data.
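To make that three-pillar description concrete, here is one minimal way the pattern could be wired together in Python: a simple goal-oriented loop (planning and reasoning), a pair of narrow, purpose-built tool functions (the agent-facing APIs), and a context object holding proprietary and third-party data. Every name below is hypothetical and illustrative; no specific vendor product or panelist implementation is implied.

```python
# Hypothetical sketch of the three-pillar pattern: (1) a goal-oriented planning
# loop, (2) purpose-built tool APIs the agent can call, and (3) contextual
# grounding in proprietary and third-party data. Names are illustrative only.
from dataclasses import dataclass


@dataclass
class AgentContext:
    firm_precedents: dict[str, str]       # pillar 3: proprietary firm data
    third_party_sources: dict[str, str]   # pillar 3: licensed research content


# Pillar 2: narrow, purpose-built tools exposed to the agent
def search_precedents(ctx: AgentContext, query: str) -> list[str]:
    """Return the firm documents whose text mentions the query."""
    return [name for name, text in ctx.firm_precedents.items()
            if query.lower() in text.lower()]


def draft_clause(template: str, client_preferences: dict[str, str]) -> str:
    """Fill a clause template with stored firm and client preferences."""
    return template.format(**client_preferences)


# Pillar 1: a simple goal-oriented loop that plans steps toward an objective
def run_agent(goal: str, ctx: AgentContext, client_preferences: dict[str, str]) -> dict:
    plan = ["locate relevant precedents", "produce a first draft", "flag for human review"]
    sources = search_precedents(ctx, goal)
    draft = draft_clause(
        "This {agreement_type} is governed by the laws of {governing_law}.",
        client_preferences,
    )
    # Human oversight: sources are returned so a lawyer can verify the output quickly
    return {"plan": plan, "sources": sources, "draft": draft, "needs_review": True}


if __name__ == "__main__":
    ctx = AgentContext(
        firm_precedents={"msa_2023.docx": "Master services agreement, New York governing law"},
        third_party_sources={},
    )
    result = run_agent(
        goal="services agreement",
        ctx=ctx,
        client_preferences={"agreement_type": "Master Services Agreement",
                            "governing_law": "New York"},
    )
    print(result["draft"])
    print("Sources to verify:", result["sources"])
```

The design choice worth noting is the last step: rather than hiding its work, the agent hands back its plan and sources, which is what lets a human reviewer verify quickly instead of redoing the task.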
Looking ahead, panelists envision an “ecosystem of agents that develop and communicate and collaborate with each other more effectively” within the next decade. Matt Zerweck anticipates “proactive intelligence,” where agents reach out with suggestions before being asked. A crucial prerequisite for this future, as Adam Ryan underscored, is for successful firms to possess “really good structured data sets of their firm’s experience.” Practical implementation considerations include providing agents with high-quality information, starting with simpler tasks, ensuring human review of agent outputs, and maintaining proper access controls; a brief sketch of the latter two safeguards follows below. Ultimately, Joel Hron put it succinctly: “However big you think this is going to be in five years, it will be even bigger than that probably.” The panel echoed that sentiment, agreeing that while the technical capabilities already exist, strategic transformation requires firms to build robust data foundations and verification processes.
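Two of those considerations, human review of agent outputs and proper access controls, lend themselves to a similarly small illustration. The snippet below is a hypothetical sketch rather than a description of any firm’s actual setup: a role-based permission check on agent tool calls, and a review gate that refuses to release unreviewed output.

```python
# Hypothetical sketch of two safeguards: role-based access control on agent tool
# calls, and a human review gate before any agent output is released. Roles,
# tools, and names are illustrative; no actual firm policy is described here.
from dataclasses import dataclass
from typing import Optional


@dataclass
class User:
    name: str
    roles: set[str]


# Which roles may trigger which agent tools (higher-risk tools, tighter control)
TOOL_PERMISSIONS = {
    "search_precedents": {"associate", "partner"},
    "send_client_email": {"partner"},
}


def can_invoke(user: User, tool: str) -> bool:
    """Access control: allow a tool call only if the user's roles permit it."""
    return bool(user.roles & TOOL_PERMISSIONS.get(tool, set()))


def release_output(draft: str, reviewed_by: Optional[str] = None) -> str:
    """Human review gate: refuse to release output nobody has signed off on."""
    if not reviewed_by:
        raise PermissionError("Agent output requires human review before release")
    return f"{draft}\n[Reviewed by {reviewed_by}]"


if __name__ == "__main__":
    associate = User(name="A. Example", roles={"associate"})
    print(can_invoke(associate, "search_precedents"))  # True
    print(can_invoke(associate, "send_client_email"))  # False
    print(release_output("Draft indemnity clause ...", reviewed_by="J. Partner"))
```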
A second key session, “Actionable AI Strategy & Policy,” tackled the practicalities of AI integration. A poll revealed that nearly 50% of attendees were actively piloting AI tools, suggesting adoption is widespread but still early-stage. Key debates centered on the optimal approach to AI strategy. On strategy versus experimentation, Anna Corbett advocated a “balanced approach” of iterative development within a flexible governance framework; that position ultimately prevailed over starting with a rigid strategy or relying on purely emergent experimentation.
Regarding policy development, Sukesh Kamra’s argument for a “policy at the outset” for the regulated legal industry gained consensus, prioritizing initial guidelines over Christian Lang’s focus on structural safety alone or Anna Corbett’s view of policies maturing over time. For technology investment, Anna Corbett’s recommendation for “investing in those foundational enterprise AI tools that are going to immediately enhance daily productivity” was favored, contrasting with calls for extensive readiness assessments or purely R&D-focused experimentation. The debate on transformational versus incremental change yielded mixed views, with some predicting rapid, profound shifts where traditional legal skills might become less relevant, while others emphasized a blend of short-term efficiency gains and long-term transformation dependent on organizational leadership and culture.
Insights from a “lightning round” further illuminated challenges: politics were deemed harder than technology in driving adoption, though one dissenter pointed to lawyers’ fundamental familiarity with AI as the real barrier. The majority opposed the idea of a Chief AI Officer by 2026, viewing AI as an integrated component rather than a standalone domain. Opinions on how to structure AI ownership diverged, though the group broadly favored cross-functional approaches over single-department ownership.
The session concluded by outlining a three-step implementation framework: first, conduct a readiness assessment to evaluate change tolerance, define success metrics, and assess architectural and training capabilities; second, deploy foundational AI tools, starting with productivity-focused platforms and ensuring lawyers understand AI basics; and third, implement flexible governance, establishing baseline safety requirements without letting risk concerns stifle opportunities, prioritizing structural safety over mere policy compliance. Panelists underscored that the “fundamental barrier to adoption [is] getting lawyers to use prompts” and that “who gets it and who uses this are going to be the people who are going to drive the change.” As Christian Lang starkly warned, “Anyone who truly believes that this is incremental improvement technology and we plan on doing business for the next five, ten years the same way fundamentally that we do today, I think you’re going to be out of a job.”
Beyond the intensive discussions, ILTACon Day Three also featured a lighter moment: a fun run organized by Draftwise in collaboration with legal tech expert and consultant Ari Kaplan. Across panels and audience alike, the conference showed clear alignment on the nuanced, evolving nature of AI implementation in law, favoring balanced strategies that blend meticulous planning with practical experimentation within adaptable governance frameworks.