Harrity attorney Alexander Zajac provides a thorough analysis on IPWatchdog examining how the use of large language models (LLMs) aligns with attorney confidentiality obligations—a topic of growing importance as AI tools become increasingly integrated into legal practice.
The Confidentiality Question
As attorneys increasingly turn to AI-powered tools to assist with research, drafting, and analysis, a fundamental question has emerged: does submitting prompts to remote LLM services compromise an attorney's duty of confidentiality to their clients? Zajac's analysis provides a measured and practical framework for answering this question.
Comparing LLMs to Existing Digital Tools
Central to Zajac's argument is the comparison between remote LLM data storage and the digital tools attorneys already rely on daily. Attorneys routinely use web search engines and cloud storage services that process and temporarily store user data on remote servers. These tools have been widely accepted in legal practice without raising significant confidentiality concerns.
Zajac scrutinizes the data retention policies of major providers, including Google and OpenAI, demonstrating that the security measures and data handling practices of leading LLM providers are comparable to—and in many cases exceed—those of other digital services attorneys use without hesitation.
The Consistency Argument
The core of the analysis rests on a straightforward logical premise: if the legal profession trusts standard digital tools, then LLM security measures, which are built on similar or more advanced infrastructure, should be considered equally reliable. It would be inconsistent to accept the data handling practices of cloud email providers, document management systems, and search engines while simultaneously rejecting those of LLM services that employ comparable or superior protections.
Enhancing Privacy
Zajac also notes that attorneys have the ability to adjust settings to enhance privacy when using LLM tools. Many providers offer options to:
- Opt out of having prompts used for model training
- Enable enhanced data deletion policies
- Use enterprise-grade accounts with additional security controls
- Configure data residency and retention settings
These configurable options give attorneys significant control over how their data is handled, further mitigating confidentiality concerns.
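The options above can be thought of as a checklist that a firm reviews before adopting an LLM tool. The sketch below illustrates that idea in Python; every name in it is hypothetical and chosen for illustration only, since real providers expose these controls through account dashboards or API parameters under their own names and defaults.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical settings object for illustration only; real providers name
# and expose these controls differently (dashboards, API flags, contracts).
@dataclass
class LlmPrivacySettings:
    opt_out_of_training: bool = False        # exclude prompts from model training
    zero_data_retention: bool = False        # delete prompts after processing
    enterprise_account: bool = False         # enterprise-grade security controls
    data_residency_region: Optional[str] = None  # e.g. "us" or "eu"

def privacy_checklist(settings: LlmPrivacySettings) -> list:
    """Return the privacy controls still left at their (weaker) defaults."""
    unaddressed = []
    if not settings.opt_out_of_training:
        unaddressed.append("prompts may be used for model training")
    if not settings.zero_data_retention:
        unaddressed.append("prompts may be retained by the provider")
    if not settings.enterprise_account:
        unaddressed.append("consumer-tier account lacks added controls")
    if settings.data_residency_region is None:
        unaddressed.append("data residency is unspecified")
    return unaddressed

# A conservative configuration addresses every item on the checklist.
conservative = LlmPrivacySettings(True, True, True, "us")
print(privacy_checklist(conservative))  # → []
```

The point of the sketch is not the code itself but the review it encodes: each default that a firm leaves unchanged is a documented, deliberate decision rather than an oversight.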
The Intersection of Technology and Legal Ethics
Zajac's analysis sits at the critical intersection of technology and legal ethics—a space that will only grow in importance as AI capabilities expand and adoption accelerates across the legal profession. Rather than approaching AI tools with reflexive caution, the analysis encourages attorneys to evaluate these tools using the same rational framework they apply to other technology decisions.
The question is not whether AI tools are perfectly secure, since no digital tool is. The question is whether they meet the same reasonable standard of protection that the profession already accepts for the digital tools integral to modern legal practice.
As the legal profession continues to navigate the rapid evolution of AI technology, thoughtful analyses like Zajac's provide essential guidance for practitioners seeking to leverage these powerful tools while maintaining their ethical obligations.