A detailed report from NYU’s Journal of Legislation and Public Policy sheds light on a critical issue for businesses, including law firms: the potential legal liabilities of using generative AI. Co-authored by EqualAI CEO Miriam Vogel, former Homeland Security Secretary Michael Chertoff, and others, the report underscores a widespread misconception — that liability for AI-related outcomes rests solely with the developers of these technologies.
For attorneys and small business owners, this misconception can be dangerous. As Vogel explains, “There are so many laws on the books that people need to know are applicable to generative AI.” From lending and housing regulations to employment law, the use of AI—even indirectly—can expose firms to significant risks.
What the NYU Report Reveals
The 100-page report provides a comprehensive overview of how existing laws, in the U.S. and abroad, apply to generative AI. Although the U.S. lacks legislation specifically governing AI use, courts and regulators expect companies to ensure compliance with existing laws, whether or not AI is involved.
Key takeaways include:
• Applicability of Existing Laws: Regulators are increasingly affirming that laws governing housing, lending, and employment apply regardless of whether decisions are made by humans or AI systems.
• Hidden Risks of Unauthorized AI Use: Companies may unknowingly rely on AI tools when employees experiment with free or unauthorized software. These tools often lack the sophistication of enterprise-level solutions and can perpetuate harmful biases.
• Blurred Lines of Liability: Using generative AI tools doesn’t shield companies—or their legal advisors—from liability. In fact, failure to understand and manage these risks can amplify exposure.
Implications for Attorneys and Firms
For attorneys, the stakes are particularly high. Legal professionals must not only protect their firms but also advise clients navigating this complex landscape. Here’s what the report highlights as key concerns regarding generative AI:
1. Client Confidentiality and Data Protection:
Generative AI tools process large amounts of data, and improper use could lead to breaches of sensitive client information. Attorneys should ensure any AI tools comply with data protection laws like HIPAA or GDPR.
2. Bias and Ethical Responsibilities:
AI systems trained on biased datasets can produce discriminatory outcomes, potentially violating anti-discrimination laws. This is especially relevant in hiring, lending, and housing, where regulatory scrutiny is high.
3. Regulatory Compliance:
Attorneys must recognize that existing laws apply to AI decisions. A failure to comply could lead to litigation or regulatory penalties.
4. Unauthorized Use Within Firms:
Employees experimenting with free AI tools can introduce risks, including improper data handling and reliance on faulty or biased outputs.
Steps Attorneys Can Take to Mitigate Risk
The NYU report doesn’t aim to discourage AI adoption. Instead, Vogel emphasizes that the goal is to help organizations use AI responsibly. Attorneys can start by taking these steps:
• Understand the Technology:
Familiarize yourself with the AI tools being used in your firm or by your clients. Know how they work, what data they use, and what legal liabilities they may pose.
• Develop Clear AI Policies:
Establish policies for using AI in your firm, including guidelines for data privacy, bias monitoring, and authorized tools.
• Educate Your Team:
Provide training for attorneys and staff on the ethical and legal implications of AI. Emphasize the importance of understanding the technology before relying on it.
• Regularly Audit AI Use:
Review how AI is being used within your firm and by your clients. Identify unauthorized tools and assess whether AI outputs align with legal and ethical standards.
Looking Forward: AI and the Law
The NYU report serves as a wake-up call for businesses and attorneys alike. The use of generative AI doesn’t eliminate liability; it shifts liability in new and sometimes unexpected ways. Attorneys have a critical role in helping businesses navigate these risks and in ensuring that AI is used ethically, responsibly, and in compliance with the law.
By integrating insights from the report, attorneys can position themselves as trusted advisors in this rapidly evolving area. Generative AI is a powerful tool, but it must be wielded with care. For law firms and small businesses, understanding the legal implications today can prevent costly mistakes tomorrow.