Ethical AI Law Institute

Empowering Responsible AI Usage for Practicing Attorneys

Legal Liabilities of AI for Attorneys and Small Firms


Many small firms and solo attorneys could be in for a nasty shock when it comes to the use of AI. A detailed report from NYU’s Journal of Legislation and Public Policy is shedding light on the potential legal liabilities of using generative AI. Co-authored by EqualAI CEO Miriam Vogel, former Homeland Security Secretary Michael Chertoff, and others, the report underscores a widespread misconception—that liability for AI-related outcomes rests solely with the developers of these technologies.

For attorneys and small business owners, this misconception can be dangerous. As Vogel explains, “There are so many laws on the books that people need to know are applicable.” From lending and housing regulations to employment law, the use of AI—even indirectly—can expose firms to significant risks.

What the NYU Report Reveals

The 100-page report provides a comprehensive overview of how existing laws, in the U.S. and abroad, apply to generative AI. Although the U.S. lacks legislation specifically governing AI use, courts and regulators have made clear that companies remain responsible for complying with existing laws, whether or not AI is involved.

Key takeaways of the report include:

  • Applicability of Existing Laws: Laws governing housing, lending, and employment apply regardless of whether humans or AI systems make decisions. Regulators are increasingly affirming this.
  • Hidden Risks of Unauthorized AI Use: Companies may unknowingly use AI tools through employees experimenting with free or unauthorized software. These tools often lack the sophistication of enterprise-level solutions and can perpetuate harmful biases.
  • Blurred Lines of Liability: Using generative AI tools doesn’t shield companies—or their legal advisors—from liability. In fact, failure to understand and manage these risks can amplify exposure.

Implications for Attorneys and Firms

For attorneys, the stakes are particularly high. Legal professionals must not only protect their firms but also advise clients navigating this complex landscape. Here’s what the report highlights as key concerns:

  1. Client Confidentiality and Data Protection:
    Generative AI tools process large amounts of data, and improper use could expose sensitive client information. Attorneys should ensure that any AI tools they adopt comply with applicable data protection laws, such as HIPAA or the GDPR.
  2. Bias and Ethical Responsibilities:
    AI systems trained on biased datasets can produce discriminatory outcomes, potentially violating anti-discrimination laws. This is especially relevant in hiring, lending, and housing, where regulatory scrutiny is high.
  3. Regulatory Compliance:
    Attorneys must recognize that existing laws apply to AI decisions. Failing to comply could lead to litigation or regulatory penalties.
  4. Unauthorized Use Within Firms:
    Employees experimenting with free AI tools can introduce risks, including improper data handling and reliance on faulty or biased outputs.

Steps Attorneys Can Take to Mitigate Risk

The NYU report doesn’t aim to discourage AI adoption. Vogel emphasizes that the goal is to help organizations use AI responsibly. Attorneys can start by taking these steps:

  • Understand the Technology:
    Familiarize yourself with the AI tools being used in your firm or by your clients. Know how they work, what data they use, and what potential risks they pose.
  • Develop Clear AI Policies:
    Establish firm-wide policies for AI use that include guidelines for data privacy, regular monitoring for bias, and a clear list of authorized tools.
  • Educate Your Team:
    Provide training for attorneys and staff on the ethical and legal implications of AI. Emphasize the importance of understanding the technology before relying on it.
  • Regularly Audit AI Use:
    Review how AI is being used within your firm and by your clients. Identify unauthorized tools and assess whether AI outputs align with legal and ethical standards.

Looking Forward: AI and the Law

The NYU report serves as a wake-up call for businesses and attorneys alike. The use of generative AI doesn’t eliminate liability; instead, it shifts liability in new and sometimes unexpected ways. Attorneys have a critical role in helping businesses navigate these risks and in ensuring that AI is used ethically, responsibly, and in compliance with the law.

By integrating insights from the report, attorneys can position themselves as trusted advisors in this rapidly evolving area. Generative AI is a powerful tool, but it must be wielded with care. For law firms and small businesses, understanding the legal implications today can prevent costly mistakes tomorrow.
