

If one thing is certain, it is that artificial intelligence (AI) is rapidly transforming many aspects of our lives, including our legal systems. The same patterns of AI adoption visible across industry after industry are now emerging in the modern law firm. From performing legal research to drafting documents, predicting outcomes, and interacting with clients, the modern law firm is reaching for AI. But should we, can we, and what parameters and circumstances inform and guide that decision? As with all of AI, these questions are only the beginning. This brief article offers both a description and an exploration of various positions on these topics.

The output of AI raises ethical considerations focused primarily on two things: accuracy (with fact-checking) and transparency. In the law, a small error or oversight can make a big difference to the lives of people, to companies, or to the conduct of a complex piece of litigation. Law firms must put validation checks in place to ensure that any output documents and analyses are accurate. Every lawyer knows that their professional life is on the line if they get it wrong; the same logic necessarily applies to AI in the law. Transparency gives lawyers, and the public, the confidence to use AI and the ability to orient themselves around its interface: as with training lawyers, training AI requires a logical approach to its design and iterative testing to ensure a sound legal and technological outcome. It also allows for a clear development direction and easier bug fixing. Insisting that the AI and other technology staff in your firm adopt this approach will help keep the firm safe from mistakes.
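One concrete form such a validation check could take is an automated cross-check of citations in an AI-drafted document against a firm-maintained list of verified authorities. The sketch below is purely illustrative: the citation pattern and the known-citation set are hypothetical placeholders, not a real citator.

```python
import re

# Hypothetical sketch: flag case citations in a draft that do not appear
# in a firm-maintained list of verified authorities. Both the regex and
# the known-citation set are illustrative placeholders.
KNOWN_CITATIONS = {"410 U.S. 113", "347 U.S. 483"}
CITATION_RE = re.compile(r"\b\d{1,4} U\.S\. \d{1,4}\b")

def unverified_citations(draft: str):
    """Return citations found in the draft that are not on the verified list."""
    return sorted(set(CITATION_RE.findall(draft)) - KNOWN_CITATIONS)

draft = "Compare 347 U.S. 483 with the reasoning in 999 U.S. 999."
print(unverified_citations(draft))  # ['999 U.S. 999']
```

A real system would need far richer citation formats and a live citator, but even this shape makes the point: flagged items go to a human for review rather than silently into a filing.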

Another important dimension of ethically using generative AI at a law firm is meeting legal and regulatory standards. While there are many laws and regulations from which one could choose, for present purposes I want to focus on (1) general data privacy laws and (2) otherwise applicable rules of professional conduct. It is essential to keep your clients' confidences, both well known and well kept. Like your personally diligent and demanding (we hope) self, you want the AI to have as its primary directive to maintain confidentiality, and only then to do no harm. To this end, you should have certain security features in place; you need these already. You should (hopefully) be making "reasonable efforts to prevent . . . unauthorized access" to your confidential information. And even before that, you may have to get "informed consent" in writing to use cloud computing at all.
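One small, practical safeguard in this spirit is scrubbing obvious client identifiers from text before it ever leaves the firm for a cloud AI service. The sketch below is a minimal illustration only: the patterns and function names are assumptions, and pattern matching alone is nowhere near a complete confidentiality program.

```python
import re

# Illustrative patterns only -- a real deployment would need a far more
# thorough inventory of identifiers (names, matter numbers, addresses, etc.).
REDACTION_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "PHONE": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace matches of each pattern with a labeled placeholder."""
    for label, pattern in REDACTION_PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Client Jane Roe (jane.roe@example.com, 555-867-5309) disputes the claim."
print(redact(prompt))  # Client Jane Roe ([EMAIL], [PHONE]) disputes the claim.
```

The design point is that redaction happens on the firm's side of the wire, before any third-party system sees the text.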

To conclude, generative AI offers many interesting uses in the legal sector. But with it comes a wide range of potential ethical land mines around accuracy, transparency, and compliance. Law firms will need to take a strategic approach, introducing the technology in a safe, ethical way and building protections into their practice as part of its adoption. Only then can they ensure that the AI software they use never threatens the principles of the practice, and that they remain on the right side of justice and integrity.


AI Regulation and Governance

AI regulation and governance in law firms must now be high on your agenda. AI is transforming the industry, bringing a new wave of regulatory compliance to navigate. The regulations that do exist concerning AI are broadly centered on the ethical and responsible use of AI applications. The themes covered by AI regulations are understandably those of data privacy, algorithmic transparency, and accountability. In parallel, legal professionals need to develop their practice mindful of the emerging global data privacy laws—the GDPR being the best-known example. Likewise, local regulatory bodies, such as the Solicitors Regulation Authority (SRA), the Law Society, and the General Council of the Bar, will publish revisions to their guidelines in light of the transformative effect of AI on the profession.

If your law firm is like most, developing a proactive regime of AI governance would likely follow best practices that include the following:

Development of widely promulgated and clear policies and procedures for the use of AI
Transparent processes for those who will be making AI use decisions
Exceptionally clear and rigorous data security protocols
An AI (practice or functional) or Firm Ethics Committee that takes the time to parse active ethical issues
A snapshot or static listing of all AI initiatives the firm is launching or actively considering
An ongoing committee agenda that can tie ethics issues to AI initiatives in an organized and timely manner
Regular (likely annual) clinical or audit-based AI risk assessment of firm AI systems, AI strategy, and AI goals
The complacent will be among AI's losers, so also make sure that your firm develops the will to change and sustains a continuing legal education and staff training function that features robust AI content.
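The "snapshot or static listing of all AI initiatives" in the list above can start as something very simple: a structured register the ethics committee can query. The sketch below is a hypothetical shape for such a register; every field name and entry is illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class AIInitiative:
    """One row in a firm's AI inventory -- fields are illustrative."""
    name: str
    owner: str
    status: str                      # e.g. "proposed", "pilot", "live"
    data_categories: list = field(default_factory=list)
    ethics_reviewed: bool = False

registry = [
    AIInitiative("Contract triage pilot", "KM team", "pilot",
                 ["client contracts"], ethics_reviewed=True),
    AIInitiative("Research summarizer", "Library", "proposed",
                 ["case law"]),
]

# Items the ethics committee still needs to review
pending = [i.name for i in registry if not i.ethics_reviewed]
print(pending)  # ['Research summarizer']
```

Keeping the inventory in a queryable form is what lets the committee agenda "tie ethics issues to AI initiatives in an organized and timely manner," as the list suggests.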

Looking ahead, further AI regulations will probably require more, not less: mandatory impact assessments for AI systems, higher standards for the comprehensibility of AI, and stiffer penalties for non-compliance. Law firms will not be able to pivot overnight to meet these coming regulatory winds of change. Designing new, compliant systems is likely to take law firms years, not weeks. The cost of the ancillary AI infrastructure needed to remain compliant promises to be steep, making it incumbent upon Big Law's leaders to stay informed about the latest developments in AI so that they can start planning today for what the law firm of the mid-21st century may look like.


Ethical Considerations in AI Implementation

In legal practice, where the stakes are high and the results can be life-altering, ethical AI is a must. The idea is that law firms will adopt AI technologies that have ethics built into them. Ethical AI will permeate the many tools that people use in legal contexts, and it drives up the quality and integrity of the overall legal system as well. After all, the number one rule in law is to be ethical, right? It will also make law firms look better to their clients and (why not?) to the public.

The principles of ethical AI in the legal sector are transparency, accountability, and fairness. Transparency means that the workings and decisions of AI systems are explainable and can be understood by humans. It helps to eliminate the "mystery" of AI and build trust in it, both within the legal sector and among consumers of legal services. Accountability ensures that when things go wrong (as they inevitably will), or when AI systems make decisions involving moral dilemmas, someone can be held to account (good to know for the personal injury lawyers and whistleblowers among us). Fairness guards against the narrowing effect of AI, since the repetition of data and reiteration of solutions can lead to a lack of diversity in outlook and perspective. Here, fairness means more than simply being free from bias: people want to know that outcomes will be reasonable and that decisions will be made with regard to universal standards.

Ethical AI has already been used successfully in law firms. Consider that some firms use AI to review discovery materials, which can run to hundreds of thousands of documents, to ensure that no stone is left unturned in establishing the facts. The transparency expected of the AI means it delivers its rationale and explains why its choices mattered. There are also AI systems that help prepare for cases by predicting outcomes, giving lawyers and the firms' clients a competitive edge by helping shape strategy with solid metrics. Like a sharp defense attorney with a rebellious streak, these AI systems are programmed with accountability as their guide: they are regularly audited to ensure the accuracy of their logic and to confirm that good old human judgment has not been displaced. All of this shows that the high purpose of AI in the law is to practice with high ethical standards and deliver service for the common good.


Legal Technology and Compliance

Securing "AI legality" within the legal profession is not a single undertaking. Rather, it means making AI legal in several respects:

– Technical: The algorithm is programmed to comply with human laws and regulations.
– Processual: The deployment, integration, modification, and use of the AI follow lawful processes.
– Professional: The use of and “collaboration” with the algorithm keep the lawyer “fit and proper” to continue practicing law.

To keep the legal formula (the "Input-Legal Principles-Output" black box) compliant, the algorithm may need to be able to drop data it has used ("data-in"), deleting datasets and/or feature variables (columns) so that continued use of the AI system remains lawful. Links to more information are compiled in the "Resources" section below.
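The feature-level deletion described above, removing whole columns that can no longer lawfully be used, can be sketched in a few lines. The record layout and column names below are hypothetical; the point is only that deletion happens at the feature level, not record by record.

```python
# A minimal sketch of "data-in" deletion at the feature level: removing
# columns that can no longer lawfully be used, e.g. after a consent
# withdrawal. Column names here are hypothetical.
records = [
    {"matter_id": "M-001", "outcome": "settled", "client_email": "a@example.com"},
    {"matter_id": "M-002", "outcome": "trial",   "client_email": "b@example.com"},
]

def drop_features(rows, forbidden):
    """Return copies of each row with the forbidden columns removed."""
    return [{k: v for k, v in row.items() if k not in forbidden} for row in rows]

cleaned = drop_features(records, {"client_email"})
print(cleaned[0])  # {'matter_id': 'M-001', 'outcome': 'settled'}
```

In a real pipeline the same operation would be applied to the training data itself, and the model retrained, since dropping a column from future inputs does not remove what an already-trained model has learned from it.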

Companies can implement several strategies to stay compliant while using generative AI. To start, the data used to train a model must be legally obtained and ethically sourced, so implementing a sound data governance framework is a must. Regular audits and assessments of AI systems can help surface compliance issues before they become a problem. Transparency in AI operations also matters, especially around algorithms and decision-making, because it helps regulators see how a company is meeting regulatory requirements. Lawyers with legal technology expertise may also help with such projects.

Legal technology, or "legal tech," is vital to this compliance discussion. Even simple tools, such as an AI contract analysis program, create a quick pathway to identifying compliance problems in contracts and disclaimers. A reliable compliance management system uses continuous monitoring at scale to catch the important terms and raise the alerts that provide significant value. Legal tech may eventually become directly responsible for the compliance task at many entities and businesses, even though legal tech alone cannot guarantee full compliance.
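At its simplest, the contract-analysis idea mentioned above is a watch-list scan: flag clauses containing terms the compliance team has marked for human review. The sketch below is hypothetical; the term list and the plain-substring matching are illustrative stand-ins for what a production tool would do with real NLP.

```python
# Hypothetical sketch: flag contract clauses containing terms a
# compliance team has marked for review. The term list is illustrative.
WATCH_TERMS = {"indemnify", "auto-renew", "unlimited liability"}

def flag_clauses(clauses):
    """Return (index, matched terms) for each clause containing a watch term."""
    flags = []
    for i, clause in enumerate(clauses):
        hits = {t for t in WATCH_TERMS if t in clause.lower()}
        if hits:
            flags.append((i, sorted(hits)))
    return flags

contract = [
    "The vendor shall indemnify the client against third-party claims.",
    "Fees are payable within 30 days.",
    "This agreement shall auto-renew annually unless terminated.",
]
print(flag_clauses(contract))  # [(0, ['indemnify']), (2, ['auto-renew'])]
```

Even this naive version captures the workflow the text describes: the tool narrows hundreds of clauses down to the few that need a lawyer's attention, and the alerts, not the tool, carry the value.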


Understanding Generative AI in the Legal Context

Generative AI is a form of artificial intelligence that learns from existing data to write text, draw pictures, and even write code. Where traditional AI follows a predefined set of rules, generative AI works with patterns learned by machine learning models. It tries to be just creative enough to pass as human, the ultimate AI Turing test, in a way. This area of AI has progressed greatly and can be applied to a number of professional tasks, including in law, where its answers are often close but not exact.

Law firms are applying generative AI in several ways. Prominent among these applications is the drafting of legal documents. AI can generate contracts, briefs, and other legal documents by scanning existing legal text and extracting relevant information, which speeds the pace of legal drafting and, with proper review, supports a more accurate legal product. In legal research, generative AI can automatically summarize case law and statutes, making it easier for practitioners to quickly pull up the relevant statutory language or case law.

There are several reasons that law practices should care about generative AI. Primarily, these algorithms can help us out in two ways. First, they help simply by reducing the amount of time and effort spent doing routine tasks. These tasks can be outsourced to a machine, which allows us humans to focus on work that is more creative, strategic, and/or client-facing. Second, they can, only very tentatively, help improve the quality of work. With the ultimate goal of increased accuracy and consistency, these programs are aiming to reduce mistakes caused by human error. And, who wouldn’t want to offer a better product or service? Beyond this, there are economic benefits to consider; the less manual (human) labor is required to perform tasks, the more potential there is for cost savings. We are always looking to improve efficiency and increase productivity.


Balancing Innovation and Ethics

Law firms face an inherently difficult balancing act when it comes to AI innovation and adoption. On the one hand, the sophistication of these capabilities offers more efficient and potentially better data analytics and results. But this advantage must be weighed against the strict ethical guidelines that attorneys are obligated to follow. For example, attorneys must treat privilege and confidentiality as paramount whenever AI technology is used to process and analyze client data. Alongside these considerations is how security protocols will affect the storage, transmission, and accessibility of this data by the firm's employees. Finally, attorneys should also be aware of potential biases in AI algorithms when assessing how, and under what circumstances, using AI to assist in litigation is both ethical and permissible.

Integrating AI in ways that comply with legal ethics turns out to be harder than it sounds. Here are some of the main challenges:

Confidentiality: How do firms avoid inadvertently compromising client confidentiality when training their AI systems, and when using AI to manage their information or communicate with clients? Even the unsophisticated data-sorting software currently in use can be devilishly tricky to manage in ways that fully comply with the ethical duty of confidentiality, given all the details associated with electronic data sorting, cloud storage, and the like.

Impartiality: How do firms get around the fact that AI algorithms are human products derived from existing data sets, which injects all kinds of potential bias into the AI's functioning, much of it essentially unprovable? The various forms of bias built into AI are now well documented. But the devil lurks in the details: programmers must regularly make and document judgment calls of their own, so the structural characteristics of AI systems will be tweaked in ways the data does not cover, perhaps with further tweaks by the AI system itself at various data-driven sync points.
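One of the few parts of the impartiality problem that is straightforwardly measurable is disparity in outcomes across groups in a dataset. The sketch below computes per-group selection rates; the field names, the data, and the idea of comparing raw rates are all illustrative simplifications of what a real fairness audit would involve.

```python
# A minimal disparity check, assuming a labeled dataset with a group
# attribute. Field names and records are hypothetical.
def selection_rates(rows, group_key, outcome_key):
    """Return the fraction of favorable outcomes per group."""
    totals, favorable = {}, {}
    for row in rows:
        g = row[group_key]
        totals[g] = totals.get(g, 0) + 1
        favorable[g] = favorable.get(g, 0) + (1 if row[outcome_key] else 0)
    return {g: favorable[g] / totals[g] for g in totals}

data = [
    {"group": "A", "selected": True},
    {"group": "A", "selected": True},
    {"group": "A", "selected": False},
    {"group": "B", "selected": True},
    {"group": "B", "selected": False},
    {"group": "B", "selected": False},
]
rates = selection_rates(data, "group", "selected")
print(rates)  # group A favored about twice as often as group B
```

A gap like this does not prove bias on its own, but it is exactly the kind of documented, repeatable measurement that the "regularly make and document decisions" obligation above calls for.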

Approaches to ethical AI adoption in law firms might include a number of strategies. Law firms may want to develop a set of guidelines or best practices for acceptable, "ethical" AI by engaging both sides of the issue: those who deeply understand the legal profession and those who deeply understand AI and can perform the actual development. Alternatively, a firm could commission one or more AI developers to create AI that adheres to a detailed specification of what ethical AI means. Law firms could also create an "Ethical AI Practice Group" or an AI oversight committee to police all AI technology used.

Law firms interested in shaping AI's future will also have to commit to making lifelong learning and CLE training a professional staple. Briefings can cover what legal professionals need to know about AI from a user's perspective: the do's and don'ts of AI ethics, decision guardrails, continuous learning, and remedial action training. Through this type of ongoing education and training, firms and their staff can keep everyone up to date and capable of adhering to established and evolving AI ethical and conduct standards.

AI is hugely important in law firms. Whether we notice it or not, AI is becoming an invisible part of the fabric of legal practice, and the issue is not solely how computers will behave. Law has always been about ethics; it is part of the training. How AI is applied within the practice of law is layered with practical and theoretical issues. As humans, one ethic we carry is a fair process applied equally to everyone; cleanly stated, ethical considerations wrap around procedural law. When humans make decisions, a number of learned biases creep in, and when AI is used in an adversarial setting to deliver a decision, the same scene plays out. Trust is built, maintained, and kept through the quality of the inputs. Just as legal practitioners respect client confidentiality, so should our systems respect client privacy, including freedom from spying: a client expects privacy while we apply the facts to the case. Lawyers are so often measured by being truthful, full of integrity, and consistent with ethical considerations, and we project (market) the same image with our written words and websites. In doing so, we signal an expected adherence to the legal ethics prescribed by legal regulators. Would it not be better for law firms to more fully express the constitutional rights of all humans? In practical application, justice is built on fair process.

For law firms to ethically implement AI, they need to stay current in two key areas: (1) the ever-changing laws and regulations shaping the legal landscape, and (2) the technology being developed as part of AI.

There is a continuing flow of new laws and guidelines across the globe addressing the challenges of AI. At the top of the list of must-dos for law firms is keeping track of all these changes, so that firms comply with the law and don't fall into ethical traps. As for the second point, the technology that continues to improve legal AI is worth following if firms want access to more efficient tools.

The laws and tech will carry on changing, so absolute adherence to continuing professional development is a must.

Generative AI could change everything for the legal profession, from automating routine tasks, to enhancing legal research, to improving client services. But this revolution must be managed responsibly, and this is where ethical AI comes in. When you think about the risks that need to be managed—data privacy, managing cultural and other biases potentially “baked in” to algorithms, and developing machine intelligence that does not have unpredictable and unintended (negative) consequences—the role of ethical AI becomes clearer. Ethical AI helps us to create an integrated, end-to-end approach to generative AI, helping the legal team to articulate the benefits of the technology and also to manage the many challenges and conflicts that will likely arise.

In conclusion, adopting ethical AI is central to the survival of law firms as the practice of law becomes more complex. Understanding emerging regulations in the context of developing technologies is critical, particularly if maximal benefit is to be gained from AI. The adoption of ethical standards for the use of generative AI technologies will enable law firms to be on the cutting edge of technological advancement while at the same time holding true to the principles of justice and equality for all that form the philosophical underpinnings of law.
