Historically, assistants were flesh-and-blood humans. As such, they did flesh-and-blood things. Classically, low-level interns get coffee. Personal or executive assistants might handle physical tasks too: taking phone calls, making photocopies, or (at least as the movies would have us believe) picking up gifts for the loved ones we forget about while working. At a higher level, more sophisticated assistants might manage a person's private calendar, ghostwrite emails, and make judgment-driven decisions about who matters enough to get on the calendar in the first place.
AI legal assistants necessitate conceptual clarity. The heuristic of "assistant" is useful for conveying that this category of software tools has a broad range of capabilities. Like a (good) human assistant, an AI legal assistant can help with a collection of tasks. Some tasks the AI may handle entirely through delegation. For others, the software serves as a collaborative partner, taking on portions of the work without completing the task on its own. Let me provide a brief example of each:
· A firm provided records of its cases to ChatGPT and asked it to draft copy for its website briefly describing the firm's best wins. The output was sufficient on the first pass. In this example, the tool entirely completed the task, operating like a fully competent assistant.
· An immigration lawyer's summer intern used 2nd Chair's legal AI assistant to summarize over 100 administrative decisions that were not indexed in Westlaw or LexisNexis. The AI drafted the summaries, but the intern didn't trust them on faith. He used the internal citation structure to verify that the summaries were accurate and to explore key facts relevant to his firm's work. Here the AI acted as a collaborating assistant that required managerial oversight.
In both cases, the AI operated as an assistant, similar to how human assistants might take on work. But digital assistants and human assistants are also dissimilar. Your AI assistant can’t bring you coffee. Your human assistant can’t process information at thousands of words per second.
Here’s a use that, depending on how much you wanted to use the software, you could do either through delegation or through collaboration. Imagine you’ve been working on a contentious case, and opposing counsel has just sent a particularly aggravating email filled with accusatory language. In your frustration, you quickly draft a response that’s equally sharp, eager to defend your position.
Here’s what the original email might look like:
Your Original Email:
“I can’t believe you would even suggest such a ridiculous settlement offer. This is clearly an attempt to undermine the entire negotiation process, and frankly, I expected better from a seasoned attorney like yourself. We won’t be entertaining any more of these insulting proposals.”
I pulled up two different AI assistants: a general-purpose one and one built just for lawyers. I gave each the same instructions:
I am a lawyer and I'm really angry. Rewrite this email to be more professional: “I can’t believe you would even suggest such a ridiculous settlement offer. This is clearly an attempt to undermine the entire negotiation process, and frankly, I expected better from a seasoned attorney like yourself. We won’t be entertaining any more of these insulting proposals.”
General AI-Edited Email:
“I was surprised by the settlement offer you proposed, as it seems to significantly undervalue the matter at hand. This approach does not reflect the seriousness with which we have been negotiating, nor does it align with the interests of either party.
I had expected a more constructive proposal, given your experience in these matters. At this point, we are unable to entertain any further offers of this nature.
I trust we can work toward a more reasonable and equitable resolution.”
Legal Specific AI-Edited Email:
“Your proposed settlement offer is not in line with our expectations for this negotiation. We trust that as an experienced attorney, you would present more reasonable terms. Moving forward, we will not consider further offers of this nature.”
With either of these outputs, a user might decide the result is sufficient. If so, we could call this a delegative use. If you instead wanted to draft and redraft the edited email, build out the AI's response, or otherwise iterate, the use would be collaborative.
AI legal assistants' capabilities vary dramatically by vendor. Users of 2nd Chair's assistant have used the tool for the following: reviewing medical records; summarizing long email chains associated with employee conduct alleged to violate policy; assisting in the drafting of a brief by rapidly reviewing 10 law review articles; storing private law firm documents in an easy-to-access place (including the Wi-Fi code!); creating an easy-to-use bot that a company's employees can chat with instead of going straight to the legal team; and reviewing a departing attorney's documents to ensure no company secrets were leaving.
Regardless of whether you use these tools delegatively or collaboratively, calling a legal AI tool an "assistant" can also constrain our usage. If you imagine the software as capable of doing only assistant things, you may only try to use it for assistant things. In doing so, you might miss a wealth of opportunities.
For any task that you’re considering for an AI to do, consider the following practices:
1. Provide detailed instructions and context in your prompts to get the most accurate and relevant responses from AI. Be as specific as possible about what you want the model to do or generate. This may include a story about who you are, or who you would like it to imagine itself as.
2. Pay attention to the form and structure of your prompts. Put instructions at the beginning of the prompt.
3. Use punctuation like commas and line breaks to clarify complex prompts.
4. Frame instructions in positive terms rather than negative ones. Imagine you're at a restaurant: saying "I don't want a sandwich or soup" as your only instruction to the waiter would likely not turn out well. Negative instructions ("rewrite, but use fewer examples") work better as follow-up prompts than as initial prompts.
5. AI tools often suffer from a "lost in the middle" phenomenon: they prioritize information at the beginning and end of a prompt but lose context along the way. Consider breaking complex tasks into smaller steps.
6. Some research suggests that being “nice” when interacting with an AI can yield better results (though earlier research disagrees).
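For readers who interact with these tools programmatically, the practices above can be sketched as a small prompt-building helper: persona and instructions up front, the task broken into numbered steps, and bulky context at the end. The function name and structure here are illustrative assumptions, not any vendor's actual API.

```python
def build_prompt(role, instruction, steps, context):
    """Assemble a prompt following the practices above: who the AI should
    imagine itself as, what to do (stated positively, placed first), the
    task broken into small numbered steps, and supporting context last."""
    lines = [
        f"You are {role}.",                    # practice 1: give it a persona
        instruction,                           # practice 2: instructions first
        "Work through these steps in order:",  # practice 5: smaller steps
    ]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines += ["", "Context:", context]         # long material goes at the end
    return "\n".join(lines)                    # practice 3: line breaks for clarity

prompt = build_prompt(
    role="a patient, professional colleague",
    instruction="Rewrite the email below in a firm but courteous tone.",
    steps=[
        "Identify the accusatory language.",
        "Restate each point neutrally.",
        "Close with a constructive next step.",
    ],
    context="I can't believe you would even suggest such a settlement offer...",
)
print(prompt)
```

The resulting string can then be pasted into (or sent to) whichever assistant you use; the point is the ordering, not the plumbing.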
This piece is just to get your brain going. No one knows your work like you do. Even if they know the area of law or subject matter, they don't know the way you perform work. AI legal assistant tools take trial and error (and often some patience) to integrate into your practice. Consider giving them the same patient support you would give a human assistant, especially as you get started. Run a good-faith trial period, test and experiment, and maybe one day your AI assistant will bring you a coffee.
Austin Brittenham is the CEO and Co-Founder of 2nd Chair, a legal-tech startup on a mission to streamline legal workflows through AI. A lawyer by trade with a background in corporate tech, Austin brings nearly a decade of AI product experience to the company.