Q&A with Co-founder Steve Kramarsky on the Rise of Artificial Intelligence
- Stephen M. Kramarsky
- Jun 13
- 8 min read
Updated: Jun 16
DPK co-founder Steve Kramarsky is an expert on AI technology. He has closely followed the development and progression of AI tools for almost forty years and the revolution that has taken place in those technologies since 2017. In this Q&A, he shares some thoughts about the current state of AI technology in the legal industry.
When did you start researching AI?
I’ve been looking at AI since I was a computer science student in the late 1980s, and what we talked about when I started studying this a million years ago was a very different thing. Some of the models that came out of that era of research were very useful, and they remain in use today, but when you hear the term “AI” these days, most people are using it as a shorthand for generative AI.
How have you seen AI change over the years?
In its current form, generative AI is something pretty specific, and it can mostly be traced back to a huge breakthrough in neural networks called the transformer—that’s the “T” in ChatGPT. Transformers are a computational architecture that Google researchers described in 2017 in a paper called “Attention Is All You Need,” and they are at the heart of essentially all current generative AI tools. We’ve come a long way in eight years, and there has been a ton of refinement, addition, and iteration in the space, but that specific innovation started the current boom less than a decade ago.
What challenges do you see arising from the use of AI in the legal industry?
I think the biggest challenge arising from the broad use of generative AI in particular, leaving aside all my qualms about its environmental costs and what it may do to the creative landscape generally, is that the commercial incentives of the people making these products do not necessarily line up with the best interests of the people using them, and that’s a problem. These models are expensive, not just to train, but to run, so for these products to be profitable they must be useful enough to pay for in a very broad range of cases. In fact, I think the use case is probably somewhat narrower than the sales force would have us believe. My concern is that the industry is not always interested in explaining the nuances, and sometimes even seems to want to hide them.
Over the past few months, there have been numerous instances of lawyers using AI tools in legal briefs or court filings and getting sanctioned or otherwise reprimanded. What are your thoughts on that?
Generative AI tools continue to produce terrible results for lawyers. It may be tempting to blame the tools themselves, but in this case the tools aren’t the problem; it’s the way they’re being marketed and used. Lawyers believe the hype, and they end up using the tools in ways they shouldn’t.
The fact is that AI is much better at looking trustworthy than actually being trustworthy. That’s not a bug; it’s a feature. It’s a part of the design of the system.
Large language models are designed to generate confident answers; they are not designed to know what the answer actually is, nor can they. Newer products, including the purpose-built legal AI tools and so-called “large research models,” go some way toward addressing that problem by augmenting the natural language functions with extremely advanced search results and iteration loops, and they add various guardrails to prevent hallucination, but so far these issues persist, even in the most advanced tools.
Unfortunately, I see lawyers fall into the same trap as everyone else. They know they have to check the work that the machine produces, but they run out of time, the machine is designed to appear confident, and it has probably been largely right in the past, so they submit something that’s full of errors and get themselves in hot water. Law is one of those rare professions where doing that can get you in real trouble. The researcher Damien Charlotin maintains an online database of cases in which litigants have been sanctioned for submissions that included AI hallucinations. He added 30 new cases in May alone, and the number is growing faster and faster. His work is here (https://www.damiencharlotin.com/hallucinations/) and it’s eye-opening. The problem isn’t going away, so we have to train for it.
What new policies or training do you think are needed?
Pretty clearly, these tools are helpful to people, and people like them. They can save time and save clients money when used properly, so we need to implement policies that address the challenges. Law firms and businesses need to have policies in place that say: if you’re going to use these tools, you should be using the ones that are purpose-built for the task (if that’s appropriate), and you must have human eyes on your output, because there are consequences if things get out unchecked. We’ve seen expert reports thrown out because of the use of AI.
We’ve seen lawyers get in trouble for that. It’s a matter of training people on how to use these tools and making sure they understand the limitations and appropriate use cases. Humans get sloppy and lazy and stressed, especially under deadlines, and the more convincing the tools get, the more likely we are to just assume that what the machine produces is good enough. There have to be policies in place, and enforced, to prevent that from happening. If you look at Damien Charlotin’s database, you’ll see lawyers using Claude and ChatGPT for brief writing.
There is no universe in which a lawyer should be using a general-purpose chatbot to write a legal brief. It’s inexcusable. If they understood what those products actually do, they wouldn’t dream of handing that output to a judge without going over it first.
Another topic that is widely discussed right now is the potential impact of AI tools on junior associates and junior employees in general. How do you see these tools impacting the tasks that are assigned to junior team members?
As a junior litigation associate, your job is frequently to read through an enormous corpus of material to condense it and determine what’s important. It can feel like a slog, and this is something of a cliché, but the hope is that in doing those tasks, you’re learning how to think like a lawyer, how to construct arguments like a lawyer, and how to isolate the elements of a case that you will need to help draft the briefs and make the arguments in the case. If we take that job away and say, well, you don’t need to do that anymore, because the AI can do it better than any junior associate, at least at a B, B+ level, then we’re not teaching anybody how to take the next step. We’re not creating the foundation we need to have senior associates and the next generation of partners.
That said, it’s not fair to ask clients to pay for that work if it can be done more efficiently by AI, however passionately I may believe in the need to train the next generation of human lawyers! So for us, it’s a balance of using the tools to achieve more efficient outcomes and reduce drudge-work, while also making sure that we understand the case, that humans are making the decisions, and that we have our hands and brains in the process.
What are the implications for the business of practicing law?
Every document review platform has an AI component now. What’s changed between the first generation and second generation and now third generation of these tools is how they’re integrating machine learning or AI tools into the platform. A decade ago, the machine learning model was static. Now, the models ride with you and push the documents that they think are most relevant to the top of the review, dynamically ranking as you go and maybe offering summaries and even some reasoning. I think all of that is fine. In fact, it’s great and extremely efficient. At the analysis stage, prior to document production, it can generate some great insights. But at the end of that process, somebody has to have looked at every document. I still believe that you have to have eyes on the documents; you can’t just rely on the AI. Unless you’ve been through them, you risk missing something crucial.
That gets back to my fundamental point, which is that in all these cases, with all of these tools, it becomes very tempting to rely on them in ways they were not built to be relied on.
Do you think that AI will really shift the business model in terms of the billable hour and how you deliver value for clients?
People have been writing about the death of the billable hour for 20 years, probably for 200 years. I think transactional lawyers have been moving away from the billable hour for a long time. We don’t have a transactional practice, but my understanding from people I’ve talked to is that these tools can introduce a great deal of efficiency for them, and I think for that kind of practice, that efficiency, both in drafting and in diligence, is something they can’t turn down. A lot of the transactional tools—large language models that are built to summarize large numbers of documents or generate boilerplate text—are operating in areas where these computational models really shine. Those are the core strengths of LLMs, and those tools can be very useful and provide a lot of value; but that’s not my practice.
How do you use AI?
For our practice, the most obvious applications come in large document corpus review and things like summarization, note taking, and translation. Those are all very useful areas and we’re using AI tools in all of them. I’ve also seen that people entering the profession now find the purpose-built legal search tools provided by our legal research vendors to be useful at the front end of search. Essentially, that’s just a different way to interact with the search engines we have used for many years. I learned to use them in a library full of books, then I learned to use Westlaw and Lexis, and now we use those same tools in a more conversational interface. That’s not for me, but again, my mantra is: use what works for you, just make sure you’re checking the results. So, we are using those tools, where appropriate, to streamline the process that leads up to the final product. I do think that clients look to us to leverage the efficiencies available from AI where we can, and of course we do that.
But for us, for a small firm like ours, part of our value add is that when it’s time to write the brief and make the arguments, we can do better than the genAI. We are in some ways privileged as a small office to be able to offer something bespoke, something tailored not only to the case, but to the client, and the court, and the entire context of the matter.
I get a lot of marketing for legal AI products, and inevitably there is a testimonial from some very well-respected lawyer at some enormous firm with a very high billing rate saying “I used such-and-such AI product and it created an argument that was 100% exactly what I would have argued in court” and I always wonder: “then why would anyone hire you?”
I have yet to have that experience. I continue to use and test these products, and I continue to read the academic research alongside the marketing materials, and so far I haven’t found a product that—out of the box—I would be afraid to face in a courtroom. We’ll take what the AI tools have to offer, and we’ll use the advantages and efficiencies they provide, but if we can’t add something to that, we’re not really doing our job.