When legal research company LexisNexis built its AI assistant Protégé, it wanted to figure out the best way to harness its expertise without relying on a costly, resource-heavy large model.
Protégé aims to help lawyers, associates and paralegals write and proofread legal documents and to ensure that anything they cite in complaints and briefs is accurate. However, LexisNexis didn't want a general-purpose legal AI assistant; it wanted to build one that learns a firm's workflows and is more customizable.
LexisNexis saw the opportunity to harness the power of large language models (LLMs) from Anthropic and Mistral and find the models that answer user questions best, Jeff Riehl, CTO of LexisNexis Legal & Professional, told VentureBeat.
“We use the best model for specific use cases as part of our multi-model approach. We use the model that gives the best results at the fastest response time,” Riehl said. “For some use cases, that will be a small language model like Mistral, or we perform distillation to improve performance and reduce cost.”
While LLMs still provide value for developing AI applications, some organizations are turning to small language models (SLMs) or to distilled LLMs that serve as smaller versions of the same model.
Distillation, in which an LLM “teaches” a smaller model, has become a popular technique for many organizations.
Small models often work well for applications like chatbots or simple code completion, which is what LexisNexis wanted for Protégé.
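As a rough, illustrative sketch (not LexisNexis's actual pipeline), one common form of distillation simply has the large teacher model label a set of prompts, and the resulting pairs become fine-tuning data for the smaller student; the call_teacher_model helper below is a hypothetical stand-in for any hosted LLM API:

```python
# Minimal sketch of response-based distillation: a large "teacher" model labels
# prompts, and the prompt/response pairs become fine-tuning data for a smaller
# "student" model. The teacher call is a hypothetical placeholder.
import json


def call_teacher_model(prompt: str) -> str:
    """Hypothetical stand-in for a large hosted teacher LLM."""
    return f"[teacher answer for] {prompt}"  # replace with a real API call


def build_distillation_set(prompts: list[str], out_path: str = "distill.jsonl") -> None:
    """Write prompt/completion pairs the smaller student model can be fine-tuned on."""
    with open(out_path, "w", encoding="utf-8") as f:
        for prompt in prompts:
            response = call_teacher_model(prompt)
            f.write(json.dumps({"prompt": prompt, "completion": response}) + "\n")


legal_prompts = [
    "Summarize the holding of this opinion: ...",
    "Is this filing a complaint, a brief, or a motion? ...",
]
build_distillation_set(legal_prompts)
```

In practice, the student model would then be fine-tuned on the resulting file with whatever training stack a team already uses; the payoff is a smaller model that mimics the teacher on a narrow slice of tasks.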
This is not the first time LexisNexis has built AI applications; the company was working with AI even before launching its legal research hub, LexisNexis + AI, in July 2024.
“We have used a lot of AI in the past, mostly around natural language processing, some deep learning and machine learning,” Riehl said. “That really changed in November 2022 when ChatGPT was launched, because before that, a lot of the AI capabilities were sort of behind the scenes. But once ChatGPT came out, the generative capabilities, the conversational capabilities were very, very intriguing to us.”
Small, fine-tuned models and model routing
Riehl said LexisNexis uses a variety of models from the major model providers when building its AI platforms. LexisNexis + AI uses Claude models from Anthropic, OpenAI's GPT models and a model from Mistral.
This multi-model approach helps break down each task users want to perform on the platform. To do this, LexisNexis had to architect its platform to switch between models.
“We break down whatever work is being carried out into individual components, and then we identify the best large language model to support that component. One example of that is we will use Mistral to assess the query the user submitted,” Riehl said.
For Protégé, the company wanted faster response times and models better suited to legal use cases. So it turned to what Riehl called “fine-tuned” versions of models, essentially smaller-weight versions of LLMs or distilled models.
“You don't need GPT-4o to do the analysis of a query, so we use it for more sophisticated work, and we switch models,” he said.
When a user asks Protégé a question about a specific case, the platform first pings a fine-tuned Mistral model “for assessing the query, then determining what the purpose and intent of that query is” before routing it to the model best suited to complete the task. Riehl said the next model could be an LLM that generates new queries for the search engine or another model that summarizes the results.
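The article doesn't include LexisNexis's routing code, but the pattern Riehl describes, a small model classifying the query's intent and a dispatcher handing the work to a better-suited model, can be sketched roughly as follows; the model names, routing table and call_model helper are illustrative assumptions, not the company's actual setup:

```python
# Illustrative model-routing sketch: a small model classifies the query's intent,
# then the request is dispatched to whichever (larger or specialized) model
# handles that intent best.

INTENT_ROUTES = {
    "search": "large-model-for-query-generation",   # rewrite query for the search engine
    "summarize": "mid-size-summarization-model",    # condense retrieved results
    "draft": "large-model-for-drafting",            # write briefs or complaints
}


def call_model(model_name: str, prompt: str) -> str:
    """Hypothetical wrapper around whichever hosted model API is in use."""
    return f"[{model_name}] response to: {prompt[:40]}..."


def classify_intent(query: str) -> str:
    """Stand-in for the small, fine-tuned model that assesses the query's intent."""
    lowered = query.lower()
    if "summarize" in lowered:
        return "summarize"
    if "draft" in lowered or "write" in lowered:
        return "draft"
    return "search"


def route(query: str) -> str:
    intent = classify_intent(query)
    return call_model(INTENT_ROUTES[intent], query)


print(route("Draft a complaint based on these facts: ..."))
```

The design point is that the cheap classification step runs on every request, while the more expensive models are invoked only for the work that actually needs them.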
So far, LexisNexis mostly relies on a fine-tuned Mistral model, though Riehl said it used a fine-tuned version of Claude “when it first came out; we don't use it in the product today, but in other ways.” LexisNexis is also interested in using other OpenAI models, especially since the company rolled out new reinforcement fine-tuning capabilities last year. LexisNexis is in the process of evaluating OpenAI's reasoning models, including o3, for its platforms.
Riehl added that it may also look at using Google's Gemini models.
LexisNexis underpins all of its AI platforms with its own knowledge graph to power retrieval-augmented generation (RAG) capabilities, especially as Protégé may help launch agentic processes down the line.
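As a loose illustration of the RAG pattern (assuming a toy in-memory store rather than LexisNexis's knowledge graph), retrieval pulls the most relevant passages and the generator answers a prompt grounded in them:

```python
# Minimal RAG sketch: retrieve passages from a knowledge store, then pass them
# to a generator model as grounding context. The in-memory store and `generate`
# helper are illustrative; a production system would use a knowledge graph and
# vector search rather than keyword overlap.

KNOWLEDGE_STORE = {
    "doc-1": "Summary judgment standard under Rule 56 ...",
    "doc-2": "Elements of breach of contract in New York ...",
}


def retrieve(query: str, k: int = 2) -> list[str]:
    """Score documents by naive keyword overlap and return the top k."""
    terms = set(query.lower().split())
    ranked = sorted(
        KNOWLEDGE_STORE.values(),
        key=lambda text: len(terms & set(text.lower().split())),
        reverse=True,
    )
    return ranked[:k]


def generate(prompt: str) -> str:
    """Hypothetical call to whichever LLM answers the grounded prompt."""
    return f"[model] answer grounded in: {prompt[:60]}..."


def answer(query: str) -> str:
    context = "\n".join(retrieve(query))
    return generate(f"Context:\n{context}\n\nQuestion: {query}")


print(answer("What is the summary judgment standard?"))
```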
The AI legal suite
Even before the advent of generative AI, LexisNexis explored the possibility of putting chatbots to work in the legal industry. In 2017, the company tested an AI assistant to compete with IBM's Watson-powered Ross. Protégé sits on the company's LexisNexis + AI platform, which brings together the company's AI services.
Protégé helps law firms with tasks that paralegals or associates tend to do. It helps write legal briefs and complaints grounded in firms' documents and data, suggests next steps in legal workflows, suggests new prompts to refine searches, drafts questions for depositions and discovery, and links citations in complex legal filings.
“We see Protégé as an initial step in personalization and agentic capabilities,” Riehl said. “Think about the different types of lawyers: M&A, litigators, real estate. It's going to continue to get more and more personalized based on the specific work you do. Our vision is that every legal professional will have a personal assistant to help them do their work based on what they do, not what other lawyers do.”
Protégé now competes against other legal research and technology platforms. Thomson Reuters customized OpenAI's o1-mini model for its CoCounsel legal assistant. Harvey, which has raised $300 million from investors including LexisNexis, also offers a legal AI assistant.