AI Concepts

Token Limit

Definition

A token limit is the maximum number of tokens (subword units such as word fragments, short whole words, or punctuation marks) that an AI model can process in a single request, encompassing both the input and the output. In legal applications, token limits determine how much source material the AI can analyze at once and how long its generated responses can be.

Tokens are the fundamental units that language models use to process text. A token roughly corresponds to three-quarters of a word in English, so a 100,000-token limit translates to approximately 75,000 words. The token limit applies to the combined total of input (the user's prompt plus any retrieved documents) and output (the model's response).
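The word-to-token arithmetic above can be sketched with two small helper functions. This is a rough heuristic only (roughly 4 characters or 0.75 English words per token); real tokenizers vary by model, and the function names here are illustrative, not part of any library.

```python
def estimate_tokens(text: str, chars_per_token: float = 4.0) -> int:
    """Rough token estimate using the ~4-characters-per-token heuristic for English."""
    return max(1, round(len(text) / chars_per_token))

def words_for_budget(token_limit: int, words_per_token: float = 0.75) -> int:
    """Approximate word capacity of a token budget (1 token is ~0.75 English words)."""
    return int(token_limit * words_per_token)

print(words_for_budget(100_000))  # → 75000, matching the figure above
```

For precise counts in production, a model-specific tokenizer should be used instead of a character heuristic.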

In legal practice, token limits create practical constraints. A complex research memo might require the model to read dozens of cases, a lengthy fact pattern, and specific instructions, all of which consume input tokens. If the combined input and desired output exceed the token limit, the system must make trade-offs: summarizing source documents, reducing the number of retrieved cases, or truncating the response.
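The trade-off described above can be made concrete with a minimal budget calculation: reserve tokens for instructions and the desired response, then pack source documents (most relevant first) into whatever input budget remains. The function name and document names are hypothetical; this is a sketch of the accounting, not any particular platform's logic.

```python
def fit_documents(docs, instructions_tokens, output_budget, token_limit):
    """Greedily include documents (assumed pre-sorted by relevance) until the
    remaining input budget is spent. Excluded docs would need summarization."""
    available = token_limit - instructions_tokens - output_budget
    included, used = [], 0
    for name, tokens in docs:
        if used + tokens <= available:
            included.append(name)
            used += tokens
    return included, available - used

docs = [("case_a", 40_000), ("case_b", 35_000), ("case_c", 30_000)]
included, slack = fit_documents(docs, instructions_tokens=2_000,
                                output_budget=8_000, token_limit=100_000)
print(included, slack)  # → ['case_a', 'case_b'] 15000
```

Here a 100,000-token limit minus 2,000 tokens of instructions and an 8,000-token response budget leaves 90,000 input tokens, so the third case must be summarized or dropped.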

Legal AI platforms manage token limits through various strategies. Some use hierarchical summarization to compress large document sets. Others use intelligent retrieval to select only the most relevant passages rather than feeding entire documents. The best systems make these trade-offs transparently, so lawyers understand what the AI has and has not considered in its analysis.
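One of the strategies above, selecting only the most relevant passages rather than whole documents, can be sketched as follows. Real systems score relevance with embeddings; keyword overlap stands in for that here, word count stands in for token count, and all names are illustrative.

```python
def select_passages(query: str, passages: list[str], token_budget: int) -> list[str]:
    """Score passages by naive keyword overlap with the query, then pack the
    highest-scoring ones into the token budget (word count as a crude proxy)."""
    query_terms = set(query.lower().split())
    ranked = sorted(passages,
                    key=lambda p: -len(query_terms & set(p.lower().split())))
    chosen, used = [], 0
    for passage in ranked:
        cost = len(passage.split())
        if used + cost <= token_budget:
            chosen.append(passage)
            used += cost
    return chosen

passages = [
    "the indemnification clause limits liability",
    "lunch menu for friday",
    "liability cap under the indemnification provision",
]
print(select_passages("indemnification liability cap", passages, token_budget=12))
```

The irrelevant passage is skipped entirely, freeing budget for material that actually bears on the question.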

How Irys approaches this

Irys manages token limits transparently through intelligent document chunking and retrieval, ensuring lawyers always know what source material the AI considered in its analysis.

Related terms

AI Concepts

AI Context Window

The context window is the maximum amount of text an AI model can process in a single interaction, measured in tokens. A larger context window allows the model to consider more documents, longer contracts, or more extensive case histories simultaneously, which directly impacts the quality and completeness of its legal analysis.

AI Concepts

Large Language Model (LLM)

A large language model is a neural network trained on vast text corpora that can understand and generate human language. LLMs power the natural language capabilities of legal AI tools, enabling them to read contracts, draft documents, answer research questions, and summarize complex legal materials in plain language.

AI Concepts

Retrieval-Augmented Generation (RAG)

Retrieval-augmented generation is an AI architecture that supplements a language model's response by first retrieving relevant documents from an external knowledge base and then using those documents as context for generating an answer. In legal applications, RAG grounds AI output in actual case law, statutes, and firm documents rather than relying solely on the model's training data.
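The retrieve-then-generate pipeline can be sketched in a few lines. A real RAG system would use vector search over an indexed knowledge base and send the assembled prompt to a language model; keyword overlap and a returned prompt string stand in for those stages here, and the case names are invented for illustration.

```python
def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank corpus snippets by keyword overlap with the query and return the
    top-k (a stand-in for real vector search)."""
    query_terms = set(query.lower().split())
    return sorted(corpus,
                  key=lambda d: -len(query_terms & set(d.lower().split())))[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    """Assemble a grounded prompt: retrieved snippets first, then the question."""
    context = "\n".join(f"- {snippet}" for snippet in retrieve(query, corpus))
    return f"Use only these sources:\n{context}\n\nQuestion: {query}"

corpus = [
    "Smith v. Jones held that liquidated damages clauses are enforceable",
    "The office closes at 5pm on Fridays",
    "Enforceability of damages clauses turns on reasonableness",
]
print(build_prompt("are liquidated damages clauses enforceable", corpus))
```

Because the model answers from the retrieved snippets rather than memory alone, its output can be checked against the cited sources.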

Workflow

Multi-Document Analysis

Multi-document analysis is the AI capability to simultaneously process, compare, and extract insights from multiple legal documents. This allows lawyers to ask questions across an entire set of contracts, depositions, or discovery documents, identifying patterns, inconsistencies, and key provisions without reading each document individually.
