LangChain LLM

Wojciech Achtelik
AI Engineer Lead
June 25, 2025

LangChain LLM is the core wrapper class that lets developers call any large language model (LLM) through a unified interface in the LangChain framework. With a single import—from langchain.llms import OpenAI, Anthropic, HuggingFaceHub—you can swap GPT-4, Claude, or an open-source model without changing the surrounding code. Because each LLM subclass inherits the same schema, you can drop an LLMChain, Retrieval-Augmented Generation (RAG) component, or autonomous agent into production and later switch to a cheaper or faster model with one line of Python.

The wrapper standardizes methods such as generate(), stream(), and get_num_tokens(), handles async batching, and plugs into LangChain's callbacks for real-time tracing and cost tracking. Built-in retry, exponential back-off, and token-limiting guards improve reliability, while environment variables keep API keys out of source code. Fine-tuned endpoints, temperature, and system prompts are passed via a typed config, giving teams granular control over creativity, latency, and compliance across clouds.
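The value of the unified interface is that calling code depends only on the shared base class, not on any one provider. The sketch below illustrates that pattern with hypothetical stand-in classes (these are not real LangChain classes or API calls): a base wrapper that supplies retry with exponential back-off and a crude token count, plus two interchangeable "providers".

```python
import time
from abc import ABC, abstractmethod


class BaseLLM(ABC):
    """Illustrative stand-in for a unified LLM interface.

    Hypothetical names for demonstration only -- not LangChain's
    actual class hierarchy or method signatures.
    """

    max_retries = 3
    backoff_base = 0.01  # seconds; kept tiny for the demo

    @abstractmethod
    def _call(self, prompt: str) -> str:
        """Provider-specific API call, implemented per subclass."""

    def generate(self, prompt: str) -> str:
        # Retry with exponential back-off on transient failures,
        # mirroring the reliability guards described above.
        for attempt in range(self.max_retries):
            try:
                return self._call(prompt)
            except ConnectionError:
                if attempt == self.max_retries - 1:
                    raise
                time.sleep(self.backoff_base * 2 ** attempt)

    def get_num_tokens(self, text: str) -> int:
        # Whitespace split standing in for a real tokenizer.
        return len(text.split())


class FakeOpenAI(BaseLLM):
    def _call(self, prompt: str) -> str:
        return f"[gpt] {prompt}"


class FakeClaude(BaseLLM):
    def _call(self, prompt: str) -> str:
        return f"[claude] {prompt}"


def summarize(llm: BaseLLM, text: str) -> str:
    # The surrounding "chain" code never changes when the model is swapped.
    return llm.generate(f"Summarize: {text}")


print(summarize(FakeOpenAI(), "LangChain docs"))   # [gpt] Summarize: LangChain docs
print(summarize(FakeClaude(), "LangChain docs"))   # [claude] Summarize: LangChain docs
```

Swapping providers here is a one-line change to the constructor call; everything downstream of the base class stays untouched, which is the property the real wrapper provides in production.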