LangChain custom tools
LangChain custom tools are user-defined Python functions wrapped with the @tool decorator so that a large language model (LLM) agent can call them autonomously. Each tool exposes a name, a description, and an argument schema (often a Pydantic model), allowing the LLM to select it through natural-language reasoning such as ReAct. Under the hood, LangChain serializes the call as JSON, validates the inputs, executes the function (an API query, SQL statement, shell command, or proprietary logic), and streams the result back into the prompt loop.

Tools can be synchronous or asynchronous, stateless or stateful, and may include built-in guardrails for rate limiting or PII redaction. Because they follow a standard interface, custom tools snap into any agent type (Zero-Shot, Conversational, or Plan-and-Execute) without modifying core logic. This plug-and-play design lets teams turn domain knowledge and legacy systems into callable skills, enabling LLMs to fetch live data, file Jira tickets, or trigger cloud workflows while keeping codebases clean and testable.
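To make the call flow concrete, here is a minimal, stdlib-only sketch of the pattern described above: a decorator registers a function with a name, description, and argument schema, then deserializes a JSON call, validates the inputs, and executes the function. This is an illustrative approximation, not LangChain's actual implementation; the real decorator lives in `langchain_core.tools` and builds its argument schema with Pydantic, and the `Tool` class, `tool` factory, and `get_word_length` example below are hypothetical names for this sketch.

```python
import json
from dataclasses import dataclass
from typing import Callable

@dataclass
class Tool:
    """A callable skill with the metadata an agent needs to select it."""
    name: str
    description: str
    schema: dict          # maps argument name -> expected Python type
    func: Callable

    def invoke(self, payload: str) -> str:
        """Deserialize the LLM's JSON call, validate it, run the function."""
        args = json.loads(payload)
        for key, expected in self.schema.items():
            if key not in args:
                raise ValueError(f"missing argument: {key}")
            if not isinstance(args[key], expected):
                raise TypeError(f"{key} must be {expected.__name__}")
        # The result is returned as text so it can stream back into the prompt.
        return str(self.func(**args))

def tool(name: str, description: str, schema: dict):
    """Decorator that wraps a plain function as a Tool (sketch of @tool)."""
    def wrap(func: Callable) -> Tool:
        return Tool(name=name, description=description, schema=schema, func=func)
    return wrap

@tool("get_word_length", "Return the number of characters in a word.",
      {"word": str})
def get_word_length(word: str) -> int:
    return len(word)

# An agent loop would serialize the model's chosen call as JSON:
print(get_word_length.invoke('{"word": "LangChain"}'))  # prints 9
```

Because the wrapper carries its own name, description, and schema, any agent loop that speaks this interface can pick the tool and call it without knowing its internals, which is the plug-and-play property the paragraph above describes.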
Want to learn how these AI concepts work in practice?
Understanding AI is one thing; applying it is another. Explore how we apply these AI principles to build scalable, agentic workflows that deliver real ROI and value for organizations.