Class LLMChain<T, Model>

Chain to run queries against LLMs.

Deprecated: this class will be removed in 0.3.0. Use the LangChain Expression Language (LCEL) instead. The example below shows how to replace an LLMChain with LCEL:

import { ChatPromptTemplate } from "@langchain/core/prompts";
import { ChatOpenAI } from "@langchain/openai";

const prompt = ChatPromptTemplate.fromTemplate("Tell me a {adjective} joke");
const llm = new ChatOpenAI();
const chain = prompt.pipe(llm);

const response = await chain.invoke({ adjective: "funny" });

Type Parameters

  • T extends string | object = string
  • Model extends LLMType = LLMType

Hierarchy

Implements

Constructors

Properties

llm: Model

LLM Wrapper to use

outputKey: string = "text"

Key to use for output, defaults to "text"

prompt: BasePromptTemplate

Prompt object to use

llmKwargs?: CallOptionsIfAvailable<Model>

Kwargs to pass to LLM

memory?: any

Memory to use for this chain

outputParser?: any

OutputParser to use

Accessors

Methods

  • apply(inputs, config?): Promise<ChainValues[]>

    Deprecated: use .batch() instead. Will be removed in 0.2.0.

    Call the chain on all inputs in the list.

    Parameters

    • inputs: ChainValues[]
    • Optional config: any[]

    Returns Promise<ChainValues[]>

  • call(values, config?): Promise<ChainValues>

    Run the core logic of this chain and add to output if desired.

    Wraps _call and handles memory.

    Parameters

    • values: any
    • Optional config: any

    Returns Promise<ChainValues>

  • invoke(input, options?): Promise<ChainValues>

    Invokes the chain with the provided input and returns the output.

    Parameters

    • input: ChainValues

      Input values for the chain run.

    • Optional options: any

    Returns Promise<ChainValues>

    Promise that resolves with the output of the chain run.

  • predict(values, callbackManager?): Promise<T>

    Format prompt with values and pass to LLM.

    Parameters

    • values: any

      Keys to pass to the prompt template

    • Optional callbackManager: any

      CallbackManager to use

    Returns Promise<T>

    Completion from LLM.

    Example

    llm.predict({ adjective: "funny" })
  • prepOutputs(inputs, outputs, returnOnlyOutputs?): Promise<Record<string, unknown>>

    Parameters

    • inputs: Record<string, unknown>
    • outputs: Record<string, unknown>
    • returnOnlyOutputs: boolean = false

    Returns Promise<Record<string, unknown>>

  • run(input, config?): Promise<string>

    Deprecated: use .invoke() instead. Will be removed in 0.2.0.

    Parameters

    • input: any
    • Optional config: any

    Returns Promise<string>
