These docs will be deprecated and no longer maintained with the release of LangChain v1.0 in October 2025. Visit the v1.0 alpha docs

HuggingFaceInference

Here's an example of calling a HuggingFaceInference model as an LLM:

```bash
npm install @langchain/community @langchain/core @huggingface/inference@4
```
**Tip:** We're unifying model params across all packages. We now suggest using `model` instead of `modelName`, and `apiKey` for API keys.

```typescript
import { HuggingFaceInference } from "@langchain/community/llms/hf";

const model = new HuggingFaceInference({
  model: "gpt2",
  apiKey: "YOUR-API-KEY", // In Node.js defaults to process.env.HUGGINGFACEHUB_API_KEY
});

const res = await model.invoke("1 + 1 =");
console.log({ res });
```

