Huggingface.js documentation
Interface: BaseArgs
Properties
accessToken
• Optional accessToken: string
The access token to use. Without it, you’ll get rate-limited quickly.
A token can be created for free at hf.co/settings/token
You can also pass an external Inference provider’s key if you intend to call a compatible provider such as Sambanova, Together, or Replicate.
endpointUrl
• Optional endpointUrl: string
The URL of the endpoint to use.
If not specified, will call the default router.huggingface.co Inference Providers endpoint.
model
• Optional model: string
The HF model to use.
If not specified, will call huggingface.co/api/tasks to get the default model for the task.
/!\ Legacy behavior allows this to be a URL, but this is deprecated and will be removed in the future.
Use the endpointUrl parameter instead.
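As a sketch of the deprecation note above (the interface shape is reproduced locally for illustration, and `normalizeArgs` is a hypothetical helper, not part of the library):

```typescript
// Local sketch assuming the documented BaseArgs shape.
interface Args {
  accessToken?: string;
  endpointUrl?: string;
  model?: string;
}

// Hypothetical helper: migrate the deprecated URL-in-`model` usage
// to the `endpointUrl` parameter, leaving normal model IDs untouched.
function normalizeArgs(args: Args): Args {
  if (args.model && /^https?:\/\//.test(args.model)) {
    // Legacy behavior: `model` held an endpoint URL; move it over.
    return { ...args, model: undefined, endpointUrl: args.model };
  }
  return args;
}
```

A model ID such as `"gpt2"` passes through unchanged, while a URL is moved to `endpointUrl`.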
provider
• Optional provider: "baseten" | "black-forest-labs" | "cerebras" | "clarifai" | "cohere" | "fal-ai" | "featherless-ai" | "fireworks-ai" | "groq" | "hf-inference" | "hyperbolic" | "nebius" | "novita" | "nscale" | "openai" | "ovhcloud" | "publicai" | "replicate" | "sambanova" | "scaleway" | "together" | "wavespeed" | "zai-org" | "auto"
Set an Inference provider to run this model on.
Defaults to “auto”, i.e. the first provider available for the model, following the user’s order in https://hf.co/settings/inference-providers.
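The “auto” behavior can be sketched as follows (a minimal illustration, not the library’s implementation; `resolveProvider` and the narrowed `Provider` union are assumptions for the example):

```typescript
// Narrowed subset of the documented provider union, for illustration.
type Provider = "auto" | "hf-inference" | "replicate" | "sambanova" | "together";

// Hypothetical resolver: "auto" picks the first provider that serves the
// model, following the user's order from hf.co/settings/inference-providers;
// any explicit provider is used as-is.
function resolveProvider(
  requested: Provider,
  userOrder: Provider[],
  availableForModel: Set<Provider>,
): Provider {
  if (requested !== "auto") return requested;
  const match = userOrder.find((p) => availableForModel.has(p));
  if (!match) throw new Error("No provider available for this model");
  return match;
}
```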