Hub Python Library documentation


Inference Types

This page lists the types (e.g. dataclasses) available for each task supported on the Hugging Face Hub. Each task is specified using a JSON schema, and the types are generated from these schemas, with some customizations to meet Python requirements.

Check out @huggingface.js/tasks to find the JSON schema for each task.

This part of the library is still under development and will be improved in future releases.

audio_classification

class huggingface_hub.AudioClassificationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.audio_classification.AudioClassificationParameters | None = None )

Inputs for Audio Classification inference

class huggingface_hub.AudioClassificationOutputElement

( label: str score: float )

Outputs for Audio Classification inference

class huggingface_hub.AudioClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('AudioClassificationOutputTransform')] = None top_k: int | None = None )

Additional inference parameters for Audio Classification
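The output of this task is a list of (label, score) pairs. The following is a minimal stdlib sketch that mirrors the `AudioClassificationOutputElement` shape documented above (the real dataclass is auto-generated from the task's JSON schema); the example labels and scores are illustrative, not real model output.

```python
from dataclasses import dataclass

# Stdlib sketch of AudioClassificationOutputElement, mirroring the
# (label: str, score: float) signature shown above.
@dataclass
class AudioClassificationOutputElement:
    label: str
    score: float

# A server response for this task is a JSON list of {label, score} objects.
raw = [{"label": "dog", "score": 0.97}, {"label": "cat", "score": 0.03}]
outputs = [AudioClassificationOutputElement(**item) for item in raw]

# Pick the most likely class.
best = max(outputs, key=lambda o: o.score)
```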

audio_to_audio

class huggingface_hub.AudioToAudioInput

( inputs: typing.Any )

Inputs for Audio to Audio inference

class huggingface_hub.AudioToAudioOutputElement

( blob: typing.Any content_type: str label: str )

Outputs of inference for the Audio To Audio task: a generated audio file with its label.

automatic_speech_recognition

class huggingface_hub.AutomaticSpeechRecognitionGenerationParameters

( do_sample: bool | None = None early_stopping: typing.Union[bool, ForwardRef('AutomaticSpeechRecognitionEarlyStoppingEnum'), NoneType] = None epsilon_cutoff: float | None = None eta_cutoff: float | None = None max_length: int | None = None max_new_tokens: int | None = None min_length: int | None = None min_new_tokens: int | None = None num_beam_groups: int | None = None num_beams: int | None = None penalty_alpha: float | None = None temperature: float | None = None top_k: int | None = None top_p: float | None = None typical_p: float | None = None use_cache: bool | None = None )

Parametrization of the text generation process

class huggingface_hub.AutomaticSpeechRecognitionInput

( inputs: str parameters: huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionParameters | None = None )

Inputs for Automatic Speech Recognition inference

class huggingface_hub.AutomaticSpeechRecognitionOutput

( text: str chunks: list[huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionOutputChunk] | None = None )

Outputs of inference for the Automatic Speech Recognition task

class huggingface_hub.AutomaticSpeechRecognitionOutputChunk

( text: str timestamp: list )

class huggingface_hub.AutomaticSpeechRecognitionParameters

( generation_parameters: huggingface_hub.inference._generated.types.automatic_speech_recognition.AutomaticSpeechRecognitionGenerationParameters | None = None return_timestamps: bool | None = None )

Additional inference parameters for Automatic Speech Recognition

chat_completion

class huggingface_hub.ChatCompletionInput

( messages: list frequency_penalty: float | None = None logit_bias: list[float] | None = None logprobs: bool | None = None max_tokens: int | None = None model: str | None = None n: int | None = None presence_penalty: float | None = None response_format: typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatText, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONSchema, huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputResponseFormatJSONObject, NoneType] = None seed: int | None = None stop: list[str] | None = None stream: bool | None = None stream_options: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputStreamOptions | None = None temperature: float | None = None tool_choice: typing.Union[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolChoiceClass, ForwardRef('ChatCompletionInputToolChoiceEnum'), NoneType] = None tool_prompt: str | None = None tools: list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputTool] | None = None top_logprobs: int | None = None top_p: float | None = None )

Chat Completion Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionInputFunctionDefinition

( name: str parameters: typing.Any description: str | None = None )

class huggingface_hub.ChatCompletionInputFunctionName

( name: str )

class huggingface_hub.ChatCompletionInputJSONSchema

( name: str description: str | None = None schema: dict[str, object] | None = None strict: bool | None = None )

class huggingface_hub.ChatCompletionInputMessage

( role: str content: list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputMessageChunk] | str | None = None name: str | None = None tool_calls: list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputToolCall] | None = None )

class huggingface_hub.ChatCompletionInputMessageChunk

( type: ChatCompletionInputMessageChunkType image_url: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionInputURL | None = None text: str | None = None )

class huggingface_hub.ChatCompletionInputResponseFormatJSONObject

( type: typing.Literal['json_object'] )

class huggingface_hub.ChatCompletionInputResponseFormatJSONSchema

( type: typing.Literal['json_schema'] json_schema: ChatCompletionInputJSONSchema )

class huggingface_hub.ChatCompletionInputResponseFormatText

( type: typing.Literal['text'] )

class huggingface_hub.ChatCompletionInputStreamOptions

( include_usage: bool | None = None )

class huggingface_hub.ChatCompletionInputTool

( function: ChatCompletionInputFunctionDefinition type: str )

class huggingface_hub.ChatCompletionInputToolCall

( function: ChatCompletionInputFunctionDefinition id: str type: str )

class huggingface_hub.ChatCompletionInputToolChoiceClass

( function: ChatCompletionInputFunctionName )

class huggingface_hub.ChatCompletionInputURL

( url: str )

class huggingface_hub.ChatCompletionOutput

( choices: list created: int id: str model: str system_fingerprint: str usage: ChatCompletionOutputUsage )

Chat Completion Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionOutputComplete

( finish_reason: str index: int message: ChatCompletionOutputMessage logprobs: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputLogprobs | None = None )

class huggingface_hub.ChatCompletionOutputFunctionDefinition

( arguments: str name: str description: str | None = None )

class huggingface_hub.ChatCompletionOutputLogprob

( logprob: float token: str top_logprobs: list )

class huggingface_hub.ChatCompletionOutputLogprobs

( content: list )

class huggingface_hub.ChatCompletionOutputMessage

( role: str content: str | None = None reasoning: str | None = None tool_call_id: str | None = None tool_calls: list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionOutputToolCall] | None = None )

class huggingface_hub.ChatCompletionOutputToolCall

( function: ChatCompletionOutputFunctionDefinition id: str type: str )

class huggingface_hub.ChatCompletionOutputTopLogprob

( logprob: float token: str )

class huggingface_hub.ChatCompletionOutputUsage

( completion_tokens: int prompt_tokens: int total_tokens: int )

class huggingface_hub.ChatCompletionStreamOutput

( choices: list created: int id: str model: str system_fingerprint: str usage: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputUsage | None = None )

Chat Completion Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.ChatCompletionStreamOutputChoice

( delta: ChatCompletionStreamOutputDelta index: int finish_reason: str | None = None logprobs: huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputLogprobs | None = None )

class huggingface_hub.ChatCompletionStreamOutputDelta

( role: str content: str | None = None reasoning: str | None = None tool_call_id: str | None = None tool_calls: list[huggingface_hub.inference._generated.types.chat_completion.ChatCompletionStreamOutputDeltaToolCall] | None = None )

class huggingface_hub.ChatCompletionStreamOutputDeltaToolCall

( function: ChatCompletionStreamOutputFunction id: str index: int type: str )

class huggingface_hub.ChatCompletionStreamOutputFunction

( arguments: str name: str | None = None )

class huggingface_hub.ChatCompletionStreamOutputLogprob

( logprob: float token: str top_logprobs: list )

class huggingface_hub.ChatCompletionStreamOutputLogprobs

( content: list )

class huggingface_hub.ChatCompletionStreamOutputTopLogprob

( logprob: float token: str )

class huggingface_hub.ChatCompletionStreamOutputUsage

( completion_tokens: int prompt_tokens: int total_tokens: int )

depth_estimation

class huggingface_hub.DepthEstimationInput

( inputs: typing.Any parameters: dict[str, typing.Any] | None = None )

Inputs for Depth Estimation inference

class huggingface_hub.DepthEstimationOutput

( depth: typing.Any predicted_depth: typing.Any )

Outputs of inference for the Depth Estimation task

document_question_answering

class huggingface_hub.DocumentQuestionAnsweringInput

( inputs: DocumentQuestionAnsweringInputData parameters: huggingface_hub.inference._generated.types.document_question_answering.DocumentQuestionAnsweringParameters | None = None )

Inputs for Document Question Answering inference

class huggingface_hub.DocumentQuestionAnsweringInputData

( image: typing.Any question: str )

One (document, question) pair to answer

class huggingface_hub.DocumentQuestionAnsweringOutputElement

( answer: str end: int score: float start: int )

Outputs of inference for the Document Question Answering task

class huggingface_hub.DocumentQuestionAnsweringParameters

( doc_stride: int | None = None handle_impossible_answer: bool | None = None lang: str | None = None max_answer_len: int | None = None max_question_len: int | None = None max_seq_len: int | None = None top_k: int | None = None word_boxes: list[list[float] | str] | None = None )

Additional inference parameters for Document Question Answering

feature_extraction

class huggingface_hub.FeatureExtractionInput

( inputs: list[str] | str normalize: bool | None = None prompt_name: str | None = None truncate: bool | None = None truncation_direction: typing.Optional[ForwardRef('FeatureExtractionInputTruncationDirection')] = None )

Feature Extraction Input. Auto-generated from TEI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tei-import.ts.

fill_mask

class huggingface_hub.FillMaskInput

( inputs: str parameters: huggingface_hub.inference._generated.types.fill_mask.FillMaskParameters | None = None )

Inputs for Fill Mask inference

class huggingface_hub.FillMaskOutputElement

( score: float sequence: str token: int token_str: typing.Any fill_mask_output_token_str: str | None = None )

Outputs of inference for the Fill Mask task

class huggingface_hub.FillMaskParameters

( targets: list[str] | None = None top_k: int | None = None )

Additional inference parameters for Fill Mask

image_classification

class huggingface_hub.ImageClassificationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.image_classification.ImageClassificationParameters | None = None )

Inputs for Image Classification inference

class huggingface_hub.ImageClassificationOutputElement

( label: str score: float )

Outputs of inference for the Image Classification task

class huggingface_hub.ImageClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('ImageClassificationOutputTransform')] = None top_k: int | None = None )

Additional inference parameters for Image Classification

image_segmentation

class huggingface_hub.ImageSegmentationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.image_segmentation.ImageSegmentationParameters | None = None )

Inputs for Image Segmentation inference

class huggingface_hub.ImageSegmentationOutputElement

( label: str mask: str score: float | None = None )

Outputs of inference for the Image Segmentation task: a predicted mask / segment.

class huggingface_hub.ImageSegmentationParameters

( mask_threshold: float | None = None overlap_mask_area_threshold: float | None = None subtask: typing.Optional[ForwardRef('ImageSegmentationSubtask')] = None threshold: float | None = None )

Additional inference parameters for Image Segmentation

image_text_to_image

class huggingface_hub.ImageTextToImageInput

( inputs: str | None = None parameters: huggingface_hub.inference._generated.types.image_text_to_image.ImageTextToImageParameters | None = None )

Inputs for Image Text To Image inference. Either inputs (image) or prompt (in parameters) must be provided, or both.

class huggingface_hub.ImageTextToImageOutput

( image: typing.Any )

Outputs of inference for the Image Text To Image task

class huggingface_hub.ImageTextToImageParameters

( guidance_scale: float | None = None negative_prompt: str | None = None num_inference_steps: int | None = None prompt: str | None = None seed: int | None = None target_size: huggingface_hub.inference._generated.types.image_text_to_image.ImageTextToImageTargetSize | None = None )

Additional inference parameters for Image Text To Image

class huggingface_hub.ImageTextToImageTargetSize

( height: int width: int )

The size in pixels of the output image. This parameter is only supported by some providers and for specific models. It will be ignored when unsupported.

image_text_to_video

class huggingface_hub.ImageTextToVideoInput

( inputs: str | None = None parameters: huggingface_hub.inference._generated.types.image_text_to_video.ImageTextToVideoParameters | None = None )

Inputs for Image Text To Video inference. Either inputs (image) or prompt (in parameters) must be provided, or both.

class huggingface_hub.ImageTextToVideoOutput

( video: typing.Any )

Outputs of inference for the Image Text To Video task

class huggingface_hub.ImageTextToVideoParameters

( guidance_scale: float | None = None negative_prompt: str | None = None num_frames: float | None = None num_inference_steps: int | None = None prompt: str | None = None seed: int | None = None target_size: huggingface_hub.inference._generated.types.image_text_to_video.ImageTextToVideoTargetSize | None = None )

Additional inference parameters for Image Text To Video

class huggingface_hub.ImageTextToVideoTargetSize

( height: int width: int )

The size in pixels of the output video frames.

image_to_image

class huggingface_hub.ImageToImageInput

( inputs: str parameters: huggingface_hub.inference._generated.types.image_to_image.ImageToImageParameters | None = None )

Inputs for Image To Image inference

class huggingface_hub.ImageToImageOutput

( image: typing.Any )

Outputs of inference for the Image To Image task

class huggingface_hub.ImageToImageParameters

( guidance_scale: float | None = None negative_prompt: str | None = None num_inference_steps: int | None = None prompt: str | None = None target_size: huggingface_hub.inference._generated.types.image_to_image.ImageToImageTargetSize | None = None )

Additional inference parameters for Image To Image

class huggingface_hub.ImageToImageTargetSize

( height: int width: int )

The size in pixels of the output image. This parameter is only supported by some providers and for specific models. It will be ignored when unsupported.

image_to_text

class huggingface_hub.ImageToTextGenerationParameters

( do_sample: bool | None = None early_stopping: typing.Union[bool, ForwardRef('ImageToTextEarlyStoppingEnum'), NoneType] = None epsilon_cutoff: float | None = None eta_cutoff: float | None = None max_length: int | None = None max_new_tokens: int | None = None min_length: int | None = None min_new_tokens: int | None = None num_beam_groups: int | None = None num_beams: int | None = None penalty_alpha: float | None = None temperature: float | None = None top_k: int | None = None top_p: float | None = None typical_p: float | None = None use_cache: bool | None = None )

Parametrization of the text generation process

class huggingface_hub.ImageToTextInput

( inputs: typing.Any parameters: huggingface_hub.inference._generated.types.image_to_text.ImageToTextParameters | None = None )

Inputs for Image To Text inference

class huggingface_hub.ImageToTextOutput

( generated_text: typing.Any image_to_text_output_generated_text: str | None = None )

Outputs of inference for the Image To Text task

class huggingface_hub.ImageToTextParameters

( generation_parameters: huggingface_hub.inference._generated.types.image_to_text.ImageToTextGenerationParameters | None = None max_new_tokens: int | None = None )

Additional inference parameters for Image To Text

image_to_video

class huggingface_hub.ImageToVideoInput

( inputs: str parameters: huggingface_hub.inference._generated.types.image_to_video.ImageToVideoParameters | None = None )

Inputs for Image To Video inference

class huggingface_hub.ImageToVideoOutput

( video: typing.Any )

Outputs of inference for the Image To Video task

class huggingface_hub.ImageToVideoParameters

( guidance_scale: float | None = None negative_prompt: str | None = None num_frames: float | None = None num_inference_steps: int | None = None prompt: str | None = None seed: int | None = None target_size: huggingface_hub.inference._generated.types.image_to_video.ImageToVideoTargetSize | None = None )

Additional inference parameters for Image To Video

class huggingface_hub.ImageToVideoTargetSize

( height: int width: int )

The size in pixels of the output video frames.

object_detection

class huggingface_hub.ObjectDetectionBoundingBox

( xmax: int xmin: int ymax: int ymin: int )

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ObjectDetectionInput

( inputs: str parameters: huggingface_hub.inference._generated.types.object_detection.ObjectDetectionParameters | None = None )

Inputs for Object Detection inference

class huggingface_hub.ObjectDetectionOutputElement

( box: ObjectDetectionBoundingBox label: str score: float )

Outputs of inference for the Object Detection task

class huggingface_hub.ObjectDetectionParameters

( threshold: float | None = None )

Additional inference parameters for Object Detection
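Since `ObjectDetectionBoundingBox` coordinates are relative to the top-left corner of the input image, box size follows directly from the corner coordinates. A minimal stdlib sketch of the shape above, with made-up coordinates:

```python
from dataclasses import dataclass

# Stdlib sketch of ObjectDetectionBoundingBox; (xmin, ymin) is the top-left
# corner and (xmax, ymax) the bottom-right corner, in image pixels.
@dataclass
class ObjectDetectionBoundingBox:
    xmax: int
    xmin: int
    ymax: int
    ymin: int

box = ObjectDetectionBoundingBox(xmin=10, ymin=20, xmax=110, ymax=220)
width, height = box.xmax - box.xmin, box.ymax - box.ymin
```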

question_answering

class huggingface_hub.QuestionAnsweringInput

( inputs: QuestionAnsweringInputData parameters: huggingface_hub.inference._generated.types.question_answering.QuestionAnsweringParameters | None = None )

Inputs for Question Answering inference

class huggingface_hub.QuestionAnsweringInputData

( context: str question: str )

One (context, question) pair to answer

class huggingface_hub.QuestionAnsweringOutputElement

( answer: str end: int score: float start: int )

Outputs of inference for the Question Answering task

class huggingface_hub.QuestionAnsweringParameters

( align_to_words: bool | None = None doc_stride: int | None = None handle_impossible_answer: bool | None = None max_answer_len: int | None = None max_question_len: int | None = None max_seq_len: int | None = None top_k: int | None = None )

Additional inference parameters for Question Answering
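In `QuestionAnsweringOutputElement`, `start` and `end` are character offsets of the answer span within the context string. A minimal stdlib sketch of the shape above, with an illustrative context and answer:

```python
from dataclasses import dataclass

# Stdlib sketch of QuestionAnsweringOutputElement; start/end index the answer
# span inside the context passed in the input.
@dataclass
class QuestionAnsweringOutputElement:
    answer: str
    end: int
    score: float
    start: int

context = "Paris is the capital of France."
el = QuestionAnsweringOutputElement(answer="Paris", start=0, end=5, score=0.99)

# Slicing the context with (start, end) recovers the answer text.
span = context[el.start:el.end]
```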

sentence_similarity

class huggingface_hub.SentenceSimilarityInput

( inputs: SentenceSimilarityInputData parameters: dict[str, typing.Any] | None = None )

Inputs for Sentence Similarity inference

class huggingface_hub.SentenceSimilarityInputData

( sentences: list source_sentence: str )

summarization

class huggingface_hub.SummarizationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.summarization.SummarizationParameters | None = None )

Inputs for Summarization inference

class huggingface_hub.SummarizationOutput

( summary_text: str )

Outputs of inference for the Summarization task

class huggingface_hub.SummarizationParameters

( clean_up_tokenization_spaces: bool | None = None generate_parameters: dict[str, typing.Any] | None = None truncation: typing.Optional[ForwardRef('SummarizationTruncationStrategy')] = None )

Additional inference parameters for Summarization

table_question_answering

class huggingface_hub.TableQuestionAnsweringInput

( inputs: TableQuestionAnsweringInputData parameters: huggingface_hub.inference._generated.types.table_question_answering.TableQuestionAnsweringParameters | None = None )

Inputs for Table Question Answering inference

class huggingface_hub.TableQuestionAnsweringInputData

( question: str table: dict )

One (table, question) pair to answer

class huggingface_hub.TableQuestionAnsweringOutputElement

( answer: str cells: list coordinates: list aggregator: str | None = None )

Outputs of inference for the Table Question Answering task

class huggingface_hub.TableQuestionAnsweringParameters

( padding: typing.Optional[ForwardRef('Padding')] = None sequential: bool | None = None truncation: bool | None = None )

Additional inference parameters for Table Question Answering

text2text_generation

class huggingface_hub.Text2TextGenerationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text2text_generation.Text2TextGenerationParameters | None = None )

Inputs for Text2text Generation inference

class huggingface_hub.Text2TextGenerationOutput

( generated_text: typing.Any text2_text_generation_output_generated_text: str | None = None )

Outputs of inference for the Text2text Generation task

class huggingface_hub.Text2TextGenerationParameters

( clean_up_tokenization_spaces: bool | None = None generate_parameters: dict[str, typing.Any] | None = None truncation: typing.Optional[ForwardRef('Text2TextGenerationTruncationStrategy')] = None )

Additional inference parameters for Text2text Generation

text_classification

class huggingface_hub.TextClassificationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_classification.TextClassificationParameters | None = None )

Inputs for Text Classification inference

class huggingface_hub.TextClassificationOutputElement

( label: str score: float )

Outputs of inference for the Text Classification task

class huggingface_hub.TextClassificationParameters

( function_to_apply: typing.Optional[ForwardRef('TextClassificationOutputTransform')] = None top_k: int | None = None )

Additional inference parameters for Text Classification

text_generation

class huggingface_hub.TextGenerationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGenerateParameters | None = None stream: bool | None = None )

Text Generation Input. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationInputGenerateParameters

( adapter_id: str | None = None best_of: int | None = None decoder_input_details: bool | None = None details: bool | None = None do_sample: bool | None = None frequency_penalty: float | None = None grammar: huggingface_hub.inference._generated.types.text_generation.TextGenerationInputGrammarType | None = None max_new_tokens: int | None = None repetition_penalty: float | None = None return_full_text: bool | None = None seed: int | None = None stop: list[str] | None = None temperature: float | None = None top_k: int | None = None top_n_tokens: int | None = None top_p: float | None = None truncate: int | None = None typical_p: float | None = None watermark: bool | None = None )

class huggingface_hub.TextGenerationInputGrammarType

( type: TypeEnum value: typing.Any )

class huggingface_hub.TextGenerationOutput

( generated_text: str details: huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputDetails | None = None )

Text Generation Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationOutputBestOfSequence

( finish_reason: TextGenerationOutputFinishReason generated_text: str generated_tokens: int prefill: list tokens: list seed: int | None = None top_tokens: list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]] | None = None )

class huggingface_hub.TextGenerationOutputDetails

( finish_reason: TextGenerationOutputFinishReason generated_tokens: int prefill: list tokens: list best_of_sequences: list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputBestOfSequence] | None = None seed: int | None = None top_tokens: list[list[huggingface_hub.inference._generated.types.text_generation.TextGenerationOutputToken]] | None = None )

class huggingface_hub.TextGenerationOutputPrefillToken

( id: int logprob: float text: str )

class huggingface_hub.TextGenerationOutputToken

( id: int logprob: float special: bool text: str )

class huggingface_hub.TextGenerationStreamOutput

( index: int token: TextGenerationStreamOutputToken details: huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputStreamDetails | None = None generated_text: str | None = None top_tokens: list[huggingface_hub.inference._generated.types.text_generation.TextGenerationStreamOutputToken] | None = None )

Text Generation Stream Output. Auto-generated from TGI specs. For more details, check out https://github.com/huggingface/huggingface.js/blob/main/packages/tasks/scripts/inference-tgi-import.ts.

class huggingface_hub.TextGenerationStreamOutputStreamDetails

( finish_reason: TextGenerationOutputFinishReason generated_tokens: int input_length: int seed: int | None = None )

class huggingface_hub.TextGenerationStreamOutputToken

( id: int logprob: float special: bool text: str )

text_to_audio

class huggingface_hub.TextToAudioGenerationParameters

( do_sample: bool | None = None early_stopping: typing.Union[bool, ForwardRef('TextToAudioEarlyStoppingEnum'), NoneType] = None epsilon_cutoff: float | None = None eta_cutoff: float | None = None max_length: int | None = None max_new_tokens: int | None = None min_length: int | None = None min_new_tokens: int | None = None num_beam_groups: int | None = None num_beams: int | None = None penalty_alpha: float | None = None temperature: float | None = None top_k: int | None = None top_p: float | None = None typical_p: float | None = None use_cache: bool | None = None )

Parametrization of the text generation process

class huggingface_hub.TextToAudioInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_to_audio.TextToAudioParameters | None = None )

Inputs for Text To Audio inference

class huggingface_hub.TextToAudioOutput

( audio: typing.Any sampling_rate: float )

Outputs of inference for the Text To Audio task

class huggingface_hub.TextToAudioParameters

( generation_parameters: huggingface_hub.inference._generated.types.text_to_audio.TextToAudioGenerationParameters | None = None )

Additional inference parameters for Text To Audio

text_to_image

class huggingface_hub.TextToImageInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_to_image.TextToImageParameters | None = None )

Inputs for Text To Image inference

class huggingface_hub.TextToImageOutput

( image: typing.Any )

Outputs of inference for the Text To Image task

class huggingface_hub.TextToImageParameters

( guidance_scale: float | None = None height: int | None = None negative_prompt: str | None = None num_inference_steps: int | None = None scheduler: str | None = None seed: int | None = None width: int | None = None )

Additional inference parameters for Text To Image

text_to_speech

class huggingface_hub.TextToSpeechGenerationParameters

( do_sample: bool | None = None early_stopping: typing.Union[bool, ForwardRef('TextToSpeechEarlyStoppingEnum'), NoneType] = None epsilon_cutoff: float | None = None eta_cutoff: float | None = None max_length: int | None = None max_new_tokens: int | None = None min_length: int | None = None min_new_tokens: int | None = None num_beam_groups: int | None = None num_beams: int | None = None penalty_alpha: float | None = None temperature: float | None = None top_k: int | None = None top_p: float | None = None typical_p: float | None = None use_cache: bool | None = None )

Parametrization of the text generation process

class huggingface_hub.TextToSpeechInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechParameters | None = None )

Inputs for Text To Speech inference

class huggingface_hub.TextToSpeechOutput

( audio: typing.Any sampling_rate: float | None = None )

Outputs of inference for the Text To Speech task

class huggingface_hub.TextToSpeechParameters

( generation_parameters: huggingface_hub.inference._generated.types.text_to_speech.TextToSpeechGenerationParameters | None = None )

Additional inference parameters for Text To Speech

text_to_video

class huggingface_hub.TextToVideoInput

( inputs: str parameters: huggingface_hub.inference._generated.types.text_to_video.TextToVideoParameters | None = None )

Inputs for Text To Video inference

class huggingface_hub.TextToVideoOutput

( video: typing.Any )

Outputs of inference for the Text To Video task

class huggingface_hub.TextToVideoParameters

( guidance_scale: float | None = None negative_prompt: list[str] | None = None num_frames: float | None = None num_inference_steps: int | None = None seed: int | None = None )

Additional inference parameters for Text To Video

token_classification

class huggingface_hub.TokenClassificationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.token_classification.TokenClassificationParameters | None = None )

Inputs for Token Classification inference

class huggingface_hub.TokenClassificationOutputElement

( end: int score: float start: int word: str entity: str | None = None entity_group: str | None = None )

Outputs of inference for the Token Classification task

class huggingface_hub.TokenClassificationParameters

( aggregation_strategy: typing.Optional[ForwardRef('TokenClassificationAggregationStrategy')] = None ignore_labels: list[str] | None = None stride: int | None = None )

Additional inference parameters for Token Classification

translation

class huggingface_hub.TranslationInput

( inputs: str parameters: huggingface_hub.inference._generated.types.translation.TranslationParameters | None = None )

Inputs for Translation inference

class huggingface_hub.TranslationOutput

( translation_text: str )

Outputs of inference for the Translation task

class huggingface_hub.TranslationParameters

( clean_up_tokenization_spaces: bool | None = None generate_parameters: dict[str, typing.Any] | None = None src_lang: str | None = None tgt_lang: str | None = None truncation: typing.Optional[ForwardRef('TranslationTruncationStrategy')] = None )

Additional inference parameters for Translation

video_classification

class huggingface_hub.VideoClassificationInput

( inputs: typing.Any parameters: huggingface_hub.inference._generated.types.video_classification.VideoClassificationParameters | None = None )

Inputs for Video Classification inference

class huggingface_hub.VideoClassificationOutputElement

( label: str score: float )

Outputs of inference for the Video Classification task

class huggingface_hub.VideoClassificationParameters

( frame_sampling_rate: int | None = None function_to_apply: typing.Optional[ForwardRef('VideoClassificationOutputTransform')] = None num_frames: int | None = None top_k: int | None = None )

Additional inference parameters for Video Classification

visual_question_answering

class huggingface_hub.VisualQuestionAnsweringInput

( inputs: VisualQuestionAnsweringInputData parameters: huggingface_hub.inference._generated.types.visual_question_answering.VisualQuestionAnsweringParameters | None = None )

Inputs for Visual Question Answering inference

class huggingface_hub.VisualQuestionAnsweringInputData

( image: typing.Any question: str )

One (image, question) pair to answer

class huggingface_hub.VisualQuestionAnsweringOutputElement

( score: float answer: str | None = None )

Outputs of inference for the Visual Question Answering task

class huggingface_hub.VisualQuestionAnsweringParameters

( top_k: int | None = None )

Additional inference parameters for Visual Question Answering

zero_shot_classification

class huggingface_hub.ZeroShotClassificationInput

( inputs: str parameters: ZeroShotClassificationParameters )

Inputs for Zero Shot Classification inference

class huggingface_hub.ZeroShotClassificationOutputElement

( label: str score: float )

Outputs of inference for the Zero Shot Classification task

class huggingface_hub.ZeroShotClassificationParameters

( candidate_labels: list hypothesis_template: str | None = None multi_label: bool | None = None )

Additional inference parameters for Zero Shot Classification

zero_shot_image_classification

class huggingface_hub.ZeroShotImageClassificationInput

( inputs: str parameters: ZeroShotImageClassificationParameters )

Inputs for Zero Shot Image Classification inference

class huggingface_hub.ZeroShotImageClassificationOutputElement

( label: str score: float )

Outputs of inference for the Zero Shot Image Classification task

class huggingface_hub.ZeroShotImageClassificationParameters

( candidate_labels: list hypothesis_template: str | None = None )

Additional inference parameters for Zero Shot Image Classification

zero_shot_object_detection

class huggingface_hub.ZeroShotObjectDetectionBoundingBox

( xmax: int xmin: int ymax: int ymin: int )

The predicted bounding box. Coordinates are relative to the top left corner of the input image.

class huggingface_hub.ZeroShotObjectDetectionInput

( inputs: str parameters: ZeroShotObjectDetectionParameters )

Inputs for Zero Shot Object Detection inference

class huggingface_hub.ZeroShotObjectDetectionOutputElement

( box: ZeroShotObjectDetectionBoundingBox label: str score: float )

Outputs of inference for the Zero Shot Object Detection task

class huggingface_hub.ZeroShotObjectDetectionParameters

( candidate_labels: list )

Additional inference parameters for Zero Shot Object Detection
