---
tags:
- neuron
- optimized
- aws-neuron
- fill-mask
base_model: hfl/chinese-roberta-wwm-ext
---

# Neuron-Optimized hfl/chinese-roberta-wwm-ext

This repository contains AWS Neuron-optimized files for [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext).

## Model Details

- **Base Model**: [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext)
- **Task**: fill-mask
- **Optimization**: AWS Neuron compilation
- **Generated by**: [badaoui2](https://huggingface.co/badaoui2)
- **Generated using**: [Optimum Neuron Compiler Space](https://huggingface.co/spaces/optimum/neuron-export)

## Usage

This model has been optimized for AWS Neuron devices (Inferentia/Trainium). To use it:

```python
from optimum.neuron import NeuronModelForMaskedLM

model = NeuronModelForMaskedLM.from_pretrained("badaoui2/hfl-chinese-roberta-wwm-ext-neuron")
```

## Performance

These files are pre-compiled for AWS Neuron devices and should provide improved inference performance compared to the original model when deployed on Inferentia or Trainium instances.

## Original Model

For the original model, training details, and more information, please visit: [hfl/chinese-roberta-wwm-ext](https://huggingface.co/hfl/chinese-roberta-wwm-ext)