---
library_name: transformers
license: apache-2.0
pipeline_tag: text-generation
extra_gated_prompt: >-
  **Usage Warnings**


  **Risk of Sensitive or Controversial Outputs**: This model's safety
  filtering has been significantly reduced, potentially generating sensitive,
  controversial, or inappropriate content. Users should exercise caution and
  rigorously review generated outputs.

  **Not Suitable for All Audiences**: Due to limited content filtering, the
  model's outputs may be inappropriate for public settings, underage users, or
  applications requiring high security.

  **Legal and Ethical Responsibilities**: Users must ensure their usage
  complies with local laws and ethical standards. Generated content may carry
  legal or ethical risks, and users are solely responsible for any consequences.

  **Research and Experimental Use**: This model may be used only for research
  in testing and controlled environments; direct use in production or
  public-facing commercial applications is not allowed.

  **Monitoring and Review Recommendations**: Users are strongly advised to
  monitor model outputs in real time and conduct manual reviews when necessary
  to prevent the dissemination of inappropriate content.

  **No Default Safety Guarantees**: Unlike standard models, this model has not
  undergone rigorous safety optimization. I bear no responsibility for any
  consequences arising from its use.
base_model:
  - Qwen/Qwen3-4B-Instruct-2507
---

# Model Card for Qwen3-4B-Instruct-2507-unc

## Model Description

- **Developed by:** Federico Ricciuti
- **License:** Apache 2.0

## Direct Use

The resources associated with this project, including code, data, and model weights, are restricted to academic research purposes only and cannot be used for commercial purposes.

The model can be used to build content-safety moderation guardrails for LLMs, training both prompt and response moderation, for example as sketched below.
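As a rough illustration of the guardrail use case, the sketch below shows one plausible way to format prompt-moderation training examples in chat format for supervised fine-tuning. The system prompt, data schema, and labels here are assumptions made for the example, not part of this project.

```python
# Minimal sketch (assumed data schema) of formatting prompt-moderation
# training examples in chat format for supervised fine-tuning.
MODERATION_SYSTEM = (
    "You are a content-safety classifier. "
    "Answer with exactly one word: safe or unsafe."
)

# Hypothetical labeled examples; real data would come from a curated corpus.
raw_examples = [
    {"prompt": "<benign user prompt>", "label": "safe"},
    {"prompt": "<harmful user prompt>", "label": "unsafe"},
]

sft_examples = [
    {
        "messages": [
            {"role": "system", "content": MODERATION_SYSTEM},
            {"role": "user", "content": ex["prompt"]},
            {"role": "assistant", "content": ex["label"]},
        ]
    }
    for ex in raw_examples
]
```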

It can be used to align general-purpose LLMs toward safety, together with careful use of the data to build safe/unsafe preference pairs, as sketched below.

It can be used to evaluate the safety alignment of LLMs.
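For the preference-pair use case, the following is a minimal sketch, not an endorsed pipeline: the uncensored model supplies the "rejected" completion and a safety-aligned model supplies the "chosen" one. The prompt list is hypothetical, and using Qwen/Qwen3-4B-Instruct-2507 as the aligned reference is an assumption made for this example.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed roles for illustration: the uncensored model supplies the
# "rejected" completion, a safety-aligned model supplies the "chosen" one.
UNC_MODEL = "fedric95/Qwen3-4B-Instruct-2507-unc"
ALIGNED_MODEL = "Qwen/Qwen3-4B-Instruct-2507"  # stand-in aligned reference

def load(name):
    tok = AutoTokenizer.from_pretrained(name)
    mdl = AutoModelForCausalLM.from_pretrained(
        name, torch_dtype="auto", device_map="auto"
    )
    return tok, mdl

def complete(tok, mdl, prompt, max_new_tokens=512):
    text = tok.apply_chat_template(
        [{"role": "user", "content": prompt}],
        tokenize=False,
        add_generation_prompt=True,
    )
    inputs = tok([text], return_tensors="pt").to(mdl.device)
    out = mdl.generate(**inputs, max_new_tokens=max_new_tokens)
    return tok.decode(out[0][inputs.input_ids.shape[1]:], skip_special_tokens=True)

unc_tok, unc_mdl = load(UNC_MODEL)
ali_tok, ali_mdl = load(ALIGNED_MODEL)

# Hypothetical red-team prompts; in practice use a curated evaluation set.
prompts = ["<red-team prompt 1>", "<red-team prompt 2>"]

# DPO-style preference records: {"prompt", "chosen", "rejected"}.
pairs = [
    {
        "prompt": p,
        "chosen": complete(ali_tok, ali_mdl, p),
        "rejected": complete(unc_tok, unc_mdl, p),
    }
    for p in prompts
]
```

Records in this prompt/chosen/rejected shape can be consumed by preference-optimization trainers such as TRL's DPOTrainer.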

## Out-of-Scope Use

The model may generate content that is offensive or upsetting. Topics include, but are not limited to, discriminatory language and discussions of abuse, violence, self-harm, exploitation, and other potentially upsetting subject matter. Please only engage with the data in accordance with your own personal risk tolerance. The data are intended for research purposes, especially research that can make models less harmful. The views expressed in the data do not reflect my views.

## Disclaimer

The resources associated with this project, including code, data, and model weights, are restricted to academic research purposes only and cannot be used for commercial purposes.

This model is capable of generating uncensored and explicit content. It should be used responsibly and within the bounds of the law. The creators do not endorse illegal or unethical use of the model. Content generated using this model should comply with platform guidelines and local regulations regarding NSFW material.

## Quickstart

The following code snippet illustrates how to use the model to generate content from a given input.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "fedric95/Qwen3-4B-Instruct-2507-unc"

# load the tokenizer and the model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(
    model_name,
    torch_dtype="auto",
    device_map="auto"
)

# prepare the model input
prompt = "Give me a short introduction to large language models."
messages = [
    {"role": "user", "content": prompt}
]
text = tokenizer.apply_chat_template(
    messages,
    tokenize=False,
    add_generation_prompt=True,
)
model_inputs = tokenizer([text], return_tensors="pt").to(model.device)

# conduct text completion
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=16384
)
output_ids = generated_ids[0][len(model_inputs.input_ids[0]):].tolist()

content = tokenizer.decode(output_ids, skip_special_tokens=True)

print("content:", content)
```
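Sampling settings strongly affect output quality. The values below are illustrative assumptions, not officially recommended settings; check the upstream Qwen3-4B-Instruct-2507 repository for the recommended generation configuration. This snippet reuses `model` and `model_inputs` from the Quickstart above.

```python
# Illustrative sampling configuration (assumed values, not an official
# recommendation; see the upstream Qwen3-4B-Instruct-2507 repo for guidance).
generated_ids = model.generate(
    **model_inputs,
    max_new_tokens=1024,
    do_sample=True,
    temperature=0.7,
    top_p=0.8,
    top_k=20,
)
```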

More information is available in the Qwen3-4B-Instruct-2507 repository.