Update README.md

# Model Card for peleke-phi-4

This model is a fine-tuned version of [microsoft/phi-4](https://huggingface.co/microsoft/phi-4) for antibody sequence generation.

It takes an antigen sequence as input and returns novel Fv portions of heavy and light chain antibody sequences.

## Quick start

1. Load the model

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftConfig, PeftModel

model_name = 'silicobio/peleke-phi-4'
config = PeftConfig.from_pretrained(model_name)

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)

# Load the base model, resize its embeddings to match the fine-tuned tokenizer,
# then attach the adapter weights.
model = AutoModelForCausalLM.from_pretrained(config.base_model_name_or_path, torch_dtype=torch.bfloat16, trust_remote_code=True).cuda()
model.resize_token_embeddings(len(tokenizer))
model = PeftModel.from_pretrained(model, model_name).cuda()
```
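
The model annotates epitopes with dedicated `<epi>`/`</epi>` tags (see step 2). As a quick sanity check (a sketch, assuming those tags were added to the tokenizer during fine-tuning), they should resolve to token ids of their own rather than being split into characters:

```python
# Assumes '<epi>' and '</epi>' were added as tokens during fine-tuning.
print(tokenizer.convert_tokens_to_ids(['<epi>', '</epi>']))
```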

2. Format your input

This model uses `<epi>` and `</epi>` tags to annotate epitope residues of interest.

When annotating by hand, it may be easier to use other characters, such as square brackets, for example `...CSFS[S][F][V]L[N]WY...`.
The following function converts that bracket annotation into the model's format.

```python
import re

def format_prompt(antigen_sequence):
    # Rewrite bracket-annotated residues, e.g. [S], as <epi>S</epi>.
    epitope_seq = re.sub(r'\[([A-Z])\]', r'<epi>\1</epi>', antigen_sequence)
    formatted_str = f"Antigen: {epitope_seq}<|im_end|>\nAntibody:"
    return formatted_str
```
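
For example, calling it on the bracket-annotated fragment above (truncated here purely for illustration) produces:

```python
print(format_prompt("CSFS[S][F][V]L[N]WY"))
# Antigen: CSFS<epi>S</epi><epi>F</epi><epi>V</epi>L<epi>N</epi>WY<|im_end|>
# Antibody:
```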

3. Generate an antibody sequence

```python
antigen = "..."  # your antigen sequence, with epitope residues marked as [X]

prompt = format_prompt(antigen)
inputs = tokenizer(prompt, return_tensors="pt")
inputs = {k: v.cuda() for k, v in inputs.items()}

with torch.no_grad():
    outputs = model.generate(
        **inputs,
        max_new_tokens=1000,
        do_sample=True,
        temperature=0.7,
        pad_token_id=tokenizer.eos_token_id,
        use_cache=False,
    )

# The prompt ends at the first <|im_end|>, so the generated antibody follows it.
full_text = tokenizer.decode(outputs[0], skip_special_tokens=False)
antibody_sequence = full_text.split('<|im_end|>')[1].replace('Antibody: ', '').strip()
print(f"Antigen: {antigen}\nAntibody: {antibody_sequence}\n")
```

The output is `|`-delimited: the Fv portion of a heavy chain, followed by the Fv portion of a light chain.

```sh
Antigen: NPPTFSPALL...
Antibody: QVQLVQSGGG...|DIQMTQSPSS...
```
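
To work with the chains separately, the generated sequence can be split on that delimiter (a minimal sketch, assuming the output follows the heavy`|`light format shown above):

```python
# Split the generated sequence into its heavy and light chain Fv regions.
heavy_fv, light_fv = antibody_sequence.split('|')
print(f"Heavy chain Fv: {heavy_fv}")
print(f"Light chain Fv: {light_fv}")
```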

## Training procedure