Update README.md
README.md CHANGED
@@ -50,7 +50,7 @@ We also provide a ControlNet model trained on top of Arc2Face for pose control.
 
 ## Expression Adapter
 
-Our [extension](
 
 <div align="center">
 <img src='https://huggingface.co/foivospar/Arc2Face/resolve/main/assets/arc2face_exp.jpg'>
@@ -66,11 +66,9 @@ hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/config.json", local_dir="./models")
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
-hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/config.json", local_dir="./models")
-hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="controlnet/diffusion_pytorch_model.safetensors", local_dir="./models")
 ```
 
-Please check our [GitHub repository](https://github.com/foivospar/Arc2Face) for complete inference instructions
 
 ## Sample Usage with Diffusers (core model)
 
@@ -119,7 +117,7 @@ Then, pick an image to extract the ID-embedding and generate images:
 app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 app.prepare(ctx_id=0, det_size=(640, 640))
 
-img = np.array(Image.open('
 
 faces = app.get(img)
 faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1] # select largest face (if more than one detected)
 
 ## Expression Adapter
 
+Our [extension](http://arxiv.org/abs/2510.04706) combines Arc2Face with a custom IP-Adapter designed for generating ID-consistent images with precise expression control based on FLAME blendshape parameters. We also provide an optional Reference Adapter which can be used to condition the output directly on the input image, i.e. preserving the subject's appearance and background (to an extent). You can find more details in the report.
 
 <div align="center">
 <img src='https://huggingface.co/foivospar/Arc2Face/resolve/main/assets/arc2face_exp.jpg'>
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="arc2face/diffusion_pytorch_model.safetensors", local_dir="./models")
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/config.json", local_dir="./models")
 hf_hub_download(repo_id="FoivosPar/Arc2Face", filename="encoder/pytorch_model.bin", local_dir="./models")
 ```
 
+Please check our [GitHub repository](https://github.com/foivospar/Arc2Face) for complete inference instructions, including the ControlNet and the Expression Adapter.
 
 ## Sample Usage with Diffusers (core model)
 
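The downloads above are the same `hf_hub_download` call repeated once per file, so they can be collapsed into a loop. A minimal sketch of that equivalent, where the `FILES` list and the `fetch_arc2face`/`expected_paths` helpers are illustrative names of ours, not part of the Arc2Face repo:

```python
from pathlib import Path

# Files fetched one-by-one in the README snippet above.
# NOTE: this list and both helpers are illustrative, not part of Arc2Face.
FILES = [
    "arc2face/config.json",
    "arc2face/diffusion_pytorch_model.safetensors",
    "encoder/config.json",
    "encoder/pytorch_model.bin",
]

def fetch_arc2face(local_dir="./models"):
    """Download every required file into local_dir; returns the local paths."""
    from huggingface_hub import hf_hub_download  # lazy import; needs huggingface_hub installed
    return [
        hf_hub_download(repo_id="FoivosPar/Arc2Face", filename=f, local_dir=local_dir)
        for f in FILES
    ]

def expected_paths(local_dir="./models"):
    """Where the files land: hf_hub_download with local_dir keeps the repo layout."""
    return [str(Path(local_dir) / f) for f in FILES]
```

Because `local_dir` preserves the repository layout, `./models/arc2face`, `./models/encoder` (and, for the pose model, `./models/controlnet`) end up mirroring the folder names used on the Hub.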
 app = FaceAnalysis(name='antelopev2', root='./', providers=['CUDAExecutionProvider', 'CPUExecutionProvider'])
 app.prepare(ctx_id=0, det_size=(640, 640))
 
+img = np.array(Image.open('assets/examples/joacquin.png'))[:,:,::-1]
 
 faces = app.get(img)
 faces = sorted(faces, key=lambda x:(x['bbox'][2]-x['bbox'][0])*(x['bbox'][3]-x['bbox'][1]))[-1] # select largest face (if more than one detected)
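The `sorted(...)[-1]` line above keeps the detection whose bounding box has the largest area. A self-contained sketch of that selection step, using made-up boxes and assuming the insightface-style `bbox = [x1, y1, x2, y2]` layout:

```python
import numpy as np

# Two fake detections, assuming insightface-style bbox = [x1, y1, x2, y2].
faces = [
    {"bbox": np.array([10.0, 10.0, 50.0, 60.0])},   # area: 40 * 50 = 2000
    {"bbox": np.array([0.0, 0.0, 200.0, 150.0])},   # area: 200 * 150 = 30000
]

# Same key as the snippet above: bounding-box width times height.
largest = sorted(
    faces,
    key=lambda x: (x["bbox"][2] - x["bbox"][0]) * (x["bbox"][3] - x["bbox"][1]),
)[-1]

# Reminder on the channel flip above: PIL's Image.open yields RGB, while the
# insightface detector expects BGR, hence the [:, :, ::-1] on the image array.
```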