Update README.md

README.md CHANGED

@@ -1,5 +1,5 @@
 ---
-license:
+license: cc-by-4.0
 tags:
 - speech
 - audio

@@ -19,7 +19,7 @@ pipeline_tag: voice-activity-detection
 [](https://discord.gg/WNsvaCtmDe)
 [](https://github.com/FluidInference/FluidAudio)
 
-Speaker diarization based on [pyannote
+Speaker diarization based on [pyannote](https://github.com/pyannote) models optimized for Apple Neural Engine.
 
 Models are trained on acoustic signatures, so they support any language.
 
@@ -27,6 +27,8 @@ Models are trained on acoustic signatures, so they support any language.
 
 See the SDK for more details: [https://github.com/FluidInference/FluidAudio](https://github.com/FluidInference/FluidAudio)
 
+Please note that the SDK itself is Apache 2.0, but the parent model from Pyannote is `cc-by-4.0`.
+
 ### Technical Specifications
 - **Input**: 16 kHz mono audio
 - **Output**: Speaker segments with timestamps and IDs
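
The 16 kHz mono input requirement above can be met by preprocessing audio before handing it to the model. A minimal stdlib-only sketch (the `to_16k_mono` helper is illustrative, not part of the FluidAudio SDK; it assumes 16-bit PCM WAV input and uses crude nearest-neighbour resampling where a real pipeline would use a proper resampler):

```python
# Sketch only: convert a 16-bit PCM WAV file to 16 kHz mono.
# Not part of the FluidAudio SDK; nearest-neighbour resampling is a
# rough stand-in for a proper band-limited resampler.
import struct
import wave

TARGET_RATE = 16000  # the model expects 16 kHz mono input


def to_16k_mono(src_path: str, dst_path: str) -> None:
    with wave.open(src_path, "rb") as src:
        n_ch = src.getnchannels()
        rate = src.getframerate()
        width = src.getsampwidth()
        frames = src.readframes(src.getnframes())
    if width != 2:
        raise ValueError("sketch assumes 16-bit PCM input")
    samples = struct.unpack("<%dh" % (len(frames) // 2), frames)
    # Downmix: average each frame's samples across channels.
    mono = [sum(samples[i:i + n_ch]) // n_ch
            for i in range(0, len(samples), n_ch)]
    # Nearest-neighbour resample to 16 kHz (integer arithmetic only).
    n_out = len(mono) * TARGET_RATE // rate
    out = [mono[i * rate // TARGET_RATE] for i in range(n_out)]
    with wave.open(dst_path, "wb") as dst:
        dst.setnchannels(1)
        dst.setsampwidth(2)
        dst.setframerate(TARGET_RATE)
        dst.writeframes(struct.pack("<%dh" % len(out), *out))
```

For production use, an `ffmpeg`-style tool or a dedicated resampling library would give better audio quality than the nearest-neighbour picking shown here.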
|