Add model files
ooe1123 committed Jun 21, 2025
commit 021f32af2698503741b3df4487f63f7f75dc6506
Binary file added BIN +219 KB lipsync/hallo/demo.jpg
Binary file added BIN +1.26 MB lipsync/hallo/demo.wav
lipsync/hallo/hallo.py (14 changes: 12 additions & 2 deletions)
@@ -48,13 +48,16 @@
MODEL_IMAGE_PROJ_PATH = "image_proj.onnx.prototxt"
WEIGHT_DENOISE_PB_PATH = "denoising_unet_weights.pb"

WEIGHT_FACE_ANALYSIS_DET_PATH = "./face_analysis/models/scrfd_10g_bnkps.onnx"
WEIGHT_FACE_ANALYSIS_REG_PATH = "./face_analysis/models/glintr100.onnx"
WEIGHT_WAV2VEC_PATH = "./wav2vec/wav2vec2-base-960h/model.safetensors"

REMOTE_PATH = "https://storage.googleapis.com/ailia-models/hallo/"

IMAGE_SIZE = 512
SAMPLING_RATE = 16000
IMAGE_PATH = "demo.jpg"
WAV_PATH = "demo.wav"
VIDEO_PATH = "demo.wav"
SAVE_VIDEO_PATH = "output.mp4"

# ======================
@@ -65,8 +68,8 @@
"Hallo: Hierarchical Audio-Driven Visual Synthesis for Portrait Image Animation",
IMAGE_PATH,
SAVE_VIDEO_PATH,
input_ftype="audio",
)
parser.add_argument("--driving_audio", default=WAV_PATH, help="Input audio")
parser.add_argument("--onnx", action="store_true", help="execute onnxruntime version.")
args = update_parser(parser, check_input_type=False)

@@ -657,6 +660,7 @@ def progress_bar(self, iterable=None, total=None):

def recognize_from_video(pipe: FaceAnimatePipeline):
image_path = args.input[0]
driving_audio_path = args.driving_audio
# prepare input data
image = load_image(image_path)
image = cv2.cvtColor(image, cv2.COLOR_BGRA2BGR)
@@ -758,6 +762,12 @@ def main():
)
check_and_download_file(WEIGHT_DENOISE_PB_PATH, REMOTE_PATH)

# for insightface
check_and_download_file(WEIGHT_FACE_ANALYSIS_DET_PATH, REMOTE_PATH)
check_and_download_file(WEIGHT_FACE_ANALYSIS_REG_PATH, REMOTE_PATH)
# wav2vec
check_and_download_file(WEIGHT_WAV2VEC_PATH, REMOTE_PATH)

env_id = args.env_id

# initialize
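For context, `check_and_download_file` is ailia-models' download helper; judging from its name and the calls above, it fetches a weight file from `REMOTE_PATH` only when the file is missing locally. A minimal stand-in sketch (the remote URL mirroring the local relative path is an assumption, not something this diff confirms):

```python
import os
import urllib.request

def check_and_download_file(file_path, remote_path):
    """Sketch of the ailia-models helper: download file_path if absent.

    Assumes the remote bucket mirrors the local relative layout.
    """
    if not os.path.exists(file_path):
        os.makedirs(os.path.dirname(file_path) or ".", exist_ok=True)
        urllib.request.urlretrieve(remote_path + file_path.lstrip("./"), file_path)
```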
lipsync/hallo/wav2vec/wav2vec2-base-960h/README.md (128 changes: 128 additions & 0 deletions)
@@ -0,0 +1,128 @@
---
language: en
datasets:
- librispeech_asr
tags:
- audio
- automatic-speech-recognition
- hf-asr-leaderboard
license: apache-2.0
widget:
- example_title: Librispeech sample 1
  src: https://cdn-media.huggingface.co/speech_samples/sample1.flac
- example_title: Librispeech sample 2
  src: https://cdn-media.huggingface.co/speech_samples/sample2.flac
model-index:
- name: wav2vec2-base-960h
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (clean)
      type: librispeech_asr
      config: clean
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 3.4
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: LibriSpeech (other)
      type: librispeech_asr
      config: other
      split: test
      args:
        language: en
    metrics:
    - name: Test WER
      type: wer
      value: 8.6
---

# Wav2Vec2-Base-960h

[Facebook's Wav2Vec2](https://ai.facebook.com/blog/wav2vec-20-learning-the-structure-of-speech-from-raw-audio/)

This base model was pretrained and fine-tuned on 960 hours of LibriSpeech, 16 kHz sampled speech audio. When using the model,
make sure that your speech input is also sampled at 16 kHz.
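Audio at any other rate should be resampled before it is fed to the model; a minimal sketch using librosa (the file name `speech.wav` is illustrative):

```python
import librosa

# load at the file's native rate, then resample to the 16 kHz the model expects
speech, sr = librosa.load("speech.wav", sr=None)
if sr != 16000:
    speech = librosa.resample(speech, orig_sr=sr, target_sr=16000)
```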

[Paper](https://arxiv.org/abs/2006.11477)

Authors: Alexei Baevski, Henry Zhou, Abdelrahman Mohamed, Michael Auli

**Abstract**

We show for the first time that learning powerful representations from speech audio alone followed by fine-tuning on transcribed speech can outperform the best semi-supervised methods while being conceptually simpler. wav2vec 2.0 masks the speech input in the latent space and solves a contrastive task defined over a quantization of the latent representations which are jointly learned. Experiments using all labeled data of Librispeech achieve 1.8/3.3 WER on the clean/other test sets. When lowering the amount of labeled data to one hour, wav2vec 2.0 outperforms the previous state of the art on the 100 hour subset while using 100 times less labeled data. Using just ten minutes of labeled data and pre-training on 53k hours of unlabeled data still achieves 4.8/8.2 WER. This demonstrates the feasibility of speech recognition with limited amounts of labeled data.

The original model can be found under https://github.com/pytorch/fairseq/tree/master/examples/wav2vec#wav2vec-20.


# Usage

To transcribe audio files the model can be used as a standalone acoustic model as follows:

```python
from transformers import Wav2Vec2Processor, Wav2Vec2ForCTC
from datasets import load_dataset
import torch

# load model and tokenizer
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h")

# load dummy dataset and read soundfiles
ds = load_dataset("patrickvonplaten/librispeech_asr_dummy", "clean", split="validation")

# tokenize
input_values = processor(ds[0]["audio"]["array"], return_tensors="pt", padding="longest").input_values # Batch size 1

# retrieve logits
logits = model(input_values).logits

# take argmax and decode
predicted_ids = torch.argmax(logits, dim=-1)
transcription = processor.batch_decode(predicted_ids)
```
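Note that recent versions of transformers expect the input rate to be stated explicitly; passing `sampling_rate=16000` to the `processor(...)` call above lets the feature extractor verify the rate rather than assume it.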

## Evaluation

This code snippet shows how to evaluate **facebook/wav2vec2-base-960h** on LibriSpeech's "clean" and "other" test data.

```python
from datasets import load_dataset
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
import torch
from jiwer import wer


librispeech_eval = load_dataset("librispeech_asr", "clean", split="test")

model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base-960h").to("cuda")
processor = Wav2Vec2Processor.from_pretrained("facebook/wav2vec2-base-960h")

def map_to_pred(batch):
    # a batched map receives lists per column, so gather the raw arrays first
    audio_arrays = [audio["array"] for audio in batch["audio"]]
    input_values = processor(audio_arrays, return_tensors="pt", padding="longest").input_values
    with torch.no_grad():
        logits = model(input_values.to("cuda")).logits

    predicted_ids = torch.argmax(logits, dim=-1)
    transcription = processor.batch_decode(predicted_ids)
    batch["transcription"] = transcription
    return batch

result = librispeech_eval.map(map_to_pred, batched=True, batch_size=1, remove_columns=["audio"])

print("WER:", wer(result["text"], result["transcription"]))
```

*Result (WER)*:

| "clean" | "other" |
|---|---|
| 3.4 | 8.6 |
lipsync/hallo/wav2vec/wav2vec2-base-960h/config.json (77 changes: 77 additions & 0 deletions)
@@ -0,0 +1,77 @@
{
  "_name_or_path": "facebook/wav2vec2-base-960h",
  "activation_dropout": 0.1,
  "apply_spec_augment": true,
  "architectures": [
    "Wav2Vec2ForCTC"
  ],
  "attention_dropout": 0.1,
  "bos_token_id": 1,
  "codevector_dim": 256,
  "contrastive_logits_temperature": 0.1,
  "conv_bias": false,
  "conv_dim": [
    512,
    512,
    512,
    512,
    512,
    512,
    512
  ],
  "conv_kernel": [
    10,
    3,
    3,
    3,
    3,
    2,
    2
  ],
  "conv_stride": [
    5,
    2,
    2,
    2,
    2,
    2,
    2
  ],
  "ctc_loss_reduction": "sum",
  "ctc_zero_infinity": false,
  "diversity_loss_weight": 0.1,
  "do_stable_layer_norm": false,
  "eos_token_id": 2,
  "feat_extract_activation": "gelu",
  "feat_extract_dropout": 0.0,
  "feat_extract_norm": "group",
  "feat_proj_dropout": 0.1,
  "feat_quantizer_dropout": 0.0,
  "final_dropout": 0.1,
  "gradient_checkpointing": false,
  "hidden_act": "gelu",
  "hidden_dropout": 0.1,
  "hidden_dropout_prob": 0.1,
  "hidden_size": 768,
  "initializer_range": 0.02,
  "intermediate_size": 3072,
  "layer_norm_eps": 1e-05,
  "layerdrop": 0.1,
  "mask_feature_length": 10,
  "mask_feature_prob": 0.0,
  "mask_time_length": 10,
  "mask_time_prob": 0.05,
  "model_type": "wav2vec2",
  "num_attention_heads": 12,
  "num_codevector_groups": 2,
  "num_codevectors_per_group": 320,
  "num_conv_pos_embedding_groups": 16,
  "num_conv_pos_embeddings": 128,
  "num_feat_extract_layers": 7,
  "num_hidden_layers": 12,
  "num_negatives": 100,
  "pad_token_id": 0,
  "proj_codevector_dim": 256,
  "transformers_version": "4.7.0.dev0",
  "vocab_size": 32
}
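Since the commit also adds the tokenizer files below, and hallo.py downloads `model.safetensors` into the same folder, the directory forms a complete local checkpoint. A minimal sketch of loading it with transformers (illustrative only; how hallo.py itself consumes the weights is outside these hunks):

```python
from transformers import Wav2Vec2Processor, Wav2Vec2Model

# load processor and encoder from the directory added by this commit
local_dir = "lipsync/hallo/wav2vec/wav2vec2-base-960h"
processor = Wav2Vec2Processor.from_pretrained(local_dir)
model = Wav2Vec2Model.from_pretrained(local_dir)
```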
lipsync/hallo/wav2vec/wav2vec2-base-960h/feature_extractor_config.json (8 changes: 8 additions & 0 deletions)
@@ -0,0 +1,8 @@
{
  "do_normalize": true,
  "feature_dim": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}
lipsync/hallo/wav2vec/wav2vec2-base-960h/preprocessor_config.json (8 changes: 8 additions & 0 deletions)
@@ -0,0 +1,8 @@
{
  "do_normalize": true,
  "feature_size": 1,
  "padding_side": "right",
  "padding_value": 0.0,
  "return_attention_mask": false,
  "sampling_rate": 16000
}
lipsync/hallo/wav2vec/wav2vec2-base-960h/special_tokens_map.json (1 change: 1 addition & 0 deletions)
@@ -0,0 +1 @@
{"bos_token": "<s>", "eos_token": "</s>", "unk_token": "<unk>", "pad_token": "<pad>"}
lipsync/hallo/wav2vec/wav2vec2-base-960h/tokenizer_config.json (1 change: 1 addition & 0 deletions)
@@ -0,0 +1 @@
{"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "pad_token": "<pad>", "do_lower_case": false, "return_attention_mask": false, "do_normalize": true}
lipsync/hallo/wav2vec/wav2vec2-base-960h/vocab.json (1 change: 1 addition & 0 deletions)
@@ -0,0 +1 @@
{"<pad>": 0, "<s>": 1, "</s>": 2, "<unk>": 3, "|": 4, "E": 5, "T": 6, "A": 7, "O": 8, "N": 9, "I": 10, "H": 11, "S": 12, "R": 13, "D": 14, "L": 15, "U": 16, "M": 17, "W": 18, "C": 19, "F": 20, "G": 21, "Y": 22, "P": 23, "B": 24, "V": 25, "K": 26, "'": 27, "X": 28, "J": 29, "Q": 30, "Z": 31}