How to train/decode on reverberant speech? #251

@kevinmchu
I'd like to train a model on reverberant speech using the alignments generated from the corresponding anechoic data. Currently, I'm doing something similar to TIMIT_joint_training_liGRU_fbank.cfg: I use the reverberant TIMIT recipe to extract the features and the anechoic recipe for lab_folder and lab_graph. However, I noticed that decode_dnn.sh uses lab_graph to generate the lattices, rather than a graph built from the reverberant acoustic model.
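
To make this concrete, here is a minimal sketch of the relevant dataset section of my cfg. The paths are hypothetical stand-ins for my actual directories (timit_rev for the reverberant recipe, timit_anechoic for the anechoic one):

```
[dataset3]
data_name = TIMIT_test
fea = fea_name=fbank
	fea_lst=/path/to/timit_rev/s5/data/test/feats_fbank.scp
	fea_opts=apply-cmvn --utt2spk=ark:/path/to/timit_rev/s5/data/test/utt2spk ark:/path/to/timit_rev/s5/fbank/cmvn_test.ark ark:- ark:- |
	cw_left=0
	cw_right=0

lab = lab_name=lab_cd
	lab_folder=/path/to/timit_anechoic/s5/exp/tri3_ali
	lab_opts=ali-to-pdf
	lab_count_file=auto
	lab_data_folder=/path/to/timit_rev/s5/data/test/
	lab_graph=/path/to/timit_anechoic/s5/exp/tri3/graph
```

With this setup, the features come from the reverberant data and lab_folder points at the anechoic alignments, but the lattices at decode time are generated from the (anechoic) graph in lab_graph.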

What is the easiest way to specify using the anechoic alignments and reverberant graph?
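
In other words, I'd want to keep lab_folder as above but point lab_graph at the graph from the reverberant setup, something like (again a hypothetical path):

```
	lab_graph=/path/to/timit_rev/s5/exp/tri3/graph
```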
