Commit ea1518e

llama-tts : avoid crashes related to bad model file paths (ggml-org#12482)
1 parent 1aa87ee commit ea1518e

File tree: 1 file changed, +8 −0 lines changed

examples/tts/tts.cpp (8 additions, 0 deletions)
@@ -571,6 +571,10 @@ int main(int argc, char ** argv) {
     model_ttc = llama_init_ttc.model.get();
     ctx_ttc   = llama_init_ttc.context.get();
 
+    if (model_ttc == nullptr || ctx_ttc == nullptr) {
+        return ENOENT;
+    }
+
     const llama_vocab * vocab = llama_model_get_vocab(model_ttc);
 
     // TODO: refactor in a common struct
@@ -586,6 +590,10 @@ int main(int argc, char ** argv) {
     model_cts = llama_init_cts.model.get();
     ctx_cts   = llama_init_cts.context.get();
 
+    if (model_cts == nullptr || ctx_cts == nullptr) {
+        return ENOENT;
+    }
+
     std::vector<common_sampler *> smpl(n_parallel);
     for (int i = 0; i < n_parallel; ++i) {
         params.sampling.no_perf = (i != 0);
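For context, below is a minimal, self-contained sketch of the guard pattern this commit adds. The init_from_path helper and the model/context structs are hypothetical stand-ins for llama.cpp's init call and types; the point is only the null check that returns ENOENT when loading fails (for example because of a bad model file path), instead of dereferencing a null pointer later on.

// Minimal sketch of the guard pattern added in this commit.
// init_from_path, model, and context are hypothetical stand-ins
// for llama.cpp's init call and types.
#include <cerrno>
#include <cstdio>
#include <memory>
#include <string>

struct model   { std::string path; };
struct context { const model * mdl; };

struct init_result {
    std::unique_ptr<model>   model_ptr;
    std::unique_ptr<context> context_ptr;
};

// Stand-in loader: returns empty pointers when the file cannot be opened,
// mirroring what a failed model load looks like to the caller.
static init_result init_from_path(const std::string & path) {
    init_result res;
    if (std::FILE * f = std::fopen(path.c_str(), "rb")) {
        std::fclose(f);
        res.model_ptr   = std::make_unique<model>(model{path});
        res.context_ptr = std::make_unique<context>(context{res.model_ptr.get()});
    }
    return res;
}

int main(int argc, char ** argv) {
    const std::string path = argc > 1 ? argv[1] : "missing-model.gguf";

    init_result init = init_from_path(path);
    model   * mdl = init.model_ptr.get();
    context * ctx = init.context_ptr.get();

    // The same kind of check the commit adds after each init call:
    // bail out with ENOENT instead of using null pointers further down.
    if (mdl == nullptr || ctx == nullptr) {
        std::fprintf(stderr, "failed to load model from '%s'\n", path.c_str());
        return ENOENT;
    }

    std::printf("loaded model from '%s'\n", path.c_str());
    return 0;
}

With checks like these in place, pointing llama-tts at a nonexistent model file should make it exit with ENOENT rather than crash on a null dereference, which is the behaviour the commit message describes.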

0 commit comments

Comments
0 (0)
Morty Proxy This is a proxified and sanitized view of the page, visit original site.