Misc. bug: test-backend-ops grad crash by GGML_ASSERT error #12520

Closed
@masamaru-san

Description


Name and Version

.\llama-cli.exe --version
version: 4942 (fbdfefe)
built with MSVC 19.43.34808.0 for x64

Operating systems

Windows

Which llama.cpp modules do you know to be affected?

Test code

Command line

> .\test-backend-ops.exe grad -o CPY
or
> .\test-backend-ops.exe grad

Problem description & steps to reproduce

Description

Commit #12310 causes test-backend-ops to crash in grad mode with a GGML_ASSERT failure. The crash occurs regardless of which backend is used.

Steps to reproduce

Run test-backend-ops in grad mode.
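For reference, a minimal Python sketch of the logical shapes involved in the failing test case. The `permuted_shape` helper is my own illustration, assuming ggml_permute's convention that source dimension `i` becomes result dimension `axis[i]`; it is not part of llama.cpp. The failing case is the first one where both src and dst carry a non-trivial permutation, so their permuted views end up with different logical shapes:

```python
def permuted_shape(ne, axis):
    """Hypothetical helper mimicking ggml_permute:
    source dimension i maps to result dimension axis[i]."""
    out = [0, 0, 0, 0]
    for i, a in enumerate(axis):
        out[a] = ne[i]
    return out

ne = [1, 2, 3, 4]

# Passing cases from the log:
permuted_shape(ne, [0, 0, 0, 0] and [0, 1, 2, 3])  # identity, shape unchanged
print(permuted_shape(ne, [0, 2, 1, 3]))  # src permuted, dst identity

# Failing case: CPY(..., permute_src=[0,3,1,2], permute_dst=[0,2,1,3])
src_view = permuted_shape(ne, [0, 3, 1, 2])
dst_view = permuted_shape(ne, [0, 2, 1, 3])
print(src_view, dst_view)  # [1, 3, 4, 2] vs [1, 3, 2, 4]
```

Both views hold the same 24 elements, but their shapes differ, which is consistent with the assert `ggml_are_same_shape(src0, cgraph->grads[isrc0])` firing when the gradient tensor built for src0 does not match src0's shape.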

First Bad Commit

Commit #12310 : SHA ba932df

Relevant log output

[3/23 08:24:26] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64
> .\test-backend-ops.exe grad -o CPY
ggml_vulkan: Found 1 Vulkan devices:
ggml_vulkan: 0 = AMD Radeon(TM) Graphics (AMD proprietary driver) | uma: 1 | fp16: 1 | warp size: 64 | shared memory: 32768 | matrix cores: none
Testing 2 devices

Backend 1/2: Vulkan0
  Device description: AMD Radeon(TM) Graphics
  Device memory: 256 MB (256 MB free)

  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
[3/23 08:24:39] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-vulkan-x64
> cd ..\llama-b4942-bin-win-avx2-x64\
[3/23 08:24:55] PS E:\AI\llama.cpp\b4942\llama-b4942-bin-win-avx2-x64
> .\test-backend-ops.exe grad -o CPY
Testing 1 devices

Backend 1/1: CPU
  Device description: AMD Ryzen 7 5700U with Radeon Graphics
  Device memory: 0 MB (0 MB free)

  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,0,0,0],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,2,1,3],permute_dst=[0,0,0,0]): OK
  CPY(type_src=f32,type_dst=f32,ne=[1,2,3,4],permute_src=[0,3,1,2],permute_dst=[0,2,1,3]): D:\a\llama.cpp\llama.cpp\ggml\src\ggml.c:5816: GGML_ASSERT(!src0_needs_grads || ggml_are_same_shape(src0, cgraph->grads[isrc0])) failed
