Enable build Intel backend in onemkl interfaces on CUDA #2229

Merged
vlad-perevezentsev merged 2 commits into IntelPython/dpnp:master from IntelPython/dpnp:enable_intel_backend_cuda
Dec 12, 2024

Conversation

@vlad-perevezentsev (Contributor)

This PR enables the MKLGPU_BACKEND and MKLCPU_BACKEND builds in oneMKL Interfaces when building on CUDA with the --target=cuda flag, so that all available devices can be used.
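For context, the open-source oneMKL Interfaces project selects its backends at CMake configure time. The sketch below uses option names from that project to illustrate what "enabling the Intel backends alongside CUDA" means; the exact invocation used by dpnp's build scripts may differ.

```shell
# Illustrative sketch only (option names from the open-source oneMKL
# Interfaces CMake build; dpnp's build scripts may wire these differently).
# A CUDA-targeted build previously enabled only the CUDA backends; also
# enabling MKLCPU/MKLGPU lets Intel CPU and GPU devices be used from the
# same build.
cmake .. \
  -DENABLE_MKLCPU_BACKEND=ON \
  -DENABLE_MKLGPU_BACKEND=ON \
  -DENABLE_CUBLAS_BACKEND=ON
```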

ONEAPI_DEVICE_SELECTOR=cuda:gpu pytest dpnp/tests/
========= 67432 passed, 5625 skipped, 34 deselected, 2 xfailed in 442.92s (0:07:22) ======================


ONEAPI_DEVICE_SELECTOR=opencl:cpu pytest dpnp/tests/
========= 68332 passed, 4725 skipped, 34 deselected, 2 xfailed in 289.06s (0:04:49) ======================

The previous implementation only allowed array allocation on the cuda:gpu device when the ONEAPI_DEVICE_SELECTOR=cuda:gpu environment variable was set, and threw a RuntimeError otherwise.
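With both backends built, the same binary can target different devices per run by changing ONEAPI_DEVICE_SELECTOR, as in the test runs above. A minimal illustrative one-liner (not from the PR):

```shell
# Switch the visible SYCL device via the selector and check where dpnp
# allocates arrays (requires matching hardware and drivers).
ONEAPI_DEVICE_SELECTOR=cuda:gpu   python -c "import dpnp; print(dpnp.ones(3).device)"
ONEAPI_DEVICE_SELECTOR=opencl:cpu python -c "import dpnp; print(dpnp.ones(3).device)"
```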

  • Have you provided a meaningful PR description?
  • Have you added a test, reproducer or referred to issue with a reproducer?
  • Have you tested your changes locally for CPU and GPU devices?
  • Have you made sure that new changes do not introduce compiler warnings?
  • Have you checked performance impact of proposed changes?
  • If this PR is a work in progress, are you filing the PR as a draft?

@github-actions (Contributor)

View rendered docs @ https://intelpython.github.io/dpnp/pull/2229/index.html

@antonwolfy (Contributor) left a comment


@coveralls (Collaborator)

Coverage Status: 65.09%, remained the same when pulling 50cc35a on enable_intel_backend_cuda into d83ea3d on master.

@vlad-perevezentsev vlad-perevezentsev merged commit 7d491e8 into master Dec 12, 2024
@vlad-perevezentsev vlad-perevezentsev deleted the enable_intel_backend_cuda branch on December 12, 2024 at 13:21
