The ArrayFire Roadmap

In the quarterly Meetings of the Maintainers, we review the following roadmap, which sets a high-level direction for future work. The suggested version numbers and features are not binding and often change. This information is updated once per quarter.

ArrayFire 3.10 (Jul 2024)

  • Improved oneAPI performance
  • Continuous delivery
    • Support more distro packages
      • Based on Debian [2 hours]
      • CentOS 7/8 [16 hours]
      • Perhaps Amazon Linux AMI (or more cloud providers) if it gives users a better experience than the .sh installer files
    • Python CI [16 hours]
  • Debugging/Diagnostic functions [2 hours]
  • Random Shuffle (see the sketch after this list)
  • Stream function/Input streaming [80 hours]
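
Random Shuffle does not exist in the library yet. As a point of reference, here is a minimal sketch of how a shuffle can already be emulated with the current API by sorting random keys; the `shuffle` helper below is hypothetical, not the planned implementation.

```cpp
#include <arrayfire.h>

// Hypothetical helper (not part of the ArrayFire API): shuffle the rows of
// `in` along the first dimension by sorting one random key per row.
static af::array shuffle(const af::array& in) {
    af::array keys = af::randu(in.dims(0)); // one uniform key per row
    af::array sortedKeys, perm;
    af::sort(sortedKeys, perm, keys);       // perm is a random permutation
    return in(perm, af::span);              // reorder rows by the permutation
}

int main() {
    af::array a = af::iota(af::dim4(10));
    af_print(shuffle(a)); // prints the 10 values in a random order
    return 0;
}
```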

ArrayFire 3.11 (Jan 2025)

  • Data Frames
  • JIT CPU
  • Multi-GPU
  • Multiple backend interoperability
  • CPU Optimization/Parallelism
  • OpenCL targeting multiple devices (CPU-specific kernels)

ArrayFire 4.0 (Jul 2025)

  • Support only float/double for pow
  • ND arrays? Whether there is enough interest should be answered before implementation.
  • Drop unnecessary type support for existing API; for example, drop unused types from image processing.
  • Char -> bool types
  • Merge index and index_gen (see the sketch after this list)
  • Merge assign and assign_gen
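
To make the last two items concrete, the sketch below takes the same slice through the two parallel C-API indexing entry points that a merge would collapse (af_assign_seq and af_assign_gen mirror the same split). Error checking is omitted for brevity.

```cpp
#include <arrayfire.h>

int main() {
    af::array in = af::randu(4, 4);

    // Path 1: af_index only understands af_seq ranges.
    af_seq seqs[] = {af_make_seq(0, 0, 1), af_span}; // row 0, all columns
    af_array out1 = 0;
    af_index(&out1, in.get(), 2, seqs);

    // Path 2: af_index_gen takes af_index_t, whose union holds either an
    // af_seq or an af_array, so it subsumes Path 1.
    af_index_t idxs[2];
    idxs[0].idx.seq = af_make_seq(0, 0, 1); idxs[0].isSeq = true; idxs[0].isBatch = false;
    idxs[1].idx.seq = af_span;              idxs[1].isSeq = true; idxs[1].isBatch = false;
    af_array out2 = 0;
    af_index_gen(&out2, in.get(), 2, idxs);

    af_release_array(out1);
    af_release_array(out2);
    return 0;
}
```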

Brainstorming List

Items from the list below are mapped to the roadmap releases above as they are prioritized.

Features

No API Changes

  • Pan-ArrayFire support for accepting an existing array as output (see the sketch after this list)
  • Strings? String operations on af::array
  • Simpler graphics API
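
For the out-parameter item, here is a purely hypothetical signature sketch of what such an overload could look like; nothing like this exists in the current API, and the name `matmulInto` is illustrative only.

```cpp
#include <arrayfire.h>

// Hypothetical overload shape (not in the ArrayFire API): let the caller pass
// an existing array whose buffer would be reused when shape and type already
// match, instead of always allocating a fresh result.
void matmulInto(af::array& out, const af::array& lhs, const af::array& rhs) {
    out = af::matmul(lhs, rhs); // today this still allocates; shown only for the shape of the API
}
```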

API Changes

Maintenance

  • Add additional hardware to CI
  • Benchmarking for performance regressions
  • Add bf16 support
  • Interoperability between backends

Bug Fixes

  • [BUG] LAPACKE Error when using solve on OpenCL backend (labels: bug)
  • test_svd_dense_opencl fails on Ubuntu 20.04 LTS using NVIDIA OpenCL on AWS g3s.xlarge instance (labels: bug)
  • test_array_cpu hangs on Ubuntu 20.04 LTS using NVIDIA OpenCL (labels: bug)
  • Failed tests when compiling on AWS c6gd.2xlarge Instance (labels: bug)
  • [BUG] Unable to use forge/windows in arrayfire (MacOS Big Sur) (labels: bug)
  • [BUG] matmul nondeterministic result on CPU backend with gomp (labels: bug, dependency)
  • [Perf] OpenCL performance issue with sparse matmul (labels: bug, OpenCL, perf)
  • arrayfire warning - exceptions over dll boundary (labels: bug)
  • 3D scatter-plot has too small extent (labels: bug, dependency, graphics)
  • No AF_ERR_NO_MEM error when afopencl array creation fails (labels: bug, OpenCL)
  • Indexing an array using another array containing negative integers indexes wrong elements (labels: bug)
  • in v3.6.4 c2r FFT changes the value of the input (labels: bug, CUDA, dependency, known issue)
  • Batched gemv very slow (labels: bug, CUDA, perf)
  • Swapped coordinates in FAST detector (OpenCL) (labels: bug)
  • Matmul throws CL_MEM_OBJECT_ALLOCATION_FAILURE on OpenCL (labels: bug, OpenCL)
  • Sparse-dense matmul with AF_MAT_TRANS very slow in OpenCL on Nvidia card (labels: bug, OpenCL, perf)
  • Harris: OpenCL backend does not detect all features that CPU and CUDA does (labels: bug, CPU)
  • ORB: CPU backend does not detect all features that CUDA and OpenCL does (labels: bug, CPU)
  • OpenCL HelloWorld examples produce Segfault without display (labels: bug, installer, OpenCL)
  • canny_cuda test fails on Tegra TX2, AF 3.5.1 (labels: bug, CUDA)
  • Internal error:998 after calling scan in multiple tests within OpenCL application (labels: bug, OpenCL)
  • BUG: access to deleted memory in array_proxy index (labels: bug)
  • CUDA-Aware MPI with device<>()-pointer needs af::sync() (labels: bug, CUDA; see the workaround sketch after this list)
  • Errors when CUDA-MPS is active (labels: bug, CUDA)
  • OpenCL-OpenGL broken on Windows AMD GPUs (labels: bug, OpenCL, Windows)
  • gfor example doesn't work (labels: bug)
  • Test failure on AMD APU with Open Source drivers (labels: bug)
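
For the CUDA-Aware MPI item above, here is a minimal sketch of the workaround the report describes: synchronize ArrayFire's queue before handing the raw device pointer to MPI. It assumes a CUDA-aware MPI build, and the helper name `sendArray` is illustrative only.

```cpp
#include <arrayfire.h>
#include <mpi.h>

// Hypothetical helper: send a float af::array over CUDA-aware MPI.
void sendArray(const af::array& a, int dest, MPI_Comm comm) {
    float* dptr = a.device<float>(); // lock the buffer and get the device pointer
    af::sync();                      // ensure pending kernels writing to it have finished
    MPI_Send(dptr, static_cast<int>(a.elements()), MPI_FLOAT, dest, /*tag=*/0, comm);
    a.unlock();                      // hand the buffer back to ArrayFire
}
```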

Documentation

  • Internals: jit, memory manager, af::proxy
  • Better API documentation
  • New examples