tinyBLAS

tinyBLAS provides high-performance GEMM (general matrix–matrix multiplication) and GEMV (general matrix–vector multiplication) algorithms, designed specifically to accelerate matrix operations in large language model (LLM) inference workloads. The library features adaptive micro-kernels and CPU-aware optimizations for modern processors.
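For reference, GEMM computes C = alpha·A·B + beta·C. A minimal, unoptimized sketch of the operation tinyBLAS accelerates might look like the following (the function name and signature here are illustrative, not the library's actual API):

```cpp
#include <cstddef>

// Reference single-precision GEMM: C = alpha * A * B + beta * C.
// A is M x K, B is K x N, C is M x N, all row-major.
// Illustrative only -- real BLAS kernels add packing, tiling, and SIMD.
void gemm_ref(std::size_t M, std::size_t N, std::size_t K,
              float alpha, const float *A, const float *B,
              float beta, float *C) {
    for (std::size_t i = 0; i < M; ++i) {
        for (std::size_t j = 0; j < N; ++j) {
            float acc = 0.0f;
            for (std::size_t k = 0; k < K; ++k)
                acc += A[i * K + k] * B[k * N + j];
            C[i * N + j] = alpha * acc + beta * C[i * N + j];
        }
    }
}
```

This naive triple loop does O(M·N·K) work but makes poor use of cache and vector units, which is precisely the gap the library's micro-kernels target.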

Features

  • Optimized Micro-Kernels: Hand-tuned for SIMD instruction sets (SSE, AVX2, FMA), with automatic kernel selection based on CPU features.
  • Blocking and Packing: Efficient data packing and blocking strategies to maximize cache and memory bandwidth utilization.
  • Multi-threading: OpenMP-based parallelism for full multi-core CPU utilization.
  • Extensible Design: Easily add custom micro-kernels and support for multiple data types (float, double, etc.).
  • Reference Implementations & Benchmarks: Includes simple reference GEMM/GEMV and Google Benchmark-based performance tests for validation and comparison.
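The blocking and multi-threading ideas above can be sketched roughly as follows (block sizes and names are illustrative assumptions, not tinyBLAS's actual implementation; the `#pragma omp` line is a no-op unless compiled with OpenMP enabled):

```cpp
#include <algorithm>
#include <cstddef>

// Illustrative blocked GEMM (C += A * B, row-major): the loops are tiled
// so each block of A and B stays resident in cache while it is reused.
// Block sizes are placeholders; real libraries tune them per CPU.
constexpr std::size_t BM = 64, BN = 64, BK = 64;

void gemm_blocked(std::size_t M, std::size_t N, std::size_t K,
                  const float *A, const float *B, float *C) {
    // Parallelize over independent row blocks of C; the pragma is
    // ignored when the compiler is not invoked with OpenMP support.
    #pragma omp parallel for
    for (std::ptrdiff_t ib = 0; ib < (std::ptrdiff_t)M; ib += BM) {
        for (std::size_t kb = 0; kb < K; kb += BK) {
            for (std::size_t jb = 0; jb < N; jb += BN) {
                const std::size_t i_end = std::min<std::size_t>(ib + BM, M);
                const std::size_t k_end = std::min(kb + BK, K);
                const std::size_t j_end = std::min(jb + BN, N);
                for (std::size_t i = (std::size_t)ib; i < i_end; ++i)
                    for (std::size_t k = kb; k < k_end; ++k) {
                        const float a = A[i * K + k];  // reused across j
                        for (std::size_t j = jb; j < j_end; ++j)
                            C[i * N + j] += a * B[k * N + j];
                    }
            }
        }
    }
}
```

A production kernel would additionally pack each A and B block into contiguous buffers before the inner loops, so the micro-kernel reads strictly unit-stride data.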

About

Nano-optimized GEMM (general matrix multiplication) with adaptive micro-kernels and CPU-aware tuning for high performance
