Intel

Develop on Intel, from AI PC to Data Center

Speed up AI development using Intel®-optimized software on the latest Intel® Core™ Ultra processor, Intel® Xeon® processor, Intel® Gaudi® AI Accelerator, and GPU compute.

As a participant in the open source software community since 1989, Intel uses industry collaboration, co-engineering, and open source contributions to deliver a steady stream of code and optimizations that work across multiple platforms and use cases. We push our contributions upstream so developers always get the most current, optimized, and secure software.

Check out the following repositories to jumpstart your development work on Intel:

  • OPEA GenAI Examples - Examples such as ChatQnA which illustrate the pipeline capabilities of the Open Platform for Enterprise AI (OPEA) project.
  • AI PC Notebooks - A collection of notebooks designed to showcase generative AI workloads on AI PC
  • Open3D - A modern library for 3D data processing
  • Optimum Intel - Accelerate inference with Intel optimization tools (see the inference sketch after this list)
  • Optimum Habana - Easy and lightning-fast training of 🤗 Transformers on Habana Gaudi processors (HPU)
  • Intel Neural Compressor - SOTA low-bit LLM quantization (INT8/FP8/INT4/FP4/NF4) & sparsity; leading model compression techniques on TensorFlow, PyTorch, and ONNX Runtime
  • OpenVINO Notebooks - 📚 Jupyter notebook tutorials for OpenVINO™
  • SetFit - Efficient few-shot learning with Sentence Transformers
  • fastRAG - Efficient retrieval-augmented generation (RAG) framework

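The Optimum Intel entry above is the quickest of these to try from Python. The sketch below is a minimal, hedged example of OpenVINO-accelerated text generation through the `optimum.intel` API; it assumes `pip install optimum[openvino]`, and the `gpt2` checkpoint is only a placeholder, not a recommendation.

```python
# Minimal sketch of OpenVINO-backed inference via Optimum Intel.
# Assumes `pip install optimum[openvino]`; the model ID is a placeholder.
from transformers import AutoTokenizer
from optimum.intel import OVModelForCausalLM

model_id = "gpt2"  # placeholder checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_id)

# export=True converts the PyTorch checkpoint to OpenVINO IR on the fly,
# so generation below runs on the OpenVINO runtime (CPU by default).
model = OVModelForCausalLM.from_pretrained(model_id, export=True)

inputs = tokenizer("Intel open source software", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
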
DevHub Discord

Join us on the Intel DevHub Discord server to chat with other developers in channels like #dev-projects, #gaudi, and #large-language-models.

Pinned

  1. intel-extension-for-pytorch Public

    A Python package that extends the official PyTorch so workloads can easily obtain extra performance on Intel platforms (a minimal usage sketch follows this pinned list)

    Python · 2k stars · 308 forks

  2. neural-compressor Public

    SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime (a quantization sketch follows this pinned list)

    Python · 2.6k stars · 288 forks

  3. ai Public

    Explore our open source AI portfolio! Develop, train, and deploy your AI solutions with performance- and productivity-optimized tools from Intel.

    61 stars · 12 forks

  4. intel-one-mono Public

    Intel One Mono font repository

    9.8k stars · 321 forks

  5. rohd Public

    Rapid Open Hardware Development (ROHD) is a framework for describing and verifying hardware in the Dart programming language.

    Dart · 459 stars · 78 forks
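
The sketch below illustrates the usage pattern behind pinned item 1: wrap an existing eager-mode model with `ipex.optimize` and run inference as usual. The ResNet-50 network and bfloat16 dtype are placeholder choices, and available options vary between intel-extension-for-pytorch releases.

```python
# Minimal sketch of intel-extension-for-pytorch inference optimization.
# Assumes `pip install torch torchvision intel-extension-for-pytorch`;
# the untrained ResNet-50 and bfloat16 dtype are placeholder choices.
import torch
import torchvision.models as models
import intel_extension_for_pytorch as ipex

model = models.resnet50(weights=None).eval()  # placeholder network, no download

# ipex.optimize returns a module with Intel-specific operator fusion and
# memory-layout optimizations applied; dtype selects bfloat16 inference.
model = ipex.optimize(model, dtype=torch.bfloat16)

x = torch.randn(1, 3, 224, 224)
with torch.no_grad(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    out = model(x)
print(out.shape)  # torch.Size([1, 1000])
```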

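For pinned item 2, the following hedged sketch shows post-training quantization with Intel Neural Compressor using the 2.x-style `quantization.fit` entry point; the toy model and random calibration data are placeholders, and the exact API surface differs between major releases.

```python
# Hedged sketch of post-training static quantization with Intel Neural
# Compressor (2.x-style API); the toy model and random calibration data
# stand in for a real model and a representative dataset.
import torch
from torch.utils.data import DataLoader, TensorDataset
from neural_compressor import PostTrainingQuantConfig, quantization

fp32_model = torch.nn.Sequential(
    torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 4)
).eval()

# Tiny random calibration set used only to drive the calibration pass.
calib_data = TensorDataset(torch.randn(64, 16), torch.zeros(64, dtype=torch.long))
calib_loader = DataLoader(calib_data, batch_size=8)

# quantization.fit calibrates the model and returns an INT8-quantized copy.
q_model = quantization.fit(
    model=fp32_model,
    conf=PostTrainingQuantConfig(),  # defaults to post-training static INT8
    calib_dataloader=calib_loader,
)
q_model.save("./int8_model")
```
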
Repositories

Showing 10 of 1315 repositories
  • auto-round Public

    Advanced quantization toolkit for LLMs and VLMs. Supports WOQ, MXFP4, NVFP4, GGUF, and adaptive schemes, with seamless integration into Transformers, vLLM, SGLang, and llm-compressor.

    Python · 778 stars · Apache-2.0 license · 65 forks · 84 issues (3 need help) · 15 pull requests · Updated Dec 25, 2025
  • PerfSpect Public

    Open-source Linux performance suite for engineers—profiling and tuning workloads and system configurations.

    Go · 419 stars · BSD-3-Clause license · 52 forks · 11 issues · 1 pull request · Updated Dec 25, 2025
  • intel-xpu-backend-for-triton Public

    OpenAI Triton backend for Intel® GPUs (a minimal kernel sketch follows this repository list)

    MLIR · 222 stars · MIT license · 81 forks · 249 issues (2 need help) · 36 pull requests · Updated Dec 25, 2025
  • llvm Public

    Intel staging area for llvm.org contributions. Home for Intel LLVM-based projects.

    LLVM · 1,404 stars · 803 forks · 650 issues (19 need help) · 241 pull requests · Updated Dec 25, 2025
  • llm-scaler Public
    Shell · 111 stars · Apache-2.0 license · 15 forks · 14 issues · 3 pull requests · Updated Dec 25, 2025
  • torch-xpu-ops Public
    C++ · 68 stars · Apache-2.0 license · 66 forks · 393 issues · 106 pull requests · Updated Dec 25, 2025
  • sycl-tla Public Forked from NVIDIA/cutlass

    SYCL* Templates for Linear Algebra (SYCL*TLA) - a SYCL-based CUTLASS implementation for Intel GPUs

    C++ · 59 stars · BSD-3-Clause license · 1,607 forks · 11 issues · 54 pull requests · Updated Dec 25, 2025
  • neural-compressor Public

    SOTA low-bit LLM quantization (INT8/FP8/MXFP8/INT4/MXFP4/NVFP4) & sparsity; leading model compression techniques on PyTorch, TensorFlow, and ONNX Runtime

    Python · 2,552 stars · Apache-2.0 license · 288 forks · 0 issues · 9 pull requests · Updated Dec 25, 2025
  • vpl-gpu-rt Public
    C++ · 135 stars · MIT license · 99 forks · 43 issues · 18 pull requests · Updated Dec 25, 2025
  • media-driver Public

    Intel Graphics Media Driver to support hardware decode, encode and video processing.

    C · 1,167 stars · 373 forks · 142 issues · 103 pull requests · Updated Dec 25, 2025
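
To make the intel-xpu-backend-for-triton entry more concrete, here is a hedged sketch of a plain upstream Triton vector-add kernel. The kernel itself is generic; the only Intel-specific assumption is that, with the XPU backend and an XPU-enabled PyTorch installed, tensors placed on the "xpu" device are compiled for and executed on Intel GPUs.

```python
# Generic Triton vector-add kernel; running it on device="xpu" assumes the
# intel-xpu-backend-for-triton wheel and an XPU-enabled PyTorch are installed.
import torch
import triton
import triton.language as tl


@triton.jit
def add_kernel(x_ptr, y_ptr, out_ptr, n_elements, BLOCK: tl.constexpr):
    # Each program instance handles one BLOCK-sized chunk of the vectors.
    pid = tl.program_id(axis=0)
    offsets = pid * BLOCK + tl.arange(0, BLOCK)
    mask = offsets < n_elements  # guard the tail of the vector
    x = tl.load(x_ptr + offsets, mask=mask)
    y = tl.load(y_ptr + offsets, mask=mask)
    tl.store(out_ptr + offsets, x + y, mask=mask)


def vector_add(x: torch.Tensor, y: torch.Tensor) -> torch.Tensor:
    out = torch.empty_like(x)
    grid = (triton.cdiv(x.numel(), 1024),)
    add_kernel[grid](x, y, out, x.numel(), BLOCK=1024)
    return out


if hasattr(torch, "xpu") and torch.xpu.is_available():
    a = torch.randn(4096, device="xpu")
    b = torch.randn(4096, device="xpu")
    print(torch.allclose(vector_add(a, b), a + b))
else:
    print("No Intel XPU device detected; skipping the kernel launch.")
```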