Semver Benchmarks

A comprehensive benchmark suite comparing semantic versioning (semver) libraries for Node.js, specifically:

  • node-semver - The standard semver parser used by npm
  • @vltpkg/semver - High-performance semver library by the vlt team

Features

This benchmark suite tests the most common semver operations (a short usage sketch follows the list):

  • Parsing - Converting version strings to version objects
  • Comparison - Comparing two versions (compare, gt, lt, eq)
  • Satisfies - Checking if a version satisfies a range
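
The operations above map directly onto calls like the ones below. This is a minimal sketch using node-semver's documented API; the suite runs the same inputs through @vltpkg/semver, whose import style and export names are not shown here and should be checked against its own documentation.

// Parsing: version string -> version object (null on invalid input)
const semver = require('semver')

const v = semver.parse('1.2.3-beta.4')
console.log(v.major, v.prerelease)            // 1 [ 'beta', 4 ]

// Comparison: -1 / 0 / 1, plus boolean helpers gt/lt/eq
console.log(semver.compare('1.2.3', '1.3.0')) // -1
console.log(semver.gt('2.0.0', '1.9.9'))      // true

// Satisfies: does a version fall inside a range?
console.log(semver.satisfies('1.2.7', '>=1.2.7 <1.3.0')) // true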

Requirements

  • Node.js 22+ (required by @vltpkg/semver)
  • vlt package manager
  • hyperfine (optional, for detailed benchmarking)

Installation

# Install dependencies using vlt
vlt install

# Or use the package script
vlr install-deps

Usage

Run Tests

First, verify that both libraries produce consistent results:

vlr test
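
The contents of test/test.js are not reproduced here; conceptually, the compatibility test feeds the same version/range pairs to both libraries and asserts that they agree. A minimal sketch of that idea (the @vltpkg/semver import and export names are assumptions and are left commented out):

const nodeSemver = require('semver')
// const { satisfies } = require('@vltpkg/semver') // assumed named export

const cases = [
  ['1.2.3', '^1.0.0'],
  ['1.2.3', '~1.1.0'],
]

for (const [version, range] of cases) {
  const expected = nodeSemver.satisfies(version, range)
  // const actual = satisfies(version, range)
  // if (actual !== expected) throw new Error(`mismatch for ${version} / ${range}`)
  console.log(version, range, expected)
}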

Run Basic Benchmarks

# Run the comprehensive benchmark suite
vlr benchmark

# Run individual benchmark categories
vlr benchmark:parsing
vlr benchmark:comparison
vlr benchmark:satisfies
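
The benchmark files themselves are not shown in this README. As an illustration only (not the repo's actual harness), an ops/sec figure for one category can be measured with a plain timing loop, where one "op" is a single pass over the fixture list:

const semver = require('semver')

const versions = ['1.0.0', '2.1.3', '1.0.0-alpha.1', '1.2.3+build.5']
const iterations = 10_000

const start = process.hrtime.bigint()
for (let i = 0; i < iterations; i++) {
  for (const v of versions) semver.parse(v)
}
const elapsedSec = Number(process.hrtime.bigint() - start) / 1e9
console.log(`${Math.round(iterations / elapsedSec)} ops/sec`)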

Run Benchmarks with Chart Generation

The chart generation scripts create the performance tables and charts and write them into this README:

# Run benchmarks and generate charts/tables in README
vlr benchmark:generate

# Just collect benchmark data (saves to benchmark-results.json)
vlr benchmark:collect

# Generate charts from existing results
vlr benchmark:charts
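
The bar charts further down in this README are plain text. Here is a sketch of how such a chart can be rendered from collected data; the shape of benchmark-results.json shown in the comment is an assumption, not the repo's actual schema:

const fs = require('fs')

// assumed shape: { parsing: { 'node-semver': 131968, '@vltpkg/semver': 82468 }, ... }
const results = JSON.parse(fs.readFileSync('benchmark-results.json', 'utf8'))

for (const [operation, libs] of Object.entries(results)) {
  console.log(operation)
  const max = Math.max(...Object.values(libs))
  for (const [lib, opsPerSec] of Object.entries(libs)) {
    const bar = '█'.repeat(Math.round((opsPerSec / max) * 40))
    console.log(`  ${lib.padEnd(15)} │${bar} ${opsPerSec.toLocaleString()}`)
  }
}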

Run Hyperfine Benchmarks

For more detailed performance analysis with statistical data:

# Quick hyperfine benchmark
vlr benchmark:quick

# Full hyperfine benchmark suite (exports JSON and Markdown results)
vlr benchmark:hyperfine

# Run all benchmarks (tests + charts + hyperfine)
vlr benchmark:all

Results

The hyperfine benchmarks generate the following result files (a reading example follows the list):

  • results-parsing.json/md - Parsing operation results
  • results-comparison.json/md - Comparison operation results
  • results-satisfies.json/md - Satisfies operation results
  • results-combined.json/md - Full benchmark suite results
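
Assuming the .json files are hyperfine's standard --export-json output, each one contains a results array with per-command statistics (times in seconds), which can be read like this:

const fs = require('fs')

const { results } = JSON.parse(fs.readFileSync('results-parsing.json', 'utf8'))
for (const r of results) {
  console.log(`${r.command}: ${(r.mean * 1000).toFixed(2)} ms ± ${(r.stddev * 1000).toFixed(2)} ms`)
}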

Expected Performance

Based on the vlt team's documentation, @vltpkg/semver should show:

  • 40-50% faster at parsing versions
  • 15-20% faster at parsing ranges
  • 60-70% faster at testing versions against ranges

🏆 Summary

@vltpkg/semver wins overall, taking 2 of 3 categories.

Performance Highlights:

  • Parsing: node-semver is 60% faster
  • Comparison: @vltpkg/semver is 84% faster
  • Satisfies: @vltpkg/semver is 10% faster

📊 Performance Results

Operation     node-semver         @vltpkg/semver      Winner               Improvement
Parsing       131,968 ops/sec     82,468 ops/sec      🏆 node-semver       60%
Comparison    122,921 ops/sec     225,656 ops/sec     🏆 @vltpkg/semver    84%
Satisfies     17,860 ops/sec      19,685 ops/sec      🏆 @vltpkg/semver    10%
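
The Improvement column appears to be the winner's throughput relative to the loser's, i.e. (winner / loser - 1) × 100, which reproduces the figures above:

const improvement = (winner, loser) => Math.round((winner / loser - 1) * 100)
console.log(improvement(225_656, 122_921)) // 84  (comparison)
console.log(improvement(131_968, 82_468))  // 60  (parsing)
console.log(improvement(19_685, 17_860))   // 10  (satisfies)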

📈 Performance Comparison Chart

parsing      │
node-semver  │████████████████████████████████████████ 131,968
@vltpkg      │█████████████████████████ 82,468
             │
comparison   │
node-semver  │██████████████████████ 122,921
@vltpkg      │████████████████████████████████████████ 225,656
             │
satisfies    │
node-semver  │████████████████████████████████████ 17,860
@vltpkg      │████████████████████████████████████████ 19,685
             │

🔬 Detailed Results

Parsing Benchmark

Metric             node-semver    @vltpkg/semver
Operations/sec     131,968        82,468
Mean time (ms)     0.008          0.012
Margin of error    ±0ms           ±0ms
Relative margin    ±0.97%         ±0.41%
Sample runs        85             96

Comparison Benchmark

Metric             node-semver    @vltpkg/semver
Operations/sec     122,921        225,656
Mean time (ms)     0.008          0.004
Margin of error    ±0ms           ±0ms
Relative margin    ±0.11%         ±0.13%
Sample runs        98             97

Satisfies Benchmark

Metric             node-semver    @vltpkg/semver
Operations/sec     17,860         19,685
Mean time (ms)     0.056          0.051
Margin of error    ±0ms           ±0.001ms
Relative margin    ±0.23%         ±1.37%
Sample runs        101            95
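
As a sanity check, Operations/sec and Mean time (ms) describe the same measurement, so ops/sec ≈ 1000 / mean time; the small discrepancies come from the mean times being rounded in the tables:

const opsFromMean = meanMs => Math.round(1000 / meanMs)
console.log(opsFromMean(0.056)) // ≈ 17,857, vs the reported 17,860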

🖥️ System Information

  • Node.js: v24.1.0
  • Platform: darwin (arm64)
  • node-semver: v7.7.2
  • @vltpkg/semver: v0.0.0-22
  • Test Date: 2025-08-20, 2:10:09 p.m.

Test Data

The benchmarks use a comprehensive set of test cases (a small illustrative sample follows the lists), including:

Versions:

  • Standard versions (1.0.0, 2.1.3)
  • Prerelease versions (1.0.0-alpha.1, 1.2.3-beta.4)
  • Build metadata (1.2.3+build.5)
  • Complex prerelease strings
  • Edge cases

Ranges:

  • Caret ranges (^1.0.0)
  • Tilde ranges (~1.2.3)
  • Comparison ranges (>=1.2.7 <1.3.0)
  • Hyphen ranges (1.2.3 - 2.3.4)
  • Complex logical operators
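
The fixtures below are illustrative only; the actual test data lives in the benchmark files and is larger than this sample.

const versions = [
  '1.0.0', '2.1.3',                 // standard
  '1.0.0-alpha.1', '1.2.3-beta.4',  // prerelease
  '1.2.3+build.5',                  // build metadata
]

const ranges = [
  '^1.0.0',            // caret
  '~1.2.3',            // tilde
  '>=1.2.7 <1.3.0',    // comparison
  '1.2.3 - 2.3.4',     // hyphen
  '^1.0.0 || >=2.5.0', // logical OR
]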

Project Structure

semver-benchmarks/
├── benchmarks/
│   ├── index.js          # Main benchmark suite
│   ├── parsing.js        # Parsing benchmarks
│   ├── comparison.js     # Comparison benchmarks
│   └── satisfies.js      # Satisfies benchmarks
├── test/
│   └── test.js          # Compatibility tests
├── run-hyperfine.sh     # Hyperfine benchmark script
├── package.json         # Project configuration
└── README.md           # This file

Contributing

Feel free to add more test cases, additional libraries, or improve the benchmark methodology.
