hyperdata-clients


Yet another Node API client library for interacting with AI providers through a common interface.

I wanted my own so I'd know how it works.

npm run ask mistral 'In brief, how many AGIs will it take to change a lightbulb?'
...
Using API: mistral
Model: default
Prompt: In brief, how many AGIs will it take to change a lightbulb?
Using mistral key from: .env file

...it's uncertain how many would be needed to change a lightbulb...
npm run ask groq 'What is your name, and which model are you?'
...
Using API: groq
Model: default
Prompt: What is your name, and which model are you?

I'm an artificial intelligence model known as Llama. Llama stands for "Large Language Model Meta AI."

import { OpenAI, Claude, KeyManager } from 'hyperdata-clients';

Status: 2025-06-04 doc fixes, barrel file, version bump.

Working for me against:

  • Ollama (local)
  • Mistral (free & speedy, needs API key)
  • Groq (fast inference, needs API key)
  • OpenAI (requires $s and API key)

Various other clients are sketched out and will likely need tweaking.

Only tested on recent Ubuntu.

There's a very basic CLI for checking things (see below), plus runnable hardcoded examples, e.g.:

node examples/MistralMinimal.js
node examples/GroqMinimal.js

Usage
import { createClient, createEmbeddingClient } from 'hyperdata-clients';

// For API clients
const openAIClient = await createClient('openai', { apiKey: 'your-api-key' });

// For embedding clients
const nomicClient = await createEmbeddingClient('nomic', { apiKey: 'your-nomic-key' });
const ollamaEmbedding = await createEmbeddingClient('ollama');

// For MCP clients (untested)
const mcpClient = await createClient('mcp', { /* mcp config */ });
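The factory calls above can be sketched as a registry mapping provider names to client classes that share one interface. This is purely illustrative code, not the library's actual implementation; the class names and the `chat()` method here are assumptions for the sake of the sketch.

```javascript
// Illustrative sketch of a provider-factory pattern (NOT the library's
// real source). Each client class shares the same base interface, and
// the factory selects a class by provider name.
class BaseClient {
  constructor(config = {}) { this.config = config; }
  async chat(prompt) { throw new Error('not implemented'); }
}

class OllamaClient extends BaseClient {
  async chat(prompt) { return `[ollama] would answer: ${prompt}`; }
}

class MistralClient extends BaseClient {
  async chat(prompt) { return `[mistral] would answer: ${prompt}`; }
}

const registry = { ollama: OllamaClient, mistral: MistralClient };

async function createClientSketch(provider, config = {}) {
  const Client = registry[provider];
  if (!Client) throw new Error(`Unknown provider: ${provider}`);
  return new Client(config);
}
```

The point of the pattern is that calling code never branches on the provider; swapping `'mistral'` for `'ollama'` changes nothing downstream.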

Features

  • Support for multiple AI providers
  • Dedicated embedding model support
  • Environment-based configuration
  • Secure API key management
  • Consistent interface across providers
  • Type definitions included
  • Extensive test coverage
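The environment-based key handling behaves roughly like the sketch below: an explicit `apiKey` option wins, then a provider-specific environment variable, then a parsed `.env` file. This is my assumed precedence for illustration, not the real `KeyManager` internals; the `resolveApiKey` function and `PROVIDER_API_KEY` naming are hypothetical.

```javascript
// Sketch of environment-based API key resolution (illustrative only --
// the library's actual KeyManager may differ). Precedence: explicit
// option, then PROVIDER_API_KEY in the environment, then a .env file.
function resolveApiKey(provider, options = {}, env = process.env, dotEnv = {}) {
  if (options.apiKey) return { key: options.apiKey, source: 'options' };

  const varName = `${provider.toUpperCase()}_API_KEY`;
  if (env[varName]) return { key: env[varName], source: 'environment' };
  if (dotEnv[varName]) return { key: dotEnv[varName], source: '.env file' };

  throw new Error(`No API key found for ${provider} (looked for ${varName})`);
}

// Mirrors the CLI output above: "Using mistral key from: .env file"
const resolved = resolveApiKey('mistral', {}, {}, { MISTRAL_API_KEY: 'sk-...' });
console.log(`Using mistral key from: ${resolved.source}`);
```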

Chat Providers

  • Ollama
  • OpenAI
  • Claude
  • Mistral
  • Groq
  • Perplexity
  • HuggingFace

Embedding Providers

  • Nomic (via Atlas API)
  • Ollama (local embeddings)

Installation

Prerequisites: a recent Node.js

git clone https://github.com/danja/hyperdata-clients.git
cd hyperdata-clients
npm install

CLI

Really minimal, for testing purposes:

# Basic usage
npm run ask [provider] [options] "your prompt"

# or more directly
node examples/minimal.js [provider] [options] "your prompt"

# Mistral
npm run ask mistral --model 'open-codestral-mamba' 'tell me about yourself'

# Groq (fast inference)
npm run ask groq --model 'llama-3.1-8b-instant' 'explain quantum computing'

# Example with Ollama running locally; the model defaults to qwen2:1.5b
npm run ask ollama 'how are you?'

# requires an API key
node examples/minimal.js openai 'what are you?'

Embedding Models

The library provides dedicated support for text embeddings through specialized embedding clients:

import { createEmbeddingClient } from 'hyperdata-clients';

// Using Nomic Atlas API (requires NOMIC_API_KEY)
const nomicClient = await createEmbeddingClient('nomic');
const embeddings = await nomicClient.embed([
    'The quick brown fox jumps over the lazy dog',
    'Artificial intelligence is transforming technology'
]);

// Using local Ollama (requires Ollama running with nomic-embed-text-v1.5)
const ollamaClient = await createEmbeddingClient('ollama');
const singleEmbedding = await ollamaClient.embedSingle('Hello world');

// Example output: embeddings are arrays of numbers
console.log(`Generated ${embeddings.length} embeddings`);
console.log(`Each embedding has ${embeddings[0].length} dimensions`);

Embedding Example

# Run the embedding demo
node examples/embedding.js

The embedding clients support:

  • Batch processing: Embed multiple texts at once
  • Single text convenience: embedSingle() method for individual texts
  • Model selection: Custom model via options
  • Error handling: Consistent error handling across providers
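Since embeddings come back as plain number arrays (as the `embeddings[0].length` check above shows), comparing texts afterwards is straightforward. A minimal cosine-similarity helper, independent of the library:

```javascript
// Cosine similarity between two embedding vectors (plain number
// arrays, as returned by embed() / embedSingle()). Returns a value
// in [-1, 1]; higher means more semantically similar texts.
function cosineSimilarity(a, b) {
  if (a.length !== b.length) throw new Error('dimension mismatch');
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Identical vectors score 1, orthogonal vectors score 0.
console.log(cosineSimilarity([1, 0], [1, 0])); // 1
console.log(cosineSimilarity([1, 0], [0, 1])); // 0
```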

Architecture

![Architecture diagram](docs/images/structure.png)

Documentation

Comprehensive API documentation is available in the docs directory. To generate or update the documentation:

# Install dependencies (if not already installed)
npm install

# Generate documentation
npm run docs

# The documentation will be available in docs/jsdoc/index.html

The documentation includes:

  • API reference for all components
  • Getting started guide
  • Code examples
  • Configuration options

Testing

The project includes an extensive test suite to ensure reliability and compatibility across different providers. To run the tests:

# Run all tests
npm test

# Run tests with coverage report
npm run coverage

# Run tests in watch mode during development
npm run test:ui

Test Coverage

Test coverage reports are generated in the coverage directory after running the coverage command. This includes:

  • Line coverage
  • Function coverage
  • Branch coverage

Contributing

Contributions are welcome! Please ensure all tests pass and add new tests for any new features or bug fixes.

MIT License
