Welcome to vishwa.ai (formerly xpuls.ai) 👋

vishwa-ml-sdk

Twitter Follow Discord

PyPI version GitHub version

Roadmap 🚀

Framework          Status
Langchain          Supported
LlamaIndex         Planned
PyTorch            Planned
SKLearn            Planned
Transformers       Planned
Stable Diffusion   Next

💡 If support for any framework or feature would be useful to you, please feel free to reach out to us via Discord or GitHub Discussions.

🔗 Installation

  1. Install from PyPI
pip install vishwa-ml-sdk
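
After installing, you can sanity-check the setup from a Python shell. The short sketch below is only an assumption-based check: it assumes the distribution name matches the pip package above and that it exposes the `vishwa` module used in the usage example that follows.

import vishwa  # should import cleanly after `pip install vishwa-ml-sdk`
from importlib.metadata import version

print(version("vishwa-ml-sdk"))  # prints the installed SDK version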

🧩 Usage Example

from vishwa.mlmonitor.langchain.instrument import LangchainTelemetry
import os
import vishwa
from vishwa.prompt_hub import PromptClient

# Enable this for advanced tracking with our vishwa-ai platform
vishwa.host_url = "https://api.vishwa.ai"
vishwa.api_key = "********************"  # Get from https://platform.vishwa.ai
vishwa.adv_tracing_enabled = "true"  # Enables automated insights and log tracing via the vishwa.ai platform

# Default labels that will be added to all captured metrics
default_labels = {"service": "ml-project-service", "k8s_cluster": "app0", "namespace": "dev", "agent_name": "fallback_value"}

# Enable the auto-telemetry
LangchainTelemetry(default_labels=default_labels).auto_instrument()

prompt_client = PromptClient(
    prompt_id="clrfm4v70jnlb1kph240",  # Get prompt_id from the platform
    environment_name="dev"  # Deployed environment name
)

# [Optional] Override labels for the scope of the decorated function
# (useful if you have multiple scopes where you need to override the default label values).
# TelemetryOverrideLabels and TagToProject are decorators shipped with the SDK;
# import them from the module that matches your installed SDK version.
@TelemetryOverrideLabels(agent_name="chat_agent_alpha")
@TagToProject(project_slug="defaultoPIt9USSR")  # Get the project slug from the platform
def get_response_using_agent_alpha(prompt, query):
    agent = initialize_agent(llm=chat_model,
                             verbose=True,
                             agent=CONVERSATIONAL_REACT_DESCRIPTION,
                             memory=memory)

    data = prompt_client.get_prompt({"variable-1": "I'm the first variable"})  # Substitute any variables in the prompt

    res = agent.run(data)  # Pass the entire `XPPrompt` object to the run or invoke method
    return res
ℹ️ Complete Usage Guides

🧾 License

This project is licensed under the Apache License 2.0. See the LICENSE file for more details.

📢 Contributing

We welcome contributions to vishwa-ml-sdk! If you're interested in contributing, feel free to open a pull request.

If you encounter any issues or have feature requests, please file an issue on our GitHub repository.

💬 Get in touch

👉 Join our Discord community!

🐦 Follow the latest from vishwa.ai team on Twitter @vishwa_ai

📮 Write to us at hello@vishwa.ai
