
Interactive Tools

Version 2.3.1 Features

All examples and tools on this page demonstrate features available in RAG Pipeline Utils v2.3.1. The code examples use the stable v2.3.1 API and are ready to use in production.

Welcome to Interactive Tools

These interactive tools give you hands-on experience with RAG Pipeline Utils: edit live code, generate production-ready configurations, and estimate performance and costs as you build and optimize your RAG applications.

Playground Tour Available

Look for the ? help icons throughout the tools for detailed explanations. If you see an INTERACTIVE badge, you can modify and interact with that component!


Code Playground

INTERACTIVE: You can modify and interact with this component.

Try RAG Pipeline Utils directly in your browser with live code examples. Modify the code and run it in StackBlitz.

What you can do:

  • Edit code examples in real-time
  • Switch between different plugin patterns
  • Open examples in StackBlitz to run them
  • Copy code to use in your projects

Interactive Code Playground

Explore RAG Pipeline Utils with live examples in the embedded index.js editor.
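As a rough sketch of what such an index.js starter looks like, the example below wires stub plugins into the createRagPipeline API used throughout this page. The stub return values are placeholders so the example runs without API keys; they are assumptions, not the playground's actual starter code.

// index.js sketch: stub plugins so the example runs without API keys.
const { createRagPipeline } = require('@devilsdev/rag-pipeline-utils');

const embedder = {
  // Placeholder: returns a fixed-size fake vector instead of a real embedding
  embed: async (text) => new Array(8).fill(text.length)
};
const retriever = {
  // Placeholder: ignores the query vector and returns one canned chunk
  retrieve: async ({ queryVector, topK }) => [{ text: 'Example document chunk', score: 0.92 }]
};
const llm = {
  // Placeholder: echoes the question instead of calling a real model
  generate: async (query, context) => `Answer to "${query}" based on ${context.length} chunk(s)`
};

const pipeline = createRagPipeline({ embedder, retriever, llm });

pipeline.run({ query: 'What is RAG Pipeline Utils?', options: { topK: 3 } }).then(console.log);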

Try it yourself:

  1. Modify the code above to experiment
  2. Click "Open in StackBlitz" to run in a live environment
  3. Or copy the code and run locally with npm

Configuration Generator

INTERACTIVE: You can modify and interact with this component.

Build your RAG pipeline configuration step-by-step with our interactive wizard. Select your components and generate production-ready code.

What you can do:

  • Choose embedder, retriever, and LLM providers
  • Configure advanced options like caching and timeouts
  • Preview generated code in real-time
  • Copy complete configuration code

Pipeline Configuration Generator

Build your RAG pipeline configuration step-by-step.

  1. Embedder
  2. Retriever
  3. LLM
  4. Advanced

Select Embedder

Choose how to convert text into vector embeddings:

  • OpenAI: text-embedding-3-small model
  • Cohere: Cohere embedding models
  • HuggingFace: open-source models
  • Custom: bring your own embedder

Generated Configuration:

const { createRagPipeline } = require('@devilsdev/rag-pipeline-utils');
const { Pinecone } = require('@pinecone-database/pinecone');

// Embedder Plugin
class OpenAIEmbedder {
  constructor(options) {
    this.apiKey = options.apiKey;
    this.model = options.model || 'text-embedding-3-small';
  }
  // Call the OpenAI embeddings endpoint and return the embedding vector
  async embed(text) {
    const response = await fetch('https://api.openai.com/v1/embeddings', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({ input: text, model: this.model })
    });
    const data = await response.json();
    return data.data[0].embedding;
  }
}
const embedder = new OpenAIEmbedder({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'text-embedding-3-small'
});

// Retriever Plugin
class PineconeRetriever {
  constructor(options) {
    this.client = new Pinecone({ apiKey: options.apiKey });
    this.indexName = options.indexName;
    this.topK = options.topK || 5;
  }
  // Query the Pinecone index for the nearest neighbors of the query vector
  async retrieve({ queryVector, topK = this.topK }) {
    const index = this.client.index(this.indexName);
    const results = await index.query({ vector: queryVector, topK });
    return results.matches;
  }
}
const retriever = new PineconeRetriever({
  apiKey: process.env.PINECONE_API_KEY,
  indexName: 'docs',
  topK: 5
});

// LLM Plugin
class OpenAILLM {
  constructor(options) {
    this.apiKey = options.apiKey;
    this.model = options.model || 'gpt-3.5-turbo';
    this.temperature = options.temperature ?? 0.7; // ?? so an explicit 0 is kept
  }
  // Build a prompt from the retrieved chunks and call the chat completions API
  async generate(query, context, options) {
    const prompt = `Context: ${JSON.stringify(context)}\n\nQuestion: ${query}`;
    const response = await fetch('https://api.openai.com/v1/chat/completions', {
      method: 'POST',
      headers: {
        'Authorization': `Bearer ${this.apiKey}`,
        'Content-Type': 'application/json'
      },
      body: JSON.stringify({
        model: this.model,
        messages: [{ role: 'user', content: prompt }],
        temperature: this.temperature
      })
    });
    const data = await response.json();
    return data.choices[0].message.content;
  }
}
const llm = new OpenAILLM({
  apiKey: process.env.OPENAI_API_KEY,
  model: 'gpt-3.5-turbo',
  temperature: 0.7
});

// Create Pipeline with Custom Plugins
const pipeline = createRagPipeline({
  embedder,
  retriever,
  llm
});

// Use the Pipeline (wrapped in an async function: CommonJS has no top-level await)
async function main() {
  const result = await pipeline.run({
    query: 'Your question here',
    options: { topK: 5, timeout: 30000 }
  });
  console.log(result);
}

main().catch(console.error);

Performance Calculator

INTERACTIVE: You can modify and interact with this component.

Estimate throughput, latency, and costs for your RAG pipeline based on your configuration. Adjust parameters to see real-time impact on performance and costs.

What you can do:

  • Input your expected query volume
  • Select your component choices
  • Adjust caching and optimization settings
  • See estimated latency (P50, P95, P99)
  • Calculate monthly costs
  • Get optimization recommendations


The calculator panel pairs your Configuration choices with live estimates:

Estimated Performance

  • Latency (P50, P95, P99)
  • Throughput (queries per second)

Cost Analysis

  • Embedding cost
  • LLM cost
  • Total monthly cost
  • Cost per 1K queries and cost per query
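To see how such estimates combine, here is a back-of-the-envelope version of the cost math. Every token count and price below is an illustrative assumption rather than a real provider rate, and estimateMonthlyCost is a hypothetical helper, not part of the library:

// Back-of-the-envelope RAG cost estimate. All token counts and prices
// below are illustrative assumptions, not real provider rates.
function estimateMonthlyCost({
  queriesPerMonth,
  tokensPerQuery = 1500,      // assumed prompt + retrieved-context tokens per query
  completionTokens = 300,     // assumed tokens generated per answer
  embedPricePer1K = 0.00002,  // assumed $ per 1K embedding tokens
  inputPricePer1K = 0.0005,   // assumed $ per 1K LLM input tokens
  outputPricePer1K = 0.0015,  // assumed $ per 1K LLM output tokens
  cacheHitRate = 0            // fraction of queries whose embedding is cached
}) {
  const embeddedQueries = queriesPerMonth * (1 - cacheHitRate);
  const embeddingCost = (embeddedQueries * tokensPerQuery / 1000) * embedPricePer1K;
  const llmCost =
    (queriesPerMonth * tokensPerQuery / 1000) * inputPricePer1K +
    (queriesPerMonth * completionTokens / 1000) * outputPricePer1K;
  const total = embeddingCost + llmCost;
  return { embeddingCost, llmCost, total, costPer1KQueries: (total / queriesPerMonth) * 1000 };
}

// Example: 100K queries/month with a 50% embedding cache hit rate
console.log(estimateMonthlyCost({ queriesPerMonth: 100000, cacheHitRate: 0.5 }));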

Optimization Recommendations

  • Enable caching to reduce embedding costs by up to 90% for repeated queries
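To make that caching recommendation concrete, here is a minimal sketch of an in-memory cache wrapped around any embedder that implements the embed(text) interface shown in the generated configuration above. CachedEmbedder is a hypothetical helper; a production cache would bound its size or use an external store.

// Hypothetical in-memory cache around an embedder plugin (embed(text) interface).
class CachedEmbedder {
  constructor(embedder) {
    this.embedder = embedder;
    this.cache = new Map(); // text -> embedding vector
  }
  async embed(text) {
    if (this.cache.has(text)) return this.cache.get(text); // cache hit: no API call
    const vector = await this.embedder.embed(text);
    this.cache.set(text, vector);
    return vector;
  }
}

// Drop-in replacement when creating the pipeline:
// const pipeline = createRagPipeline({ embedder: new CachedEmbedder(embedder), retriever, llm });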

Educational Mode

All interactive tools include built-in help:

  • Help Icons (?): Hover over a help icon to see a detailed explanation
  • Interactive Badges: Visual indicators show what's interactive vs static
  • Recommendations: Tools provide context-aware optimization suggestions
  • Live Preview: See changes immediately as you adjust settings


Need Help?

If you have questions about these tools or need assistance, check our FAQ or open a discussion.