Tutorials

The following tutorials offer simple walk-throughs for common Stained Glass Transform Proxy use-cases.

  • Mistral Inference with Stained Glass Transform (SGT) Proxy and LLM API


    This notebook demonstrates various use-cases of inference from a Stained Glass Transform LLM API instance running a Mistral base model via an OpenAI Chat Completions API-compatible...

    View Notebook
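Because the proxy exposes an OpenAI Chat Completions-compatible interface, a client-side request looks like a standard Chat Completions call. The sketch below builds such a request body; the base URL and model name are placeholders, not values from the notebook:

```python
import json

# Placeholder endpoint and model name -- substitute your deployment's values.
PROXY_BASE_URL = "http://localhost:8600/v1"
MODEL = "mistralai/Mistral-7B-Instruct-v0.2"

def build_chat_request(user_message: str) -> dict:
    """Build an OpenAI Chat Completions-compatible request body.

    The proxy transforms the prompt before forwarding it upstream, so the
    client-side payload is identical to a standard Chat Completions call.
    """
    return {
        "model": MODEL,
        "messages": [{"role": "user", "content": user_message}],
        "max_tokens": 128,
    }

payload = build_chat_request("Summarize the quarterly report.")
print(json.dumps(payload, indent=2))
# Send with any OpenAI-compatible client, e.g.:
#   openai.OpenAI(base_url=PROXY_BASE_URL, api_key="...").chat.completions.create(**payload)
```

Since only the base URL changes, existing OpenAI-client code can usually be pointed at the proxy without other modifications.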

  • Private RAG Chatbot with Stained Glass Transform Proxy and Langchain


    This notebook demonstrates how to build a private chatbot using a Retrieval-Augmented Generation (RAG) architecture with Stained Glass Transform Proxy, Langchain, Qdrant, and Gradio.

    View Notebook
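The core RAG flow the notebook implements can be sketched without any framework: retrieve relevant context, then augment the prompt that is sent (through the proxy) to the model. The toy retriever below is illustrative only; the notebook uses Qdrant as the vector store and Langchain for orchestration:

```python
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Toy retriever: rank documents by word overlap with the query.

    A real deployment would query a vector store such as Qdrant instead.
    """
    q_words = set(query.lower().split())
    scored = sorted(
        documents,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_rag_prompt(query: str, context: list[str]) -> str:
    """Concatenate the retrieved context with the user question."""
    joined = "\n".join(f"- {c}" for c in context)
    return f"Answer using only this context:\n{joined}\n\nQuestion: {query}"

docs = [
    "The proxy transforms prompts before inference.",
    "Qdrant stores dense vector embeddings.",
    "Gradio provides the chat UI.",
]
query = "How are prompts protected?"
prompt = build_rag_prompt(query, retrieve(query, docs))
print(prompt)
```

The augmented prompt is then sent through the proxy like any other chat message, so retrieved documents receive the same transform protection as the user's question.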

  • Manually requesting Stained Glass Transform embeddings and sending to vLLM


    In normal operation (via the /v1/completions and /v1/chat/completions endpoints), Stained Glass Transform Proxy transforms a prompt and then forwards the request to an upstream inference server that accepts prompt embeddings.

    View Notebook
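Performed manually, that two-step flow is: request transformed embeddings from the proxy, then submit them to the vLLM server yourself. The sketch below only constructs the two request bodies; the endpoint URLs and field names are assumptions for illustration, so consult the proxy and vLLM documentation for the actual schema:

```python
import json

# Hypothetical URLs -- substitute your deployment's actual endpoints.
PROXY_TRANSFORM_URL = "http://localhost:8600/transform"       # placeholder
VLLM_COMPLETIONS_URL = "http://localhost:8000/v1/completions"  # placeholder

def build_transform_request(prompt: str) -> dict:
    """Body for step 1: ask the proxy for transformed prompt embeddings."""
    return {"prompt": prompt}

def build_vllm_request(prompt_embeds: list[list[float]], max_tokens: int = 64) -> dict:
    """Body for step 2: send the transformed embeddings to vLLM.

    Field names here are illustrative; this mirrors what the proxy does
    automatically on /v1/completions and /v1/chat/completions.
    """
    return {"prompt_embeds": prompt_embeds, "max_tokens": max_tokens}

transform_req = build_transform_request("confidential prompt text")
# Embedding values below are dummies standing in for the proxy's response.
vllm_req = build_vllm_request([[0.1, -0.2], [0.3, 0.4]])
print(json.dumps(vllm_req))
```

Splitting the steps this way is mainly useful for debugging or for pipelines that need to inspect, cache, or batch the transformed embeddings before inference.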