
Tutorials

The following tutorials offer simple walk-throughs of common Stained Glass Transform Proxy use cases.

  • Inference with Stained Glass Transform (SGT) Proxy and vLLM


    This notebook demonstrates several inference use cases against a vLLM instance running a Llama base model, using OpenAI Chat Completions API-compatible clients with Stained Glass Transform (SGT) Proxy; see the client sketch after this list.

    View Notebook

  • Private RAG Chatbot with Stained Glass Transform Proxy and Langchain


    This notebook demonstrates how to build a private chatbot with a Retrieval Augmented Generation (RAG) architecture using Stained Glass Transform Proxy, Langchain, Qdrant, and Gradio; a Langchain sketch follows this list.

    View Notebook

  • Directly requesting Stained Glass Transform embeddings and sending to vLLM


    In its normal operation (via the /v1/chat/completions endpoint), Stained Glass Transform Proxy transforms a prompt and then forwards the request to an upstream inference server that accepts prompt embeddings; this notebook shows how to request the transformed embeddings directly and send them to vLLM yourself. A sketch of the normal flow follows this list.

    View Notebook

  • Tool Calling with Stained Glass Proxy (SGP) and Output Protected vLLM


    Large Language Models (LLMs) increasingly exhibit agentic behavior through their ability to use external tools.

    View Notebook

  • Agentic AI with Stained Glass Proxy (SGP) and Output Protected vLLM


    This notebook demonstrates a Pydantic AI agentic workflow using Protopia's Stained Glass Transform (SGT) Proxy and Output Protected vLLM as the data privacy preservation layer; a Pydantic AI sketch follows this list.

    View Notebook
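
A minimal sketch of the client-side pattern from the first tutorial, assuming the openai Python package and an SGT Proxy reachable at a placeholder address; the URL, API key, and model name are illustrative assumptions, not values taken from the notebook.

```python
from openai import OpenAI

# Point a standard OpenAI-compatible client at the SGT Proxy instead of the
# model provider. The URL, key, and model name below are placeholders.
client = OpenAI(
    base_url="http://localhost:8600/v1",  # hypothetical SGT Proxy address
    api_key="EMPTY",                      # placeholder credential
)

response = client.chat.completions.create(
    model="meta-llama/Llama-3.1-8B-Instruct",  # example Llama base model
    messages=[{"role": "user", "content": "Summarize the attached contract terms."}],
)
print(response.choices[0].message.content)
```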
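
A minimal Langchain sketch for the RAG chatbot tutorial, assuming the langchain-openai package; the proxy URL, API key, model name, and the answer helper are illustrative assumptions, and any Langchain retriever (for example, one backed by Qdrant) can supply the context.

```python
from langchain_openai import ChatOpenAI

# Chat model routed through the SGT Proxy so prompts, including retrieved
# context, are transformed before reaching the inference server.
# The URL, key, and model name below are placeholders.
llm = ChatOpenAI(
    base_url="http://localhost:8600/v1",
    api_key="EMPTY",
    model="meta-llama/Llama-3.1-8B-Instruct",
)

def answer(question: str, retriever) -> str:
    """Toy RAG step: retrieve context (e.g. from Qdrant), then ask the model."""
    docs = retriever.invoke(question)
    context = "\n\n".join(doc.page_content for doc in docs)
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {question}"
    return llm.invoke(prompt).content
```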
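
For the direct-embeddings tutorial, a sketch of the normal flow it contrasts against: the client posts an ordinary Chat Completions request to the proxy's /v1/chat/completions endpoint, and the transform and forwarding to the upstream prompt-embedding server happen on the proxy side. The host, port, and model name are placeholders; the notebook's direct-embedding request path is not reproduced here.

```python
import requests

# Normal operation: send a plain-text prompt to the proxy's documented
# /v1/chat/completions endpoint. The proxy applies the Stained Glass
# Transform and forwards prompt embeddings upstream; the client never
# handles the embeddings itself. URL and model name are placeholders.
response = requests.post(
    "http://localhost:8600/v1/chat/completions",
    json={
        "model": "meta-llama/Llama-3.1-8B-Instruct",
        "messages": [{"role": "user", "content": "Summarize the patient intake notes."}],
    },
    timeout=60,
)
response.raise_for_status()
print(response.json()["choices"][0]["message"]["content"])
```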
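
A minimal Pydantic AI sketch for the agentic workflow tutorial, assuming a recent pydantic-ai release that exposes OpenAIModel and OpenAIProvider; the proxy URL, API key, model name, and prompt are illustrative assumptions.

```python
from pydantic_ai import Agent
from pydantic_ai.models.openai import OpenAIModel
from pydantic_ai.providers.openai import OpenAIProvider

# Treat the SGT Proxy as an OpenAI-compatible provider so every model call
# made by the agent is transformed before inference. Placeholder values below.
model = OpenAIModel(
    "meta-llama/Llama-3.1-8B-Instruct",
    provider=OpenAIProvider(base_url="http://localhost:8600/v1", api_key="EMPTY"),
)
agent = Agent(model, system_prompt="You are a helpful assistant.")

result = agent.run_sync("List three action items from the meeting notes.")
print(result.output)  # recent pydantic-ai; older releases exposed `.data`
```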