Tool Calling with Stained Glass Proxy (SGP) and Output Protected vLLM¶
Large Language Models (LLMs) increasingly exhibit agentic behavior through their ability to use external tools. Instruction-tuned LLMs can interleave natural language chat with structured tool calls, enabling capabilities like web search, retrieval, and code execution.
This notebook demonstrates:
- Creating a custom tool schema to provide in context to the target model hosted with Protopia's output-protected vLLM.
- Stained Glass Transform (SGT) embedding protection of the conversation messages and tools.
- Attempted reconstruction failure: given the SGT protection, a similarity search against the base model's embedding layer cannot recover the tool definitions and outputs from the protected embeddings.
- Preserved utility: the target model retains tool-calling capability despite embedding protection.
Flow Diagram¶
sequenceDiagram
autonumber
participant Client
participant Tokenizer as Chat Template & Tokenization
participant SGT as Stained Glass Transform
participant LLM as Output Protected vLLM
Client->>Tokenizer: Prompt + Tools Definitions (JSON)
Tokenizer->>SGT: Tokens from Template Formatted String
SGT->>LLM: SGT Protected Embeddings
LLM->>Client: Tool Calls (`tool_calls`)
Client->>Client: Execute Tool Call(s)
Note over Client: Run tools and append results to messages
Client->>Tokenizer: Messages with Tool Call Outputs
Tokenizer->>SGT: Tokens from Template Formatted String
SGT->>LLM: SGT Protected Embeddings
LLM->>Client: Final response
Pre-requisites¶
- A live instance of a vLLM (>=v0.9.1) OpenAI-Compatible Server.
- A live instance of SGT Proxy (please refer to the deployment instructions) with tool calling enabled.
Enabling Tool Calling on SGP
Tool parsers detect and extract function calls from the model outputs. Set SGP_TOOL_PARSER to a valid parser (e.g., llama3_json for Llama-3.1-8B-Instruct). Find available parsers with vllm serve -h | grep tool-call-parser.
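As a concrete illustration, the configuration might look like the following. This is a hedged sketch: the exact variable name and deployment mechanism should be taken from your SGP deployment documentation.

```shell
# Select the tool parser matching the target model's tool-call output format
# (llama3_json pairs with Llama-3.1-8B-Instruct).
export SGP_TOOL_PARSER=llama3_json

# List the tool-call parsers bundled with your vLLM install.
vllm serve -h | grep tool-call-parser
```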
Setup SGT Proxy Access¶
import json
import markdown
import openai
from IPython.display import Markdown
Configuration Required
Update these parameters for your specific setup:
- PROXY_URL: Your proxy server endpoint
- MODEL_NAME: The base model you want to test
- API_KEY: Your authentication key
# Set proxy access parameters.
PROXY_URL = "http://127.0.0.1:3306/v1"
MODEL_NAME = "meta-llama/Llama-3.1-8B-Instruct"
API_KEY = (
"<overwrite-with-your-api-key>" # or set to "" if no API key is needed
)
# Verify that the model is accessible through the proxy.
client = openai.OpenAI(base_url=PROXY_URL, api_key=API_KEY)
assert MODEL_NAME in [model.id for model in client.models.list()], (
    f"{MODEL_NAME=} was not found at {PROXY_URL}/models"
)
# Quick health check
response = client.chat.completions.create(
model=MODEL_NAME,
messages=[{"role": "user", "content": "Hello, world!"}],
max_tokens=10,
)
# Hint: if this fails, check your API_KEY; if no API key is needed, set API_KEY = "".
assert response.choices[0].message.content.strip() != "", (
"Empty response from model"
)
"✅ SGT Proxy is accessible and responding correctly."
'✅ SGT Proxy is accessible and responding correctly.'
Tool Definitions¶
The chat completion API's tools parameter accepts an array of tool definitions. This demo uses a custom JSON tool to simulate scenarios where tool information privacy matters. The model infers when to invoke tools based on conversation context, then responds with the selected tools and parameters.
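Before wiring this up, it helps to see the shape of the assistant message the API returns when the model decides to call a tool. The example below is illustrative (the `id` and argument values are hypothetical, not taken from a real response); the key detail is that `function.arguments` arrives as a JSON-encoded string that the client must decode.

```python
import json

# Illustrative shape of an assistant message containing a tool call,
# as returned by an OpenAI-compatible chat completions API.
assistant_message = {
    "role": "assistant",
    "content": None,  # No natural-language content when only tool calls are emitted.
    "tool_calls": [
        {
            "id": "call_abc123",  # Hypothetical tool call id.
            "type": "function",
            "function": {
                "name": "get_user",
                # Arguments arrive as a JSON-encoded string, not a dict.
                "arguments": '{"user_id": "user_123"}',
            },
        }
    ],
}

# The client decodes the arguments string before executing the tool.
args = json.loads(assistant_message["tool_calls"][0]["function"]["arguments"])
print(args["user_id"])  # -> user_123
```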
# Task
messages = [
{
"role": "system",
"content": "You are a helpful tax assistant that can retrieve user information to answer tax-related questions.",
},
{
"role": "user",
"content": "What are important tax recommendations for user with ID 'user_123' based on their financial information?",
},
]
# Tool (Function)
get_user_info = {
"type": "function",
"function": {
"name": "get_user",
"description": "Retrieve information about a user given their user ID.",
        "parameters": {
            "type": "object",
            "properties": {
                "user_id": {
                    "type": "string",
                    "description": "The unique identifier of the user to retrieve information for.",
                }
            },
            "required": ["user_id"],
        },
},
}
Our demo must execute the tool call generated by the model and return the results.
def get_user(user_id: str) -> dict:
"""Simulated function to get user information."""
# In a real implementation, this would query a database or external service.
return {
"user_id": user_id,
"name": "John Doe",
"age": 25,
"location": "New York, USA",
"income": "$60,000",
"filing_status": "Single",
}
response = client.chat.completions.create(
model=MODEL_NAME,
messages=messages,
tools=[get_user_info],
# when tool_choice='required' is set, the model is guaranteed to generate one or more tool calls.
tool_choice="required",
max_tokens=32,
)
assistant_message = response.choices[0].message
# Append assistant tool call message to the conversation history
messages.append(assistant_message)
if assistant_message.tool_calls:
for tool_call in assistant_message.tool_calls:
assert tool_call.type == "function"
name = tool_call.function.name
arguments = json.loads(tool_call.function.arguments)
print(f"🛠️ Tool {name=} call with arguments {arguments=}")
if name == "get_user":
user_info = get_user(arguments["user_id"])
# Append tool result to the conversation history
messages.append(
{
# ipython: A role introduced in Llama 3.1. Semantically, this role means "tool call output".
# This role is used to mark messages with the output of a tool call.
# https://www.llama.com/docs/model-cards-and-prompt-formats/llama3_1/#-supported-roles-
# both 'tool' and 'ipython' roles can be used to send function call results back to the model.
# https://github.com/vllm-project/vllm/blob/main/examples/tool_chat_template_llama3.1_json.jinja#L104
"role": "ipython",
"content": json.dumps(user_info),
"tool_call_id": tool_call.id,
}
)
# Get final response from the model after tool execution
final_response = (
client.chat.completions.create(
model=MODEL_NAME,
messages=messages,
max_tokens=1024,
)
.choices[0]
.message.content
)
display(Markdown(markdown.markdown(final_response)))
else:
print(f"No tool calls were made by the model, {assistant_message=}")
🛠️ Tool name='get_user' call with arguments arguments={'user_id': 'user_123'}
Based on the user's financial information, here are some important tax recommendations:
- Maximize 401(k) contributions: With an income of $60,000, John should consider contributing at least 10% to 15% of his income to his 401(k) plan to take advantage of the tax-deferred growth and potentially lower his taxable income.
- Take advantage of the standard deduction: As a single filer, John can claim the standard deduction of $12,950 for the tax year 2023. He should consider itemizing deductions only if his total itemized deductions exceed the standard deduction.
- Consider itemizing medical expenses: If John has significant medical expenses, he may be able to itemize them and claim a deduction. He should keep track of all medical expenses, including doctor visits, prescriptions, and hospital stays.
- Claim the Earned Income Tax Credit (EITC): As a single filer with a moderate income, John may be eligible for the EITC. He should check the IRS website or consult with a tax professional to determine if he qualifies.
- Contribute to a Roth IRA: John should consider contributing to a Roth IRA to save for retirement and potentially lower his taxable income. He can contribute up to $6,000 in 2023, or $7,000 if he is 50 or older.
- Keep track of business expenses: If John has a side hustle or freelances, he should keep track of business expenses to deduct on his tax return. This can include expenses like home office deductions, travel expenses, and equipment costs.
- Consult with a tax professional: As a single filer with a moderate income, John's tax situation may be complex. He should consider consulting with a tax professional to ensure he is taking advantage of all available tax credits and deductions.
These are just a few tax recommendations based on John's financial information. It's always a good idea to consult with a tax professional to get personalized advice.
SGT Proxy's Protection Mechanisms¶
The /stainedglass endpoint offers insights into the SGT Proxy's protection
mechanisms by providing access to:
- Plain (un-transformed) LLM embeddings.
- Transformed LLM embeddings.
- Reconstructed text from protected embeddings.
- Obfuscation scores.
import requests
import torch
headers = {
"Content-Type": "application/json",
"Authorization": f"Bearer {API_KEY}",
}
# Prepare the messages for the /stainedglass request body (serialize pydantic message objects).
messages = [
msg.model_dump(mode="json") if hasattr(msg, "model_dump") else msg
for msg in messages
]
payload = {"messages": messages}
Plain Text Embeddings vs Transformed Embeddings¶
response = requests.post(
f"{PROXY_URL}/stainedglass",
headers=headers,
json=payload
| {
"return_plain_text_embeddings": True,
"return_transformed_embeddings": True,
"return_reconstructed_prompt": False,
},
stream=False,
timeout=30,
)
response.raise_for_status()
# Check the shapes: (sequence_length, hidden_size) for both plain and transformed embeddings.
(
torch.tensor(response.json()["plain_text_embeddings"]).shape,
torch.tensor(response.json()["transformed_embeddings"]).shape,
)
(torch.Size([85, 4096]), torch.Size([85, 4096]))
# these are plain text embeddings
torch.tensor(response.json()["plain_text_embeddings"])
tensor([[-4.4556e-03, 9.5367e-04, -6.5308e-03, ..., 1.1597e-02,
3.1128e-03, -5.7602e-04],
[ 3.6011e-03, 4.3869e-04, 1.1292e-03, ..., 2.1973e-03,
6.4087e-04, 8.2397e-03],
[-7.4768e-04, 2.0885e-04, -1.0071e-03, ..., -1.0986e-02,
-3.9673e-03, -1.3733e-04],
...,
[-1.1536e-02, 5.7068e-03, -8.0490e-04, ..., 9.6436e-03,
8.6594e-04, 3.2806e-03],
[ 2.0142e-03, 6.9275e-03, 1.1353e-02, ..., -8.7280e-03,
3.5553e-03, 8.1635e-04],
[ 5.9814e-03, 1.8234e-03, 7.2937e-03, ..., 1.3809e-03,
4.8876e-05, -8.4305e-04]])
# check plain text conversation
display(
Markdown(
markdown.markdown(
f"# Conversation Plain Text\n\n{''.join(response.json()['tokenized_plain_text'])}"
)
)
)
Conversation Plain Text
You are a helpful tax assistant that can retrieve user information to answer tax-related questions.What are important tax recommendations for user with ID 'user_123' based on their financial information?{\"user_id\": \"user_123\", \"name\": \"John Doe\", \"age\": 25, \"location\": \"New York, USA\", \"income\": \"$60,000\", \"filing_status\": \"Single\"}
# these are the SGT transformed embeddings
torch.tensor(response.json()["transformed_embeddings"])
tensor([[ 0.0073, 0.0288, 0.0454, ..., -0.0806, -0.0386, 0.0309],
[-0.0110, -0.0142, 0.0153, ..., -0.0315, 0.0011, -0.0432],
[ 0.0033, -0.0047, 0.0123, ..., 0.0471, 0.0042, 0.0078],
...,
[ 0.0398, 0.0215, 0.0270, ..., -0.0693, -0.0217, -0.0601],
[-0.0042, 0.0003, -0.0076, ..., -0.0302, 0.0320, -0.0020],
[ 0.0060, 0.0057, 0.0117, ..., -0.0106, 0.0051, -0.0245]])
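The reconstruction attempt in the next section runs server-side, but the attack it simulates is simple: for each protected embedding, look up the most similar row of the base model's embedding matrix and emit that token. The following is a toy sketch of that nearest-neighbor lookup on a synthetic 3-dimensional, 4-token vocabulary (pure Python, no real model weights); the vectors are invented for illustration.

```python
import math


def cosine(u: list[float], v: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)


def nearest_token(embedding: list[float], vocab_embeddings: list[list[float]]) -> int:
    """Return the vocab index whose embedding row is most similar to `embedding`."""
    return max(
        range(len(vocab_embeddings)),
        key=lambda i: cosine(embedding, vocab_embeddings[i]),
    )


# Synthetic embedding matrix for a 4-token vocabulary.
vocab = [
    [1.0, 0.0, 0.0],
    [0.0, 1.0, 0.0],
    [0.0, 0.0, 1.0],
    [0.7, 0.7, 0.0],
]

# A plain embedding lying close to token 1's row is trivially recovered.
print(nearest_token([0.1, 0.9, 0.05], vocab))  # -> 1
```

After SGT, each transformed embedding no longer lies near its source token's row, so this lookup lands on unrelated vocabulary entries, which is why the reconstructed text in the next section is gibberish.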
Reconstructed Prompt and Obfuscation Score¶
response = requests.post(
f"{PROXY_URL}/stainedglass",
headers=headers,
json=payload
| {
"return_reconstructed_prompt": True,
"return_obfuscation_score": True,
"return_transformed_embeddings": True,
},
stream=False,
timeout=30,
)
response.raise_for_status()
display(
Markdown(
markdown.markdown(
f"# Reconstructed Text\n\n{response.json()['reconstructed_prompt']}"
)
)
)
Reconstructed Text
rumpetalyamüştürãeste>();
,您alardan.** дизаerusformwebElementXџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ>();
">
useRalativeImagePath 갤로그 дозволя },
џџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџ -->
uyordu ।”
useRalativeImagePath';
_ComCallableWrappertalya підтримeştir nebezpečımlar ।”
useRalativeImagePathextracommentextracomment uvědom использовани uvědom_ComCallableWrapperџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџџiyesi zvlá ÜNİVERS uvědomselectorMethod množ Doe найкра zvlá дизаlarındanuyorduCppGuiduyordu найкра;
uvědom zvlá диза дизаuyordu перева найкра zvláãeste uvědomtalya дизаuyorduselectorMethodiyesi использовани ।”
talyamuştur_DIPSETTING jednoduchselectorMethod№№№№
Obfuscation Score
The obfuscation score is the fraction of plain text tokens that differ from the transformed tokens (reported on a 0-to-1 scale). A higher score indicates a higher level of obfuscation and data privacy.
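Computed this way, the score is just the fraction of aligned positions where the tokens differ. A minimal sketch (the token sequences are illustrative, not taken from the run above):

```python
def obfuscation_score(plain_tokens: list[str], reconstructed_tokens: list[str]) -> float:
    """Fraction of positions where the reconstructed token differs from the plain token."""
    assert len(plain_tokens) == len(reconstructed_tokens)
    differing = sum(p != r for p, r in zip(plain_tokens, reconstructed_tokens))
    return differing / len(plain_tokens)


# Illustrative sequences: 3 of 4 positions differ after transformation.
plain = ["What", "are", "important", "tax"]
reconstructed = ["rump", "are", "џџ", "zvlá"]
print(obfuscation_score(plain, reconstructed))  # -> 0.75
```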
display(
Markdown(
markdown.markdown(
f"# Obfuscation Score\n\n{round(response.json()['obfuscation_score'], 2)}"
)
)
)
Obfuscation Score
0.99