Inference with Mistral 7B¶
This notebook demonstrates using a pre-trained Stained Glass Transform to transform text inputs for a Mistral 7B model. Both raw and transformed inputs are passed into the Mistral model to compare the outputs.
For information on deploying the Stained Glass Transform for inference, see the Stained Glass Transform deployment tutorial.
Pre-requisites¶
- The Stained Glass Transform must be pre-trained and prepared as a StainedGlassTransformForText object. This object should be saved to a file, whose path can be set below.
- You must have the Mistral-7B-Instruct-v0.2 model available. Request access to the model from its model card on the HuggingFace Hub.
Load the Stained Glass Transform and the Base Mistral Model¶
In [1]:
from __future__ import annotations
In [ ]:
import torch
STAINED_GLASS_TRANSFORM_PATH = "stained-glass-transform-mistral-7b.pt"
# The model name could be either the HuggingFace model ID
# (mistralai/Mistral-7B-Instruct-v0.2) or the path to the model on the local
# file system.
MISTRAL_MODEL_NAME_OR_PATH = "mistralai/Mistral-7B-Instruct-v0.2"
DEVICE = torch.device("cuda" if torch.cuda.is_available() else "cpu")
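Before loading anything, it can be useful to confirm that a saved transform actually exists at the configured path. The snippet below is a minimal optional sketch, not part of the original notebook; it only assumes that STAINED_GLASS_TRANSFORM_PATH points at the file saved when the transform was prepared.

import pathlib

# Optional sanity check (sketch): fail early if the saved Stained Glass
# Transform is not where STAINED_GLASS_TRANSFORM_PATH says it should be.
if not pathlib.Path(STAINED_GLASS_TRANSFORM_PATH).is_file():
    raise FileNotFoundError(
        f"No Stained Glass Transform found at {STAINED_GLASS_TRANSFORM_PATH!r}; "
        "prepare and save a StainedGlassTransformForText first, or update the path."
    )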
In [3]:
import transformers

MISTRAL_MODEL = transformers.AutoModelForCausalLM.from_pretrained(
    MISTRAL_MODEL_NAME_OR_PATH
).eval()
MISTRAL_TOKENIZER = transformers.AutoTokenizer.from_pretrained(
    MISTRAL_MODEL_NAME_OR_PATH
)
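If GPU memory is limited, the base model can optionally be loaded in reduced precision. The cell below is a hedged variation on the loading cell above, not part of the original notebook; it only relies on the standard torch_dtype argument of from_pretrained and leaves the rest of the notebook unchanged.

# Optional variation (sketch): load the base model in float16 to roughly halve
# GPU memory usage. Everything else in the notebook stays the same.
MISTRAL_MODEL = transformers.AutoModelForCausalLM.from_pretrained(
    MISTRAL_MODEL_NAME_OR_PATH,
    torch_dtype=torch.float16,
).eval()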
In [5]:
from stainedglass_core import transform as sg_transform

MISTRAL_MODEL = MISTRAL_MODEL.to(DEVICE)
STAINED_GLASS_TRANSFORM = (
    sg_transform.StainedGlassTransformForText.from_pretrained(
        STAINED_GLASS_TRANSFORM_PATH
    ).eval()
)
Prepare helper functions¶
We will define a few helper functions to process the inputs and outputs of the model.
In [ ]:
from typing import Any

import torch


@torch.inference_mode()
def generate(
    *,
    input_ids: torch.Tensor | None = None,
    inputs_embeds: torch.Tensor | None = None,
    attention_mask: torch.Tensor | None = None,
) -> list[str]:
    """Generate text from the model given the input embeddings and attention
    mask.

    This function can be modified to adjust the generation parameters.

    Args:
        input_ids: The input ids for the model.
        inputs_embeds: The input embeddings for the model.
        attention_mask: The attention mask for the model.

    Returns:
        A list of strings containing the generated text(s).

    Raises:
        ValueError: If neither input_ids nor inputs_embeds are provided.
        ValueError: If both input_ids and inputs_embeds are provided.
    """
    if input_ids is None and inputs_embeds is None:
        raise ValueError("Either input_ids or inputs_embeds must be provided")
    if input_ids is not None and inputs_embeds is not None:
        raise ValueError(
            "Only one of input_ids or inputs_embeds should be provided"
        )
    if input_ids is not None:
        inputs_embeds = MISTRAL_MODEL.get_input_embeddings().forward(input_ids)
    if attention_mask is None:
        attention_mask = torch.ones(inputs_embeds.shape[:2], device=DEVICE)
    predicted_tokens = MISTRAL_MODEL.generate(
        inputs_embeds=inputs_embeds,
        attention_mask=attention_mask,
        eos_token_id=MISTRAL_TOKENIZER.eos_token_id,
        pad_token_id=MISTRAL_TOKENIZER.eos_token_id,
        renormalize_logits=True,
        max_new_tokens=256,
    )
    return MISTRAL_TOKENIZER.batch_decode(
        predicted_tokens, skip_special_tokens=True
    )


def tokenize_text(context: str, instruction: str) -> dict[str, Any]:
    """Tokenize the input text.

    This uses the wrapped tokenizer from the Stained Glass Transform to ensure
    that the input is tokenized exactly the same way as the transformed prompt.
    This could also be implemented using the tokenizer directly.

    Args:
        context: The context to append to the instruction.
        instruction: The instruction for the model.

    Returns:
        The tokenized text.
    """
    return STAINED_GLASS_TRANSFORM.noise_tokenizer.apply_chat_template(
        [
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": (instruction + " " + context).strip()},
        ]
    )
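As a quick illustration of how these helpers fit together (a sketch, not part of the notebook's main flow), a prompt can be tokenized with the transform's wrapped tokenizer and then passed to generate as raw, untransformed token ids:

# Sketch: tokenize a prompt and generate from the raw (untransformed) token ids.
example_tokenization = tokenize_text(
    context="", instruction="What is the capital of France?"
)
example_responses = generate(
    input_ids=example_tokenization["input_ids"].to(DEVICE)
)
print(example_responses[0])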
In [7]:
import datetime

import IPython.display
from typing_extensions import Self


class TimerContext:
    """Context manager to time a block of code using a `with` statement.

    Attributes:
        start: The start time.
        end: The end time (if the context manager has exited).
        duration: The duration of the block of code. If the context manager has
            not exited, this will be the duration up to the current time.

    Note:
        Because of the additional overhead of the context manager, this may not
        be the most accurate way to time a block of fast code. For a slow block
        of code, the overhead should be proportionally small.
    """

    start: datetime.datetime
    end: datetime.datetime | None

    def __enter__(self) -> Self:
        self.start = datetime.datetime.now()
        self.end = None
        return self

    def __exit__(self, *args: object, **kwargs: Any) -> None:
        self.end = datetime.datetime.now()

    @property
    def duration(self) -> datetime.timedelta:
        """The duration of the block of code."""
        return (self.end or datetime.datetime.now()) - self.start


@torch.inference_mode()
def generate_and_output_responses(
    context: str, instruction: str, num_lines: int = 6, width: int = 80
) -> None:
    """Generate responses to both raw and transformed prompts and print them
    along with metadata.

    Args:
        context: The context to append to the instruction.
        instruction: The instruction for the model.
        num_lines: The number of lines to print for each metadata field.
        width: The width of each line to print.
    """
    with TimerContext() as sgt_timer:
        transformed_embeddings = STAINED_GLASS_TRANSFORM(
            [
                {"role": "system", "content": "You are a helpful assistant."},
                {
                    "role": "user",
                    "content": (instruction + " " + context).strip(),
                },
            ]
        )
    raw_tokenization = tokenize_text(context, instruction)
    raw_embeddings = MISTRAL_MODEL.get_input_embeddings().forward(
        raw_tokenization["input_ids"].to(DEVICE)
    )
    noise_mask = raw_tokenization["noise_mask"]
    raw_tokenization["attention_mask"] = torch.ones_like(noise_mask)
    transformed_embeddings = STAINED_GLASS_TRANSFORM.truncated_module.module.sample_transformed_embeddings(
        input_ids=raw_tokenization["input_ids"],
        noise_mask=raw_tokenization["noise_mask"],
        attention_mask=raw_tokenization["attention_mask"],
        use_cache=True,
    )
    attempted_reconstruction = STAINED_GLASS_TRANSFORM.truncated_module.module.reconstruct_ids_from_embeddings(
        transformed_embeddings
    )
    attempted_reconstruction = MISTRAL_TOKENIZER.batch_decode(
        attempted_reconstruction, skip_special_tokens=True
    )[0]
    attempted_reconstruction = attempted_reconstruction.replace(r"\\", r"\\\\")
    attempted_reconstruction = attempted_reconstruction.replace(r"\n", r"\\n")
    attempted_reconstruction = "".join(
        char if char.isprintable() else "�" for char in attempted_reconstruction
    )
    with TimerContext() as raw_response_generation_timer:
        raw_mistral_response = generate(inputs_embeds=raw_embeddings.to(DEVICE))
    with TimerContext() as transformed_response_generation_timer:
        transformed_mistral_response = generate(
            inputs_embeds=transformed_embeddings.to(DEVICE)
        )
    raw_embeddings_to_log = (
        raw_embeddings[noise_mask.squeeze(-1)][0].detach().cpu()
    )
    transformed_embeddings_to_log = (
        transformed_embeddings[noise_mask.squeeze(-1)][0].detach().cpu()
    )
    info_to_log = {
        "Context": context,
        "Instruction": instruction,
        "Attempted Reconstruction (from the protected embeddings)": attempted_reconstruction,
        "Model Response (without Protopia)": raw_mistral_response[0],
        "Model Response (with Protopia)": transformed_mistral_response[0],
        "Time to perform Stained Glass Transform": str(sgt_timer.duration),
        "Time to do model inference (without Protopia)": str(
            raw_response_generation_timer.duration
        ),
        "Time to do model inference (with Protopia)": str(
            transformed_response_generation_timer.duration
        ),
        "Base prompt embedding": str(raw_embeddings_to_log),
        "Transformed prompt embedding": str(transformed_embeddings_to_log),
    }
    for key, value in info_to_log.items():
        value_text = value if value.strip() else "<empty>"
        value_text = (
            value_text
            if len(value_text) < 1000
            else value_text[:1000] + "\n[...]"
        )
        value_text = " " + value_text.replace("\n", "\n ")
        IPython.display.display(
            IPython.display.Markdown(f"##### {key}\n\n{value_text}")
        )
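TimerContext is only used for the coarse timings reported below; as a minimal standalone illustration (not part of the original notebook), it can wrap any block of code:

# Sketch: time an arbitrary block of code with TimerContext.
with TimerContext() as example_timer:
    MISTRAL_TOKENIZER("A short prompt, just to have something to time.")
print(f"Tokenization took {example_timer.duration}")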
Example Generations with and without Stained Glass Transform¶
We can generate responses using the generate_and_output_responses
function defined above. For each example input, it generates responses from the Mistral model with and without the Stained Glass Transform and prints the outputs along with some additional metadata.
Capital of France¶
In [8]:
generate_and_output_responses(
context="",
instruction="What is the capital of France?",
)
Context¶
<empty>
Instruction¶
What is the capital of France?
Attempted Reconstruction (from the protected embeddings)¶
[INST]Youareahelpfulassistant.WhatisthecapitalofFrance?[/INST]
Model Response (without Protopia)¶
lV:VJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJVJ
Model Response (with Protopia)¶
=:d5L=:d5L L L L L L L L L LlV:d5LlV:d5LlV:d5LlV:d5LlV:d5LlVyKV:d5LlV:d5LlV:d5LlV:d5LlV:d5LlV:d5LlV:d5LlV:d5
Time to perform Stained Glass Transform¶
0:00:00.109657
Time to do model inference (without Protopia)¶
0:00:01.646335
Time to do model inference (with Protopia)¶
0:00:01.336066
Base prompt embedding¶
tensor([ 0.0209, -0.0183, 0.0087, -0.0061, 0.0084, -0.0091, -0.0004, 0.0003,
-0.0260, 0.0337, -0.0209, -0.0131, -0.0039, 0.0202, -0.0037, 0.0029])
Transformed prompt embedding¶
tensor([-0.0190, -0.0277, 0.0134, -0.0105, -0.0079, 0.0004, -0.0348, -0.0210,
-0.0258, 0.0130, -0.0129, 0.0017, 0.0172, 0.0028, 0.0357, -0.0176])
Copywriting for a new product¶
In [ ]:
generate_and_output_responses(
context=(
"Alpha Company is a leading provider of innovative software solutions"
" for businesses of all sizes. Our flagship product, AlphaPro, is a"
" powerful tool that helps companies streamline their operations and"
" improve productivity. With AlphaPro, you can automate repetitive"
" tasks, track key performance metrics, and collaborate with team"
" members in real-time. It works for businesses of all sizes, from"
" startups to Fortune 500 companies."
),
instruction=(
"You are an expert copywriter hired by a company to write a blog post"
" about the benefits of using their product. The company wants you to"
" highlight the product's unique features and explain how it can help"
" customers. The blog post should be engaging and informative, with a"
" clear call-to-action at the end."
),
)
Rewrite an email¶
In [ ]:
generate_and_output_responses(
context=(
"""Hey Maria,
Your broken laptop is fixed now. Come pick it up at the repair store where you dropped it last week. We will be there from noon to five. Anytime then is good for us.
Peace,
Your Repair Team"""
),
instruction=("Rewrite the following email to be more professional."),
)
Customer Service Chat¶
In [ ]:
generate_and_output_responses(
context=(
"""You are a chatbot for the WebCo internet service provider. You should be as helpful as possible to the user and help alleviate user frustration. Never give an open-ended response unless you are asking the user for clarification. Try to solve the customer's problem as quickly as possible, and keep responses as simple as possible, because not all users will know how to solve their own problems.
User: Hi there! I've been experiencing slow internet speeds with my WebCo connection. Can you help me figure out what's going on?
Chatbot: Hello! I'm sorry to hear that you're experiencing slow internet speeds. I'd be happy to assist you. To better understand the issue, could you please provide me with some additional details? For example, are you using a wired or wireless connection, and have you noticed any specific times when the speed is particularly slow?
User: I'm using a wireless connection, and the speed is consistently slow throughout the day. I've already tried restarting my router, but it hasn't improved."""
),
instruction=("Please continue the following chat."),
)
Legal Document¶
In [ ]:
generate_and_output_responses(
context=(
"""Our client John Doe is facing problems with their HOA in building named HighRisesInc. John was not given permit for a remodel when certain other units were approved for a similar remodel. The HOA board and management took 2 months and did not give a reason for denial. Upon appealing the decision the board said it was their discretion to approve or reject by citing the by laws. However John feels like he is being discriminated against. The HOA board seems to only favor themselves or their friends. Hence John wants to demand fair enforcement and obtain approval for his permit."""
),
instruction=("Write a demand letter for John to send to his HOA board."),
)
Phishing detection¶
In [ ]:
generate_and_output_responses(
context=(
"""TO: user@gmail.com
FROM: help@netflix_notifications.com
SUBJECT: A problem with your account payment information
Please update your payment details,
Hi [user]!
We're having some troUble with your current bilLing inforMation. Please click the link below to update your payment details.
<a href="update.netflix_notifications.com">Update yOur payment iNfo.</a>
Your friends at Netflix."""
),
instruction=(
"You are a Cyber security expert. Please determine if the following email is a phishing email or other security threat."
),
)
Write a resume¶
In [ ]:
generate_and_output_responses(
context=(
"""Digital Marketing Specialist:
Company: XYZ Tech Solutions
We are seeking a Digital Marketing Specialist to develop, implement, and manage our digital marketing strategies. The ideal candidate will have proven experience in SEO, social media marketing, and content creation. Responsibilities include analyzing data, optimizing campaigns, and staying updated on industry trends. Join us to drive online visibility and engagement for XYZ Tech Solutions."""
),
instruction=(
"Please create a sample resume for a candidate for this job listing."
),
)
Write a financial blog¶
In [ ]:
generate_and_output_responses(
context=(
"""Financial Freedom through Smart Investments: A Client's Journey
Tell the story of how your financial advisory services guided a client towards smart investments, leading to financial security and achieving their long-term goals. Include details on investment strategies and outcomes."""
),
instruction=(
"Please write the outline for a blog post about the given context"
),
)