Classes:

Name Description
StainedGlassTransformForText

A client for creating protected input embeddings from text using Stained Glass Transform.

StainedGlassTransformForText

Bases: Module, Generic[TokenizerWrapperReturnT_co, SchemaT_contra]

A client for creating protected input embeddings from text using Stained Glass Transform.

Note

Instances of this class simply wrap the noisy model and tokenizer wrapper passed into their constructor. This means that changes made to the noisy model or tokenizer wrapper after the client's creation will affect the client's behavior and vice versa. If you need an independent copy of the client, you should either serialize/deserialize it or use copy.deepcopy.
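
For instance, a minimal sketch of creating an independent copy via copy.deepcopy (assuming a client constructed as in the Examples below):

>>> import copy
>>> independent_client = copy.deepcopy(client)
>>> independent_client is client
False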

Warning

Inferring the minimal parameters of the client requires a forward pass through the model. This assumes that the model has a static computational graph, i.e. the forward pass will require the same parameters for any valid input. This inference is done implicitly and automatically at initialization on an arbitrary input. To infer the minimal parameters using a particular input, see the infer_minimal_parameters method. To override the inferred parameters with parameters of your choosing, you can pass in the parameter_names argument to the constructor. Note, however, that you must specify all of the parameters necessary to calculate the base model's input embeddings.

Warning

Calls to forward or __call__ are not guaranteed to be thread-safe or reentrant.
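
If the client must be shared across threads, one common workaround (not part of this API; a sketch assuming a client constructed as in the Examples below, with a hypothetical protect helper) is to serialize calls with a lock:

>>> import threading
>>> client_lock = threading.Lock()
>>> def protect(schema):
...     with client_lock:
...         return client(schema)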

Attributes:

Name Type Description
truncated_module

The truncated module that wraps the noisy model. This module acts like the noisy model, but its forward method returns early as soon as the noise layer has been applied, i.e. its output is the transformed input embeddings, computed without calling any unnecessary layers of the noisy/base model. Generally, there is little need for users to interact with this attribute directly. If you need to access the noisy model, you can do so via the truncated_module.module attribute (see the sketch after this list).

tokenizer_wrapper

The tokenizer wrapper to use for tokenizing the input text. In many cases, this will be the same tokenizer wrapper used to train the model.

parameter_names_relative_to_base_model

Parameters of the base model to be saved and loaded during serialization and deserialization. This should be the minimal list of parameters necessary to compute the base model's input embeddings. Each of these parameter names should be relative to the base model. For example, if the base model's input embedding submodule can be accessed as base_model.model.embed_tokens, then the parameter name for its weight matrix should be model.embed_tokens.weight. Note that this attribute is only used for serialization and deserialization. Although non-minimal parameters will be included in the state_dict and in the serialized output, they will not be used to calculate the protected input embeddings.

name

The name of the StainedGlassTransformForText. This is used to identify the transform when saving and loading.
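
A minimal sketch of reading these attributes, assuming a client constructed as in the Examples below:

>>> underlying_noisy_model = client.truncated_module.module
>>> client.parameter_names_relative_to_base_model
[...]
>>> client.name
'example_transform'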

Examples:

Preparing the Model and Tokenizer wrappers:

>>> import transformers
>>> from stainedglass_core.huggingface.tokenization_utils import (
...     tokenizer_wrapper as sg_tokenizer_wrapper,
...     universal,
... )
>>> from stainedglass_core.model import noisy_transformer_masking_model
>>> from stainedglass_core.noise_layer import transformer_cloak
>>>
>>> BASE_MODEL_PATH = "tests/resources/tokenizers/mini-Mistral-7B-Instruct-v0.2"
>>> base_model_config = transformers.AutoConfig.from_pretrained(BASE_MODEL_PATH)
>>> base_model = transformers.AutoModelForCausalLM.from_config(base_model_config)
>>> embedding_size = base_model.config.hidden_size
>>> noisy_model = noisy_transformer_masking_model.NoiseMaskedNoisyTransformerModel(
...     noise_layer_class=transformer_cloak.TransformerCloak,
...     base_model=base_model,
...     transformer_type=transformers.MistralModel,
...     scale=(0.00000001, 1.0),
...     config_path=BASE_MODEL_PATH,
...     target_layer="model.embed_tokens",
...     directly_learn_stds=True,
...     use_causal_mask=True,
...     rho_init=0,
... )
>>> tokenizer = transformers.AutoTokenizer.from_pretrained(BASE_MODEL_PATH)
>>> tokenizer_wrapper = sg_tokenizer_wrapper.TokenizerWrapper(
...     tokenizer=tokenizer,
...     model_type=type(base_model),
...     include_labels=False,
...     ignore_prompt_loss=False,
...     prompt_type=sg_tokenizer_wrapper.PromptType.INSTRUCTION,
... )

StainedGlassTransformForText can (optionally) be given explicitly specified parameter names needed to compute the base model's input embeddings. See below for how to automatically infer the minimal parameters instead.

>>> input_embedding_module = base_model.get_input_embeddings()
>>> input_embedding_parameters_ids = [
...     id(p) for p in input_embedding_module.parameters()
... ]
>>> embedding_parameters_names = [
...     name
...     for name, p in base_model.named_parameters()
...     if id(p) in input_embedding_parameters_ids
... ]

Creating the Stained Glass Transform:

>>> client = StainedGlassTransformForText(
...     model=noisy_model,
...     tokenizer_wrapper=tokenizer_wrapper,
...     parameter_names=embedding_parameters_names,
...     name="example_transform",
... )

Inference with the Client:

>>> transformed_input_embeddings = client(
...     {
...         "instruction": "What is the capital of France?",
...         "system_prompt": "",
...         "context": "",
...         "response": "",
...     }
... )
>>> transformed_input_embeddings
tensor(...)

Saving the client:

>>> import tempfile
>>> temporary_file = tempfile.NamedTemporaryFile()
>>> FILE_PATH = temporary_file.name
>>> client.save_pretrained(FILE_PATH)

Loading the client:

>>> loaded_client = StainedGlassTransformForText.from_pretrained(
...     FILE_PATH
... )
>>> transformed_input_embeddings = loaded_client(
...     {
...         "instruction": "What is the capital of France?",
...         "system_prompt": "",
...         "context": "",
...         "response": "",
...     }
... )
>>> transformed_input_embeddings
tensor(...)

Minimal parameters are automatically inferred if the constructor is called with parameter_names=None.

>>> client_inferred = StainedGlassTransformForText(
...     model=noisy_model,
...     tokenizer_wrapper=tokenizer_wrapper,
...     parameter_names=None,
...     name="example_transform",
... )
>>> client_inferred.parameter_names_relative_to_client
[...]
>>> client_inferred.save_pretrained(FILE_PATH)

Optionally, you can manually specify an input to use for the forward pass to infer the minimal parameters. Most of the time this is not necessary.

>>> client_inferred.infer_minimal_parameters(
...     {
...         "instruction": "What is the capital of France?",
...         "system_prompt": "",
...         "context": "",
...         "response": "",
...     }
... )
>>> client_inferred.parameter_names_relative_to_client
[...]
>>> client_inferred.save_pretrained(FILE_PATH)

Added in version 0.69.0.

Changed in version 0.73.0: The minimal parameters to be saved/loaded can now be automatically inferred via the `infer_minimal_parameters` method.

Changed in version 0.75.0: The minimal parameters are now inferred automatically at construction if `parameter_names` is `None`.

Changed in version 0.83.0: The `include_all_base_model_params` argument has been added to the constructor to include all base model parameters.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See `stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper` for more information.

Changed in version 0.113.4: Allow setting a name for a StainedGlassTransformForText to aid in identification.

Methods:

Name Description
__init__

Initialize the Stained Glass Transform text client.

forward

Create the protected input embeddings for the given text.

from_pretrained

Load the client from the given path.

infer_minimal_parameters

Infer the minimal parameters of the client, excluding parameters not needed for the client.

manual_seed

Set seed to enable/disable reproducible behavior.

save_pretrained

Save the client to the given path.

state_dict

Get the state dictionary of the client, excluding parameters not needed for the client.

Attributes:

Name Type Description
parameter_names_relative_to_client list[str]

Get the minimal parameters of the client, excluding parameters not needed for the client.

parameter_names_to_remove_relative_to_client list[str]

Get the parameters to ignore when saving the client, excluding parameters not needed for the client.

stainedglass_core_version str | None

Get the version of Stained Glass Core used to save the Stained Glass Transform.

parameter_names_relative_to_client property

parameter_names_relative_to_client: list[str]

Get the minimal parameters of the client, excluding parameters not needed for the client.

This property will first check if self.parameter_names_relative_to_base_model is set (this is usually set via the parameter_names argument in the __init__ method). If it is, then it will return the parameters defined there, but with the submodule names changed to be relative to the client.

If self.parameter_names_relative_to_base_model is not set, then it will return the parameters inferred by the infer_minimal_parameters method's most recent call. This requires that the infer_minimal_parameters method has been called at least once before accessing this property.

Note

self.parameter_names_relative_to_base_model, if specified, will override the inferred parameters in calculating this property.

Returns:

Type Description
list[str]

The minimal parameters of the client, excluding parameters not needed for the client.

Raises:

Type Description
ValueError

If the minimal parameters of the base model have not been specified manually or inferred automatically.

parameter_names_to_remove_relative_to_client property

parameter_names_to_remove_relative_to_client: list[str]

Get the parameters to ignore when saving the client, excluding parameters not needed for the client.

This is effectively the set of all parameters in the client that are not in parameter_names_relative_to_client, accounting for duplicate parameters that are shared by multiple modules (and can therefore be accessed by multiple names).

Returns:

Type Description
list[str]

The parameters to ignore when saving the client, excluding parameters not needed for the client.

stainedglass_core_version property

stainedglass_core_version: str | None

Get the version of Stained Glass Core used to save the Stained Glass Transform.

Returns:

Type Description
str | None

The version of Stained Glass Core used to save the Stained Glass Transform.

__init__

__init__(
    model: NoiseMaskedNoisyTransformerModel[
        Any, ..., TransformerCloak[Any]
    ],
    tokenizer_wrapper: TokenizerWrapper[
        TokenizerWrapperReturnT_co, SchemaT_contra
    ],
    parameter_names: list[str] | None = None,
    include_all_base_model_params: bool = False,
    name: str | None = None,
) -> None

Initialize the Stained Glass Transform text client.

Note

Although any valid tokenizer wrapper can be used with this class, recall that this client object is used for inference and is often embedded into some application that takes user text input. Thus, in practice, we recommend using a tokenizer wrapper that takes in a string or a simple data structure that the user application will pass in. The forward and __call__ methods will have the same args/kwargs as the tokenizer wrapper. See the tokenizer_wrapper argument and examples for more information.

Warning

The constructor will automatically infer the minimal base model parameters required to calculate the base model's input embeddings. This requires a forward pass and assumes the model has a static computational graph. If you want to manually specify the minimal parameters, you can pass in the parameter_names argument. Note, however, that you must specify all of the parameters necessary to calculate the base model's input embeddings. Alternatively, if you would like to infer the minimal parameters using a particular input, see the infer_minimal_parameters method.

Parameters:

Name Type Description Default

model

NoiseMaskedNoisyTransformerModel[Any, ..., TransformerCloak[Any]]

The NoisyModel used to train Stained Glass Transform.

required

tokenizer_wrapper

TokenizerWrapper[TokenizerWrapperReturnT_co, SchemaT_contra]

The tokenizer wrapper to use for tokenizing the input text. In many cases, this will be the same tokenizer wrapper used to train the model. For more details, see TokenizerWrapper.

required

parameter_names

list[str] | None

Parameters of the base model to be saved and loaded during serialization and deserialization. This should be the minimal list of parameters necessary to get the base model's input embeddings. If None, then the minimal parameters must be inferred by calling the infer_minimal_parameters method, before serialization. Parameter names specified here explicitly will override any inferred parameters.

None

include_all_base_model_params

bool

Whether to include all base model parameters in the client. If True, then all parameters of the base model will be saved and loaded during serialization and deserialization, regardless of the parameter_names.

False

name

str | None

The name of the StainedGlassTransformForText. This is used to identify the transform when saving and loading.

None
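
For example, a sketch of constructing a client that retains all base model parameters, reusing noisy_model and tokenizer_wrapper from the Examples above (the name here is illustrative):

>>> full_client = StainedGlassTransformForText(
...     model=noisy_model,
...     tokenizer_wrapper=tokenizer_wrapper,
...     include_all_base_model_params=True,
...     name="full_transform",
... )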

Changed in version 0.73.0: The `parameter_names` argument can now be `None` to not explicitly specify the minimal parameters.

Changed in version 0.75.0: The minimal parameters are now inferred automatically at construction if `parameter_names` is `None`.

Changed in version 0.83.0: The `include_all_base_model_params` argument has been added to the constructor to include all base model parameters.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See `stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper` for more information.

forward

forward(schema: SchemaT_contra) -> torch.Tensor

Create the protected input embeddings for the given text.

Parameters:

Name Type Description Default

schema

SchemaT_contra

The schema containing the prompt to protect. This is the same data structure passed into the tokenizer wrapper.

required

Returns:

Type Description
torch.Tensor

The embeddings protected by Stained Glass Transform.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See `stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper` for more information.

from_pretrained classmethod

from_pretrained(
    path: str | Path,
    map_location: MAP_LOCATION = None,
    pickle_module: ModuleType = dill,
    *pickle_load_args: Any,
    **pickle_load_kwargs: Any,
) -> Self

Load the client from the given path.

Note

Because this method uses pickle internally, the saved object is not guaranteed to work if the client application uses a different version of Python or PyTorch.

Warning

This method uses pickle internally, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source in an unsafe mode, or that could have been tampered with. Only load data you trust.

Parameters:

Name Type Description Default

path

str | Path

The path to load the client from.

required

map_location

MAP_LOCATION

The location to map the client to. See torch.load for more information.

None

pickle_module

ModuleType

The pickle module to use for serialization. See torch.load for more information.

dill

*pickle_load_args

Any

Additional positional arguments to pass to torch.load.

required

**pickle_load_kwargs

Any

Additional keyword arguments to pass to torch.load.

required

Returns:

Type Description
Self

The loaded client.
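
A minimal sketch of loading onto a specific device, assuming FILE_PATH was produced by save_pretrained as in the Examples above (map_location follows torch.load semantics):

>>> cpu_client = StainedGlassTransformForText.from_pretrained(
...     FILE_PATH, map_location="cpu"
... )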

infer_minimal_parameters

infer_minimal_parameters(
    schema: SchemaT_contra | None = None,
) -> None

Infer the minimal parameters of the client, excluding parameters not needed for the client.

This method will infer the minimal parameters of the client by tracing a forward pass through the model. This is useful when the minimal parameters are not known ahead of time.

Parameters:

Name Type Description Default

schema

SchemaT_contra | None

The schema containing the prompt to protect. This is the same data structure passed into the tokenizer wrapper.

None

Raises:

Type Description
ValueError

If the minimal parameters of the client have already been specified manually (via the parameter_names constructor argument).

Added in version 0.73.0.

Changed in version 0.75.0: Minimal parameters can now be inferred without providing a sample input.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See `stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper` for more information.

manual_seed

manual_seed(seed: int | None) -> None

Set seed to enable/disable reproducible behavior.

Setting seed to None will disable reproducible behavior.

Parameters:

Name Type Description Default

seed

int | None

Value to seed into the random number generator.

required

Added in version 0.109.0: This utility can be used to set the seed value in the noise layer, thereby enabling deterministic behavior within the Stained Glass Transform (SGT).
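
A sketch of the intended usage, assuming that re-seeding before each forward pass restores the noise layer's generator state (client and schema as in the Examples above):

>>> import torch
>>> schema = {
...     "instruction": "What is the capital of France?",
...     "system_prompt": "",
...     "context": "",
...     "response": "",
... }
>>> client.manual_seed(42)
>>> first = client(schema)
>>> client.manual_seed(42)
>>> second = client(schema)
>>> torch.equal(first, second)
True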

save_pretrained

save_pretrained(
    path: str | Path,
    pickle_module: Any = dill,
    pickle_protocol: int = 2,
) -> None

Save the client to the given path.

Note

Because this method uses pickle internally, the saved object is not guaranteed to work if the client application uses a different version of Python or PyTorch.

Warning

This method uses pickle internally, which is known to be insecure. It is possible to construct malicious pickle data which will execute arbitrary code during unpickling. Never load data that could have come from an untrusted source in an unsafe mode, or that could have been tampered with. Only load data you trust.

Parameters:

Name Type Description Default

path

str | Path

The path to save the client to.

required

pickle_module

Any

The pickle module to use for serialization. See torch.save for more information.

dill

pickle_protocol

int

The pickle protocol to use for serialization. See torch.save for more information.

2

state_dict

state_dict(
    *, prefix: str = "", keep_vars: bool = False
) -> dict[str, Any]

Get the state dictionary of the client, excluding parameters not needed for the client.

The parameters considered necessary for the client are those passed into the constructor as parameter_names.

Parameters:

Name Type Description Default

prefix

str

A prefix added to parameter and buffer names to compose the keys in state_dict.

''

keep_vars

bool

By default the torch.Tensors returned in the state dict are detached from autograd. If set to True, detaching will not be performed.

False

Returns:

Type Description
dict[str, Any]

The state dictionary of the client, excluding parameters not needed for the client.
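
A minimal sketch of the prefix behavior, assuming a client constructed as in the Examples above:

>>> state = client.state_dict(prefix="sgt.")
>>> all(key.startswith("sgt.") for key in state)
True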