transform

Modules:

Name Description
text
vision

Classes:

Name Description
StainedGlassTransformForText

A client for creating protected input embeddings from text using Stained Glass Transform.

TransformedImageVisualizationManager

Captures NoisyModel input images and intermediate activations and creates visualizations, formatted as a grid of images where each row contains an input image and its corresponding activation.

StainedGlassTransformForText

Bases: Module, ModelHubMixin

A client for creating protected input embeddings from text using Stained Glass Transform.

Note

Instances of this class simply wrap the noisy model and tokenizer wrapper passed into their constructor. This means that changes made to the noisy model or tokenizer wrapper after the client's creation will affect the client's behavior and vice versa. If you need an independent copy of the client, you should either serialize/deserialize it or use copy.deepcopy.
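
For instance, an independent copy can be made with copy.deepcopy (a minimal sketch, assuming a client constructed as in the Examples below):

>>> import copy
>>> independent_client = copy.deepcopy(client)
>>> independent_client is client
False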

Warning

Inferring the minimal parameters of the client requires a forward pass through the model. This assumes that the model has a static computational graph, i.e. the forward pass will require the same parameters for any valid input. This inference is done implicitly and automatically at initialization on an arbitrary input. To infer the minimal parameters using a particular input, see the infer_minimal_parameters method. To override the inferred parameters with parameters of your choosing, you can pass in the parameter_names argument to the constructor. Note, however, that you must specify all of the parameters necessary to calculate the base model's input embeddings.

Warning

Calls to forward or __call__ are not guaranteed to be thread-safe or reentrant.
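
If the transform must be shared across threads, one option is to guard calls with a lock (a sketch, not part of the API; the lock and the client are illustrative):

>>> import threading
>>> client_lock = threading.Lock()
>>> def transform_safely(schema):
...     with client_lock:
...         return client(schema)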

Attributes:

Name Type Description
truncated_module

The truncated module that wraps the noisy model. This module acts like the noisy model, but its forward method will return early as soon as the noise layer is applied, i.e. its output will be the transformed input embeddings, without calling any unnecessary layers of the noisy/base model. Generally, there is little need for users to interact with this attribute directly. If you need to access the noisy model, you can do so via the self.noisy_model attribute.

tokenizer

The tokenizer used with the model.

parameter_names_relative_to_base_model

Parameters of the base model to be saved and loaded during serialization and deserialization. This should be the minimal list of parameters necessary to get the base model's input embeddings. Each of these parameter names should be relative to the base model. For example, if the base model's input embedding submodule can be accessed by base_model.model.embed_tokens, then the parameter name for its weight matrix should be model.embed_tokens.weight (see the naming sketch just before the Examples below). Note that this attribute is only used for serialization and deserialization. Although non-minimal parameters will be included in the state_dict and in serialized files, they will not be used to calculate the protected input embeddings.

name

The name of the StainedGlassTransformForText. This is used to identify the transform when saving and loading.

model_card_data

Optional model card data to associate with the Stained Glass Transform. Useful for providing metadata when sharing the transform on the Hugging Face Hub, such as the model's intended use, training data, and other relevant information. Follow the documentation on Model Cards and ModelCardData for more information on how to fill out the model card data.
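
To illustrate the relative naming convention described above (a sketch, assuming a base model whose input embeddings live at model.embed_tokens, like the base_model constructed in the Examples below):

>>> "model.embed_tokens.weight" in dict(base_model.named_parameters())
True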

Examples:

Preparing the Model: This example runs on the CPU, so we use float32, since flash attention and bfloat16 are not universally supported on the CPU.

>>> import torch
>>> import transformers
>>> from stainedglass_core.huggingface.tokenization_utils import universal
>>> from stainedglass_core.model import noisy_transformer_masking_model
>>> from stainedglass_core.noise_layer import transformer_cloak
>>>
>>> BASE_MODEL_PATH = "tests/resources/tokenizers/mini-Meta-Llama-3-8B"
>>> base_model_config = transformers.AutoConfig.from_pretrained(BASE_MODEL_PATH)
>>> base_model = transformers.AutoModelForCausalLM.from_config(base_model_config)
>>> embedding_size = base_model.config.hidden_size
>>> noisy_model = noisy_transformer_masking_model.NoiseMaskedNoisyTransformerModel(
...     noise_layer_class=transformer_cloak.TransformerCloak,
...     base_model=base_model,
...     transformer_type=type(base_model.get_decoder()),
...     scale=(0.00000001, 1.0),
...     config=BASE_MODEL_PATH,
...     target_layer="model.embed_tokens",
...     directly_learn_stds=True,
...     rho_init=0,
...     noise_layer_dtype=torch.float32,
... )
>>> tokenizer = transformers.AutoTokenizer.from_pretrained(BASE_MODEL_PATH)

StainedGlassTransformForText can (optionally) use explicitly specified parameters needed to get the base model's input embeddings. See below for how to automatically infer the minimal parameters.

>>> input_embedding_module = base_model.get_input_embeddings()
>>> input_embedding_parameters_ids = [
...     id(p) for p in input_embedding_module.parameters()
... ]
>>> embedding_parameters_names = [
...     name
...     for name, p in base_model.named_parameters()
...     if id(p) in input_embedding_parameters_ids
... ]

Creating the Stained Glass Transform:

>>> client = StainedGlassTransformForText(
...     model=noisy_model,
...     tokenizer=tokenizer,
...     parameter_names=embedding_parameters_names,
...     name="example_transform",
... )

Inference with the Client:

>>> transformed_input_embeddings = client(
...     [
...         {
...             "role": "system",
...             "content": "You are a helpful assistant.",
...         },
...         {
...             "role": "user",
...             "content": "Write me a poem.",
...         },
...     ]
... )
>>> transformed_input_embeddings
tensor(...)

Inference with the Client using token IDs directly (with ones as placeholder values):

>>> input_ids = torch.ones(1, 2, dtype=torch.long)
>>> noise_mask = torch.ones(1, 2, 1, dtype=torch.bool)
>>> transformed_input_embeddings = client(input_ids, noise_mask)
>>> transformed_input_embeddings
tensor(...)

Saving the client:

>>> import tempfile
>>> temporary_file = tempfile.NamedTemporaryFile(suffix=".sgt")
>>> FILE_PATH = temporary_file.name
>>> client.save_pretrained(FILE_PATH)

Loading the client:

>>> loaded_client = StainedGlassTransformForText.from_pretrained(FILE_PATH)
>>> transformed_input_embeddings = loaded_client(
...     [
...         {
...             "role": "system",
...             "content": "You are a helpful assistant.",
...         },
...         {
...             "role": "user",
...             "content": "Write me a poem.",
...         },
...     ]
... )
>>> transformed_input_embeddings
tensor(...)

Loading the client from the Hugging Face Hub:

>>> hub_client = StainedGlassTransformForText.from_pretrained(
...     "<Model Provider>/<SGT Name>"
... )
>>> transformed_input_embeddings = hub_client(
...     [
...         {
...             "role": "system",
...             "content": "You are a helpful assistant.",
...         },
...         {
...             "role": "user",
...             "content": "Write me a poem.",
...         },
...     ]
... )
>>> transformed_input_embeddings
tensor(...)

Minimal parameters are automatically inferred if the constructor is called with parameter_names=None.

>>> client_inferred = StainedGlassTransformForText(
...     model=noisy_model,
...     tokenizer=tokenizer,
...     parameter_names=None,
...     name="example_transform",
... )
>>> client_inferred.parameter_names_relative_to_client
[...]
>>> client_inferred.save_pretrained(FILE_PATH)

Returning base64-encoded embeddings (useful for sending to vLLM):

>>> b64_string = client.forward_b64(
...     [
...         {"role": "system", "content": "You are a helpful assistant."},
...         {"role": "user", "content": "Tell me a joke."},
...     ]
... )
>>> isinstance(b64_string, str)
True
>>> import torch, io, pybase64
>>> decoded_tensor = torch.load(io.BytesIO(pybase64.b64decode(b64_string)))
>>> isinstance(decoded_tensor, torch.Tensor)
True

Added in version 0.69.0.

Changed in version 0.73.0: The minimal parameters to be saved/loaded can now be automatically inferred via the `infer_minimal_parameters` method.

Changed in version 0.75.0: The minimal parameters are now inferred automatically at construction if `parameter_names` is `None`.

Changed in version 0.83.0: The `include_all_base_model_params` argument has been added to the constructor to include all base model parameters.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper for more information.

Changed in version v0.113.4: Allow setting a name for a StainedGlassTransformForText to aid in identification.

Changed in version v0.144.0: Serialized SGT files now use a zip file containing JSON configuration and safetensor weights files, instead of the legacy pickle-based format.

Changed in version v1.12.0: The `NoiseTokenizer` now has a state that can be saved and loaded, which is used to preserve the chat template and other settings.

Changed in version v2.8.0: Saving and loading from the Hugging Face Hub has been added.

Changed in version v2.8.0: Pairing an SGT with a Hugging Face Hub ModelCard is now supported. This is useful for providing metadata when sharing the transform on the Hub, such as Base Model, training dataset, and evaluation metrics. Use the `model_card_data` argument when creating the SGT instance.

Methods:

Name Description
__getstate__
__init__

Initialize the Stained Glass Transform text client.

__setstate__
forward

Create the protected input embeddings for the given text.

forward_b64

Create protected input embeddings for the given text and return them base64-encoded using pybase64.

from_pretrained

Load the client from the given path.

generate_model_card

Generate model card from instance model card metadata and class templates.

infer_minimal_parameters

Infer the minimal parameters of the client, excluding parameters not needed for the client.

manual_seed

Set seed to enable/disable reproducible behavior.

push_to_hub

Upload model checkpoint to the Hub.

save_pretrained

Save the client to the given path.

state_dict

Get the state dictionary of the client, excluding parameters not needed for the client.

noise_layer property

noise_layer: TransformerCloak[Any]

Alias for the contained TransformerCloak layer.

noisy_model property

Alias for the contained NoiseMaskedNoisyTransformerModel.

Warning

A deserialized StainedGlassTransformForText usually will not have its complete base model parameters, so calling the noisy model referenced in this property may not work.

parameter_names_relative_to_client property

parameter_names_relative_to_client: list[str]

Get the minimal parameters of the client, excluding parameters not needed for the client.

This property will first check if self.parameter_names_relative_to_base_model is set (this is usually set via the parameter_names argument in the __init__ method). If it is, then it will return the parameters defined there, but with the submodule names changed to be relative to the client.

If self.parameter_names_relative_to_base_model is not set, then it will return the parameters inferred by the infer_minimal_parameters method's most recent call. This requires that the infer_minimal_parameters method has been called at least once before accessing this property.

Note

self.parameter_names_relative_to_base_model, if specified, will override the inferred parameters in calculating this property.

Returns:

Type Description
list[str]

The minimal parameters of the client, excluding parameters not needed for the client.

Raises:

Type Description
ValueError

If the minimal parameters of the base model have not been specified manually or inferred automatically.

parameter_names_to_remove_relative_to_client property

parameter_names_to_remove_relative_to_client: list[str]

Get the parameters to ignore when saving the client, excluding parameters not needed for the client.

This is effectively the set of all parameters in the client that are not in parameter_names_relative_to_client, accounting for duplicate parameters shared by multiple modules (and thus accessible by multiple names), as the sketch below illustrates.

Returns:

Type Description
list[str]

The parameters to ignore when saving the client, excluding parameters not needed for the client.
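
The two name lists partition the client's parameter names; a quick consistency check (a sketch, assuming a constructed client):

>>> removed = set(client.parameter_names_to_remove_relative_to_client)
>>> kept = set(client.parameter_names_relative_to_client)
>>> removed.isdisjoint(kept)
True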

stainedglass_core_version property

stainedglass_core_version: str | None

Get the version of Stained Glass Core used to save the Stained Glass Transform.

Returns:

Type Description
str | None

The version of Stained Glass Core used to save the Stained Glass Transform.

__getstate__

__getstate__() -> dict[str, Any]

Changed in version v0.144.0: Serialized SGT files now use a zip file containing JSON configuration and safetensor weights files, instead of the legacy pickle-based format.

__init__

__init__(
    model: NoiseMaskedNoisyTransformerModel[
        Any, ..., TransformerCloak[Any]
    ],
    tokenizer: PreTrainedTokenizerBase
    | PreTrainedTokenizer
    | PreTrainedTokenizerFast,
    parameter_names: list[str] | None = None,
    include_all_base_model_params: bool = False,
    name: str | None = None,
    chat_template: str | None = None,
    transform_all_tokens: bool = False,
    transform_tools: bool = False,
    model_card_data: ModelCardData | None = None,
) -> None

Initialize the Stained Glass Transform text client.

Warning

The constructor will automatically infer the minimal base model parameters required to calculate the base model's input embeddings. This requires a forward pass and assumes the model has a static computational graph. If you want to manually specify the minimal parameters, you can pass in the parameter_names argument. Note, however, that you must specify all of the parameters necessary to calculate the base model's input embeddings. Alternatively, if you would like to infer the minimal parameters using a particular input, see the infer_minimal_parameters method.

Parameters:

Name Type Description Default

model

NoiseMaskedNoisyTransformerModel[Any, ..., TransformerCloak[Any]]

The NoisyModel used to train Stained Glass Transform.

required

tokenizer

PreTrainedTokenizerBase | PreTrainedTokenizer | PreTrainedTokenizerFast

The tokenizer to use with the model.

required

parameter_names

list[str] | None

Parameters of the base model to be saved and loaded during serialization and deserialization. This should be the minimal list of parameters necessary to get the base model's input embeddings. If None, the minimal parameters are inferred automatically at construction (see the infer_minimal_parameters method). Parameter names specified here explicitly will override any inferred parameters.

None

include_all_base_model_params

bool

Whether to include all base model parameters in the client. If True, then all parameters of the base model will be saved and loaded during serialization and deserialization, regardless of the parameter_names.

False

name

str | None

The name of the StainedGlassTransformForText. This is used to identify the transform when saving and loading.

None

chat_template

str | None

A Jinja template to use for this conversion. It is usually not necessary to pass anything to this argument, as the model's template will be used by default.

None

transform_all_tokens

bool

Whether to also apply Stained Glass Transform to special tokens.

False

transform_tools

bool

Whether to transform the tools.

False

model_card_data

ModelCardData | None

Optional model card data to associate with the Stained Glass Transform. Useful for providing metadata when sharing the transform on the Hugging Face Hub. Follow the documentation on Model Cards and ModelCardData for more information on how to fill out the model card data. Useful fields to consider setting include base_model, datasets, eval_results, and metrics.

None

Changed in version 0.73.0: The `parameter_names` argument can now be `None` to not explicitly specify the minimal parameters.

Changed in version 0.75.0: The minimal parameters are now inferred automatically at construction if `parameter_names` is `None`.

Changed in version 0.83.0: The `include_all_base_model_params` argument has been added to the constructor to include all base model parameters.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper for more information.

Changed in version v0.144.0: Serialized SGT files now use a zip file containing JSON configuration and safetensor weights files, instead of the legacy pickle-based format.

Changed in version v2.8.0: Pairing an SGT with a Hugging Face Hub ModelCard is now supported. This is useful for providing metadata when sharing the transform on the Hub, such as Base Model, training dataset, and evaluation metrics. Use the `model_card_data` argument when creating the SGT instance.

__setstate__

__setstate__(state: dict[str, Any]) -> None

Changed in version v0.144.0: Serialized SGT files now use a zip file containing JSON configuration and safetensor weights files, instead of the legacy pickle-based format.

forward

forward(
    *apply_chat_template_args: Any,
    **apply_chat_template_kwargs: Any,
) -> torch.Tensor

Create the protected input embeddings for the given text.

Note

Either the args/kwargs to NoiseTokenizer.apply_chat_template or the input_ids and noise_mask tensors can be provided. If both are provided, the input_ids and noise_mask tensors will be used and the args to NoiseTokenizer.apply_chat_template will be ignored.

Note

When not using arguments to NoiseTokenizer.apply_chat_template, both input_ids and noise_mask must be provided, and in this case are the only two allowed arguments.

Note

By default, we assume this is being used for generation, so we add the generation prompt to the input. If you don't want to add the generation prompt, you can set add_generation_prompt to False in the apply_chat_template_kwargs.
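
For example, to transform a conversation without appending the generation prompt (a sketch, assuming the client from the class-level Examples):

>>> embeddings = client(
...     [{"role": "user", "content": "Write me a poem."}],
...     add_generation_prompt=False,
... )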

Parameters:

Name Type Description Default

apply_chat_template_args

Any

The args to NoiseTokenizer.apply_chat_template or the input_ids and noise_mask tensors. See apply_chat_template.

required

apply_chat_template_kwargs

Any

The kwargs to NoiseTokenizer.apply_chat_template or input_ids and noise_mask tensors. See apply_chat_template.

required

Returns:

Type Description
torch.Tensor

The embeddings protected by Stained Glass Transform.

Raises:

Type Description
ValueError

If only one of input_ids and noise_mask is provided.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper for more information.

Changed in version v2.15.0: The Stained Glass Transformer for Text accepts `input_ids` and `noise_mask` as positional or keyword arguments to its call/forward method in addition to its existing chat conversation interface. These two kinds of arguments cannot be used simultaneously.

forward_b64

forward_b64(
    *apply_chat_template_args: Any,
    **apply_chat_template_kwargs: Any,
) -> str

Create protected input embeddings for the given text and return them base64-encoded using pybase64.

Parameters:

Name Type Description Default

apply_chat_template_args

Any

Positional arguments for NoiseTokenizer.apply_chat_template.

required

apply_chat_template_kwargs

Any

Keyword arguments for NoiseTokenizer.apply_chat_template.

required

Returns:

Type Description
str

A base64-encoded string representation of the protected input embeddings.

Added in version SGT_B64_FORWARD_METHOD.

from_pretrained classmethod

from_pretrained(
    pretrained_model_name_or_path: str | Path,
    map_location: device | str | None = None,
    index_file_name: str | None = None,
    dtype: str | dtype | None = None,
    noise_layer_attention: Literal[
        "sdpa",
        "flash_attention_2",
        "flash_attention_3",
        "flex_attention",
    ]
    | None = None,
    *,
    force_download: bool = False,
    resume_download: bool | None = None,
    proxies: bool | dict[Any, Any] | None = None,
    token: str | bool | None = None,
    cache_dir: str | Path | None = None,
    local_files_only: bool = False,
    revision: str | None = None,
    **model_kwargs: Any,
) -> Self

Load the client from the given path.

Parameters:

Name Type Description Default

pretrained_model_name_or_path

str | Path

The path to load the client from. This can be a path to a .sgt zipfile or a model name on the Hugging Face Hub (such as Protopia/SGT-for-llama-3.1-8b-instruct-rare-rain-bfloat16). Passing a local directory is not supported.

required

map_location

device | str | None

The location to map the client to. See torch.device for more information.

None

index_file_name

str | None

The name of the index file to use within the zipfile. If None, the default index file name will be used.

None

dtype

str | dtype | None

The dtype, either as a string or a torch.dtype, to use for the noise layer and embedding weights. If None, the default dtype will be used. When passed as a string, it should be formatted as "torch.<dtype>", e.g. "torch.float32" or "torch.bfloat16".

None

noise_layer_attention

Literal['sdpa', 'flash_attention_2', 'flash_attention_3', 'flex_attention'] | None

The attention type to use for the noise layer. If None, the default attention type will be used.

None

force_download

bool

Whether to force the download of the client. If False, the client will be downloaded if it is not already present in the cache.

False

resume_download

bool | None

Unused. Required for compatibility with the Hugging Face Hub API.

None

proxies

bool | dict[Any, Any] | None

Unused. Required for compatibility with the Hugging Face Hub API.

None

token

str | bool | None

The token to use for authentication with the Hugging Face Hub API.

None

cache_dir

str | Path | None

The directory to use for caching the client. If None, the default cache directory will be used.

None

local_files_only

bool

Whether to only use local files and not attempt to download the client. If True, an error will be raised if the client is not present in the cache.

False

revision

str | None

The revision of the client to use. This can be a branch name, tag name, or commit hash. If None, the default revision will be used.

None

model_kwargs

Any

Unused. Required for compatibility with the Hugging Face Hub API.

required

Returns:

Type Description
Self

The loaded client.

Raises:

Type Description
ValueError

If any model_kwargs are passed in, as they are not supported.

IsADirectoryError

If the specified path is a directory, but a .sgt file path is required.
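
For example, loading a local .sgt file onto the CPU with an explicit dtype (a sketch; the file path is illustrative):

>>> cpu_client = StainedGlassTransformForText.from_pretrained(
...     "path/to/transform.sgt",
...     map_location="cpu",
...     dtype="torch.float32",
... )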

generate_model_card

generate_model_card(
    *args: Any, **kwargs: Any
) -> huggingface_hub.ModelCard

Generate model card from instance model card metadata and class templates.

Parameters:

Name Type Description Default

*args

Any

Positional arguments to huggingface_hub.ModelCard.from_template. Unused (because all arguments are passed by keyword).

required

**kwargs

Any

Keyword arguments to the template_str passed to huggingface_hub.ModelCard.from_template.

required

Returns:

Type Description
huggingface_hub.ModelCard

Generated ModelCard object.

Changed in version v2.8.0: Automatically generated model card files now respect instance model card metadata.
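
A typical use is generating the card and saving it locally (a sketch, assuming a client created with model_card_data):

>>> card = client.generate_model_card()
>>> card.save("README.md")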

infer_minimal_parameters

infer_minimal_parameters() -> None

Infer the minimal parameters of the client, excluding parameters not needed for the client.

This method will infer the minimal parameters of the client by tracing a forward pass through the model. This is useful when the minimal parameters are not known ahead of time.

Raises:

Type Description
ValueError

If the minimal parameters of the client have already been specified.

Added in version 0.73.0.

Changed in version 0.75.0: Minimal parameters can now be inferred without providing a sample input.

Changed in version 0.99.0: The `tokenizer_wrapper` argument now requires a `TokenizerWrapper` instance, instead of the deprecated `noise_mask_tokenizer_wrapper` object. The `forward` method now takes in a schema argument, which is the same data structure passed into the new tokenizer wrapper. See stainedglass_core.huggingface.tokenization_utils.TokenizerWrapper for more information.
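
For example, re-running inference on a client that was constructed without explicit parameter names (a sketch, using client_inferred from the class-level Examples):

>>> client_inferred.infer_minimal_parameters()
>>> client_inferred.parameter_names_relative_to_client
[...]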

manual_seed

manual_seed(
    seed: int | None, rank_dependent: bool = True
) -> None

Set seed to enable/disable reproducible behavior.

Setting seed to None will disable reproducible behavior.

Parameters:

Name Type Description Default

seed

int | None

Value to seed into the random number generator.

required

rank_dependent

bool

Whether to add the distributed rank to the seed to ensure that each process samples different noise.

True

Added in version 0.109.0: This utility can be used to set the seed value in the noise layer, thereby enabling deterministic behavior within SGT.
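
For example, seeding makes repeated transforms of the same input deterministic (a sketch, assuming the client and tensors from the class-level Examples):

>>> client.manual_seed(42, rank_dependent=False)
>>> first = client(input_ids, noise_mask)
>>> client.manual_seed(42, rank_dependent=False)
>>> second = client(input_ids, noise_mask)
>>> torch.equal(first, second)
True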

push_to_hub

push_to_hub(
    repo_id: str,
    *,
    config: dict | DataclassInstance | None = None,
    commit_message: str = "Upload using stainedglass_core.",
    private: bool | None = None,
    token: str | None = None,
    branch: str | None = None,
    create_pr: bool | None = None,
    allow_patterns: list[str] | str | None = None,
    ignore_patterns: list[str] | str | None = None,
    delete_patterns: list[str] | str | None = None,
    model_card_kwargs: dict[str, Any] | None = None,
) -> str

Upload model checkpoint to the Hub.

Warning

This method is currently not supported on StainedGlassTransformForText. Instead use save_pretrained with push_to_hub=True.

Use allow_patterns and ignore_patterns to precisely filter which files should be pushed to the hub. Use delete_patterns to delete existing remote files in the same commit. See the upload_folder reference for more details.

Parameters:

Name Type Description Default

repo_id

str

ID of the repository to push to (example: "username/my-model").

required

config

dict | DataclassInstance | None

Model configuration specified as a key/value dictionary or a dataclass instance.

None

commit_message

str

Message to commit while pushing.

'Upload using stainedglass_core.'

private

bool | None

Whether the repository created should be private. If None (default), the repo will be public unless the organization's default is private.

None

token

str | None

The token to use as HTTP bearer authorization for remote files. By default, it will use the token cached when running hf auth login.

None

branch

str | None

The git branch on which to push the model. This defaults to "main".

None

create_pr

bool | None

Whether or not to create a Pull Request from branch with that commit.

None

allow_patterns

list[str] | str | None

If provided, only files matching at least one pattern are pushed.

None

ignore_patterns

list[str] | str | None

If provided, files matching any of the patterns are not pushed.

None

delete_patterns

list[str] | str | None

If provided, remote files matching any of the patterns will be deleted from the repo.

None

model_card_kwargs

dict[str, Any] | None

Additional arguments passed to the model card template to customize the model card.

None

Returns:

Type Description
str

The url of the commit of your model in the given repository.

Raises:

Type Description
NotImplementedError

This method is not implemented.

save_pretrained

save_pretrained(
    save_directory: str | Path,
    *,
    compression: int = 8,
    push_to_hub: bool = False,
    repo_id: str | None = None,
    private: bool = True,
    config: dict | DataclassInstance | None = None,
    model_card_kwargs: dict[str, Any] | None = None,
    **push_to_hub_kwargs: Any,
) -> None

Save the client to the given path.

Parameters:

Name Type Description Default

save_directory

str | Path

The path to save the client to. Although this is called save_directory for compatibility with ModelHubMixin.save_pretrained, passing in a directory name is not supported. A .sgt zipfile will be generated at the path provided (even if the path provided does not use the .sgt file extension).

required

compression

int

The compression method to use for the ZIP file. Defaults to zipfile.ZIP_DEFLATED, but this can cause very slow serialization times. If serialization times are a problem, use zipfile.ZIP_STORED instead.

8

push_to_hub

bool

Whether to push the client to the Hugging Face Hub.

False

repo_id

str | None

The repository ID to push the client to. This is required if push_to_hub is True.

None

private

bool

Whether to make the repository private. This is only used if push_to_hub is True.

True

config

dict | DataclassInstance | None

Unused. Required for compatibility with the Hugging Face Hub API.

None

model_card_kwargs

dict[str, Any] | None

The kwargs to pass to the model card generator. This is only used if push_to_hub is True.

None

push_to_hub_kwargs

Any

The kwargs to pass to the HfApi.upload_folder method. This is only used if push_to_hub is True.

required

Raises:

Type Description
IsADirectoryError

If a directory is passed in.

ValueError

If push_to_hub is True and repo_id is None.

UserWarning

If push_to_hub is True and private is False.

Examples:

Uploading a Stained Glass Transform zipfile to the Hugging Face Hub (note that this will also create a local copy of the SGT zipfile):

>>> from stainedglass_core.transform import text
>>> sgt = text.StainedGlassTransformForText.from_pretrained(
...     "path/to/sgt_file.sgt"
... )
>>> sgt.save_pretrained(
...     "new-sgt-zipfile.sgt",
...     push_to_hub=True,
...     repo_id="username/new-sgt-repo",
... )

Optionally, you can override any model card metadata before uploading to the Hub. This can be useful for specifying the base model and datasets used for training Stained Glass Transform. You can also specify additional metadata such as eval_results. See huggingface_hub.ModelCardData for more details on the available fields.

>>> sgt.model_card_data.base_model = (
...     "meta-llama/Llama-3.1-8B-Instruct"
... )
>>> sgt.model_card_data.__dict__["base_model_relation"] = (
...     "adapter"
... )
>>> sgt.model_card_data.datasets = ["Open-Orca/OpenOrca"]
>>> sgt.save_pretrained(
...     "new-sgt-zipfile.sgt",
...     push_to_hub=True,
...     repo_id="username/new-sgt-repo",
... )
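
If serialization time matters more than archive size, compression can be disabled (a sketch, reusing the sgt client from the example above):

>>> import zipfile
>>> sgt.save_pretrained("fast-save.sgt", compression=zipfile.ZIP_STORED)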

Changed in version v0.144.0: Serialized SGT files now use a zip file containing JSON configuration and safetensor weights files, instead of the legacy pickle-based format.

Changed in version v0.144.0: pickle-related arguments are no longer accepted to accommodate switching from `torch.save` to safetensors.

Changed in version v2.8.0: Added ability to push Stained Glass Transform to the Hugging Face Hub. BREAKING CHANGE: Argument `path` was renamed `save_directory` for compatibility with ModelHubMixin.save_pretrained.

Changed in version v2.20.3: The model safetensors filename was changed for better compatibility with the Hugging Face Hub. This has no practical effect on saving or loading.

state_dict

state_dict(
    *, prefix: str = "", keep_vars: bool = False
) -> dict[str, Any]

Get the state dictionary of the client, excluding parameters not needed for the client.

The parameters considered necessary for the client are those passed into the constructor as parameter_names.

Parameters:

Name Type Description Default

prefix

str

A prefix added to parameter and buffer names to compose the keys in state_dict.

''

keep_vars

bool

By default the torch.Tensors returned in the state dict are detached from autograd. If it's set to True, detaching will not be performed.

False

Returns:

Type Description
dict[str, Any]

The state dictionary of the client, excluding parameters not needed for the client.
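
For example, every minimal parameter name appears in the returned dictionary (a sketch, assuming a constructed client):

>>> state = client.state_dict()
>>> all(name in state for name in client.parameter_names_relative_to_client)
True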

TransformedImageVisualizationManager

Captures NoisyModel input images and intermediate activations and creates visualizations, formatted as a grid of images where each row contains an input image and its corresponding activation.

Methods:

Name Description
__init__

Construct a new TransformedImageVisualizationManager.

prepare_activation_images

Collect the input and activation tensors from the most recent forward pass and populate them into a grid of images for each activation.

Attributes:

Name Type Description
grids ActivationImages

The tensors representing the grid of images to visualize.

grids property

The tensors representing the grid of images to visualize.

__init__

__init__(
    noisy_model: NoisyModel[ModuleT, ..., NoiseLayerT],
    input_name: str | None = None,
    max_examples: int = 4,
    max_color_channels: int = 3,
) -> None

Construct a new TransformedImageVisualizationManager.

Parameters:

Name Type Description Default

noisy_model

NoisyModel[ModuleT, ..., NoiseLayerT]

The model to visualize input images and activations for.

required

input_name

str | None

The name of the noisy_model input Tensor argument. If noisy_model's target_parameter is set, it will be used by default; otherwise, this argument is required.

None

max_examples

int

The maximum number of rows to display in the visualizations.

4

max_color_channels

int

The maximum number of individual color channels to additionally display in each visualization. If the target layer has more than 3 color channels, these will be displayed in grayscale.

3

Raises:

Type Description
ValueError

If noisy_model has no target_parameter and input_name is not provided.

ValueError

If noisy_model has a target_parameter and input_name is provided but does not match. In this case, you do not need to provide input_name.

prepare_activation_images

prepare_activation_images() -> ActivationImages

Collect the input and activation tensors from the most recent forward pass and populate them into a grid of images for each activation.

Returns:

Type Description
ActivationImages

A dictionary of image tensors, each formatted as a grid of images, where each row corresponds to a specific example.
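
A typical flow runs a forward pass through the wrapped model and then collects the grids (a sketch; vision_noisy_model, pixel_values, and the input name are illustrative assumptions):

>>> manager = TransformedImageVisualizationManager(
...     noisy_model=vision_noisy_model,
...     input_name="pixel_values",
... )
>>> _ = vision_noisy_model(pixel_values=pixel_values)
>>> grids = manager.prepare_activation_images()
>>> sorted(grids.keys())
[...]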