patch_cloak

Classes:

Name Description
PatchCloakNoiseLayer

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators.

PatchCloakNoiseLayer1

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators.

PatchCloakNoiseLayer2

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators.

PatchCloakNoiseLayer2_NoClip

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators.

PatchCloakNoiseLayer

Bases: BaseNoiseLayer[Conv2d, CloakStandardDeviationParameterization, Optional[BatchwiseChannelwisePatchwisePercentMasker]]

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, optional standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker, and optional output clamping.

Parameters:

Name Type Description Default

color_channels

int

The number of color channels in the input.

required

patch_size

int | tuple[int, int]

Size of the patches to segment the input into. If an integer is given, a square patch is used.

required

scale

tuple[float, float]

Minimum and maximum values of the range of standard deviations of the generated stochastic transformation.

(0.0001, 2.0)

shallow

float

A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos.

1.0

percent_to_mask

float | None

The percentage of the outputs to mask per patch.

None

value_range

tuple[float | None, float | None] | None

Minimum and maximum values of the range to clamp the output into.

None

padding_mode

Literal['constant', 'reflect', 'replicate', 'circular']

Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant.

'constant'

padding_value

float

Fill value for constant padding.

0.0

learn_locs_weights

bool

Whether to learn the weight parameters for the locs estimator. If True, only the weights are learned and the bias remains constant; otherwise, the weights are initialized to zero and only the bias is learned.

True

freeze_std_estimator

bool

Whether to freeze the weight parameters for the std estimator. If False, this estimator will be trained simultaneously with masking and the locs estimator parameters.

True

seed

int | None

Seed for the random number generator used to generate the stochastic transformation. If None, the global RNG state is used.

None
Note

For estimators where we train the weights, we freeze their biases. We suspect that if both the biases and weights are trained together, the biases will converge faster than the weights, causing the weights to be trivial. This has not been verified experimentally. Our choices for initialization values and conditions are subject to change in light of experimental evidence, and you are encouraged to challenge and improve our understanding of the effects of these choices.
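
A minimal construction sketch using only the required parameters. The import path mirrors the doctest examples later on this page; the exact submodule exposing PatchCloakNoiseLayer is an assumption, not documented here.

# Sketch: constructing and applying the layer (import path assumed).
import torch
from stainedglass_core import noise_layer as sg_noise_layer

noise_layer = sg_noise_layer.PatchCloakNoiseLayer(
    color_channels=3,   # RGB input
    patch_size=16,      # an integer yields square 16x16 patches
    seed=42,            # reproducible stochastic transform
)

images = torch.rand(2, 3, 224, 224)   # spatial size divisible into 16x16 patches
transformed = noise_layer(images)     # input-dependent stochastic transform of the batch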

Methods:

Name Description
__call__

Transform the input data.

__getstate__

Prepare a serializable copy of self.__dict__.

__init_subclass__

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

Restore from a serialized copy of self.__dict__.

forward

Transform the input data.

get_applied_transform_components_factory

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

get_transformed_output_factory

Create a function that returns the transformed output from the most recent forward pass.

initial_seed

Return the initial seed of the CPU device's random number generator.

manual_seed

Seed each of the random number generators.

reset_parameters

Reinitialize parameters and buffers.

seed

Seed each of the random number generators using a non-deterministic random number.

Attributes:

Name Type Description
patch_size tuple[int, int]

The size of the patches to segment the input into.

patch_size property

patch_size: tuple[int, int]

The size of the patches to segment the input into.
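
For instance, under the same assumed import path, an integer patch_size is exposed by this property as a square tuple:

from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

layer = sg_noise_layer.PatchCloakNoiseLayer(color_channels=3, patch_size=16)
layer.patch_size   # (16, 16): an integer is expanded to a square patch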

__call__

__call__(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is False, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

__getstate__

__getstate__() -> dict[str, Any]

Prepare a serializable copy of self.__dict__.

__init_subclass__

__init_subclass__() -> None

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

__setstate__(state: dict[str, Any]) -> None

Restore from a serialized copy of self.__dict__.

forward

forward(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is 0, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

Returns:

Type Description
torch.Tensor

The transformed input data.
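
A sketch of how the boolean noise_mask interacts with forward: elements where the mask is False pass through unchanged. An element-wise mask with the same shape as the input is assumed here, since the documentation does not spell out broadcasting rules; the import path is likewise an assumption.

import torch
from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

layer = sg_noise_layer.PatchCloakNoiseLayer(color_channels=3, patch_size=16)

x = torch.rand(1, 3, 32, 32)
# Transform only the right half of the image; leave the left half untouched.
noise_mask = torch.zeros_like(x, dtype=torch.bool)
noise_mask[..., 16:] = True

y = layer(x, noise_mask=noise_mask)

# Where the mask is False, the original input values are returned.
assert torch.equal(y[..., :16], x[..., :16])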

get_applied_transform_components_factory

get_applied_transform_components_factory() -> Callable[
    [], dict[str, torch.Tensor]
]

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.

The applied transform components are returned flattened.

This function is intended to be used to log histograms of the transform components.

Returns:

Type Description
Callable[[], dict[str, torch.Tensor]]

A function that returns the elements of the transform components applied during the most recent forward pass.

Examples:

>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
...     sg_noise_layer.CloakNoiseLayer1,
...     base_model,
...     target_parameter="input",
... )
>>> get_applied_transform_components = (
...     noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
...     component_name: component.shape
...     for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}

get_transformed_output_factory

get_transformed_output_factory() -> Callable[
    [], torch.Tensor
]

Create a function that returns the transformed output from the most recent forward pass.

If super batching is active, only the transformed half of the super batch output is returned.

Returns:

Type Description
Callable[[], torch.Tensor]

A function that returns the transformed output from the most recent forward pass.

Examples:

>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)

initial_seed

initial_seed() -> int

Return the initial seed of the CPU device's random number generator.

manual_seed

manual_seed(seed: int | None) -> None

Seed each of the random number generators.

Setting seed to None will destroy any existing generators.

Parameters:

Name Type Description Default

seed

int | None

The seed to set.

required
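
A short reproducibility sketch, assuming the layer's internal generators fully determine the sampled transform (import path assumed):

import torch
from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

layer = sg_noise_layer.PatchCloakNoiseLayer(color_channels=3, patch_size=16)
x = torch.rand(1, 3, 32, 32)

layer.manual_seed(1234)
first = layer(x)

layer.manual_seed(1234)   # re-seed to replay the same stochastic transform
second = layer(x)
assert torch.equal(first, second)

layer.manual_seed(None)   # destroys the existing generators; the global RNG state is used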

reset_parameters

reset_parameters() -> None

Reinitialize parameters and buffers.

This method is useful for initializing tensors created on the meta device.

seed

seed() -> None

Seed each of the random number generators using a non-deterministic random number.

PatchCloakNoiseLayer1

Bases: PatchCloakNoiseLayer

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators and with standard deviations parameterized by CloakStandardDeviationParameterization.

Parameters:

Name Type Description Default

color_channels

int

The number of color channels in the input.

required

patch_size

int | tuple[int, int]

The size of the patches over which the stochastic transformation is estimated.

required

scale

tuple[float, float]

The range of standard deviations of the stochastic transformation.

required

shallow

float

A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos.

1.0

padding_mode

Literal['constant', 'reflect', 'replicate', 'circular']

The padding mode to use when extracting patches.

'constant'

padding_value

float

The value to use when padding the input.

0.0

learn_locs_weights

bool

Whether to learn only the locs estimator weights (if True) or only the locs estimator bias (if False).

True

seed

int | None

Seed for the random number generator used to generate the stochastic transformation. If None, the global RNG state is used.

None
Note

For maximum privacy preservation, Stained Glass Transform Patch Step 1 should only be used to pre-train a Stained Glass Transform Patch Step 2 (see PatchCloakNoiseLayer2). First training a Step 1 transform and then fine-tuning it as a Step 2 transform is a common recipe.
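
A rough sketch of that recipe: pre-train a Step 1 transform, then carry its learned estimators into a Step 2 layer for fine-tuning. The hyperparameter values and the use of load_state_dict(strict=False) for transferring weights are assumptions for illustration, not a documented migration path.

from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

# Step 1: pre-train without masking or output clamping.
step1 = sg_noise_layer.PatchCloakNoiseLayer1(
    color_channels=3,
    patch_size=16,
    scale=(1e-4, 2.0),
)
# ... train step1 here ...

# Step 2: fine-tune with std-based masking and output clamping,
# starting from the Step 1 estimator weights (transfer method assumed).
step2 = sg_noise_layer.PatchCloakNoiseLayer2(
    color_channels=3,
    patch_size=16,
    scale=(1e-4, 2.0),
    percent_to_mask=0.5,      # illustrative value
    value_range=(-1.0, 1.0),
)
step2.load_state_dict(step1.state_dict(), strict=False)
# ... fine-tune step2 here ...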

Methods:

Name Description
__call__

Transform the input data.

__getstate__

Prepare a serializable copy of self.__dict__.

__init_subclass__

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

Restore from a serialized copy of self.__dict__.

forward

Transform the input data.

get_applied_transform_components_factory

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

get_transformed_output_factory

Create a function that returns the transformed output from the most recent forward pass.

initial_seed

Return the initial seed of the CPU device's random number generator.

manual_seed

Seed each of the random number generators.

reset_parameters

Reinitialize parameters and buffers.

seed

Seed each of the random number generators using a non-deterministic random number.

Attributes:

Name Type Description
patch_size tuple[int, int]

The size of the patches to segment the input into.

patch_size property

patch_size: tuple[int, int]

The size of the patches to segment the input into.

__call__

__call__(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is False, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

__getstate__

__getstate__() -> dict[str, Any]

Prepare a serializable copy of self.__dict__.

__init_subclass__

__init_subclass__() -> None

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

__setstate__(state: dict[str, Any]) -> None

Restore from a serialized copy of self.__dict__.

forward

forward(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is 0, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

Returns:

Type Description
torch.Tensor

The transformed input data.

get_applied_transform_components_factory

get_applied_transform_components_factory() -> Callable[
    [], dict[str, torch.Tensor]
]

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.

The applied transform components are returned flattened.

This function is intended to be used to log histograms of the transform components.

Returns:

Type Description
Callable[[], dict[str, torch.Tensor]]

A function that returns the elements of the transform components applied during the most recent forward pass.

Examples:

>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
...     sg_noise_layer.CloakNoiseLayer1,
...     base_model,
...     target_parameter="input",
... )
>>> get_applied_transform_components = (
...     noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
...     component_name: component.shape
...     for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}

get_transformed_output_factory

get_transformed_output_factory() -> Callable[
    [], torch.Tensor
]

Create a function that returns the transformed output from the most recent forward pass.

If super batching is active, only the transformed half of the super batch output is returned.

Returns:

Type Description
Callable[[], torch.Tensor]

A function that returns the transformed output from the most recent forward pass.

Examples:

>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)

initial_seed

initial_seed() -> int

Return the initial seed of the CPU device's random number generator.

manual_seed

manual_seed(seed: int | None) -> None

Seed each of the random number generators.

Setting seed to None will destroy any existing generators.

Parameters:

Name Type Description Default

seed

int | None

The seed to set.

required

reset_parameters

reset_parameters() -> None

Reinitialize parameters and buffers.

This method is useful for initializing tensors created on the meta device.

seed

seed() -> None

Seed each of the random number generators using a non-deterministic random number.

PatchCloakNoiseLayer2

Bases: PatchCloakNoiseLayer

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker, and output clamping.

Parameters:

Name Type Description Default

color_channels

int

The number of color channels in the input.

required

patch_size

int | tuple[int, int]

Size of the patches to segment the input into. If an integer is given, a square patch is used.

required

scale

tuple[float, float]

Minimum and maximum values of the range of standard deviations of the generated stochastic transformation.

required

percent_to_mask

float

The percentage of the outputs to mask per patch.

required

shallow

float

A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos.

1.0

value_range

tuple[float | None, float | None]

Minimum and maximum values of the range to clamp the output into.

(-1.0, 1.0)

padding_mode

Literal['constant', 'reflect', 'replicate', 'circular']

Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant.

'constant'

padding_value

float

Fill value for constant padding.

0.0

learn_locs_weights

bool

Whether to learn the weight parameters for the locs estimator. If True, only the weights are learned and the bias remains constant; otherwise, the weights are initialized to zero and only the bias is learned.

True

freeze_std_estimator

bool

Whether to freeze the weight parameters for the std estimator. If False, this estimator will be trained simultaneously with masking and the locs estimator parameters.

True

seed

int | None

Seed for the random number generator used to generate the stochastic transformation. If None, the global RNG state is used.

None
Note

For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.

Methods:

Name Description
__call__

Transform the input data.

__getstate__

Prepare a serializable copy of self.__dict__.

__init__

__init_subclass__

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

Restore from a serialized copy of self.__dict__.

forward

Transform the input data.

get_applied_transform_components_factory

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

get_transformed_output_factory

Create a function that returns the transformed output from the most recent forward pass.

initial_seed

Return the initial seed of the CPU device's random number generator.

manual_seed

Seed each of the random number generators.

reset_parameters

Reinitialize parameters and buffers.

seed

Seed each of the random number generators using a non-deterministic random number.

Attributes:

Name Type Description
patch_size tuple[int, int]

The size of the patches to segment the input into.

patch_size property

patch_size: tuple[int, int]

The size of the patches to segment the input into.

__call__

__call__(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is False, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

__getstate__

__getstate__() -> dict[str, Any]

Prepare a serializable copy of self.__dict__.

__init__

__init__(
    color_channels: int,
    patch_size: int | tuple[int, int],
    scale: tuple[float, float],
    percent_to_mask: float,
    shallow: float = 1.0,
    value_range: tuple[float | None, float | None] = (
        -1.0,
        1.0,
    ),
    padding_mode: Literal[
        "constant", "reflect", "replicate", "circular"
    ] = "constant",
    padding_value: float = 0.0,
    learn_locs_weights: bool = True,
    freeze_std_estimator: bool = True,
    seed: int | None = None,
) -> None

Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
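
For example, a construction sketch using the signature above (argument values chosen purely for illustration; the import path is assumed):

from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

layer = sg_noise_layer.PatchCloakNoiseLayer2(
    color_channels=3,
    patch_size=(16, 16),
    scale=(1e-4, 2.0),
    percent_to_mask=0.5,        # replaces the removed threshold/percent_threshold parameters
    value_range=(-1.0, 1.0),    # outputs are clamped into this range
    padding_mode="reflect",
    seed=0,
)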

__init_subclass__

__init_subclass__() -> None

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

__setstate__(state: dict[str, Any]) -> None

Restore from a serialized copy of self.__dict__.

forward

forward(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is 0, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

Returns:

Type Description
torch.Tensor

The transformed input data.

get_applied_transform_components_factory

get_applied_transform_components_factory() -> Callable[
    [], dict[str, torch.Tensor]
]

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.

The applied transform components are returned flattened.

This function is intended to be used to log histograms of the transform components.

Returns:

Type Description
Callable[[], dict[str, torch.Tensor]]

A function that returns the elements of the transform components applied during the most recent forward pass.

Examples:

>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
...     sg_noise_layer.CloakNoiseLayer1,
...     base_model,
...     target_parameter="input",
... )
>>> get_applied_transform_components = (
...     noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
...     component_name: component.shape
...     for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}

get_transformed_output_factory

get_transformed_output_factory() -> Callable[
    [], torch.Tensor
]

Create a function that returns the transformed output from the most recent forward pass.

If super batching is active, only the transformed half of the super batch output is returned.

Returns:

Type Description
Callable[[], torch.Tensor]

A function that returns the transformed output from the most recent forward pass.

Examples:

>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)

initial_seed

initial_seed() -> int

Return the initial seed of the CPU device's random number generator.

manual_seed

manual_seed(seed: int | None) -> None

Seed each of the random number generators.

Setting seed to None will destroy any existing generators.

Parameters:

Name Type Description Default

seed

int | None

The seed to set.

required

reset_parameters

reset_parameters() -> None

Reinitialize parameters and buffers.

This method is useful for initializing tensors created on the meta device.

seed

seed() -> None

Seed each of the random number generators using a non-deterministic random number.

PatchCloakNoiseLayer2_NoClip

Bases: PatchCloakNoiseLayer

Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, and standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker.

Parameters:

Name Type Description Default

color_channels

int

The number of color channels in the input.

required

patch_size

int | tuple[int, int]

Size of the patches to segment the input into. If an integer is given, a square patch is used.

required

scale

tuple[float, float]

Minimum and maximum values of the range of standard deviations of the generated stochastic transformation.

required

percent_to_mask

float

The percentage of the outputs to mask per patch.

required

shallow

float

A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos.

1.0

padding_mode

Literal['constant', 'reflect', 'replicate', 'circular']

Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant.

'constant'

padding_value

float

Fill value for constant padding.

0.0

learn_locs_weights

bool

Whether to learn the weight parameters for the locs estimator. If True, only the weights are learned and the bias remains constant; otherwise, the weights are initialized to zero and only the bias is learned.

True

freeze_std_estimator

bool

Whether to freeze the weight parameters for the std estimator. If False, this estimator will be trained simultaneously with masking and the locs estimator parameters.

True

seed

int | None

Seed for the random number generator used to generate the stochastic transformation. If None, the global RNG state is used.

None
Note

For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.

Methods:

Name Description
__call__

Transform the input data.

__getstate__

Prepare a serializable copy of self.__dict__.

__init__

__init_subclass__

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

Restore from a serialized copy of self.__dict__.

forward

Transform the input data.

get_applied_transform_components_factory

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

get_transformed_output_factory

Create a function that returns the transformed output from the most recent forward pass.

initial_seed

Return the initial seed of the CPU device's random number generator.

manual_seed

Seed each of the random number generators.

reset_parameters

Reinitialize parameters and buffers.

seed

Seed each of the random number generators using a non-deterministic random number.

Attributes:

Name Type Description
patch_size tuple[int, int]

The size of the patches to segment the input into.

patch_size property

patch_size: tuple[int, int]

The size of the patches to segment the input into.

__call__

__call__(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is False, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

__getstate__

__getstate__() -> dict[str, Any]

Prepare a serializable copy of self.__dict__.

__init__

__init__(
    color_channels: int,
    patch_size: int | tuple[int, int],
    scale: tuple[float, float],
    percent_to_mask: float,
    shallow: float = 1.0,
    padding_mode: Literal[
        "constant", "reflect", "replicate", "circular"
    ] = "constant",
    padding_value: float = 0.0,
    learn_locs_weights: bool = True,
    freeze_std_estimator: bool = True,
    seed: int | None = None,
) -> None

Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
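
Similarly, a construction sketch for the no-clip variant using the signature above (illustrative values, assumed import path); note that value_range is not a parameter here, so outputs are not clamped:

from stainedglass_core import noise_layer as sg_noise_layer  # assumed import path

layer = sg_noise_layer.PatchCloakNoiseLayer2_NoClip(
    color_channels=3,
    patch_size=16,
    scale=(1e-4, 2.0),
    percent_to_mask=0.5,        # replaces the removed threshold/percent_threshold parameters
    padding_mode="constant",
    padding_value=0.0,
    seed=0,
)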

__init_subclass__

__init_subclass__() -> None

Set the default dtype to torch.float32 inside all subclass __init__ methods.

__setstate__

__setstate__(state: dict[str, Any]) -> None

Restore from a serialized copy of self.__dict__.

forward

forward(
    input: Tensor,
    noise_mask: Tensor | None = None,
    **kwargs: Any,
) -> torch.Tensor

Transform the input data.

Parameters:

Name Type Description Default

input

Tensor

The input to transform.

required

noise_mask

Tensor | None

An optional mask that selects the elements of input to transform. Where the mask is 0, the original input value is returned. Also used to select the elements of the sampled standard deviations to use to mask the input. If None, the entire input is transformed.

None

**kwargs

Any

Additional keyword arguments to the estimator modules.

required

Returns:

Type Description
torch.Tensor

The transformed input data.

get_applied_transform_components_factory

get_applied_transform_components_factory() -> Callable[
    [], dict[str, torch.Tensor]
]

Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.

Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.

The applied transform components are returned flattened.

This function is intended to be used to log histograms of the transform components.

Returns:

Type Description
Callable[[], dict[str, torch.Tensor]]

A function that returns the elements of the transform components applied during the most recent forward pass.

Examples:

>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
...     sg_noise_layer.CloakNoiseLayer1,
...     base_model,
...     target_parameter="input",
... )
>>> get_applied_transform_components = (
...     noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
...     component_name: component.shape
...     for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}

get_transformed_output_factory

get_transformed_output_factory() -> Callable[
    [], torch.Tensor
]

Create a function that returns the transformed output from the most recent forward pass.

If super batching is active, only the transformed half of the super batch output is returned.

Returns:

Type Description
Callable[[], torch.Tensor]

A function that returns the transformed output from the most recent forward pass.

Examples:

>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)

initial_seed

initial_seed() -> int

Return the initial seed of the CPU device's random number generator.

manual_seed

manual_seed(seed: int | None) -> None

Seed each of the random number generators.

Setting seed to None will destroy any existing generators.

Parameters:

Name Type Description Default

seed

int | None

The seed to set.

required

reset_parameters

reset_parameters() -> None

Reinitialize parameters and buffers.

This method is useful for initializing tensors created on the meta device.

seed

seed() -> None

Seed each of the random number generators using a non-deterministic random number.