patch_cloak
Classes:
Name | Description |
---|---|
PatchCloakNoiseLayer | Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, optional standard deviation-based input masking, and optional output clamping. |
PatchCloakNoiseLayer1 | Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization. |
PatchCloakNoiseLayer2 | Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, standard deviation-based input masking, and output clamping. |
PatchCloakNoiseLayer2_NoClip | Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, and standard deviation-based input masking. |
PatchCloakNoiseLayer
¶
Bases: BaseNoiseLayer[Conv2d, CloakStandardDeviationParameterization, Optional[BatchwiseChannelwisePatchwisePercentMasker]]
Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, optional standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker, and optional output clamping.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
color_channels | int | The number of color channels in the input. | required |
patch_size | int \| tuple[int, int] | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required |
scale | tuple[float, float] | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | (0.0001, 2.0) |
shallow | float | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | 1.0 |
percent_to_mask | float \| None | The percentage of the outputs to mask per patch. | None |
value_range | tuple[float \| None, float \| None] \| None | Minimum and maximum values of the range to clamp the output into. | None |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | 'constant' |
padding_value | float | Fill value for constant padding. | 0.0 |
learn_locs_weights | bool | Whether to learn the weight parameters for the locs estimator. | True |
freeze_std_estimator | bool | Whether to freeze the weight parameters for the std estimator. | True |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. | None |
Note
For estimators where we train the weights, we freeze their biases. We suspect that if both the biases and weights are trained together, the biases will converge faster than the weights, causing the weights to become trivial. This has not been verified experimentally. Our choices for initialization values and conditions are subject to change in light of experimental evidence, and you are encouraged to challenge and improve our understanding of the effects of these choices.
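For illustration, a minimal construction and forward-pass sketch. It assumes PatchCloakNoiseLayer is exported from stainedglass_core.noise_layer like the other noise layers in the examples below; argument names follow the parameter table above and the values are illustrative only.
import torch
from stainedglass_core import noise_layer as sg_noise_layer

# Patch-wise transform over 8x8 patches of a 3-channel image, with the
# optional std-based masking and output clamping enabled.
noise_layer = sg_noise_layer.PatchCloakNoiseLayer(
    color_channels=3,
    patch_size=8,
    scale=(0.0001, 2.0),
    percent_to_mask=0.5,
    value_range=(-1.0, 1.0),
    seed=42,
)
image = torch.rand(2, 3, 32, 32)  # (batch, channels, height, width)
transformed = noise_layer(image)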
Methods:
Name | Description |
---|---|
__call__ | Transform the input data. |
__getstate__ | Prepare a serializable copy of self.__dict__. |
__init_subclass__ | Set the default dtype to torch.float32 inside all subclass __init__ methods. |
__setstate__ | Restore from a serialized copy of self.__dict__. |
forward | Transform the input data. |
get_applied_transform_components_factory | Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass. |
get_transformed_output_factory | Create a function that returns the transformed output from the most recent forward pass. |
initial_seed | Return the initial seed of the CPU device's random number generator. |
manual_seed | Seed each of the random number generators. |
reset_parameters | Reinitialize parameters and buffers. |
seed | Seed each of the random number generators using a non-deterministic random number. |
Attributes:
Name | Type | Description |
---|---|---|
patch_size | tuple[int, int] | The size of the patches to segment the input into. |
patch_size
property
¶
The size of the patches to segment the input into.
__call__
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init_subclass__
¶
Set the default dtype to torch.float32 inside all subclass __init__ methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
Returns:
Type | Description |
---|---|
torch.Tensor | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... target_parameter="input",
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
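For example, the returned components could be logged as histograms. The sketch below assumes a torch.utils.tensorboard SummaryWriter and reuses get_applied_transform_components from the example above; the tag names are illustrative, not part of this API.
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter()
components = get_applied_transform_components()  # {'mean': ..., 'std': ...}
for name, values in components.items():
    writer.add_histogram(f"noise_layer/{name}", values, global_step=0)
writer.close()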
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
Seed each of the random number generators.
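A reproducibility sketch, assuming all of the layer's randomness flows through the generators seeded by manual_seed; the constructor arguments and import path are illustrative assumptions.
import torch
from stainedglass_core import noise_layer as sg_noise_layer

layer = sg_noise_layer.PatchCloakNoiseLayer(color_channels=3, patch_size=8)
image = torch.rand(1, 3, 32, 32)

layer.manual_seed(42)
first = layer(image)
layer.manual_seed(42)
second = layer(image)
assert first.equal(second)  # identical seeds should yield identical transforms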
reset_parameters
¶
Reinitialize parameters and buffers.
This method is useful for initializing tensors created on the meta device.
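A sketch of that meta-device workflow, assuming the layer can be constructed under torch's meta device and materialized with nn.Module.to_empty; constructor arguments and import path are illustrative.
import torch
from stainedglass_core import noise_layer as sg_noise_layer

# Build the layer structure without allocating parameter storage.
with torch.device("meta"):
    layer = sg_noise_layer.PatchCloakNoiseLayer(color_channels=3, patch_size=8)

# Allocate real (uninitialized) storage, then reinitialize parameters and buffers.
layer = layer.to_empty(device="cpu")
layer.reset_parameters()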
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayer1
¶
Bases: PatchCloakNoiseLayer
Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
color_channels | int | The number of color channels in the input. | required |
patch_size | int \| tuple[int, int] | The size of the patches over which the stochastic transformation is estimated. | required |
scale | tuple[float, float] | The range of standard deviations of the stochastic transformation. | required |
shallow | float | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | 1.0 |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | The padding mode to use when extracting patches. | 'constant' |
padding_value | float | The value to use when padding the input. | 0.0 |
learn_locs_weights | bool | Whether to only learn the locs estimator weights or else to only learn the locs estimator bias. | True |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. | None |
Note
For maximum privacy preservation, Stained Glass Transform Patch Step 1 should only be used to pre-train a Stained Glass Transform Patch Step 2 (see PatchCloakNoiseLayer2). First training a Step 1 transform and then fine-tuning it as a Step 2 is a common recipe.
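A hedged sketch of that recipe: it assumes the Step 1 and Step 2 layers share estimator parameter names, so a non-strict state_dict load transfers the pre-trained transform. This transfer mechanism is an assumption, not a documented API, and the constructor arguments are illustrative.
from stainedglass_core import noise_layer as sg_noise_layer

step1 = sg_noise_layer.PatchCloakNoiseLayer1(
    color_channels=3, patch_size=8, scale=(0.0001, 2.0)
)
# ... train step1 against the base model ...

step2 = sg_noise_layer.PatchCloakNoiseLayer2(
    color_channels=3, patch_size=8, scale=(0.0001, 2.0), percent_to_mask=0.7
)
# strict=False tolerates keys that exist in only one of the two layers.
step2.load_state_dict(step1.state_dict(), strict=False)
# ... fine-tune step2 ...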
Methods:
Name | Description |
---|---|
__call__ | Transform the input data. |
__getstate__ | Prepare a serializable copy of self.__dict__. |
__init_subclass__ | Set the default dtype to torch.float32 inside all subclass __init__ methods. |
__setstate__ | Restore from a serialized copy of self.__dict__. |
forward | Transform the input data. |
get_applied_transform_components_factory | Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass. |
get_transformed_output_factory | Create a function that returns the transformed output from the most recent forward pass. |
initial_seed | Return the initial seed of the CPU device's random number generator. |
manual_seed | Seed each of the random number generators. |
reset_parameters | Reinitialize parameters and buffers. |
seed | Seed each of the random number generators using a non-deterministic random number. |
Attributes:
Name | Type | Description |
---|---|---|
patch_size | tuple[int, int] | The size of the patches to segment the input into. |
patch_size
property
¶
The size of the patches to segment the input into.
__call__
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init_subclass__
¶
Set the default dtype to torch.float32 inside all subclass __init__ methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
Returns:
Type | Description |
---|---|
torch.Tensor | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... target_parameter="input",
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
Seed each of the random number generators.
reset_parameters
¶
Reinitialize parameters and buffers.
This method is useful for initializing tensors created on the meta device.
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayer2
¶
Bases: PatchCloakNoiseLayer
Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker, and output clamping.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
color_channels | int | The number of color channels in the input. | required |
patch_size | int \| tuple[int, int] | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required |
scale | tuple[float, float] | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | required |
percent_to_mask | float | The percentage of the outputs to mask per patch. | required |
shallow | float | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | 1.0 |
value_range | tuple[float \| None, float \| None] | Minimum and maximum values of the range to clamp the output into. | (-1.0, 1.0) |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | 'constant' |
padding_value | float | Fill value for constant padding. | 0.0 |
learn_locs_weights | bool | Whether to learn the weight parameters for the locs estimator. | True |
freeze_std_estimator | bool | Whether to freeze the weight parameters for the std estimator. | True |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. | None |
Note
For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.
Methods:
Name | Description |
---|---|
__call__ | Transform the input data. |
__getstate__ | Prepare a serializable copy of self.__dict__. |
__init__ | |
__init_subclass__ | Set the default dtype to torch.float32 inside all subclass __init__ methods. |
__setstate__ | Restore from a serialized copy of self.__dict__. |
forward | Transform the input data. |
get_applied_transform_components_factory | Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass. |
get_transformed_output_factory | Create a function that returns the transformed output from the most recent forward pass. |
initial_seed | Return the initial seed of the CPU device's random number generator. |
manual_seed | Seed each of the random number generators. |
reset_parameters | Reinitialize parameters and buffers. |
seed | Seed each of the random number generators using a non-deterministic random number. |
Attributes:
Name | Type | Description |
---|---|---|
patch_size | tuple[int, int] | The size of the patches to segment the input into. |
patch_size
property
¶
The size of the patches to segment the input into.
__call__
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init__
¶
__init__(
color_channels: int,
patch_size: int | tuple[int, int],
scale: tuple[float, float],
percent_to_mask: float,
shallow: float = 1.0,
value_range: tuple[float | None, float | None] = (
-1.0,
1.0,
),
padding_mode: Literal[
"constant", "reflect", "replicate", "circular"
] = "constant",
padding_value: float = 0.0,
learn_locs_weights: bool = True,
freeze_std_estimator: bool = True,
seed: int | None = None,
) -> None
Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
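For instance, a construction call matching this signature; the argument values are illustrative and the import path follows the examples below (an assumption, since only other noise layers are shown being imported this way).
from stainedglass_core import noise_layer as sg_noise_layer

noise_layer = sg_noise_layer.PatchCloakNoiseLayer2(
    color_channels=3,
    patch_size=(16, 16),
    scale=(0.0001, 2.0),
    percent_to_mask=0.7,
    value_range=(-1.0, 1.0),
    padding_mode="reflect",
    seed=0,
)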
__init_subclass__
¶
Set the default dtype to torch.float32 inside all subclass __init__ methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
Returns:
Type | Description |
---|---|
torch.Tensor | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... target_parameter="input",
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
Seed each of the random number generators.
reset_parameters
¶
Reinitialize parameters and buffers.
This method is useful for initializing tensors created on the meta device.
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayer2_NoClip
¶
Bases: PatchCloakNoiseLayer
Applies an input-dependent stochastic transformation to an image in non-overlapping patches using Conv2d estimators, with standard deviations parameterized by CloakStandardDeviationParameterization, and standard deviation-based input masking using BatchwiseChannelwisePatchwisePercentMasker.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
color_channels | int | The number of color channels in the input. | required |
patch_size | int \| tuple[int, int] | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required |
scale | tuple[float, float] | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | required |
percent_to_mask | float | The percentage of the outputs to mask per patch. | required |
shallow | float | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | 1.0 |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | 'constant' |
padding_value | float | Fill value for constant padding. | 0.0 |
learn_locs_weights | bool | Whether to learn the weight parameters for the locs estimator. | True |
freeze_std_estimator | bool | Whether to freeze the weight parameters for the std estimator. | True |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. | None |
Note
For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.
Methods:
Name | Description |
---|---|
__call__ | Transform the input data. |
__getstate__ | Prepare a serializable copy of self.__dict__. |
__init__ | |
__init_subclass__ | Set the default dtype to torch.float32 inside all subclass __init__ methods. |
__setstate__ | Restore from a serialized copy of self.__dict__. |
forward | Transform the input data. |
get_applied_transform_components_factory | Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass. |
get_transformed_output_factory | Create a function that returns the transformed output from the most recent forward pass. |
initial_seed | Return the initial seed of the CPU device's random number generator. |
manual_seed | Seed each of the random number generators. |
reset_parameters | Reinitialize parameters and buffers. |
seed | Seed each of the random number generators using a non-deterministic random number. |
Attributes:
Name | Type | Description |
---|---|---|
patch_size | tuple[int, int] | The size of the patches to segment the input into. |
patch_size
property
¶
The size of the patches to segment the input into.
__call__
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init__
¶
__init__(
color_channels: int,
patch_size: int | tuple[int, int],
scale: tuple[float, float],
percent_to_mask: float,
shallow: float = 1.0,
padding_mode: Literal[
"constant", "reflect", "replicate", "circular"
] = "constant",
padding_value: float = 0.0,
learn_locs_weights: bool = True,
freeze_std_estimator: bool = True,
seed: int | None = None,
) -> None
Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
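For instance, a construction call matching this signature, which omits value_range because this variant does not clamp its output; the argument values are illustrative and the import path follows the examples below (an assumption).
from stainedglass_core import noise_layer as sg_noise_layer

noise_layer = sg_noise_layer.PatchCloakNoiseLayer2_NoClip(
    color_channels=3,
    patch_size=(16, 16),
    scale=(0.0001, 2.0),
    percent_to_mask=0.7,
    padding_mode="constant",
    padding_value=0.0,
    seed=0,
)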
__init_subclass__
¶
Set the default dtype to torch.float32 inside all subclass __init__ methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of input to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
Returns:
Type | Description |
---|---|
torch.Tensor | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... target_parameter="input",
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = noisy_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1()
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
Seed each of the random number generators.
reset_parameters
¶
Reinitialize parameters and buffers.
This method is useful for initializing tensors created on the meta device.
seed
¶
Seed each of the random number generators using a non-deterministic random number.