noise_layer
BaseNoiseLayer
¶
Bases: `Module`, `Generic[EstimatorModuleT, ParameterizationT, OptionalMaskerT]`
Base Class for Stained Glass Transform Layers.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], seed: int | None, mean_estimator: Estimator[EstimatorModuleT, None, None], std_estimator: Estimator[EstimatorModuleT, ParameterizationT, OptionalMaskerT]) -> None
Initialize the `input_shape`, seed, and estimators required to use Stained Glass Transform layers.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | Shape of given inputs. The first dimension may be -1, meaning variable batch size. | required
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | required
`mean_estimator` | `Estimator[EstimatorModuleT, None, None]` | The estimator to use to estimate the mean of the stochastic transformation. | required
`std_estimator` | `Estimator[EstimatorModuleT, ParameterizationT, OptionalMaskerT]` | The estimator to use to estimate the standard deviation and optional input mask of the stochastic transformation. | required
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
abstractmethod
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
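Because the components are returned flattened, they can be passed directly to a histogram logger. A minimal sketch continuing the example above, assuming the optional TensorBoard dependency is installed; the writer and tag names are illustrative:
>>> from torch.utils.tensorboard import SummaryWriter
>>> writer = SummaryWriter()
>>> for name, values in get_applied_transform_components().items():
...     writer.add_histogram(f"noise_layer/{name}", values, global_step=0)
>>> writer.close()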
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
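Re-seeding the layer before two identical forward passes should reproduce the same stochastic transformation. A minimal sketch, assuming `manual_seed` fully restores the generator state; the layer, shapes, and seed below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 20))
>>> input = torch.ones(2, 20)
>>> noise_layer.manual_seed(42)
>>> first = noise_layer(input).output
>>> noise_layer.manual_seed(42)
>>> second = noise_layer(input).output
>>> assert torch.equal(first, second)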
seed
¶
Seed each of the random number generators using a non-deterministic random number.
CloakNoiseLayer
¶
Bases: BaseNoiseLayer[ParameterWrapper, CloakStandardDeviationParameterization, Optional[PercentMasker]]
Stained Glass Transform that creates a stochastic re-representation of the input data.
Inspired by the Cloak algorithm defined in the paper: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy.
Warning
The directly learned `locs` and `rhos` used by this class are prone to vanishing when trained with weight decay regularization. To avoid this, ensure that optimizer parameter group(s) containing `locs` or `rhos` are configured with `weight_decay=0.0`.
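For example, the transform parameters can be placed in their own optimizer parameter group with weight decay disabled. A minimal sketch, assuming the learned parameters are registered under names containing `locs` and `rhos`; the model and hyperparameters are illustrative:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> noisy_model = sg_model.NoisyModel(
...     sg_noise_layer.CloakNoiseLayer1, nn.Linear(20, 2), input_shape=(-1, 20)
... )
>>> transform_params = [
...     param
...     for name, param in noisy_model.named_parameters()
...     if "locs" in name or "rhos" in name
... ]
>>> other_params = [
...     param
...     for name, param in noisy_model.named_parameters()
...     if "locs" not in name and "rhos" not in name
... ]
>>> optimizer = torch.optim.AdamW(
...     [
...         {"params": transform_params, "weight_decay": 0.0},
...         {"params": other_params, "weight_decay": 0.01},
...     ],
...     lr=1e-3,
... )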
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], scale: tuple[float, float] | Tensor = (0.0001, 2.0), shallow: float | Tensor = 1.0, percent_to_mask: float | Tensor | None = None, value_range: tuple[float | None, float | None] | None = None, locs_requires_grad: bool = True, rhos_requires_grad: bool = True, rhos_init: float = -4.0, seed: int | None = None) -> None
Construct a CloakNoiseLayer with the given parameters.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The input shape of the layer. | required
`scale` | `tuple[float, float] \| Tensor` | Bounds on the minimum and maximum standard deviation of the stochastic transformation. | `(0.0001, 2.0)`
`shallow` | `float \| Tensor` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`percent_to_mask` | `float \| Tensor \| None` | The percentage of the input to mask. | `None`
`value_range` | `tuple[float \| None, float \| None] \| None` | Minimum and maximum values of the range to clamp the output into. | `None`
`locs_requires_grad` | `bool` | Whether the locs parameters (related to the means of the transform) have their gradients tracked. | `True`
`rhos_requires_grad` | `bool` | Whether the rhos parameters (related to the standard deviations of the transform) have their gradients tracked. | `True`
`rhos_init` | `float` | The initial values for the rhos. | `-4.0`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
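A minimal construction sketch, assuming `CloakNoiseLayer` can be instantiated directly; the shapes and arguments below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer(
...     input_shape=(-1, 3, 32, 32),
...     percent_to_mask=0.25,
...     value_range=(-1.0, 1.0),
... )
>>> input = torch.rand(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> assert output.output.shape == input.shape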
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
CloakNoiseLayer1
¶
Bases: CloakNoiseLayer
Stained Glass Transform that applies a stochastic re-representation of the input data without masking.
Inspired by the Cloak Step 1 algorithm defined in the paper: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy
Warning
The directly learned `locs` and `rhos` used by this class are prone to vanishing when trained with weight decay regularization. To avoid this, ensure that optimizer parameter group(s) containing `locs` or `rhos` are configured with `weight_decay=0.0`.
Note
For maximum privacy preservation, Stained Glass Transform Step 1 should only be used to pre-train a Stained Glass Transform Step 2 (see `CloakNoiseLayer2` or `CloakNoiseLayer2_NoClip`). First training a Step 1, then fine-tuning that transform with Step 2, is a common recipe.
For a more convenient type of Stained Glass Transform that requires only one step of training, see `CloakNoiseLayerOneShot`.
Note
In almost all cases, `rhos_requires_grad` should be `True` while training `CloakNoiseLayer1`.
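A sketch of the two-step recipe described above, assuming the Step 2 layer can load the trained Step 1 parameters via `load_state_dict(..., strict=False)`; the shapes and `percent_to_mask` are illustrative:
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> step1 = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> # ... train step1 here ...
>>> step2 = sg_noise_layer.CloakNoiseLayer2(
...     input_shape=(-1, 3, 32, 32), percent_to_mask=0.5
... )
>>> incompatible_keys = step2.load_state_dict(step1.state_dict(), strict=False)
>>> # ... fine-tune step2 here ...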
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], scale: tuple[float, float] | Tensor = (0.0001, 2.0), shallow: float | Tensor = 1.0, locs: bool = True, rhos: bool = True, rhos_init: float = -4.0, seed: int | None = None) -> None
Construct a CloakNoiseLayer1 with the given parameters.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The input shape of the layer. | required
`scale` | `tuple[float, float] \| Tensor` | Bounds on the minimum and maximum standard deviation of the stochastic transformation. | `(0.0001, 2.0)`
`shallow` | `float \| Tensor` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`locs` | `bool` | Whether the locs parameters (related to the means of the transform) have their gradients tracked. | `True`
`rhos` | `bool` | Whether the rhos parameters (related to the standard deviations of the transform) have their gradients tracked. Usually `True`. | `True`
`rhos_init` | `float` | The initial values for the rhos. | `-4.0`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
CloakNoiseLayer2
¶
Bases: CloakNoiseLayer
Stained Glass Transform that stochastically re-represents the input data with masking.
Inspired by the Cloak Step 2 algorithm defined in the paper: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy
Warning
The directly learned `locs` and `rhos` used by this class are prone to vanishing when trained with weight decay regularization. To avoid this, ensure that optimizer parameter group(s) containing `locs` or `rhos` are configured with `weight_decay=0.0`.
Note
For maximum privacy preservation, Stained Glass Transform Step 2 should be pre-trained from a Stained Glass Transform Step 1. For a more convenient type of Stained Glass Transform that requires only one step of training, see `CloakNoiseLayerOneShot`.
Note
Masking is done using a static threshold on the learned standard deviation of the stochastic transformation. This threshold is calculated from `percent_to_mask` when loading a pre-trained transform layer. Consequently, continuing to train the stochastic transformation will not affect masking. For a variant of Cloak Step 2 that recalculates its masking using `percent_to_mask` on each call (i.e. the masking can change during training), see `CloakNoiseLayerOneShot`.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], percent_to_mask: float | Tensor, scale: tuple[float, float] | Tensor = (0.0001, 2.0), shallow: float | Tensor = 1.0, value_range: tuple[float | None, float | None] | None = (-1.0, 1.0), seed: int | None = None) -> None
Construct a CloakNoiseLayer2 with the given parameters.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The input shape of the layer. | required
`scale` | `tuple[float, float] \| Tensor` | Bounds on the minimum and maximum standard deviation of the stochastic transformation. | `(0.0001, 2.0)`
`percent_to_mask` | `float \| Tensor` | The percentage of the outputs to mask. | required
`shallow` | `float \| Tensor` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`value_range` | `tuple[float \| None, float \| None] \| None` | Minimum and maximum values of the range to clamp the output into. | `(-1.0, 1.0)`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
Note
Specifying `percent_to_mask==0.0` with a NoClip variant of Cloak Step 2 is functionally equivalent to Cloak Step 1, as no masking occurs.
Note
In training mode, the `threshold` buffer is recalculated over the learned standard deviations of the stochastic transformation, and a new input mask is generated with each forward call. In eval mode, `threshold` is static, and the cached mask from the most recent mask calculation is used.
Raises:
Type | Description
---|---
`ValueError` | If …
Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
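A sketch of the train/eval behavior described in the note above, assuming standard `torch.nn.Module` mode semantics; the shapes and `percent_to_mask` are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer2(
...     input_shape=(-1, 3, 32, 32), percent_to_mask=0.5
... )
>>> input = torch.rand(1, 3, 32, 32)
>>> _ = noise_layer.train()(input)  # recalculates the threshold and input mask
>>> _ = noise_layer.eval()(input)  # reuses the cached threshold and mask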
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
CloakNoiseLayer2_NoClip
¶
Bases: CloakNoiseLayer2
Stained Glass Transform that stochastically re-represents the input data with masking but without clipping.
Inspired by the Cloak Step 2 algorithm defined in the paper: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy
Warning
The directly learned `locs` and `rhos` used by this class are prone to vanishing when trained with weight decay regularization. To avoid this, ensure that optimizer parameter group(s) containing `locs` or `rhos` are configured with `weight_decay=0.0`.
Note
For maximum privacy preservation, Stained Glass Transform Step 2 should be pre-trained from a Stained Glass Transform Step 1. For a more convenient type of Stained Glass Transform that requires only one step of training, see `CloakNoiseLayerOneShot`.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], percent_to_mask: float | Tensor, scale: tuple[float, float] | Tensor = (0.0001, 2.0), shallow: float | Tensor = 1.0, seed: int | None = None) -> None
Construct a CloakNoiseLayer2_NoClip with the given parameters.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The input shape of the layer. | required
`scale` | `tuple[float, float] \| Tensor` | Bounds on the minimum and maximum standard deviation of the stochastic transformation. | `(0.0001, 2.0)`
`percent_to_mask` | `float \| Tensor` | The percentage of the outputs to mask. | required
`shallow` | `float \| Tensor` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
Note
Specifying `percent_to_mask==0.0` with a NoClip variant of Cloak Step 2 is functionally equivalent to Cloak Step 1, as no masking occurs.
Note
In training mode, the `threshold` buffer is recalculated over the learned standard deviations of the stochastic transformation, and a new input mask is generated with each forward call. In eval mode, `threshold` is static, and the cached mask from the most recent mask calculation is used.
Raises:
Type | Description
---|---
`ValueError` | If …
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
CloakNoiseLayerOneShot
¶
Bases: CloakNoiseLayer
Stained Glass Transform inspired by the Cloak Algorithm (see paper), with a threshold that is recalculated every forward pass, to allow for one-step training.
The original Cloak Algorithm requires two steps to train. In the first step, the stochastic mapping is trained, and in the second step, masking is trained. This class allows for training in one step, by recalculating the threshold every forward pass.
For most use cases, this class is preferred over a multi-step Stained Glass Transform version, since it is simpler to use. For more advanced use cases, however, the multi-step version may be required, since it allows for more control over the masking process during training.
Warning
The directly learned `locs` and `rhos` used by this class are prone to vanishing when trained with weight decay regularization. To avoid this, ensure that optimizer parameter group(s) containing `locs` or `rhos` are configured with `weight_decay=0.0`.
Examples:
>>> import torch
>>> img_shape = (1, 3, 8, 8)
>>> noise_layer = CloakNoiseLayerOneShot(
... input_shape=img_shape, percent_to_mask=0.5, scale=(1e-4, 2.0)
... )
>>> img = torch.rand(img_shape)
>>> transformed_img = noise_layer(img)
>>> transformed_img.output.shape == img.shape
True
>>> torch.allclose(transformed_img.output, img)
False
See paper: Not All Features Are Equal: Discovering Essential Features for Preserving Prediction Privacy.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], percent_to_mask: float | Tensor, scale: tuple[float, float], shallow: float = 1.0, rhos_init: float = -4.0, seed: int | None = None) -> None
Construct a CloakNoiseLayerOneShot with the given parameters.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The input shape of the layer. | required
`scale` | `tuple[float, float]` | Bounds on the minimum and maximum standard deviation of the generated stochastic transformation. | required
`percent_to_mask` | `float \| Tensor` | The percentage of the outputs to mask. | required
`shallow` | `float` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`rhos_init` | `float` | The initial values for the rhos. | `-4.0`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
Notes
Specifying `percent_to_mask==0.0` is functionally equivalent to Cloak Step 1, as no masking occurs.
Raises:
Type | Description
---|---
`ValueError` | If …
`ValueError` | If …
Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
NoiseLayerOutput
dataclass
¶
Bases: ModelOutput
The output of `BaseNoiseLayer.forward()`.
__init_subclass__
¶
Register subclasses as pytree nodes.
This is necessary to synchronize gradients when using `torch.nn.parallel.DistributedDataParallel(static_graph=True)` with modules that output `ModelOutput` subclasses.
See: https://github.com/pytorch/pytorch/issues/106690.
to_tuple
¶
Convert self to a tuple containing all the attributes/keys that are not `None`.
Returns:
Type | Description
---|---
`tuple[Any, ...]` | A tuple of all attributes/keys that are not `None`.
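A small usage sketch, assuming `output` is the first non-None field of `NoiseLayerOutput`; the layer and shapes below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 20))
>>> output = noise_layer(torch.ones(1, 20))
>>> assert output.to_tuple()[0].equal(output.output)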
PatchCloakNoiseLayer
¶
Bases: BaseNoiseLayer[Conv2d, CloakStandardDeviationParameterization, Optional[BatchwiseChannelwisePatchwisePercentMasker]]
Applies an input-dependent, additive, non-overlapping, convolutional stochastic transformation.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
patch_size
property
¶
The size of the patches to segment the input into.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], patch_size: int | tuple[int, int], scale: tuple[float, float] = (0.0001, 2.0), shallow: float = 1.0, percent_to_mask: float | Tensor | None = None, value_range: tuple[float | None, float | None] | None = None, padding_mode: Literal['constant', 'reflect', 'replicate', 'circular'] = 'constant', padding_value: float = 0.0, learn_locs_weights: bool = True, freeze_std_estimator: bool = True, seed: int | None = None) -> None
Construct a Stained Glass Transform layer that generates stochastic transformations over patches of the input images.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | Shape of given inputs. The first dimension may be -1, meaning variable batch size. | required
`patch_size` | `int \| tuple[int, int]` | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required
`scale` | `tuple[float, float]` | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | `(0.0001, 2.0)`
`shallow` | `float` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`percent_to_mask` | `float \| Tensor \| None` | The percentage of the outputs to mask per patch. | `None`
`value_range` | `tuple[float \| None, float \| None] \| None` | Minimum and maximum values of the range to clamp the output into. | `None`
`padding_mode` | `Literal['constant', 'reflect', 'replicate', 'circular']` | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | `'constant'`
`padding_value` | `float` | Fill value for constant padding. | `0.0`
`learn_locs_weights` | `bool` | Whether to learn the weight parameters for the locs estimator. If `False`, the locs estimator bias is learned instead. | `True`
`freeze_std_estimator` | `bool` | Whether to freeze the weight parameters for the std estimator. If … | `True`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
Note
For estimators where we train the weights, we freeze their biases. We suspect that if both the biases and weights are trained together, the biases will converge faster than the weights, causing the weights to be trivial. This has not been verified experimentally. Our choices of initialization values and conditions are subject to change in light of experimental evidence, and you are encouraged to challenge and improve our understanding of the effects of these choices.
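A minimal construction sketch, assuming `PatchCloakNoiseLayer` can be instantiated directly; the shapes, patch size, and `percent_to_mask` below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.PatchCloakNoiseLayer(
...     input_shape=(-1, 3, 32, 32), patch_size=8, percent_to_mask=0.5
... )
>>> input = torch.rand(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> assert output.output.shape == input.shape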
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayer1
¶
Bases: PatchCloakNoiseLayer
Input-dependent, additive, vision Stained Glass Transform layer that segments input images into patches and generates a patch-wise stochastic transformation.
Note
For maximum privacy preservation, Stained Glass Patch Transform Step 1 should only be used to pre-train a Stained Glass Patch Transform Step 2 (see `PatchCloakNoiseLayer2` or `PatchCloakNoiseLayer2_NoClip`). First training a Step 1, then fine-tuning that transform with Step 2, is a common recipe.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
patch_size
property
¶
The size of the patches to segment the input into.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], patch_size: int | tuple[int, int], scale: tuple[float, float], shallow: float = 1.0, padding_mode: Literal['constant', 'reflect', 'replicate', 'circular'] = 'constant', padding_value: float = 0.0, learn_locs_weights: bool = True, seed: int | None = None) -> None
Construct a Stained Glass Transform layer that generates stochastic transformations over patches of the input images.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | The shape of the input tensor. | required
`patch_size` | `int \| tuple[int, int]` | The size of the patches over which the stochastic transformation is estimated. | required
`scale` | `tuple[float, float]` | The range of standard deviations of the stochastic transformation. | required
`shallow` | `float` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`padding_mode` | `Literal['constant', 'reflect', 'replicate', 'circular']` | The padding mode to use when extracting patches. | `'constant'`
`padding_value` | `float` | The value to use when padding the input. | `0.0`
`learn_locs_weights` | `bool` | Whether to only learn the locs estimator weights or else to only learn the locs estimator bias. | `True`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
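A minimal construction sketch; the shapes, patch size, and scale below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.PatchCloakNoiseLayer1(
...     input_shape=(-1, 3, 32, 32), patch_size=8, scale=(1e-4, 2.0)
... )
>>> output = noise_layer(torch.rand(2, 3, 32, 32))
>>> assert output.output.shape == (2, 3, 32, 32)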
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components (`'mean'` and `'std'`) applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if `std_estimator.masker is not None`). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.
Returns:
Type | Description
---|---
`Callable[[], dict[str, torch.Tensor]]` | A function that returns the elements of the transform components applied during the most recent forward pass.
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:
Type | Description
---|---
`Callable[[], torch.Tensor]` | A function that returns the transformed output from the most recent forward pass.
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:
Name | Type | Description | Default
---|---|---|---
`seed` | `int` | The seed to set. | required
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayer2
¶
Bases: PatchCloakNoiseLayer
Input-dependent, additive, vision Stained Glass Transform layer that segments input images into patches and generates a patch-wise stochastic transformation with masking and clipping. Use after training a `PatchCloakNoiseLayer1`.
Note
For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply calculated from parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
patch_size
property
¶
The size of the patches to segment the input into.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
__init__
¶
__init__(input_shape: tuple[int, ...], patch_size: int | tuple[int, int], scale: tuple[float, float], percent_to_mask: float, shallow: float = 1.0, value_range: tuple[float | None, float | None] = (-1.0, 1.0), padding_mode: Literal['constant', 'reflect', 'replicate', 'circular'] = 'constant', padding_value: float = 0.0, learn_locs_weights: bool = True, freeze_std_estimator: bool = True, seed: int | None = None) -> None
Construct a Stained Glass Transform layer that generates a stochastic transformation over patches of the input images.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input_shape` | `tuple[int, ...]` | Shape of given inputs. The first dimension may be -1, meaning variable batch size. | required
`patch_size` | `int \| tuple[int, int]` | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required
`scale` | `tuple[float, float]` | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | required
`percent_to_mask` | `float` | The percentage of the outputs to mask per patch. | required
`shallow` | `float` | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | `1.0`
`value_range` | `tuple[float \| None, float \| None]` | Minimum and maximum values of the range to clamp the output into. | `(-1.0, 1.0)`
`padding_mode` | `Literal['constant', 'reflect', 'replicate', 'circular']` | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | `'constant'`
`padding_value` | `float` | Fill value for constant padding. | `0.0`
`learn_locs_weights` | `bool` | Whether to learn the weight parameters for the locs estimator. If `False`, the locs estimator bias is learned instead. | `True`
`freeze_std_estimator` | `bool` | Whether to freeze the weight parameters for the std estimator. If … | `True`
`seed` | `int \| None` | Seed for the random number generator used to generate the stochastic transformation. If `None`, the random number generators are seeded non-deterministically. | `None`
Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`
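A minimal construction sketch; the shapes, patch size, scale, and `percent_to_mask` below are illustrative:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.PatchCloakNoiseLayer2(
...     input_shape=(-1, 3, 32, 32),
...     patch_size=8,
...     scale=(1e-4, 2.0),
...     percent_to_mask=0.5,
... )
>>> output = noise_layer(torch.rand(2, 3, 32, 32) * 2 - 1)
>>> assert output.output.shape == (2, 3, 32, 32)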
__init_subclass__
¶
Set the default dtype to torch.float32
inside all subclass __init__
methods.
__setstate__
¶
Restore from a serialized copy of self.__dict__
.
forward
¶
Transform the input data.
Parameters:
Name | Type | Description | Default
---|---|---|---
`input` | `Tensor` | The input to transform. | required
`noise_mask` | `Tensor \| None` | An optional mask that selects the elements of `input` to transform. | `None`
`**kwargs` | `Any` | Additional keyword arguments to the estimator modules. | required
Returns:
Type | Description
---|---
`base.NoiseLayerOutput` | The transformed input data.
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.

Returns:

Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
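Because the factory is intended for histogram logging, one natural pattern is to call the returned function once per training step and hand its output to a logger. The snippet below is a minimal, hedged sketch using PyTorch's standard SummaryWriter (it requires the tensorboard package; the log directory, tag names, and step counter are illustrative and not part of stainedglass_core):
>>> from torch.utils.tensorboard import SummaryWriter
>>> writer = SummaryWriter(log_dir="runs/sgt_components")  # hypothetical log directory
>>> for name, values in get_applied_transform_components().items():
...     writer.add_histogram(f"noise_layer/{name}", values, global_step=0)
>>> writer.close()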
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:

Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
seed | int | The seed to set. | required |
seed
¶
Seed each of the random number generators using a non-deterministic random number.
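As a quick, hedged illustration of the seeding API above (reusing the CloakNoiseLayer1 example from this page; the equality of the two outputs assumes all stochasticity in the layer flows through the seeded generators):
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> noise_layer.manual_seed(1234)
>>> first = noise_layer(torch.ones(2, 3, 32, 32)).output
>>> noise_layer.manual_seed(1234)  # reseed before repeating the same forward pass
>>> second = noise_layer(torch.ones(2, 3, 32, 32)).output
>>> assert first.equal(second)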
PatchCloakNoiseLayer2_NoClip
¶
Bases: PatchCloakNoiseLayer
Input-dependent, additive, vision Stained Glass Transform layer that segments input images into patches and generates a patch-wise stochastic transformation with masking. Use after training a PatchCloakNoiseLayer1.
Note
For maximum privacy preservation, Stained Glass Transform Patch Step 2 should be pre-trained from a Stained Glass Transform Patch Step 1.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply, calculated from the parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
patch_size
property
¶
The size of the patches to segment the input into.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of `input` to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init__
¶
__init__(input_shape: tuple[int, ...], patch_size: int | tuple[int, int], scale: tuple[float, float], percent_to_mask: float, shallow: float = 1.0, padding_mode: Literal['constant', 'reflect', 'replicate', 'circular'] = 'constant', padding_value: float = 0.0, learn_locs_weights: bool = True, freeze_std_estimator: bool = True, seed: int | None = None) -> None
Create a Stained Glass Transform layer that generates a stochastic transformation over patches of the input images.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_shape | tuple[int, ...] | Shape of given inputs. The first dimension may be -1, meaning variable batch size. | required |
patch_size | int \| tuple[int, int] | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required |
scale | tuple[float, float] | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | required |
percent_to_mask | float | The percentage of the outputs to mask per patch. | required |
shallow | float | A temperature-like parameter which controls the spread of the parameterization function. Controls both the magnitude of parameterized standard deviations and their rate of change with respect to rhos. | 1.0 |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to constant. | 'constant' |
padding_value | float | Fill value for constant padding. | 0.0 |
learn_locs_weights | bool | Whether to learn the weight parameters for the locs estimator. If `False`, they are kept frozen. | True |
freeze_std_estimator | bool | Whether to freeze the weight parameters for the std estimator. If `True`, they are not updated during training. | True |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. If `None`, a non-deterministic seed is used. | None |

Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`.
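To make the signature above concrete, here is a minimal, hedged construction sketch (the hyperparameter values are illustrative only; the class is assumed to be importable from the same noise_layer namespace used in the other examples on this page, and in practice the layer would be initialized from a trained Step 1 layer as the note above recommends):
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.PatchCloakNoiseLayer2_NoClip(
...     input_shape=(-1, 3, 32, 32),  # -1 allows a variable batch size
...     patch_size=8,  # 8x8 square patches
...     scale=(1e-4, 2.0),  # illustrative min/max standard deviations
...     percent_to_mask=0.5,  # illustrative masking percentage per patch
...     seed=42,
... )
>>> output = noise_layer(torch.ones(2, 3, 32, 32))
>>> transformed = output.output  # the NoiseLayerOutput exposes the transformed tensor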
__init_subclass__
¶
Set the default dtype to `torch.float32` inside all subclass `__init__` methods.
__setstate__
¶
Restore from a serialized copy of `self.__dict__`.
forward
¶
Transform the input data.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of `input` to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |

Returns:

Type | Description |
---|---|
base.NoiseLayerOutput | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.

Returns:

Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:

Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
seed | int | The seed to set. | required |
seed
¶
Seed each of the random number generators using a non-deterministic random number.
PatchCloakNoiseLayerFrequencySpace
¶
Bases: PatchCloakNoiseLayer
Input-dependent, additive, vision Stained Glass Transform layer that segments input images into patches and generates a patch-wise stochastic transformation which is applied in frequency space. This variant uses the consolidated training approach so that all parameters in the layer are trained simultaneously.
input_shape
property
¶
The shape of the expected input including its batch dimension.
mask
property
writable
¶
mask: Tensor | None
The mask to apply, calculated from the parameters of the stochastic transformation computed during the most recent call to forward.
mean
property
writable
¶
mean: Tensor
The means of the stochastic transformation computed during the most recent call to forward.
patch_size
property
¶
The size of the patches to segment the input into.
std
property
writable
¶
std: Tensor
The standard deviations of the stochastic transformation computed during the most recent call to forward.
__call__
¶
Stochastically transform the input.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of `input` to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |
__init__
¶
__init__(input_shape: tuple[int, ...], patch_size: int | tuple[int, int], scale: tuple[float, float], percent_to_mask: float, shallow: float = 1.0, value_range: tuple[float | None, float | None] = (-1.0, 1.0), frequency_range: tuple[float, float] = (-inf, inf), padding_mode: Literal['constant', 'reflect', 'replicate', 'circular'] = 'constant', padding_value: float = 0.0, learn_locs_weights: bool = True, preserve_average_color: bool = True, normalization: Normalization = Normalization.CLAMP, seed: int | None = None) -> None
Construct a Stained Glass Transform layer that generates stochastic transformations applied in frequency space over patches of the input images.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input_shape | tuple[int, ...] | Shape of given inputs. The first dimension may be -1, meaning variable batch size. | required |
patch_size | int \| tuple[int, int] | Size of the patches to segment the input into. If an integer is given, a square patch is used. | required |
scale | tuple[float, float] | Minimum and maximum values of the range of standard deviations of the generated stochastic transformation. | required |
percent_to_mask | float | The percentage of the outputs to mask per patch. | required |
shallow | float | A fixed temperature-like parameter which alters the scale of the standard deviation of the stochastic transformation. | 1.0 |
value_range | tuple[float \| None, float \| None] | Minimum and maximum values of the range to clamp the output into. | (-1.0, 1.0) |
frequency_range | tuple[float, float] | Minimum and maximum values of the range in the frequency domain to clamp the output into. | (-inf, inf) |
padding_mode | Literal['constant', 'reflect', 'replicate', 'circular'] | Type of padding. One of: constant, reflect, replicate, or circular. Defaults to 'constant'. | 'constant' |
padding_value | float | Value to pad with if padding_mode is constant. | 0.0 |
learn_locs_weights | bool | Whether to learn the weight parameters for the locs estimator. If `False`, they are kept frozen. | True |
preserve_average_color | bool | Whether to preserve the average color per patch of the original image in the perturbed output. | True |
normalization | Normalization | Which Normalization mode to use. | Normalization.CLAMP |
seed | int \| None | Seed for the random number generator used to generate the stochastic transformation. If `None`, a non-deterministic seed is used. | None |

Raises:

Type | Description |
---|---|
ValueError | If the input shape is not a 4-tuple. |
ValueError | If the patch_size is not square. |
NotImplementedError | If a known normalization mode without a corresponding normalization function is encountered. |

Changed in version 0.10.0: `threshold` and `percent_threshold` parameters were removed in favor of `percent_to_mask`.
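To ground the constructor above, here is a brief, hedged sketch (hyperparameter values are illustrative; the class is assumed to be importable from the same noise_layer namespace used in the other examples on this page). Because this variant trains all of its parameters simultaneously, they can be handed to a single optimizer:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.PatchCloakNoiseLayerFrequencySpace(
...     input_shape=(-1, 3, 64, 64),  # must be a 4-tuple; -1 allows a variable batch size
...     patch_size=16,  # square patches; non-square patch sizes raise ValueError
...     scale=(1e-4, 1.0),  # illustrative min/max standard deviations
...     percent_to_mask=0.25,  # illustrative masking percentage per patch
...     seed=0,
... )
>>> output = noise_layer(torch.ones(1, 3, 64, 64))
>>> optimizer = torch.optim.AdamW(
...     (p for p in noise_layer.parameters() if p.requires_grad), lr=1e-4
... )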
__init_subclass__
¶
Set the default dtype to `torch.float32` inside all subclass `__init__` methods.
__setstate__
¶
Restore from a serialized copy of `self.__dict__`.
forward
¶
Transform the input data.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
input | Tensor | The input to transform. | required |
noise_mask | Tensor \| None | An optional mask that selects the elements of `input` to transform. | None |
**kwargs | Any | Additional keyword arguments to the estimator modules. | required |

Returns:

Type | Description |
---|---|
base.NoiseLayerOutput | The transformed input data. |
get_applied_transform_components_factory
¶
Create a function that returns the elements of the transform components ('mean' and 'std') applied during the most recent forward pass.
Specifically, the applied elements are those selected by the noise mask (if supplied) and the standard deviation mask (if std_estimator.masker is not None). If no masks are used, all elements are returned.
The applied transform components are returned flattened.
This function is intended to be used to log histograms of the transform components.

Returns:

Type | Description |
---|---|
Callable[[], dict[str, torch.Tensor]] | A function that returns the elements of the transform components applied during the most recent forward pass. |
Examples:
>>> import torch
>>> from torch import nn
>>> from stainedglass_core import model as sg_model, noise_layer as sg_noise_layer
>>> base_model = nn.Linear(20, 2)
>>> noisy_model = sg_model.NoisyModel(
... sg_noise_layer.CloakNoiseLayer1,
... base_model,
... input_shape=(-1, 20),
... )
>>> get_applied_transform_components = (
... noisy_model.noise_layer.get_applied_transform_components_factory()
... )
>>> input = torch.ones(1, 20)
>>> noise_mask = torch.tensor(5 * [False] + 15 * [True])
>>> output = base_model(input, noise_mask=noise_mask)
>>> applied_transform_components = get_applied_transform_components()
>>> applied_transform_components
{'mean': tensor(...), 'std': tensor(...)}
>>> {
... component_name: component.shape
... for component_name, component in applied_transform_components.items()
... }
{'mean': torch.Size([15]), 'std': torch.Size([15])}
get_transformed_output_factory
¶
Create a function that returns the transformed output from the most recent forward pass.
If super batching is active, only the transformed half of the super batch output is returned.
Returns:

Type | Description |
---|---|
Callable[[], torch.Tensor] | A function that returns the transformed output from the most recent forward pass. |
Examples:
>>> import torch
>>> from stainedglass_core import noise_layer as sg_noise_layer
>>> noise_layer = sg_noise_layer.CloakNoiseLayer1(input_shape=(-1, 3, 32, 32))
>>> get_transformed_output = noise_layer.get_transformed_output_factory()
>>> input = torch.ones(2, 3, 32, 32)
>>> output = noise_layer(input)
>>> transformed_output = get_transformed_output()
>>> assert output.output.equal(transformed_output)
initial_seed
¶
Return the initial seed of the CPU device's random number generator.
manual_seed
¶
manual_seed(seed: int) -> None
Seed each of the random number generators.
Parameters:

Name | Type | Description | Default |
---|---|---|---|
seed | int | The seed to set. | required |
seed
¶
Seed each of the random number generators using a non-deterministic random number.