ResNet-20 With Multi-Channel Strided Processing
This example demonstrates the multi-channel striding and preprocessing used in the ResNet-20 FHE pipeline.
It shows how multi-channel kernels, shortcut kernels, and fully-connected weights are prepared, and how strided residual blocks are executed in the encrypted domain.
The strided convolution layers are executed with secure_double_optimized_convolution_multi_channels().
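The arithmetic that a strided multi-channel convolution performs can be sketched in plaintext NumPy. This is only an illustration of what the encrypted layer computes, not the library's implementation; the function name and layout below are assumptions for the sketch.

```python
import numpy as np

def strided_multi_channel_conv(x, kernels, biases, stride=2, pad=1):
    """Plaintext reference for a strided multi-channel convolution.

    x       : (C_in, H, W) input feature maps
    kernels : (C_out, C_in, k, k) convolution weights
    biases  : (C_out,) per-output-channel bias
    Returns : (C_out, H_out, W_out) output feature maps
    """
    c_out, c_in, k, _ = kernels.shape
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    h_out = (x.shape[1] + 2 * pad - k) // stride + 1
    w_out = (x.shape[2] + 2 * pad - k) // stride + 1
    y = np.zeros((c_out, h_out, w_out))
    for o in range(c_out):
        for i in range(h_out):
            for j in range(w_out):
                patch = xp[:, i*stride:i*stride+k, j*stride:j*stride+k]
                y[o, i, j] = np.sum(patch * kernels[o]) + biases[o]
    return y

# A 3x3 kernel with stride 2 and padding 1 halves the spatial size: 8x8 -> 4x4.
x = np.arange(2 * 8 * 8, dtype=float).reshape(2, 8, 8)
k = np.ones((4, 2, 3, 3))
b = np.zeros(4)
y = strided_multi_channel_conv(x, k, b, stride=2, pad=1)
print(y.shape)  # (4, 4, 4)
```

Note how striding changes the output size: the FHE encodings below must be prepared with these smaller output dimensions in mind.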
Overview
The workflow includes:
FHE Context and Key Generation
Initialize FHEController and ANNController.
Generate rotation keys for multi-channel convolutions, pooling, and FC layers.
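Which rotation keys are needed follows from the packing: each kernel tap of a row-major packed image maps to one cyclic slot rotation, and pooling/FC sums use power-of-two rotations. The helpers below are a hypothetical sketch of that index computation, assuming a simple row-major single-image packing; the real layout is library-specific.

```python
def conv_rotation_positions(img_width, kernel_size=3):
    """Rotation offsets for a kxk kernel over a row-major packed image.
    Each kernel tap (di, dj) relative to the centre corresponds to a
    cyclic rotation by di*img_width + dj slots."""
    r = kernel_size // 2
    return sorted({di * img_width + dj
                   for di in range(-r, r + 1)
                   for dj in range(-r, r + 1)} - {0})

def pooling_rotation_positions(num_slots):
    """Power-of-two rotations for a rotate-and-add full-vector sum."""
    pos, step = [], 1
    while step < num_slots:
        pos.append(step)
        step *= 2
    return pos

print(conv_rotation_positions(32))       # 8 offsets for a 3x3 kernel
print(pooling_rotation_positions(1024))  # [1, 2, 4, ..., 512]
```

Generating exactly these indices up front keeps the rotation key set (and hence key material size) minimal.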
Data Preparation
Load and encode convolution kernels, shortcut kernels, and biases for each layer.
Support downsampling (striding) by encoding both convolution and shortcut weights with appropriate output sizes.
Multi-Channel Striding
Handle strided residual blocks by preparing downsampled shortcut kernels and matching convolution kernels.
Use multi-channel optimized encodings and rotation positions for efficient FHE computation.
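The downsampled shortcut of a strided block is a 1x1 projection applied to a strided view of the input. A plaintext NumPy sketch of that shortcut arithmetic (an illustration under assumed shapes, not the encrypted implementation):

```python
import numpy as np

def strided_shortcut(x, kernels, biases, stride=2):
    """Plaintext reference for a 1x1 strided shortcut (projection).

    x       : (C_in, H, W) input feature maps
    kernels : (C_out, C_in) 1x1 projection weights
    Returns : (C_out, H//stride, W//stride)
    """
    xs = x[:, ::stride, ::stride]                   # spatial downsampling
    y = np.tensordot(kernels, xs, axes=([1], [0]))  # channel projection
    return y + biases[:, None, None]

x = np.ones((16, 8, 8))
w = np.full((32, 16), 0.5)
y = strided_shortcut(x, w, np.zeros(32))
print(y.shape)  # (32, 4, 4)
```

Because the shortcut output must match the strided convolution output (here 4x4 with doubled channels), both sets of weights are encoded with the same downsampled output size.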
Inference Loop
Encrypt input images, apply convolution blocks and residual blocks (with optional shortcut downsampling), perform global average pooling, and run the final FC layer.
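The global average pooling step above is typically realized on ciphertexts with a rotate-and-add sum: log2(n) rotations leave the channel total in every slot. A plaintext sketch of that pattern, using np.roll to stand in for ciphertext rotation:

```python
import numpy as np

def rotate_and_add_sum(vec):
    """Sum all slots of a packed vector via log2(n) rotations,
    the pattern used for global average pooling under FHE."""
    n = len(vec)
    acc = vec.copy()
    step = 1
    while step < n:
        acc = acc + np.roll(acc, -step)  # "rotate" then add
        step *= 2
    return acc  # every slot now holds the total sum

v = np.arange(8, dtype=float)  # one packed 8-slot channel
s = rotate_and_add_sum(v)
avg = s[0] / len(v)
print(avg)  # 3.5
```

The division by the slot count is folded into a plaintext constant, so the average costs only rotations, additions, and one scalar multiplication.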
Key Functions
convolution_block() – Load kernels/biases, encode them, and run an optimized convolution on encrypted data.
double_shortcut_convolution_block() – Prepare and run a strided convolution with an accompanying downsampled shortcut (returns both conv and shortcut ciphertexts).
resnet_block() – Execute a residual block (handles bootstrap, striding, shortcut logic, and secure ReLU).
fc_layer_block() – Encode and apply the final fully connected layer.
Data Processing Functions
convolution_data_processing() – Encode per-output-channel convolution kernels and biases.
shorcut_data_processing() – Encode shortcut (1×1 or FC-style) kernels and biases for downsampling.
double_data_processing() – Prepare both conv and shortcut encodings for strided (downsample) blocks.
fclayer_data_processing() – Encode fully connected weights and bias.
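The FC layer reduces to one slot-wise multiply per class followed by a full-vector sum, which is why its weights are encoded as one flattened plaintext vector per class. A plaintext sketch of that multiply-then-sum pattern (illustrative shapes; the real packing is library-specific):

```python
import numpy as np

def fc_forward(features, weights, bias):
    """Plaintext reference for the final FC layer: one elementwise
    multiply per class followed by a full-vector sum, matching the
    multiply-then-rotate-and-add pattern used on ciphertexts."""
    logits = np.array([np.sum(features * w) for w in weights]) + bias
    return logits

feats = np.ones(64)   # pooled feature vector, one slot per feature
w = np.eye(10, 64)    # toy 10-class weight matrix
b = np.arange(10, dtype=float)
print(fc_forward(feats, w, b))  # class scores
```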
Block Functions
convolution_block(…) – Perform a single optimized convolution with encoded kernels and bias.
double_shortcut_convolution_block(…) – Perform strided convolution and produce both main and shortcut outputs (multi-channel).
resnet_block(…) – Full residual block: conv → (optional bootstrap) → ReLU → conv → sum(shortcut) → bootstrap → ReLU.
fc_layer_block(…) – Apply optimized fully connected classification on encrypted features.
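The control flow of resnet_block() can be sketched in plaintext, with the convolutions abstracted as callables. Under FHE the same sequence interleaves bootstrapping and replaces ReLU with a secure (polynomial) approximation; the plain relu below is a plaintext stand-in.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, conv1, conv2, shortcut=None):
    """Plaintext reference for a residual block:
    conv -> ReLU -> conv -> add shortcut -> ReLU.
    shortcut defaults to the identity; strided blocks pass a
    downsampling projection instead."""
    y = relu(conv1(x))
    y = conv2(y)
    s = shortcut(x) if shortcut is not None else x
    return relu(y + s)

x = np.array([1.0, -2.0, 3.0])
out = residual_block(x, conv1=lambda v: 2 * v, conv2=lambda v: v - 1)
print(out)  # identity shortcut: relu(relu(2x) - 1 + x)
```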