TResNet-34 Batch Implementations
This page demonstrates the TResNet-34 batch inference workflow using FHEON. It provides examples of how to set up the FHE context, configure the BatchANNController, and perform encrypted-domain batch inference on CIFAR-100 images using a ResNet-34 architecture with various batch sizes and configurations.
Overview
The workflow includes:
Context and Key Generation
Initializes the FHEONHEController with parameters such as ring degree, number of slots, circuit depth, and scaling factors optimized for the deeper ResNet-34 architecture.
Generates rotation keys optimized for batch convolutional operations on deeper networks.
Serializes keys for efficient batch FHE computations.
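FHEON's controller API is not reproduced here, but the parameter bookkeeping behind these steps can be sketched in plain C++. The sketch below assumes two CKKS-style conventions: a ring degree of N yields N/2 slots, and rotation keys are generated for power-of-two shift amounts so that arbitrary batch rotations compose from them. The function names are illustrative, not part of FHEON.

```cpp
#include <cstddef>
#include <vector>

// In CKKS, a ring degree of N provides N/2 complex slots per ciphertext.
inline std::size_t slotCount(std::size_t ringDegree) {
    return ringDegree / 2;
}

// Rotation keys for batched convolutions are commonly generated for
// power-of-two steps in both directions; any rotation then composes
// from at most log2(slots) keyed rotations.
inline std::vector<int> powerOfTwoRotations(std::size_t slots) {
    std::vector<int> rots;
    for (std::size_t step = 1; step < slots; step <<= 1) {
        rots.push_back(static_cast<int>(step));   // left rotation
        rots.push_back(-static_cast<int>(step));  // right rotation
    }
    return rots;
}
```

With a ring degree of 65536 this gives 32768 slots, and the power-of-two key set stays logarithmic in the slot count rather than linear.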
Batch Data Preparation
Reads and preprocesses multiple CIFAR-100 images in batch.
Encrypts batch image data for inference.
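The preprocessing step can be illustrated with the standard CIFAR-100 binary layout: each record is one coarse-label byte, one fine-label byte, then 3072 pixel bytes (3 channels of 32x32, channel-major). The sketch below parses one record and scales pixels to [0, 1] ahead of encryption; the struct and function names are illustrative, not FHEON API.

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// One CIFAR-100 binary record: coarse label, fine label, 3072 pixel bytes.
struct Cifar100Image {
    uint8_t coarseLabel = 0;
    uint8_t fineLabel = 0;
    std::vector<double> pixels;  // normalized to [0, 1]
};

constexpr std::size_t kRecordBytes = 2 + 3 * 32 * 32;

// Parse a single record and normalize pixel values before encryption.
Cifar100Image parseRecord(const std::vector<uint8_t>& record) {
    Cifar100Image img;
    img.coarseLabel = record[0];
    img.fineLabel = record[1];
    img.pixels.reserve(kRecordBytes - 2);
    for (std::size_t i = 2; i < kRecordBytes; ++i)
        img.pixels.push_back(record[i] / 255.0);
    return img;
}
```

A batch of N images is then N such records read back to back, with the normalized pixel vectors packed into ciphertext slots.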
TResNet-34 Blocks
Implements convolutional blocks for batch processing optimized for the deeper architecture.
Handles shortcut connections and residual blocks.
Applies batch ReLU and pooling operations on encrypted data using the BatchANNController.
Implements fully connected layers for batch classification.
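The residual structure these blocks implement can be shown as a plaintext analogue: the block output is ReLU applied to the sum of the convolution path and the shortcut path. In the encrypted version the exact ReLU is replaced by a polynomial approximation evaluated by the BatchANNController; the identity-shortcut version below is only a plain-arithmetic sketch.

```cpp
#include <algorithm>
#include <utility>
#include <vector>

using Vec = std::vector<double>;

// Exact ReLU; the FHE version substitutes a polynomial approximation.
Vec relu(Vec v) {
    for (double& x : v) x = std::max(x, 0.0);
    return v;
}

// Plaintext analogue of a residual block with an identity shortcut:
// out = ReLU(f(x) + x), where fx is the convolution path's output.
Vec residualBlock(const Vec& x, const Vec& fx) {
    Vec out(x.size());
    for (std::size_t i = 0; i < x.size(); ++i)
        out[i] = fx[i] + x[i];        // add the shortcut
    return relu(std::move(out));      // nonlinearity after the sum
}
```

When the block downsamples, the shortcut path is itself a (strided) convolution rather than the identity, which is what the shortcut variants in this example handle.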
Batch Inference Loop
Processes multiple input images simultaneously.
Applies the ResNet-34 layers sequentially to the batch.
Performs encrypted global average pooling and fully connected classification.
Decrypts the batch results and outputs predicted labels.
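The tail of the loop can likewise be sketched in plain arithmetic: average each final feature map, feed the pooled values through the classifier, and take the argmax of the decrypted scores. Under FHE the pooling sum is computed with slot rotations and the argmax happens only after decryption; the helpers below are plaintext stand-ins with illustrative names.

```cpp
#include <cstddef>
#include <vector>

// Mean over one feature map; under FHE this is a rotate-and-add sum
// followed by a plaintext multiplication by 1/size.
double globalAveragePool(const std::vector<double>& featureMap) {
    double sum = 0.0;
    for (double v : featureMap) sum += v;
    return sum / static_cast<double>(featureMap.size());
}

// Argmax over decrypted class scores; comparisons are done in the clear.
std::size_t predictedLabel(const std::vector<double>& scores) {
    std::size_t best = 0;
    for (std::size_t i = 1; i < scores.size(); ++i)
        if (scores[i] > scores[best]) best = i;
    return best;
}
```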
Batch Configurations
Multiple variants are available, differing in the batch size set by the N parameter:
TResNet34N16.cpp – Configured for batch processing with N=16
TResNet34N32.cpp – Configured for batch processing with N=32
TResNet34N64.cpp – Configured for batch processing with N=64
TResNet34N128.cpp – Configured for batch processing with N=128
TResNet34N256.cpp – Configured for batch processing with N=256
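The batch size N interacts with the slot budget. As a rough illustration only (the packing layout below is an assumption, not necessarily FHEON's actual scheme), if each 32x32 input channel occupies 1024 slots and channels from the whole batch are packed contiguously, the number of input ciphertexts grows as a ceiling division of the total slot demand:

```cpp
#include <cstddef>

// Illustrative slot accounting: with S slots per ciphertext and each
// 32x32 channel taking 1024 slots, a batch of N images with C channels
// needs ceil(N * C * 1024 / S) ciphertexts under this packing.
std::size_t ciphertextsNeeded(std::size_t batchN, std::size_t channels,
                              std::size_t slots) {
    const std::size_t total = batchN * channels * 1024;
    return (total + slots - 1) / slots;  // ceiling division
}
```

Under this assumed layout with 32768 slots, N=16 input images (3 channels each) fit in 2 ciphertexts, while N=64 needs 6; larger N amortizes per-ciphertext overhead at the cost of memory.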
Key Functions
The main functions used in the example include:
convolution_block() – Performs a convolutional layer on batched encrypted data.
shortcut_convolution_block() – Handles the shortcut connections in batch processing.
double_shortcut_convolution_block() – Implements convolution with shortcut for downsampled layers in batches.
resnet_block() – Combines convolution, ReLU, and shortcut operations for batch processing.
fc_layer_block() – Implements the final fully connected layer for batch classification.
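What fc_layer_block() computes can be stated as plain linear algebra: scores = W x + b over the pooled features. Under FHE such products are typically evaluated with diagonal-encoded weights and slot rotations; the version below is the plaintext equivalent, with illustrative names rather than FHEON's actual signature.

```cpp
#include <cstddef>
#include <vector>

using Vec = std::vector<double>;
using Mat = std::vector<Vec>;

// Plaintext analogue of the final fully connected layer: W * x + b.
// One row of W per output class (100 for CIFAR-100).
Vec fcLayer(const Mat& weights, const Vec& bias, const Vec& x) {
    Vec scores(weights.size(), 0.0);
    for (std::size_t r = 0; r < weights.size(); ++r) {
        for (std::size_t c = 0; c < x.size(); ++c)
            scores[r] += weights[r][c] * x[c];
        scores[r] += bias[r];
    }
    return scores;
}
```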
Full Example Source
You can view and download the full source code of these examples: