ResNet-34 FHE Implementation

This example demonstrates ResNet-34 inference under fully homomorphic encryption (FHE) using FHEON. It focuses on an optimized pipeline for CIFAR-100: encoded kernels and biases, multi-channel striding, minimal rotations, strategic bootstrapping, and efficient evaluation of residual blocks.

Overview

The workflow includes:

  1. Context & Key Generation

    • Initialize FHEController and ANNController with tuned parameters.

    • Generate and load rotation/bootstrapping keys required for optimized convolution and FC layers.

  2. Data Preparation

    • Read and normalize CIFAR-100 binary images and encrypt input batches.

    • Pre-encode convolution/shortcut/FC weights and biases to minimize runtime overhead.

  3. ResNet-34 Layers

    • Initial convolution layer followed by 4 residual stages (with varying block counts).

    • Multi-channel striding is handled via double_shortcut_convolution_block(), which fuses downsampling and the shortcut path in one pass.

    • Secure ReLU and selective bootstrapping for level/precision recovery.

  4. Inference Loop

    • Process encrypted images through all ResNet-34 blocks, perform global average pooling, run the final FC classification, then decrypt and write predictions.

Key Functions

  • convolution_block() – Encodes and applies an optimized convolution with packed biases.

  • double_shortcut_convolution_block() – Executes strided multi-channel conv + shortcut in one operation.

  • resnet_block() – Full residual block: conv → (optional bootstrap) → ReLU → conv → sum(shortcut) → bootstrap → ReLU.

  • shortcut_convolution_block() – Encodes and applies shortcut downsampling (1×1 / FC-style).

  • FClayer_block() – Evaluates the final fully connected classification layer via the optimized flinear routine.

Data & Settings

  • Input: CIFAR-100 binary test set (./../images/cifar-100-binary/test.bin).

  • Channel counts and architecture parameters are tuned in code (e.g., channelValues = {16,32,64,128,100}).

  • Rotation/bootstrapping keys are generated and deduplicated before evaluation to reduce keygen/eval cost.

Performance Notes

  • Pre-encoding of kernels and bias packing reduces rotations and memory traffic.

  • double_shortcut_convolution_block() merges main and shortcut branches to lower rotation count.

  • Bootstrapping applied selectively to refresh ciphertext levels only when necessary.

Full Example Source

ResNet34.cpp