VGG-11 Optimized Implementation

This document describes the VGG-11 network inference pipeline using fully homomorphic encryption (FHE) with optimized operations, including rotation key management, bootstrapping, and memory-efficient kernel encoding.

Overview

The workflow consists of:

  1. Context Initialization

    • FHEController generates the cryptographic context with parameters: ringDegree, numSlots, circuitDepth, dcrtBits, firstMod, digitSize, levelBudget (a setup sketch appears after this list).

    • Serialization of keys ensures reproducibility and efficient reuse of contexts.

  2. Data Preparation

    • CIFAR-10 images are loaded, normalized, and encrypted into ciphertext vectors.

    • Image dimensions: 3 channels, 32×32 pixels.

  3. Rotation Key Management

    • Each layer (conv, avgpool, fully connected) generates optimized rotation positions.

    • Rotation keys are serialized and deduplicated per layer.

    • Bootstrapping and rotation keys are generated separately for each layer to reduce memory overhead.

    • Context clearing between layers ensures that only the necessary keys are loaded (see the per-layer key sketch after this list).

  4. Convolution + ReLU Blocks

    • convolution_relu_block() performs (a sketch of the bias/ReLU/refresh steps appears after this list):

      - Kernel encoding with optimized_encode_kernel()
      - Convolution with secure_optimized_convolution()
      - Bias addition and temporal ReLU scaling
      - Optional bootstrapping for ciphertext refresh

    • Average pooling is applied via secure_optimzed_avgPool_multi_channels() or secure_globalAvgPool().

  5. Fully Connected Layers

    • FClayer_relu_block() implements (a generic matrix-vector sketch appears after this list):

      - Plaintext kernel encoding and bias encoding with level awareness
      - Secure matrix multiplication using secure_flinear()
      - Optional ReLU with dynamic scaling
      - Bootstrapping for refreshed ciphertext precision

  6. Inference Loop

    • Iterates over all encrypted CIFAR-10 images.

    • Sequential execution through all conv layers, pooling layers, and fully connected layers.

    • Decryption and prediction output after the final fully connected layer.

    • Timing measurements for each block are stored in measuringTime (a timing sketch appears after this list).
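
The sketch below shows how a CKKS context with the parameters named in step 1 might be created and serialized. It assumes OpenFHE; the concrete values, file names, and the omitted FHEController wrapper are placeholders for illustration, not the project's actual configuration.

```cpp
// Minimal CKKS context setup and key serialization (step 1), assuming OpenFHE.
// Parameter values are placeholders, not the project's real configuration.
#include "openfhe.h"
#include "ciphertext-ser.h"
#include "cryptocontext-ser.h"
#include "key/key-ser.h"
#include "scheme/ckksrns/ckksrns-ser.h"

using namespace lbcrypto;

CryptoContext<DCRTPoly> init_context() {
    CCParams<CryptoContextCKKSRNS> params;
    params.SetRingDim(1 << 16);            // ringDegree (assumed)
    params.SetBatchSize(1 << 14);          // numSlots
    params.SetMultiplicativeDepth(20);     // circuitDepth; must also cover the bootstrap budget
    params.SetScalingModSize(59);          // dcrtBits
    params.SetFirstModSize(60);            // firstMod
    params.SetSecurityLevel(HEStd_128_classic);

    CryptoContext<DCRTPoly> cc = GenCryptoContext(params);
    cc->Enable(PKE);
    cc->Enable(KEYSWITCH);
    cc->Enable(LEVELEDSHE);
    cc->Enable(ADVANCEDSHE);
    cc->Enable(FHE);                       // required for bootstrapping

    std::vector<uint32_t> levelBudget = {4, 4};   // levelBudget
    cc->EvalBootstrapSetup(levelBudget);

    // Serialize the context and keys once so later runs can reuse them.
    KeyPair<DCRTPoly> keys = cc->KeyGen();
    cc->EvalMultKeyGen(keys.secretKey);
    Serial::SerializeToFile("context.bin", cc, SerType::BINARY);
    Serial::SerializeToFile("key-public.bin", keys.publicKey, SerType::BINARY);
    Serial::SerializeToFile("key-secret.bin", keys.secretKey, SerType::BINARY);
    return cc;
}
```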
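
Step 3's per-layer key handling could look roughly as follows. The OpenFHE calls are standard; the caller is assumed to supply the optimized rotation positions computed for the layer.

```cpp
// Per-layer rotation-key handling (step 3): deduplicate the indices the layer
// needs, drop keys kept for the previous layer, and generate fresh ones.
#include "openfhe.h"
#include <set>
#include <vector>

using namespace lbcrypto;

void prepare_layer_keys(CryptoContext<DCRTPoly>& cc,
                        const KeyPair<DCRTPoly>& keys,
                        const std::vector<int32_t>& layerRotations,
                        uint32_t numSlots) {
    // Deduplicate rotation indices before generating keys for them.
    std::set<int32_t> unique(layerRotations.begin(), layerRotations.end());
    std::vector<int32_t> rotations(unique.begin(), unique.end());

    // Keep only this layer's keys in memory: clear the previous layer's
    // automorphism (rotation) keys, then generate the new set.
    cc->ClearEvalAutomorphismKeys();
    cc->EvalRotateKeyGen(keys.secretKey, rotations);

    // Bootstrapping keys are generated separately, also per layer.
    cc->EvalBootstrapKeyGen(keys.secretKey, numSlots);
}
```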
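
For step 4, the convolution itself is project-specific and not reproduced here, but the bias addition, scaled ReLU, and optional refresh can be sketched with generic CKKS operations. The Chebyshev-based ReLU and the range bound B below are assumptions about how the scaling might be realized, not necessarily what convolution_relu_block() does internally.

```cpp
// Hedged sketch of the tail of a conv + ReLU block (step 4): bias addition,
// a polynomial ReLU on an assumed input range [-B, B], and an optional
// bootstrap to refresh the ciphertext.
#include "openfhe.h"
#include <vector>

using namespace lbcrypto;

Ciphertext<DCRTPoly> bias_relu_refresh(CryptoContext<DCRTPoly>& cc,
                                       Ciphertext<DCRTPoly> ct,
                                       const std::vector<double>& bias,
                                       double B,          // assumed input range [-B, B]
                                       bool bootstrap) {
    // Encode the bias at the ciphertext's current level (level-aware encoding).
    Plaintext ptBias = cc->MakeCKKSPackedPlaintext(bias, 1, ct->GetLevel());
    ct = cc->EvalAdd(ct, ptBias);

    // Approximate ReLU with a Chebyshev polynomial on [-B, B]; choosing B per
    // layer corresponds to the scaling mentioned in the text.
    ct = cc->EvalChebyshevFunction([](double x) { return x > 0 ? x : 0.0; },
                                   ct, -B, B, 59);

    // Refresh the ciphertext when the remaining depth gets too small.
    if (bootstrap)
        ct = cc->EvalBootstrap(ct);
    return ct;
}
```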
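
Step 5 relies on secure_flinear() for the encrypted matrix-vector product. As background only, the sketch below shows one standard way to compute such a product in CKKS (the diagonal method); it is not claimed to be the algorithm secure_flinear() uses.

```cpp
// Generic diagonal-method matrix-vector product in CKKS: y = W * x with
// y[i] = sum_d diag_d[i] * rot(x, d)[i], where diag_d[i] = W[i][(i + d) % n].
// Assumes n is a power of 2, rotation keys for 1..n-1 exist, and x is packed
// so that its n entries wrap around the slots (n equals the batch size, or x
// is replicated across the slots). Illustrative only, not secure_flinear().
#include "openfhe.h"
#include <vector>

using namespace lbcrypto;

Ciphertext<DCRTPoly> matvec_diagonal(CryptoContext<DCRTPoly>& cc,
                                     const std::vector<std::vector<double>>& W,  // n x n
                                     const Ciphertext<DCRTPoly>& x) {
    size_t n = W.size();
    Ciphertext<DCRTPoly> y;
    for (size_t d = 0; d < n; ++d) {
        // d-th generalized diagonal of W.
        std::vector<double> diag(n);
        for (size_t i = 0; i < n; ++i)
            diag[i] = W[i][(i + d) % n];

        Plaintext pt = cc->MakeCKKSPackedPlaintext(diag);
        auto xr = (d == 0) ? x : cc->EvalRotate(x, static_cast<int32_t>(d));
        auto term = cc->EvalMult(xr, pt);
        y = (d == 0) ? term : cc->EvalAdd(y, term);
    }
    return y;
}
```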
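
For step 6, per-block timing can be collected with a small wrapper like the one below; the helper name and the layout of measuringTime are assumptions.

```cpp
// Per-block timing (step 6): wrap each block call and append the elapsed
// seconds to measuringTime. Helper name and container layout are assumptions.
#include <chrono>
#include <vector>

std::vector<double> measuringTime;  // seconds per block (assumed layout)

template <typename F>
auto timed_block(F&& block) {
    auto t0 = std::chrono::steady_clock::now();
    auto result = block();  // e.g. a convolution_relu_block(...) call
    auto t1 = std::chrono::steady_clock::now();
    measuringTime.push_back(std::chrono::duration<double>(t1 - t0).count());
    return result;
}

// Usage inside the per-image loop (function signatures are assumptions):
//   auto ct = timed_block([&] { return convolution_relu_block(ct_in, k1, b1); });
```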

Optimizations

  • Layer-wise key serialization: Rotation and bootstrapping keys are handled per layer to reduce the memory footprint (see the serialization sketch below).

  • Dynamic bootstrapping: Applied selectively after key convolutional, pooling, and fully connected layers.

  • Memory efficiency: Intermediate kernel and bias vectors are cleared after use (clear() and shrink_to_fit()).

  • Power-of-2 alignment: Fully connected layer weights and biases are padded to the next power of 2 for efficient CKKS operations (a padding helper is sketched below).

  • Level-aware encoding: Inputs, kernels, and biases are encoded respecting ciphertext levels to reduce noise growth.
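
For the layer-wise key serialization, OpenFHE allows the automorphism (rotation) keys currently held by the context to be written to and read back from disk, so each layer's key set only lives in memory while that layer runs. The per-layer file naming below is an assumption.

```cpp
// Writing/reading a layer's rotation keys (layer-wise key serialization).
// The per-layer file naming is an assumption; the OpenFHE calls are standard.
#include "openfhe.h"
#include "key/key-ser.h"
#include <fstream>
#include <string>

using namespace lbcrypto;

void save_layer_rotation_keys(CryptoContext<DCRTPoly>& cc, int layer) {
    std::ofstream out("rot-keys-layer-" + std::to_string(layer) + ".bin",
                      std::ios::binary);
    cc->SerializeEvalAutomorphismKey(out, SerType::BINARY);
}

void load_layer_rotation_keys(CryptoContext<DCRTPoly>& cc, int layer) {
    // Drop whatever keys are currently loaded, then pull in this layer's set.
    cc->ClearEvalAutomorphismKeys();
    std::ifstream in("rot-keys-layer-" + std::to_string(layer) + ".bin",
                     std::ios::binary);
    cc->DeserializeEvalAutomorphismKey(in, SerType::BINARY);
}
```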
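
The power-of-2 alignment amounts to zero-padding each weight and bias vector up to the next power of 2 so that packing and rotations line up with CKKS slot counts. The helper below is illustrative; only the rounding rule is fixed.

```cpp
// Pad a weight/bias vector with zeros to the next power of 2 (power-of-2
// alignment). Helper names are illustrative; the rounding rule is standard.
#include <cstddef>
#include <vector>

std::size_t next_pow2(std::size_t n) {
    std::size_t p = 1;
    while (p < n) p <<= 1;
    return p;
}

std::vector<double> pad_to_pow2(std::vector<double> v) {
    v.resize(next_pow2(v.size()), 0.0);  // e.g. size 10 -> 16, 512 -> 512
    return v;
}
```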

Full Example Source