TLeNet-5 Batch Implementation
This page demonstrates the TLeNet-5 batch inference workflow in FHEON: setting up the FHE context, configuring the BatchANNController, and running encrypted-domain batch inference on MNIST images with a LeNet-5 architecture.
Overview
The workflow includes:
Context and Key Generation
Initializes the FHEONHEController with parameters such as ring degree, number of slots, circuit depth, and scaling factors.
Generates rotation keys for convolution, average pooling, and fully connected layers optimized for batch processing.
Serializes keys for optimized FHE batch operations.
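The relationship between the parameters mentioned above can be sketched in plain Python. This is a conceptual CKKS bookkeeping example, not the FHEON API; the helper names and the concrete sizes (a 2^16 ring, a 1024-slot padded image) are illustrative assumptions.

```python
# Illustrative CKKS parameter bookkeeping (not the FHEON API).
# A power-of-two ring of degree N packs N/2 values per ciphertext,
# and the slot budget determines how many images fit in one batch.

def ckks_slot_count(ring_degree: int) -> int:
    """Number of packed slots for a power-of-two ring degree."""
    assert ring_degree & (ring_degree - 1) == 0, "ring degree must be a power of two"
    return ring_degree // 2

def batch_capacity(ring_degree: int, image_slots: int) -> int:
    """How many images of `image_slots` values fit in one packed ciphertext."""
    return ckks_slot_count(ring_degree) // image_slots

# Example: a 2^16 ring gives 32768 slots; an image padded to 1024 slots
# lets 32 images share one ciphertext.
print(batch_capacity(1 << 16, 1024))  # -> 32
```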
Batch Data Preparation
Reads and encodes all model weights and biases.
Reads and preprocesses multiple MNIST images in batch.
Encrypts batch image data for inference.
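The preparation steps above can be simulated in plaintext. The helper names below are hypothetical (not taken from FHEON); the sketch assumes pixels are scaled to [0, 1] and images are laid out back-to-back in one slot vector before encryption.

```python
# Hedged sketch of batch image packing for slot-based encryption.
# Pixels are normalized, each image is zero-padded to its slot budget,
# and the batch is concatenated into a single vector of slots.

def normalize(image):
    """Scale 8-bit grayscale pixels to [0, 1]."""
    return [p / 255.0 for p in image]

def pack_batch(images, slots_per_image):
    """Concatenate fixed-size images into one slot vector,
    zero-padding each image up to its slot budget."""
    packed = []
    for img in images:
        padded = normalize(img) + [0.0] * (slots_per_image - len(img))
        packed.extend(padded)
    return packed

batch = [[255] * 784, [0] * 784]   # two dummy 28x28 images
vec = pack_batch(batch, 1024)      # pad 784 pixels -> 1024 slots each
print(len(vec))                    # -> 2048
```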
TLeNet-5 Layers
Implements two convolutional layers (Conv1 and Conv2) for batch processing.
Applies batch average pooling operations.
Implements three fully connected layers (FC1, FC2, FC3) with batch inference.
Applies encrypted-domain ReLU activations and bootstrapping where needed.
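The layer stack above can be traced in plaintext to see the tensor shapes. This sketch assumes the classic LeNet-5 dimensions with a 32x32 padded input (the page does not state the exact sizes); the encrypted pipeline follows the same flow.

```python
# Plaintext shape trace of the LeNet-5 layer stack (assumed classic sizes).

def conv_out(size, kernel):      # 'valid' convolution, stride 1
    return size - kernel + 1

def pool_out(size, window=2):    # non-overlapping average pooling
    return size // window

s = 32                           # padded MNIST input, 32x32
s = conv_out(s, 5)               # Conv1: 6 feature maps of 28x28
s = pool_out(s)                  # AvgPool: 14x14
s = conv_out(s, 5)               # Conv2: 16 feature maps of 10x10
s = pool_out(s)                  # AvgPool: 5x5
flat = 16 * s * s                # flatten: 400 features
fc_dims = [(flat, 120), (120, 84), (84, 10)]  # FC1, FC2, FC3
print(s, flat, fc_dims[-1][1])   # -> 5 400 10
```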
Batch Inference Loop
Processes multiple input MNIST images simultaneously.
Sequentially applies convolution, pooling, ReLU, and fully connected layers to the batch.
Decrypts batch results and writes predicted labels to a file.
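The loop above can be sketched structurally. The layer functions here are plaintext stand-ins so the control flow is runnable; the real encrypted-domain operations and their FHEON signatures are not shown on this page.

```python
# Structural sketch of the batch inference loop (stand-in layers).

def lenet5_pipeline(ct, layers):
    """Apply the layer stack sequentially to one packed batch ciphertext."""
    for layer in layers:
        ct = layer(ct)
    return ct

def predicted_label(scores):
    """Index of the largest FC3 output = predicted digit."""
    return max(range(len(scores)), key=lambda i: scores[i])

# Plaintext stand-ins for encrypted operations:
layers = [
    lambda x: [v * 2 for v in x],     # "conv" stand-in
    lambda x: [max(v, 0) for v in x]  # "ReLU" stand-in
]
print(lenet5_pipeline([-1.0, 3.0], layers))   # -> [0, 6.0]
print(predicted_label([0.1, 0.9, 0.2]))       # -> 1
```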
Key Functions
The main functions used in this example include:
secure_convolution() – Performs a convolutional layer on batched encrypted data.
secure_optimzed_avgPool() – Applies average pooling to batched encrypted data.
secure_flinear() – Applies a fully connected layer to batched encrypted data.
secure_relu() – Secure ReLU activation for batches.
bootstrap_function() – Optional CKKS bootstrapping that refreshes ciphertext levels so deeper circuits can continue.
encode_kernel() and encode_inputData() – Encode plaintext weights and batch inputs for FHE operations.
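As an illustration of the kind of slot arithmetic behind functions like secure_optimzed_avgPool(), the standard CKKS trick is to average adjacent slots by summing rotated copies of the ciphertext and scaling by the reciprocal of the window size. The sketch below is a plaintext simulation, not FHEON code; `rotate` models a Galois rotation applied with a rotation key.

```python
# Plaintext simulation of slot-rotation average pooling (conceptual only).

def rotate(vec, k):
    """Cyclic left rotation by k slots (models a CKKS rotation key)."""
    return vec[k:] + vec[:k]

def avg_pool_1d(vec, window):
    """Average each run of `window` adjacent slots by summing rotated
    copies, then scaling by 1/window; valid results land in every
    window-th slot."""
    acc = vec
    for k in range(1, window):
        acc = [a + b for a, b in zip(acc, rotate(vec, k))]
    return [a / window for a in acc]

out = avg_pool_1d([1.0, 3.0, 5.0, 7.0], 2)
print(out[0], out[2])  # -> 2.0 6.0
```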
Full Example Source
You can view and download the full source code of this example: