Batch Inputs Encrypted ANN Controller

class FHEONANNBatchController : public FHEONANNController

Public Functions

inline FHEONANNBatchController(CryptoContext<DCRTPoly> ctx)
void setContext(CryptoContext<DCRTPoly> &in_context)

ANN Batch Controller for managing homomorphic batched ANN operations.

This class provides high-level functions for performing batched neural network operations in the encrypted domain using fully homomorphic encryption (FHE). It supports operations such as:

  • Batched convolution layers

  • Batched average and global pooling

  • Batched fully connected (linear) layers

  • Batched activation functions (e.g., ReLU)

The class handles multiple input channels and multiple batches simultaneously, using optimized rotation and packing strategies to minimize the number of homomorphic operations while preserving the correct layout of data.

Note

  • All operations assume ciphertext packing follows a contiguous layout: [batch0: channel_i feature map][batch1: channel_i feature map]…

  • Designed for high-throughput FHE-based ANN inference pipelines.
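As a plaintext illustration of the contiguous packing layout described above (the helper names here are illustrative; in the real pipeline these slots live inside one CKKS ciphertext per channel):

```cpp
#include <vector>

// Concatenate one channel's feature maps from every batch element into a
// single slot vector, mirroring the documented layout:
//   [batch0: channel_i feature map][batch1: channel_i feature map]...
std::vector<double> pack_channel_contiguous(
        const std::vector<std::vector<double>>& batchFeatureMaps) {
    std::vector<double> slots;
    for (const auto& fm : batchFeatureMaps)
        slots.insert(slots.end(), fm.begin(), fm.end());
    return slots;
}

// Slot index of pixel (row, col) of batch b, for a W x W map packed row-major.
int slot_index(int b, int row, int col, int inputWidth) {
    return b * inputWidth * inputWidth + row * inputWidth + col;
}
```

Under this layout, rotating a ciphertext by `inputWidth * inputWidth` slots moves from one batch element's map to the next, which is what the batch-alignment rotations below exploit.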

inline void setNumSlots(int numSlots)
vector<int> generate_convolution_batch_rotation_positions(int batchSize, int inputWidth, int kernelWidth, int padding = 0, int stride = 1)

Perform a secure convolution operation on batched encrypted data.

This function implements a convolutional layer in the encrypted domain using homomorphic encryption for batched inputs. Given a set of encrypted input feature maps, convolution kernels, and bias terms, it computes the convolution across multiple channels and batches while respecting the specified input dimensions, kernel size, padding, and stride.

For each output channel, the function multiplies rotated input ciphertexts by corresponding kernel weights, sums contributions from all input channels, applies the bias, and aligns outputs across batches using rotations and packing strategies. The result is one ciphertext per output channel, encoding all batches contiguously.

Generate rotation positions for batched convolution operations.

This function calculates the rotation positions required for performing convolution operations on batched encrypted data. It considers the input dimensions, kernel size, padding, and stride to determine the optimal rotation indices for aligning data during the convolution process.

Note

  • The function assumes ciphertext packing follows contiguous layout: [batch0: channel_i feature map][batch1: channel_i feature map]…

  • Optimized rotations and multi-channel processing are used to reduce the number of homomorphic operations, improving performance for FHE-based CNNs.

Parameters:
  • encryptedInput – Vector of ciphertexts, one per input channel, each encoding batched feature maps of size (batchSize × inputWidth × inputWidth).

  • kernelData – Convolution kernels represented as a 3D vector of plaintexts: [outputChannel][inputChannel][kernelWeights].

  • biasInputs – Bias terms for each output channel (plaintexts).

  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

  • inputChannels – Number of input channels.

  • outputChannels – Number of output channels.

  • kernelWidth – Width (and height) of the convolution kernel (assumed square).

  • padding – Number of zero-padding slots applied around the input feature map.

  • stride – Stride length used for the convolution operation.

  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

  • kernelWidth – Width (and height) of the convolution kernel (assumed square).

  • padding – Number of zero-padding slots applied around the input feature map.

  • stride – Stride length used for the convolution operation.

Returns:

vector<Ctext> Vector of ciphertexts containing the encrypted results of the convolution layer, one ciphertext per output channel, with batch data packed contiguously: [batch0 output channel_i][batch1 output channel_i]…

Returns:

vector<int> Vector of rotation positions required for the convolution.
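A minimal plaintext sketch of how such rotation positions can arise, assuming a row-major W × W slot packing, an odd kernel, and "same" padding: shifting the feature map by (dr, dc) corresponds to rotating the ciphertext by dr·W + dc slots. The helper name and centered-kernel assumption are illustrative, not the exact output of generate_convolution_batch_rotation_positions():

```cpp
#include <set>
#include <vector>

// Collect the distinct slot rotations needed to align every kernel tap
// (dr, dc) of a kernelWidth x kernelWidth window over a row-major
// inputWidth x inputWidth feature map. Assumes an odd kernel width.
std::vector<int> sketch_conv_rotations(int inputWidth, int kernelWidth) {
    std::set<int> positions;  // set de-duplicates and sorts
    int half = kernelWidth / 2;
    for (int dr = -half; dr <= half; ++dr)
        for (int dc = -half; dc <= half; ++dc)
            positions.insert(dr * inputWidth + dc);
    return std::vector<int>(positions.begin(), positions.end());
}
```

Pre-computing this list lets the context generate exactly the rotation keys the convolution will use, instead of keys for every possible index.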

vector<int> generate_avgpool_batch_optimized_rotation_positions(int batchSize, int inputWidth, int kernelWidth, int stride = 2, bool globalPooling = false, int rotationIndex = 16)

Generate rotation positions for optimized average pooling operations.

This function calculates the rotation positions required for performing average pooling operations on batched encrypted data. It supports both global pooling and standard average pooling, optimizing the rotation indices based on the input dimensions, kernel size, and stride.

Parameters:
  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

  • kernelWidth – Width (and height) of the pooling kernel (assumed square).

  • stride – Stride length used for the pooling operation.

  • globalPooling – Flag indicating whether global pooling is used.

  • rotationIndex – Index for rotation optimization.

Returns:

vector<int> Vector of rotation positions required for the pooling operation.
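For the global-pooling case, a common optimization is that summing all N = inputWidth² slots takes only log₂(N) rotate-and-add steps, so only power-of-two rotation indices are needed. A hedged sketch (assuming the per-map slot count is a power of two; not the function's exact output):

```cpp
#include <vector>

// Power-of-two rotation indices 1, 2, 4, ..., N/2 suffice to fold all
// N = inputWidth^2 spatial slots into every slot via rotate-and-add.
std::vector<int> sketch_globalpool_rotations(int inputWidth) {
    int n = inputWidth * inputWidth;
    std::vector<int> positions;
    for (int r = 1; r < n; r <<= 1)
        positions.push_back(r);
    return positions;
}
```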

vector<int> generate_linear_batch_rotation_positions(int batchSize, vector<int> outputSizes, vector<int> inputSizes, int rotationIndex = 100)

Generate rotation positions for batched fully connected layers.

This function calculates the rotation positions required for performing fully connected layer operations on batched encrypted data. It considers the input and output sizes, as well as the batch size, to determine the optimal rotation indices for aligning data during the operation.

Parameters:
  • batchSize – Number of batches encoded in the input ciphertexts.

  • outputSizes – Vector of output sizes for each layer.

  • inputSizes – Vector of input sizes for each layer.

  • rotationIndex – Index for rotation optimization.

Returns:

vector<int> Vector of rotation positions required for the fully connected layer.

vector<int> generate_batch_inputs_converter_rotation_positions(int batchSize, int inputChannels, int inputWidth)

Generate rotation positions for converting batched inputs.

This function calculates the rotation positions required for converting batched inputs into the appropriate format for processing. It considers the input dimensions, batch size, and number of channels to determine the optimal rotation indices for aligning data.

Parameters:
  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputChannels – Number of input channels.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

Returns:

vector<int> Vector of rotation positions required for input conversion.

vector<Ctext> he_batch_convolution(vector<Ctext> &encryptedInput, vector<vector<vector<Ptext>>> &kernelData, vector<Ptext> &biasInputs, int batchSize, int inputWidth, int inputChannels, int outputChannels, int kernelWidth, int padding = 0, int stride = 1)
vector<Ctext> he_batch_convolution_optimized(vector<Ctext> &encryptedInputs, vector<vector<vector<Ptext>>> &kernelData, vector<Ptext> &biasInputs, int batchSize, int inputWidth, int inputChannels, int outputChannels, int stride = 1)
vector<Ctext> he_batch_convolution_shortcut_optimized(vector<Ctext> &encryptedInputs, vector<vector<Ptext>> &kernelData, vector<Ptext> &biasInputs, int batchSize, int inputWidth, int inputChannels, int outputChannels, int stride)
vector<Ctext> he_batch_convolution_optimized(FHEONHEController &fheonHEController, vector<Ctext> &encryptedInputs, vector<vector<vector<vector<vector<double>>>>> &rawKernelData, vector<vector<double>> &rawBiasData, int batchSize, int inputWidth, int inputChannels, int outputChannels, int stride)

Perform an optimized secure convolution layer evaluation on batched encrypted data.

This function computes a convolution layer over multiple input channels for batched data under homomorphic encryption. It is optimized for the special case where stride = 1, kernel size = 3, and padding = 1, but also supports larger strides by applying striding across multiple channels simultaneously (multi-channel approach), improving efficiency for deep FHE-based networks.

For each output channel, the function multiplies input ciphertexts by the corresponding plaintext convolution kernels, sums contributions from all input channels, applies bias, and rotates/aggregates results across batches to produce the final output.

Note

  • This function assumes ciphertext packing follows contiguous layout: [batch0: channel_i feature map][batch1: channel_i feature map]…

  • Multi-channel striding is used to reduce the number of rotations and multiplications, making it efficient for FHE-based CNNs with multiple input channels.

Parameters:
  • encryptedInputs – Vector of ciphertexts, one per input channel, each encoding batched feature maps of size (batchSize × inputWidth × inputWidth).

  • kernelData – Convolution kernels, represented as a 3D vector of plaintexts: [outputChannel][inputChannel][kernelWeights].

  • biasInputs – Bias terms for each output channel (plaintexts).

  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

  • inputChannels – Number of input channels.

  • outputChannels – Number of output channels.

  • stride – Stride length used for the convolution operation.

Returns:

vector<Ctext> Vector of ciphertexts containing the encrypted results of the convolution layer, one ciphertext per output channel, with batch data packed contiguously: [batch0 output channel_i][batch1 output channel_i]…

vector<Ctext> he_batch_convolution_shortcut_optimized(FHEONHEController &fheonHEController, vector<Ctext> &encryptedInputs, vector<vector<vector<double>>> &rawKernelData, vector<vector<double>> &rawBiasData, int batchSize, int inputWidth, int inputChannels, int outputChannels, int stride)

Perform a secure convolution operation on batched encrypted data.

This function implements a convolutional layer in the encrypted domain using homomorphic encryption for batched inputs. Given a set of encrypted input feature maps, convolution kernels, and bias terms, it computes the convolution across multiple channels and batches while respecting the specified input dimensions, kernel size, padding, and stride.

For each output channel, the function multiplies rotated input ciphertexts by corresponding kernel weights, sums contributions from all input channels, applies the bias, and aligns outputs across batches using rotations and packing strategies. The result is one ciphertext per output channel, encoding all batches contiguously.

Note

  • This function assumes ciphertext packing follows contiguous layout: [batch0: channel_i feature map][batch1: channel_i feature map]…

  • Multi-channel striding is used to reduce the number of rotations and multiplications, making it efficient for FHE-based CNNs with multiple input channels.

Parameters:
  • encryptedInputs – Vector of ciphertexts, one per input channel, each encoding batched feature maps of size (batchSize × inputWidth × inputWidth).

  • kernelData – Convolution kernels represented as a 3D vector of plaintexts: [outputChannel][inputChannel][kernelWeights].

  • biasInputs – Bias terms for each output channel (plaintexts).

  • batchSize – Number of batches encoded in the input ciphertexts.

  • inputWidth – Width (and height) of the input feature maps (assumed square).

  • inputChannels – Number of input channels.

  • outputChannels – Number of output channels.

  • stride – Stride length used for the convolution operation.

Returns:

vector<Ctext> Vector of ciphertexts containing the encrypted results of the convolution layer, one ciphertext per output channel, with batch data packed contiguously: [batch0 output channel_i][batch1 output channel_i]…

vector<Ctext> he_batch_avgpool(vector<Ctext> &encryptedInputs, int batchSize, int inputWidth, int inputChannels, int kernelWidth, int stride = 2)
vector<Ctext> he_batch_globalpool(vector<Ctext> &encryptedInputs, int batchSize, int inputWidth, int inputChannels, int kernelWidth, int rotatePositions)

Perform homomorphic global pooling over batched encrypted inputs.

This function applies a global pooling operation on encrypted feature maps across the spatial dimensions for each input channel and batch element. For each channel, the encrypted input is rotated and summed to aggregate all spatial positions, merged across batches, and finally scaled using a plaintext reduction mask to compute the pooled result.

The computation is parallelized across channels using multiple threads.

Parameters:
  • encryptedInputs – Vector of ciphertexts representing encrypted feature maps, sized [inputChannels], where each ciphertext packs batched spatial data.

  • batchSize – Number of images packed in each ciphertext.

  • inputWidth – Width (and height) of the input feature map.

  • inputChannels – Number of input channels.

  • kernelWidth – Width of the pooling kernel (used for layout logic).

  • rotatePositions – Number of rotations required to align batch elements during ciphertext merging.

Returns:

A vector of ciphertexts, one per channel, where each ciphertext contains the globally pooled and scaled encrypted output.

Ctext he_batch_linear(Ctext &encryptedInput, vector<Ptext> &weightMatrix, Ptext &baisInput, int batchSize, int inputSize, int outputSize, int rotatePositions = 100)

Perform a homomorphic batched linear (fully connected) layer.

This function evaluates a fully connected layer over encrypted batched inputs using CKKS packing. The encrypted input is multiplied with a plaintext weight matrix, followed by rotations and slot-wise summations to compute inner products for each output neuron and each batch element.

Intermediate results are merged per batch, rotated into their correct global slot positions, and finally accumulated across batches. A plaintext bias is then added to produce the final encrypted output.

The computation is parallelized across output neurons and batch elements using multiple threads.

Parameters:
  • encryptedInput – Ciphertext containing packed batched input features.

  • weightMatrix – Vector of plaintexts representing the weight matrix, sized [outputSize], where each plaintext corresponds to one output neuron.

  • baisInput – Plaintext bias packed and aligned with the output slots.

  • batchSize – Number of inputs packed in the ciphertext.

  • inputSize – Number of input features per batch element.

  • outputSize – Number of output neurons.

  • rotatePositions – Slot grouping size used to correctly align merged batch outputs via rotations.

Returns:

A ciphertext containing the encrypted outputs of the linear layer for all batches and output neurons.
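A plaintext mock of the rotate-and-sum inner product this layer relies on (rotl emulates EvalRotate; a power-of-two inputSize and the helper names are assumptions for illustration):

```cpp
#include <vector>

// Cyclic left rotation, standing in for a homomorphic EvalRotate.
std::vector<double> rotl(const std::vector<double>& v, int k) {
    int n = (int)v.size();
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i) out[i] = v[(i + k) % n];
    return out;
}

// One output neuron's dot product under slot packing: multiply slots by the
// weight row (plaintext-ciphertext mult), then fold with log2(n) rotate-and-
// add steps so the full sum lands in slot 0. n must be a power of two.
double mock_packed_inner_product(std::vector<double> x,
                                 const std::vector<double>& w) {
    int n = (int)x.size();
    for (int i = 0; i < n; ++i) x[i] *= w[i];
    for (int r = 1; r < n; r <<= 1) {
        std::vector<double> rot = rotl(x, r);
        for (int i = 0; i < n; ++i) x[i] += rot[i];
    }
    return x[0];  // slot 0 now holds the dot product
}
```

The real layer additionally merges the per-neuron results into their global output slots and repeats this across batch elements in parallel.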

vector<Ctext> he_batch_linear_multiple_outputs(Ctext &encryptedInput, vector<Ptext> &weightMatrix, Ptext &baisInput, int batchSize, int inputSize, int outputSize)

Perform a homomorphic batched linear layer with per-batch outputs.

This function evaluates a fully connected (linear) layer over encrypted, batched inputs and returns a separate ciphertext for each batch element. The encrypted input is multiplied with plaintext weight vectors, followed by rotations and slot-wise summations to compute inner products for each output neuron.

For each batch element, the per-output ciphertexts are merged into a single ciphertext and a plaintext bias is added. Unlike the single-output variant, this function preserves batch separation in the output by returning a vector of ciphertexts, one per batch.

The computation is parallelized across output neurons and batch elements using multiple threads.

Parameters:
  • encryptedInput – Ciphertext containing packed batched input features.

  • weightMatrix – Vector of plaintexts representing the weight matrix, sized [outputSize], where each plaintext corresponds to one output neuron.

  • baisInput – Plaintext bias packed and aligned with the output slots.

  • batchSize – Number of inputs packed in the ciphertext.

  • inputSize – Number of input features per batch element.

  • outputSize – Number of output neurons.

Returns:

A vector of ciphertexts of size [batchSize], where each ciphertext contains the encrypted outputs of the linear layer for one batch element.

vector<Ctext> he_batch_relu(vector<Ctext> &encryptedInputs, vector<int> scaleValues, int inputChannels, int vectorSize, int polyDegree = 59)

Apply a homomorphic ReLU activation using Chebyshev polynomial approximation.

This function evaluates a ReLU activation on encrypted inputs using a Chebyshev polynomial approximation over a bounded interval. Each input channel is optionally rescaled using a plaintext mask to control the output magnitude before applying the polynomial.

The ReLU function is approximated as:

  f(x) = 0              if x < 0
  f(x) = scaleVal * x   otherwise

where the approximation is evaluated over the interval [lowerBound, upperBound]. The computation is parallelized across input channels.

Parameters:
  • encryptedInputs – Vector of ciphertexts representing encrypted inputs, sized [inputChannels].

  • scaleValues – Per-channel scaling factors applied before ReLU.

  • inputChannels – Number of input channels.

  • vectorSize – Number of valid slots per ciphertext.

  • polyDegree – Degree of the Chebyshev polynomial approximation.

Returns:

A vector of ciphertexts where each element contains the encrypted ReLU-activated output for the corresponding input channel.
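A plaintext sketch of the approximation behind this activation: max(0, x) on [-1, 1] expanded in a Chebyshev series, with coefficients taken at Chebyshev nodes and evaluation via the Clenshaw recurrence. The FHE backend evaluates the same kind of series homomorphically; the coefficient construction here is illustrative, not necessarily the library's exact one:

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// Degree-d Chebyshev series coefficients for relu(x) = max(0, x) on [-1, 1],
// computed from samples at the d+1 Chebyshev nodes.
std::vector<double> chebyshev_coeffs_relu(int degree) {
    const double PI = std::acos(-1.0);
    int n = degree + 1;
    std::vector<double> c(n, 0.0);
    for (int k = 0; k < n; ++k) {
        double sum = 0.0;
        for (int j = 0; j < n; ++j) {
            double xj = std::cos(PI * (j + 0.5) / n);  // Chebyshev node
            sum += std::max(0.0, xj) * std::cos(PI * k * (j + 0.5) / n);
        }
        c[k] = 2.0 * sum / n;
    }
    c[0] /= 2.0;  // T_0 term is halved in this convention
    return c;
}

// Clenshaw recurrence: evaluates sum_k c[k] * T_k(x) stably.
double clenshaw_eval(const std::vector<double>& c, double x) {
    double b1 = 0.0, b2 = 0.0;
    for (int k = (int)c.size() - 1; k >= 1; --k) {
        double b0 = 2.0 * x * b1 - b2 + c[k];
        b2 = b1;
        b1 = b0;
    }
    return x * b1 - b2 + c[0];
}
```

At the default degree of 59 the approximation tracks ReLU closely away from the kink at 0, which is why inputs are first rescaled into the approximation interval.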

Ctext he_batch_inputs_converter(vector<Ctext> &encryptedInputs, int batchSize, int inputChannels, int inputWidth)

Convert channel-separated encrypted inputs into a single batched ciphertext.

This function takes encrypted inputs where each ciphertext corresponds to one input channel. It aligns all channel data together within a batch, producing a single ciphertext that represents all inputs across channels and batches. This transformation is necessary when passing data into fully connected layers, which expect a flat neuron-style input rather than channel-separated structures (as in convolutional layers).

Note

  • Internally, the function:

    • Uses a cleaning mask to isolate values in each channel.

    • Applies rotations to align channel data.

    • Aggregates all channel ciphertexts for each batch.

    • Applies additional rotations to stack batches correctly.

  • This ensures compatibility between convolutional outputs (channel-based) and fully connected layers (neuron-based).

Parameters:
  • encryptedInputs – A vector of ciphertexts, where each ciphertext corresponds to one input channel.

  • batchSize – Number of batches to process.

  • inputChannels – Number of input channels per batch.

  • inputWidth – Width (and height) of each input channel's feature map (assumed square).

Returns:

A single ciphertext that encodes the entire batch across all channels, aligned in a flat neuron-compatible format.
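A plaintext mock of the slot shuffle this converter performs, rearranging the channel-separated layout [b0: ch_i map][b1: ch_i map]… (one vector per channel) into the flat neuron layout a linear layer expects. The real function achieves the same permutation with cleaning masks and rotations; the helper name is illustrative:

```cpp
#include <vector>

// Rearrange per-channel, batch-contiguous vectors into one flat vector:
//   out = [b0: ch0 map, ch1 map, ...][b1: ch0 map, ch1 map, ...]...
std::vector<double> mock_inputs_converter(
        const std::vector<std::vector<double>>& channels,
        int batchSize, int mapSize) {
    int inputChannels = (int)channels.size();
    std::vector<double> out(batchSize * inputChannels * mapSize);
    for (int b = 0; b < batchSize; ++b)
        for (int c = 0; c < inputChannels; ++c)
            for (int s = 0; s < mapSize; ++s)
                out[(b * inputChannels + c) * mapSize + s] =
                    channels[c][b * mapSize + s];
    return out;
}
```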

vector<Ctext> he_batch_sum_ciphertexts(vector<Ctext> &firstEncryptedInputs, vector<Ctext> &secondEncryptedInputs, int inputChannels)

Securely sum two sets of encrypted inputs channel-wise.

This function takes two vectors of ciphertexts, each representing encrypted input data across multiple channels, and performs homomorphic addition on corresponding elements. The result is a new vector of ciphertexts where each ciphertext is the sum of the two inputs for that channel.

Parameters:
  • firstEncryptedInputs – Vector of ciphertexts for the first input, sized [inputChannels].

  • secondEncryptedInputs – Vector of ciphertexts for the second input, sized [inputChannels].

  • inputChannels – Number of channels (i.e., number of ciphertexts in each input vector).

Returns:

A vector of ciphertexts where each element is the homomorphic sum of the corresponding inputs from the two provided vectors.

Public Members

int num_slots = 1 << 14
int baseIndex = 1024