TY - GEN
T1 - Towards Efficient Hybrid Quantum-Classical Learning Models
AU - McCollum, Aaron
AU - Bogaraju, Sharatchandra Varma
AU - McLaughlin, James
AU - Amari, Abdelkerim
AU - Soman, Sunish Kumar Orappanpara
PY - 2025/12/1
Y1 - 2025/12/1
N2 - With the limitations of current noisy intermediate-scale quantum (NISQ) devices, there is a need for hybrid quantum-classical learning models to operate more efficiently within existing hardware constraints. The design of cost-effective hybrid quantum-classical learning systems is therefore imperative for various applications, including biosignal processing in the healthcare sector. This motivates the investigation into reducing the computational complexity of a hybrid quantum-classical neural network. We propose a new hybrid model combining a quantum neural network (QNN) with a binary neural network (BNN), termed quBNN, with the aim of reducing model complexity. For benchmarking, we consider a classical multilayer perceptron (MLP) and a standard non-binarised version of quBNN that combines a QNN with an MLP, named quMLP. We evaluate all models on a multi-class dataset containing multimodal biosignals. We find that quMLP can outperform the classical MLP with minimal additional complexity, while quBNN maintains performance comparable with the classical MLP at reduced computational complexity. We apply pruning and quantisation techniques to the QNN within the proposed quBNN model, aiming to further reduce its complexity. This compression yields substantial reductions in parameter and quantum gate count when compiled for quantum hardware, albeit with performance degradation.
AB - With the limitations of current noisy intermediate-scale quantum (NISQ) devices, there is a need for hybrid quantum-classical learning models to operate more efficiently within existing hardware constraints. The design of cost-effective hybrid quantum-classical learning systems is therefore imperative for various applications, including biosignal processing in the healthcare sector. This motivates the investigation into reducing the computational complexity of a hybrid quantum-classical neural network. We propose a new hybrid model combining a quantum neural network (QNN) with a binary neural network (BNN), termed quBNN, with the aim of reducing model complexity. For benchmarking, we consider a classical multilayer perceptron (MLP) and a standard non-binarised version of quBNN that combines a QNN with an MLP, named quMLP. We evaluate all models on a multi-class dataset containing multimodal biosignals. We find that quMLP can outperform the classical MLP with minimal additional complexity, while quBNN maintains performance comparable with the classical MLP at reduced computational complexity. We apply pruning and quantisation techniques to the QNN within the proposed quBNN model, aiming to further reduce its complexity. This compression yields substantial reductions in parameter and quantum gate count when compiled for quantum hardware, albeit with performance degradation.
KW - binary neural networks (BNN)
KW - hybrid quantum-classical models
KW - pruning
KW - quantisation
KW - quantum machine learning (QML)
KW - quantum neural networks (QNN)
UR - https://pure.ulster.ac.uk/en/publications/095aaec6-5196-4e6c-9e12-416fcf742488
U2 - 10.1109/qce65121.2025.10494
DO - 10.1109/qce65121.2025.10494
M3 - Conference contribution
SN - 979-8-3315-5736-2
SN - 979-8-3315-5737-9
T3 - 2025 IEEE International Conference on Quantum Computing and Engineering (QCE)
SP - 659
EP - 660
BT - 2025 IEEE International Conference on Quantum Computing and Engineering (QCE)
PB - IEEE
T2 - 2025 IEEE International Conference on Quantum Computing and Engineering (QCE)
Y2 - 30 August 2025 through 5 September 2025
ER -