Abstract

Convolutional Neural Networks (CNNs) are widely used in computer vision, but their massive computational cost and parameter redundancy hinder deployment on resource-constrained devices (e.g., edge terminals). Existing filter pruning methods often struggle to balance two critical goals: aggressive redundancy reduction and effective preservation of task-critical information, leading either to excessive accuracy loss or to insufficient compression. To address this challenge, we are the first to jointly exploit k-core decomposition and information entropy in a unified pruning criterion, and we instantiate this idea in a novel graph-entropy collaborative framework that achieves Pareto-optimal compression-accuracy trade-offs. The key steps are as follows: First, we use perceptual hashing (pHash) to calculate the similarity of output feature maps between filters, then model each filter as a node in an undirected graph; edges are established only when filter similarity exceeds a predefined threshold, forming a "redundancy graph" that quantifies inter-filter redundancy. Second, k-core decomposition is applied to this graph to identify high-order redundant substructures, which helps locate redundant filters at the structural level. Finally, information entropy is introduced to evaluate the "informational value" of each node (filter) in the k-core: only filters with low redundancy and high information content are retained, ensuring minimal loss of critical features. Extensive experiments are conducted on the CIFAR-10 and CIFAR-100 datasets, using representative CNN architectures (VGGNet-16, ResNet-56/110, DenseNet-40). Specifically, VGGNet-16 achieves a 65.8% reduction in floating point operations (FLOPs) and an 88.8% reduction in parameters while experiencing only a 1.24% decrease in Top-1 accuracy.
ResNet-56 attains a 50.1% reduction in FLOPs with a nearly imperceptible accuracy loss of 0.03%, markedly surpassing the Fire Together Wire Together (FTWT) method, which reduces FLOPs by 54% at the cost of a 1.38% accuracy decline. DenseNet-40 accomplishes a 76.5% FLOPs reduction with a 1.55% accuracy decrease, demonstrating the method's strong applicability for high-intensity compression of densely connected networks. Furthermore, the method's scalability is validated on the large-scale ImageNet dataset with ResNet-50, where it achieves a 73.65% FLOPs reduction with competitive accuracy, underscoring its practicality for real-world applications. These outcomes collectively affirm the effectiveness and broad applicability of the proposed graph-entropy collaborative pruning framework.
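The three-step pipeline described in the abstract (pHash similarity, redundancy-graph construction, k-core decomposition plus entropy-based selection) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the hash here is a simple average-hash stand-in for a DCT-based pHash, and the function names, thresholds, and `keep_ratio` parameter are all assumptions introduced for the example.

```python
import numpy as np
import networkx as nx

def phash(feature_map, hash_size=8):
    """Simplified perceptual hash of one filter's output feature map.
    (An average-hash variant: downsample by block means, threshold at
    the global mean. The paper uses pHash; this is an illustrative
    stand-in with the same binary-fingerprint interface.)"""
    h, w = feature_map.shape
    fm = feature_map[:h // hash_size * hash_size, :w // hash_size * hash_size]
    blocks = fm.reshape(hash_size, fm.shape[0] // hash_size,
                        hash_size, fm.shape[1] // hash_size)
    means = blocks.mean(axis=(1, 3))
    return (means > means.mean()).flatten()

def hamming_similarity(h1, h2):
    """Fraction of matching hash bits, in [0, 1]."""
    return 1.0 - np.mean(h1 != h2)

def entropy(feature_map, bins=32):
    """Shannon entropy of a feature map's activation histogram,
    used as the 'informational value' of a filter."""
    hist, _ = np.histogram(feature_map, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))

def select_filters(feature_maps, sim_threshold=0.9, k=2, keep_ratio=0.5):
    """Return sorted indices of filters to KEEP for one layer."""
    n = len(feature_maps)
    hashes = [phash(f) for f in feature_maps]
    # Step 1-2: build the redundancy graph (edge = high pHash similarity).
    G = nx.Graph()
    G.add_nodes_from(range(n))
    for i in range(n):
        for j in range(i + 1, n):
            if hamming_similarity(hashes[i], hashes[j]) > sim_threshold:
                G.add_edge(i, j)
    # Step 2: k-core decomposition isolates the densely redundant substructure.
    core = set(nx.k_core(G, k).nodes)
    # Step 3: inside the redundant core, retain only high-entropy filters.
    scored = sorted(core, key=lambda i: entropy(feature_maps[i]), reverse=True)
    keep_from_core = set(scored[:max(1, int(len(scored) * keep_ratio))])
    return sorted((set(range(n)) - core) | keep_from_core)
```

Filters outside the k-core are kept as non-redundant; within the core, the entropy ranking decides which representatives of each redundant cluster survive, matching the "low redundancy, high information content" criterion stated above.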

Document information

Published on 18/12/25
Accepted on 18/11/25
Submitted on 17/09/25

Volume Online First, 2025
DOI: 10.23967/j.rimni.2025.10.73400
Licence: CC BY-NC-SA license