Faculty Mentor: Caiwen Ding (University of Connecticut)
Neural networks are both computationally intensive and memory intensive, which makes them difficult to deploy on embedded systems. Moreover, conventional networks fix their architecture before training starts, so training cannot improve the architecture. To address these limitations, we need a method that reduces the storage and computation required by neural networks. In this project, students are asked to implement a pruning algorithm for both convolutional and fully connected layers to achieve a significant reduction in the number of weights. Students will obtain a highly sparse DNN model that retains high prediction accuracy.
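The project does not prescribe a specific pruning criterion; one common starting point is magnitude pruning, which removes the weights with the smallest absolute values. A minimal NumPy sketch (the function name and interface are illustrative, not part of the project specification) that works for both fully connected (2-D) and convolutional (4-D) weight tensors:

```python
import numpy as np

def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight tensor.

    `sparsity` is the fraction of weights to remove (e.g. 0.9 removes 90%).
    Returns the pruned weights and a binary mask; the mask can be applied
    after each gradient update so pruned weights stay zero during retraining.
    """
    threshold = np.quantile(np.abs(weights).ravel(), sparsity)
    mask = np.abs(weights) > threshold
    return weights * mask, mask

# Example: prune a fully connected layer's weight matrix to 90% sparsity.
rng = np.random.default_rng(0)
w = rng.normal(size=(64, 128))
pruned, mask = magnitude_prune(w, 0.9)
```

In practice, pruning is usually interleaved with retraining: prune a fraction of the weights, fine-tune the remaining ones with the mask held fixed, and repeat until the target sparsity is reached while accuracy stays high.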