Neuromorphic computing realizes low-latency and low-power computing by emulating the neural structure and operation of the human brain, and is considered a key research area for third-generation artificial intelligence. However, current neuromorphic computing faces the problems of huge synaptic memory consumption and complex neuron calculations. This paper proposes a batch normalization (BN)-free weight-binarized spiking neural network (SNN) based on hardware-saving integrate-and-fire (IF) neurons to reduce storage requirements and improve the computational efficiency of neuromorphic computing. A hardware-friendly backpropagation-through-time (BPTT)-based algorithm and a surrogate gradient (SG) function are proposed to calculate the gradients of the "integrate" and "fire" processes of the IF neuron, respectively. Weight binarization is carried out during training to reduce storage requirements, and spatio-temporal BN operations are introduced to ensure high performance. During inference, a simple adaptive-threshold IF neuron model is proposed to achieve an effect equivalent to the computationally expensive spatio-temporal BN operation without any performance loss. The proposed BN-free binarized SNNs based on hardware-saving IF neurons achieve competitive accuracies of 99.36%, 94.79%, 90.39%, and 67.10% on the N-MNIST, DvsGesture, N-TIDIGITS18, and DVS-CIFAR10 datasets, respectively, which are comparable to full-precision SNNs, while the weight sizes are significantly reduced, by approximately 97%. Furthermore, robustness experiments show that the binary SNN is more robust to weight noise than the full-precision SNN. This paper presents an efficient algorithm-hardware co-design paradigm for hardware-friendly, high-performance neuromorphic computing.
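To make the two core training ideas named above concrete, the following is a minimal sketch (not the authors' code) of a weight-binarized IF layer trained with BPTT: weight binarization via a straight-through estimator, and a rectangular surrogate gradient for the non-differentiable "fire" step. All names here (BinarizeWeight, SurrogateSpike, BinarizedIFLayer, the threshold v_th, the surrogate width alpha, the soft reset) are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class BinarizeWeight(torch.autograd.Function):
    """Sign-binarize weights in the forward pass; straight-through gradient in backward."""

    @staticmethod
    def forward(ctx, w):
        return torch.sign(w)

    @staticmethod
    def backward(ctx, grad_output):
        return grad_output  # straight-through estimator


class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular surrogate gradient in backward."""

    @staticmethod
    def forward(ctx, v_minus_th, alpha=1.0):
        ctx.save_for_backward(v_minus_th)
        ctx.alpha = alpha
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        # gradient is 1/alpha inside a window of width alpha around the threshold
        surrogate = (v_minus_th.abs() < ctx.alpha / 2).float() / ctx.alpha
        return grad_output * surrogate, None


class BinarizedIFLayer(nn.Module):
    """Fully connected layer with binarized weights driving IF neurons over time."""

    def __init__(self, in_features, out_features, v_th=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.1)
        self.v_th = v_th  # fixed threshold; the paper's adaptive threshold would absorb BN at inference

    def forward(self, spikes_in):
        # spikes_in: (T, batch, in_features) binary spike trains
        timesteps = spikes_in.shape[0]
        v = torch.zeros(spikes_in.shape[1], self.weight.shape[0],
                        device=spikes_in.device)
        spikes_out = []
        w_bin = BinarizeWeight.apply(self.weight)
        for t in range(timesteps):
            v = v + F.linear(spikes_in[t], w_bin)    # "integrate"
            s = SurrogateSpike.apply(v - self.v_th)  # "fire"
            v = v - s * self.v_th                    # soft reset after a spike
            spikes_out.append(s)
        return torch.stack(spikes_out)


if __name__ == "__main__":
    T, B, n_in, n_out = 4, 2, 16, 8
    layer = BinarizedIFLayer(n_in, n_out)
    x = (torch.rand(T, B, n_in) < 0.3).float()  # random input spike trains
    out = layer(x)
    out.sum().backward()                         # BPTT through the surrogate gradient
    print(out.shape, layer.weight.grad.shape)
```

In this sketch only the sign of each weight is used in the forward pass, so a deployed model needs to store just one bit per weight, which is the source of the storage reduction the abstract reports; the abstract's BN-free inference trick (folding spatio-temporal BN into an adaptive firing threshold) is not reproduced here.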