In recent years, Spin-Transfer-Torque Magnetic Random Access Memory (STT-MRAM) has been considered one of the most promising non-volatile memory candidates for in-memory computing. However, the system-level performance gains of using STT-MRAM for in-memory computing at deeply scaled nodes have not been assessed against more mature memory technologies. In this letter, we present perpendicular magnetic tunnel junction (pMTJ) STT-MRAM devices at the 28 nm and 7 nm nodes. We evaluate the system-level performance of convolutional neural network (CNN) inference with STT-MRAM arrays in comparison to Static Random Access Memory (SRAM), benchmarking the two in terms of area, leakage power, energy, and latency from the 65 nm to the 7 nm technology node. Our results show that STT-MRAM continues to provide a ~5× smaller synaptic core area, ~20× lower leakage power, and ~7× lower energy than SRAM as both technologies are scaled from 65 nm to 7 nm. With the growing need for low-power computation in a broad range of applications such as the internet of things (IoT) and neural networks (NNs), STT-MRAM can offer energy-efficient, high-density in-memory computing.