Deep learning (DL)-based reconstruction has been introduced in CT, with two major manufacturers offering such methods in the clinic, trained mostly on patient data (or a combination of patient and phantom data). Our purpose was to investigate the influence of DL-based reconstruction on object detectability compared with the current standard of iterative reconstruction in routine CT head protocols, using a model observer to analyze the detectability of lesion-like objects (brain, bone, and lung tissue equivalent; 5 mm diameter, 25 mm length) in a commercial anthropomorphic head phantom. The phantom was scanned 10 times on two CT systems (same manufacturer, different models) with a routine head protocol, and images were reconstructed with filtered back projection (FBP), iterative reconstruction (IR), and DL-based methods. As input for the model observer, ROIs centered on the locations of the cylinders were extracted, and for each of them four nearby background locations were selected. The ROI locations in the phantom were analogous for both scanners' data. The non-prewhitening matched filter with an eye filter (NPWE) model observer was applied (Burgess eye filter, peak at 4 cy/deg, 50 cm eye-monitor distance). On visual inspection, the phantom brain background ROIs showed differences in noise texture between the reconstruction methods, with a more uniform distribution for the DL-based method on both CT systems. The average d' and range for system 1 were: lung: FBP -124.9 (-178.2, -99.1), IR -126.7 (-188.2, -102.9), DL -136.2 (-181.9, -119.3); bone: FBP 206.7 (166.7, 269.7), IR 215.4 (175.8, 278.1), DL 268.3 (215.3, 339.5); soft tissue: FBP -14.6 (-19.6, -9.8), IR -15.5 (-20.7, -10.2), DL -18.8 (-24.6, -10.6). The NPWE model yielded consistently higher d' values (in absolute value) for the DL-based reconstructed images compared with IR and FBP for all three materials on both systems.
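The sketch below illustrates how a spatial-domain NPWE observer of the kind described above can be computed from extracted ROI stacks. It is a minimal example, not the study's implementation: the eye-filter parameterization (exponent 1.3 with a Gaussian roll-off peaking at 4 cy/deg), the cy/deg-to-cy/mm conversion via the 50 cm viewing distance, the pixel size, and the function names are all illustrative assumptions.

```python
# Minimal sketch of a spatial-domain NPWE model observer, assuming signal-present
# and signal-absent ROI stacks have already been extracted (shape: N x H x W).
# Parameter values and function names are illustrative, not the study's settings.
import numpy as np

def burgess_eye_filter(shape, pixel_mm, view_dist_mm=500.0, peak_cyc_deg=4.0, n=1.3):
    """Radially symmetric eye filter E(f) = f^n * exp(-c f^2), peaking at peak_cyc_deg."""
    fy = np.fft.fftfreq(shape[0], d=pixel_mm)             # cycles/mm
    fx = np.fft.fftfreq(shape[1], d=pixel_mm)
    f_mm = np.sqrt(fx[None, :]**2 + fy[:, None]**2)
    mm_per_deg = view_dist_mm * np.tan(np.deg2rad(1.0))   # display mm subtended by 1 degree
    f_deg = f_mm * mm_per_deg                              # cycles/degree
    c = n / (2.0 * peak_cyc_deg**2)                        # places the filter peak at peak_cyc_deg
    E = f_deg**n * np.exp(-c * f_deg**2)
    return E / E.max()

def npwe_dprime(signal_rois, background_rois, pixel_mm):
    """d' of the NPWE observer for a signal-known-exactly detection task."""
    E = burgess_eye_filter(signal_rois.shape[1:], pixel_mm)
    # Expected signal: difference of the mean signal-present and mean background ROI.
    s_hat = signal_rois.mean(axis=0) - background_rois.mean(axis=0)
    # NPWE template: expected signal filtered twice by the eye filter (E^T E s).
    template = np.real(np.fft.ifft2(np.fft.fft2(s_hat) * E**2))
    # Test statistic: inner product of the template with each ROI.
    lam_sig = np.array([np.vdot(template, g).real for g in signal_rois])
    lam_bkg = np.array([np.vdot(template, g).real for g in background_rois])
    pooled_var = 0.5 * (lam_sig.var(ddof=1) + lam_bkg.var(ddof=1))
    return (lam_sig.mean() - lam_bkg.mean()) / np.sqrt(pooled_var)

# Hypothetical usage (e.g., 10 repeat scans, 4 background ROIs per lesion location):
# d = npwe_dprime(sig_rois, bkg_rois, pixel_mm=0.45)
```

Repeating this computation per material, reconstruction method, and CT system would yield d' distributions comparable to those reported above; the sign of d' follows the sign of the lesion contrast relative to the brain-equivalent background.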