In the context of next-generation radio-astronomical surveys, automated object detection and segmentation are essential tasks to support astrophysics research. Indeed, manually identifying astronomical sources (e.g., galaxies) in the enormous volume of acquired images is largely unfeasible, greatly limiting the potential of big data in the field. As a consequence, astrophysics research has directed its attention, with increasing interest given the recent success of AI, to learning-based computer vision methods. Several automated visual source extractors have been proposed, but they mainly pose source identification as an object detection task. While this may reduce the time needed for visual inspection, it presents an evident shortcoming for objects consisting of multiple, spatially distant parts (e.g., the same galaxy appearing as a set of isolated objects). This specific limitation can be overcome through semantic segmentation. Consequently, in this paper we evaluate the performance of multiple semantic segmentation models for pixelwise dense prediction in astrophysical images, with the objective of identifying and segmenting galaxies, sidelobes, and compact sources. Performance analysis is carried out on a dataset consisting of over 9,000 images and shows that state-of-the-art segmentation models yield accurate results, thus providing a baseline for future work. We also employ the output segmentation maps for object detection, obtaining better results than those of the Mask R-CNN-based detectors that are widely used in the field.
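As a minimal illustration of how output segmentation maps can be turned into object detections, the sketch below (the function name and toy data are hypothetical, not taken from the paper) groups foreground pixels of a binary segmentation map into connected components and derives one bounding box per component, so that spatially distant parts remain identifiable as separate or mergeable regions:

```python
import numpy as np
from scipy import ndimage

def boxes_from_mask(mask):
    """Derive bounding boxes from a binary segmentation map
    via connected-component labeling."""
    labeled, n_components = ndimage.label(mask)
    boxes = []
    for sl in ndimage.find_objects(labeled):
        rows, cols = sl
        # Box as (x0, y0, x1, y1) in pixel coordinates
        boxes.append((cols.start, rows.start, cols.stop, rows.stop))
    return boxes

# Toy 8x8 map with two disjoint "sources"
mask = np.zeros((8, 8), dtype=int)
mask[1:3, 1:3] = 1
mask[5:7, 4:8] = 1
print(boxes_from_mask(mask))  # one box per connected component
```

In practice, per-class maps (e.g., galaxy, sidelobe, compact source) would each be processed this way, and boxes belonging to the same physical source could subsequently be merged.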