Real-time detection and segmentation of organs and tools during surgery have been significant challenges in the development of robotic surgery. Most existing detection methods are unsuitable for the surgical environment, where lighting conditions, occlusions, and anatomical structures can vary significantly. This study presents an organ and surgical-tool detection and segmentation algorithm based on YOLOv8 (You Only Look Once, version 8), a state-of-the-art object detection framework, trained on a manually annotated dataset of frames taken from actual laparoscopic cholecystectomy surgeries. Four experiments were conducted using combinations of small and extra-large model sizes with the original and a modified dataset, and the resulting models were evaluated and tested in real time on a new surgical video. The results show that the method can provide real-time feedback to the surgeon by accurately locating and segmenting the target organs displayed in the surgical video. The method outperforms the baseline methods, achieving bounding-box mean average precision (mAP50) and precision (P) of (50.2%, 51.6%), (52.8%, 76.9%), (83.2%, 81.1%), and (86.3%, 85.7%) for the first, second, third, and fourth experiments, respectively, and segmentation-mask mAP50 and precision of (50.5%, 51.8%), (54.3%, 76.1%), (82.6%, 80.4%), and (86.0%, 85.4%), respectively. The best-performing model runs at approximately 13.1 ms per frame. This novel application could be a stepping stone for future work, such as displaying the results to the surgeon on a heads-up display (HUD) to help navigate the surgical scene, or integration into robotic surgery.
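
For illustration only, the sketch below shows how a YOLOv8 segmentation model of this kind could be trained and run on surgical video with the ultralytics Python package. The dataset configuration file, video filename, and training hyperparameters are placeholders and assumptions, not the settings or code used in this study.

```python
from ultralytics import YOLO

# Load a pretrained YOLOv8 segmentation model (small variant; the study also
# reports an extra-large variant). The weights filename follows the standard
# ultralytics naming convention.
model = YOLO("yolov8s-seg.pt")

# Fine-tune on the manually annotated laparoscopic frames. "cholec_data.yaml"
# is a hypothetical dataset config listing the organ/tool classes and image
# paths; epochs and image size are illustrative values.
model.train(data="cholec_data.yaml", epochs=100, imgsz=640)

# Run inference on a new surgical video. stream=True yields results frame by
# frame, so boxes and masks can be overlaid as real-time feedback.
for result in model.predict(source="surgery_video.mp4", stream=True):
    boxes = result.boxes  # detected bounding boxes for this frame
    masks = result.masks  # segmentation masks for this frame
```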