Automatic Tracking System For Organs In Robotic Surgery
By analyzing the endoscopic video stream of a generic surgical robot, the methodology that is the object of this patent overlays the 3D model of a generic organ onto its 2D counterpart in the video stream, using estimated position and rotation values, and then tracks it in real time.
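The overlay step described above amounts to applying an estimated pose (position and rotation) to the 3D model and projecting it onto the 2D image plane. The following is a minimal sketch of that idea for a single model point, assuming a simple pinhole camera and a rotation about the camera axis; all function names and parameter values here are illustrative, not from the patent.

```python
import math

def project_point(point, rotation_deg, translation,
                  focal=800.0, cx=640.0, cy=360.0):
    """Apply an estimated pose to a 3D model point and project it to 2D pixels.

    point        -- (x, y, z) coordinate of a 3D model vertex
    rotation_deg -- estimated rotation about the camera's Z axis (a full
                    method would use a 3x3 rotation matrix for all axes)
    translation  -- (tx, ty, tz) estimated position in camera coordinates
    focal, cx, cy -- illustrative pinhole intrinsics (pixels)
    """
    theta = math.radians(rotation_deg)
    x, y, z = point
    # Rotate the point in the XY plane.
    xr = x * math.cos(theta) - y * math.sin(theta)
    yr = x * math.sin(theta) + y * math.cos(theta)
    # Translate into camera coordinates.
    xc, yc, zc = xr + translation[0], yr + translation[1], z + translation[2]
    # Pinhole projection onto the image plane.
    u = focal * xc / zc + cx
    v = focal * yc / zc + cy
    return u, v
```

A point at the model origin with the pose `translation=(0, 0, 2)` lands at the image center `(cx, cy)`, which is a quick sanity check for the projection.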
The proposed method uses Augmented Reality (AR) and Artificial Intelligence (AI) technologies as a navigation aid in minimally invasive robotic surgery (MIS), where the surgical field is viewed through an endoscopic camera. The aim of the methodology in this patent proposal is to increase the information flow available to the surgeon through AR techniques. The video stream is first processed by a neural network that localizes the different elements within the scene. The organ is then isolated, and a second neural network predicts its rotation. A 3D model of the patient-specific organ (built from pre-operative imaging with graphical modeling software) is then projected via AR technologies to assist the surgeon during surgery. Finally, a third tracking neural network assesses the displacements of the camera and adapts the position and rotation of the overlaid organ accordingly.
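The three-network pipeline above can be sketched as a small orchestration class: one model segments the scene, a second predicts the initial organ rotation, and a third converts camera displacements into pose updates on later frames. This is a hypothetical sketch of the control flow only; the class, function names, and pose representation (2D position plus a single rotation angle) are placeholder assumptions, not from the patent.

```python
def centroid(mask):
    """Mean of the pixel coordinates in a mask, given as (x, y) pairs."""
    xs = [p[0] for p in mask]
    ys = [p[1] for p in mask]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

class OrganOverlayPipeline:
    """Hypothetical orchestration of the three neural networks described above."""

    def __init__(self, segmenter, rotation_net, tracker, organ_model):
        self.segmenter = segmenter        # NN 1: localizes scene elements
        self.rotation_net = rotation_net  # NN 2: predicts organ rotation
        self.tracker = tracker            # NN 3: estimates camera displacement
        self.organ_model = organ_model    # patient-specific 3D model to overlay
        self.pose = None                  # current (position, rotation)

    def initialize(self, frame):
        """First frame: segment the organ and predict its initial pose."""
        mask = self.segmenter(frame)
        position = centroid(mask)
        rotation = self.rotation_net(frame, mask)
        self.pose = (position, rotation)
        return self.pose

    def update(self, frame):
        """Later frames: adapt the pose from the estimated camera displacement."""
        (dx, dy), drot = self.tracker(frame)
        (x, y), rotation = self.pose
        self.pose = ((x + dx, y + dy), rotation + drot)
        return self.pose
```

In a real system the three callables would be trained networks and the pose would feed the AR renderer each frame; here they can be stubbed with simple functions to exercise the flow.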
The application can be used in the medical field as an intra-operative navigation support during robotic surgery. Its main advantages are:
- Fast identification of surgical targets and of critical structures to be excluded from manipulation;
- Prevention of surgeon disorientation by increasing spatial awareness, resulting in less time spent in the operating room;
- Reduction of the surgeons' cognitive load during surgery, since they no longer have to mentally match information from different sources to the scene.