
Computer Science Projects

Over the last few years, I've loved working on a variety of computer science projects and building new skills along the way. I am eager to continue working on impactful real-world problems like these in the future.


01

Using a Neural Network to Identify the Vanishing Point of Surgical Tools in Robot-Assisted Minimally Invasive Surgery

Pioneer Academics Research
Submitted to Journal of Emerging Investigators

Robot-assisted minimally invasive surgery (RMIS) is becoming increasingly popular thanks to advantages such as higher precision and shorter patient recovery times. During surgery, force feedback from the tool is essential to ensure that excess pressure is not applied, as this could damage the tissue. Force sensors are currently used for this, but they are very expensive and can only be used once. In this paper, I explored whether computer vision can substitute for force sensors in estimating the applied force, which would significantly reduce the cost of RMIS.

A crucial step in this process is finding the vanishing point of the surgical tool: the point where the tool's two parallel edges appear to converge. Traditional methods, such as edge detection and the minimum-area enclosing triangle method, require full visibility, so they do not work well when the tool is obstructed by tissue (occlusion). Neural networks have successfully located vanishing points in natural scenes (e.g. roads, train tracks), but it was unclear whether they would work for surgical tools under occlusion, so I explored this approach in my research. The neural network identified the vanishing point surprisingly well even with occlusion, so this method has great potential to reduce the cost of robot-assisted surgery, enabling widespread use and better outcomes for millions of patients.
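To give a concrete picture of the approach (the full architecture is described in the manuscript, so the layer sizes and names below are placeholders I chose), here is a minimal sketch of the general idea: a small convolutional network trained to regress the vanishing point's normalized (x, y) coordinates directly from an image of the surgical scene.

```python
import torch
import torch.nn as nn

class VanishingPointNet(nn.Module):
    """Illustrative CNN that regresses a normalized (x, y) vanishing point.
    The layer sizes are placeholders, not the architecture used in the paper."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(64, 2)  # predicted (x, y), normalized to [0, 1]

    def forward(self, x):
        return self.head(self.features(x).flatten(1))

model = VanishingPointNet()
loss_fn = nn.MSELoss()  # distance between predicted and labeled vanishing points
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

# One illustrative training step on a dummy batch of 224x224 frames.
images = torch.randn(8, 3, 224, 224)
targets = torch.rand(8, 2)  # ground-truth vanishing points, normalized
optimizer.zero_grad()
loss = loss_fn(model(images), targets)
loss.backward()
optimizer.step()
```

Because a network like this learns from the whole scene rather than from clean tool edges alone, it can still make a sensible prediction when part of the tool is hidden by tissue.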


I have submitted this work to the Journal of Emerging Investigators with Prof. Yun-Hsuan (Melody) Su, my mentor from Pioneer, as co-author.


02

Deep Learning for Aerial Vision-and-Language Navigation: Control a Drone with Language Instructions

Science Internship Program at UC Santa Cruz


Drones are used in a variety of applications, such as agriculture, search-and-rescue operations, and environmental monitoring. However, controlling a drone for such complex tasks requires considerable skill, which makes widespread deployment difficult. In this research project, we built vision-and-language models that let people instruct a drone as if they were talking to a person, making it much easier to control. For the drone to respond to an instruction like "Turn left after the next building", it needs a computer vision model that can identify objects such as cars, trees, roads, and buildings. By tuning the semantic segmentation model's parameters, I achieved an average accuracy of 97%. Next, we wrote a program to convert user inputs into instructions for the drone, and finally we integrated the segmentation model with this instruction interpreter. We tested the system in a drone simulator and showed that vision-and-language navigation makes it dramatically easier to control drones.
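The instruction interpreter is the easiest piece to illustrate. Below is a minimal sketch in the spirit of what we built rather than the exact code: it maps a natural-language command to a drone action and the landmark it refers to, drawing the landmark names from the classes the segmentation model can recognize. The action codes and class list here are placeholders I chose for the example.

```python
import re

# Hypothetical command schema; the real interpreter and simulator API differ.
ACTIONS = {"turn left": "YAW_LEFT", "turn right": "YAW_RIGHT",
           "go forward": "MOVE_FORWARD", "land": "LAND"}
LANDMARKS = ["building", "car", "tree", "road"]  # classes from the segmentation model

def interpret(instruction: str):
    """Map a natural-language instruction to an (action, landmark) pair."""
    text = instruction.lower()
    action = next((code for phrase, code in ACTIONS.items() if phrase in text), None)
    landmark = next((cls for cls in LANDMARKS if re.search(rf"\b{cls}\b", text)), None)
    return action, landmark

print(interpret("Turn left after the next building"))  # ('YAW_LEFT', 'building')
```

In the integrated system, the landmark returned here is matched against the segmentation model's output so the drone knows where to execute the action.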


03

Using Drones to Fight Mosquito-borne Diseases

Inspired by volunteering with the Indian Institute of Public Health: Project NoFever
Submitted to the MDPI journal Tropical Medicine and Infectious Disease

Dengue severely afflicts my community every monsoon, and many of my friends have been hospitalized because of the disease. I volunteered with the Indian Institute of Public Health Hyderabad (IIPHH) to identify and spray mosquito breeding sites, but we never had enough volunteers to substantially reduce the disease’s spread.

The following summer, at UCSC's Science Internship Program, I used computer vision to find objects in aerial images captured by drones, and realized that this could be an effective way to identify breeding sites. I extended the model to detect water bodies, and then added georeferencing to calculate their geographical coordinates.
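The georeferencing step comes down to a little geometry: given the drone's GPS fix, altitude, heading, and the camera's field of view, a pixel in the image can be mapped to an approximate latitude and longitude. The function below is a simplified sketch of that idea, assuming a camera pointing straight down and ignoring lens distortion and terrain relief; it is not the exact code from the paper.

```python
import math

def pixel_to_latlon(px, py, img_w, img_h, drone_lat, drone_lon,
                    altitude_m, hfov_deg, heading_deg=0.0):
    """Approximate geographic coordinates of a pixel in a nadir aerial image."""
    # Ground footprint of the image (metres), from altitude and field of view.
    ground_w = 2 * altitude_m * math.tan(math.radians(hfov_deg) / 2)
    ground_h = ground_w * img_h / img_w

    # Offset of the pixel from the image centre, in metres (x right, y up).
    dx = (px - img_w / 2) / img_w * ground_w
    dy = (img_h / 2 - py) / img_h * ground_h

    # Rotate by the drone's heading so the offsets align with east/north.
    theta = math.radians(heading_deg)
    east = dx * math.cos(theta) + dy * math.sin(theta)
    north = -dx * math.sin(theta) + dy * math.cos(theta)

    # Convert metre offsets to degrees (small-offset approximation).
    lat = drone_lat + north / 111_320
    lon = drone_lon + east / (111_320 * math.cos(math.radians(drone_lat)))
    return lat, lon

# Example: a detected water body near the right edge of a 4000x3000 image
# taken at 100 m altitude over Hyderabad (illustrative numbers only).
print(pixel_to_latlon(3500, 1500, 4000, 3000, 17.3850, 78.4867, 100, 84))
```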


After reaching out to drone companies and confirming that the model is practical, I presented this solution to IIPHH, where it is currently under evaluation. I have also written a research paper on this work with Mr. Yue Fan, my mentor from UCSC's Science Internship Program, as co-author. We have submitted the manuscript to the MDPI journal Tropical Medicine and Infectious Disease.


04

Flexible Use of Electricity

New York Academy of Sciences: The Junior Academy


Our solution involved three parts: generating energy using tellurium nanoparticles, optimizing the power grid by incorporating storage, and investigating storage options. I worked primarily on analyzing data on renewable energy production and demand, and wrote a Python program to find the storage capacity required to ensure a steady supply of energy; making renewables reliable in this way would enable their widespread adoption. However, the data analysis showed that the required capacity is far greater than current storage technologies can provide. After investigating multiple storage options, we settled on zinc-ion batteries as the best choice, because they are more scalable than other energy storage methods.
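The heart of that analysis can be sketched in a few lines: walk through hourly generation and demand figures, track the cumulative energy balance, and take the deepest draw-down from a previous peak as the storage capacity needed to keep supply steady. This is a simplified stand-in for the actual program (it assumes lossless storage and ignores charge-rate limits), and the example numbers are made up.

```python
def required_storage_capacity(generation_kwh, demand_kwh):
    """Minimum storage capacity (kWh) so that demand is always met.

    Tracks the cumulative energy balance hour by hour; the capacity needed is
    the largest drop from a previous peak of that balance, i.e. the deepest
    draw-down the store must cover. Assumes lossless, rate-unlimited storage.
    """
    balance = 0.0   # cumulative generation minus demand so far
    peak = 0.0      # highest value the balance has reached
    capacity = 0.0  # deepest draw-down seen so far
    for gen, dem in zip(generation_kwh, demand_kwh):
        balance += gen - dem
        peak = max(peak, balance)
        capacity = max(capacity, peak - balance)
    return capacity

# Toy example: solar-heavy generation against a flat demand over six hours.
generation = [0, 2, 8, 8, 2, 0]
demand = [3, 3, 3, 3, 3, 3]
print(required_storage_capacity(generation, demand), "kWh")  # 4.0 kWh
```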


05

Solar Farm Health: Using Computer Vision to Identify Defects in Solar Panels

Blurgs: an AI start-up providing data analytics and automation tools for autonomous unmanned vehicles

I worked with Blurgs on one of their current projects, Solar Farm Health. Identifying faulty panels in a solar farm is difficult because there are thousands of them, and undetected faults reduce the farm's efficiency. Using machine learning to spot defective panels in aerial images captured by drones makes the process much faster and easier. I annotated hundreds of aerial images of solar panels on Roboflow and then trained a computer vision model on these images to identify faults in the panels.
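As a rough sketch of that pipeline (the project's actual framework and settings are not listed here, so the library choice, paths, and hyperparameters below are assumptions), annotations exported from Roboflow in YOLO format can be used to fine-tune an off-the-shelf detector:

```python
# A minimal sketch, assuming a Roboflow export in YOLO format and the
# Ultralytics library; paths and hyperparameters are placeholders.
from ultralytics import YOLO

# Start from a small pretrained detector and fine-tune it on the defect dataset.
model = YOLO("yolov8n.pt")
model.train(
    data="solar_panel_defects/data.yaml",  # placeholder path to the Roboflow export
    epochs=50,
    imgsz=640,
)

# Run the fine-tuned model on a new aerial image and list any detected defects.
results = model.predict("aerial_frame_001.jpg")
for box in results[0].boxes:
    print(results[0].names[int(box.cls)], float(box.conf))
```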

© 2024 by Bhavani Venkatesan
