Graduate Research Assistantship at School of Informatics and Computing, IUPUI
Link to the news published on IUPUI website
Working with faculty member Zebulun M. Wood, the challenge was to research ways to convert real-world objects into 3D models without using expensive tools, while keeping the quality of the mesh as high as possible. The goal is to apply this technology in the field of medical prosthetics, helping patients with missing facial features such as ears or noses by creating replacements for them using 3D printing.
Smartphones today already carry high-megapixel cameras, so we thought of using them as our device of choice for capturing real-life objects. The plan was to take photos or shoot videos of the object or subject from as many angles as possible and feed them into a stitching application such as Autodesk ReCap or Agisoft PhotoScan to get the model in three dimensions.
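A capture pass like this can be planned as a simple grid of camera angles around the subject. The sketch below is just an illustration of that idea; the ring counts and elevations are assumptions for the example, not our actual shooting protocol:

```python
def capture_angles(azimuth_steps=12, elevations=(0, 30, 60)):
    """Return (azimuth, elevation) pairs, in degrees, for orbiting a subject.

    Each elevation ring is sampled at evenly spaced azimuths, so the
    photogrammetry software gets overlapping views from all around.
    """
    step = 360 / azimuth_steps
    return [(round(i * step, 1), elev)
            for elev in elevations
            for i in range(azimuth_steps)]

plan = capture_angles()
print(len(plan))  # 36 shots: 12 azimuths x 3 elevation rings
```

The key property for stitching is overlap between neighboring shots, which is why the azimuth step stays small.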
Duration: August 2017 onwards.
Members: Parth Patel, lecturer Zeb Wood, Swapnil Kosarabe, and Dr. Travis Bellicchi.
Role: Conducting research on various techniques for photographing objects/subjects to capture the best possible mesh detail, which leads to an accurate 3D-printed recreation of the missing body part. I also used the university's supercomputers to handle the heavy, process-intensive batch computing jobs.
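Submitting a reconstruction run as a batch job on a cluster typically looks something like the script below. This is a hypothetical SLURM sketch: the scheduler, resource numbers, paths, and the reconstruction command are all placeholders, not the actual university setup.

```shell
#!/bin/bash
# Hypothetical SLURM submission script -- resources and commands are
# placeholders for illustration only.
#SBATCH --job-name=photogrammetry-batch
#SBATCH --nodes=1
#SBATCH --cpus-per-task=16
#SBATCH --time=08:00:00

# Launch the stitching/reconstruction step on one captured image set.
./run_reconstruction.sh /scratch/scans/session_01
```

Offloading the stitching to a cluster matters because dense reconstruction over hundreds of high-resolution photos can take hours on a desktop machine.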
We started off by scanning small objects and feeding the 200+ images to Autodesk ReCap to get a quick understanding of how the process works. Then, after multiple tries on multiple objects, we finally figured out how to scan any object using the fewest images possible while still getting good-quality results.
Taking the photos took around 15 minutes per object, and every shot needed tap-to-focus to nail the accuracy. I then thought of shooting a single video around the object and extracting frames from it using software. This gave us far more photos to work with, and hence gave the software more reference points for the stitching process.
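Picking evenly spaced frames out of the video is the core of that extraction step. The helper below is a minimal sketch (the frame counts are made-up numbers); the selected indices would then be handed to a decoder such as ffmpeg or OpenCV to export the actual images:

```python
def sample_frame_indices(total_frames, wanted):
    """Return `wanted` evenly spaced frame indices from a clip.

    E.g. a 60-second orbit shot at 30 fps has 1800 frames; sampling 300
    of them still gives the stitcher dense, overlapping coverage.
    """
    if wanted >= total_frames:
        return list(range(total_frames))
    step = total_frames / wanted
    return [int(i * step) for i in range(wanted)]

indices = sample_frame_indices(1800, 300)
print(indices[:3], indices[-1])  # [0, 6, 12] 1794
```

Sampling evenly, rather than taking the first N frames, keeps the angular spacing between consecutive views roughly constant around the subject.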
Then, we proceeded to scan my team members and compared the results with the quality of dedicated scanning tools currently available in the industry.
As seen above, there are still a lot of details missing from the model created using photogrammetry, but we are trying to improve the quality by tweaking every parameter we can. For example, we have experimented with the number of images, going as high as 1000+ for one of the patients we tried photogrammetry on, though we can't share that work here for privacy and legal reasons. We also tried various algorithm settings in PhotoScan to compare the differences. Below are a couple of the models we created while experimenting with texture.
Future of VR in my RA: We are exploring the idea of mapping the interiors of rooms, as well as university buildings, into 3D models and navigating through them in virtual reality using a game engine.