Designing for Surgeons and Nurses: A Faster Path to Usable Prototypes
- Vikas Chowdhry

- Aug 16
- 3 min read
A few months ago, we were in the midst of a redesign of our app. The goal was to develop a flexible design that could accommodate the ever-broadening list of predictive models our potential users (trauma surgeons, ortho-trauma surgeons, and critical care nurses) were asking for, while incorporating future-facing GenAI/agentic functionality.
Our design team used Figma to create prototypes that we then sent to our collaborators across several major health systems to get their feedback and iterate. In parallel, we had our dev team prototype some of those wireframes so we could also show a working prototype and get more detailed, nuanced feedback.
But iterations on working prototypes were not moving at the same velocity as wireframe iterations: they were fast, but not fast enough. That got us thinking: is there a way to speed up the development of working prototypes for the next rodeo? Could we experiment with some tools? It seemed like a perfect intern project, and we gave it a go. Below, Mahedhar (the bright-eyed, always-enthusiastic, and super-smart high school intern we were lucky to have on this project) shares his experience.
Hey, I’m Mahedhar Sunkara, a rising junior at Greenhill School in Dallas, Texas. I have been interning at TraumaCare.AI for almost a year, helping with projects related to user experience and user interface design. TraumaCare.AI is a healthtech startup focused on building AI tools to assist emergency responders during critical decision-making moments.
My most recent project involved using Replit’s ‘Replit Agent’ to prototype a new version of our app based on a Figma design created by our UX designer. The goal for the new design was to provide quick access to AI-assisted suggestions that clinical users could use to get insights for their patients across multiple conditions.
Prior to this project, I had no formal coding experience. I took a couple of classes at school, but I have always leaned toward the UI/UX design side.
Before starting my journey, I wanted to educate myself on what ‘prompt engineering’ was. I quickly found a video by Tina Huang titled ‘Vibe Coding Fundamentals in 33 minutes’ — this was just the start of the rabbit hole for me. I then discovered a tutorial produced jointly by DeepLearning.AI and Replit.
Using what I learned from those sources, I started experimenting with the Replit Agent. This phase focused only on the visual aspects of the app prototype, and it was a back-and-forth experience. One limitation I noticed was the Agent’s inability to recognize images. Before working with the Agent, I had access to a UI/UX design made in Figma, and my original plan was to screenshot it and ask the Agent to replicate the aesthetic. That didn’t work, because the Agent could not view the image.
Another gripe was the Agent’s inconsistent behavior: sometimes it did exactly what I asked, while other times it would get confused.
Specifically, the Agent labeled each prototype ‘V[x]’: after the first command, for example, the version would be labeled V1. This was both a gift and a curse. Being able to refer back to old versions, and revert to them, was really useful. But sometimes, when I asked the Agent to carry everything over from a version except one feature, it would bug out and change many other things.
Once I was satisfied with the visuals, I moved on to the technical aspects of the prototype. This part was much more time-intensive. The process was similar, in that I communicated with the Agent to perform tasks, but there were a lot more moving parts.
There were many benefits to using the Replit Agent; the biggest was being able to preview and use the prototype after each command. This let me test the app and troubleshoot any issues, and I was able to polish visual details during this phase as well.
Throughout this experience, I learned the fundamentals of prompt engineering. Creating a prototype for TraumaCare.AI was a great experience, and I can’t wait to keep building my skills.
Looking ahead, I want to get more out of Replit’s Agent. To do that, I will create a product requirements document (PRD) to set clear goals, and apply the four levels of thinking: Logical, Analytical, Conceptual, and Procedural. These will help me communicate with the Agent more efficiently, because every question I ask will target a specific goal.

About Me

Hey, I’m Mahedhar Sunkara, a rising junior at Greenhill School in Dallas, Texas. I’m interested in AI, finance, and healthcare.

