Inspiration
Managing patients in low-resource environments (e.g. rural hospitals in developing countries, war zones) is extremely challenging, especially for clinicians who are not used to such conditions. My good friend Zaid volunteered as a doctor in war zones, and having worked in a well-equipped London hospital beforehand, it took him some time to adapt. He used the Doctors Without Borders clinical and drug guidelines during his shifts, but these run to over 1,000 pages, so finding the right guideline for a clinical scenario and tailoring it to the patient is a challenge in its own right.
What it does
We use Large Language Models to suggest comprehensive treatment plans based on approved clinical guidelines. Doctors can add patient information to our Electronic Patient Record by typing, dictating, or uploading PDFs and images containing patient details. They can then generate tailor-made management plans for all of their patients at the click of a button. The management plans are designed specifically for low-resource environments but can be customized for any setting and geographical location.
How we built it
The application was developed with Next.js as a full-stack framework and Prisma as an object-relational mapper. More specifically, the front end is React with TypeScript, the back end is Node.js, and the database is PostgreSQL. The application is hosted on the web as a Docker image and deployed via MedStack on Microsoft Azure. We use GPT-4-Turbo (2023-11-Preview) via the Azure OpenAI API, with temperature 0 and top-p 1, for text generation. The Doctors Without Borders clinical and drug guidelines were embedded with the 'text-embedding-ada-002' model and then indexed for use in Retrieval Augmented Generation. We use Microsoft Document Intelligence to process PDFs.
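The retrieval step of Retrieval Augmented Generation can be sketched as follows: guideline passages are pre-embedded, and at query time the passages closest to the query embedding (by cosine similarity) are pulled into the prompt. This is a minimal, hypothetical sketch in TypeScript; the `Chunk` shape and function names are illustrative, not our actual code.

```typescript
// A pre-embedded guideline passage: its text plus its embedding vector
// (e.g. produced by text-embedding-ada-002).
interface Chunk {
  text: string;
  embedding: number[];
}

// Cosine similarity between two equal-length vectors.
function cosineSimilarity(a: number[], b: number[]): number {
  let dot = 0, normA = 0, normB = 0;
  for (let i = 0; i < a.length; i++) {
    dot += a[i] * b[i];
    normA += a[i] * a[i];
    normB += b[i] * b[i];
  }
  return dot / (Math.sqrt(normA) * Math.sqrt(normB));
}

// Rank all chunks against the query embedding and keep the top k,
// which would then be inserted into the LLM prompt as context.
function retrieveTopK(query: number[], chunks: Chunk[], k: number): Chunk[] {
  return [...chunks]
    .sort(
      (x, y) =>
        cosineSimilarity(query, y.embedding) -
        cosineSimilarity(query, x.embedding)
    )
    .slice(0, k);
}
```

In practice a vector index does this ranking at scale; the brute-force sort above just makes the idea concrete.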
Challenges we ran into
Processing PDF documents as inputs to Large Language Models is particularly difficult: one needs to ensure that all information in the PDF, including images and tables, is transferred accurately to the LLM. This involves Optical Character Recognition on images and the conversion of tables into JSON or markdown.
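The table-conversion step described above can be sketched like this: once a document-analysis service has extracted a table as rows of cells, it is serialized into markdown so the LLM can read it alongside the surrounding text. The function name and input shape are illustrative assumptions, not the exact output format of Microsoft Document Intelligence.

```typescript
// Hypothetical sketch: turn an extracted table (rows of cell strings,
// first row assumed to be the header) into a markdown table.
function tableToMarkdown(rows: string[][]): string {
  if (rows.length === 0) return "";
  const [header, ...body] = rows;
  const lines = [
    `| ${header.join(" | ")} |`,                       // header row
    `| ${header.map(() => "---").join(" | ")} |`,      // separator row
    ...body.map((r) => `| ${r.join(" | ")} |`),        // data rows
  ];
  return lines.join("\n");
}
```

For example, `[["Drug", "Dose"], ["Amoxicillin", "500 mg"]]` becomes a two-column markdown table that the LLM can parse reliably.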
Accomplishments that we're proud of
We have a working prototype, currently hosted on my company's website. We showed the generated management plans to doctors, who were impressed by the capabilities of AI. We also added other features, such as creating a change-of-medication record and generating further action points for doctors based on the patient documentation given to them.
What we learned
- Medical guidelines are difficult to access and navigate through. It takes physicians a long time to find relevant information that can be applied directly to the patient.
- Integrating with existing Electronic Health Records is almost impossible unless we have a partnership with a hospital that uses them. However, most rural clinics use handwritten patient records, if anything at all.
- Doctors spend most of their day creating and processing clinical documentation. In fact, doctors spend twice as much time with paperwork as with patients.
What's next for DeepHeal
Adding more features to our platform to support doctors working in challenging environments. Talking to these doctors, understanding their pain points and finding ways to make them more productive.
Built With
- azure
- docker
- nextjs
- node.js
- openai
- postgre
- prisma
- react
- typescript