Microservices-Based Healthcare Management Backend
This project is a microservices-based backend system for managing appointments, patients, doctors, and billing in a healthcare environment. It follows a modular, scalable design and uses different databases tailored to each service's needs.
- 6 independent microservices with API Gateway
- Fully dockerized with Docker Compose
- Apache Kafka for service communication
- MySQL, PostgreSQL, and MongoDB
The system consists of the following microservices, each containerized via Docker:
| Service | Port | Database | Primary Function |
|---|---|---|---|
| API Gateway | 4004 | - | Request routing and management |
| Patient Service | 8080 | MySQL | Patient records management |
| Billing Service | 8081 | PostgreSQL | Payment processing and billing |
| Analytics Service | 8082 | MongoDB | Data analytics and reporting |
| Doctor Service | 8083 | MySQL | Doctor profiles and specializations |
| Appointment Service | 8084 | PostgreSQL | Appointment scheduling and validation |
| Auth Service | 8089 | PostgreSQL | Authentication and JWT token generation |
Responsibilities: The Patient Service manages the endpoints for patient records. Through Kafka, it publishes events that the Analytics Service consumes. Its database is PostgreSQL, which runs alongside the service thanks to Docker Compose.
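As a rough illustration of this event flow, a Spring Kafka producer in the Patient Service might look like the sketch below. The topic name `patient-events` and the string payload are assumptions for the example, not taken from the repository.

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Service;

// Sketch: publishing a patient event for the Analytics Service to consume.
// Topic name and payload format are illustrative assumptions.
@Service
public class PatientEventProducer {

    private final KafkaTemplate<String, String> kafkaTemplate;

    public PatientEventProducer(KafkaTemplate<String, String> kafkaTemplate) {
        this.kafkaTemplate = kafkaTemplate;
    }

    public void publishPatientCreated(String patientId, String payloadJson) {
        // The patient id is used as the message key so related events stay ordered per patient.
        kafkaTemplate.send("patient-events", patientId, payloadJson);
    }
}
```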
Responsibilities: The Doctor Service manages the endpoints for doctor records. Its database is MySQL; the reason behind this choice is that MySQL is generally optimized for read-heavy operations and offers slightly better performance in that context.
Responsibilities: The Appointment Service manages appointments, verifies IDs, and notifies other services through Kafka. Its database is PostgreSQL. The microservice contains two services. The first manages the appointment process: with the help of RestTemplate, it verifies the Patient ID and Doctor ID before creating an appointment, and when a patient's payment process is successful, Kafka triggers an event for the Billing Service. The second is the Cleanup Service, which deletes outdated appointments every day.
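A minimal sketch of how such a daily cleanup could be scheduled with Spring is shown below. `AppointmentRepository` is a hypothetical repository interface and the midnight cron expression is an assumption; the README only states that the job runs every day.

```java
import java.time.LocalDate;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

// Sketch of the Cleanup Service; requires @EnableScheduling on a configuration class.
@Service
public class CleanupService {

    // Hypothetical Spring Data-style repository with a derived delete query.
    public interface AppointmentRepository {
        void deleteByDateBefore(LocalDate date);
    }

    private final AppointmentRepository appointmentRepository;

    public CleanupService(AppointmentRepository appointmentRepository) {
        this.appointmentRepository = appointmentRepository;
    }

    // Runs once a day at midnight and removes appointments whose date has passed.
    @Scheduled(cron = "0 0 0 * * *")
    @Transactional
    public void deleteOutdatedAppointments() {
        appointmentRepository.deleteByDateBefore(LocalDate.now());
    }
}
```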
Responsibilities: The Billing Service creates billing records after appointments. It generates an invoice for the patient using the invoice-generator API and stores the invoice files (PDFs) on the Billing Service image.
Responsibilities: The Auth Service creates auth records and generates a JWT token that is required for accessing the other endpoints.
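As a sketch of the token-generation step, the snippet below uses the jjwt library (an assumption about the exact library; the one-hour lifetime and in-memory key are also illustrative, a real deployment would load the signing key from configuration).

```java
import java.security.Key;
import java.util.Date;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.SignatureAlgorithm;
import io.jsonwebtoken.security.Keys;

// Sketch of JWT generation in the Auth Service.
public class TokenGenerator {

    // Generated here only for illustration; in practice the key comes from configuration.
    private final Key key = Keys.secretKeyFor(SignatureAlgorithm.HS256);

    public String generateToken(String username) {
        Date now = new Date();
        Date expiry = new Date(now.getTime() + 3_600_000); // assumed 1-hour lifetime
        return Jwts.builder()
                .setSubject(username)
                .setIssuedAt(now)
                .setExpiration(expiry)
                .signWith(key)
                .compact();
    }
}
```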
Responsibilities: The API Gateway handles request routing and API management.
I chose a microservice architecture to achieve loose coupling, scalability, and independent deployments. Each service is responsible for a specific domain (patients, doctors, appointments, billing, authentication), which makes the system modular and easier to maintain. This design also allows services to use different databases and technologies depending on their requirements, rather than being restricted to a single stack. In addition, microservices enable fault isolation: if one service fails, the others can continue functioning, improving the system's reliability.
Each microservice has its own database to respect the loose coupling principle. I used PostgreSQL for the Patient Service because of its powerful relational features, JSONB support, and advanced querying capabilities, which fit patient data well. On the other hand, I chose MySQL for the Doctor Service since it is lightweight and efficient for handling structured data with high read/write operations.
I used Apache Kafka for asynchronous, event-driven communication between services. Instead of relying only on HTTP calls, Kafka enabled me to build a scalable and fault-tolerant data pipeline. Producers generate events (such as patient updates or logs), and consumers process them independently, which reduces coupling and improves system resilience.
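To illustrate the consumer side of this pipeline, a listener such as the one sketched below could process patient events independently of the producer, for example in the Analytics Service. The topic name and consumer group id are assumptions for the example.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

// Sketch of an event consumer (e.g., in the Analytics Service).
@Service
public class PatientEventConsumer {

    @KafkaListener(topics = "patient-events", groupId = "analytics-service")
    public void onPatientEvent(String payloadJson) {
        // Process the event independently of the producer, e.g., store it for reporting.
        System.out.println("Received patient event: " + payloadJson);
    }
}
```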
Docker allowed me to containerize services and databases, ensuring they run consistently across environments. Volumes were used to persist user data. Kubernetes provided orchestration features such as service discovery, load balancing, scaling, and persistent storage management, making deployments production-ready and easier to maintain.
I implemented an API Gateway to act as the single entry point for all requests. This simplified routing, authentication, and abstraction, since clients do not need to know individual service endpoints. It also centralized security and cross-cutting concerns like logging and request filtering, improving both usability and maintainability.
I set up GitHub Actions to automate build, test, and deployment pipelines. This reduced deployment time to under 15 minutes and ensured multi-platform Docker images could be pushed and deployed to cloud providers (AWS, GCP, Azure). The CI/CD pipeline improved reliability, minimized human error, and made the project more production-ready.
I chose JWT (JSON Web Tokens) because it enables stateless and lightweight authentication, which is ideal in a microservices architecture. Tokens are passed with each request, so services can validate user identity without maintaining server-side sessions.
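A minimal sketch of how a downstream service could validate such a token with the jjwt library is shown below (the library choice and the null-on-failure behavior are assumptions; the signing key must match the one used by the Auth Service).

```java
import java.security.Key;
import io.jsonwebtoken.Claims;
import io.jsonwebtoken.Jwts;
import io.jsonwebtoken.JwtException;

// Sketch of stateless token validation in a downstream service.
public class TokenValidator {

    private final Key key; // same secret key the Auth Service used for signing

    public TokenValidator(Key key) {
        this.key = key;
    }

    // Returns the subject (user identity) if the token is valid, otherwise null.
    public String validate(String token) {
        try {
            Claims claims = Jwts.parserBuilder()
                    .setSigningKey(key)
                    .build()
                    .parseClaimsJws(token)
                    .getBody();
            return claims.getSubject();
        } catch (JwtException e) {
            return null; // signature invalid or token expired
        }
    }
}
```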
When a user tries to create an appointment, the system follows this validation process:
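As a rough sketch of the verification step, the Appointment Service could call the Patient Service and Doctor Service with RestTemplate as shown below. The endpoint paths (`/patients/{id}`, `/doctors/{id}`) are assumptions; only the service names and ports come from the table above.

```java
import org.springframework.stereotype.Service;
import org.springframework.web.client.HttpClientErrorException;
import org.springframework.web.client.RestTemplate;

// Sketch of Patient ID / Doctor ID verification before creating an appointment.
@Service
public class AppointmentValidator {

    private final RestTemplate restTemplate = new RestTemplate();

    public boolean patientAndDoctorExist(Long patientId, Long doctorId) {
        return exists("http://patient-service:8080/patients/" + patientId)
                && exists("http://doctor-service:8083/doctors/" + doctorId);
    }

    private boolean exists(String url) {
        try {
            restTemplate.getForEntity(url, String.class);
            return true;  // 2xx response: the record exists
        } catch (HttpClientErrorException.NotFound e) {
            return false; // 404: unknown ID, the appointment is rejected
        }
    }
}
```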
When a user creates an appointment, the system follows this process for billing:
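A possible shape of the billing side is sketched below: a Kafka listener reacts to the appointment event and generates the invoice. The topic name, consumer group id, and the `InvoiceClient` helper (standing in for the call to the invoice-generator API) are all illustrative assumptions.

```java
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.stereotype.Service;

// Sketch of the Billing Service reacting to appointment events.
@Service
public class BillingEventListener {

    // Hypothetical wrapper around the external invoice-generator API.
    public interface InvoiceClient {
        byte[] generateInvoice(String appointmentJson);
    }

    private final InvoiceClient invoiceClient;

    public BillingEventListener(InvoiceClient invoiceClient) {
        this.invoiceClient = invoiceClient;
    }

    @KafkaListener(topics = "appointment-events", groupId = "billing-service")
    public void onAppointmentEvent(String eventJson) {
        // Create the billing record and generate a PDF invoice for the patient.
        byte[] invoicePdf = invoiceClient.generateInvoice(eventJson);
        // ... persist the billing record and store the PDF on the Billing Service image
    }
}
```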
When a user sends a request to an endpoint, the API Gateway follows this process (a code sketch follows the list):
1. The client sends the request (e.g., http://localhost:4004/api/patients) to the API Gateway.
2. The gateway matches the request against Path predicates to determine the correct microservice (patient-service, doctor-service, appointment-service, etc.).
3. Filters (such as StripPrefix, RewritePath) are applied to adjust the request before forwarding.
4. The request is forwarded to the target service (e.g., patient-service:8080).
5. The response is returned to the client through http://localhost:4004.
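The routing described above could be expressed with Spring Cloud Gateway's Java DSL roughly as follows. In the actual project the routes may instead be declared in application.yml, and the path patterns shown here are assumptions based on the example URL above.

```java
import org.springframework.cloud.gateway.route.RouteLocator;
import org.springframework.cloud.gateway.route.builder.RouteLocatorBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

// Sketch of the gateway routing rules: a Path predicate selects the target service,
// and StripPrefix removes the leading /api segment before forwarding.
@Configuration
public class GatewayRoutes {

    @Bean
    public RouteLocator routes(RouteLocatorBuilder builder) {
        return builder.routes()
                .route("patient-service", r -> r.path("/api/patients/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://patient-service:8080"))
                .route("doctor-service", r -> r.path("/api/doctors/**")
                        .filters(f -> f.stripPrefix(1))
                        .uri("http://doctor-service:8083"))
                .build();
    }
}
```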
From code to production: Complete CI/CD pipeline with containerization and Kubernetes orchestration
💡 Deployment Impact: What previously took hours of manual setup and configuration now takes less than 15 minutes to deploy to any major cloud provider. The entire microservices ecosystem is containerized, orchestrated, and ready for horizontal scaling.
Test all API endpoints with this ready-to-use Postman collection. Import the collection into your Postman to explore requests, headers, and sample responses.
Explore the complete source code and documentation for this microservices architecture