MLOps for Hospitals - Staff Tracking (Part 2)

Learn how to train, deploy, and monitor a computer vision model for real-time hospital staff tracking.

Written by TechnoLynx. Published on 09 Dec 2024.

Introduction

Hospitals are the beating hearts of our healthcare system, where dedicated professionals work tirelessly to save lives. However, hospitals also face challenges such as staff allocation. Poor management can lead to overcrowded wards, treatment delays, and increased stress for healthcare workers. A real-time staff tracking system can provide insights into staff movements and availability. Effective staff tracking helps ensure that the right resources are available when needed, reducing wait times and improving patient satisfaction. These systems can also identify bottlenecks in patient care.

It’s possible to solve these challenges using computer vision. With the market for computer vision in healthcare growing at a CAGR of 34.3% between 2024 and 2032, computer vision applications are becoming common in hospitals everywhere. To make use of a computer vision-enabled real-time staff tracking system, hospitals also need an efficient MLOps (Machine Learning Operations) system.

This article takes a closer look into building a hospital staff tracking system using machine learning. We’ll cover training the model, deploying it effectively, and monitoring its performance. We’ll also show how advanced tracking can transform how hospitals operate, boosting productivity, efficiency, and, ultimately, patient care. We recommend checking out our earlier article, “Part 1: MLOps for Hospitals - Building a Robust Staff Tracking System”, before reading this article.

Breaking Down Model Training

Machine Learning Model Requirements for Hospital Staff Tracking

An efficient hospital tracking system requires a powerful machine-learning model that can process data in real time. This system needs to reliably pinpoint and track staff, handle the massive amount of information it receives, and function smoothly within the hospital setting. Since we’re focusing on a computer vision approach, the model will analyse visual data from high-definition CCTV cameras to follow staff movement.

The Model Training Process

The model training process involves the following steps: data preparation, model selection, hyperparameter tuning, and model evaluation.

Data Preparation

Building such a system starts with a rich dataset, as explained in part one of this article. To make the model even more robust, data augmentation techniques like image rotation and scaling can be used to expand the dataset and improve its ability to handle variations. Finally, the labelled data gets split into three sets: training, validation, and testing. The training set teaches the model, the validation set helps fine-tune its performance, and the testing set evaluates its overall effectiveness.

An Example of How Labelled Data is Split. Source: CloudFactory
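As a minimal sketch of this split (assuming the labelled frames and their class labels are already available as parallel Python lists; the file names, class codes, and ratios below are illustrative only), scikit-learn’s train_test_split can be applied twice to produce the three subsets:

from sklearn.model_selection import train_test_split

# Hypothetical parallel lists of frame file names and class labels
image_paths = [f"frame_{i:04d}.jpg" for i in range(1000)]
labels = [i % 3 for i in range(1000)]  # e.g. nurse / doctor / support staff

# First split off a 15% test set
train_val_x, test_x, train_val_y, test_y = train_test_split(
    image_paths, labels, test_size=0.15, random_state=42, stratify=labels)

# Then carve a validation set out of the remainder (roughly 15% of the full set)
train_x, val_x, train_y, val_y = train_test_split(
    train_val_x, train_val_y, test_size=0.18, random_state=42, stratify=train_val_y)

print(len(train_x), len(val_x), len(test_x))  # roughly a 70/15/15 split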

An Example Code for Data Augmentation

Before we discuss selecting a model, let’s take a look at a simple Python script that uses OpenCV for data augmentation. It showcases three ways to augment images for training a machine learning model: horizontal flip, brightness adjustment, and rotation.

  • First, it imports libraries and loads an image.

  • Next, it flips the image horizontally using ‘cv2.flip’ to create variations in orientation.

  • Then, it adjusts brightness with ‘cv2.convertScaleAbs’ – positive values make it brighter, negative ones darker – helping the model handle different lighting conditions.

  • Finally, it rotates the image using ‘cv2.getRotationMatrix2D’ and ‘cv2.warpAffine’. This makes the model more robust to different viewing angles of the object in the image.

The script then displays the original and modified images side-by-side using ‘cv2.imshow’. Once you press a key, it closes the windows. It is a simple example of how OpenCV can be used for image augmentation.



import cv2
import numpy as np

# Load an image from file
image = cv2.imread('path_to_your_image.jpg')

# 1. Horizontal flip
flipped_image = cv2.flip(image, 1)

# 2. Brightness adjustment
brightness = 50  # Change brightness value here, positive to increase, negative to decrease
bright_image = cv2.convertScaleAbs(image, beta=brightness)

# 3. Rotation
angle = 45  # Change the angle value here
(h, w) = image.shape[:2]
center = (w // 2, h // 2)
M = cv2.getRotationMatrix2D(center, angle, 1.0)
rotated_image = cv2.warpAffine(image, M, (w, h))

# Display the images
cv2.imshow('Original', image)
cv2.imshow('Flipped', flipped_image)
cv2.imshow('Brightness Adjusted', bright_image)
cv2.imshow('Rotated', rotated_image)

# Wait for a key press and close the windows
cv2.waitKey(0)
cv2.destroyAllWindows()


Original Image

Augmented Images:

Flipped Image
Brightness Adjusted Image
Rotated Image

Choosing a Computer Vision Model

Choosing the right model for your staff tracking system impacts the rest of the MLOps workflow. Popular options include Convolutional Neural Network (CNN) based detectors such as YOLO, known for fast, real-time detection, and Faster R-CNN, which trades some speed for accuracy. Recurrent Neural Networks (RNNs) like LSTMs shine at analysing sequences, which can help track movement patterns over time. Transformer-based models like DETR are also showing promise for object detection. The key is to find the right balance between accuracy, speed, and how much processing power the model needs to run smoothly in real time within the hospital’s environment.
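As a quick, hedged illustration of the starting point for staff detection, the sketch below loads torchvision’s pre-trained Faster R-CNN (available in torchvision 0.13 or newer) and keeps only confident ‘person’ detections. The image path and confidence threshold are placeholders rather than recommended values.

import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Load a Faster R-CNN detector pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

# Run it on a single CCTV frame (placeholder path)
image = Image.open("path_to_a_cctv_frame.jpg").convert("RGB")
with torch.no_grad():
    prediction = model([to_tensor(image)])[0]

# Keep only confident detections of the 'person' class (COCO label 1)
for box, label, score in zip(prediction["boxes"], prediction["labels"], prediction["scores"]):
    if label.item() == 1 and score.item() > 0.7:
        print("person at", [round(v) for v in box.tolist()], "score", round(score.item(), 2))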

Model Training and Hyperparameter Tuning

Next, we can tune the model’s hyperparameters. These hyperparameters guide the training process and model structure. Key hyperparameters include learning rate (controls update speed), batch size (number of training examples per update), and the number of layers (model depth). We can experiment with different values using techniques like grid search or random search to find the optimal settings.
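A schematic grid search might look like the sketch below. The train_and_validate helper is hypothetical and stands in for your actual training loop; the value ranges are illustrative only.

from itertools import product

# Hypothetical helper: trains the model with the given settings and
# returns a validation metric (e.g. mAP on the validation set).
def train_and_validate(learning_rate, batch_size, depth):
    ...  # stand-in for the real training loop
    return 0.0

learning_rates = [1e-4, 1e-3, 1e-2]
batch_sizes = [8, 16, 32]
depths = [50, 101]  # e.g. backbone depth

best_score, best_config = float("-inf"), None
for lr, bs, d in product(learning_rates, batch_sizes, depths):
    score = train_and_validate(lr, bs, d)
    if score > best_score:
        best_score, best_config = score, (lr, bs, d)

print("Best configuration:", best_config, "with validation score:", best_score)

Random search follows the same pattern but samples a fixed number of random combinations instead of enumerating them all, which often finds good settings faster when only a few hyperparameters really matter.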

Model Evaluation and Testing

The final step before deploying the system is model evaluation. We assess its performance using metrics like accuracy (how often staff are correctly identified and tracked), precision (identifying only real staff, not mistaking other objects), recall (finding all the actual staff members, not missing any), and the F1 score (a balance between precision and recall).

But there’s another critical factor to consider and test for: inference time, the time it takes the model to analyse an image or video frame and make a prediction. For real-time tracking in a hospital, speed is essential. The model needs to identify and track staff with minimal lag so that administrators can trust the system to provide accurate data for quick decision-making. By following these steps, you can set up an efficient computer vision model for tracking hospital staff.
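As a simplified sketch (real object-detection evaluation usually also involves IoU-based metrics such as mAP, which are outside the scope of this snippet), per-frame metrics and a rough latency check could be computed like this; the example arrays and the model/frames placeholders are illustrative:

import time
import numpy as np
from sklearn.metrics import precision_score, recall_score, f1_score

# Hypothetical per-frame outcomes: 1 = staff member present, 0 = not present
y_true = np.array([1, 1, 0, 1, 0, 1, 1, 0])
y_pred = np.array([1, 0, 0, 1, 0, 1, 1, 1])

print("Precision:", precision_score(y_true, y_pred))
print("Recall:   ", recall_score(y_true, y_pred))
print("F1 score: ", f1_score(y_true, y_pred))

# Rough latency check: average prediction time per frame
# ('model' and 'frames' stand in for your own objects).
def average_inference_time(model, frames):
    start = time.perf_counter()
    for frame in frames:
        model(frame)
    return (time.perf_counter() - start) / len(frames)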

Model Deployment Encompasses Many Different Elements

Selecting the appropriate deployment strategy for machine learning models in a hospital setting directly affects how well the system will work. The three primary options are edge, cloud, and on-premise deployment, each with distinct benefits and challenges. Edge deployment runs the model directly on devices within the hospital, offering low latency, reduced data transmission costs, and enhanced privacy, but with limited processing power and scalability. Cloud deployment hosts the model on platforms like AWS or Google Cloud, providing high scalability, powerful computational resources, and simplified maintenance. However, it may face latency issues and incur ongoing costs. On-premise deployment involves using the hospital’s internal servers, offering complete control over data and compliance with governance policies, but requiring significant initial investment and maintenance.

Assuming we choose cloud deployment for our hospital staff tracking system, let’s explore what the detailed process might look like. The deployment process involves various elements like strategies for model serving, API integration, and web application integration. Each of these strategies plays a crucial role in creating a seamless and efficient tracking system. Let’s explore each strategy.

Model Serving

An Example of Model Deployment with TensorFlow Serving Running in Docker and Consumed by a Flask App. Source: Ubuntu

The model serving process makes the model accessible for real-time predictions on live video streams from the hospital. A platform such as TensorFlow Serving, TorchServe, or NVIDIA Triton Inference Server can be used to achieve this; these platforms specialise in deploying models efficiently.
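As a minimal client-side sketch, assuming the trained detector has been exported as a TensorFlow SavedModel and is served by TensorFlow Serving under the placeholder name staff_tracker on its default REST port (8501), a frame could be sent for prediction like this:

import json
import cv2
import requests

# Read a frame and wrap it in the JSON structure TensorFlow Serving's REST API expects
frame = cv2.imread("path_to_a_cctv_frame.jpg")
payload = {"instances": [frame.tolist()]}

# 'staff_tracker' is a placeholder model name; 8501 is TF Serving's default REST port
url = "http://localhost:8501/v1/models/staff_tracker:predict"
response = requests.post(url, data=json.dumps(payload))
print(response.json()["predictions"])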

Next, containerising the model using Docker makes it easier to deploy. Think of a container as a standardised package that bundles your model with all its dependencies. It ensures consistent performance regardless of the environment it’s deployed in, making it easier to scale up or down in the hospital setting. To handle fluctuating workloads within the hospital, you can leverage Kubernetes for auto-scaling. By doing so, the system can automatically add or remove resources based on real-time demands, keeping things running smoothly.

Finally, to monitor the deployed system’s performance and reliability, you can implement monitoring tools like Prometheus and Grafana. These tools track metrics like processing time and model accuracy so that you can identify and address any potential issues before they impact operations.
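A minimal sketch of how the tracking service could expose such metrics, using the prometheus_client Python library (the metric names, port, and simulated inference delay are illustrative assumptions):

import random
import time
from prometheus_client import Counter, Histogram, start_http_server

# Metrics Prometheus can scrape from the tracking service
FRAMES_PROCESSED = Counter("frames_processed_total", "Number of video frames processed")
INFERENCE_TIME = Histogram("inference_seconds", "Time spent running the model per frame")

# Expose the metrics endpoint on port 8000
start_http_server(8000)

while True:
    with INFERENCE_TIME.time():
        time.sleep(random.uniform(0.02, 0.08))  # stand-in for real model inference
    FRAMES_PROCESSED.inc()

Grafana can then plot these series to show throughput and latency trends over time.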

API Integration

Once your model is served, APIs can be used to connect it to the hospital’s existing ecosystem. APIs act like messengers between different software programs, allowing your tracking model to communicate with other hospital systems. Building RESTful APIs using frameworks like Flask or FastAPI is a common approach. These APIs can receive video frames directly from the hospital’s CCTV system, feed them to your model for processing, and then send the staff tracking data back.

Security is key here since hospital data is highly sensitive. To safeguard this information, we can implement authentication and encryption methods like OAuth and HTTPS, ensuring that only authorised personnel can access the tracking data. Also, hospitals can experience peak traffic periods, so it’s important to prevent the API from getting overloaded. Techniques like rate limiting and throttling can be implemented to control the number of requests the API receives and keep the system responsive and running smoothly.

Another part of API integration is building robust error-handling mechanisms. If the model encounters an issue processing a video frame, the API should return a clear error message and gracefully handle the situation to ensure continued functionality.
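Putting these pieces together, a minimal FastAPI sketch might look like the following. The /track route and the run_tracking_model helper are hypothetical names, and authentication, rate limiting, and encryption are omitted for brevity:

import cv2
import numpy as np
from fastapi import FastAPI, File, HTTPException, UploadFile

app = FastAPI()

def run_tracking_model(image):
    # Stand-in for the real inference call (e.g. a request to the model server)
    return []

@app.post("/track")
async def track_staff(frame: UploadFile = File(...)):
    data = await frame.read()
    image = cv2.imdecode(np.frombuffer(data, np.uint8), cv2.IMREAD_COLOR)
    if image is None:
        # Return a clear error instead of failing silently on a corrupt frame
        raise HTTPException(status_code=400, detail="Frame could not be decoded")
    return {"detections": run_tracking_model(image)}

The app can be run with an ASGI server such as uvicorn (for example, uvicorn app:app); handling file uploads with FastAPI also requires the python-multipart package.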

Web Application Integration

To get the most out of such a system, a user-friendly web application is needed. An app can offer a visually appealing interface that displays real-time staff locations. Behind the scenes, a powerful backend system retrieves data, stores past information for analysis, and provides secure access through user logins and controlled permissions. By combining a user-friendly interface and secure backend, hospital management can gain a real-time view of their staff movements that ultimately improve decision-making and patient care.

Model Monitoring

Performance monitoring in the ML model lifecycle. Source: Evidently.ai

Model monitoring is an often underrated yet key aspect of deploying machine learning models, especially in dynamic environments like hospitals. Once a model is deployed to track hospital staff, its performance needs to be continuously monitored. Its accuracy, reliability, and efficiency should be tracked and backed by data so that a successful deployment stays successful over time.

Model monitoring encompasses various practices like performance monitoring, data drift detection, and model retraining and redeployment. Each of these practices helps maintain the model’s effectiveness in real-world scenarios.

Data Drifts & How To Detect Them

Image Showing Data Drift. Source: Evidently.ai

Several factors can degrade the model’s overall performance and must be watched for. When the patterns in the incoming data shift away from those seen during training, the quality of the model’s output suffers. Here are some common causes of such data drift:

  • Environmental Changes: Variations in lighting, camera angles, and new equipment placements can affect the video data captured. These changes can lead to the model misinterpreting staff movements.

  • Behavioural Changes: Changes in staff behaviour, such as new protocols or shifts in staff movement patterns, can also cause data to drift. For example, during an emergency, staff might move differently compared to normal conditions.

There are several approaches to detecting the data drifts that degrade the model’s performance. Statistical measures like Kullback-Leibler divergence and the Population Stability Index (PSI) regularly compare new data to the training data, looking for any major differences. The system can also monitor the distributions of the data it uses, like pixel values in video frames; significant changes in these distributions can signal data drift. Finally, feedback from hospital staff and administrators can also help detect data drift. If they frequently notice misidentifications, it could be evidence that the data patterns have shifted and the model needs retraining.
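As a minimal sketch of one such check, the PSI between the training data and recent data can be computed over a simple per-frame statistic (mean brightness here, as an illustrative choice) with NumPy; the sample data below is synthetic:

import numpy as np

def population_stability_index(reference, current, bins=10):
    """Compare two samples bucketed into the same bins; larger PSI
    values indicate a bigger shift in distribution."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    cur_pct = np.histogram(current, bins=edges)[0] / len(current)
    # Avoid division by zero / log of zero in sparse bins
    ref_pct = np.clip(ref_pct, 1e-6, None)
    cur_pct = np.clip(cur_pct, 1e-6, None)
    return float(np.sum((cur_pct - ref_pct) * np.log(cur_pct / ref_pct)))

# Hypothetical mean-brightness values for training frames vs. recent frames
training_brightness = np.random.normal(120, 15, size=5000)
recent_brightness = np.random.normal(100, 20, size=1000)  # lighting has changed
print("PSI:", round(population_stability_index(training_brightness, recent_brightness), 3))

A commonly used rule of thumb treats PSI values above roughly 0.2 as a sign of significant drift worth investigating.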

Model Retraining and Redeployment

To keep the model accurate and reliable over a long period of time, it needs occasional retraining. New hospital data must be collected and labelled to keep the model up-to-date. Retraining can be done periodically, like every few months, to account for changes in staff behaviour and the environment.

To streamline this process, automated pipelines can be built using tools like Kubeflow or MLflow. However, before using the new model, thorough testing and validation are essential. Testing and validation check that the retrained model performs well and meets all the requirements. A gradual rollout is recommended. It’s a good idea to deploy the retrained model in a limited way first to monitor performance and catch any issues before a full launch.
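As an illustrative sketch of the tracking side of such a pipeline with MLflow (the experiment name, parameters, and metric values are made up for the example), each retraining run can be logged so candidate models are easy to compare before the gradual rollout:

import mlflow

# Log each retraining run so performance can be compared before redeployment
mlflow.set_experiment("staff-tracker-retraining")

with mlflow.start_run():
    mlflow.log_param("learning_rate", 1e-4)
    mlflow.log_param("training_frames", 25000)
    mlflow.log_metric("val_f1", 0.91)
    mlflow.log_metric("mean_inference_ms", 38.0)
    # The retrained model artefact would be logged here as well, e.g. with
    # mlflow.pytorch.log_model(model, "model"), before the staged rollout.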

What We Can Offer as TechnoLynx

At TechnoLynx, we pride ourselves on custom software development that meets the specific demands of incorporating technologies like Computer Vision and MLOps into various sectors. Our team’s expertise can help your operations run smoothly and ensure that large data sets are managed efficiently throughout the machine learning lifecycle.

With our skills in MLOps, IoT, computer vision, generative AI, GPU acceleration, natural language processing, and AR/VR/XR, we can help you explore new possibilities for your business. We are dedicated to advancing innovation while maintaining strict safety and ethical protocols. If you are looking to transform your business with innovative MLOps solutions and need an AI consultant, contact us today, and let’s explore the future together!

Conclusion

Machine learning offers powerful tools, like computer vision, for hospitals to improve patient care and resource allocation through real-time staff tracking systems. The key lies in training models with rich datasets that include a variety of scenarios. Choosing the right algorithms, such as CNNs, RNNs, or transformers, allows for accurate and real-time tracking of staff movements. Efficient serving platforms and APIs ensure these models seamlessly integrate with existing hospital systems.

But that’s not all. Continuous monitoring for performance, data drifts, and model retraining is essential. Monitoring ensures the system remains accurate and reliable over time. By following these steps, hospitals can unlock the potential of staff tracking systems, leading to enhanced operational efficiency, improved decision-making, and, ultimately, better patient care.

Check out MLOps for Hospitals - Building a Robust Staff Tracking System (Part 1)


References:

  • Ghaemmaghami, M.P., 2017. Tracking of Humans in Video Stream Using LSTM Recurrent Neural Network. Degree Project in Computer Science and Engineering, Second Cycle, 30 Credits. Stockholm, Sweden.

  • Global Market Insights, 2024. ‘Computer Vision in Healthcare Market – By Component (Software [On-premises, Cloud-based], Services), Application (Medical Imaging & Diagnosis, Surgical Assistance, Patient Identification, Remote Patient Monitoring), End-user – Global Forecast (2024–2032)’. Global Market Insights.

  • Potrimba, P., 2023. What is DETR? Roboflow Blog
