
Can machine learning help HIV treatment programs? Three questions to consider

Jonathan Friedman, MS, Anubhuti Mishra and Allison Fox

Machine learning is a ubiquitous and powerful tool used to solve problems across the health care space. For all its power and wide applicability, though, machine learning may not always be the right solution to your problem. So when is it the right solution?

To answer this, you need to ask three key questions. Let’s start with a problem that countries frequently face in their HIV treatment programs — interruption in HIV treatment, i.e., patients dropping off their HIV treatment regimen for a variety of reasons. Here we are interested in understanding why patients drop off their treatment regimen; who is most likely to experience an interruption in HIV treatment in the future; and whether we can predict what will happen and understand why.

Let’s see if machine learning can help us answer these questions.

Are you trying to understand relationships in your data or predict something about the future?

A clear problem statement is the foundation of an effective machine learning solution. If your goal is to identify relationships in your data to retrospectively understand who dropped off their HIV treatment and why, descriptive and diagnostic analytics would suit your needs. Traditional statistical methods such as regression analysis are great at examining associations between variables of interest. Simple descriptive statistics can also identify groups of patients who have experienced interruption in treatment more often than others.
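
As an illustration, here is a minimal sketch of this kind of diagnostic analysis in Python. The dataset is purely synthetic and the variables (age, distance to clinic) are hypothetical; the point is that a regression summary surfaces associations between variables, not predictions about new patients.

```python
# A minimal sketch of diagnostic analysis with logistic regression.
# The variables and data are hypothetical stand-ins, simulated here.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
df = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "distance_to_clinic_km": rng.uniform(0, 40, n),
})
# Simulate interruption: more likely the farther a patient lives from care.
logit = -2.0 + 0.05 * df["distance_to_clinic_km"]
df["interrupted"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

X = sm.add_constant(df[["age", "distance_to_clinic_km"]])
result = sm.Logit(df["interrupted"], X).fit(disp=0)
print(result.summary())  # coefficients and p-values: associations, not forecasts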

However, if your problem statement is forward-looking, i.e., you are interested in identifying the people who are most likely to experience interruptions so you can remove the barriers to their HIV treatment, then predictive tools like machine learning may be right for you. The future-oriented nature of this problem makes it a better fit for predictive analytics than descriptive analytics. To build a machine learning solution for this problem, you need historical patient records with an outcome variable (label) capturing whether each patient experienced an interruption in HIV treatment. A sample of patient data where you know the outcome you’re trying to predict is essential here, because the model learns to sort patients into classes (likely and unlikely to experience interruption); methods that learn from a labeled outcome like this are called supervised machine learning. Certain machine learning problems, however, do not need any outcome variable and are instead used to identify patterns in large datasets — for example, clustering customers into different groups based on their buying habits. Such methods are called unsupervised machine learning, as they have no outcome variable (label) to guide the model.
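
To make the supervised setup concrete, here is a minimal sketch of training a classifier on labeled records. The columns and simulated data are hypothetical illustrations, not any project's actual pipeline; in practice the records would come from a country's historical patient data.

```python
# A minimal sketch of supervised learning on labeled patient records.
# In practice you would load real historical records; here we simulate
# a stand-in dataset with hypothetical columns.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

rng = np.random.default_rng(42)
n = 2000
records = pd.DataFrame({
    "age": rng.integers(18, 65, n),
    "distance_to_clinic_km": rng.uniform(0, 40, n),
    "months_on_treatment": rng.integers(1, 120, n),
})
# The label: whether the patient experienced an interruption (synthetic).
records["interrupted"] = rng.binomial(
    1, 1 / (1 + np.exp(2 - 0.05 * records["distance_to_clinic_km"]))
)

X = records.drop(columns="interrupted")
y = records["interrupted"]
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)
model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)
print(classification_report(y_test, model.predict(X_test)))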

For example, in Data.FI, one of our projects funded by PEPFAR and implemented by USAID, we are developing a machine learning solution for HIV testing services in Nigeria. The goal is to optimize HIV testing resources by using patient characteristics to predict who is at the highest risk of contracting HIV. The country has millions of patient records covering who tested positive or negative, along with sociodemographic information, sexual behaviors, marital status and STD status; a machine learning model can learn from this data to generate the probability of testing positive (an HIV risk score) given a set of patient characteristics. When new patients come in for HIV testing, a health care worker can enter their information into the model, generate an HIV risk score for them and use it to help decide who should be tested.
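
A hypothetical sketch of that scoring step might look like the following. The features and synthetic training data are illustrative only, not the actual Nigeria model; the key idea is that a fitted classifier can return a probability for a newly entered client.

```python
# A minimal, hypothetical sketch of turning a trained classifier into a
# risk score for a newly presenting client; features are illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Synthetic stand-in for historical testing records: [age, num_partners]
X_hist = np.column_stack([rng.integers(18, 60, 1000), rng.poisson(1.5, 1000)])
y_hist = rng.binomial(1, 0.05 + 0.03 * (X_hist[:, 1] > 2))  # toy labels

model = LogisticRegression().fit(X_hist, y_hist)

new_client = np.array([[34, 4]])           # entered by a health care worker
risk = model.predict_proba(new_client)[0, 1]
print(f"HIV risk score: {risk:.2f}")       # probability of testing positive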

Is your data large and multidimensional?

Even if you are trying to use your data to predict something about the future, machine learning may not be right for you. The tool’s advantage is its ability to learn complex interactions that you can’t identify on your own, either because it would take too long or because the relationships are too complex for the human brain or a statistical test to digest and learn from.

If you have lots of entries in your dataset and are interested in looking at the relationships between many variables for each entry, machine learning unlocks a lot of analysis opportunities that wouldn’t be possible with descriptive analytics. For example, if you have 30 to 50 PEPFAR monitoring, evaluation and reporting, or MER, indicators, the existing data quality tools — which use statistical analysis — will look at two-way ratios between indicators. What they’re not going to do is look at 30-way ratios.

Data.FI has developed an anomaly detection tool that can look at patterns across 50 variables at a time — something no person can do, and something you cannot do in Excel. Machine learning approaches like recommender systems can see patterns across all these variables and, based on those patterns, predict what the outcome most likely should have been for a variable with a missing response. But again, if your dataset only has five MER indicators, you probably don’t need machine learning.
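
As a rough illustration of multivariate anomaly detection — in the spirit of, but not identical to, the Data.FI tool — here is a sketch using scikit-learn’s IsolationForest on a simulated wide table of indicator values.

```python
# A minimal sketch of anomaly detection across many indicators at once.
# The data are simulated; in practice the columns would be ~50 MER
# indicators reported by each facility.
import numpy as np
import pandas as pd
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
reports = pd.DataFrame(
    rng.normal(100, 15, size=(400, 50)),   # 400 facility-rows x 50 indicators
    columns=[f"indicator_{i}" for i in range(50)],
)
reports.iloc[0] = 500                      # plant one obviously odd row

detector = IsolationForest(contamination=0.01, random_state=0)
flags = detector.fit_predict(reports)      # -1 marks anomalous rows

# Rows whose joint pattern across all 50 columns is unusual get flagged,
# even when each value or two-way ratio might pass a pairwise check.
print(reports[flags == -1].index.tolist())
```

The detector scores each row against the joint pattern of all columns at once, which is exactly the many-way view that pairwise ratio checks miss.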

On the other hand, if your dataset is on the smaller side, and you only have a few MER indicators, these statistical analysis tools will do a great job of identifying historical patterns in the data. The additional cost and commitment of using machine learning is likely not worth it in this kind of use case.

Do you have the people and information architecture to implement the work?

Standing up and sustaining machine learning models requires staff with specialized skill sets and an information architecture that delivers machine learning-generated insights to decision makers when they need them, in the form they need them. The specific skills and architecture will vary by use case. To build and evaluate models, you’ll need staff with expertise in machine learning, obviously. But to make models useful, they need to be deployed in some form. Deployment can range from occasional analyses run for a small group of central users to real-time decision support for decentralized networks of clinicians, such as the models developed by Data.FI (led by Palladium) and the Kenya Health Management Information System project (KeHMIS), which are integrated directly into electronic medical records.

For the former, a simple server with machine learning libraries installed will suffice; in fact, we’re all just a few clicks away from renting space on cloud servers that will provide what’s needed. For the latter, a variety of technologies may be necessary. Some deployments will require application programming interfaces, or APIs, to connect to centrally hosted models, if internet connectivity is available; some will require installing models offline on mobile devices; others may require formats such as PMML, ONNX, POJO or MOJO, which repackage machine learning models so they can speak to Java-based applications, or other fun acronyms that may be familiar to some but barriers to others.
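
As one example of that conversion step, here is a minimal sketch of the ONNX route, assuming the open-source skl2onnx package is installed; the toy model and file name are hypothetical.

```python
# A minimal sketch of exporting a scikit-learn model to ONNX so a
# Java-based EMR (or other runtime) could execute it; assumes the
# skl2onnx package and a toy model with four numeric input features.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from skl2onnx import convert_sklearn
from skl2onnx.common.data_types import FloatTensorType

X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = LogisticRegression().fit(X, y)

onnx_model = convert_sklearn(
    model, initial_types=[("input", FloatTensorType([None, 4]))]
)
with open("risk_model.onnx", "wb") as f:
    f.write(onnx_model.SerializeToString())  # portable, framework-neutral file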

Conclusion

So, is machine learning right for your project? It depends! The tool can be great for learning from large amounts of labeled historical data to understand what is most likely to happen in the future. The machine can do what the human can’t, but machine learning is not always the answer, or may be only part of the answer, to your problem. Moreover, even with the right problem statement and the right data, this approach may not be the right solution if a country lacks the infrastructure and skilled people needed to maintain it.

If you’re already doing machine learning, you should evaluate it; a Data.FI dashboard shows how.
