The ability to collect information far outpaces the ability to fully utilize it — yet that information may hold the key to solving some of the biggest global challenges.

Take, for instance, the frequent outbreaks of waterborne illnesses as a consequence of war or natural disasters. The most recent example comes from Yemen, where, according to the World Health Organization, nearly 536,000 new suspected cases of cholera were reported, with 773 associated deaths, between January and the end of July alone.

History is riddled with similar stories. What if we could better understand the environmental factors that contributed to the disease, predict which communities are at higher risk, and take action to stem the spread?

Answers to these questions — and others like them — could help avert potential catastrophe.

Data is already collected about virtually everything, from birth and death rates to crop yields and traffic flows. IBM estimates that each day, 2.5 quintillion bytes of data are generated — equivalent to producing all the information in the Library of Congress more than 166,000 times every 24 hours.
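The arithmetic behind that comparison can be checked with a back-of-the-envelope sketch, assuming the commonly cited estimate that the Library of Congress's collection amounts to roughly 15 terabytes of text (that size is an assumption for illustration, not a figure from IBM):

```python
# Sanity check of the scale comparison in the text.
# Assumption: the Library of Congress holds ~15 TB of text.
daily_bytes = 2.5e18   # 2.5 quintillion bytes generated per day
loc_bytes = 15e12      # assumed size of the Library of Congress, in bytes

copies_per_day = daily_bytes / loc_bytes
print(round(copies_per_day))  # about 166,667 copies every 24 hours
```

Dividing the daily data volume by the assumed collection size reproduces the "more than 166,000 times" figure cited above.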

Yet the power of all this information is not fully harnessed. It’s time to change that — and thanks to recent advances in data analytics and computational services, we finally have the tools to do it.

Data scientists at Los Alamos National Laboratory study data from wide-ranging, public sources to identify patterns, aiming to predict trends that could threaten global security. Multiple data streams are critical because the ground-truth data (such as surveys) are often delayed, biased, sparse, incorrect or sometimes nonexistent.

For example, knowing mosquito incidence in communities would help public health officials predict the risk of mosquito-transmitted diseases such as dengue, the leading cause of illness and death in the tropics, or West Nile virus, which has been found in New Mexico each year since 2003. However, mosquito data at a global (and even national) scale is not available.

To address this gap, Los Alamos is using other sources such as satellite imagery, climate data and demographic information to estimate risk. Using these data streams, as well as clinical surveillance data and Google search queries that used terms related to the disease, Los Alamos has developed a model that successfully predicts the spread of dengue in Brazil at the regional, state and municipality level.

While the predictions aren’t perfect, they show promise. The researchers’ goal is to combine information from each data stream to further refine the models and improve their predictive power.

Similarly, to forecast the flu season, scientists at Los Alamos have found that Wikipedia and Google searches can complement clinical data. Because internet searches for flu symptoms often rise as those symptoms set in, the models can predict a spike in cases even where data from health clinics lags.
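The general idea, often called "nowcasting," can be sketched in a few lines. The weekly numbers below and the simple linear model are invented for illustration; this is not the Los Alamos model, only a minimal sketch of using timely search volumes to estimate case counts that clinics have not yet reported:

```python
# Minimal nowcasting sketch: estimate this week's flu cases from
# search volume, using a line fit to past weeks where both are known.
# All numbers are synthetic, for illustration only.
import numpy as np

# Past weeks: search volume (available in near real time) and
# confirmed cases (clinical data, reported with a delay).
searches = np.array([120, 150, 200, 260, 330, 410])
cases = np.array([30, 38, 52, 66, 85, 104])

# Least-squares fit: cases ~ a * searches + b
a, b = np.polyfit(searches, cases, 1)

# This week's searches are already observable; clinical counts are not.
this_week_searches = 500
estimated_cases = a * this_week_searches + b
print(f"Estimated cases this week: {estimated_cases:.0f}")
```

A real system would fold in many more signals (Wikipedia page views, climate data, past seasons) and quantify uncertainty, but the core move is the same: lean on the timely stream to fill in for the delayed one.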

These same concepts are being used to expand research beyond disease prediction to better understand public sentiment. In partnership with the University of California, Los Alamos is conducting a three-year study using disparate data streams to understand whether opinions expressed on social media map to opinions expressed in surveys.

For example, in Colombia, Los Alamos is studying whether social media posts about the peace process between the government and FARC, the socialist guerrilla movement, can be ground-truthed with survey data. A UC Berkeley researcher is conducting on-the-ground surveys throughout Colombia — including in isolated rural areas — to poll citizens about the peace process. Meanwhile, at Los Alamos, researchers are analyzing social media data and news sources from the same areas to determine if they align with the survey data.

If it’s possible to demonstrate that social media accurately captures a population’s sentiment, it could offer a more affordable, accessible and timely alternative to expensive and logistically challenging surveys.

In the case of disease forecasting, if social media posts indeed predict outbreaks, that data could be used in educational campaigns to inform citizens of the risk of an outbreak (due to vaccine exemptions, for example) and ultimately reduce that risk by promoting protective behaviors (such as washing hands, wearing masks, remaining indoors, etc.).

All of this illustrates the potential for big data to solve big problems. Los Alamos and other national laboratories, home to some of the world’s largest supercomputers, have the computational power, augmented by machine learning and data analysis, to shape this information into a story not just for one state or nation but for the world as a whole. The information is there. It’s time to use it.

Sara Del Valle, Ph.D., is a computational epidemiologist who leads the Data Fusion team at Los Alamos National Laboratory. A version of this article first appeared in Scientific American in March.
