How AI Might Save, Not Destroy, the Human Race: Harnessing Its Potential during Pandemics and Health Crises

KEY TAKEAWAYS

While there are concerns about the risks of AI, it could also prove a crucial asset in combating pandemics and protecting human health. By addressing challenges such as data quality, bias, human error, and global coordination, AI can play a pivotal role in rapid response, prediction, and innovative approaches to safeguarding humanity during future health crises. Embracing AI's potential is essential to enhancing our resilience and minimizing the impact of pandemics on society.

Much of the discussion surrounding artificial intelligence (AI) these days revolves around its potential threat to humanity. But what if the opposite were true? What if, instead of destroying mankind, AI turned out to be its savior?

The recent pandemic highlighted how vulnerable modern society is to perils from the natural world – if not in terms of outright extinction, then severe economic and societal disruption.

Effective collection and analysis of data proved vital to the rapid response to Covid-19, and there is every reason to believe it can be even more successful the next time nature takes a shot at us (and there will be a next time).

In a recent Harvard Business Review post, Bhaskar Chakravorti, author and Dean of Global Business at The Fletcher School at Tufts University, highlights the numerous ways AI came up short during the pandemic. While it is fair to say that AI mostly failed in this effort, the research actually provides a template for the corrective actions needed for greater success in the future.

A Good Start

For one thing, Chakravorti says, while AI was the first to identify a strange new virus in Wuhan, China, follow-up studies showed that most models failed to anticipate key trends in prognosis, diagnosis, treatment response, and a host of other factors.

Most of these problems can be linked to four key factors:

  • Bad or incomplete data: Most information was difficult to acquire due to rapidly changing conditions and an inability to draw from proprietary systems and infrastructure.
  • Automated discrimination: Most algorithms were trained on data that reflected biases in healthcare availability, social inequality, and, in some cases, mistrust of the healthcare system.
  • Human error: Whether it was poor data entry or a lack of incentives for effective data sharing, humans are ultimately responsible for guiding AI in the right direction, and humans make mistakes.
  • Complex global contexts: Appropriate intervention must navigate multiple sociopolitical, institutional, and cultural conventions, which even AI is ill-equipped to deal with.

To be sure, these problems are not inherent in the AI models themselves but in the way they are trained and put to use.

Solving them will undoubtedly require a global effort, and fortunately, this is already starting to take shape, albeit in a limited fashion.

Spain’s Ministry of Science and Innovation is funding the EPI-DESIGUAL project in conjunction with the Centre for Demographic Studies. The goal is to compile data reaching all the way back to the plague and cholera outbreaks of the 1820s in order to improve the predictability of highly communicable diseases and ascertain their persistence.

There is also a real dearth of knowledge when it comes to the long-term after-effects of pandemics, such as the impact on birth rates and the prevalence of seemingly unrelated conditions like muscle weakness and malaise – what some doctors are now calling “Long Covid”. Ultimately, the project intends to use AI to replace the inductive reasoning methodologies of modern medicine with data-driven processes that are more flexible and creative.

Yet another issue is the penchant for pandemics to scale up rapidly during the initial stages of the outbreak. This tends to catch governments and their healthcare systems flat-footed.

AI has the ability to scale rapidly as well, but for something as complex as an unknown pathogen, it must be optimized for this ahead of time. The U.S. National Institutes of Health (NIH) is currently assessing the Australian-developed EPIWATCH system as a rapid pandemic response tool; the platform has already proven effective against other fast-moving viruses like Ebola.

At the same time, the NIH is infusing open-source AI and risk intelligence into its existing early-detection tools, such as the automated red-flagging (ARF) platform and the geographic information system (GIS).

Direct From the Source

Again, though, even the most powerful AI in the world is of only limited use if the data it receives is inaccurate or out of date, and official channels of information exchange are often slow and not always trustworthy. This is why researchers are starting to use social media to gain insight directly from the source: patients.

A joint team from UCLA and UC Irvine was recently awarded a $1 million grant under the National Science Foundation’s Predictive Intelligence for Pandemic Prevention program for a project to canvass all manner of social media to identify risk factors before they become known to health organizations. The task involves rapid analysis of billions of tweets, posts, updates, and other data from all of the major social media platforms, which the team has compiled in a searchable database dating back to 2015.

In some cases, the process involves searching for a simple term like "cough" and then narrowing the resulting data by age, date, geographic location, and other variables. The team is now refining its algorithms to distinguish the medical uses of words like "fever" and even "dying" from the slang meanings that have evolved over time.
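To give a rough sense of the kind of filtering involved – using Python and entirely hypothetical data structures, since the team's actual pipeline is not public – the sketch below searches a small corpus for a symptom term, screens out known slang senses with a simple phrase list (a real system would use a trained classifier), and narrows the hits by region and date:

from dataclasses import dataclass
from datetime import date

# Hypothetical record for an archived social media post; the field
# names are illustrative, not the research team's actual schema.
@dataclass
class Post:
    text: str
    posted_on: date
    region: str

# Known slang senses that should not count as symptom reports; a real
# system would use a trained language model rather than a fixed list.
SLANG_PHRASES = ("dying of laughter", "fever pitch", "saturday night fever")

def mentions_symptom(text: str, term: str) -> bool:
    """True if the post uses `term` literally, screening out known slang."""
    lowered = text.lower()
    if term not in lowered:
        return False
    return not any(phrase in lowered for phrase in SLANG_PHRASES)

def narrow(posts: list, term: str, region: str, start: date, end: date) -> list:
    """Keyword search first, then narrow by region and date window."""
    return [p for p in posts
            if mentions_symptom(p.text, term)
            and p.region == region
            and start <= p.posted_on <= end]

posts = [
    Post("Feels like I am dying, the cough will not stop", date(2020, 1, 14), "Wuhan"),
    Post("that meme has me dying of laughter", date(2020, 1, 14), "Wuhan"),
]
hits = narrow(posts, "dying", "Wuhan", date(2020, 1, 1), date(2020, 1, 31))
print(len(hits))  # 1 - the slang post is screened out

At production scale, the same two-stage design – cheap keyword matching first, then more expensive disambiguation and metadata filtering on the survivors – is what makes sifting billions of posts tractable.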

The Bottom Line

While all of this is impressive, the caveat is that there are no guarantees when dealing with the vagaries of nature. There’s a reason viruses have survived since long before the rise of modern humans: they are not only tough but adaptable. So, in essence, what we are asking AI to do is predict how a species will evolve in a constantly changing environment, and that’s a tall order.

But one thing is clear: humans alone are not up to this task. If AI is kept out of this fight because of its potential to harm the human race (a potential that remains largely theoretical), then the next pandemic – and there will be one – could prove far worse than the last.

As Marc Andreessen, a cofounder and general partner at the venture capital firm Andreessen Horowitz, said in his latest article, the real risk lies in not pursuing AI development with maximum force and speed. Andreessen added:

The stakes here are high. The opportunities are profound. AI is quite possibly the most important – and best – thing our civilization has ever created, certainly on par with electricity and microchips and probably beyond those. The development and proliferation of AI – far from a risk that we should fear – is a moral obligation that we have to ourselves, to our children, and to our future. We should be living in a much better world with AI, and now we can.

Arthur Cole
Technology Writer

Arthur Cole is a freelance technology journalist who has been covering IT and enterprise developments for more than 20 years. He contributes to a wide variety of leading technology web sites, including IT Business Edge, Enterprise Networking Planet, Point B and Beyond, and multiple vendor services.
