Much of the discussion surrounding artificial intelligence (AI) these days revolves around its potential threat to humanity. But what if the opposite were true? What if, instead of destroying mankind, AI turned out to be its savior?
The recent pandemic highlighted how vulnerable modern society is to perils from the natural world – if not in terms of outright extinction, then severe economic and societal disruption.
Effective collection and analysis of data proved vital in the rapid response to Covid-19, and there is every reason to believe it can be even more effective the next time nature takes a shot at us (and there will be a next time).
In a recent Harvard Business Review post, Bhaskar Chakravorti, author and Dean of Global Business at The Fletcher School at Tufts University, highlights the numerous ways AI came up short during the pandemic. While it is fair to say that AI mostly failed in this effort, his analysis actually provides a template for the corrective actions needed for greater success in the future.
A Good Start
For one thing, Chakravorti says, while AI was the first to identify a strange new virus in Wuhan, China, follow-up studies showed that most models failed to anticipate key trends in prognosis, diagnosis, treatment response, and a host of other factors.
Most of these problems can be linked to four key factors:
- Bad or incomplete data: Most information was difficult to acquire due to rapidly changing conditions and an inability to draw from proprietary systems and infrastructure.
- Automated discrimination: Most algorithms were trained on data that reflected biases in healthcare availability, social inequality, and, in some cases, mistrust of the healthcare system (a simple audit of this failure mode is sketched after this list).
- Human error: Whether it was poor data entry or a lack of incentives for effective data sharing, humans are ultimately responsible for guiding AI in the right direction, and humans make mistakes.
- Complex global contexts: Appropriate intervention must navigate multiple sociopolitical, institutional, and cultural conventions, which even AI is ill-equipped to deal with.
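To make the second of these failure modes concrete, here is a minimal, hypothetical Python sketch of the kind of equity audit that could catch such discrimination before deployment. The groups, error rates, and data are all invented for illustration; nothing here reflects an actual Covid-19 model.

```python
import numpy as np

# Hypothetical audit: compare a model's false-negative rate across two
# demographic groups. Group B is underrepresented in the (simulated)
# training data, so the model misses more true cases in that group.
rng = np.random.default_rng(0)

def false_negative_rate(y_true, y_pred):
    """Fraction of actual positives the model failed to flag."""
    positives = y_true == 1
    return np.mean(y_pred[positives] == 0)

# Simulated ground truth and predictions; the 90% vs. 70% accuracy gap
# stands in for a model trained mostly on data from group A.
y_true_a = rng.integers(0, 2, 1000)  # well-represented group
y_true_b = rng.integers(0, 2, 1000)  # underrepresented group
y_pred_a = np.where(rng.random(1000) < 0.90, y_true_a, 1 - y_true_a)
y_pred_b = np.where(rng.random(1000) < 0.70, y_true_b, 1 - y_true_b)

fnr_a = false_negative_rate(y_true_a, y_pred_a)
fnr_b = false_negative_rate(y_true_b, y_pred_b)
print(f"FNR group A: {fnr_a:.2f}, FNR group B: {fnr_b:.2f}")
if abs(fnr_a - fnr_b) > 0.05:
    print("Warning: model accuracy is not equitable across groups.")
```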
To be sure, these problems are not inherent in the AI models themselves but in the way they are trained and put to use.
Solving them will undoubtedly require a global effort, and fortunately, this is already starting to take shape, albeit in a limited fashion.
Yet another issue is that pandemics tend to scale up rapidly during the initial stages of an outbreak, which catches governments and their healthcare systems flat-footed.
AI has the ability to scale rapidly as well, but it must be optimized ahead of time for something as complex as an unknown pathogen. The U.S. National Institutes of Health (NIH) is currently assessing the Australian-developed EPIWATCH system as a rapid pandemic response tool; the system has already proven effective against other fast-moving viruses such as Ebola.
At the same time, NIH is infusing open-source AI and risk intelligence into its existing early-detection tools like the automated red-flagging (ARF) platform and the geographic information system (GIS).
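The ARF platform's internals are not public, but the core idea behind automated red-flagging can be illustrated with a simple statistical rule: flag a region when its daily count rises well above its recent baseline. The window size, threshold, and counts below are assumptions for illustration, not the platform's actual logic.

```python
from collections import deque

def make_red_flagger(window=14, sigma=3.0):
    """Return a checker that flags a daily count as anomalous when it
    exceeds the rolling mean of the previous `window` days by more than
    `sigma` standard deviations. Both parameters are illustrative."""
    history = deque(maxlen=window)

    def check(count):
        if len(history) < window:
            history.append(count)
            return False  # still building a baseline
        mean = sum(history) / window
        var = sum((x - mean) ** 2 for x in history) / window
        flagged = count > mean + sigma * var ** 0.5
        history.append(count)
        return flagged

    return check

# Usage: feed daily syndromic counts for one region (made-up numbers).
flag = make_red_flagger()
daily_counts = [12, 9, 11, 10, 13, 8, 12, 11, 10, 9, 12, 11, 10, 13, 45]
for day, count in enumerate(daily_counts):
    if flag(count):
        print(f"Day {day}: count {count} red-flagged for review")
```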
Direct From the Source
Again, though, even the most powerful AI in the world is of only limited use if the data it receives is inaccurate or out of date, and official channels of information exchange are often slow and not always trustworthy. This is why researchers are starting to use social media to gain insight directly from the source: patients.
A joint team from UCLA and UC-Irvine was recently awarded a $1 million grant from the National Science Foundation’s Predictive Intelligence for Pandemic Prevention (PIPP) program for a project to canvass all manner of social media to identify risk factors before they become known to health organizations. The task involves rapid analysis of billions of tweets, posts, updates, and other data from all of the major social media platforms, which the team has compiled into a searchable database dating back to 2015.
In some cases, the process involves searching for a simple term like “cough” and then narrowing the resulting data by age, date, geographic location, and other variables. At the moment, the team is looking to refine its algorithms to differentiate between medical uses of words like “fever” and even “dying” and the slang senses those words have acquired over time.
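The team’s code has not been released, but the workflow described above (keyword search, metadata narrowing, then slang disambiguation) can be sketched roughly as follows. The post structure, slang lexicon, and filter values are invented for illustration.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Post:
    text: str
    author_age: int
    posted: date
    region: str

# Slang senses that should NOT count as medical signals; this lexicon is
# invented for illustration, not the team's actual disambiguation model.
SLANG_MARKERS = {
    "dying": ["dying laughing", "dying of boredom"],
    "fever": ["bieber fever", "gold fever"],
}

def is_medical_use(text, term):
    """Crude disambiguation: treat the term as medical unless it appears
    inside a known slang phrase."""
    lowered = text.lower()
    return term in lowered and not any(
        phrase in lowered for phrase in SLANG_MARKERS.get(term, [])
    )

def search(posts, term, region=None, since=None):
    """Keyword search narrowed by metadata, mirroring the workflow
    described above (term, then age/date/location-style filters)."""
    hits = [p for p in posts if is_medical_use(p.text, term)]
    if region:
        hits = [p for p in hits if p.region == region]
    if since:
        hits = [p for p in hits if p.posted >= since]
    return hits

posts = [
    Post("I'm dying laughing at this video", 19, date(2023, 3, 1), "CA"),
    Post("Third day with a fever and a bad cough", 42, date(2023, 3, 2), "CA"),
]
print(search(posts, "fever", region="CA", since=date(2023, 1, 1)))
```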
The Bottom Line
While all of this is impressive, the caveat is that there are no guarantees when dealing with the vagaries of nature. There’s a reason most viruses predate the rise of modern humans: they are not only tough but adaptable. So, in essence, what we are asking AI to do is predict how a species will evolve in a constantly changing environment, and that’s a tall order.
But one thing is clear: humans alone are not up to this task. If AI is kept out of this fight because of its potential capacity to harm the human race (which remains largely theoretical), we will someday face another pandemic without it. And that one could be far worse than the last.
As Marc Andreessen, a cofounder and general partner at the venture capital firm Andreessen Horowitz, said in his latest article, the real risk lies in not pursuing AI development with maximum force and speed. Andreessen added: