Using AI in agriculture could boost global food security – but we need to anticipate the risks


As the global population has expanded over time, agricultural modernisation has been humanity’s prevailing approach to staving off famine.

A variety of mechanical and chemical innovations delivered during the 1950s and 1960s represented the third agricultural revolution. The adoption of pesticides, fertilisers and high-yield crop breeds, among other measures, transformed agriculture and ensured a secure food supply for many millions of people over several decades.

At the same time, modern agriculture has emerged as a major contributor to global warming, responsible for a third of greenhouse gas emissions, chiefly carbon dioxide and methane.

Meanwhile, food price inflation is reaching an all-time high and malnutrition is rising dramatically. Today, an estimated two billion people face food insecurity, meaning their access to safe, sufficient and nutrient-rich food isn't guaranteed. Some 690 million people are undernourished.

The third agricultural revolution may have run its course. And as we urgently search for innovation to usher in a fourth agricultural revolution, all eyes are on artificial intelligence (AI).

AI, which has advanced rapidly over the past two decades, encompasses a broad range of technologies capable of performing human-like cognitive processes, such as reasoning. These systems are trained to make decisions by learning from vast amounts of data.


Read more:
Smart labels and allergy sensors – how to make sure the future of food is ethical

Using AI in agriculture

In assisting humans in fields and factories, AI may process, synthesise and analyse large amounts of data steadily and ceaselessly. It can outperform humans in detecting and diagnosing anomalies, such as plant diseases, and in making predictions, for instance about yield and weather.

Across several agricultural tasks, AI may relieve growers from labour entirely, automating tilling (preparing the soil), planting, fertilising, monitoring and harvesting.


Algorithms already regulate drip-irrigation grids, command fleets of topsoil-monitoring robots, and supervise weed-detecting rovers, self-driving tractors and combine harvesters. A fascination with the prospects of AI creates incentives to delegate ever more agency and autonomy to these systems.

This technology is hailed as the way to revolutionise agriculture. The World Economic Forum, an international nonprofit promoting public-private partnerships, has set AI and AI-powered agricultural robots (called “agbots”) at the forefront of the fourth agricultural revolution.

A farmer surveys crops in a field.

Agricultural AI could transform the way farmers work.
Hryshchyshen Serhii/Shutterstock

But in deploying AI swiftly and widely, we may increase agricultural productivity at the expense of safety. In our recent paper published in Nature Machine Intelligence, we have considered the risks that could come with rolling out these advanced and autonomous technologies in agriculture.

From hackers to accidents

First, given these technologies are connected to the internet, criminals may try to hack them.

Disrupting certain types of agbots could cause hefty damage. In the US alone, soil erosion costs US$44 billion (£33.6 billion) annually. This has been a growing driver of demand for precision agriculture, including swarm robotics, which can help farms manage and lessen the effects of erosion. But these swarms of topsoil-monitoring robots rely on interconnected computer networks, and hence are vulnerable to cyber-sabotage and shutdown.

Similarly, tampering with weed-detecting rovers would let weeds loose at a considerable cost. We might also see interference with sprayers, autonomous drones or robotic harvesters, any of which could cripple cropping operations.

Beyond the farm gate, with increasing digitisation and automation, entire agrifood supply chains are susceptible to malicious cyber-attacks. At least 40 malware and ransomware attacks targeting food manufacturers, processors and packagers were registered in the US in 2021. The most notable was the US$11 million ransomware attack against the world’s largest meatpacker, JBS.



Read more:
Our food system is at risk of crossing ‘environmental limits’ – here’s how to ease the pressure

Then there are accidental risks. Before a rover is sent into the field, its human operator instructs it to sense certain parameters and detect particular anomalies, such as plant pests. Whether because of its own mechanical limitations or by command, it disregards all other factors.

The same applies to wireless sensor networks deployed on farms, which are designed to detect and act on particular parameters, for example soil nitrogen content. If designed imprudently, these autonomous systems might prioritise short-term crop productivity over long-term ecological integrity. To increase yields, they might apply excessive herbicides, pesticides and fertilisers to fields, which could harm soil and waterways.
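To illustrate that design flaw, here is a minimal, hypothetical sketch (not drawn from our paper or any real farm system; the parameter names and values are assumptions): a fertiliser controller tuned only to hit a yield-maximising nitrogen target will keep dosing without limit, whereas adding a simple season-long budget is one way to encode an ecological constraint.

```python
# Hypothetical sketch: a naive fertiliser controller optimised only for yield,
# contrasted with a version constrained by a season-long ecological budget.

YIELD_TARGET_N = 40.0   # mg/kg soil nitrogen the controller tries to maintain (assumed value)
SEASON_CAP_KG = 150.0   # season-long fertiliser budget the naive controller never checks

def naive_dose(soil_n: float) -> float:
    """Fertiliser dose (kg/ha) chosen purely to close the gap to the yield target."""
    return max(0.0, (YIELD_TARGET_N - soil_n) * 2.5)  # arbitrary conversion factor

def safer_dose(soil_n: float, applied_so_far: float) -> float:
    """Same rule, but capped by a budget intended to protect soil and waterways."""
    remaining = max(0.0, SEASON_CAP_KG - applied_so_far)
    return min(naive_dose(soil_n), remaining)

if __name__ == "__main__":
    applied = 0.0
    for reading in [22.0, 25.0, 18.0, 30.0]:   # simulated weekly soil-nitrogen readings
        dose = safer_dose(reading, applied)
        applied += dose
        print(f"soil N {reading:.0f} mg/kg -> dose {dose:.1f} kg/ha (total {applied:.1f})")
```

Real systems are far more complex, but the contrast shows why what a system is designed to optimise for matters as much as how accurately it senses the field.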

Rovers and sensor networks may also malfunction, as machines occasionally do, sending commands based on erroneous data to sprayers and agrochemical dispensers. And there's always the possibility of human error in programming the machines.

An aerial view of a tractor tilling land.

There are risks associated with using AI to grow our food.
rsimona/Shutterstock

Safety over speed

Agriculture is too vital a domain to allow the hasty deployment of potent but insufficiently supervised and often experimental technologies. If we do, we may end up with technologies that intensify harvests but undermine ecosystems. As we emphasise in our paper, the most effective way to treat these risks is to anticipate and prevent them.

We should be careful in how we design AI for agricultural use and should involve experts from different fields in the process. For example, applied ecologists could advise on possible unintended environmental consequences of agricultural AI, such as nutrient exhaustion of topsoil, or excessive use of nitrogen and phosphorus fertilisers.

Also, hardware and software prototypes should be carefully tested in supervised environments (called "digital sandboxes") before they are deployed more widely. In these spaces, ethical hackers, also known as white hat hackers, could look for vulnerabilities in safety and security.


This precautionary approach may slightly slow down the diffusion of AI. Yet it should ensure that the machines which graduate from the sandbox are sufficiently sensitive, safe and secure. Half a billion farms, global food security and a fourth agricultural revolution hang in the balance.


Read more:
Six big digital trends to watch in 2022


Asaf Tzachor does not work for, consult, own shares in or receive funding from any company or organisation that would benefit from this article, and has disclosed no relevant affiliations beyond their academic appointment.