A recent study has found that ChatGPT, an artificial intelligence (AI) chatbot, has limitations in offering locally relevant information on environmental justice issues.
Virginia Tech, an American university, released a report detailing possible biases in the AI tool, which returned different results on environmental justice issues depending on the county queried. The researchers found that ChatGPT is not always able to provide region-specific information about these issues.
The analysis found a clear pattern: residents of larger, more populous states had easier access to location-specific information.
“In states with larger urban populations such as Delaware or California, fewer than 1 percent of the population lived in counties that cannot receive specific information,” the report said.
Areas with smaller populations, however, did not have the same access. “In rural states such as Idaho and New Hampshire, more than 90 percent of the population lived in counties that could not receive local-specific information,” the report noted.
The report also quoted Kim, a lecturer in the Department of Geography at Virginia Tech, who emphasised the need for further study as such biases are identified. “Our results show that there are currently geographic biases in the ChatGPT model, though more research is necessary,” Kim said. The paper also included a map showing how many Americans lack access to location-specific information on environmental justice issues.
This follows recent reports that researchers have identified what may be political biases in ChatGPT. According to a study published on August 25 by researchers from the United Kingdom and Brazil, large language models (LLMs) such as ChatGPT can produce text rife with biases and errors that could mislead readers and reinforce political biases propagated by traditional media.