Friday, June 21, 2024
AI chatbots may promote science denial. Be alert.

Generative AI and the Blurring of Truth and Fiction in Science

The internet has become the go-to source for information on controversial scientific topics. Search engines such as Google make that information easy to access by surfacing multiple sources, leaving it to the reader to decide which sites or authorities to trust. With the advent of generative artificial intelligence (AI), however, consumers have another option: they can pose a question to an AI platform such as ChatGPT and receive a concise answer in paragraph form.

Unlike Google, ChatGPT does not search the internet for answers. Instead, it generates responses by predicting likely word combinations drawn from a vast body of online text. Although generative AI has the potential to enhance productivity, it has notable drawbacks: it can produce misinformation, generate “hallucinations,” and fail at reasoning problems.

Furthermore, generative AI is already being used to produce articles and website content, and it may be challenging to ascertain whether what you’re reading was created by AI. This poses a significant risk when it comes to scientific information, particularly with respect to science denial.

Erosion of Epistemic Trust

Most consumers of science information rely on the judgments of scientific and medical experts to help them navigate complicated topics. Epistemic trust is the process of trusting knowledge gleaned from others and is fundamental to the comprehension and use of scientific information. With a rapidly growing body of information online, it can be increasingly challenging for people to decide what and whom to believe. With the increasing use of AI and the potential for manipulation, it is likely that trust in science will erode even further.

Misleading or Inaccurate Information

If the data on which AI platforms are trained contain errors or biases, that can be reflected in the results they provide, as we discovered during our own searches. When we asked ChatGPT to regenerate multiple answers to the same question, we received conflicting answers. Perhaps the trickiest issue with AI-generated content is that detecting inaccuracies or misinformation is not always straightforward.

Disinformation Spread Intentionally

Beyond written content, AI can also generate deepfake images and videos that serve as compelling disinformation. When asked to “write about vaccines in the style of disinformation,” ChatGPT produced a bogus citation with made-up data, highlighting its potential as a tool for intentionally spreading disinformation.

Fabricated Sources

ChatGPT can provide responses with no sources or, if requested, may present fabricated sources that look legitimate. When we asked ChatGPT to generate a list of our own publications, some of the sources it provided were fabrications, yet they appeared reputable and mostly plausible, complete with co-authors and similarly named journals. This is particularly problematic when a list of a scholar’s publications conveys authority to a reader who does not take the time to verify it.

Dated Knowledge

ChatGPT is trained on a fixed dataset and may not know what has occurred in the world since. A query on the percentage of the world that had contracted COVID-19 returned a response prefaced by “as of my knowledge cutoff date of September 2021.” This can be a problem for readers seeking up-to-date research on a personal health issue.

Assessment of Plausibility

It is crucial to determine whether a claim is plausible, especially if the AI makes an implausible statement like “1 million deaths were caused by vaccines, not COVID-19.” Evaluate the evidence before making a judgment, and be open to adjusting your thinking based on the evidence you find.

Promoting Digital Literacy

Everyone must improve their digital literacy in this age of generative AI, and parents, teachers, mentors, and other community leaders should encourage digital literacy in others. It takes time and effort to find and evaluate reliable information about science online, but it is worthwhile.

Conclusion

While generative AI can provide faster answers to queries, relying on it as a sole source of information carries inherent risk. It is crucial to stay vigilant, fact-check, evaluate the evidence presented, assess plausibility, and promote digital literacy. By doing so, we can stay informed and separate truth from fiction in the new AI information landscape.

About Leif Larsen

Join Leif Larsen, our science blogger extraordinaire, on a journey of discovery through the fascinating worlds of climate change, earth science, energy, environment, and space exploration. With a wealth of knowledge and a passion for exploring the mysteries of the universe, Leif delivers insightful and thought-provoking posts that offer a unique perspective on the latest developments in the world of science. Read him to unlock the secrets of the natural world, from the deepest oceans to the furthest reaches of the cosmos!
