The ethical implications of Generative Artificial Intelligence (GenAI) are vast and multifaceted. Concerns about the misuse of GenAI include data privacy, the risk of perpetuating biases present in training datasets, and the need for transparency about GenAI-generated content. The more we learn about these issues, the better informed we are to implement solutions that enhance UNC Libraries services while upholding our commitment to ethical standards and academic integrity.
In Harvard Business Review, Bhaskar Chakravorti, founding executive director of Fletcher's Institute for Business in the Global Context (IBGC), discusses AI's trust problem: twelve persistent risks of AI that are driving skepticism.1 Chakravorti argues that addressing these twelve issues requires a consistent approach focused on training, empowering, and including humans in managing GenAI tools.
Our research has surfaced concerns similar to Chakravorti's. Let's discuss a few of them.
Generative AI has the potential to be used in ways that violate the UNC Honor Code, such as submitting generated text, images, or audio without acknowledgment, or using GenAI tools to plagiarize others' work.
The rapid expansion of GenAI has created increased demand for the natural resources that power its processes, such as electricity and water.
Further reading: “A Computer Scientist Breaks Down Generative AI’s Hefty Carbon Footprint,” Scientific American, May 2023
Many researchers who study the spread of misinformation (incorrect or false information) and disinformation (deliberately misleading information) are concerned about GenAI’s ability to easily and quickly spread false content.
Further reading: “An A.I. Researcher Takes On Election Deepfakes,” New York Times, April 2024
Because Generative AI tools have advanced so rapidly, existing copyright law has struggled to adapt. The rightful copyright owner of GenAI outputs is still unclear, and this area of law will continue to change as new ownership claims are made.
Further reading: “Boom in A.I. Prompts a Test of Copyright Law,” New York Times, December 2023
Some experts are concerned because Generative AI companies are not clear about how users’ data are protected. Other concerns include the lack of consent from creators whose work was used to train AI tools.
Further reading: “Generative AI's privacy problem,” Axios, March 2024
Generative AI holds the possibility of creating content that is more accessible to everyone, including people who use assistive technology. However, it’s also important to ensure that GenAI-created content and tools are accessible and pass digital accessibility checks.
Further reading: “‘Without these tools, I’d be lost’: how generative AI aids in accessibility,” Nature, April 2024
1Chakravorti, B. (2024, May 3). AI’s trust problem. Harvard Business Review. https://hbr.org/2024/05/ais-trust-problem
AI4ALL is a "US-based nonprofit dedicated to increasing diversity and inclusion in AI education, research, development, and policy."
AI Now Institute "produces diagnosis and policy research on artificial intelligence."
AJL's mission "is to raise public awareness about the impacts of AI, equip advocates with resources to bolster campaigns, build the voice and choice of the most impacted communities, and galvanize researchers, policymakers, and industry practitioners to prevent AI harms."
CRFM creates the Foundation Model Transparency Index, a comprehensive assessment of the transparency of foundation model developers.
DAIR "is an interdisciplinary and globally distributed AI research institute free from Big Tech's pervasive influence."
Feminist AI seeks to "create space where intergenerational BIPOC and LGBTQIA+ womxn and non-binary folks can gather to build tech together that is informed by our cultures, identities, and experiences."