AI tools can improve the accessibility of resources. For example, they can create and refine features such as audio descriptions and speech recognition, increasing the accessibility of websites and other digital spaces.
There are concerns about data privacy and security with generative AI. It is often unclear what happens to user data once it is entered into an AI tool, or whether that data is protected. This is particularly important to consider when handling sensitive and confidential information, such as medical records.
Generative AI requires large datasets stored on servers in physical data centers. These centers require rare materials and produce electronic waste. Additionally, maintaining the servers (including cooling them) consumes significant amounts of water and electricity.
AI can produce incorrect or biased information. One way AI tools produce misinformation is through hallucinations, where the AI makes up answers or provides incorrect information. Misinformation and bias are also built into AI models. AI models are created by humans and trained on datasets produced and curated by humans, and thus inherit bias from the people and data that created them. This can manifest as facial recognition technology that is less accurate at distinguishing Black faces, or voice assistants that struggle to understand different accents.
As a result, human oversight and correction of AI projects is essential. For more information about human oversight in systematic review AI projects, see Ge, L., Agrawal, R., Singer, M. et al. Leveraging artificial intelligence to enhance systematic reviews in health research: advanced tools and challenges. Syst Rev 13, 269 (2024). https://doi.org/10.1186/s13643-024-02682-2
UNC has guidelines for instructors on language to consider in syllabi. Check your course syllabi and ask your instructors for more specific guidance on how you may use AI for coursework. These guidelines also cover how to properly cite AI.
Journals have different requirements for documenting AI use. It is important to check with journals about how you are allowed to use AI and how to report that usage. The Responsible AI use in Evidence Synthesis Recommendations and Guidance (RAISE) Guidelines, developed in partnership with the International Collaboration for the Automation of Systematic Reviews (ICASR), Cochrane, JBI, and the Campbell Collaboration, provide guidance and clarify responsibilities and concerns around AI use in evidence syntheses.