AI has the potential to aid humans in their work, but it is important to consider the negative aspects of these technologies.
Generative AI tools can help users brainstorm ideas, organize information, plan scholarly discussions, and summarize sources. However, they are also notorious for not always relying on factual information or rigorous research strategies. They are known for producing "hallucinations," a term used to describe plausible-sounding but false information generated by AI systems. "Hallucinations" can be presented confidently and consist of partially or fully fabricated citations or facts.
AI tools have been used to intentionally produce false images or audiovisual recordings to spread misinformation and mislead the audience. Referred to as "deep fakes," these materials can be utilized to subvert democratic processes and are thus particularly dangerous.
Additionally, the information provided by generative AI tools may not be current as some systems do not have access to the latest information. Rather, they may have been trained on past datasets and generate dated representations of current events and the related information landscape.
Another limitation of AI is the bias that can be embedded in the products it generates. These large language model systems are trained to predict the most likely sequence of words in response to a given prompt and will therefore reflect and perpetuate the biases inherent in the information they were trained on. An additional source of bias is the use of reinforcement learning with human feedback (RLHF) to refine generative AI tools. The human testers used to provide feedback to AI are themselves non-neutral. Accordingly, generative AI like ChatGPT is documented to have provided output that is socio-politically biased. It can also generate sexist, racist, or otherwise offensive information.
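The prediction mechanism described above can be sketched with a toy bigram model (a deliberately simplified, hypothetical example; real large language models use neural networks trained on vast corpora). The point it illustrates is that the model's "predictions" are nothing more than the statistics of its training text, so any skew in that text reappears in its output.

```python
from collections import Counter, defaultdict

# Hypothetical toy corpus standing in for training data; any skew in
# these sentences will be reproduced by the model's predictions.
corpus = (
    "the nurse said she was tired . "
    "the nurse said she was busy . "
    "the engineer said he was busy ."
).split()

# Count bigrams: how often each word follows each preceding word.
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def most_likely_next(word):
    """Return the continuation seen most often in the training text."""
    return follows[word].most_common(1)[0][0]

# The corpus pairs "nurse" with "she" twice and "engineer" with "he"
# once, so the model reproduces exactly that association:
print(most_likely_next("said"))  # prints "she"
```

Because "she" follows "said" twice in the corpus and "he" only once, the model confidently predicts "she"; it has no notion of fairness or fact, only frequency. The same dynamic, at vastly larger scale, is how biases in training data surface in generative AI output.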
Plagiarism is typically defined as presenting someone else's work or ideas as one's own, including work generated by AI. Individual policies for using and crediting AI tools vary from class to class. Make sure you look at your syllabus and have a clear understanding of your professor's expectations.
AI tools such as ChatGPT have been known to generate false citations ("hallucinations"). Providing false citations in research, whether intentionally or unintentionally, violates academic honesty policies related to improper citation. Even when the citations point to real papers, the content that ChatGPT or another tool attributes to them may still be inaccurate.
U.S. Copyright law related to the use of AI is still evolving.
There are currently several court cases directly relating to the unauthorized use of copyrighted material as training data for generative AI tools. Individual authors, artists, and companies are suing OpenAI, GitHub, and other companies for using their work when training their AI products.
Some library content providers prohibit any amount of their content being used with AI tools. Do not upload Library materials (e.g., articles, ebooks, infographics, psychographics, or other datasets) into AI tools unless you know our licensing allows this use.
Copyright vs plagiarism: Copyright violation is not the same as plagiarism. While plagiarism can be considered fraud if funding is involved, it is largely considered an issue of research integrity and ethics. There is currently no consensus over whether generative AI tools are engaging in plagiarism when they scrape data to generate content.
Copyright law currently has a human authorship requirement, and according to recent U.S. Copyright Office guidance, material generated by an AI technology without sufficient human creative control does not meet that requirement. What this means is that AI-generated art and text is not copyrightable on its own.
Whether a work created with AI can be copyrighted depends on the extent to which the AI tool is part of the creative process. The more human creativity involved, the more likely it is that you will be able to register your work with the U.S. Copyright Office. While you own the copyright to anything you create (or had a large part in the creation of), copyright registration is important as a public record of your copyright claim, which will be helpful to you if you are interested in licensing your work.
There are multiple privacy concerns associated with the use of generative AI tools. The most prominent issues revolve around the possibility of a breach of personal or sensitive data. Most AI-powered language models, including ChatGPT, collect large amounts of user input to train and refine their systems. This means personal or sensitive user-submitted data can become an integral part of the material used to further train the AI without the explicit consent of the user. Moreover, certain generative AI policies permit AI developers to profit from this personal or sensitive information by selling it to third parties.