Limitations & Ethical Considerations
AI can be a useful tool, but like any tool, it is not perfect. When using AI, you should be aware of its limitations and the ethical issues it can raise.
Hallucinations
Hallucinations are fabricated information and citations generated by AI tools. When an AI tool generates a hallucinated citation, it may appear real because the AI has been trained to mimic the formatting and language of real citations. Hallucinations are the primary reason you should never cite a source provided to you by AI without locating it and accessing its content yourself.
- Hallucination Leaderboard - Vectara: Tracks the hallucination rates of popular chatbots, including ChatGPT and Gemini
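Part of that verification can be automated. The minimal Python sketch below (an illustration only, assuming the `requests` library and the public Crossref REST API at api.crossref.org) checks whether a DOI supplied by an AI tool is actually registered. A DOI that resolves still has to be read yourself to confirm it supports the claim, and Crossref only covers DOIs registered through Crossref, so a miss is not proof of fabrication on its own.

```python
import requests

def doi_exists(doi: str) -> bool:
    """Return True if Crossref has a record for the given DOI.

    A registered DOI only shows that the work exists; it does not confirm
    that the work says what the AI tool claimed it says.
    """
    response = requests.get(f"https://api.crossref.org/works/{doi}", timeout=10)
    return response.status_code == 200

# Replace with the DOI the AI tool gave you; this value is just a placeholder.
print(doi_exists("10.1234/example-doi"))
```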
Bias
AI tools are sometimes regarded as free from bias. However, because they are human creations, they can be biased due to flawed design or training data. For example, large language models are trained on human-created sources of information, meaning they may replicate biases found in their training data. Researching a tool's training data and algorithmic design can help users recognize and mitigate potential biases.
Attribution
Because large language models are trained on massive amounts of data, it is essentially impossible to attribute any specific output to any one source. In addition to weakening a user's ability to evaluate the reliability of the underlying sources, this means that the authors of those sources do not receive credit for their work. Proper attribution is very important in academia, which is why citing AI may be restricted or forbidden in certain academic contexts.
Reproducibility
Because generative AI tools use machine learning to create content based on patterns, it can be difficult to replicate specific outputs. In general, AI-generated content is unique to the interaction in which it was created: even if you use the same prompt with the same AI tool, you may not receive the same output twice.
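The variability comes from how these tools generate text: each new word (token) is sampled from a probability distribution over likely continuations rather than always being the single most likely option. The toy Python sketch below is not a real language model; it uses made-up probabilities purely to show how repeated sampling from the same distribution can yield different results even when the request is identical.

```python
import random

# Hypothetical next-word probabilities; a real model computes these from the
# prompt and its training data, but the sampling step works the same way.
next_word_probs = {"research": 0.4, "libraries": 0.3, "students": 0.2, "archives": 0.1}

def sample_next_word() -> str:
    """Pick one word at random, weighted by its probability."""
    words = list(next_word_probs)
    weights = list(next_word_probs.values())
    return random.choices(words, weights=weights, k=1)[0]

# Two identical "requests" can produce different words.
print(sample_next_word())
print(sample_next_word())
```

Some tools expose settings (such as a temperature parameter or a random seed) that reduce this variability, but most consumer chatbots do not, so treating each output as unrepeatable is the safer assumption.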
Environmental Impact
The manufacturing, development, and training of AI tools can have serious environmental effects. For example, the training process for a single AI model requires significant electricity consumption and produces carbon emissions roughly equivalent to the annual emissions of hundreds of American households. Additionally, AI developers are not always transparent about the environmental impacts of their models, meaning that the true severity can be difficult to gauge.
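Such estimates usually rest on a simple formula: multiply the electricity a training run consumes by the carbon intensity of the grid that powers it, then compare the result to a familiar benchmark. The sketch below uses placeholder numbers, not measured figures for any real model, purely to show how the household comparison is calculated.

```python
# All values are hypothetical placeholders; real figures vary widely by model,
# hardware, and data center location, and are often not disclosed.
energy_used_kwh = 5_000_000          # electricity consumed during a training run
grid_intensity_kg_per_kwh = 0.4      # kg of CO2e emitted per kWh on the local grid
household_annual_kg = 8_000          # one household's annual emissions, for comparison

training_emissions_kg = energy_used_kwh * grid_intensity_kg_per_kwh
print(f"Estimated training emissions: {training_emissions_kg:,.0f} kg CO2e")
print(f"Roughly {training_emissions_kg / household_annual_kg:.0f} households' annual emissions")
```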
Privacy
The developers of different AI tools set different policies on user privacy and on the collection and use of user data. It's always a good idea to research the privacy policy of any AI tool before using it, especially one that requires you to create an account or log in with another account, such as Google or Microsoft.