Artificial Intelligence (AI) Literacy: Evaluating AI Research Tools

This guide introduces AI Literacy concepts and definitions, providing insights into the relationship between AI and research.

Which AI tool is right for my research?

Evaluating AI tools is an individual process in which you weigh the needs of your research, your values and ethics, and the pros and cons of the different AI tools available to you. Use the information and the rubric below when determining whether an AI tool is right for your research.

Grounded vs Ungrounded AI

"There are two types of LLMs: grounded and ungrounded. A grounded LLM has access to the internet and can search when responding to questions or tasks. Two examples of grounded LLMs are Perplexity AI and Microsoft Copilot. An ungrounded LLM only has access to the training data it has stored, which is often more limited and not as current. ChatGPT is an ungrounded LLM, which means it is not a good option for research." - Santa Fe College Research 101 LibGuide

Rubric for AI Tool Evaluation

As we explore new tools for research, it is helpful to evaluate if the tool itself is a good fit for our purposes.

We’ve created a rubric to help evaluate AI research tools based on:

  • Privacy – Is it clear how the tool uses and/or protects personal data?
  • Sources and truth – Does it generate high-quality information with supporting sources?
  • Functionality – Does it work well for research?
  • Socio-ethical considerations – How do you feel about the ethics of using a generative AI tool?

This rubric was created by Kevin Adams, Ellen Bahr, Samantha Dannick, Shana-Kay Harrison, and Maria Planansky and is shared under a CC BY-NC 4.0 license: https://creativecommons.org/licenses/by-nc/4.0/