Generative AI refers to deep-learning models that can generate high-quality text, images, and other content based on the data they were trained on.
GenAI models are trained on large data sets in order to identify recognizable patterns, such as patterns in word order or links between certain arrangements of shapes, colors, and associated terms. The GenAI tool then uses these patterns to predict which arrangement of words or images most closely matches the prompt given by the user.
Text-based GenAI is also referred to as a "Large Language Model" (LLM). LLMs break words and phrases down into "tokens" and, after training on vast quantities of pre-existing text, can predict which token is most likely to come next in naturally written language, given the user's prompt.
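As a rough illustration of tokenization and next-token prediction, the sketch below uses the small, openly available GPT-2 model through the Hugging Face transformers library (an assumption made for demonstration only; the commercial tools discussed in this guide use much larger proprietary models on the same principle). It splits a short prompt into tokens and then asks the model for the single most likely next token.

```python
# A minimal sketch of tokenization and next-token prediction,
# assuming the Hugging Face "transformers" library and the open GPT-2 model.
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The library will be open late during"
encoded = tokenizer(prompt, return_tensors="pt")

# Show how the prompt is broken into tokens.
print(tokenizer.convert_ids_to_tokens(encoded.input_ids[0].tolist()))

# Ask the model which token is most likely to come next.
with torch.no_grad():
    logits = model(**encoded).logits

next_token_id = int(logits[0, -1].argmax())
print("Most likely next token:", tokenizer.decode([next_token_id]))
```

Chat-based tools repeat this prediction step many times, appending each new token to the running text, which is why their answers reflect patterns in the training data rather than a lookup of verified facts.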
When discussing current AI tools, we are often talking about Large Language Models (LLMs) or other types of generative AI. These technologies differ from "General AI," a vision of technology with "intelligence" equivalent or parallel to human intelligence, including critical and emotional thinking. General AI is often closer to the cultural understanding of "artificial intelligence" expressed in science fiction. Currently, General AI does not exist.
Ethics in AI encompasses a wide range of issues.
Take the time to review the GenAI tools you are considering or already work with to see how these issues are addressed by the company that created the tool.
The rubric below can help you think through some of these issues.
| Category | Criteria | Favors Use | Minor Concerns | Major Concerns |
| --- | --- | --- | --- | --- |
| Quality of Data | Transparency | Tool provides clear explanations and consistent sources for its outputs, and the decision-making process is well-documented and accessible to users. | Some level of transparency is provided, but sources may be inaccurate or disconnected from specific information. | Users have little to no understanding of how or why decisions are made. Sources are not provided alongside responses. |
| Quality of Data | Bias & Fairness | Tool has been reviewed for bias, and mechanisms are in place to ensure fairness across diverse user groups. | Efforts to reduce bias are in place, but occasional issues may arise that require manual correction. | Tool has known biases or has not been reviewed for bias, potentially perpetuating systemic inequalities. |
| Privacy, Data Protection, & Rights | Sign Up / Sign In | Tool uses secure authentication methods and offers options for anonymity. Minimal personal information is required during the sign-up process. | Tool may offer secure authentication on an opt-in basis, or the authentication process may be inconsistent. There is no option for anonymity. | The sign-up process lacks secure authentication or requires extensive personal information. |
| Privacy, Data Protection, & Rights | User Control Over Data | Users have full control over their data, with options to modify, delete, export, or restrict processing of their data. | Users have some control, but there may be limitations on how they can manage their data within the system. | Users have little to no control over their data once it is entered into the system. |
| Privacy, Data Protection, & Rights | Policies | Company policies are available and easy to find. | Policies are mostly available but may be hard to find on their website. | Policies don't exist or can't be found. |
| Accessibility | Accessibility Standards | Tool includes features like text-to-speech, alternative text for images, and screen reader compatibility. | Tool has some accessibility features. | Tool has no accessibility features. |
| Environment | Energy Efficiency | The tool is designed for energy efficiency to reduce power consumption during training. | The tool is reasonably energy-efficient but could be improved. | |
Further Reading
Want to learn more about these topics?
- Materials from the Lyman Beecher Brooks Library catalog
- Materials from researchers
- Materials from government entities
- Materials from businesses