Technology Ethics
The study and application of ethical considerations in the development, implementation, and use of technology.
A framework for assessing and improving an organization's ethical practices in the development and deployment of AI.
Guidelines and principles designed to ensure that AI systems are developed and used in a manner that is ethical and responsible.
The practice of developing artificial intelligence systems that are fair, transparent, and respect user privacy and rights.
The principles and guidelines that govern the moral and ethical aspects of design, ensuring that designs are socially responsible and beneficial.
Technology designed to change users' attitudes or behaviors through persuasion and social influence, without coercion.
The study of computers as persuasive technologies, focusing on how they can change attitudes or behaviors.
Trust, Risk, and Security Management (TRiSM) is a framework for managing the trust, risk, and security of AI systems so that they are safe, reliable, and ethical.
The practice of collecting, processing, and using data in ways that respect privacy, consent, and the well-being of individuals.