Red Teaming
A strategy where a team plays the role of an adversary to identify vulnerabilities and improve the security and robustness of a system.
Prompt Engineering
A method used in AI and machine learning to design prompts and inputs so that a model produces the desired outcomes.
Entity Relationship Diagram (ERD)
A visual representation of the entities in a database and the relationships between them.
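The relationships an ERD depicts are typically realized in a database as foreign keys. As a minimal sketch (the authors/books schema and names here are hypothetical, chosen only for illustration), a one-to-many relationship between two entities might look like:

```python
import sqlite3

# Hypothetical one-to-many relationship: one author, many books,
# linked by a foreign key from books.author_id to authors.id.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE authors (
        id   INTEGER PRIMARY KEY,
        name TEXT NOT NULL
    );
    CREATE TABLE books (
        id        INTEGER PRIMARY KEY,
        title     TEXT NOT NULL,
        author_id INTEGER NOT NULL REFERENCES authors(id)
    );
""")
conn.execute("INSERT INTO authors VALUES (1, 'Ada')")
conn.execute("INSERT INTO books VALUES (1, 'Notes', 1)")

# Joining across the relationship recovers what the ERD diagrams.
row = conn.execute(
    "SELECT a.name, b.title FROM books b JOIN authors a ON a.id = b.author_id"
).fetchone()
print(row)  # ('Ada', 'Notes')
```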
Empirical Rule
Also known as the 68-95-99.7 Rule, it states that for a normal distribution, about 68% of the data falls within one standard deviation of the mean, about 95% within two, and about 99.7% within three.
Swiss Cheese Model
A risk management model that illustrates how multiple layers of defense (like slices of Swiss cheese) can prevent failures, despite each layer having its own weaknesses.
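A toy way to see why layered defenses help: if each layer independently lets a hazard through with some probability (its "holes"), an incident requires every layer's holes to line up, so the combined failure probability is the product of the per-layer ones. The numbers below are hypothetical, purely for illustration:

```python
import math

# Hypothetical per-layer miss rates: each layer alone is imperfect.
layer_failure_probs = [0.1, 0.2, 0.05]

# Assuming independent layers, an incident needs all layers to fail at once.
incident_prob = math.prod(layer_failure_probs)
print(incident_prob)  # far lower than any single layer's failure rate
```

The independence assumption is the model's main caveat: correlated weaknesses (holes that line up systematically) erode this benefit.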
AI Ethics Framework
A framework for assessing and improving an organization's ethical practices in the development and deployment of AI.
Central Limit Theorem
A statistical theorem stating that the distribution of sample means approximates a normal distribution as the sample size becomes larger, regardless of the shape of the population's distribution.
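A quick simulation makes this concrete: even when the population is uniform (flat, not bell-shaped), the means of repeated samples cluster around the population mean with a spread that shrinks as the sample size grows. A minimal sketch:

```python
import random
import statistics

# Draw repeated samples from a uniform(0, 1) population (not normal)
# and look at the distribution of their means.
random.seed(1)

def sample_means(sample_size, n_samples=2000):
    return [statistics.fmean(random.random() for _ in range(sample_size))
            for _ in range(n_samples)]

means = sample_means(sample_size=50)
# The sample means center on the population mean (0.5), and their spread
# is close to the population sd (1/sqrt(12) ~ 0.289) divided by sqrt(50).
print(statistics.fmean(means))
print(statistics.stdev(means))
```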
Outliers
Data points that differ significantly from other observations and may indicate variability in a measurement, experimental error, or novelty.
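One common (though not the only) way to flag such points is the 1.5 × IQR rule; a small sketch with hypothetical sensor readings:

```python
import statistics

def iqr_outliers(data):
    """Flag values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR]."""
    q1, _, q3 = statistics.quantiles(data, n=4)
    iqr = q3 - q1
    lo, hi = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return [x for x in data if x < lo or x > hi]

readings = [10, 12, 11, 13, 12, 11, 98]  # 98 differs sharply from the rest
print(iqr_outliers(readings))  # flags 98
```

Whether a flagged point is an error or a genuine novelty still requires domain judgment; the rule only identifies candidates.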
Grounding
The process of linking language to its real-world context in AI systems, ensuring accurate understanding and interpretation.