Reinforcement Learning from Human Feedback (RLHF) is a machine learning technique that uses human input to guide the training of AI models. In practice, human annotators rank candidate model outputs, a reward model is trained on those rankings, and the model is then fine-tuned with reinforcement learning to maximize the learned reward.
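Below is a minimal sketch of the reward-modelling step at the heart of RLHF, using the standard Bradley-Terry pairwise objective; the one-dimensional "response features", synthetic preference labels, and hyperparameters are all illustrative, and real systems pair a neural reward model with an RL algorithm such as PPO.

```python
# A toy reward-model fit on synthetic pairwise preferences (illustrative only).
import math
import random

random.seed(0)

def sigmoid(x: float) -> float:
    return 1.0 / (1.0 + math.exp(-x))

# Synthetic preference data: (feature_a, feature_b, human_prefers_a).
# The hidden "true" preference favours the response with the larger feature.
data = [(a, b, a > b)
        for a, b in ((random.random(), random.random()) for _ in range(200))]

# Linear reward model r(x) = w * x, trained with the Bradley-Terry loss
# used in RLHF: P(a preferred over b) = sigmoid(r(a) - r(b)).
w, lr = 0.0, 0.3
for _ in range(50):
    for a, b, prefers_a in data:
        p = sigmoid(w * (a - b))
        grad = (p - (1.0 if prefers_a else 0.0)) * (a - b)
        w -= lr * grad

print(f"learned reward weight: {w:.2f}")  # positive: larger features score higher
```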
Human in the Loop (HITL) integrates human judgment into the decision-making process of AI systems.
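A minimal sketch of one common HITL pattern, deferring low-confidence model decisions to a human review queue; the `classify` stub, the threshold value, and the queue are hypothetical placeholders standing in for a real model and review workflow.

```python
# Route uncertain predictions to a human instead of acting on them (sketch).
CONFIDENCE_THRESHOLD = 0.85  # illustrative cut-off, tuned per application

def classify(text: str) -> tuple[str, float]:
    # Stand-in for a real model; labels and confidences are fabricated.
    return ("spam", 0.62) if "win money" in text else ("ham", 0.97)

def handle(text: str, review_queue: list[str]) -> str:
    label, confidence = classify(text)
    if confidence < CONFIDENCE_THRESHOLD:
        review_queue.append(text)  # defer to human judgment
        return "pending human review"
    return label  # confident enough to act autonomously

queue: list[str] = []
print(handle("win money now", queue))   # -> pending human review
print(handle("meeting at 3pm", queue))  # -> ham
```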
Actor-Observer Bias is a cognitive bias that causes people to attribute their own actions to situational factors while attributing others' actions to their character.
Large Language Model (LLM) is an advanced artificial intelligence system trained on vast amounts of text data to understand and generate human-like text.
Human-Centered Design (HCD) is an approach to problem-solving that involves the human perspective in all steps of the process.
Negativity Bias is the tendency to give more weight to negative experiences or information than to positive ones.
Difference Threshold is the minimum difference in stimulus intensity that a person can detect, also known as the just noticeable difference (JND).
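The JND is commonly modelled by Weber's law, which holds that the smallest detectable change is roughly a constant fraction of the baseline intensity (delta_I = k * I). A minimal sketch, assuming an illustrative Weber fraction of 0.08 rather than any empirically measured value:

```python
# Weber's law sketch: the JND scales with baseline intensity.
def jnd(intensity: float, weber_fraction: float = 0.08) -> float:
    # weber_fraction (k) is illustrative; real values vary by sense and stimulus.
    return weber_fraction * intensity

print(jnd(100.0))  # 8.0: at intensity 100, changes smaller than ~8 go unnoticed
print(jnd(10.0))   # 0.8: fainter baselines need smaller absolute changes
```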
Wizard of Oz Testing is a usability testing method in which users interact with a system they believe to be autonomous but that is actually operated by a hidden human.
Dunning-Kruger Effect is a cognitive bias whereby individuals with low ability at a task overestimate their ability, while experts tend to underestimate their relative competence.