Establishing Guardrails on Large Language Models
September 21, 2023
Stage MR5

- How do you avoid bias in LLMs, at both the dataset level and in the algorithms themselves?
- A diverse dataset helps minimize bias, yet the humans in the loop can reintroduce it after training. For example, an AI model might identify a small shortlist of preferred job candidates, but the human recruiters who make the final decision could introduce biases such as racial preference. A sketch of a dataset-level bias check follows this list.
- How do you put guardrails around hallucination, when the AI model makes things up? A sketch of one grounding-based guardrail follows this list.
- The pros and cons of releasing powerful AI models as open source
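
One way to make the dataset-level bias concern concrete is to measure selection rates per demographic group in the model's shortlists and flag large disparities. Below is a minimal sketch in Python; the record fields `group` and `shortlisted` and the four-fifths threshold are illustrative assumptions, not anything the panel specified.

```python
# Minimal sketch of a group-level bias check on model shortlisting decisions.
# Assumes each candidate record carries a (hypothetical) `group` attribute
# and a model-assigned `shortlisted` flag.
from collections import defaultdict


def selection_rates(candidates):
    """Compute per-group selection rates from shortlisting decisions."""
    totals = defaultdict(int)
    selected = defaultdict(int)
    for c in candidates:
        totals[c["group"]] += 1
        if c["shortlisted"]:
            selected[c["group"]] += 1
    return {g: selected[g] / totals[g] for g in totals}


def disparate_impact_flags(rates, threshold=0.8):
    """Flag groups whose selection rate falls below the four-fifths rule
    relative to the highest-rate group."""
    best = max(rates.values())
    return {g: r / best < threshold for g, r in rates.items()}


if __name__ == "__main__":
    sample = [
        {"group": "A", "shortlisted": True},
        {"group": "A", "shortlisted": True},
        {"group": "A", "shortlisted": False},
        {"group": "B", "shortlisted": True},
        {"group": "B", "shortlisted": False},
        {"group": "B", "shortlisted": False},
    ]
    rates = selection_rates(sample)
    print(rates)                          # {'A': 0.67, 'B': 0.33} (approx.)
    print(disparate_impact_flags(rates))  # {'A': False, 'B': True}
```

A check like this can run on the model's output before any human review, so the dataset-level and human-in-the-loop stages can be audited separately.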
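
For hallucination, one common guardrail pattern is to retrieve source passages and withhold any answer that is not sufficiently grounded in them. The sketch below assumes hypothetical `retrieve` and `generate` callables and uses simple token overlap as the grounding score; a production system would use a stronger entailment or citation check.

```python
# Minimal sketch of a grounding guardrail: answers with low overlap against
# retrieved source text are withheld. `retrieve` and `generate` are
# hypothetical stand-ins for a document search and an LLM call.
import re


def tokenize(text):
    """Lowercase and split text into a set of alphanumeric tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))


def grounding_score(answer, sources):
    """Fraction of answer tokens that also appear in the retrieved sources."""
    answer_tokens = tokenize(answer)
    if not answer_tokens or not sources:
        return 0.0
    source_tokens = set().union(*(tokenize(s) for s in sources))
    return len(answer_tokens & source_tokens) / len(answer_tokens)


def guarded_answer(question, retrieve, generate, min_score=0.6):
    """Return the model's answer only when it is grounded in the sources."""
    sources = retrieve(question)
    answer = generate(question, sources)
    if grounding_score(answer, sources) < min_score:
        return "I don't have enough supported information to answer that."
    return answer


if __name__ == "__main__":
    docs = ["The meetup was held on September 21, 2023 on stage MR5."]
    retrieve = lambda q: docs
    generate = lambda q, src: "The meetup was held on September 21, 2023."
    print(guarded_answer("When was the meetup held?", retrieve, generate))
```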
Theme: Applied Intelligence Applications