
Establishing Guardrails on Large Language Models

September 21, 2023
Stage MR5
  • How do you avoid bias in LLMs from both the dataset level and the algorithms themselves?
  • A diverse dataset helps minimize bias, yet the humans in the loop can still introduce it after training. For example, an AI model might identify a small shortlist of preferred job candidates, but the human recruiters who make the final decision could introduce biases such as racial preference.
  • How do you put guardrails around hallucination, where the AI model makes things up?
  • The pros and cons of releasing powerful AI models as open source
Chairperson
Srimoyee Bhattacharya, Senior Data Scientist - Shell
Speakers
Shubham Saboo, Head of Developer Relations - Tenstorrent Inc
Kris Perez, Director, AI - DataForce
Randall Kenny, Head of Performance and Product Analytics - BP
Patrick Marlow, Conversational AI Engineer, Google Cloud AI Incubator - Google

Theme

Applied Intelligence Applications
