About the Team
At Roblox, our mission is to bring people together through the power of play. As the leading platform for creating and sharing games, we strive to create a safe and inclusive environment for our community. You will help us move away from manual drafting and toward AI-assisted content scaling, setting the standard for how modern Trust & Safety teams operate. We are looking for a highly skilled, passionate individual with a strong background in knowledge management and content moderation to join our team. If you are committed to promoting a positive and secure online experience for our users, we want you on our team at Roblox.
You Will:
- Transform complex, legalistic policy language into clear, structured operational instructions for human moderators, leveraging advanced prompt engineering and few-shot learning methodologies.
- Execute Policy Implementation Testing. Before any policy launches, you will test draft guidelines against Gold Label (ground-truth) data provided by policy partners, ensuring that new rules score higher than existing baselines on enforceability metrics before approving them for deployment.
- Work closely with Policy Managers and Product Support to translate complex safety philosophies into clear, digestible, machine-readable logic.
- Ensure all team members have access to accurate and up-to-date information and resources.
- Collaborate with cross-functional teams to identify and address knowledge gaps and improve processes.
- Stay up to date on industry trends and best practices in content moderation and Trust & Safety.
- Monitor and analyze data to identify areas for improvement in knowledge management and content moderation.
- Work closely with the Trust & Safety leadership team to develop and implement policies and procedures to maintain a safe and inclusive environment for our community.
- Collaborate with product and engineering teams to develop tools and resources to enhance the effectiveness of content moderation.
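The gold-label testing workflow described above can be sketched as a minimal evaluation loop. This is an illustrative sketch only: the function names, the simple agreement-rate metric, and the sample labels are all hypothetical stand-ins for whatever enforceability metrics and tooling the team actually uses.

```python
# Hypothetical sketch of gold-label policy testing: score a draft
# guideline's moderation decisions against ground-truth labels and
# approve deployment only if the draft beats the existing baseline.

def agreement_rate(decisions, gold_labels):
    """Fraction of moderation decisions that match the gold labels."""
    matches = sum(d == g for d, g in zip(decisions, gold_labels))
    return matches / len(gold_labels)

def approve_for_deployment(draft_decisions, baseline_decisions, gold_labels):
    """Approve the draft guideline only if it outscores the baseline."""
    draft_score = agreement_rate(draft_decisions, gold_labels)
    baseline_score = agreement_rate(baseline_decisions, gold_labels)
    return draft_score > baseline_score

# Toy data: ground-truth labels and decisions made under each guideline.
gold = ["remove", "allow", "remove", "allow", "remove"]
baseline = ["remove", "allow", "allow", "allow", "allow"]  # 3/5 agreement
draft = ["remove", "allow", "remove", "allow", "allow"]    # 4/5 agreement

print(approve_for_deployment(draft, baseline, gold))  # True
```

In practice the comparison would run over a held-out gold set per harm category, with richer metrics than raw agreement, but the gate is the same: no deployment unless the draft measurably outperforms the baseline.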
You Have:
Minimum Qualifications:
- You have extensive experience using LLMs to assist in creating technical documentation, knowledge bases, or process guides. You understand prompt engineering, context windows, and how to verify AI outputs for accuracy.
- Proven experience in cross-functional work within Trust & Safety, Content Moderation, or complex Customer Support operations.
- You don't assume a document works just because it reads well. You believe in testing content against reality and have experience validating processes before deployment.
- Demonstrated ability to manage shifting priorities and handle time-sensitive, critical-harms launches (e.g., CSAM, Terrorism) with absolute precision.
- You are comfortable looking at Quality Alignment scores and Ops metrics to determine if a policy is successful.
Preferred Qualifications:
- Experience building Human-in-the-Loop workflows where AI drafts content and humans validate it.
- Background in Content Strategy, Information Architecture, or Technical Writing in a tech environment.
Required Experience:
Manager