Research Intern - AI Ethics
San Jose, CA - USA
Job Summary
Sony AI America, a branch of Sony AI, is a remotely distributed organization spread across the U.S. and Canada. Sony AI is Sony's new research organization pursuing the mission to use AI to unleash human creativity. Sony AI works closely with Sony's other business units, including Sony Interactive Entertainment LLC, Sony Pictures Entertainment Inc., and Sony Music Entertainment. With some 900 million Sony devices in hands and homes worldwide today, a vast array of Sony movies, television shows, and music, and the PlayStation Network, Sony creates and delivers more entertainment experiences to more people than anyone else on earth. To learn more: Sony Research
Ethics Team
Sony Research is dedicated to driving innovation, advancing technology, and ensuring that AI is developed responsibly, ethically, and inclusively. The AI Ethics team focuses on creating frameworks, tools, and methodologies that promote trust, transparency, fairness, and accountability in AI tools and technologies.
Summary
We are seeking a motivated Research Intern to join our interdisciplinary team at the forefront of AI Ethics, Safety, and Responsible AI for a 3-month research internship. You will work on open research questions in generative AI, including data collection, guardrails, evaluation, and benchmarking, to tackle fundamental ethical challenges that arise when AI is deployed at scale in a global entertainment company. This is an opportunity to conduct meaningful, publishable research in collaboration with experienced researchers and engineers across a globally diverse team. We actively support interns in co-authoring papers and submitting to top-tier venues, making this an ideal opportunity to advance your doctoral research while contributing to real-world impact.
Responsibilities
Responsibilities will depend on the project you are assigned to and will include a combination of the following:
Conduct innovative research in AI ethics, including but not limited to data collection, AI evaluation, and harm mitigation.
Contribute to the development of tools and frameworks to assess and mitigate AI-related risks in natural language processing and computer vision, such as bias, privacy, copyright, and transparency.
Improve the performance of open models to detect and mitigate AI harms.
Set up technical experiments to conduct evaluations of open models.
Contribute to research with the potential for publication.
Qualifications
Currently pursuing a degree (PhD preferred) in Computer Science or socio-technical AI topics.
Strong foundations in AI and machine learning techniques in computer vision or natural language processing.
Familiarity with generative AI evaluation, benchmarking, red-teaming, or guardrail development.
Experience using Python and deep learning libraries (e.g., PyTorch, Hugging Face Transformers) to develop, fine-tune, or evaluate generative AI models.
Strong interest in AI ethics and responsible AI development.
Proven research analytical and problem-solving skills.
Self-motivated and capable of proposing and implementing innovative ideas.
Excellent written and verbal communication skills in English.
Preferred: Experience with research communities, including having published papers at conferences/journals.