Description:

An AI Safety and Research Company is seeking a thoughtful, multidisciplinary product counsel to support our trust and safety team in preventing harmful misuse of our AI systems. As a member of the Product Counsel function, you will work cross-functionally to identify and mitigate risks, define policies, and advise on safeguards. The ideal candidate will have experience navigating complex issues at the intersection of artificial intelligence, user safety, content policy, and online harms.

You Will:
  • Collaborate with the Trust and Safety team to develop product features, content policies, and moderation workflows that prevent misuse and address safety issues. Provide pragmatic advice and risk mitigation recommendations.
  • Define principles and policies around system misuse, user safety, content moderation, and sensitive data handling. Develop guidelines and controls to proactively mitigate potential harms.
  • Develop expertise in laws and regulations relevant to AI safety, content moderation, and user privacy. Stay up to date on potential AI misuse vectors and risk areas.
  • Partner with external legal counsel, subject matter experts, and policymakers as required to advance understanding, set appropriate safeguards, and proactively address risks.

You might be a good fit if:
  • You have a passion for developing AI that is both innovative and trustworthy. You understand the opportunities of AI to benefit society as well as the risks and limitations that must be addressed.
  • You have experience operating in a fast-paced technology startup in which priorities shift rapidly and schedules “move to the left.” You thrive in this dynamic environment and pride yourself on your adaptability and ability to pivot with speed and grace.
  • You understand how to balance the organization's mission and goals, knowing when to be flexible and when to draw a hard line.
  • You have a knack for identifying and implementing efficient processes and policies.
  • You thrive as a member of cross-functional teams building frontier technologies and want to develop a deep understanding of our technical teams and their work.
  • You enjoy wearing many hats in a fast-growing startup environment and are comfortable operating outside your areas of expertise and in uncharted legal territory.
  • You are a “doer” and are willing to roll up your sleeves to get things done. You're a team player who doesn't hesitate to jump in to solve difficult problems.

Qualifications:
  • JD and active membership in at least one U.S. state bar (California preferred)
  • 5-10 years of relevant legal experience (in-house preferred). Specific expertise in content moderation, trust & safety, privacy, and online harms strongly preferred.
  • Exceptional analytical skills, technical fluency, and ability to navigate ambiguity
  • Broad understanding of key issues in AI, data privacy, content policy, and online safety. Familiarity with machine learning, generative models, and related products strongly preferred.
  • Track record of educating internal stakeholders, translating complex concepts, and advising on risk mitigation strategies.
  • Superior communication abilities, with proven experience in cross-functional collaboration and consensus building.
  • Motivated self-starter able to manage multiple competing priorities in a dynamic environment.
  • Growth mindset, with a passion for AI's potential to positively impact the world and a realistic assessment of its risks and limitations. Commitment to building trustworthy, ethical AI systems.
  • Authentic integrity and a deep understanding of the importance of ethics in business.