Job Description
Anthropic is working on cutting-edge AI research that has the potential to fundamentally change how humans and machines interact. As we rapidly advance foundational LLMs, application security is critical. In this role, you will apply security practices designed for high-risk environments to protect model weights as we expand our capabilities. Working closely with software engineers, you will implement controls across access, infrastructure, and data to proactively reduce risk from malicious actors. This is an opportunity to join a team of specialists developing AI for social good while pushing the boundaries of safe and ethical AI research.
About Anthropic
Anthropic is an AI safety and research company dedicated to building reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for both our customers and society as a whole. Our interdisciplinary team has experience across machine learning, physics, policy, business, and product.
Responsibilities:
- Lead “shift left” security initiatives to integrate security across the software development lifecycle.
- Perform secure design evaluations and threat modelling. Identify and prioritise potential threats, attack surfaces, and vulnerabilities.
- Perform security code reviews of source code changes, and advise developers on remediations and secure coding practices.
- Manage Anthropic’s vulnerability management programme: triage and prioritise findings from scans, audits, and bug bounty submissions, monitor remediation, and validate fixes.
- Oversee Anthropic’s bug bounty programme: set the scope, review submissions, coordinate disclosure with engineering teams, and issue rewards. Build relationships with the ethical hacking community.
- Investigate and recommend security tools and solutions to improve defences against emerging threats to machine learning systems.
- Create and document security policies, guidelines, and playbooks. Provide security awareness training to engineers.
- Collaborate with product engineers and researchers to implement security best practices. Promote secure architecture, design, and development.
You could be a good fit if you:
- Have 5+ years of hands-on experience with application and infrastructure security, including cloud and containerised systems.
- Bring empathy, strong communication skills, and a learning mindset to cross-functional work with engineers at all levels, integrating security into the product lifecycle.
- Can apply creative and strategic thinking to reduce risk through secure design and simplicity, rather than just controls.
- Possess broad security knowledge that lets you connect the dots across domains and identify holistic approaches to reducing the overall attack surface.
- Have the ability to translate complex security concepts into actionable guidance and build consensus without direct authority.
- Take a proactive approach to security throughout the product lifecycle by engaging in activities such as threat modelling, secure code review, and education.
- Have a solid grounding in offensive security, so you can anticipate threats from an adversary’s perspective rather than simply checking compliance boxes.
- Have experience with modern application stacks, infrastructure, and security tooling that lets you design practical defences.
- Are passionate about security fundamentals such as least privilege, defence in depth, and reducing complexity, so that security scales sub-linearly through good design.
A strong candidate may also:
- Have hands-on technical experience protecting complex cloud environments and microservices architectures using technologies such as Kubernetes, Docker, and AWS/GCP.
- Have familiarity with offensive security tactics such as vulnerability scanning, pen testing, and red team exercises.
- Have expertise with AI/ML security threats such as data poisoning, model extraction, adversarial examples, etc. and mitigations.
- Have experience building security tools, scripts, and automation.
- Have a deep understanding of security engineering principles and technologies, and a desire to keep learning.
- Have excellent communication skills, including the ability to explain complex security topics to a broad range of audiences.
- Be passionate about security and user protection, with a willingness to constructively challenge assumptions to improve security.
Candidates need not have:
- 100% of the skills needed to accomplish the job
- Formal certifications or educational credentials
Annual salary (USD)
- The expected salary range for this position is $300,000 to $405,000.
Logistics
Our location-based hybrid policy means staff are expected to spend at least 25% of their time in the office.
Deadline for applying: None. Applications will be considered on a rolling basis.
We sponsor visas for the US. However, we aren’t able to successfully sponsor visas for every role and every candidate; operations roles are especially difficult to support. If we make you an offer, though, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to assist with this.
We encourage you to apply even if you don’t think you meet every single qualification. Not all strong candidates will meet all of the qualifications listed. Research shows that people who identify as members of underrepresented groups are more prone to imposter syndrome and to doubting the strength of their candidacy, so we urge you not to rule yourself out prematurely and to apply if you’re interested in this work. We believe AI systems like the ones we’re building have enormous social and ethical implications. We think this makes representation all the more important, and we strive to build a diverse team.
Compensation and Benefits*
Anthropic’s compensation package consists of three elements: salary, equity, and benefits. We are committed to pay fairness and intend for all three to be highly competitive with market rates.
Equity will make up a significant portion of total compensation for this role, in addition to the salary noted above. We aim to offer higher-than-average equity for a company of our size and will share the specific equity level at the time of offer.
US Benefits: The following benefits are available to our US-based employees:
- Optional equity donation matching at a 3:1 ratio, for up to 50% of your equity grant.
- Comprehensive health, dental, and vision coverage for you and your dependents.
- 401(k) plan with 4% match.
- 21 weeks of paid parental leave.
- Unlimited PTO – most employees take 4-6 weeks per year, sometimes more!
- Stipends for education, home office improvements, commuting, and wellness.
- Fertility benefits through Carrot.
- Daily lunches and snacks in our office.
- Relocation support for those moving to the Bay Area.
UK Benefits: The following benefits are available to our UK-based employees:
- Optional equity donation matching at a 3:1 ratio, for up to 50% of your equity grant.
- Private health, dental, and vision insurance for you and your dependents.
- Pension contribution (4% of your salary).
- 21 weeks of paid parental leave.
- Unlimited PTO – most employees take 4-6 weeks per year, sometimes more!
- Health cash plan.
- Life insurance and income protection.
- Daily lunches and snacks in our office.
This compensation and benefits information is based on Anthropic’s good-faith estimate for this role as of the date of posting and may be modified in the future. Employees based in countries other than the United Kingdom or the United States will receive a different benefits package. The level of pay within the range will depend on a variety of job-related factors, including where you fall on our internal performance ladders, which take into account prior work experience, relevant education, and performance in our interviews or work trials.
How we’re different
We believe that the highest-impact AI research will be big science. At Anthropic, we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, over working on smaller, more specific puzzles. We view AI research as an empirical science, one that has as much in common with physics and biology as with traditional computer science. We’re a highly collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at all times. As such, we greatly value communication skills. We do not distinguish between engineering and research, and we expect all of our technical staff to contribute to both as needed.
The easiest way to understand our research directions is to read our most recent research. This work builds on many of the directions our team worked on prior to Anthropic, including GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.
Come work with us!
Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.