Security Architect, Applied AI
Company: Anthropic
Location: New York City
Posted on: April 4, 2026
Job Description:

About Anthropic

Anthropic's mission is to create reliable, interpretable, and steerable AI systems. We want AI to be safe and beneficial for our users and for society as a whole. Our team is a quickly growing group of committed researchers, engineers, policy experts, and business leaders working together to build beneficial AI systems.

About the role:

As an Applied AI Security Architect, you will serve as Anthropic's trusted security expert for our most demanding enterprise customers. You'll engage directly with CISOs, security architects, compliance officers, and technical leaders at the world's largest financial institutions, insurance companies, and other highly regulated enterprises to address their most critical questions about deploying Claude safely and securely.

This is a pre-sales technical role focused on security, compliance, networking, and data architecture. Your job is to walk into a room full of security professionals and demonstrate deep expertise in enterprise security, regulatory compliance, and data protection. You'll help customers understand Claude's security architecture, data handling practices, and deployment options, and partner with them to design solutions that meet their specific regulatory and organizational requirements.

You'll bring significant experience in enterprise security, cloud architecture, and technical pre-sales within regulated industries. Whether you've been a Security Architect, Solutions Architect, Field CTO, or senior pre-sales engineer at a cloud or security vendor, what matters is that you understand how large institutions evaluate and adopt technology, especially in financial services, and can speak credibly to their security and compliance concerns.

We are looking for someone excited to help define how enterprises should think about security and compliance in the age of AI. How do MCP, autonomous agents, and RBAC work together? If working at the intersection of AI adoption and regulated industries excites you, this is the role for you.
Responsibilities:

- Serve as the primary security and compliance expert during customer engagements, addressing technical questions about Claude's architecture, data flows, encryption, access controls, and deployment models.
- Partner with CISOs, security architects, and compliance teams at financial services and insurance companies to understand their security requirements and design solutions that meet regulatory standards (SOC 2, SOX, PCI-DSS, GDPR, state insurance regulations, etc.).
- Lead technical deep-dives on network architecture, data residency, API security, authentication/authorization, audit logging, and integration patterns for regulated environments.
- Support enterprise security reviews, vendor assessments, and due diligence processes by providing detailed technical documentation and expert guidance.
- Collaborate with Sales and Applied AI teams before and after customer engagements to align on strategy, prepare for security discussions, and ensure continuity from initial conversations through deployment.
- Partner closely with Anthropic's product and engineering teams to deeply understand Claude's security capabilities, provide real-time customer feedback on feature gaps and priorities, help assess the technical feasibility of customer-specific security requirements, and influence roadmap priorities.
- Develop and maintain security-focused collateral, reference architectures, and best-practices documentation for regulated industries.
- Travel regularly to customer sites for security workshops, architecture reviews, and strategic account meetings.

You may be a good fit if you have:

- 8 years of experience in enterprise security, cloud architecture, or technical pre-sales, with significant exposure to regulated industries (financial services, insurance, healthcare).
- Deep technical knowledge of enterprise security concepts: network security, identity and access management, encryption (at rest and in transit), API security, and audit/logging requirements.
- Experience navigating compliance frameworks relevant to financial services and insurance (SOC 2, SOX, PCI-DSS, GDPR, CCPA, state insurance regulations, banking regulators' guidance on AI/ML).
- A track record of engaging with CISOs, security teams, and compliance officers at large enterprises.
- A strong understanding of cloud architecture and deployment models (AWS, Azure, GCP), including VPCs, private endpoints, and hybrid connectivity.
- Excellent communication skills, including the ability to explain complex security topics clearly to both technical and non-technical audiences.
- The ability to navigate ambiguity and move fast in a rapidly evolving market.
- A collaborative mindset: sales at Anthropic is a team sport.
- Excitement about AI's potential to transform highly regulated industries, and a genuine desire to help customers adopt it safely and responsibly.

Deadline to apply: None. Applications will be reviewed on a rolling basis.

The annual compensation range for this role is listed below. For sales roles, the range provided is the role's On Target Earnings ("OTE") range, meaning that the range includes both the sales commissions/sales bonuses target and the annual base salary for the role.

Annual Salary: $240,000 - $315,000 USD
Logistics

Minimum education: Bachelor's degree or an equivalent combination of education, training, and/or experience

Required field of study: A field relevant to the role as demonstrated through coursework, training, or professional experience

Minimum years of experience: Years of experience required will correlate with the internal job level requirements for the position

Location-based hybrid policy: Currently, we expect all staff to be in one of our offices at least 25% of the time. However, some roles may require more time in our offices.

Visa sponsorship: We do sponsor visas! However, we aren't able to successfully sponsor visas for every role and every candidate. But if we make you an offer, we will make every reasonable effort to get you a visa, and we retain an immigration lawyer to help with this.

We encourage you
to apply even if you do not believe you meet every single qualification. Not all strong candidates will meet every single qualification as listed. Research shows that people who identify as being from underrepresented groups are more prone to experiencing imposter syndrome and doubting the strength of their candidacy, so we urge you not to exclude yourself prematurely and to submit an application if you're interested in this work. We think AI systems like the ones we're building have enormous social and ethical implications. We think this makes representation even more important, and we strive to include a range of diverse perspectives on our team.

Your safety matters to us. To protect yourself from potential scams, remember that Anthropic recruiters only contact you from @anthropic.com email addresses. In some cases, we may partner with vetted recruiting agencies who will identify themselves as working on behalf of Anthropic. Be cautious of emails from other domains. Legitimate Anthropic recruiters will never ask for money, fees, or banking information before your first day. If you're ever unsure about a communication, don't click any links; visit anthropic.com/careers directly for confirmed position openings.

How we're different

We believe that the highest-impact AI
research will be big science. At Anthropic we work as a single cohesive team on just a few large-scale research efforts. And we value impact, advancing our long-term goals of steerable, trustworthy AI, rather than working on smaller and more specific puzzles. We view AI research as an empirical science, which has as much in common with physics and biology as with traditional efforts in computer science. We're an extremely collaborative group, and we host frequent research discussions to ensure that we are pursuing the highest-impact work at any given time. As such, we greatly value communication skills.

The easiest way to understand our research directions is to read our recent research. This research continues many of the directions our team worked on prior to Anthropic, including: GPT-3, Circuit-Based Interpretability, Multimodal Neurons, Scaling Laws, AI & Compute, Concrete Problems in AI Safety, and Learning from Human Preferences.

Come work with us!

Anthropic is a public benefit corporation headquartered in San Francisco. We offer competitive compensation and benefits, optional equity donation matching, generous vacation and parental leave, flexible working hours, and a lovely office space in which to collaborate with colleagues.

Guidance on Candidates' AI Usage: Learn about our policy for using AI in our application process