Lead Technical Analyst, Workspace AI, Trust and Safety
Company: Google
Location: Seattle
Posted on: April 2, 2026
Job Description:
In accordance with Washington state law, we are highlighting our comprehensive benefits package, which is available to all eligible US-based employees. Benefits for this role include:
- Insurance: health, dental, vision, life, and disability
- Retirement Benefits: 401(k) with company match
- Paid Time Off: 20 days of vacation per year, accruing at a rate of 6.15 hours per pay period for the first five years of employment
- Sick Time: 40 hours/year (statutory, where applicable); 5 days/event (discretionary)
- Maternity Leave (Short-Term Disability + Baby Bonding): 28-30 weeks
- Baby Bonding Leave: 18 weeks
- Holidays: 13 paid days per year
Minimum qualifications:
- Bachelor's degree or equivalent practical experience.
- 7 years of work experience in data analysis, security threat detection, or abuse investigation.
- Experience in one or more programming languages (e.g., Python, SQL, Go, C++, Java), or with Machine Learning, Anomaly Detection, or AI models.

Preferred qualifications:
- Master's degree or PhD in a technical field.
- Experience building and deploying anti-abuse systems at the scale of Google Cloud or Workspace.
- Experience in exploratory data analysis and statistical analysis, with a track record of identifying non-obvious patterns in datasets.
- Ability to navigate ambiguity and solve problems in the AI safety domain.
- Excellent problem-solving and critical thinking skills with attention to detail in an ever-changing environment.
- Excellent written and verbal communication skills, with the ability to articulate technical safety concerns to executive leadership.

About the job
Trust and Safety team members are tasked with identifying and taking on the biggest problems that challenge the safety and integrity of our products. They use technical know-how, excellent problem-solving skills, user insights, and proactive communication to protect users and our partners from abuse across Google products like Search, Maps, Gmail, and Google Ads. On this team, you're a big-picture thinker and strategic team player with a passion for doing what's right. You work globally and cross-functionally with Google engineers and product managers to identify and fight abuse and fraud cases at Google speed, with urgency. And you take pride in knowing that every day you are working hard to promote trust in Google and ensure the highest levels of user safety.

The Workspace AI Trust and Safety team enables the rapid growth of Workspace AI businesses by curbing associated safety and security risks. We support products throughout their life cycle by advancing safety protection mechanisms in the earliest stages of design. Our portfolio includes both pre- and post-launch capabilities, ensuring AI products are powerful, safe, secure, and aligned with our AI Principles.

As the Lead Technical Analyst for Workspace AI Trust and Safety, you will move beyond individual execution to define the direction for how we measure, mitigate, and prevent AI risks at scale. You will serve as the technical anchor for a team of analysts, setting the standards for our anti-abuse detection systems and safety frameworks.

At Google we work hard to earn our users' trust every day. Trust and Safety is Google's team of abuse-fighting and user trust experts working daily to make the internet a safer place. We partner with teams across Google to deliver bold solutions in abuse areas such as malware, spam, and account hijacking. A team of Analysts, Policy Specialists, Engineers, and Program Managers, we work to reduce risk and fight abuse across all of Google's products, protecting our users, advertisers, and publishers across the globe in over 40 languages.

The US base salary range for this
full-time position is $189,000-$274,000 + bonus + equity + benefits. Our
salary ranges are determined by role, level, and location. Within
the range, individual pay is determined by work location and
additional factors, including job-related skills, experience, and
relevant education or training. Your recruiter can share more about
the specific salary range for your preferred location during the
hiring process. Please note that the compensation details listed in
US role postings reflect the base salary only, and do not include
bonus, equity, or benefits. Learn more about benefits at Google.
Responsibilities:
- Define the technical roadmap and long-term strategy for AI safety, prompt injection evaluations, and misuse prevention across Workspace AI products.
- Lead the design and implementation of scalable anti-abuse detection and action systems, including the "AI agent" frameworks used to automate enforcement.
- Lead the investigation of novel failure modes for GenAI products (e.g., sociotechnical harms, adversarial misuse) and establish benchmarking and evaluation protocols.
- Act as a trusted advisor to executive stakeholders in Engineering and Product, translating safety and security risks into actionable business insights and influencing product design to prioritize safety.
- Mentor analysts, review technical work, and elevate the team's capabilities in data extraction, statistical analysis, and machine learning.