
AI Cybersecurity Expert
- Espoo, Helsinki
- Permanent
- Full-time
- Establish practical AI security guardrails & patterns for LLMs, classical ML, and AI-assisted workflows—clear how-to guidance, examples, and checklists that make secure choices easy.
- Design and build AI agents for cybersecurity use cases and guide other teams building agents for business/product scenarios—safe tool use & permissioning, approval/fallback flows, containment, and user experience that minimizes risk.
- Provide hands-on consultancy and enablement: join design sessions, review solutions, help teams prototype safely, and run clinics/office hours for both developers and everyday business users.
- Advance LLMOps/MLOps and oversee MCP usage: fit-for-purpose evaluation/safety tests, versioning and rollback, prompt/tool hygiene, secrets handling, and enterprise Model Context Protocol (MCP) governance (tool registration, permissions, boundary controls, auditability).
- Run pragmatic AI risk assessments for models and use cases—data minimization, isolation and context scoping, abuse/misuse prevention—including MCP-specific agent–tool–data interactions.
- Partner with capability owners (identity, data protection, data platforms, cloud, API gateways, product security, enterprise architects) to integrate AI securely by scaling through others with patterns, sample configs, and baselines.
- Strengthen the AI supply chain with procurement, product security, and third-party risk management—technical due diligence for model providers, vector stores, orchestration frameworks, plugins/tools, and SaaS—plus practical evaluation criteria and reference controls.
- Create reusable assets and educate users—playbooks, “dos and don’ts,” starter kits, example prompts, decision trees, and reference implementations—and continuously update patterns and training as platform capabilities evolve.
- Solid, hands-on expertise in AI/ML security across LLM and classical ML—protecting data, avoiding leakage, resisting abuse, establishing safe boundaries for agents/tools, and promoting secure user experiences.
- Experience guiding teams to build with AI safely, including AI agents and copilots; familiarity with enterprise agent frameworks and MCP (in practice, not just theory), and how everyday users interact with these systems.
- Security architecture and application security fundamentals—identity and access (human and workload), API and data protection, cloud platform controls, containerization/runtime boundaries—and the ability to apply them pragmatically to AI scenarios.
- Operational MLOps/LLMOps know-how: safety/evaluation harnesses, rollout/rollback, versioning, telemetry for AI workloads, dataset hygiene, and drift/regression monitoring—ideally in partnership with platform teams.
- Clear, audience-appropriate communication: you can explain complex topics simply, write user-friendly guidance, and coach technical and non-technical stakeholders alike.
- Influence without authority: facilitation, design reviews, and decision making with architects, engineers, product owners, and business sponsors.
- Comfortable with rapid prototyping and vendor/tool evaluation to demonstrate safe approaches and accelerate adoption.
- Awareness of AI governance and regulation, and the ability to collaborate with governance specialists without owning that domain.
- 8+ years of experience in cybersecurity and/or technical IT
- Master’s degree in information security, computer science, or data/ML, or equivalent practical experience.
- Fluent English; other languages are a plus.