
AI Tech Lead
Posted 1 day ago
• Direct the technical design, architecture, and execution of AI solutions, emphasizing AI agents, agentic workflows, automation, and AI-enhanced business processes.
• Manage the complete engineering lifecycle of AI products: from discovery and prototyping to evaluation, production implementation, rollout, monitoring, and ongoing improvement.
• Lead and oversee an AI engineering team, focusing on technical guidance, task segmentation, mentoring, code reviews, delivery planning, and maintaining engineering quality.
• Architect and execute solutions utilizing AWS AI/ML services, including Amazon Bedrock, Amazon Bedrock AgentCore, Amazon SageMaker, and additional AWS offerings for model hosting, orchestration, data processing, monitoring, and security.
• Develop and integrate AI applications using technologies such as Python (FastAPI/Flask/Django) or equivalent, along with pertinent AI/ML frameworks.
• Create agentic systems capable of interacting with APIs, internal platforms, business workflows, knowledge bases, and external tools in a secure, observable, and controlled manner.
• Establish and implement best practices for LLM application development, covering prompt engineering, RAG, tool utilization, function calling, memory management, evaluation, guardrails, and hallucination mitigation.
• Propel advancements in internal engineering practices surrounding AI-assisted development, engineering efficiency, AI effectiveness, automation, and the responsible application of AI tools throughout software delivery.
• Collaborate with stakeholders to pinpoint high-value AI use cases, evaluate feasibility, define success metrics, and prioritize implementation.
• Set engineering standards for AI systems that encompass code quality, testing, observability, reliability, security, scalability, and maintainability.
• Champion MLOps and LLMOps methodologies, including model lifecycle management, deployment pipelines, monitoring, evaluation, drift detection, and rollback strategies.
• Partner with DevOps, cloud, security, and platform teams to ensure AI systems are prepared for production, compliant, cost-effective, and operationally stable.
• Facilitate the rollout and adoption of AI solutions throughout the organization, including documentation, training, stakeholder communication, and production support.
• Assess emerging AI technologies, frameworks, models, and vendors, providing practical recommendations for adoption.
• Ensure that AI solutions adhere to responsible AI principles, including data privacy, access control, auditability, fairness, explainability where applicable, and the secure handling of sensitive data.
• At least 10 years of professional experience in software engineering, data engineering, machine learning engineering, AI engineering, or related fields.
• A minimum of 3 years in a leadership or managerial role within engineering teams, including technical leadership, mentoring, planning, and delivery ownership.
• Extensive hands-on experience in developing production-quality AI, ML, and data-driven systems.
• Practical knowledge of AI agents, agentic workflows, LLM-based applications, workflow automation, tool-calling architectures, and AI orchestration patterns.
• Strong understanding of AWS, with hands-on experience in cloud-native architectures, Amazon Bedrock, Amazon Bedrock AgentCore, Amazon SageMaker, and associated AWS AI/ML services (the more, the better).
• Proficiency in developing and integrating AI applications using Python (FastAPI/Flask/Django) and relevant AI/ML frameworks.
• Experience with advanced LLM frameworks such as LangChain, LlamaIndex, Semantic Kernel, CrewAI, AutoGen, or similar agent/orchestration frameworks.
• Background in building RAG systems, including document ingestion, chunking strategies, embeddings, retrieval evaluation, reranking, and grounding techniques.
• Comprehensive understanding of machine learning concepts, including supervised/unsupervised learning, model training, feature engineering, evaluation, inference, and model performance metrics.
• Familiarity with MLOps / LLMOps practices, including CI/CD for ML and AI applications, model deployment, experiment tracking, model/prompt/version management, monitoring, evaluation pipelines, and production rollback strategies.
• Experience with vector databases and retrieval/search technologies, such as Amazon OpenSearch, Pinecone, pgvector, or similar.
• Proficiency in model fine-tuning, embedding models, transformer architectures, open-source LLMs, and model benchmarking.
• Experience designing APIs, microservices, event-driven systems, and cloud-native backend architectures.
• Strong understanding of security and governance requirements for AI systems, including access control, secrets management, data privacy, audit logging, and the safe handling of sensitive data.
• Proven ability to collaborate with cross-functional teams, including product managers, architects, engineers, data scientists, security teams, and business stakeholders.
• Ability to take systems from prototype to production without producing mere “AI demo theater” — the system must function effectively, scale, and withstand real user interactions.
• Excellent communication abilities, enabling the explanation of complex AI and engineering subjects to both technical and non-technical audiences.
• A strong ownership mindset, pragmatic decision-making skills, and the ability to balance innovation with delivery discipline.
• Fast-growing payment company;
• Excellent working conditions, casual atmosphere, and state-of-the-art hardware;
• Modern, challenging, and constantly evolving business;
• Opportunities for professional development – books, training, certifications, etc.;
• Team-building events and enjoyable activities;
• 25 days of paid holiday, with an additional day for every 2 years of service;
• Fully distributed and remote work environment.