
AI Engineer – R&D
Posted 19 hours ago

• Design and implement fine-tuning strategies for structured generation and editing workflows.
• Create supervised datasets derived from successful generations, retries, failures, and user modifications.
• Establish measurable benchmarks to assess generation quality, correctness, and the preservation of edits.
• Conduct experiments with open-source models like Llama, Qwen, Mistral, DeepSeek, or similar architectures.
• Apply techniques such as LoRA, QLoRA, supervised fine-tuning (SFT), distillation, preference tuning, or synthetic data generation as appropriate.
• Build automated pipelines for collecting, cleaning, evaluating, and incorporating production data into training datasets.
• Use validation systems, intermediate representations, runtime analysis, and rendered outputs as structured feedback signals for models.
• Improve retry, repair, and self-correction loops within generation pipelines.
• Work collaboratively with engineering and product teams to boost model reliability and output quality.
• Proven experience in building with LLMs or structured generation systems in production or applied research environments.
• Practical experience in fine-tuning or adapting open-source language models.
• Proficient Python programming skills.
• Background in developing evaluation systems, ML experimentation workflows, or data pipelines.
• Solid understanding of prompt engineering, structured outputs, tool utilization, and model failure analysis.
• Ability to define measurable evaluation criteria rather than relying solely on subjective assessment.
• Comfort debugging systems that span model outputs, validation layers, runtime behavior, and rendered results.
• Excellent communication and teamwork abilities.
• Experience with code generation, DSL generation, or compiler-aware AI systems.
• Familiarity with LoRA, QLoRA, SFT, preference tuning, distillation, or synthetic data generation techniques.
• Knowledge of animation systems, graphics pipelines, design tools, SVG, WebGL, shaders, or procedural graphics.
• Experience with multimodal or visual-language-model evaluation workflows.
• Familiarity with observability or ML evaluation tools like Weights & Biases, Langfuse, MLflow, or OpenTelemetry.
• Experience in building agentic systems, orchestration pipelines, or multi-step generation workflows.
• Understanding of ASTs, intermediate representations (IRs), or structured program representations.
• Opportunity to work on innovative projects in a cutting-edge field.
• Collaborative and inclusive work environment.
• Professional development and learning opportunities.
• Competitive salary and comprehensive benefits package.
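To make the retry/repair responsibility above concrete, here is a minimal Python sketch of a validation-driven generation loop: a generator is called, its structured (JSON) output is validated, and any validation errors are fed back as structured feedback for the retry. Every name here (`validate`, `generate_with_repair`, the required `title` field, `max_retries`) is a hypothetical illustration, not the team's actual pipeline.

```python
import json

def validate(output: str) -> list[str]:
    """Return a list of validation errors for a structured (JSON) generation."""
    try:
        obj = json.loads(output)
    except json.JSONDecodeError as e:
        return [f"invalid JSON: {e.msg}"]
    errors = []
    if "title" not in obj:
        errors.append("missing required field 'title'")
    return errors

def generate_with_repair(generate, prompt: str, max_retries: int = 2):
    """Call `generate`, feeding validation errors back as repair feedback.

    Returns (output, attempts_used) on the first output that validates.
    """
    feedback = None
    for attempt in range(max_retries + 1):
        output = generate(prompt, feedback)
        errors = validate(output)
        if not errors:
            return output, attempt
        feedback = "; ".join(errors)  # structured feedback for the next attempt
    raise ValueError(f"generation failed after {max_retries} retries: {feedback}")
```

In a real system the `generate` callable would wrap a fine-tuned model, and each (failed output, feedback, repaired output) triple could be logged to build the supervised datasets described in the posting.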