AI Model Validation
- Design and implement comprehensive testing strategies to ensure the robustness, accuracy, and reliability of AI/ML models in production.
- Develop automated test scenarios to evaluate model performance, detect data drift, and identify algorithmic bias (a drift-check sketch follows this list).
- Collaborate with Data Science and MLOps teams to integrate testing into CI/CD pipelines.
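For illustration only, an automated drift check of the kind described above could be written as a PyTest test using a two-sample Kolmogorov-Smirnov test; the `load_reference_sample` and `load_live_sample` helpers are hypothetical placeholders standing in for real data access, not part of any specific library.

```python
# Hypothetical sketch of an automated feature-drift check using a
# two-sample Kolmogorov-Smirnov test. The load_* helpers are
# placeholders; a real suite would pull training and production data.
import numpy as np
from scipy import stats


def load_reference_sample() -> np.ndarray:
    """Placeholder: feature values the model was trained on."""
    rng = np.random.default_rng(seed=0)
    return rng.normal(loc=0.0, scale=1.0, size=5_000)


def load_live_sample() -> np.ndarray:
    """Placeholder: recent feature values observed in production."""
    rng = np.random.default_rng(seed=1)
    return rng.normal(loc=0.05, scale=1.0, size=5_000)


def test_no_feature_drift():
    reference = load_reference_sample()
    live = load_live_sample()
    # Fail the pipeline if the live distribution differs significantly
    # from the training distribution.
    statistic, p_value = stats.ks_2samp(reference, live)
    assert p_value > 0.01, (
        f"Possible drift detected (KS={statistic:.3f}, p={p_value:.4f})"
    )
```

Wired into a CI/CD pipeline, a test like this turns drift detection into a gate rather than a manual review step.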
Data Quality & Integration Testing
- Validate the consistency, integrity, and traceability of datasets used for training and inference (an example follows this list).
- Support the verification of data pipelines and the integration of models into cloud environments (Azure, AWS, GCP).
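As a sketch of what such dataset validation might look like in practice, the checks below express schema, completeness, and integrity rules as plain PyTest assertions over a pandas DataFrame; the column names, bounds, and fixture data are assumptions for illustration, not a real pipeline. Tools such as Great Expectations or DeepChecks package the same idea as declarative expectation suites.

```python
# Illustrative dataset-integrity checks with pandas and PyTest.
# Column names and the inline fixture data are assumptions; a real
# test would read the pipeline's actual output (e.g., from cloud storage).
import pandas as pd
import pytest


@pytest.fixture
def training_data() -> pd.DataFrame:
    return pd.DataFrame(
        {
            "customer_id": [1, 2, 3],
            "age": [34, 51, 27],
            "label": [0, 1, 0],
        }
    )


def test_schema_and_integrity(training_data):
    # The expected schema must be present.
    expected_columns = {"customer_id", "age", "label"}
    assert expected_columns.issubset(training_data.columns)
    # No duplicate records should reach the training set.
    assert not training_data.duplicated(subset="customer_id").any()
    # Critical fields must be complete and within plausible bounds.
    assert training_data["label"].isin([0, 1]).all()
    assert training_data["age"].between(0, 120).all()
```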
Automation & Monitoring
- Build automated testing tools for ML/AI models, including regression, performance, and robustness tests.
- Implement monitoring systems to detect anomalies and ensure model stability post-deployment (sketched below).
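One possible shape for such post-deployment monitoring is a rolling baseline that flags prediction batches whose mean score deviates sharply from recent history; the window size, z-score threshold, and alert behavior below are illustrative assumptions, not a prescribed design.

```python
# Minimal post-deployment monitoring sketch: flag prediction batches
# whose mean score is a z-score outlier relative to a rolling baseline.
# Window size and threshold are illustrative assumptions.
from collections import deque
from statistics import mean, stdev


class PredictionMonitor:
    def __init__(self, window_size: int = 100, z_threshold: float = 3.0):
        self.baseline = deque(maxlen=window_size)
        self.z_threshold = z_threshold

    def observe(self, batch_mean: float) -> bool:
        """Return True if the batch looks anomalous against the baseline."""
        anomalous = False
        if len(self.baseline) >= 10:  # wait for a minimal baseline
            mu, sigma = mean(self.baseline), stdev(self.baseline)
            if sigma > 0 and abs(batch_mean - mu) / sigma > self.z_threshold:
                anomalous = True  # in practice: raise an alert / page on-call
        self.baseline.append(batch_mean)
        return anomalous


monitor = PredictionMonitor()
for score in [0.52, 0.49, 0.51, 0.50, 0.48, 0.50, 0.51, 0.49, 0.50, 0.52, 0.91]:
    if monitor.observe(score):
        print(f"Anomaly: batch mean {score} deviates from recent baseline")
```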
Collaboration & Documentation
- Work closely with ML engineers, developers, and project managers to ensure the quality of AI deliverables.
- Document test plans, results, and continuous improvement recommendations.
Requirements
Technical Skills:
- Strong proficiency in Python and testing frameworks (e.g., PyTest, unittest).
- Familiarity with AI testing tools (e.g., DeepChecks, Great Expectations, MLflow).
- Experience with cloud platforms (Azure, AWS, GCP), version control (Git), and CI/CD tools (Jenkins, GitLab CI).
- Solid understanding of ML/AI models and their potential vulnerabilities.
Experience:
- 6+ years in software or test engineering, with at least 3 years focused on AI/ML systems.
- Proven experience validating production-grade models and implementing automated test suites.