Quality Assurance Engineer

AI71

Employer Active

Posted on 6 Apr

Experience

5 - 7 Years

Education

Bachelor of Science (Computers)

Nationality

Any Nationality

Gender

Not Mentioned

Vacancy

1 Vacancy

Job Description

Roles & Responsibilities

Key Responsibilities

1. AI & LLM Validation (LeverEDGE)
  • Non-Deterministic Testing: Architect automated frameworks to evaluate Generative AI outputs for hallucination, consistency, and factual accuracy against "Gold Standard" datasets.

  • RAG Evaluation: Implement automated metrics (e.g., RAGAS, faithfulness, answer relevance) to verify that Retrieval-Augmented Generation pipelines accurately cite technical and regulatory documentation.

  • Prompt Regression: Design regression suites to monitor "prompt drift," ensuring model updates do not degrade the quality of AI-generated engineering documents.
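
The gold-standard checks described above could be sketched in plain Python along these lines. This is an illustrative sketch only, not the API of any named framework (DeepEval, RAGAS, etc.); the example question and expected facts are hypothetical.

```python
# Hedged sketch: scoring generated answers against a hypothetical
# "gold standard" dataset for factual accuracy and run-to-run consistency.
from dataclasses import dataclass


@dataclass
class GoldExample:
    question: str
    expected_facts: set  # facts the answer must contain (hypothetical data)


def factual_accuracy(answer: str, example: GoldExample) -> float:
    """Fraction of expected facts that appear in the generated answer."""
    text = answer.lower()
    hits = sum(1 for fact in example.expected_facts if fact.lower() in text)
    return hits / len(example.expected_facts)


def is_consistent(answers: list, threshold: float = 0.8) -> bool:
    """Crude consistency check: repeated runs should collapse to few
    distinct normalized outputs."""
    normalized = {a.strip().lower() for a in answers}
    return len(normalized) <= max(1, int(len(answers) * (1 - threshold)) + 1)


# Hypothetical gold-standard entry
example = GoldExample(
    question="Which standard governs the BOM export format?",
    expected_facts={"ISO 10303", "STEP"},
)
score = factual_accuracy("The export follows ISO 10303 (STEP).", example)
```

In a real suite these scores would be parameterized over the whole gold dataset (e.g. with `pytest.mark.parametrize`) and thresholds tracked per model version.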

2. Integration & System Verification
  • Enterprise Integration: Build robust tests to validate data consistency between AI agents and critical systems (e.g., SAP S/4HANA, Ariba), ensuring the integrity of Bill of Materials (BOM) and financial data.

  • Performance Benchmarking: Design tests to validate latency and throughput for forecasting models and risk-scoring engines using tools like Locust, JMeter, or K6.

  • API & Security Validation: Automate testing of secure API gateways, verifying Role-Based Access Control (RBAC) and PII redaction logic before data reaches AI models.
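
A latency benchmark of the kind described above can be prototyped with the standard library before scaling up to Locust/JMeter/K6. The `score_risk` function below is a hypothetical stand-in for a risk-scoring endpoint, so the sketch stays self-contained.

```python
# Minimal p95-latency sketch using only the standard library; a real
# load test would drive the actual service with Locust, JMeter, or K6.
import time


def score_risk(payload: dict) -> float:
    """Hypothetical stub for a risk-scoring engine call."""
    return 0.42  # deterministic so the benchmark is self-contained


def p95_latency_ms(fn, payload: dict, runs: int = 100) -> float:
    """Measure fn() repeatedly and return the 95th-percentile latency."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        fn(payload)
        samples.append((time.perf_counter() - start) * 1000.0)
    samples.sort()
    return samples[int(0.95 * (len(samples) - 1))]


latency = p95_latency_ms(score_risk, {"supplier": "ACME"})
assert latency < 1000.0  # generous SLO for a local stub
```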

3. Governance & Traceability
  • V-Model Alignment: Map automated test cases to "System Requirements" to create digital evidence for formal Verification and Validation (V&V) reports.

  • Stage Gate Compliance: Prepare "Test Readiness" packages for formal reviews, providing quantitative evidence that systems are stable enough to move from MVP to Production.

  • Defect Lifecycle Management: Manage the feedback loop between Requirements Quality Assistants and development teams, tracing AI logic defects back to specific model versions.
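
The V-model traceability mapping above boils down to linking test-case IDs to requirement IDs and reporting coverage gaps. A minimal sketch, with all IDs hypothetical:

```python
# Sketch of test-to-requirement traceability for a V&V coverage report.
# Test names and SYS-REQ IDs are invented for illustration.
TRACE = {
    "test_bom_sync_roundtrip": ["SYS-REQ-101"],
    "test_rbac_denies_viewer_write": ["SYS-REQ-207", "SYS-REQ-208"],
}


def coverage_report(trace: dict, requirements: set) -> dict:
    """Return which requirements have at least one mapped test."""
    covered = {req for reqs in trace.values() for req in reqs}
    return {
        "covered": sorted(covered & requirements),
        "uncovered": sorted(requirements - covered),
    }


report = coverage_report(
    TRACE, {"SYS-REQ-101", "SYS-REQ-207", "SYS-REQ-208", "SYS-REQ-300"}
)
# Any entry in report["uncovered"] is a gap to close before a Test
# Readiness Review.
```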

What You'll Bring

Technical Requirements
  • Core Automation: Expert proficiency in Python (Pytest) and standard libraries (Selenium/Playwright, Requests).

  • AI Evaluation: Hands-on experience with LLM evaluation frameworks (e.g., DeepEval, TruLens) and "Ground Truth" dataset management.

  • Performance Engineering: Proficiency in crafting Performance Test Plans and implementations (Locust, K6, etc.).

  • Data Validation: Expertise in SQL and data quality tools (e.g., Great Expectations) for Data Lakehouses and Vector Databases.

  • CI/CD & DevOps: Strong experience integrating quality gates into GitLab CI/CD pipelines.

  • Engineering Practices: Deep understanding of modern QE practices, including Shift Left, Test Pyramid, and Mono-repo architectures.
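
The data-validation skill set above amounts to automating expectations against tables. A hand-rolled sketch of the kind of check tools like Great Expectations formalize, using an in-memory SQLite table as a stand-in for a lakehouse table (schema and data are hypothetical):

```python
# Hedged sketch: a "no nulls" data-quality expectation checked by hand
# against an in-memory SQLite table standing in for a BOM table.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE bom (part_id TEXT NOT NULL, qty INTEGER)")
conn.executemany(
    "INSERT INTO bom VALUES (?, ?)",
    [("P-1", 4), ("P-2", 1), ("P-3", None)],
)


def expect_no_nulls(conn: sqlite3.Connection, table: str, column: str) -> bool:
    """True if the column contains no NULLs."""
    row = conn.execute(
        f"SELECT COUNT(*) FROM {table} WHERE {column} IS NULL"
    ).fetchone()
    return row[0] == 0


assert expect_no_nulls(conn, "bom", "part_id")  # no null part IDs
assert not expect_no_nulls(conn, "bom", "qty")  # P-3 has a NULL qty
```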

Desired Candidate Profile

AI71 is seeking a Senior QA Automation Engineer to lead the validation and verification strategies for EDGE Group's AI transformation. In this role, you will define "what good looks like" for non-deterministic AI systems, ensuring that Large Language Models (LLMs) and predictive engines meet the strict reliability standards required for the defense and enterprise sectors.

You will act as the bridge between Agile development and formal Systems Engineering. Your mandate is to build automated testing frameworks that validate AI behaviors against "Ground Truth" datasets and ensure our AI agents pass rigorous Test Readiness Reviews (TRR) and Functional Configuration Audits (FCA).

Keywords

  • Quality Assurance Engineer

Disclaimer: Naukrigulf.com is only a platform to bring jobseekers & employers together. Applicants are advised to research the bonafides of the prospective employer independently. We do NOT endorse any requests for money payments and strictly advise against sharing personal or bank-related information. We also recommend you visit Security Advice for more information. If you suspect any fraud or malpractice, email us at abuse@naukrigulf.com