What Are SecAI+ Performance-Based Questions?
The CompTIA SecAI+ (CY0-001) exam does not just test whether you can recognise AI security vocabulary. It tests whether you can actually secure AI systems and use AI to enhance security operations. Performance-based questions (PBQs) are where that practical ability is measured. They place you in a simulated environment and ask you to analyse, configure, or respond to a realistic AI security scenario.
The SecAI+ PBQ guide you are reading breaks down every scenario type you are likely to encounter, maps each one to the exam domains, and gives you a repeatable framework for solving them under time pressure. The CY0-001 exam has a maximum of 60 questions (a mix of multiple-choice and PBQs), lasts just 60 minutes, and requires a scaled score of 600 out of 900 to pass. With roughly one minute per question, you cannot afford to freeze on a PBQ.
SecAI+ Exam Domains: Where PBQs Come From
Before diving into scenario types, understand the four domains and their weights. PBQs draw from all four domains, but Domain 2 (Securing AI Systems) is the heaviest contributor at 40% of the exam.
| Domain | Weight | PBQ Likelihood |
|---|---|---|
| 1.0 Basic AI Concepts Related to Cybersecurity | 17% | Low (mostly MCQ) |
| 2.0 Securing AI Systems | 40% | Very High |
| 3.0 AI-Assisted Security | 24% | High |
| 4.0 AI Governance, Risk, and Compliance | 19% | Moderate |
Exam Tip: Domain 2, Securing AI Systems, accounts for 40% of the SecAI+ exam. Expect most of your PBQs to originate here. If you only have time to practise one domain's scenarios deeply, make it this one.
The 12 SecAI+ PBQ Scenario Types
Based on the exam objectives, beta candidate reports, and the official CompTIA blueprint, these are the 12 scenario types you should prepare for. They are grouped by the domain they primarily test.
Domain 2: Securing AI Systems (Scenarios 1-7)
Scenario Type 1: Prompt Injection Detection and Mitigation
What you will see: A simulated AI chatbot or LLM-powered application is receiving user inputs. Some inputs contain direct or indirect prompt injection attempts. You must identify the malicious prompts and select or configure the appropriate mitigation controls.
Key knowledge required:
- Direct prompt injection (user manipulates the model through its input interface)
- Indirect prompt injection (malicious instructions hidden in external data sources the model retrieves)
- Mitigation controls: prompt firewalls, input validation, prompt templates, and output filtering
Solving framework:
- Read each input carefully and identify any instruction that attempts to override the system prompt
- Distinguish between direct injection (in the user's message) and indirect injection (in retrieved content)
- Apply the correct control: prompt firewalls for input filtering, guardrails for output filtering, and template enforcement for structural protection
Exam Tip: Prompt injection is listed as the number one risk in the OWASP Top 10 for LLM Applications. CompTIA treats it as a top-priority topic. Know the difference between direct and indirect injection cold.
Scenario Type 2: Model Poisoning and Data Integrity Assessment
What you will see: A scenario where an AI model's training data or fine-tuning pipeline has been compromised. You may be given logs, data samples, or model behaviour anomalies and asked to identify the poisoning vector and recommend remediation.
Key knowledge required:
- Data poisoning (manipulating training data to influence model behaviour)
- Model poisoning (corrupting model weights or parameters during training or fine-tuning)
- Backdoor attacks (embedding triggers that activate specific malicious behaviour)
- Remediation: data validation, provenance tracking, anomaly detection in training pipelines
Solving framework:
- Identify the stage of compromise (data collection, preprocessing, training, or fine-tuning)
- Assess the impact (targeted misclassification vs broad performance degradation)
- Select remediation controls from the options provided, prioritising data provenance and pipeline integrity checks
Scenario Type 3: AI Gateway and Access Control Configuration
What you will see: A simulated AI deployment environment where you must configure security controls for an AI system's API gateway or access layer. This could involve setting rate limits, token limits, input quotas, or role-based access controls.
Key knowledge required:
- Gateway controls: prompt firewalls, rate limiting, token limits, input quotas
- Model access controls: who can query, fine-tune, or modify the model
- Data access controls: who can access training data, inference logs, and model outputs
- Network and API access: endpoint security, authentication, and authorisation
Solving framework:
- Read the scenario's security requirements (who needs access, what level, what restrictions)
- Apply the principle of least privilege to model, data, and API access
- Configure gateway controls to prevent abuse (rate limits stop denial-of-service; token limits prevent excessive consumption)
- Verify that your configuration blocks unauthorised access without disrupting legitimate use
Scenario Type 4: AI Data Security Controls Implementation
What you will see: A scenario requiring you to implement data protection measures for an AI system. This typically involves selecting the correct combination of encryption, anonymisation, masking, or redaction controls for data at different stages.
Key knowledge required:
- Encryption requirements: in-transit (TLS), at-rest (AES-256), and in-use (homomorphic or secure enclaves)
- Data anonymisation and pseudonymisation techniques
- Data classification labels and their implications for AI training data
- Data redaction and masking for sensitive fields in datasets
- Data minimisation principles
Solving framework:
- Identify the data stage (collection, storage, processing, inference output)
- Classify the data sensitivity level
- Apply the appropriate control for each stage: encryption for transit and rest, anonymisation or masking for training data containing PII, redaction for inference outputs
| Data Stage | Primary Controls |
|---|---|
| Collection | Input validation, classification labelling |
| Storage (at-rest) | AES-256 encryption, access controls |
| Processing (in-use) | Secure enclaves, data minimisation |
| Transit | TLS encryption, API authentication |
| Training data | Anonymisation, pseudonymisation, masking |
| Inference output | Redaction, output filtering, guardrails |
Scenario Type 5: Adversarial Input Analysis
What you will see: A scenario where an AI system (often a classification model or computer vision system) is receiving adversarial inputs designed to cause misclassification or unexpected behaviour. You must identify the attack type and recommend defences.
Key knowledge required:
- Evasion attacks (modifying inputs at inference time to cause misclassification)
- Membership inference attacks (determining whether specific data was in the training set)
- Model inversion attacks (reconstructing training data from model outputs)
- MITRE ATLAS techniques: AML.T0043 (Craft Adversarial Data), AML.T0024 (Exfiltration via ML Inference API)
Solving framework:
- Determine the attacker's goal (misclassification, data extraction, or model theft)
- Identify the attack vector (input perturbation, API queries, or output analysis)
- Match the attack to the appropriate MITRE ATLAS technique
- Select defences: adversarial training, input preprocessing, output rate limiting, or differential privacy
Scenario Type 6: Guardrail Testing and Validation
What you will see: A scenario where you must evaluate whether an AI system's guardrails are functioning correctly. You may need to test guardrails against specific attack vectors and identify gaps.
Key knowledge required:
- Model guardrails (content filters, topic restrictions, safety classifiers)
- Jailbreaking techniques and how guardrails prevent them
- Red-teaming methodologies for AI systems
- Guardrail validation testing approaches
Solving framework:
- Review the guardrail configuration presented in the scenario
- Identify which attack vectors the guardrails address and which they miss
- Recommend additional controls or configuration changes to close the gaps
- Prioritise fixes based on risk (prompt injection and jailbreaking are higher priority than edge-case hallucinations)
Scenario Type 7: AI System Monitoring and Audit Log Analysis
What you will see: A simulated monitoring dashboard or set of audit logs from an AI system. You must identify anomalous behaviour, potential security incidents, or compliance violations.
Key knowledge required:
- Model drift detection (when a model's behaviour changes unexpectedly over time)
- Anomalous query patterns that may indicate attack reconnaissance
- Audit logging requirements for AI systems
- Key metrics: inference latency, error rates, confidence score distributions
Solving framework:
- Establish the baseline (what does normal look like for this system?)
- Identify deviations: unusual query volumes, confidence score shifts, or unexpected output patterns
- Correlate anomalies with known attack patterns (high query volume may indicate data extraction; confidence score changes may indicate poisoning)
- Recommend the appropriate response: investigate, escalate, or implement additional controls
Domain 3: AI-Assisted Security (Scenarios 8-10)
Scenario Type 8: AI-Powered Threat Detection Triage
What you will see: A scenario where an AI-powered security tool (SIEM, EDR, or SOAR) has flagged potential threats. You must triage the alerts, distinguishing between true positives, false positives, and items requiring further investigation.
Key knowledge required:
- How AI/ML enhances threat detection (anomaly detection, behavioural analysis, pattern recognition)
- Common sources of false positives in AI-driven security tools
- Triage prioritisation based on confidence scores and contextual risk
- Human-in-the-loop workflows for AI-assisted security decisions
Solving framework:
- Review the alert details: confidence score, data source, affected assets, and contextual information
- Assess each alert's risk level using the provided context
- Classify alerts: confirm (high confidence + high risk), investigate further (moderate confidence), or dismiss (low confidence + known benign pattern)
- Document your reasoning for each classification decision
Scenario Type 9: AI-Driven Vulnerability Assessment
What you will see: A scenario presenting AI-generated vulnerability scan results. You must interpret the findings, prioritise remediation, and identify potential false positives or missed vulnerabilities.
Key knowledge required:
- How AI models score and prioritise vulnerabilities differently from traditional scanners
- Common AI vulnerability assessment limitations (bias in training data, novel vulnerability blindness)
- Risk-based vulnerability prioritisation frameworks
- When to trust AI-generated findings vs when to verify manually
Solving framework:
- Review each vulnerability finding with its AI-assigned severity and confidence score
- Cross-reference against asset criticality and exposure
- Identify findings that may be false positives (low confidence, unusual context)
- Prioritise remediation: critical assets with high-confidence findings first
Scenario Type 10: AI-Assisted Incident Response
What you will see: A security incident is in progress, and AI tools are providing analysis, recommendations, or automated responses. You must evaluate the AI's recommendations and decide which actions to approve, modify, or reject.
Key knowledge required:
- AI's role in each phase of incident response (detection, analysis, containment, eradication, recovery)
- Limitations of AI in incident response (adversarial manipulation, novel attack types, context gaps)
- Human oversight requirements for AI-recommended actions
- Automation boundaries: what should be automated vs what requires human approval
Solving framework:
- Review the incident context and the AI's analysis
- Evaluate each AI-recommended action for appropriateness and potential unintended consequences
- Approve low-risk automated actions (log collection, alert enrichment)
- Require human approval for high-impact actions (network isolation, account lockouts)
- Identify where the AI's analysis may be incomplete or incorrect
Domain 4: AI Governance, Risk, and Compliance (Scenarios 11-12)
Scenario Type 11: AI Compliance Framework Assessment
What you will see: A scenario describing an organisation deploying an AI system in a specific regulatory environment. You must identify applicable compliance requirements and assess whether the deployment meets them.
Key knowledge required:
- EU AI Act risk classification (unacceptable, high, limited, minimal risk)
- NIST AI Risk Management Framework (AI RMF) core functions: Govern, Map, Measure, Manage
- ISO/IEC 42001 AI management system standard
- OECD AI Principles
- Data privacy regulations that apply to AI (GDPR, CCPA)
Solving framework:
- Identify the regulatory environment (EU, US, or global)
- Classify the AI system's risk level under the applicable framework
- Map the organisation's current controls against the framework's requirements
- Identify compliance gaps and recommend specific corrective actions
| Framework | Scope | Key Focus |
|---|---|---|
| EU AI Act | European Union | Risk-based classification, prohibited uses, transparency |
| NIST AI RMF | United States | Govern, Map, Measure, Manage lifecycle |
| ISO/IEC 42001 | Global | AI management system certification |
| OECD AI Principles | Global | Transparency, accountability, fairness |
Exam Tip: The SecAI+ exam tests your ability to distinguish between these frameworks in context. Know which framework applies in which regulatory environment. The EU AI Act questions are particularly common based on beta candidate reports.
Scenario Type 12: AI Risk Assessment and Bias Evaluation
What you will see: A scenario where an AI system's outputs show potential bias, fairness issues, or risk indicators. You must assess the risk, identify the source of the issue, and recommend governance actions.
Key knowledge required:
- Types of AI bias: training data bias, algorithmic bias, selection bias, confirmation bias
- Fairness metrics and evaluation approaches
- AI transparency and explainability requirements
- Risk assessment methodologies specific to AI systems
- Organisational governance structures for AI oversight
Solving framework:
- Identify the type of bias or risk indicator in the scenario
- Trace the issue to its source (training data, model design, deployment context, or feedback loop)
- Assess the impact on affected stakeholders
- Recommend governance actions: retraining with balanced data, implementing fairness constraints, establishing human review processes, or pausing deployment
The OWASP and MITRE Frameworks You Must Know
Two frameworks appear repeatedly across SecAI+ PBQ scenarios. Knowing them well gives you a significant advantage.
OWASP Top 10 for LLM Applications (2025)
The OWASP LLM Top 10 identifies the most critical security risks for large language model applications. The top five are:
- LLM01: Prompt Injection - manipulating model behaviour through crafted inputs
- LLM02: Sensitive Information Disclosure - models leaking training data or system prompts
- LLM03: Supply Chain Vulnerabilities - compromised models, plugins, or data sources
- LLM04: Data and Model Poisoning - corrupting training data to influence model behaviour
- LLM05: Improper Output Handling - trusting model outputs without validation
When a PBQ scenario involves an LLM or generative AI system, mentally map the threat to the OWASP LLM Top 10. This helps you quickly identify the attack category and recall the recommended mitigations.
MITRE ATLAS
MITRE ATLAS (Adversarial Threat Landscape for Artificial-Intelligence Systems) is a knowledge base of adversary tactics and techniques targeting AI systems. As of version 5.4.0 (February 2026), it contains 16 tactics, 84 techniques, 56 sub-techniques, 32 mitigations, and 42 real-world case studies.
Key ATLAS techniques for SecAI+ scenarios:
- AML.T0043 - Craft Adversarial Data: creating inputs designed to cause misclassification
- AML.T0051 - LLM Prompt Injection: manipulating LLM behaviour through crafted prompts
- AML.T0024 - Exfiltration via ML Inference API: extracting training data through model queries
- AML.T0020 - Poison Training Data: manipulating training or fine-tuning data sources
Exam Tip: You do not need to memorise every ATLAS technique ID. Focus on understanding the four main attack categories documented by NIST: evasion, poisoning, privacy, and abuse attacks. If you can classify an attack into one of these four categories, you can reason through the correct mitigation.
Time Management Strategy for SecAI+ PBQs
With 60 questions in 60 minutes, you have exactly one minute per question on average. PBQs take longer than multiple-choice questions, so you need a deliberate time strategy.
The Three-Pass Approach
Pass 1 (30 minutes): Quick wins first. Scan through all 60 questions. Answer every multiple-choice question you are confident about. Flag any question (MCQ or PBQ) that requires more than 60 seconds of thought. Do not spend more than 45 seconds on any question during this pass.
Pass 2 (20 minutes): PBQs and flagged questions. Return to flagged PBQs with a clearer head. Use the solving frameworks above. If a PBQ has multiple parts, complete the parts you are confident about and make your best attempt on the rest. Partial answers are better than blank answers.
Pass 3 (10 minutes): Review and guess. Review any remaining flagged questions. Eliminate obviously wrong answers and select your best option. There is no negative marking, so never leave a question blank.
Time Allocation by Scenario Type
| Scenario Type | Target Time | Complexity |
|---|---|---|
| Prompt injection detection | 2-3 minutes | Medium |
| Model poisoning assessment | 2-3 minutes | Medium-High |
| Gateway/access control config | 3-4 minutes | High |
| Data security controls | 2-3 minutes | Medium |
| Adversarial input analysis | 2-3 minutes | Medium |
| Guardrail testing | 2-3 minutes | Medium |
| Monitoring/log analysis | 3-4 minutes | High |
| Threat detection triage | 2-3 minutes | Medium |
| Vulnerability assessment | 2-3 minutes | Medium |
| Incident response | 3-4 minutes | High |
| Compliance framework | 2-3 minutes | Medium |
| Risk/bias evaluation | 2-3 minutes | Medium |
How to Practise for SecAI+ PBQs
Reading about scenario types is not the same as practising them. Here is how to build real PBQ readiness:
1. Map Every Scenario to the Exam Objectives
Download the official CompTIA CY0-001 exam objectives (free PDF from CompTIA's website). For each of the 12 scenario types above, identify which specific objectives it tests. This ensures your preparation covers the full exam blueprint.
2. Learn the Frameworks First
Before attempting scenarios, build solid knowledge of OWASP LLM Top 10, MITRE ATLAS, NIST AI RMF, and the EU AI Act risk classification system. These frameworks provide the vocabulary and mental models you need to solve PBQs quickly.
3. Practise Under Timed Conditions
Set a timer when practising. If you cannot solve a scenario in 3-4 minutes, you need to study the underlying concepts more deeply. Speed comes from pattern recognition, and pattern recognition comes from repeated practice.
4. Use Realistic Practice Scenarios
Generic AI security questions are not enough. You need practice scenarios that match the CY0-001 exam format, difficulty level, and domain weighting. CertCrush's SecAI+ practice exams include both multiple-choice questions and performance-based scenarios modelled on the 12 types covered in this guide.
5. Review With the Solving Frameworks
After each practice session, review your answers using the solving frameworks above. Did you follow the steps? Where did you get stuck? What knowledge gap caused the error? This structured review is more valuable than simply reading the correct answer.
Ready to Start Practising?
The SecAI+ exam is unlike any cybersecurity certification that came before it. It tests a new discipline, AI security, with scenario-based questions that require genuine applied knowledge. The candidates who pass on their first attempt are the ones who practise with realistic PBQ scenarios until the solving frameworks become second nature.
CertCrush offers SecAI+ CY0-001 practice exams built to match the format, domain weighting, and difficulty of the real exam. Every question includes detailed explanations that walk you through the reasoning, not just the correct answer.
Create your free account and start building your SecAI+ PBQ confidence today.