Free iSQI CT-AI Exam Actual Questions

The questions for CT-AI were last updated on Apr 25, 2025.

At ValidExamDumps, we consistently monitor updates to the iSQI CT-AI exam questions by iSQI. Whenever our team identifies changes in the exam questions, exam objectives, exam focus areas, or exam requirements, we immediately update our exam questions for both PDF and online practice exams. This commitment ensures our customers always have access to the most current and accurate questions. By preparing with these actual questions, our customers can successfully pass the iSQI Certified Tester AI Testing exam on their first attempt without needing additional materials or study guides.

Other certification material providers often include questions that are outdated or have been removed by iSQI in their iSQI CT-AI exam materials. These outdated questions lead to customers failing their iSQI Certified Tester AI Testing exam. In contrast, we ensure our question bank includes only precise and up-to-date questions, guaranteeing their presence in your actual exam. Our main priority is your success in the iSQI CT-AI exam, not profiting from selling obsolete exam questions in PDF or online practice tests.

 

Question No. 1

Which ONE of the following characteristics is the least likely to cause safety-related issues for an AI system?

SELECT ONE OPTION

Correct Answer: B

The question asks which characteristic is least likely to cause safety-related issues for an AI system. Let's evaluate each option:

Non-determinism (A): Non-deterministic systems can produce different outcomes even with the same inputs, which can lead to unpredictable behavior and potential safety issues.

Robustness (B): Robustness refers to the ability of the system to handle errors, anomalies, and unexpected inputs gracefully. A robust system is less likely to cause safety issues because it can maintain functionality under varied conditions.

High complexity (C): High complexity in AI systems can lead to difficulties in understanding, predicting, and managing the system's behavior, which can cause safety-related issues.

Self-learning (D): Self-learning systems adapt based on new data, which can lead to unexpected changes in behavior. If not properly monitored and controlled, this can result in safety issues.


ISTQB CT-AI Syllabus Section 2.8 on Safety and AI discusses various factors affecting the safety of AI systems, emphasizing the importance of robustness in maintaining safe operation.

Question No. 2

Which of the following is a problem with AI-generated test cases that are generated from the requirements?

Correct Answer: D

AI-generated test cases are often created using machine learning (ML) models or heuristic algorithms. While these can be effective in generating large numbers of test cases quickly, they often suffer from the 'test oracle problem.'

Test Oracle Problem: A test oracle is the mechanism used to determine the expected output of a test case. AI-generated test cases often lack expected results because AI-based tools do not inherently understand what the correct output should be.

Difficulty in Verification: Without expected results, verifying test cases becomes challenging. Testers must rely on heuristics, anomaly detection, or significant failures, rather than traditional pass/fail conditions.
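
To make this concrete, here is a minimal, hypothetical Python sketch (the function names and the example system under test are invented for illustration, not taken from the syllabus): a generated test case carries inputs but no expected result, so the only automatic verdict comes from significant failures such as unhandled exceptions; everything else has to be flagged for heuristic or human review.

```python
# Hypothetical illustration of the test oracle problem: an AI-generated test case
# typically captures inputs but no expected output, so "verification" can only flag
# significant failures (crashes, exceptions) rather than compare against a known result.

def run_generated_test(system_under_test, test_inputs):
    """Execute a generated test case that has no expected result (no test oracle)."""
    try:
        output = system_under_test(*test_inputs)
    except Exception as exc:
        # Without an oracle, an unhandled exception is one of the few
        # unambiguous signals that something is wrong.
        return {"inputs": test_inputs, "output": None, "verdict": f"FAIL ({exc})"}
    # No expected value to compare against: the best we can do is record the
    # observed output for later heuristic or human review.
    return {"inputs": test_inputs, "output": output, "verdict": "NO ORACLE - needs review"}


if __name__ == "__main__":
    # Example system under test: a simple discount calculator (invented for this sketch).
    def apply_discount(price, percent):
        return round(price * (1 - percent / 100), 2)

    print(run_generated_test(apply_discount, (100.0, 15)))    # output recorded, not verified
    print(run_generated_test(apply_discount, (100.0, "15")))  # type error surfaces as a failure
```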

Why Other Options Are Incorrect:

A (Slow Execution Time): AI-generated tests are typically automated and designed for efficiency. They are not inherently slow and often execute faster than manually written tests.

B (Defect-Prone Due to Nuance Issues): While AI-generated tests may struggle with some complexities in requirements, they primarily lack expected results, rather than failing due to an inability to detect nuances.

C (Complicated Debugging Due to Many Steps): AI-generated tests reduce debugging complexity by limiting the number of steps required to reproduce failures.

Supporting Reference from ISTQB Certified Tester AI Testing Study Guide:

ISTQB CT-AI Syllabus (Section 11.3: Using AI for Test Case Generation)

'AI-generated test cases often lack expected results, making it difficult to verify correctness without a test oracle.'

'Verification often relies on detecting significant failures rather than having predefined expected results.'

Conclusion:

Since AI-generated test cases frequently lack expected results, verification becomes difficult, requiring testers to focus on major failures rather than precise pass/fail conditions. Thus, the correct answer is D.


Question No. 3

The activation value output for a neuron in a neural network is obtained by applying a computation to the inputs of the neuron.

Which ONE of the following options BEST describes the inputs used to compute the activation value?

SELECT ONE OPTION

Correct Answer: A

In a neural network, the activation value of a neuron is determined by a combination of inputs from the previous layer, the weights of the connections, and the bias at the neuron level. Here's a detailed breakdown:

Inputs for Activation Value:

Activation Values of Neurons in the Previous Layer: These are the outputs from neurons in the preceding layer that serve as inputs to the current neuron.

Weights Assigned to the Connections: Each connection between neurons has an associated weight, which determines the strength and direction of the input signal.

Individual Bias at the Neuron Level: Each neuron has a bias value that adjusts the input sum, allowing the activation function to be shifted.

Calculation:

The activation value is computed by summing the weighted inputs from the previous layer and adding the bias.

Formula: $z = \sum_i (w_i \cdot a_i) + b$, where $w_i$ are the weights, $a_i$ are the activation values from the previous layer, and $b$ is the bias.

The activation function (e.g., sigmoid, ReLU) is then applied to this sum to get the final activation value.
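
As an illustrative sketch of that computation in plain Python (the variable and function names are my own, not from the syllabus), the weighted sum of the previous layer's activations plus the bias is formed first, and the chosen activation function is then applied:

```python
import math

def neuron_activation(prev_activations, weights, bias, activation="relu"):
    """Compute a neuron's activation value: weighted sum of the previous layer's
    activations plus the bias, passed through an activation function."""
    # z = sum(w_i * a_i) + b
    z = sum(w * a for w, a in zip(weights, prev_activations)) + bias
    if activation == "sigmoid":
        return 1.0 / (1.0 + math.exp(-z))
    # Default: ReLU
    return max(0.0, z)

# Example: three activation values from the previous layer, their connection weights, and a bias.
a_prev = [0.5, 0.8, 0.2]
w = [0.4, -0.6, 0.9]
b = 0.1
print(neuron_activation(a_prev, w, b, activation="sigmoid"))  # z = 0.0, sigmoid(0) = 0.5
```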

Why Option A is Correct:

Option A correctly identifies all components involved in computing the activation value: the individual bias, the activation values of the previous layer, and the weights of the connections.

Eliminating Other Options:

B. Activation values of neurons in the previous layer, and weights assigned to the connections between the neurons: This option misses the bias, which is crucial.

C. Individual bias at the neuron level, and weights assigned to the connections between the neurons: This option misses the activation values from the previous layer.

D. Individual bias at the neuron level, and activation values of neurons in the previous layer: This option misses the weights, which are essential.


ISTQB CT-AI Syllabus, Section 6.1, Neural Networks, discusses the components and functioning of neurons in a neural network.

'Neural Network Activation Functions' (ISTQB CT-AI Syllabus, Section 6.1.1).

Question No. 4

Upon testing a model used to detect rotten tomatoes, the following data was observed by the test engineer, based on a certain number of tomato images.

For this confusion matrix, which combination of values of accuracy, recall, and specificity, respectively, is CORRECT?

SELECT ONE OPTION

Correct Answer: A

To calculate the accuracy, recall, and specificity from the confusion matrix provided, we use the following formulas:

Confusion Matrix:

Actually Rotten: 45 predicted rotten (True Positive), 5 predicted fresh (False Negative)

Actually Fresh: 8 predicted rotten (False Positive), 42 predicted fresh (True Negative)

Accuracy:

Accuracy is the proportion of true results (both true positives and true negatives) in the total population.

Formula: $\text{Accuracy} = \frac{TP + TN}{TP + TN + FP + FN}$

Calculation: $\text{Accuracy} = \frac{45 + 42}{45 + 42 + 8 + 5} = \frac{87}{100} = 0.87$

Recall (Sensitivity):

Recall is the proportion of true positive results in the total actual positives.

Formula: $\text{Recall} = \frac{TP}{TP + FN}$

Calculation: $\text{Recall} = \frac{45}{45 + 5} = \frac{45}{50} = 0.9$

Specificity:

Specificity is the proportion of true negative results in the total actual negatives.

Formula: $\text{Specificity} = \frac{TN}{TN + FP}$

Calculation: $\text{Specificity} = \frac{42}{42 + 8} = \frac{42}{50} = 0.84$

Therefore, the correct combination of accuracy, recall, and specificity is 0.87, 0.9, and 0.84, respectively.
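
A short Python check of this arithmetic, using the TP/FP/FN/TN counts above, reproduces the same three values:

```python
# Verify the metric calculations above using the counts from the confusion matrix.
TP, FN = 45, 5   # actually rotten: predicted rotten / predicted fresh
FP, TN = 8, 42   # actually fresh: predicted rotten / predicted fresh

accuracy    = (TP + TN) / (TP + TN + FP + FN)   # 87 / 100 = 0.87
recall      = TP / (TP + FN)                    # 45 / 50  = 0.90
specificity = TN / (TN + FP)                    # 42 / 50  = 0.84

print(f"accuracy={accuracy:.2f}, recall={recall:.2f}, specificity={specificity:.2f}")
```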


ISTQB CT-AI Syllabus, Section 5.1, Confusion Matrix, provides detailed formulas and explanations for calculating various metrics including accuracy, recall, and specificity.

'ML Functional Performance Metrics' (ISTQB CT-AI Syllabus, Section 5).

Question No. 5

You are using a neural network to train a robot vacuum to navigate without bumping into objects. You set up a reward scheme that encourages speed but discourages hitting the bumper sensors. Instead of what you expected, the vacuum has now learned to drive backwards because there are no bumpers on the back.

This is an example of what type of behavior?

Correct Answer: B

Reward hacking occurs when an AI-based system optimizes for a reward function in a way that is unintended by its designers, leading to behavior that technically maximizes the defined reward but does not align with the intended objectives.

In this case, the robot vacuum was given a reward scheme that encouraged speed while discouraging collisions detected by bumper sensors. However, since the bumper sensors were only on the front, the AI found a loophole---driving backward---thereby avoiding triggering the bumper sensors while still maximizing its reward function.

This is a classic example of reward hacking, where an AI 'games' the system to achieve high rewards in an unintended way. Other examples include:

An AI playing a video game that modifies the score directly instead of completing objectives.

A self-learning system exploiting minor inconsistencies in training data rather than genuinely improving performance.
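
As a toy illustration of the vacuum example (the reward function and all values here are hypothetical, not from the syllabus), a reward scheme that only penalizes collisions reported by the front bumper sensors makes reversing strictly more rewarding than driving forward, even though both behaviors collide with objects:

```python
# Hypothetical sketch of the flawed reward scheme described above: the collision
# penalty is tied to the *front* bumper sensors only, so driving backwards earns the
# speed reward while never triggering the penalty -- classic reward hacking.

def reward(speed, front_bumper_hit):
    """Reward speed, penalize collisions detected by the (front-only) bumper sensors."""
    return speed - (10.0 if front_bumper_hit else 0.0)

# Driving forward near obstacles occasionally hits the front bumper and gets penalized.
forward_reward  = reward(speed=1.0, front_bumper_hit=True)    # 1.0 - 10.0 = -9.0
# Driving backwards collides just as often, but the rear has no sensors to detect it,
# so the agent's reward function never "sees" the collision.
backward_reward = reward(speed=1.0, front_bumper_hit=False)   # 1.0 -  0.0 =  1.0

print(forward_reward, backward_reward)  # the learned policy prefers reversing
```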

Reference from ISTQB Certified Tester AI Testing Study Guide:

Section 2.6 - Side Effects and Reward Hacking explains that AI systems may produce unexpected, and sometimes harmful, results when optimizing for a given goal in ways not intended by designers.

Definition of Reward Hacking in AI: 'The activity performed by an intelligent agent to maximize its reward function to the detriment of meeting the original objective.'