Consider a TAS implemented to perform automated testing on native mobile apps at the UI level, where the TAF implements a client-server architecture. The client runs on-premise and allows creation of automated test scripts using TAF libraries to recognize and interact with the app’s UI objects. The server runs in the cloud as part of a PaaS service, receiving commands from the client, translating them into actions for the mobile device, and sending the results to the client. The cloud platform hosts several mobile devices dedicated for use by this TAS. The device on which to run test scripts/test suites is specified at run time. You are currently verifying whether the test automation environment and all other TAS/TAF components work correctly. Which of the following activities would you perform to achieve your goal?
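A minimal sketch of the scenario, assuming an Appium-style client library and illustrative capability names and URLs: the on-premise client opens a session on a cloud-hosted device chosen at run time, and successfully starting and closing such a session is a typical way to check that the environment and the TAS/TAF components work together.

```python
# Illustrative only: server URL, capability names and app identifier are assumptions.
import os
from appium import webdriver
from appium.options.android import UiAutomator2Options

server_url = os.environ.get("TAS_SERVER_URL", "https://paas.example.com/wd/hub")

options = UiAutomator2Options()
options.set_capability("appium:deviceName", os.environ["TARGET_DEVICE"])  # device chosen at run time
options.set_capability("appium:app", "cloud:com.example.app")             # app already uploaded to the cloud

# The client sends commands to the cloud server, which drives the selected device.
driver = webdriver.Remote(server_url, options=options)
try:
    print("Session started on:", driver.capabilities.get("deviceName"))
finally:
    driver.quit()
```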
Which of the following recommendations can help improve the maintainability of test automation code?
As a TA-E, you have successfully verified that a test automation environment and all other components of the TAS are working as expected. Now your goal is to verify the correct behavior of a given automated test suite that will be run by the TAS. Which of the following should NOT be part of the verifications aimed at achieving your goal?
You have been tasked with adding the execution of build verification tests to the current CI/CD pipeline used in an Agile project. The goal of these tests is to verify the stability of daily builds and to ensure that the most recent changes have not broken core functionality. Currently, the first activity performed as part of this pipeline is static source code analysis. To which of the following pipeline stages would you add the execution of these build verification (smoke) tests?
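A hypothetical sketch of the stage ordering implied by the question, written as a small Python stage runner rather than a real pipeline definition: static analysis runs first, the build follows, and the build verification (smoke) tests run against the freshly built increment before any longer regression stage. The commands are placeholders, not the project's actual tooling.

```python
# Illustrative stage runner; commands and stage names are assumptions.
import subprocess

STAGES = [
    ("static_analysis", ["flake8", "src/"]),
    ("build",           ["python", "-m", "build"]),
    ("smoke_tests",     ["pytest", "-m", "smoke", "--maxfail=1"]),   # build verification tests
    ("full_regression", ["pytest", "-m", "regression"]),
]

for name, cmd in STAGES:
    print(f"--- running stage: {name} ---")
    subprocess.run(cmd, check=True)  # stop the pipeline on the first failing stage
```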
An automated test script makes a well-formed request to a REST API in the backend of a web app to add a single item of a product (with ID = 710) to the cart and expects a response confirming that the product has been added successfully. The status line of the API response is HTTP/1.1 200 OK, while the response body indicates that the product is out of stock. The API response is correct, the test script fails but runs to completion, and the message to log is: "The product with ID = 710 is out of stock. Cart not updated." When this occurs, you already know that both the failing test and the API are behaving correctly and that the problem lies in the test data. The TAS supports the following test logging levels: FATAL, ERROR, WARN, INFO, DEBUG. Which of the following is the MOST appropriate test logging level to use to log the specified message?
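As a sketch only (the endpoint URL, payload and response field are assumptions), this is how the script might separate the status-line check from the body check and, if WARN were judged the appropriate level for a test-data problem, emit the specified message:

```python
import logging
import requests

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("cart_tests")

# Hypothetical endpoint and payload for the "add product 710 to cart" request.
response = requests.post("https://shop.example.com/api/cart/items",
                         json={"productId": 710, "quantity": 1}, timeout=10)

assert response.status_code == 200            # the status line HTTP/1.1 200 OK is correct
body = response.json()
if body.get("status") == "OUT_OF_STOCK":      # assumed response field for illustration
    log.warning("The product with ID = 710 is out of stock. Cart not updated.")
```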
Consider choosing an approach for the automated implementation of manual regression test suites written at the UI level for several existing web apps. The TAS is based on a programming language that allows the creation of test libraries and provides a capture/playback feature that can recognize and interact with all widgets in the web UIs being tested. The automated tests will be implemented by team members with strong programming skills. The chosen approach should aim to reduce both the effort required to maintain the automated tests and the effort required to add new ones. Which of the following approaches would you choose?
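For illustration, a minimal sketch of a structured, library-based style: UI interactions are wrapped once in a reusable test library, and each automated regression test becomes a short sequence of calls to that library. Function names, locators and the Selenium-like driver object are assumptions, not part of the question.

```python
# --- test library (maintained in one place) --------------------------------
def open_login_page(driver, base_url):
    driver.get(f"{base_url}/login")

def log_in(driver, user, password):
    driver.find_element("id", "username").send_keys(user)
    driver.find_element("id", "password").send_keys(password)
    driver.find_element("id", "submit").click()

def assert_logged_in(driver):
    assert driver.find_element("id", "welcome-banner").is_displayed()

# --- automated test built only from library functions ----------------------
def test_valid_login(driver, base_url):
    open_login_page(driver, base_url)
    log_in(driver, "alice", "secret")
    assert_logged_in(driver)
```

A locator or workflow change is then fixed once in the library rather than in every recorded script, which is the maintenance effect the question is aiming at.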
Which of the following information in API documentation is LEAST relevant for implementing automated tests on that API?
Automated tests at the UI level for a web app adopt an asynchronous waiting mechanism that synchronizes test steps with the app, so that each step is executed correctly and at the right time, only when the app is ready and has processed the previous step (i.e., when there are no timeouts or pending asynchronous requests). In this way, the tests automatically synchronize with the app's web pages. The same initialization tasks that set the test preconditions are currently implemented as test steps in every test. Regarding the pre-processing (Setup) features defined at the test suite level, the TAS provides both a Suite Setup (which runs exactly once when the suite starts) and a Test Setup (which runs at the start of each test case in the suite). Which of the following recommendations would you provide for improving the TAS (assuming it is possible to perform all of them)?
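As an analogy to the Suite Setup / Test Setup features described above, a short sketch using Python's unittest shows how shared precondition steps can be moved out of the individual tests: a class-level setup runs exactly once for the suite, while a per-test setup runs before each test case. The precondition contents are illustrative placeholders.

```python
import unittest

class CheckoutSuite(unittest.TestCase):

    @classmethod
    def setUpClass(cls):
        # "Suite Setup": one-off, expensive preconditions
        # (e.g. start the browser, load reference test data).
        cls.shared_state = {"browser": "started once for the whole suite"}

    def setUp(self):
        # "Test Setup": per-test preconditions (e.g. reset the cart, open the
        # start page) instead of repeating them as test steps in every test.
        self.cart = []

    def test_add_single_item(self):
        self.cart.append("product-710")
        self.assertEqual(len(self.cart), 1)

    def test_cart_starts_empty(self):
        self.assertEqual(self.cart, [])

if __name__ == "__main__":
    unittest.main()
```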
Which of the following is NOT an example of a configuration item that should be specified in a development pipeline to identify the test environment (and its specific test data), associated with a web app under test, on which automated tests are to be executed?
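For context, a sketch of the kind of configuration items a pipeline might pass in to identify such a test environment and its test data; the field names and values below are assumptions, not a prescribed schema.

```python
from dataclasses import dataclass

@dataclass
class TestEnvironmentConfig:
    base_url: str        # URL of the deployed web app under test
    db_connection: str   # database instance holding the environment's test data
    test_data_set: str   # identifier of the test data set to load
    browser: str         # browser/driver the UI tests should target

STAGING = TestEnvironmentConfig(
    base_url="https://staging.example.com",
    db_connection="postgresql://staging-db.example.com/shop",
    test_data_set="regression-2024-06",
    browser="chrome",
)
```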
Consider a TAS aimed at implementing and running automated test scripts at the UI level on web apps. The TAS must support cross-browser compatibility for a variety of supported browsers, by ensuring that the same test script will run on such browsers in the same way without making any changes to it. This is achieved by introducing appropriate abstractions into the TAA for connection and interaction with different browsers. Because of this, the TAS will be able to make direct calls to the supported browsers using each different browser’s native support for automation. Which of the following SOLID principles was adopted?
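A minimal sketch of the kind of abstraction the question describes: test code depends on a browser-neutral interface, and each supported browser gets its own implementation that uses that browser's native automation support, so the same test script runs unchanged on every browser. Class and method names are illustrative assumptions.

```python
from abc import ABC, abstractmethod

class BrowserDriver(ABC):
    @abstractmethod
    def open(self, url: str) -> None: ...
    @abstractmethod
    def click(self, locator: str) -> None: ...

class ChromeDriverAdapter(BrowserDriver):
    def open(self, url: str) -> None:
        print(f"[chrome] navigating to {url}")    # would call Chrome's native automation
    def click(self, locator: str) -> None:
        print(f"[chrome] clicking {locator}")

class FirefoxDriverAdapter(BrowserDriver):
    def open(self, url: str) -> None:
        print(f"[firefox] navigating to {url}")   # would call Firefox's native automation
    def click(self, locator: str) -> None:
        print(f"[firefox] clicking {locator}")

def run_login_test(browser: BrowserDriver) -> None:
    # The test script never changes, regardless of the concrete browser passed in.
    browser.open("https://shop.example.com/login")
    browser.click("#submit")

run_login_test(ChromeDriverAdapter())
run_login_test(FirefoxDriverAdapter())
```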
Which of the following practices can be used to specify the active (i.e., actually available) features for each release of the SUT and determine the corresponding automated tests that must be executed for a given release?
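One plausible illustration (a sketch, with marker names invented for the example) is to tag each automated test with the feature it covers and select tags at execution time, so that only tests for the features active in a given release are run:

```python
import pytest

@pytest.mark.feature_checkout
def test_add_item_to_cart():
    assert True  # placeholder for the real checkout test

@pytest.mark.feature_wishlist
def test_add_item_to_wishlist():
    assert True  # placeholder for the real wishlist test

# If a release ships checkout but not the wishlist, the pipeline could run:
#   pytest -m "feature_checkout"
# so only the tests for that release's active features are executed.
```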
To improve the maintainability of test automation code, it is recommended to adopt design principles and design patterns that allow the code to be structured into:
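As a short sketch of the kind of structure this points at (names and locators are illustrative assumptions): the automation code is split into layers, for example a page-object layer that hides locators and widget interactions, and a test layer that only expresses intent.

```python
class LoginPage:
    """Page object: the only place that knows this page's locators."""
    def __init__(self, driver):
        self.driver = driver

    def log_in(self, user, password):
        self.driver.find_element("id", "username").send_keys(user)
        self.driver.find_element("id", "password").send_keys(password)
        self.driver.find_element("id", "submit").click()

def test_valid_login(driver):
    # Test layer: readable intent, no locators; a UI change touches only LoginPage.
    LoginPage(driver).log_in("alice", "secret")
    assert "dashboard" in driver.current_url
```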