
Free and Premium IAPP AIGP Dumps Questions Answers

Page: 1 / 12
Total 165 questions

Artificial Intelligence Governance Professional Questions and Answers

Question 1

Scenario:

A distributor operating in the EU is responsible for selling imported high-risk AI systems to businesses. The distributor wants to ensure they fulfill all applicable obligations under the EU AI Act.

All of the following are obligations of a distributor of high-risk AI systems under the EU AI Act EXCEPT?

Options:

A.

Corrective actions

B.

Verification of CE marking

C.

Registration in EU Database

D.

Communication with national authorities

Question 2

Scenario:

A company is using different types of AI systems to enhance consumer engagement. These include chatbots, recommendation engines, and automated content generation tools.

Which of the following situations would be least likely to raise concerns under existing consumer protection laws?

Options:

A.

An AI algorithm being used in a credit decision-making process by a financial institution

B.

An AI customer service system claiming that it is as accurate as a human support agent

C.

An AI tool using scraped digital content to generate news summaries on a publishing website

D.

An online platform offering recommendations to its users by displaying user-specific content and targeted advertisements

Question 3

CASE STUDY

Please use the following to answer the next question:

XYZ Corp., a premier payroll services company that employs thousands of people globally, is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.

It has become time consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.

To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. They have been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.

The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company are responsible for integrating and deploying technology solutions into the organization's operations in a responsible, cost-effective manner.

The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.

All of the following are potential negative consequences created by using the AI tool when making hiring decisions EXCEPT?

Options:

A.

Reputational harm.

B.

Civil rights violations.

C.

Discriminatory treatment.

D.

Intellectual property infringement.

Question 4

CASE STUDY

A premier payroll services company that employs thousands of people globally is embarking on a new hiring campaign and wants to implement policies and procedures to identify and retain the best talent. The new talent will help the company's product team expand its payroll offerings to companies in the healthcare and transportation sectors, including in Asia.

It has become time consuming and expensive for HR to review all resumes, and they are concerned that human reviewers might be susceptible to bias.

To address these concerns, the company is considering using a third-party AI tool to screen resumes and assist with hiring. They have been talking to several vendors about possibly obtaining a third-party AI-enabled hiring solution, as long as it would achieve its goals and comply with all applicable laws.

The organization has a large procurement team that is responsible for the contracting of technology solutions. One of the procurement team's goals is to reduce costs, and it often prefers lower-cost solutions. Others within the company deploy technology solutions into the organization’s operations in a responsible, cost-effective manner.

The organization is aware of the risks presented by AI hiring tools and wants to mitigate them. It also questions how best to organize and train its existing personnel to use the AI hiring tool responsibly. Their concerns are heightened by the fact that relevant laws vary across jurisdictions and continue to change.

 

All of the following are potential negative consequences created by using the AI tool to help make hiring decisions EXCEPT?

Options:

A.

Automation bias

B.

Candidate quality

C.

Privacy violations

D.

Disparate impacts

Question 5

Scenario:

A public sector agency is reviewing proposed AI use cases for improving services. It wants to prioritize implementations that deliver value but minimize unintended negative consequences.

When evaluating which AI use cases to implement, an organization should consider all of the following EXCEPT:

Options:

A.

Related TEVV (test, evaluate, verify, validate) and system metrics

B.

The users and their expectations

C.

Equitable access to the AI tool

D.

Potential positive and negative impacts of the system

Question 6

Which of the following disclosures is NOT required for an EU organization that developed and deployed a high-risk AI system?

Options:

A.

The human oversight measures employed.

B.

How an individual may contest a decision.

C.

The location(s) where data is stored.

D.

The fact that an AI system is being used.

Question 7

CASE STUDY

A global marketing agency is adapting a large language model ("LLM") to generate content for an upcoming marketing campaign for a client's new product: a hard hat designed for construction workers of any gender to better protect them from head injuries.

The marketing agency is accessing the LLM through an application programming interface ("API") developed by a third-party technology company. They want to generate text to be used for targeted advertising communications that highlight the benefits of the hard hat to potential purchasers. Both the marketing agency and the technology company have taken reasonable steps to address AI governance.

The marketing company has:

•           Entered into a contract with the technology company with suitable representations and warranties.

•           Completed an impact assessment on the LLM for this intended use.

•           Built technical guidance on how to measure and mitigate bias in the LLM.

•           Enabled technical aspects of transparency, explainability, robustness and privacy.

•           Followed applicable regulatory requirements.

•           Created specific legal statements and disclosures regarding the use of the AI on its client's advertising.

The technology company has:

•           Provided guidance and resources to developers to address environmental concerns.

•           Built technical guidance on how to measure and mitigate bias in the LLM.

•           Provided tools and resources to measure bias specific to the LLM.

•           Enabled technical aspects of transparency, explainability, robustness and privacy.

•           Mapped and mitigated potential societal harms and large-scale impacts.

•           Followed applicable regulatory requirements and industry standards.

•           Created specific legal statements and disclosures regarding the LLM, including with respect to IP and rights to data.

The agency has taken governance actions such as:

    Conducting an impact assessment

    Providing legal disclosures

    Enabling bias mitigation and explainability

    Complying with regulatory requirements

All of the following should be included in the marketing company’s disclosures about the use of the LLM EXCEPT?

Options:

A.

Intended purpose

B.

Proprietary methods

C.

Compliance with law

D.

Acknowledgement of limitations

Question 8

A company is creating a mobile app to enable individuals to upload images and videos, and analyze this data using ML to provide lifestyle improvement recommendations. The signup form has the following data fields:

1. First name

2. Last name

3. Mobile number

4. Email ID

5. New password

6. Date of birth

7. Gender

In addition, the app obtains a device's IP address and location information while in use.

Which GDPR privacy principles does this violate?

Options:

A.

Purpose Limitation and Data Minimization.

B.

Accountability and Lawfulness.

C.

Transparency and Accuracy.

D.

Integrity and Confidentiality.

Question 9

Which of the following most encourages accountability over AI systems?

Options:

A.

Determining the business objective and success criteria for the AI project.

B.

Performing due diligence on third-party AI training and testing data.

C.

Defining the roles and responsibilities of AI stakeholders.

D.

Understanding AI legal and regulatory requirements.

Question 10

Scenario:

A European AI technology company was found to be non-compliant with certain provisions of the EU AI Act. The regulator is considering penalties under the enforcement provisions of the regulation.

According to the EU AI Act, which of the following non-compliance examples could lead to fines of up to €15 million or 3% of annual worldwide turnover (whichever is higher)?

Options:

A.

In case of AI Act prohibitions

B.

In case of breach of a provider's obligations for high-risk AI systems

C.

In case of the supply of misleading information to notified bodies in reply to a request

D.

In case of a breach of AI Act prohibition by the Union institutions, bodies, offices and agencies

Question 11

A U.S. mortgage company developed an AI platform that was trained using anonymized details from mortgage applications, including the applicant’s education, employment and demographic information, as well as from subsequent payment or default information. The AI platform will be used to automatically grant or deny new mortgage applications, depending on whether the platform views an applicant as presenting a likely risk of default.

Which of the following laws is NOT relevant to this use case?

Options:

A.

Fair Housing Act.

B.

Fair Credit Reporting Act.

C.

Equal Credit Opportunity Act.

D.

Title VII of the Civil Rights Act of 1964.

Question 12

Which of the following best defines an "AI model"?

Options:

A.

A system that applies defined rules to execute tasks.

B.

A system of controls that is used to govern an AI algorithm.

C.

A corpus of data which an AI algorithm analyzes to make predictions.

D.

A program that has been trained on a set of data to find patterns within the data.

Question 13

What is the key feature of Graphics Processing Units (GPUs) that makes them well-suited to running AI applications?

Options:

A.

GPUs run many tasks concurrently, resulting in faster processing.

B.

GPUs can access memory quickly, resulting in lower latency than CPUs.

C.

GPUs can run every task on a computer, making them more robust than CPUs.

D.

The number of transistors on GPUs doubles every two years, making the chips smaller and lighter.

Question 14

An AI system that maintains its level of performance within defined acceptable limits despite real world or adversarial conditions would be described as?

Options:

A.

Robust.

B.

Reliable.

C.

Resilient.

D.

Reinforced.

Question 15

An EU bank intends to launch a multi-modal AI platform for customer engagement and automated decision-making to assist with the opening of bank accounts. The platform has been subject to thorough risk assessments and testing, where it proved to be effective in not discriminating against any individual on the basis of a protected class.

What additional obligations must the bank fulfill prior to deployment?

Options:

A.

The bank must obtain explicit consent from users under the privacy Directive.

B.

The bank must disclose how the AI system works under the EU Digital Services Act.

C.

The bank must subject the AI system to an adequacy decision and publish its appropriate safeguards.

D.

The bank must disclose the use of the AI system and implement suitable measures for users to contest automated decision-making.

Question 16

According to the Singapore Model AI Governance Framework, all of the following are recommended measures to promote the responsible use of AI EXCEPT?

Options:

A.

Determining the level of human involvement in algorithmic decision-making.

B.

Adapting the existing governance structure to algorithmic decision-making.

C.

Employing human-over-the-loop protocols for high-risk systems.

D.

Establishing communications and collaboration among stakeholders.

Question 17

Scenario:

A financial services company is planning a new AI project to assess creditworthiness. The AI team is mapping out what tasks should be completed during the planning phase of the AI lifecycle.

The planning phase of the AI lifecycle includes all of the following EXCEPT:

Options:

A.

Definition of underlying assumptions

B.

Approach to governance

C.

Choice of the architecture

D.

Context in which the model will operate

Question 18

Which of the following is a foundational characteristic of effective AI governance?

Options:

A.

Engagement of a cross-functional team

B.

Reliance on tested vendor management processes

C.

Thorough reviews of a company’s public filings with experts

D.

Uniform policies and procedures across developer, deployer and user roles

Question 19

All of the following are common optimization techniques in deep learning to determine weights that represent the strength of the connection between artificial neurons EXCEPT?

Options:

A.

Gradient descent, which initially sets weights to arbitrary values and then changes them at each step.

B.

Momentum, which improves the convergence speed and stability of neural network training.

C.

Autoregression, which analyzes and makes predictions about time-series data.

D.

Backpropagation, which starts from the last layer working backwards.
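For background on the gradient-descent technique listed above, a minimal illustrative sketch may help; the objective function, learning rate, and starting weight below are arbitrary assumptions, not anything specified by the question:

```python
# Minimal gradient descent on f(w) = (w - 3)^2, whose minimum is at w = 3.
# The weight starts at an arbitrary value and is repeatedly nudged along
# the negative gradient, as the "gradient descent" option describes.

def gradient_descent(start=0.0, lr=0.1, steps=100):
    w = start
    for _ in range(steps):
        grad = 2 * (w - 3)   # derivative of (w - 3)^2 with respect to w
        w -= lr * grad       # step against the gradient
    return w

print(gradient_descent())  # converges close to 3.0
```

In a neural network, backpropagation is what computes these gradients layer by layer, starting from the output layer and working backwards.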

Question 20

A company developed AI technology that can analyze text, video, images and sound to tag content, including the names of animals, humans and objects.

What type of AI is this technology classified as?

Options:

A.

Deductive inference.

B.

Multi-modal model.

C.

Transformative AI.

D.

Expert system.

Question 21

Scenario:

An organization is planning to deploy a new internal application that uses AI to make automated decisions about individuals. This application will process personal information and may affect individuals’ access to certain benefits or opportunities.

Which of the following documents must be updated to ensure transparency?

Options:

A.

The organization's website privacy notice

B.

The organization's acceptable use policy

C.

The organization's privacy policy

D.

The user privacy notice

Question 22

Which risk management framework/guide/standard focuses on value-based engineering methodology?

Options:

A.

ISO/IEC Guide 51 (Safety).

B.

ISO 31000 Guidelines (Risk Management).

C.

IEEE 7000-2021 Standard Model Process for Addressing Ethical Concerns during System Design.

D.

Council of Europe Human Rights, Democracy, and the Rule of Law Assurance Framework (HUDERIA) for AI Systems.

Question 23

Scenario:

Business A provides grammar and writing assistance tools and licenses a generative AI model from Business B to enhance its offerings. Business A is concerned that the AI model might produce inappropriate or toxic content and wants to implement governance processes to prevent this.

Which of the following governance processes should Business A take to best protect its users against potentially inappropriate text?

Options:

A.

Business A should fine-tune the AI model on user-generated text that has been verified to be appropriate

B.

Business A should test that the AI model performs as expected and meets their minimum requirements for filtering toxic or obscene text

C.

Business A should establish a user reporting feature that allows users to flag toxic or obscene text, and report any incidents to Business B

D.

Business A should ask Business B for detailed documentation on the generative AI model's training data and whether it contained toxic or obscene sources

Question 24

CASE STUDY

A company is considering the procurement of an AI system designed to enhance the security of IT infrastructure. The AI system analyzes how users type on their laptops, including typing speed, rhythm and pressure, to create a unique user profile. This data is then used to authenticate users and ensure that only authorized personnel can access sensitive resources.

The data processed by the AI system would be classified as:

Options:

A.

Non-sensitive personal data, since it does not reveal information about health, gender or race

B.

Organizational data, since it is part of the authentication process

C.

Non-personal data, as long as it is not linked to a user ID

D.

Special category data, if it can be used to uniquely identify a person

Question 25

CASE STUDY

Please use the following answer the next question:

Good Values Corporation (GVC) is a U.S. educational services provider that employs teachers to create and deliver enrichment courses for high school students. GVC has learned that many of its teacher employees are using generative AI to create the enrichment courses, and that many of the students are using generative AI to complete their assignments.

In particular, GVC has learned that the teachers they employ used open source large language models (“LLMs”) to develop an online tool that customizes study questions for individual students. GVC has also discovered that an art teacher has expressly incorporated the use of generative AI into the curriculum to enable students to use prompts to create digital art.

GVC has started to investigate these practices and develop a process to monitor any use of generative AI, including by teachers and students, going forward.

What is the best reason for GVC to offer students the choice to utilize generative AI in limited, defined circumstances?

Options:

A.

To enable students to learn how to manage their time.

B.

To enable students to learn about performing research.

C.

To enable students to learn about practical applications of AI.

D.

To enable students to learn how to use AI as a supportive educational tool.

Question 26

Which of the following is an obligation of an importer of high-risk AI systems under the EU AI Act?

Options:

A.

Provide technical documentation.

B.

Affix the CE marking.

C.

Verify the Declaration of Conformity.

D.

Conduct a data protection impact assessment.

Question 27

A company plans on procuring a tool from an Al provider for its employees to use for certain business purposes.

Which contractual provision would best protect the company's intellectual property in the tool, including training and testing data?

Options:

A.

The provider will give privacy notice to individuals before using their personal data to train or test the tool.

B.

The provider will defend and indemnify the company against infringement claims.

C.

The provider will obtain and maintain insurance to cover potential claims.

D.

The provider will warrant that the tool will work as intended.

Question 28

When monitoring the functional performance of a model that has been deployed into production, all of the following are concerns EXCEPT?

Options:

A.

Feature drift.

B.

System cost.

C.

Model drift.

D.

Data loss.
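To make the drift concerns above concrete, here is a toy sketch of a feature-drift check; the summary statistic (a simple mean shift) and the threshold are illustrative assumptions, not a production monitoring method:

```python
# Toy feature-drift check: compare the mean of a feature in live traffic
# against the mean observed at training time, and flag drift when the
# shift exceeds a chosen threshold (the threshold value is illustrative).

def feature_drift(train_values, live_values, threshold=0.5):
    train_mean = sum(train_values) / len(train_values)
    live_mean = sum(live_values) / len(live_values)
    return abs(live_mean - train_mean) > threshold

print(feature_drift([1.0, 1.2, 0.9], [1.1, 1.0, 1.05]))  # False: stable
print(feature_drift([1.0, 1.2, 0.9], [2.4, 2.6, 2.5]))   # True: drifted
```

Real monitoring systems use richer statistics (e.g., distribution distances) and track model outputs as well as inputs, but the pattern is the same: compare production behavior against a training-time baseline.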

Question 29

After completing model testing and validation, which of the following is the most important step that an organization takes prior to deploying the model into production?

Options:

A.

Perform a readiness assessment.

B.

Define a model-validation methodology.

C.

Document maintenance teams and processes.

D.

Identify known edge cases to monitor post-deployment.

Question 30

Which stakeholder is responsible for lawful collection of data for the training of the foundational AI model?

Options:

A.

The marketing agency.

B.

The tech company.

C.

The data aggregator.

D.

The marketing agency's client.

Question 31

What is the best reason for a company to adopt a policy that prohibits the use of generative AI?

Options:

A.

Avoid using technology that cannot be monetized.

B.

Avoid needing to identify and hire qualified resources.

C.

Avoid the time necessary to train employees on acceptable use.

D.

Avoid accidental disclosure of its confidential and proprietary information.

Question 32

In procuring an AI system from a vendor, which of the following would be important to include in a contract to enable proper oversight and auditing of the system?

Options:

A.

Liability for mistakes.

B.

Ownership of data and outputs.

C.

Responsibility for improvements.

D.

Appropriate access to data and models.

Question 33

What is the most important factor when deciding whether or not to select a proprietary AI model?

Options:

A.

What business purpose it will serve.

B.

How frequently it will be updated.

C.

Whether its training data is disclosed.

D.

Whether its system card identifies risks.

Question 34

Retrieval-Augmented Generation (RAG) is defined as?

Options:

A.

Combining LLMs with private knowledge bases to improve their outputs.

B.

Reducing computational processing requirements of the LLMs.

C.

Applying advanced filtering techniques to the LLMs.

D.

Fine-tuning LLMs to minimize biased outputs.
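As background for the RAG question above, a toy sketch of the retrieve-then-generate pattern; the knowledge base, keyword-overlap scoring, and prompt format are illustrative assumptions (a real system would use embedding search and an actual LLM call):

```python
# Toy RAG sketch: retrieve the most relevant snippet from a private
# knowledge base, then prepend it to the prompt sent to an LLM.

KNOWLEDGE_BASE = [
    "Refund requests must be filed within 30 days of purchase.",
    "Support is available Monday through Friday, 9am to 5pm.",
]

def retrieve(query):
    # Naive relevance score: count shared lowercase words.
    def overlap(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return max(KNOWLEDGE_BASE, key=overlap)

def build_prompt(query):
    # The retrieved context grounds the LLM's answer in private data
    # the model never saw during training.
    return f"Context: {retrieve(query)}\nQuestion: {query}"

print(build_prompt("How many days to file a refund request"))
```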

Question 35

The most important factor in ensuring fairness when training an AI system is?

Options:

A.

The architecture and model selection.

B.

The data labeling and classification.

C.

The data attributes and variability.

D.

The model accuracy and scale.

Question 36

In the machine learning context, feature engineering is the process of?

Options:

A.

Converting raw data into clean data.

B.

Creating a learning schema for a model to apply.

C.

Developing guidelines to train and test a model.

D.

Extracting attributes and variables from raw data.
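To ground the feature-engineering question above, a toy sketch of deriving model-ready attributes from a raw record; the record fields, derived features, and fixed reference date are all illustrative assumptions:

```python
# Toy feature engineering: derive model-ready attributes from a raw
# signup record. The reference date is hard-coded so the output is
# deterministic rather than depending on when the code runs.

from datetime import date

REFERENCE_DATE = date(2024, 1, 1)  # illustrative "today"

def engineer_features(raw):
    birth = date.fromisoformat(raw["date_of_birth"])
    return {
        "age_years": (REFERENCE_DATE - birth).days // 365,
        "email_domain": raw["email"].split("@")[1],
        "has_mobile": bool(raw.get("mobile")),
    }

features = engineer_features({
    "date_of_birth": "1990-06-15",
    "email": "user@example.com",
    "mobile": "+1-555-0100",
})
print(features)
```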

Question 37

What is the primary purpose of conducting ethical red-teaming on an AI system?

Options:

A.

To improve the model's accuracy.

B.

To simulate model risk scenarios.

C.

To identify security vulnerabilities.

D.

To ensure compliance with applicable law.

Question 38

A company developing and deploying its own AI model would perform all of the following steps to monitor and evaluate the model's performance EXCEPT?

Options:

A.

Publicly disclosing data with forecasts of secondary and downstream harms to stakeholders.

B.

Setting up automated tools to regularly track the model's accuracy, precision and recall rates in real-time.

C.

Implementing a formal incident response plan to address incidents that may occur during system operation.

D.

Establishing a regular schedule for human evaluation of the model's performance, including qualitative assessments.

Question 39

CASE STUDY

Please use the following answer the next question:

A local police department in the United States procured an AI system to monitor and analyze social media feeds, online marketplaces and other sources of public information to detect evidence of illegal activities (e.g., sale of drugs or stolen goods). The AI system works by surveilling the public sites in order to identify individuals that are likely to have committed a crime. It cross-references the individuals against data maintained by law enforcement and then assigns a percentage score of the likelihood of criminal activity based on certain factors like previous criminal history, location, time, race and gender.

The police department retained a third-party consultant to assist in the procurement process, specifically to evaluate two finalists. Each of the vendors provided information about their system's accuracy rates, the diversity of their training data and how their system works. The consultant determined that the first vendor’s system has a higher accuracy rate and, based on this information, recommended this vendor to the police department.

The police department chose the first vendor and implemented its AI system. As part of the implementation, the department and consultant created a usage policy for the system, which includes training police officers on how the system works and how to incorporate it into their investigation process.

The police department has now been using the AI system for a year. An internal review has found that every time the system scored a likelihood of criminal activity at or above 90%, the police investigation subsequently confirmed that the individual had, in fact, committed a crime. Based on these results, the police department wants to forego investigations for cases where the AI system gives a score of at least 90% and proceed directly with an arrest.

During the procurement process, what is the most likely reason that the third-party consultant asked each vendor for information about the diversity of their datasets?

Options:

A.

To comply with applicable law.

B.

To assess the fairness of the AI system.

C.

To evaluate the reliability of the AI system.

D.

To determine the explainability of the AI system.

Question 40

To maintain fairness in a deployed system, it is most important to?

Options:

A.

Protect against loss of personal data in the model.

B.

Monitor for data drift that may affect performance and accuracy.

C.

Detect anomalies outside established metrics that require new training data.

D.

Optimize computational resources and data to ensure efficiency and scalability.

Question 41

The best practice to manage third-party risk associated with AI systems is to create and implement policies that?

Options:

A.

Focus on the financial stability of third-party vendors as the primary criterion for risk assessment.

B.

Provide for an appropriate level of due diligence and ongoing monitoring based on the defined risk.

C.

Require third-party AI systems to undergo a comprehensive audit by an external cybersecurity firm every six months.

D.

Focus on the technical aspects of AI systems, such as data security, while ethical risks are addressed through suitable contracts.

Question 42

Retraining an LLM can be necessary for all of the following reasons EXCEPT?

Options:

A.

To minimize degradation in prediction accuracy due to changes in data.

B.

To adjust the model's hyperparameters for a specific use case.

C.

To account for new interpretations of the same data.

D.

To ensure interpretability of the model's predictions.

Question 43

A company deploys an AI model for fraud detection in online transactions. During its operation, the model begins to exhibit high rates of false positives, flagging legitimate transactions as fraudulent.

Which is the best step the company should take to address this development?

Options:

A.

Dedicate more resources to monitor the model.

B.

Maintain records of all false positives.

C.

Deactivate the model until an assessment is made.

D.

Conduct training for customer service teams to handle flagged transactions.

Question 44

CASE STUDY

Please use the following answer the next question:

A mid-size US healthcare network has decided to develop an AI solution to detect a type of cancer that is most likely to arise in adults. Specifically, the healthcare network intends to create a recognition algorithm that will perform an initial review of all imaging and then route records to a radiologist for secondary review pursuant to agreed-upon criteria (e.g., a confidence score below a threshold).

To date, the healthcare network has taken the following steps: defined its AI ethical principles; conducted discovery to identify the intended uses and success criteria for the system; established an AI governance committee; assembled a broad, cross-functional team with clear roles and responsibilities; and created policies and procedures to document standards, workflows, timelines and risk thresholds during the project.

The healthcare network intends to retain a cloud provider to host the solution and a consulting firm to help develop the algorithm using the healthcare network's existing data and de-identified data that is licensed from a large US clinical research partner.

Which of the following steps can best mitigate the possibility of discrimination prior to training and testing the AI solution?

Options:

A.

Procure more data from clinical research partners.

B.

Engage a third party to perform an audit.

C.

Perform an impact assessment.

D.

Create a bias bounty program.

Question 45

An AI system's function, the industry and the location in which it operates are important factors in considering which of the following?

Options:

A.

Organizational accountability.

B.

Internal governance needs.

C.

Diversity of data sources.

D.

Explainability of results.

Question 46

You are the chief privacy officer of a medical research company that would like to collect and use sensitive data about cancer patients, such as their names, addresses, race and ethnic origin, medical histories, insurance claims, pharmaceutical prescriptions, eating and drinking habits and physical activity.

The company will use this sensitive data to build an AI algorithm that will spot common attributes that will help predict if seemingly healthy people are more likely to get cancer. However, the company is unable to obtain consent from enough patients to sufficiently collect the minimum data to train its model.

Which of the following solutions would most efficiently balance privacy concerns with the lack of available data during the testing phase?

Options:

A.

Deploy the current model and recalibrate it over time with more data.

B.

Extend the model to multi-modal ingestion with text and images.

C.

Utilize synthetic data to offset the lack of patient data.

D.

Refocus the algorithm to patients without cancer.

Question 47

Training data is best defined as a subset of data that is used to?

Options:

A.

Enable a model to detect and learn patterns.

B.

Fine-tune a model to improve accuracy and prevent overfitting.

C.

Detect the initial sources of biases to mitigate prior to deployment.

D.

Resemble the structure and statistical properties of production data.

Question 48

All of the following are potential benefits of using private over public LLMs EXCEPT?

Options:

A.

Reduction in time taken for data validation and verification.

B.

Confirmation of security and confidentiality.

C.

Reduction in possibility of hallucinated information.

D.

Application for specific use cases within the enterprise.

Question 49

According to the November 2023 White House Executive Order, which of the following best describes the guidance given to governmental agencies on the use of generative AI as a workplace tool?

Options:

A.

Limit access to specific uses of generative AI.

B.

Impose a general ban on the use of generative AI.

C.

Limit access of generative AI to engineers and developers.

D.

Impose a ban on the use of generative AI in agencies that protect national security.
