
Free and Premium Confluent CCDAK Dumps Questions Answers

Page: 1 / 7
Total 90 questions

Confluent Certified Developer for Apache Kafka Certification Examination Questions and Answers

Question 1

You are experiencing low throughput from a Java producer.

Kafka producer metrics show a low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

The producer is sending large batches of messages.

B.

There is a bad data link layer (Layer 2) connection from the producer to the cluster.

C.

The producer code has an expensive callback function.

D.

Compression is enabled.

Question 2

A stream processing application is consuming from a topic with five partitions. You run three instances of the application. Each instance has num.stream.threads=5.

You need to identify the number of stream tasks that will be created and how many will actively consume messages from the input topic.

Options:

A.

5 created, 1 actively consuming

B.

5 created, 5 actively consuming

C.

15 created, 5 actively consuming

D.

15 created, 15 actively consuming
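As background for this scenario: Kafka Streams creates one task per input-topic partition for a simple (single-subtopology) topology, and each task is assigned to exactly one stream thread; threads beyond the task count sit idle. A minimal sketch of the arithmetic, assuming a single subtopology reading only the five-partition topic:

```python
# Kafka Streams creates one task per input partition (per subtopology).
input_partitions = 5
tasks_created = input_partitions          # 5 tasks in total

# Three instances, each with num.stream.threads=5 -> 15 threads available.
instances, threads_per_instance = 3, 5
total_threads = instances * threads_per_instance   # 15

# Each task runs on exactly one thread; surplus threads stay idle.
active_tasks = min(tasks_created, total_threads)   # 5 tasks actively consuming
idle_threads = total_threads - active_tasks        # 10 threads with no work
print(tasks_created, active_tasks, idle_threads)
```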

Question 3

Which two statements are correct about transactions in Kafka?

(Select two.)

Options:

A.

All messages from a failed transaction will be deleted from a Kafka topic.

B.

Transactions are only possible when writing messages to a topic with a single partition.

C.

Consumers can consume both committed and uncommitted transactions.

D.

Information about producers and their transactions is stored in the _transaction_state topic.

E.

Transactions guarantee at least once delivery of messages.

Question 4

You have a topic t1 with six partitions. You use Kafka Connect to send data from topic t1 in your Kafka cluster to Amazon S3. Kafka Connect is configured for two tasks.

How many partitions will each task process?

Options:

A.

2

B.

3

C.

6

D.

12
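Kafka Connect divides a sink connector's topic partitions across its tasks roughly evenly. A toy round-robin sketch of that division (illustrative only; Connect's actual assignment algorithm may differ in detail):

```python
# Round-robin assignment of topic partitions to Connect sink tasks.
partitions = list(range(6))   # topic t1 has six partitions: 0..5
tasks_max = 2

assignment = {t: [] for t in range(tasks_max)}
for p in partitions:
    assignment[p % tasks_max].append(p)

per_task = [len(ps) for ps in assignment.values()]
print(assignment, per_task)   # six partitions over two tasks -> three each
```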

Question 5

You started a new Kafka Connect worker.

Which configuration identifies the Kafka Connect cluster that your worker will join?

Options:

A.

cluster.id

B.

worker.id

C.

group.id

D.

connector.id

Question 6

What is a consequence of increasing the number of partitions in an existing Kafka topic?

Options:

A.

Existing data will be redistributed across the new number of partitions, temporarily increasing cluster load.

B.

Records with the same key could be located in different partitions.

C.

Consumers will need to process data from more partitions which will significantly increase consumer lag.

D.

The acknowledgment process will increase latency for producers using acks=all.

Question 7

What is the default maximum size of a message the Apache Kafka broker can accept?

Options:

A.

1MB

B.

2MB

C.

5MB

D.

10MB

Question 8

Your consumer application is consuming from a topic and is configured with a deserializer.

It needs to be resilient to badly formatted records ("poison pills"). You surround the poll() call with a try/catch for RecordDeserializationException.

You need to log the bad record, skip it, and continue processing.

Which action should you take in the catch block?

Options:

A.

Log the bad record, no other action needed.

B.

Log the bad record and seek the consumer to the offset of the next record.

C.

Log the bad record and call the consumer.skip() method.

D.

Throw a runtime exception to trigger a restart of the application.
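For context, the usual recovery pattern is to read the failing partition and offset from the exception and seek the consumer past the bad record; the Kafka consumer API has no skip() method. A runnable simulation of that control flow, using a stub consumer in place of a real KafkaConsumer (the real Java code would call consumer.seek(partition, failedOffset + 1)):

```python
# Simulation of the "log, seek past, continue" poison-pill pattern.
class DeserializationError(Exception):
    def __init__(self, offset):
        self.offset = offset

class StubConsumer:
    def __init__(self, records):
        self.records = records      # list of (offset, payload); None = poison
        self.position = 0
    def poll(self):
        if self.position >= len(self.records):
            return None
        offset, payload = self.records[self.position]
        if payload is None:         # a record that fails to deserialize
            raise DeserializationError(offset)
        self.position += 1
        return payload
    def seek(self, offset):
        self.position = offset      # offsets double as indices in this stub

consumer = StubConsumer([(0, "a"), (1, None), (2, "b")])
processed, skipped = [], []
while True:
    try:
        rec = consumer.poll()
        if rec is None:
            break
        processed.append(rec)
    except DeserializationError as e:
        skipped.append(e.offset)        # "log" the bad record
        consumer.seek(e.offset + 1)     # skip past it and continue
print(processed, skipped)
```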

Question 9

You have a Kafka Connect cluster with multiple connectors deployed.

One connector is not working as expected.

You need to find logs related to that specific connector to investigate the issue.

How can you find the connector’s logs?

Options:

A.

Modify the log4j.properties file to enable connector context.

B.

Change the log level to DEBUG to include connector context information.

C.

Modify the log4j.properties file to add a dedicated log appender for the connector.

D.

Make no change; there is no way to isolate connector logs.

Question 10

Which two producer exceptions are examples of the class RetriableException? (Select two.)

Options:

A.

LeaderNotAvailableException

B.

RecordTooLargeException

C.

AuthorizationException

D.

NotEnoughReplicasException

Question 11

Which configuration allows more time for the consumer poll to process records?

Options:

A.

session.timeout.ms

B.

heartbeat.interval.ms

C.

max.poll.interval.ms

D.

fetch.max.wait.ms

Question 12

You need to send a JSON message on the wire. The message key is a string.

How would you do this?

Options:

A.

Specify a key serializer class for the JSON contents of the message’s value. Set the value serializer class to null.

B.

Specify a value serializer class for the JSON contents of the message’s value. Set a key serializer for the string value.

C.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to null.

D.

Specify a value serializer class for the JSON contents of the message’s value. Set the key serializer class to JSON.
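For orientation: a producer sending string keys and JSON values must configure both a key serializer and a value serializer; neither may be null. An illustrative configuration (class names assume the kafka-clients StringSerializer and Confluent's JSON serializer; adjust to your libraries):

```properties
# Illustrative producer serializer settings (assumed class names):
key.serializer=org.apache.kafka.common.serialization.StringSerializer
value.serializer=io.confluent.kafka.serializers.KafkaJsonSerializer
```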

Question 13

Your application is consuming from a topic with one consumer group.

The number of running consumers is equal to the number of partitions.

Application logs show that some consumers are leaving the consumer group during peak time, triggering a rebalance. You also notice that your application is processing many duplicates.

You need to stop consumers from leaving the consumer group.

What should you do?

Options:

A.

Reduce max.poll.records property.

B.

Increase session.timeout.ms property.

C.

Add more consumer instances.

D.

Split consumers in different consumer groups.

Question 14

You are experiencing low throughput from a Java producer.

Metrics show low I/O thread ratio and low I/O thread wait ratio.

What is the most likely cause of the slow producer performance?

Options:

A.

Compression is enabled.

B.

The producer is sending large batches of messages.

C.

There is a bad data link layer (layer 2) connection from the producer to the cluster.

D.

The producer code has an expensive callback function.

Question 15

The producer code below features a Callback class with a method called onCompletion().

When will the onCompletion() method be invoked?

Options:

A.

When a consumer sends an acknowledgement to the producer

B.

When the producer puts the message into its socket buffer

C.

When the producer batches the message

D.

When the producer receives the acknowledgment from the broker

Question 16

You need to explain the best reason to implement the ConsumerRebalanceListener consumer callback interface prior to a consumer group rebalance.

Which statement is correct?

Options:

A.

Partitions assigned to a consumer may change.

B.

Previous log files are deleted.

C.

Offsets are compacted.

D.

Partition leaders may change.

Question 17

Which configuration is valid for deploying a JDBC Source Connector to read all rows from the orders table and write them to the dbl-orders topic?

Options:

A.

{"name": "orders-connect","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl","topic.whitelist": "orders","auto.create": "true"}

B.

{"name": "dbl-orders","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.blacklist": "ord*"}

C.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.DdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&useAutoAuth=true","topic.prefix": "dbl-","table.whitelist": "orders"}

D.

{"name": "jdbc-source","connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector","tasks.max": "1","connection.url": "jdbc:mysql://mysql:3306/dbl?user=user&password=pas","topic.prefix": "dbl-","table.whitelist": "orders"}

Question 18

Which two statements are correct when assigning partitions to the consumers in a consumer group using the assign() API?

(Select two.)

Options:

A.

It is mandatory to subscribe to a topic before calling assign() to assign partitions.

B.

The consumer chooses which partition to read without any assignment from brokers.

C.

The consumer group will not be rebalanced if a consumer leaves the group.

D.

All topics must have the same number of partitions to use assign() API.

Question 19

Match each configuration parameter with the correct option.

To answer, choose a match for each option from the drop-down. Partial credit is given for each correct answer.

Options:

Question 20

You are building real-time streaming applications using Kafka Streams.

Your application has a custom transformation.

You need to define custom processors in Kafka Streams.

Which tool should you use?

Options:

A.

TopologyTestDriver

B.

Processor API

C.

Kafka Streams Domain Specific Language (DSL)

D.

Kafka Streams Custom Transformation Language

Question 21

The producer code below features a Callback class with a method called onCompletion().

In the onCompletion() method, when the request is completed successfully, what does the value metadata.offset() represent?

Options:

A.

The sequential ID of the message committed into a partition

B.

Its position in the producer’s batch of messages

C.

The number of bytes that overflowed beyond a producer batch of messages

D.

The ID of the partition to which the message was committed

Question 22

You need to correctly join data from two Kafka topics.

Which two scenarios will allow for co-partitioning?

(Select two.)

Options:

A.

Both topics have the same number of partitions.

B.

Both topics have the same key and partitioning strategy.

C.

Both topics have the same value schema.

D.

Both topics have the same retention time.
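Co-partitioning means the same key maps to the same partition number in both topics, which requires the same partitioning strategy and the same partition count. A toy sketch of why both conditions matter (using a simple deterministic hash as a stand-in for Kafka's murmur2 partitioner):

```python
def partition_for(key: str, num_partitions: int) -> int:
    # Toy deterministic hash partitioner (stand-in for Kafka's murmur2).
    return sum(key.encode()) % num_partitions

# Same partitioner + equal partition counts => keys collocate across topics.
same = all(partition_for(k, 6) == partition_for(k, 6) for k in ["alice", "bob"])

# Unequal partition counts break collocation for at least some keys:
diverges = partition_for("alice", 6) != partition_for("alice", 4)
print(same, diverges)
```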

Question 23

You need to configure a sink connector to write records that fail into a dead letter queue topic. Requirements:

Topic name: DLQ-Topic

Headers containing error context must be added to the messages

Which three configuration parameters are necessary? (Select three.)

Options:

A.

errors.tolerance=all

B.

errors.deadletterqueue.topic.name=DLQ-Topic

C.

errors.deadletterqueue.context.headers.enable=true

D.

errors.tolerance=none

E.

errors.log.enable=true

F.

errors.log.include.messages=true
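For reference, Kafka Connect's error-handling options for sink connectors route failed records to a dead letter queue via the errors.* settings. A sketch of a configuration matching the stated requirements (property names from Connect's error-handling configuration):

```properties
# Tolerate bad records instead of failing the task, send them to the DLQ,
# and attach error-context headers to each DLQ message:
errors.tolerance=all
errors.deadletterqueue.topic.name=DLQ-Topic
errors.deadletterqueue.context.headers.enable=true
```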

Question 24

You are configuring a source connector that writes records to an Orders topic.

You need to send some of the records to a different topic.

Which Single Message Transform (SMT) is best suited for this requirement?

Options:

A.

RegexRouter

B.

InsertField

C.

TombstoneHandler

D.

HeaderFrom

Question 25

You are working on a Kafka cluster with three nodes. You create a topic named orders with:

replication.factor = 3

min.insync.replicas = 2

acks = all

What exception will be generated if two brokers are down due to network delay?

Options:

A.

NotEnoughReplicasException

B.

NetworkException

C.

NotCoordinatorException

D.

NotLeaderForPartitionException
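The broker enforces min.insync.replicas for acks=all writes: a produce request is rejected once the in-sync replica count falls below that threshold. The arithmetic behind the scenario can be sketched as follows (a simplified model, not broker code):

```python
# acks=all succeeds only while the in-sync replica count stays at or above
# min.insync.replicas; otherwise the broker rejects the write (surfaced to
# the Java producer as a NotEnoughReplicas-style error).
replication_factor = 3
min_insync_replicas = 2
brokers_down = 2

in_sync = replication_factor - brokers_down       # 1 replica left in sync
write_accepted = in_sync >= min_insync_replicas   # 1 >= 2 is False
print(in_sync, write_accepted)
```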

Question 26

A stream processing application is tracking user activity in online shopping carts.

You want to identify periods of user inactivity.

Which type of Kafka Streams window should you use?

Options:

A.

Sliding

B.

Tumbling

C.

Hopping

D.

Session

Question 27

You need to collect logs from a host and write them to a Kafka topic named 'logs-topic'. You decide to use the Kafka Connect File Source connector for this task.

What is the preferred deployment mode for this connector?

Options:

A.

Standalone mode

B.

Distributed mode

C.

Parallel mode

D.

SingleCluster mode
