Datacouch CCDAK Practice Exam 2

What does the offset mean or correspond to for a Kafka Connect source reading from a database? (Choose two)

Topic configurations and ACLs are stored in

How do you ensure message ordering if retries > 0?

Where does the Kafka Connect JDBC source connector (in distributed mode) track its offset?

The relationship of Broker to Partition with replication factor of 3 is

In a KafkaProducer program, our send() was configured with a callback (onCompletion). The send was called, and in the callback the RecordMetadata parameter was null. What does this tell us about the send?

Where does the Kafka Connect HDFS sink connector track its offset?

The relationship of Producer to Topic is

True or False: A key deserializer must be used even if you do not intend to use keys:: True

Where does the Kafka Connect FileStream source connector track its offset?

Group memberships for a consumer group are managed by

Avro schemas are represented in what format?

Which class is used to create a builder object in a Kafka Streams application?

Number of Group Leaders in a single consumer group with 5 consumers is

Which component in Kafka handles schema evolution?

Which of the following Kafka Streams operators are stateless? (Choose 3)

With a Range assignment strategy in place, what will the final outcome look like with the given inputs?
Topics: foo and bar
Partitions: foo-0, foo-1, bar-0, bar-1
Consumers: C1, C2, C3
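The outcome of this question can be sketched with a small plain-Java simulation of range assignment (the class and method names here are illustrative, not Kafka APIs):

```java
import java.util.*;

// Sketch: simulate Kafka's Range assignor for the inputs above.
// Range assignment works per topic: each topic's partitions are split
// across the sorted consumer list, so with 2 partitions and 3 consumers
// the third consumer is left idle for every topic.
public class RangeAssignorDemo {

    static Map<String, List<String>> assign(List<String> consumers,
                                            Map<String, Integer> partitionsPerTopic) {
        List<String> sorted = new ArrayList<>(consumers);
        Collections.sort(sorted);
        Map<String, List<String>> result = new TreeMap<>();
        for (String c : sorted) result.put(c, new ArrayList<>());
        for (Map.Entry<String, Integer> e : new TreeMap<>(partitionsPerTopic).entrySet()) {
            int quota = e.getValue() / sorted.size();  // whole share per consumer
            int extra = e.getValue() % sorted.size();  // first `extra` consumers get one more
            int p = 0;
            for (int i = 0; i < sorted.size(); i++) {
                int count = quota + (i < extra ? 1 : 0);
                for (int j = 0; j < count; j++, p++) {
                    result.get(sorted.get(i)).add(e.getKey() + "-" + p);
                }
            }
        }
        return result;
    }

    static String demo() {
        Map<String, Integer> topics = new TreeMap<>();
        topics.put("foo", 2);
        topics.put("bar", 2);
        return assign(Arrays.asList("C1", "C2", "C3"), topics).toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // {C1=[bar-0, foo-0], C2=[bar-1, foo-1], C3=[]}
    }
}
```

Note how C3 receives nothing: range assignment never balances across topics, which is why it can leave consumers idle.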

A consumer is reading data with a new schema while data written with the old schema is already stored in Kafka. This scenario is an example of which kind of compatibility?
Schema the data was written with: { "name": "suit", "type": "string"}, { "name": "card", "type": "string"}
Schema the consumer expects (assuming a default for the omitted size field): { "name": "suit", "type": "string"}, { "name": "card", "type": "string"},

Which of the following correctly defines the filter operation from a streaming-application perspective?

With a Round Robin assignment strategy in place, what will the final outcome look like with the given inputs?
Topics: foo and bar
Partitions: foo-0, foo-1, bar-0, bar-1
Consumers: C1, C2, C3
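For contrast with the Range case, round-robin assignment can be sketched the same way in plain Java (again, illustrative names, not Kafka APIs):

```java
import java.util.*;

// Sketch: simulate Kafka's RoundRobin assignor for the inputs above.
// Round-robin pools the partitions of all subscribed topics, sorts them,
// and deals them to the sorted consumers one at a time, so all three
// consumers end up with work.
public class RoundRobinAssignorDemo {

    static Map<String, List<String>> assign(List<String> consumers, List<String> partitions) {
        List<String> cs = new ArrayList<>(consumers);
        List<String> ps = new ArrayList<>(partitions);
        Collections.sort(cs);
        Collections.sort(ps);
        Map<String, List<String>> result = new TreeMap<>();
        for (String c : cs) result.put(c, new ArrayList<>());
        for (int i = 0; i < ps.size(); i++) {
            result.get(cs.get(i % cs.size())).add(ps.get(i));  // deal in turn
        }
        return result;
    }

    static String demo() {
        return assign(Arrays.asList("C1", "C2", "C3"),
                      Arrays.asList("foo-0", "foo-1", "bar-0", "bar-1")).toString();
    }

    public static void main(String[] args) {
        System.out.println(demo());  // {C1=[bar-0, foo-1], C2=[bar-1], C3=[foo-0]}
    }
}
```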

Producers have started producing data with a new schema while consumers are still reading data with the old schema. This scenario is an example of which kind of compatibility?
Schema the data is written with: { "name": "suit", "type": "string"}, { "name": "card", "type": "string"}
Schema the consumer expects (ignoring the additional card field): { "name": "suit", "type": "string"},

Which of the following correctly defines the map operation from a streaming-application perspective?

We have a topic with 5 partitions and 7 consumers. How many consumers will end up receiving data?
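The arithmetic behind this question can be sketched in one line of plain Java (an illustration of group semantics, not a Kafka API):

```java
// Sketch: within one consumer group, each partition goes to at most one
// consumer, so at most min(partitions, consumers) consumers receive data;
// any consumers beyond the partition count sit idle.
public class ActiveConsumersDemo {

    static int activeConsumers(int partitions, int consumers) {
        return Math.min(partitions, consumers);
    }

    public static void main(String[] args) {
        System.out.println(activeConsumers(5, 7));  // 5 consumers get data, 2 idle
    }
}
```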

Schema registry information is stored in

Which of the following correctly defines the mapValues operation from a streaming-application perspective?

We have just installed Confluent Kafka and all the settings are default. After installation we issue the following command:
# kafka-console-producer --broker-list :9092 --topic confluent
>hello world
What will happen once we press Enter?

Schema incompatibility is detected and reported by which component?

Which of the following correctly defines the flatMap operation from a streaming-application perspective?

Which of the following is the correct command for creating a topic confluent with 2 partitions?

What is the Zero copy concept?

Are worker processes of Kafka Connect managed by Kafka Brokers?:: False

What is the best use case for the KTable datatype in Kafka Streams?

What is the Transaction Coordinator in Kafka?

What is a connector in Kafka Connect?

How to implement authentication in Kafka?

Which of the below is not supported by Kafka?

What’s the default path of log4j.properties?

Authorization in Kafka is based on a tuple with how many elements?

How do we identify brokers in a Kafka cluster?

How to compute the number of Partitions?

Which metrics are critical for understanding Broker performance?

What is a valid unit of quota in Kafka?

How to configure Client quota?

The following concept allows Kafka to transfer data from a local file channel to a remote socket channel directly without going through the application space.

The default value of auto.commit.interval.ms is:

Which of the following are run modes for Kafka Connect? (Choose two)

At the end of this code, which topics will the consumer be subscribed to?

Properties props = new Properties();
props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092");
props.put(ConsumerConfig.GROUP_ID_CONFIG, "samplegroup");
props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Arrays.asList("my_topic", "my_other_topic"));
consumer.subscribe(Arrays.asList("last_topic"));
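The subscription semantics at play here can be modeled with a toy plain-Java class (this sketches subscribe() replacing the previous subscription; it is not the Kafka API):

```java
import java.util.*;

// Sketch (not the Kafka API): each call to subscribe() replaces the
// previous subscription rather than appending to it. This toy class
// models that replacement with a plain set.
public class SubscribeReplaceDemo {

    private Set<String> subscription = new TreeSet<>();

    void subscribe(Collection<String> topics) {
        subscription = new TreeSet<>(topics);  // old subscription is discarded
    }

    Set<String> subscription() {
        return subscription;
    }

    public static void main(String[] args) {
        SubscribeReplaceDemo consumer = new SubscribeReplaceDemo();
        consumer.subscribe(Arrays.asList("my_topic", "my_other_topic"));
        consumer.subscribe(Arrays.asList("last_topic"));
        System.out.println(consumer.subscription());  // [last_topic]
    }
}
```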

Kafka Connect configurations in distributed mode are stored in

For a topic with a replication factor 5, the number of leaders will be

What should the configuration look like for high-throughput consumer fetch requests?

Which of the following is not a streaming framework?

True or False: All the writes in a replicated partition go to the leader and reads go to the followers:: False

The default value of max.message.bytes is:

In a Kafka Streams application, what is the first place where stateful information is stored?

The number of Controllers in a 40-broker cluster is

Which class is used to create a Consumer in Kafka?

We have two records coming into a streaming application: (orange,2) and (orange,5). We have decided to treat them as a KStream and sum up the values. What will the final result be?
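The KStream semantics behind this question can be sketched in plain Java (an illustration only, not the Kafka Streams API):

```java
// Sketch (plain Java, not the Kafka Streams API): a KStream treats every
// record as an independent event, so aggregating (orange,2) and (orange,5)
// sums both values for the key.
public class KStreamSumDemo {

    static int sumForKey(String[] keys, int[] values, String key) {
        int sum = 0;
        for (int i = 0; i < keys.length; i++) {
            if (keys[i].equals(key)) sum += values[i];  // every event contributes
        }
        return sum;
    }

    public static void main(String[] args) {
        String[] keys = {"orange", "orange"};
        int[] values = {2, 5};
        System.out.println(sumForKey(keys, values, "orange"));  // 7
    }
}
```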

If a leader for a partition is lost due to failure, the new leader is elected by

How to temporarily stop consumption of new messages and resume it at a later point:

We have two records coming into a streaming application: (orange,2) and (orange,5). We have decided to treat them as a KTable and sum up the values. What will the final result be?
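The contrasting KTable semantics can be sketched the same way in plain Java (an illustration only, not the Kafka Streams API):

```java
import java.util.*;

// Sketch (plain Java, not the Kafka Streams API): a KTable treats each
// record as an upsert on its key, so (orange,5) overwrites (orange,2)
// and the aggregate sees only the latest value per key.
public class KTableSumDemo {

    static int sumLatest(String[] keys, int[] values) {
        Map<String, Integer> latest = new LinkedHashMap<>();
        for (int i = 0; i < keys.length; i++) {
            latest.put(keys[i], values[i]);  // later record replaces earlier one
        }
        int sum = 0;
        for (int v : latest.values()) sum += v;
        return sum;
    }

    public static void main(String[] args) {
        System.out.println(sumLatest(new String[]{"orange", "orange"}, new int[]{2, 5}));  // 5
    }
}
```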

To learn which broker is the leader for which partition, a client sends a metadata request. The metadata request is sent to and served by:

Kafka uses which format to transport and store data on the brokers?

Which of the following Kafka Streams operators are stateful? (Choose 3)
