Manage Confluent Cloud tags #321

Merged 28 commits on Oct 9, 2023
32 changes: 17 additions & 15 deletions README.md
@@ -188,6 +188,7 @@ ns4kafka:
sasl.mechanism: "PLAIN"
security.protocol: "SASL_PLAINTEXT"
sasl.jaas.config: "org.apache.kafka.common.security.scram.ScramLoginModule required username=\"admin\" password=\"admin\";"
cluster.id: "lkc-abcde"
schema-registry:
url: "http://localhost:8081"
basicAuthUsername: "user"
@@ -202,21 +203,22 @@ ns4kafka:
The name for each managed cluster has to be unique. This is the name you have to set in the field **metadata.cluster**
of your namespace descriptors.

| Property                                 | Type    | Description                                                                                                                            |
|------------------------------------------|---------|----------------------------------------------------------------------------------------------------------------------------------------|
| manage-users                             | boolean | Whether the cluster manages users                                                                                                        |
| manage-acls                              | boolean | Whether the cluster manages access control entries                                                                                       |
| manage-topics                            | boolean | Whether the cluster manages topics                                                                                                       |
| manage-connectors                        | boolean | Whether the cluster manages connectors                                                                                                   |
| drop-unsync-acls                         | boolean | Whether Ns4Kafka should drop unsynchronized ACLs                                                                                         |
| provider                                 | string  | The kind of cluster. Either SELF_MANAGED or CONFLUENT_CLOUD                                                                              |
| config.bootstrap.servers                 | string  | The location of the cluster's bootstrap servers                                                                                          |
| config.cluster.id                        | string  | The cluster ID. Required to use [Confluent Cloud tags](https://docs.confluent.io/cloud/current/stream-governance/stream-catalog.html).   |
| schema-registry.url                      | string  | The location of the Schema Registry                                                                                                      |
| schema-registry.basicAuthUsername        | string  | Basic authentication username to the Schema Registry                                                                                     |
| schema-registry.basicAuthPassword        | string  | Basic authentication password to the Schema Registry                                                                                     |
| connects.connect-name.url                | string  | The location of the Kafka Connect cluster                                                                                                |
| connects.connect-name.basicAuthUsername  | string  | Basic authentication username to the Kafka Connect cluster                                                                               |
| connects.connect-name.basicAuthPassword  | string  | Basic authentication password to the Kafka Connect cluster                                                                               |

The configuration will depend on the authentication method selected for your broker, schema registry and Kafka Connect.
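Putting the table's properties together, a managed cluster entry could look like the following (an illustrative sketch: the cluster name, credentials, and `lkc-abcde` id are placeholders; adapt the keys to your deployment):

```yaml
ns4kafka:
  managed-clusters:
    clusterNameOne:
      manage-users: false
      manage-acls: false
      manage-topics: true
      manage-connectors: true
      provider: "CONFLUENT_CLOUD"
      config:
        bootstrap.servers: "localhost:9092"
        cluster.id: "lkc-abcde"   # required to use Confluent Cloud tags
      schema-registry:
        url: "http://localhost:8081"
        basicAuthUsername: "user"
        basicAuthPassword: "password"
```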

@@ -22,13 +22,13 @@
import jakarta.validation.Valid;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Collections;
import java.util.Date;
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
import org.apache.kafka.common.TopicPartition;

/**
@@ -108,6 +108,13 @@ public HttpResponse<Topic> apply(String namespace, @Valid @Body Topic topic,
validationErrors.addAll(topicService.validateTopicUpdate(ns, existingTopic.get(), topic));
}

List<String> existingTags = existingTopic
.map(oldTopic -> oldTopic.getSpec().getTags())
.orElse(Collections.emptyList());
if (topic.getSpec().getTags().stream().anyMatch(newTag -> !existingTags.contains(newTag))) {
validationErrors.addAll(topicService.validateTags(ns, topic));
}
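The check above only runs tag validation when the update introduces a tag that the existing topic does not already carry. A standalone sketch of that rule (class and method names are illustrative, not part of the PR):

```java
import java.util.List;

public class TagDiff {
    // Returns true when newTags contains at least one tag absent from
    // existingTags, i.e. when catalog validation is required.
    static boolean hasNewTags(List<String> existingTags, List<String> newTags) {
        return newTags.stream().anyMatch(tag -> !existingTags.contains(tag));
    }

    public static void main(String[] args) {
        List<String> existing = List.of("PII", "GDPR");
        System.out.println(hasNewTags(existing, List.of("PII")));            // false: no new tag
        System.out.println(hasNewTags(existing, List.of("PII", "FINANCE"))); // true: FINANCE is new
    }
}
```

Removing a tag therefore never triggers validation; only additions are checked against the catalog.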

if (!validationErrors.isEmpty()) {
throw new ResourceValidationException(validationErrors, topic.getKind(), topic.getMetadata().getName());
}
@@ -268,6 +275,6 @@ public List<DeleteRecordsResponse> deleteRecords(String namespace, String topic,
.offset(entry.getValue())
.build())
.build())
.collect(Collectors.toList());
.toList();
}
}
1 change: 0 additions & 1 deletion src/main/java/com/michelin/ns4kafka/models/ObjectMeta.java
@@ -32,5 +32,4 @@ public class ObjectMeta {
@EqualsAndHashCode.Exclude
@JsonFormat(shape = JsonFormat.Shape.STRING)
private Date creationTimestamp;

}
9 changes: 9 additions & 0 deletions src/main/java/com/michelin/ns4kafka/models/Topic.java
@@ -1,12 +1,16 @@
package com.michelin.ns4kafka.models;

import com.fasterxml.jackson.annotation.JsonFormat;
import com.fasterxml.jackson.annotation.JsonSetter;
import com.fasterxml.jackson.annotation.Nulls;
import io.micronaut.core.annotation.Introspected;
import io.swagger.v3.oas.annotations.media.Schema;
import jakarta.validation.Valid;
import jakarta.validation.constraints.NotNull;
import java.time.Instant;
import java.util.ArrayList;
import java.util.Date;
import java.util.List;
import java.util.Map;
import lombok.AllArgsConstructor;
import lombok.Builder;
@@ -32,6 +36,7 @@ public class Topic {
@NotNull
private ObjectMeta metadata;

@Valid
@NotNull
private TopicSpec spec;

@@ -52,11 +57,15 @@ public enum TopicPhase {
*/
@Data
@Builder
@Introspected
@NoArgsConstructor
@AllArgsConstructor
public static class TopicSpec {
private int replicationFactor;
private int partitions;
@Builder.Default
@JsonSetter(nulls = Nulls.AS_EMPTY)
private List<String> tags = new ArrayList<>();
private Map<String, String> configs;
}
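With the new `tags` field on `TopicSpec`, a topic descriptor can declare tags directly; a hypothetical example (topic name, tag, and config values are placeholders):

```yaml
apiVersion: v1
kind: Topic
metadata:
  name: myPrefix.myTopic
spec:
  replicationFactor: 3
  partitions: 3
  tags:
    - PII
  configs:
    cleanup.policy: delete
```

Thanks to `@JsonSetter(nulls = Nulls.AS_EMPTY)` and `@Builder.Default`, omitting `tags` or setting it to null deserializes to an empty list rather than a null field.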

55 changes: 54 additions & 1 deletion src/main/java/com/michelin/ns4kafka/services/TopicService.java
@@ -9,6 +9,8 @@
import com.michelin.ns4kafka.models.Topic;
import com.michelin.ns4kafka.properties.ManagedClusterProperties;
import com.michelin.ns4kafka.repositories.TopicRepository;
import com.michelin.ns4kafka.services.clients.schema.SchemaRegistryClient;
import com.michelin.ns4kafka.services.clients.schema.entities.TagInfo;
import com.michelin.ns4kafka.services.executors.TopicAsyncExecutor;
import io.micronaut.context.ApplicationContext;
import io.micronaut.inject.qualifiers.Qualifiers;
@@ -19,6 +21,7 @@
import java.util.List;
import java.util.Map;
import java.util.Optional;
import java.util.Set;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeoutException;
import java.util.stream.Collectors;
@@ -42,6 +45,9 @@ public class TopicService {
@Inject
List<ManagedClusterProperties> managedClusterProperties;

@Inject
SchemaRegistryClient schemaRegistryClient;

/**
* Find all topics.
*
@@ -195,7 +201,6 @@ public List<String> validateTopicUpdate(Namespace namespace, Topic existingTopic
+ "Please create a new topic with `compact` policy specified instead.",
newTopic.getSpec().getConfigs().get(CLEANUP_POLICY_CONFIG)));
}

return validationErrors;
}

@@ -319,4 +324,52 @@ public Map<TopicPartition, Long> deleteRecords(Topic topic, Map<TopicPartition,
throw new InterruptedException(e.getMessage());
}
}

/**
* Validate tags for topic.
*
* @param namespace The namespace
* @param topic The topic which contains tags
* @return A list of validation errors
*/
public List<String> validateTags(Namespace namespace, Topic topic) {
List<String> validationErrors = new ArrayList<>();

Optional<ManagedClusterProperties> topicCluster = managedClusterProperties
.stream()
.filter(cluster -> namespace.getMetadata().getCluster().equals(cluster.getName()))
.findFirst();

if (topicCluster.isPresent()
&& !topicCluster.get().getProvider().equals(ManagedClusterProperties.KafkaProvider.CONFLUENT_CLOUD)) {
validationErrors.add(String.format(
"Invalid value %s for tags: Tags are not currently supported.",
String.join(", ", topic.getSpec().getTags())));
return validationErrors;
}

Set<String> tagNames = schemaRegistryClient.getTags(namespace.getMetadata().getCluster())
.map(tags -> tags.stream().map(TagInfo::name).collect(Collectors.toSet())).block();

if (tagNames == null || tagNames.isEmpty()) {
validationErrors.add(String.format(
"Invalid value %s for tags: No tags allowed.",
String.join(", ", topic.getSpec().getTags())));
return validationErrors;
}

List<String> unavailableTagNames = topic.getSpec().getTags()
.stream()
.filter(tagName -> !tagNames.contains(tagName))
.toList();

if (!unavailableTagNames.isEmpty()) {
validationErrors.add(String.format(
"Invalid value %s for tags: Available tags are %s.",
String.join(", ", unavailableTagNames),
String.join(", ", tagNames)));
}

return validationErrors;
}
}
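`validateTags` has three outcomes: tags are rejected outside Confluent Cloud, rejected when the catalog defines no tags at all, and otherwise checked name-by-name against the defined tags. A simplified standalone model of that decision (the real method consults `ManagedClusterProperties` and the Schema Registry; names and messages here are illustrative):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Set;

public class TagValidationSketch {
    // Mirrors validateTags: reject tags outside Confluent Cloud, reject when
    // no tags are defined, otherwise report any tag absent from the catalog.
    static List<String> validate(boolean confluentCloud, Set<String> definedTags, List<String> requested) {
        List<String> errors = new ArrayList<>();
        if (!confluentCloud) {
            errors.add("Tags are not currently supported.");
            return errors;
        }
        if (definedTags == null || definedTags.isEmpty()) {
            errors.add("No tags allowed.");
            return errors;
        }
        List<String> unknown = requested.stream()
            .filter(tag -> !definedTags.contains(tag))
            .toList();
        if (!unknown.isEmpty()) {
            errors.add("Unknown tags: " + String.join(", ", unknown));
        }
        return errors;
    }
}
```

Note that the method short-circuits: a topic on a SELF_MANAGED cluster gets the "not supported" error even if its tags would otherwise be valid.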
src/main/java/com/michelin/ns4kafka/services/clients/schema/SchemaRegistryClient.java
@@ -6,9 +6,13 @@
import com.michelin.ns4kafka.services.clients.schema.entities.SchemaCompatibilityResponse;
import com.michelin.ns4kafka.services.clients.schema.entities.SchemaRequest;
import com.michelin.ns4kafka.services.clients.schema.entities.SchemaResponse;
import com.michelin.ns4kafka.services.clients.schema.entities.TagInfo;
import com.michelin.ns4kafka.services.clients.schema.entities.TagTopicInfo;
import com.michelin.ns4kafka.utils.exceptions.ResourceValidationException;
import io.micronaut.core.type.Argument;
import io.micronaut.core.util.StringUtils;
import io.micronaut.http.HttpRequest;
import io.micronaut.http.HttpResponse;
import io.micronaut.http.HttpStatus;
import io.micronaut.http.MutableHttpRequest;
import io.micronaut.http.client.HttpClient;
@@ -171,6 +175,73 @@ public Mono<SchemaCompatibilityResponse> deleteCurrentCompatibilityBySubject(Str
return Mono.from(httpClient.retrieve(request, SchemaCompatibilityResponse.class));
}

/**
* List tags.
*
* @param kafkaCluster The Kafka cluster
* @return A list of tags
*/
public Mono<List<TagInfo>> getTags(String kafkaCluster) {
ManagedClusterProperties.SchemaRegistryProperties config = getSchemaRegistry(kafkaCluster);
HttpRequest<?> request = HttpRequest
.GET(URI.create(StringUtils.prependUri(
config.getUrl(), "/catalog/v1/types/tagdefs")))
.basicAuth(config.getBasicAuthUsername(), config.getBasicAuthPassword());
return Mono.from(httpClient.retrieve(request, Argument.listOf(TagInfo.class)));
}

/**
* List tags of a topic.
*
* @param kafkaCluster The Kafka cluster
* @param entityName The topic's name for the API
* @return A list of tags
*/
public Mono<List<TagTopicInfo>> getTopicWithTags(String kafkaCluster, String entityName) {
ManagedClusterProperties.SchemaRegistryProperties config = getSchemaRegistry(kafkaCluster);
HttpRequest<?> request = HttpRequest
.GET(URI.create(StringUtils.prependUri(
config.getUrl(),
"/catalog/v1/entity/type/kafka_topic/name/" + entityName + "/tags")))
.basicAuth(config.getBasicAuthUsername(), config.getBasicAuthPassword());
return Mono.from(httpClient.retrieve(request, Argument.listOf(TagTopicInfo.class)));
}

/**
* Add tags to topics.
*
* @param kafkaCluster The Kafka cluster
* @param tagSpecs Tags to add
* @return Information about added tags
*/
public Mono<List<TagTopicInfo>> addTags(String kafkaCluster, List<TagTopicInfo> tagSpecs) {
ManagedClusterProperties.SchemaRegistryProperties config = getSchemaRegistry(kafkaCluster);
HttpRequest<?> request = HttpRequest
.POST(URI.create(StringUtils.prependUri(
config.getUrl(),
"/catalog/v1/entity/tags")), tagSpecs)
.basicAuth(config.getBasicAuthUsername(), config.getBasicAuthPassword());
return Mono.from(httpClient.retrieve(request, Argument.listOf(TagTopicInfo.class)));
}

/**
* Delete a tag from a topic.
*
* @param kafkaCluster The Kafka cluster
* @param entityName The topic's name
* @param tagName The tag to delete
* @return The HTTP response of the deletion
*/
public Mono<HttpResponse<Void>> deleteTag(String kafkaCluster, String entityName, String tagName) {
ManagedClusterProperties.SchemaRegistryProperties config = getSchemaRegistry(kafkaCluster);
HttpRequest<?> request = HttpRequest
.DELETE(URI.create(StringUtils.prependUri(
config.getUrl(),
"/catalog/v1/entity/type/kafka_topic/name/" + entityName + "/tags/" + tagName)))
.basicAuth(config.getBasicAuthUsername(), config.getBasicAuthPassword());
return Mono.from(httpClient.exchange(request, Void.class));
}
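The client methods above all target the Confluent Stream Catalog REST API under `/catalog/v1/`. A stdlib-only sketch of the paths they build (the base URL is a placeholder; the helper class is not part of the PR):

```java
import java.net.URI;

public class CatalogPaths {
    // URI for listing the tag definitions of a catalog.
    static URI tagDefs(String baseUrl) {
        return URI.create(baseUrl + "/catalog/v1/types/tagdefs");
    }

    // URI for listing or adding the tags attached to a kafka_topic entity.
    static URI topicTags(String baseUrl, String entityName) {
        return URI.create(baseUrl + "/catalog/v1/entity/type/kafka_topic/name/" + entityName + "/tags");
    }

    // URI for deleting one tag from a kafka_topic entity.
    static URI deleteTag(String baseUrl, String entityName, String tagName) {
        return URI.create(baseUrl + "/catalog/v1/entity/type/kafka_topic/name/" + entityName + "/tags/" + tagName);
    }
}
```

The catalog addresses topics by entity name rather than by plain topic name, which is presumably why the PR also introduces the `cluster.id` property for Confluent Cloud clusters.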

/**
* Get the schema registry of the given Kafka cluster.
*
src/main/java/com/michelin/ns4kafka/services/clients/schema/entities/TagInfo.java
@@ -0,0 +1,12 @@
package com.michelin.ns4kafka.services.clients.schema.entities;

import lombok.Builder;

/**
* Tag name.
*
* @param name Tag name
*/
@Builder
public record TagInfo(String name) {
}
src/main/java/com/michelin/ns4kafka/services/clients/schema/entities/TagTopicInfo.java
@@ -0,0 +1,20 @@
package com.michelin.ns4kafka.services.clients.schema.entities;

import lombok.Builder;

/**
* Information on tag.
*
* @param entityName The entity name
* @param entityType The entity type
* @param typeName The type name
* @param entityStatus The entity status
*/
@Builder
public record TagTopicInfo(String entityName, String entityType, String typeName, String entityStatus) {

@Override
public String toString() {
return entityName + "/" + typeName;
}
}
src/main/java/com/michelin/ns4kafka/services/executors/AccessControlEntryAsyncExecutor.java
@@ -15,7 +15,6 @@
import com.michelin.ns4kafka.services.ConnectorService;
import com.michelin.ns4kafka.services.StreamService;
import io.micronaut.context.annotation.EachBean;
import jakarta.inject.Inject;
import jakarta.inject.Singleton;
import java.util.ArrayList;
import java.util.List;
@@ -24,6 +23,7 @@
import java.util.concurrent.TimeoutException;
import java.util.function.Function;
import java.util.stream.Stream;
import lombok.AllArgsConstructor;
import lombok.extern.slf4j.Slf4j;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.common.acl.AclBinding;
@@ -40,26 +40,19 @@
@Slf4j
@EachBean(ManagedClusterProperties.class)
@Singleton
@AllArgsConstructor
public class AccessControlEntryAsyncExecutor {
private static final String USER_PRINCIPAL = "User:";

private final ManagedClusterProperties managedClusterProperties;

@Inject
AccessControlEntryService accessControlEntryService;
private AccessControlEntryService accessControlEntryService;

@Inject
StreamService streamService;
private StreamService streamService;

@Inject
ConnectorService connectorService;
private ConnectorService connectorService;

@Inject
NamespaceRepository namespaceRepository;

public AccessControlEntryAsyncExecutor(ManagedClusterProperties managedClusterProperties) {
this.managedClusterProperties = managedClusterProperties;
}
private NamespaceRepository namespaceRepository;

/**
* Run the ACLs synchronization.
Expand Down