Replication job to trigger setup and carbon flow for replica tables [WIP] (#276)

## Summary
Adds a new job workflow to run the replication setup process on Airflow. It applies to primary tables that have a ReplicationConfig defined.

Design decisions:
1. The Replication job does not use JobsClient to trigger a job. Instead, it uses the Airflow client to trigger and manage the lifecycle of an Airflow job.
2. Replication keeps its task definition on the Li-Openhouse side, since it needs to leverage the AirflowClient.
3. A table can have multiple ReplicationConfigs, so the ReplicationTask goes over the configs sequentially and triggers a setup job for each config (see the sketch after this list).
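
A minimal sketch of decision 3, assuming a hypothetical `ReplicationTask` class; the class and its `triggerSetupJob` helper are illustrative names, not APIs introduced by this PR:

```java
import com.linkedin.openhouse.jobs.util.ReplicationConfig;
import com.linkedin.openhouse.jobs.util.TableMetadata;
import java.util.List;

// Hypothetical sketch: ReplicationTask and triggerSetupJob are assumed names.
public class ReplicationTask {
  public void run(TableMetadata tableMetadata) {
    List<ReplicationConfig> configs = tableMetadata.getReplicationConfig();
    if (configs == null || configs.isEmpty()) {
      return; // nothing to set up for this table
    }
    for (ReplicationConfig config : configs) {
      // one setup job per destination cluster, triggered strictly in order
      triggerSetupJob(tableMetadata, config);
    }
  }

  private void triggerSetupJob(TableMetadata tableMetadata, ReplicationConfig config) {
    // would delegate to the future AirflowClient (design decision 1) to
    // trigger the Airflow job and manage its lifecycle
  }
}
```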

Future work:
1. Develop an AirflowClient that can trigger Airflow jobs and manage their state, and integrate it with the Replication job run (a rough interface sketch follows this list).
2. Develop a CarbonClient that can trigger Carbon jobs to set up scheduled replication flows.
3. Integrate the CarbonClient with the replica table setup job.
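
For context, future work item 1 might look roughly like the interface below. This is purely an assumption for illustration; no such API exists in this PR, and the method names, signatures, and states are invented:

```java
import java.util.Map;

// Hypothetical AirflowClient surface (future work item 1); all names here
// are assumptions, not part of this PR.
public interface AirflowClient {
  /** Triggers a DAG run with the given conf and returns the run id. */
  String triggerDagRun(String dagId, Map<String, String> conf);

  /** Returns the current state of a previously triggered DAG run. */
  DagRunState getDagRunState(String dagId, String dagRunId);

  enum DagRunState {
    QUEUED,
    RUNNING,
    SUCCESS,
    FAILED
  }
}
```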


## Changes

- [ ] Client-facing API Changes
- [ ] Internal API Changes
- [ ] Bug Fixes
- [x] New Features
- [ ] Performance Improvements
- [ ] Code Style
- [ ] Refactoring
- [ ] Documentation
- [ ] Tests
For all the boxes checked, please include additional details of the
changes made in this pull request.

## Testing Done
<!--- Check any relevant boxes with "x" -->

- [x] Manually Tested on local docker setup. Please include commands
ran, and their output.
- [ ] Added new tests for the changes made.
- [ ] Updated existing tests to reflect the changes made.
- [ ] No tests added or updated. Please explain why. If unsure, please
feel free to ask for help.
- [ ] Some other form of testing like staging or soak time in
production. Please explain.


Tested on local Docker setup:

Ran the new replication job on the local Docker setup, after adding a table with a replicationConfig.
Observed:
1. The new job gets picked up by the JobScheduler and follows the task flow.
2. Only primary tables with a defined replicationConfig are considered; all others are filtered out.

<img width="1900" alt="Screenshot 2025-01-08 at 10 17 11 PM"
src="https://github.com/user-attachments/assets/e46b210a-650c-43f5-9759-840257206595"
/>
<img width="1598" alt="Screenshot 2025-01-08 at 3 30 15 PM"
src="https://github.com/user-attachments/assets/8ae869c1-5ebc-4fd0-b95a-ae5af2c03b74"
/>

For all the boxes checked, include a detailed description of the testing
done for the changes made in this pull request.

## Additional Information

- [ ] Breaking Changes
- [ ] Deprecations
- [ ] Large PR broken into smaller PRs, and PR plan linked in the
description.

For all the boxes checked, include additional details of the changes
made in this pull request.
rohitkum2506 authored Jan 10, 2025
1 parent 3e8d387 commit 3c58268
Showing 6 changed files with 116 additions and 1 deletion.
@@ -4,6 +4,7 @@
import com.linkedin.openhouse.datalayout.strategy.DataLayoutStrategy;
import com.linkedin.openhouse.jobs.util.DatabaseTableFilter;
import com.linkedin.openhouse.jobs.util.DirectoryMetadata;
import com.linkedin.openhouse.jobs.util.ReplicationConfig;
import com.linkedin.openhouse.jobs.util.RetentionConfig;
import com.linkedin.openhouse.jobs.util.RetryUtil;
import com.linkedin.openhouse.jobs.util.TableDataLayoutMetadata;
@@ -15,6 +16,7 @@
import com.linkedin.openhouse.tables.client.model.GetDatabaseResponseBody;
import com.linkedin.openhouse.tables.client.model.GetTableResponseBody;
import com.linkedin.openhouse.tables.client.model.Policies;
import com.linkedin.openhouse.tables.client.model.Replication;
import java.time.Duration;
import java.util.AbstractMap;
import java.util.ArrayList;
@@ -56,6 +58,11 @@ public Optional<RetentionConfig> getTableRetention(TableMetadata tableMetadata)
return getTableRetention(response);
}

public Optional<List<ReplicationConfig>> getTableReplication(TableMetadata tableMetadata) {
GetTableResponseBody response = getTable(tableMetadata);
return getTableReplication(response);
}

private Optional<RetentionConfig> getTableRetention(GetTableResponseBody response) {
// timePartitionSpec or retention.ColumnPattern should be present to run Retention job on a
// table.
Expand Down Expand Up @@ -86,6 +93,31 @@ private Optional<RetentionConfig> getTableRetention(GetTableResponseBody respons
.build());
}

private Optional<List<ReplicationConfig>> getTableReplication(GetTableResponseBody response) {
// At least one replication config must be present
if (response == null
|| response.getPolicies() == null
|| response.getPolicies().getReplication() == null
|| response.getPolicies().getReplication().getConfig() == null
|| response.getPolicies().getReplication().getConfig().isEmpty()) {
return Optional.empty();
}
List<ReplicationConfig> replicationConfigList = new ArrayList<>();
Replication replication = response.getPolicies().getReplication();
List<com.linkedin.openhouse.tables.client.model.ReplicationConfig> replicationConfig =
replication.getConfig();

replicationConfig.forEach(
rc ->
replicationConfigList.add(
ReplicationConfig.builder()
.cluster(rc.getDestination())
.tableOwner(response.getTableCreator())
.schedule(rc.getCronSchedule())
.build()));
// since replicationConfigList is initialized, it cannot be null.
return Optional.of(replicationConfigList);
}

protected GetTableResponseBody getTable(TableMetadata tableMetadata) {
return getTable(tableMetadata.getDbName(), tableMetadata.getTableName());
}
@@ -281,6 +313,7 @@ protected Optional<TableMetadata> mapTableResponseToTableMetadata(
.isTimePartitioned(tableResponseBody.getTimePartitioning() != null)
.isClustered(tableResponseBody.getClustering() != null)
.retentionConfig(getTableRetention(tableResponseBody).orElse(null))
.replicationConfig(getTableReplication(tableResponseBody).orElse(null))
.jobExecutionProperties(getJobExecutionProperties(tableResponseBody));
builder.creationTimeMs(Objects.requireNonNull(tableResponseBody.getCreationTime()));
return Optional.of(builder.build());
@@ -48,6 +48,19 @@ private List<OperationTask<?>> prepareTableOperationTaskList(JobConf.JobTypeEnum
return processMetadataList(tableMetadataList, jobType);
}

private List<OperationTask<?>> prepareReplicationOperationTaskList(JobConf.JobTypeEnum jobType) {
List<TableMetadata> replicationSetupTableMetadataList = tablesClient.getTableMetadataList();
// filters tables which are primary and have a replication config defined
replicationSetupTableMetadataList =
replicationSetupTableMetadataList.stream()
.filter(m -> m.isPrimary() && (m.getReplicationConfig() != null))
.collect(Collectors.toList());
log.info(
"Fetched metadata for {} tables for replication setup task",
replicationSetupTableMetadataList.size());
return processMetadataList(replicationSetupTableMetadataList, jobType);
}

private List<OperationTask<?>> prepareTableDirectoryOperationTaskList(
JobConf.JobTypeEnum jobType) {
List<DirectoryMetadata> directoryMetadataList = tablesClient.getOrphanTableDirectories();
@@ -152,6 +165,8 @@ public List<OperationTask<?>> buildOperationTaskList(
case STAGED_FILES_DELETION:
case DATA_LAYOUT_STRATEGY_GENERATION:
return prepareTableOperationTaskList(jobType);
case REPLICATION:
return prepareReplicationOperationTaskList(jobType);
case DATA_LAYOUT_STRATEGY_EXECUTION:
return prepareDataLayoutOperationTaskList(jobType, properties, meter);
case ORPHAN_DIRECTORY_DELETION:
@@ -0,0 +1,17 @@
package com.linkedin.openhouse.jobs.util;

import lombok.Builder;
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.ToString;

/** Table replication config class. This is the app-side representation of /tables policies->replication */
@Builder
@Getter
@EqualsAndHashCode
@ToString
public class ReplicationConfig {
private final String schedule;
private final String tableOwner;
private final String cluster;
}
@@ -1,6 +1,7 @@
package com.linkedin.openhouse.jobs.util;

import java.util.HashMap;
import java.util.List;
import java.util.Map;
import javax.annotation.Nullable;
import lombok.Builder;
@@ -25,6 +26,7 @@ public class TableMetadata extends Metadata {
@Builder.Default protected @NonNull Map<String, String> jobExecutionProperties = new HashMap<>();
protected @Nullable RetentionConfig retentionConfig;
protected @Nullable HistoryConfig historyConfig;
protected @Nullable List<ReplicationConfig> replicationConfig;

public String fqtn() {
return String.format("%s.%s", dbName, tableName);
@@ -18,6 +18,8 @@
import com.linkedin.openhouse.tables.client.model.GetDatabaseResponseBody;
import com.linkedin.openhouse.tables.client.model.GetTableResponseBody;
import com.linkedin.openhouse.tables.client.model.Policies;
import com.linkedin.openhouse.tables.client.model.Replication;
import com.linkedin.openhouse.tables.client.model.ReplicationConfig;
import com.linkedin.openhouse.tables.client.model.Retention;
import com.linkedin.openhouse.tables.client.model.RetentionColumnPattern;
import com.linkedin.openhouse.tables.client.model.TimePartitionSpec;
@@ -415,6 +417,33 @@ void testNonPartitionedTableWithPatternGetRetentionConfig() {
Mockito.verify(apiMock, Mockito.times(1)).getTableV1(testDbName, testTableNamePartitioned);
}

@Test
void testPrimaryTableWithReplicationConfig() {
GetTableResponseBody primaryTableWithReplicationConfigResponseBodyMock =
createPrimaryTableWithReplicationPolicyResponseBodyMock(
testDbName, testTableName, "schedule", "interval", "cluster");
Mono<GetTableResponseBody> responseMock = (Mono<GetTableResponseBody>) Mockito.mock(Mono.class);
Mockito.when(responseMock.block(any(Duration.class)))
.thenReturn(primaryTableWithReplicationConfigResponseBodyMock);
Mockito.when(apiMock.getTableV1(testDbName, testTableName)).thenReturn(responseMock);
Optional<List<com.linkedin.openhouse.jobs.util.ReplicationConfig>> result =
client.getTableReplication(
TableMetadata.builder().dbName(testDbName).tableName(testTableName).build());
Assertions.assertTrue(
result.isPresent(), "Retention config must be present for a test partitioned table");
List<com.linkedin.openhouse.jobs.util.ReplicationConfig> replicationConfigs = new ArrayList<>();
com.linkedin.openhouse.jobs.util.ReplicationConfig replicationConfig =
com.linkedin.openhouse.jobs.util.ReplicationConfig.builder()
.schedule("schedule")
.cluster("cluster")
.tableOwner("")
.build();
replicationConfigs.add(replicationConfig);
Assertions.assertEquals(replicationConfigs, result.orElse(null));
Mockito.verify(responseMock, Mockito.times(1)).block(any(Duration.class));
Mockito.verify(apiMock, Mockito.times(1)).getTableV1(testDbName, testTableName);
}

@Test
void getDatabases() {
GetAllDatabasesResponseBody allDatabasesResponseBodyMock =
@@ -535,6 +564,24 @@ private GetTableResponseBody createNonPartitionedTableWithPatternResponseBodyMoc
return setUpResponseBodyMock(dbName, tableName, null, policies);
}

private GetTableResponseBody createPrimaryTableWithReplicationPolicyResponseBodyMock(
String dbName, String tableName, String schedule, String interval, String cluster) {
Policies policies = Mockito.mock(Policies.class);
Replication replication = Mockito.mock(Replication.class);
List<ReplicationConfig> replicationConfigs = new ArrayList<>();
ReplicationConfig replicationConfig = Mockito.mock(ReplicationConfig.class);
replicationConfigs.add(replicationConfig);
// Stub the mocks; calling real setters on Mockito mocks would be a no-op.
Mockito.when(replication.getConfig()).thenReturn(replicationConfigs);
Mockito.when(policies.getReplication()).thenReturn(replication);
Mockito.when(replicationConfig.getCronSchedule()).thenReturn(schedule);
Mockito.when(replicationConfig.getDestination()).thenReturn(cluster);
Mockito.when(replicationConfig.getInterval()).thenReturn(interval);
return setUpResponseBodyMock(dbName, tableName, null, policies);
}

private GetTableResponseBody createPartitionedTableNullPoliciesResponseBodyMock(
String dbName, String tableName, String partitionColummName) {
TimePartitionSpec partitionSpec = Mockito.mock(TimePartitionSpec.class);
@@ -33,6 +33,8 @@ public enum JobType {
ORPHAN_DIRECTORY_DELETION,
TABLE_STATS_COLLECTION,
DATA_LAYOUT_STRATEGY_GENERATION,
DATA_LAYOUT_STRATEGY_EXECUTION,
REPLICATION
}
}
