
Configuring the Spark cluster and the migration is manual and tedious #193

Open
@julienrf

Description

As mentioned in #191 and #192, setting up the Spark cluster and configuring the migrator to correctly utilize the Spark resources is manual and tedious.

How much of this could be automated? Ideally, users would supply only the table size and the throughput supported by the source and target tables, and they would get a Spark cluster sized to transfer the data as fast as the source and target databases allow.
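
For illustration, a sizing computation along these lines could look like the sketch below. The per-worker throughput, core count, and memory-per-core figures are hypothetical placeholders, not values taken from the migrator's documentation:

```python
import math

def size_spark_cluster(
    table_size_gb: float,
    source_read_throughput_mb_s: float,
    target_write_throughput_mb_s: float,
    per_worker_throughput_mb_s: float = 100.0,  # hypothetical per-worker capacity
    cores_per_worker: int = 8,                  # hypothetical worker shape
    memory_per_core_gb: int = 2,
) -> dict:
    """Derive a Spark cluster size from the table size and the throughput
    the source and target databases can sustain. Purely illustrative."""
    # The transfer rate is bounded by the slower of the two databases.
    bottleneck_mb_s = min(source_read_throughput_mb_s, target_write_throughput_mb_s)
    # Provision just enough workers to saturate that bottleneck.
    worker_count = max(1, math.ceil(bottleneck_mb_s / per_worker_throughput_mb_s))
    estimated_duration_s = table_size_gb * 1024 / bottleneck_mb_s
    return {
        "spark_worker_count": worker_count,
        "spark_executor_cores": cores_per_worker,
        "spark_executor_memory_gb": cores_per_worker * memory_per_core_gb,
        "estimated_duration_hours": round(estimated_duration_s / 3600, 1),
    }

# Example: a 500 GB table, 400 MB/s sustained reads, 300 MB/s sustained writes.
print(size_spark_cluster(500, 400, 300))
```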

Some ideas to explore:

  1. Accept these inputs as parameters of the Ansible playbook and automatically configure the corresponding Spark resources.
  2. Publish a tool (e.g. using Pulumi or Terraform) that automatically provisions a cloud-based cluster (e.g. using AWS EC2), ready to run the migrator (a minimal sketch follows below).
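
As a starting point for idea 2, here is a minimal Pulumi (Python) sketch that derives the worker count from the same hypothetical inputs and provisions EC2 instances accordingly. The configuration keys, AMI ID, and instance type are assumptions made for illustration; no such tool exists in this repository today:

```python
import math
import pulumi
import pulumi_aws as aws

config = pulumi.Config()
# The configuration keys below are illustrative; the migrator defines none of them today.
table_size_gb = config.require_float("tableSizeGb")
bottleneck_mb_s = min(config.require_float("sourceThroughputMbS"),
                      config.require_float("targetThroughputMbS"))
per_worker_mb_s = 100.0  # hypothetical per-worker transfer capacity
worker_count = max(1, math.ceil(bottleneck_mb_s / per_worker_mb_s))

ami_id = "ami-xxxxxxxxxxxxxxxxx"   # placeholder AMI with Spark pre-installed
instance_type = "m5.2xlarge"       # placeholder instance type

# One master node plus as many workers as the throughput bottleneck justifies.
master = aws.ec2.Instance(
    "spark-master", ami=ami_id, instance_type=instance_type,
    tags={"Role": "spark-master"},
)
workers = [
    aws.ec2.Instance(
        f"spark-worker-{i}", ami=ami_id, instance_type=instance_type,
        tags={"Role": "spark-worker"},
    )
    for i in range(worker_count)
]

pulumi.export("spark_master_private_ip", master.private_ip)
pulumi.export("spark_worker_count", worker_count)
pulumi.export("estimated_duration_hours",
              round(table_size_gb * 1024 / bottleneck_mb_s / 3600, 1))
```

Idea 1 could reuse the same computation by templating the resulting values (worker count, executor cores and memory) into the Ansible playbook's variables.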

Metadata

Labels

enhancement (New feature or request)
