# DeltaLake

In this section, we provide guides and references to use the Delta Lake connector.

## Requirements

### If extracting the metadata from a Metastore

The Delta Lake connector internally spins up a Spark Application (`pyspark` 3.X and `delta-lake` 2.0.0) to connect to your Hive Metastore and extract metadata from there.

You will need to make sure that the ingestion process can properly access the Metastore service or the database, and that your Metastore version is compatible with Spark 3.X.

You can find further information on the Delta Lake connector in the [docs](https://docs.open-metadata.org/connectors/database/deltalake).
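
For illustration, here is a minimal sketch of the kind of Spark session the connector builds internally, assuming `pyspark` and the Delta Lake jars are available and a Metastore is reachable at the placeholder endpoint `localhost:9083`:

```python
from pyspark.sql import SparkSession

# Illustrative only: the connector builds a comparable session internally.
# `localhost:9083` is a placeholder Metastore endpoint, and the Delta Lake
# jars are assumed to be on the Spark classpath.
spark = (
    SparkSession.builder.appName("OpenMetadata-DeltaLake-Ingestion")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "org.apache.spark.sql.delta.catalog.DeltaCatalog")
    .config("hive.metastore.uris", "thrift://localhost:9083")
    .enableHiveSupport()
    .getOrCreate()
)

# Walk the databases and tables registered in the Metastore.
for db in spark.catalog.listDatabases():
    tables = spark.catalog.listTables(db.name)
    print(db.name, [table.name for table in tables])
```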

### If extracting the metadata from the Storage

#### S3

To execute metadata extraction, the AWS account should have enough access to fetch the required data. The **Bucket Policy** in AWS requires at least these permissions:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject",
        "s3:ListBucket"
      ],
      "Resource": [
        "arn:aws:s3:::bucketName",
        "arn:aws:s3:::bucketName/*"
      ]
    }
  ]
}
```


## Connection Details

$$section
### Metastore Connection $(id="configSource")

#### From Metastore

You can choose between three different ways of connecting to the Hive Metastore:
1. **Hive Metastore Service**: Thrift connection to the Metastore service. E.g., `localhost:9083`.
2. **Hive Metastore Database**: JDBC connection to the Metastore database. E.g., `jdbc:mysql://localhost:3306/demo_hive`.
3. **Hive Metastore File Path**: A local `metastore.db` file, useful when testing.

#### From Storage

- **S3Config**: Amazon S3 offers a highly available, durable, and secure object storage service with advanced management features, powerful analytics capabilities, and seamless integration with other AWS services.

$$

$$section
### Hive Metastore Service $(id="metastoreHostPort")

Here you need to provide the Thrift connection to the Metastore service. E.g., `localhost:9083`.

This property will be used in the Spark Configuration under `hive.metastore.uris`. We already add the `thrift://` prefix, so you only need to provide the host and port.
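
For example, entering `localhost:9083` in this field results in a Spark property along these lines (a sketch of the mapping, not the connector's literal code):

```python
# Value entered in this field: host and port only, no scheme.
metastore_host_port = "localhost:9083"

# The connector prepends the `thrift://` scheme before handing it to Spark.
spark_conf = {"hive.metastore.uris": f"thrift://{metastore_host_port}"}
```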

$$

$$section
### Hive Metastore Database Connection $(id="metastoreDbConnection")

In this configuration, we point directly to the Hive Metastore database.

#### Hive Metastore Database $(id="metastoreDb")

JDBC connection to the metastore database.

It should be a properly formatted database URL, which will be used in the Spark Configuration under `spark.hadoop.javax.jdo.option.ConnectionURL`.

#### Connection UserName $(id="username")

Username to use against the metastore database. The value will be used in the Spark Configuration under `spark.hadoop.javax.jdo.option.ConnectionUserName`.

#### Connection Password $(id="password")

Password to use against the metastore database. The value will be used in the Spark Configuration under `spark.hadoop.javax.jdo.option.ConnectionPassword`.

#### Connection Driver Name $(id="driverName")

Driver class name for the JDBC metastore. The value will be used in the Spark Configuration under `spark.hadoop.javax.jdo.option.ConnectionDriverName`, e.g., `org.mariadb.jdbc.Driver`.

You will need to provide the driver to the ingestion image and pass the class path as explained below.

#### JDBC Driver Class Path $(id="jdbcDriverClassPath")

Class path to the JDBC driver required for the JDBC connection. The value will be used in the Spark Configuration under `spark.driver.extraClassPath`.
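
Putting the fields above together, the resulting Spark properties look roughly like this (all values, including the driver jar path, are placeholders):

```python
# All values are placeholders: replace them with your own Metastore details.
spark_conf = {
    "spark.hadoop.javax.jdo.option.ConnectionURL": "jdbc:mysql://localhost:3306/demo_hive",
    "spark.hadoop.javax.jdo.option.ConnectionUserName": "hive",
    "spark.hadoop.javax.jdo.option.ConnectionPassword": "hive_password",
    "spark.hadoop.javax.jdo.option.ConnectionDriverName": "org.mariadb.jdbc.Driver",
    # Path inside the ingestion image where the JDBC driver jar was copied.
    "spark.driver.extraClassPath": "/opt/jars/mariadb-java-client.jar",
}
```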

$$

$$section
### Hive Metastore File Path $(id="metastoreConnection")

Local path to the file with the metastore data, e.g., `/tmp/metastore.db`. This is only applicable for local testing. Note that the path needs to be on the same host where the ingestion process runs.
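
As an illustration only, and assuming the local file is a standard Derby metastore (the exact property mapping is not specified here), such a file can be wired into Spark via the same `javax.jdo` connection URL:

```python
# Assumption for illustration: a local `metastore.db` is a Derby database that
# can be reached through the standard javax.jdo connection URL property.
metastore_file_path = "/tmp/metastore.db"
spark_conf = {
    "spark.hadoop.javax.jdo.option.ConnectionURL": (
        f"jdbc:derby:;databaseName={metastore_file_path};create=true"
    )
}
```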

$$

$$section
### App Name $(id="appName")

The name to give to the Spark Application that will run the ingestion.

$$

## AWS S3

$$section
### AWS Access Key ID $(id="awsAccessKeyId")

When you interact with AWS, you specify your AWS security credentials to verify who you are and whether you have permission to access the resources that you are requesting. AWS uses the security credentials to authenticate and authorize your requests ([docs](https://docs.aws.amazon.com/IAM/latest/UserGuide/security-creds.html)).

Access keys consist of two parts:
1. An access key ID (for example, `AKIAIOSFODNN7EXAMPLE`),
2. And a secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).

You must use both the access key ID and secret access key together to authenticate your requests.

You can find further information on how to manage your access keys [here](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_access-keys.html).
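
For reference, a key pair is used programmatically like this with boto3 (the values below are the AWS documentation's example credentials; the region is a placeholder):

```python
import boto3

# Example credentials taken from the AWS documentation; the region is a placeholder.
session = boto3.session.Session(
    aws_access_key_id="AKIAIOSFODNN7EXAMPLE",
    aws_secret_access_key="wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY",
    region_name="us-east-2",
)
s3 = session.client("s3")
```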
$$

$$section
### AWS Secret Access Key $(id="awsSecretAccessKey")

Secret access key (for example, `wJalrXUtnFEMI/K7MDENG/bPxRfiCYEXAMPLEKEY`).
$$

$$section
### AWS Region $(id="awsRegion")

Each AWS Region is a separate geographic area in which AWS clusters data centers ([docs](https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Concepts.RegionsAndAvailabilityZones.html)).

As AWS can have instances in multiple regions, we need to know the region the service you want to reach belongs to.

Note that the AWS Region is the only required parameter when configuring a connection. When connecting to the services programmatically, there are different ways in which we can extract and use the rest of AWS configurations. You can find further information about configuring your credentials [here](https://boto3.amazonaws.com/v1/documentation/api/latest/guide/credentials.html#configuring-credentials).
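
For example, with the region as the only explicit parameter, boto3 resolves the remaining credentials through its default chain (a sketch; `us-east-2` is a placeholder):

```python
import boto3

# Only the region is passed explicitly; the credentials are resolved through
# boto3's default chain (environment variables, shared credentials file,
# instance profile, ...).
s3 = boto3.client("s3", region_name="us-east-2")
```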
$$

$$section
### AWS Session Token $(id="awsSessionToken")

If you are using temporary credentials to access your services, you will need to provide the AWS Access Key ID and AWS Secret Access Key. These temporary credentials also include an AWS Session Token, which you should set here.

You can find more information on [Using temporary credentials with AWS resources](https://docs.aws.amazon.com/IAM/latest/UserGuide/id_credentials_temp_use-resources.html).
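
A sketch of temporary credentials in boto3, where the session token accompanies the key pair (all values are placeholders):

```python
import boto3

# Temporary credentials: the three values are issued together (placeholders here).
session = boto3.session.Session(
    aws_access_key_id="<temporary-access-key-id>",
    aws_secret_access_key="<temporary-secret-access-key>",
    aws_session_token="<session-token>",
    region_name="us-east-2",
)
```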
$$

$$section
### Endpoint URL $(id="endPointURL")

To connect programmatically to an AWS service, you use an endpoint. An *endpoint* is the URL of the entry point for an AWS web service. The AWS SDKs and the AWS Command Line Interface (AWS CLI) automatically use the default endpoint for each service in an AWS Region. But you can specify an alternate endpoint for your API requests.

Find more information on [AWS service endpoints](https://docs.aws.amazon.com/general/latest/gr/rande.html).
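
A sketch of overriding the default endpoint, for example to reach an S3-compatible store (the URL is a placeholder):

```python
import boto3

# Point the client at a non-default, S3-compatible endpoint (placeholder URL).
s3 = boto3.client(
    "s3",
    region_name="us-east-2",
    endpoint_url="https://s3.example.internal:9000",
)
```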
$$

$$section
### Profile Name $(id="profileName")

A named profile is a collection of settings and credentials that you can apply to an AWS CLI command. When you specify a profile to run a command, the settings and credentials are used to run that command. Multiple named profiles can be stored in the config and credentials files.

You can fill in this field if you'd like to use a profile other than `default`.

Find here more information about [Named profiles for the AWS CLI](https://docs.aws.amazon.com/cli/latest/userguide/cli-configure-profiles.html).
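
A sketch of selecting a named profile with boto3 (`ingestion` is a hypothetical profile name):

```python
import boto3

# Use the settings and credentials stored under the `ingestion` profile
# in ~/.aws/config and ~/.aws/credentials.
session = boto3.session.Session(profile_name="ingestion")
s3 = session.client("s3")
```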
$$

$$section
### Assume Role ARN $(id="assumeRoleArn")

Typically, you use `AssumeRole` within your account or for cross-account access. In this field you'll set the `ARN` (Amazon Resource Name) of the role in the other account.

A user who wants to access a role in a different account must also have permissions that are delegated from the account administrator. The administrator must attach a policy that allows the user to call `AssumeRole` for the `ARN` of the role in the other account.

This is a required field if you'd like to `AssumeRole`.

Find more information on [AssumeRole](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html).
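
A sketch of how the role `ARN` is used, together with the session name and source identity fields described in the next two sections (the account ID and role name are placeholders):

```python
import boto3

sts = boto3.client("sts")

# Placeholder ARN; the role in the other account must trust the calling principal.
response = sts.assume_role(
    RoleArn="arn:aws:iam::123456789012:role/CrossAccountReadRole",
    RoleSessionName="OpenMetadataSession",  # see the next section
    SourceIdentity="ingestion-bot",         # optional, see Source Identity below
)

# Temporary access key ID, secret access key, and session token.
credentials = response["Credentials"]
```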
$$

$$section
### Assume Role Session Name $(id="assumeRoleSessionName")

An identifier for the assumed role session. Use the role session name to uniquely identify a session when the same role is assumed by different principals or for different reasons.

By default, we'll use the name `OpenMetadataSession`.

Find more information about the [Role Session Name](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=An%20identifier%20for%20the%20assumed%20role%20session.).
$$

$$section
### Assume Role Source Identity $(id="assumeRoleSourceIdentity")

The source identity specified by the principal that is calling the `AssumeRole` operation. You can use source identity information in AWS CloudTrail logs to determine who took actions with a role.

Find more information about [Source Identity](https://docs.aws.amazon.com/STS/latest/APIReference/API_AssumeRole.html#:~:text=Required%3A%20No-,SourceIdentity,-The%20source%20identity).
$$

$$section
### Bucket Name $(id="bucketName")

A bucket name in your data lake is a unique identifier used to organize and store data objects.

It's similar to a folder name, but it's used for object storage rather than file storage.
$$

$$section
### Prefix $(id="prefix")

The prefix of a data source refers to the first part of the data path that identifies the source or origin of the data.

It's used to organize and categorize data within the container, and can help users easily locate and access the data they need.
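
A sketch of how a prefix narrows an S3 listing down to one part of the bucket, using boto3 (bucket and prefix values are placeholders):

```python
import boto3

s3 = boto3.client("s3")

# List only the objects whose keys start with the given prefix (placeholder values).
paginator = s3.get_paginator("list_objects_v2")
for page in paginator.paginate(Bucket="bucketName", Prefix="delta/sales/"):
    for obj in page.get("Contents", []):
        print(obj["Key"])
```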
$$

$$section
### Database Name $(id="databaseName")

In OpenMetadata, the Database Service hierarchy works as follows:

```
Database Service > Database > Schema > Table
```

In the case of Delta Lake, we won't have a Database as such. If you'd like to see your data in a database named something other than `default`, you can specify the name in this field.

$$

$$section
### Connection Arguments $(id="connectionArguments")

Here you can specify any extra key-value pairs for the Spark Configuration.
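
A sketch of how such extra pairs end up as additional Spark properties (the keys and values below are purely illustrative):

```python
from pyspark.sql import SparkSession

# Purely illustrative extra pairs; any valid Spark property can go here.
connection_arguments = {
    "spark.sql.shuffle.partitions": "8",
    "spark.driver.memory": "2g",
}

builder = SparkSession.builder.appName("OpenMetadata-DeltaLake-Ingestion")
for key, value in connection_arguments.items():
    builder = builder.config(key, value)
spark = builder.getOrCreate()
```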

$$




