
Configure cross-account access to Amazon SageMaker Lakehouse multi-catalog tables using AWS Glue 5.0 Spark


An IAM role, Glue-execution-role, in the consumer account, with the following policies:

  1. AWS managed policies AWSGlueServiceRole and AmazonRedshiftDataFullAccess.
  2. Create a new inline policy with the following permissions and attach it:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Sid": "LFandRSserverlessAccess",
                "Effect": "Allow",
                "Action": [
                    "lakeformation:GetDataAccess",
                    "redshift-serverless:GetCredentials"
                ],
                "Resource": "*"
            },
            {
                "Effect": "Allow",
                "Action": "iam:PassRole",
                "Resource": "*",
                "Condition": {
                    "StringEquals": {
                        "iam:PassedToService": "glue.amazonaws.com"
                    }
                }
            }
        ]
    }

  3. Add the following trust policy to Glue-execution-role, allowing AWS Glue to assume this role:
    {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Principal": {
                    "Service": [
                        "glue.amazonaws.com"
                    ]
                },
                "Action": "sts:AssumeRole"
            }
        ]
    }
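If you prefer to script the role setup, the two policy documents above can be assembled in Python and passed to the IAM `create-role` and `put-role-policy` calls. The sketch below builds exactly the documents shown; the boto3 calls are commented out so the snippet runs without AWS credentials.

```python
import json

# Trust policy allowing AWS Glue to assume Glue-execution-role.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [{
        "Effect": "Allow",
        "Principal": {"Service": ["glue.amazonaws.com"]},
        "Action": "sts:AssumeRole",
    }],
}

# Inline policy granting Lake Formation data access, Redshift Serverless
# credentials, and iam:PassRole scoped to the Glue service.
inline_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "LFandRSserverlessAccess",
            "Effect": "Allow",
            "Action": [
                "lakeformation:GetDataAccess",
                "redshift-serverless:GetCredentials",
            ],
            "Resource": "*",
        },
        {
            "Effect": "Allow",
            "Action": "iam:PassRole",
            "Resource": "*",
            "Condition": {
                "StringEquals": {"iam:PassedToService": "glue.amazonaws.com"}
            },
        },
    ],
}

# import boto3
# iam = boto3.client("iam")
# iam.create_role(RoleName="Glue-execution-role",
#                 AssumeRolePolicyDocument=json.dumps(trust_policy))
# iam.put_role_policy(RoleName="Glue-execution-role",
#                     PolicyName="LFandRSserverlessAccess",
#                     PolicyDocument=json.dumps(inline_policy))
```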

Steps for producer account setup

For the producer account setup, you can either use your IAM administrator role added as a Lake Formation administrator or use a Lake Formation administrator role with permissions added as discussed in the prerequisites. For illustration purposes, we use the IAM admin role Admin added as a Lake Formation administrator.


Configure your catalog

Complete the following steps to set up your catalog:

  1. Log in to the AWS Management Console as Admin.
  2. On the Amazon Redshift console, follow the instructions in Registering Amazon Redshift clusters and namespaces to the AWS Glue Data Catalog.
  3. After the registration is initiated, you will see the invite from Amazon Redshift on the Lake Formation console.
  4. Select the pending catalog invitation and choose Approve and create catalog.


  5. On the Set catalog details page, configure your catalog:
    1. For Name, enter a name (for this post, redshiftserverless1-uswest2).
    2. Select Access this catalog from Apache Iceberg compatible engines.
    3. Choose the IAM role you created for the data transfer.
    4. Choose Next.


  6. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add.


  7. Verify the granted permission on the next page and choose Next.
  8. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

  9. Choose Catalogs in the navigation pane and verify that the redshiftserverless1-uswest2 catalog is created.
  10. Explore the catalog detail page to verify the ordersdb.public database.
  11. On the database View dropdown menu, view the table and verify that the orderstbl table shows up.

As the Admin role, you can also query the orderstbl in Amazon Athena and confirm the data is accessible.


Grant permissions on the tables from the producer account to the consumer account

In this step, we share the Amazon Redshift federated catalog database redshiftserverless1-uswest2:ordersdb.public and table orderstbl, as well as the Amazon S3 based Iceberg table returnstbl_iceberg and its database customerdb from the default catalog, with the consumer account. We can't share the entire catalog to external accounts as a catalog-level permission; we share just the database and table.

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select External accounts.
  4. Provide the consumer account ID.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose the account ID that represents the default catalog.
  7. For Databases, choose customerdb.
  8. Under Database permissions, select Describe under Database permissions and Grantable permissions.
  9. Choose Grant.
  10. Repeat these steps and grant table-level Select and Describe permissions on returnstbl_iceberg.
  11. Repeat these steps again to grant database- and table-level permissions for the orderstbl table of the federated catalog database redshiftserverless1-uswest2/ordersdb.
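The console grant above corresponds to the Lake Formation GrantPermissions API. The following sketch builds the request for the database-level Describe grant on customerdb; the account IDs are placeholders and the boto3 call is commented out.

```python
PRODUCER_ACCOUNT_ID = "111122223333"  # placeholder
CONSUMER_ACCOUNT_ID = "444455556666"  # placeholder

# Database-level grant of Describe (with grant option) on customerdb in
# the producer's default catalog, to the external consumer account.
grant_request = {
    "Principal": {"DataLakePrincipalIdentifier": CONSUMER_ACCOUNT_ID},
    "Resource": {
        "Database": {
            "CatalogId": PRODUCER_ACCOUNT_ID,
            "Name": "customerdb",
        }
    },
    "Permissions": ["DESCRIBE"],
    "PermissionsWithGrantOption": ["DESCRIBE"],
}

# import boto3
# boto3.client("lakeformation").grant_permissions(**grant_request)
```

The table-level grants on returnstbl_iceberg and orderstbl follow the same shape, with a `Table` resource and `SELECT`/`DESCRIBE` permissions.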

The following screenshots show the configuration for database-level permissions.

The following screenshots show the configuration for table-level permissions.

  12. Choose Data permissions in the navigation pane and verify that the consumer account has been granted database- and table-level permissions for both orderstbl from the federated catalog and returnstbl_iceberg from the default catalog.

Register the Amazon S3 location of returnstbl_iceberg with Lake Formation

In this step, we register the Amazon S3 based Iceberg table returnstbl_iceberg data location with Lake Formation so it is governed by Lake Formation permissions. Complete the following steps:

  1. On the Lake Formation console, choose Data lake locations in the navigation pane.
  2. Choose Register location.
  3. For Amazon S3 path, enter the path to the S3 bucket that you provided while creating the Iceberg table returnstbl_iceberg.
  4. For IAM role, provide the user-defined role LakeFormationS3Registration_custom that you created as a prerequisite.
  5. For Permission mode, select Lake Formation.
  6. Choose Register location.
  7. Choose Data lake locations in the navigation pane to verify the Amazon S3 registration.
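The same registration can be done through the Lake Formation RegisterResource API. A minimal sketch, with the bucket path and role ARN left as the placeholders used in this post:

```python
# Request payload for lakeformation.register_resource; the ARNs below
# are placeholders to be replaced with your own bucket and role.
register_request = {
    "ResourceArn": "arn:aws:s3:::<your-producer-account-bucket>/returnstbl_iceberg",
    "RoleArn": "arn:aws:iam::<producer-account-id>:role/LakeFormationS3Registration_custom",
    "UseServiceLinkedRole": False,  # use the custom registration role
    "HybridAccessEnabled": False,   # Lake Formation permission mode
}

# import boto3
# boto3.client("lakeformation").register_resource(**register_request)
```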

With this step, the producer account setup is complete.

Steps for consumer account setup

For the consumer account setup, we use the IAM admin role Admin, added as a Lake Formation administrator.

The steps in the consumer account are fairly involved. In the consumer account, a Lake Formation administrator will accept the AWS Resource Access Manager (AWS RAM) shares and create the required resource links that point to the shared catalog, database, and tables. The Lake Formation admin verifies that the shared resources are accessible by running test queries in Athena. The admin further grants permissions to the role Glue-execution-role on the resource links, database, and tables. The admin then runs a join query in AWS Glue 5.0 Spark using Glue-execution-role.

Accept and verify the shared resources

Lake Formation uses AWS RAM shares, carrying Data Catalog resource policies, to enable cross-account sharing. To view and verify the resources shared from the producer account, complete the following steps:

  1. Log in to the consumer AWS console and set the AWS Region to match the producer's shared resource Region. For this post, we use us-west-2.
  2. Open the Lake Formation console. You will see a message indicating there is a pending invite and asking you to accept it on the AWS RAM console.
  3. Follow the instructions in Accepting a resource share invitation from AWS RAM to review and accept the pending invitations.
  4. When the invite status changes to Accepted, choose Shared resources under Shared with me in the navigation pane.
  5. Verify that the Redshift Serverless federated catalog redshiftserverless1-uswest2, the default catalog database customerdb, the table returnstbl_iceberg, and the producer account ID under the Owner ID column display correctly.
  6. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  7. Search by the producer account ID.

You should see the customerdb and public databases. You can further select each database and choose View tables on the Actions dropdown menu to verify the table names.


You will not see an AWS RAM share invite for the catalog level on the Lake Formation console, because catalog-level sharing isn't possible. You can review the shared federated catalog and Amazon Redshift managed catalog names on the AWS RAM console, or by using the AWS Command Line Interface (AWS CLI) or SDK.

Create a catalog link container and resource links

A catalog link container is a Data Catalog object that references a local or cross-account federated database-level catalog from other AWS accounts. For more details, refer to Accessing a shared federated catalog. Catalog link containers are essentially Lake Formation resource links at the catalog level that reference or point to a Redshift cluster federated catalog or Amazon Redshift managed catalog object from other accounts.

In the following steps, we create a catalog link container that points to the producer shared federated catalog redshiftserverless1-uswest2. Inside the catalog link container, we create a database. Inside the database, we create a resource link for the table that points to the shared federated catalog table <<producer account id>>:redshiftserverless1-uswest2/ordersdb.public.orderstbl.

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Catalogs.
  2. Choose Create catalog.


  3. Provide the following details for the catalog:
    1. For Name, enter a name for the catalog (for this post, rl_link_container_ordersdb).
    2. For Type, choose Catalog Link container.
    3. For Source, choose Redshift.
    4. For Target Redshift Catalog, enter the Amazon Resource Name (ARN) of the producer federated catalog (arn:aws:glue:us-west-2:<<producer account id>>:catalog/redshiftserverless1-uswest2/ordersdb).
    5. Under Access from engines, select Access this catalog from Apache Iceberg compatible engines.
    6. For IAM role, provide the Redshift-S3 data transfer role that you created in the prerequisites.
    7. Choose Next.


  4. On the Grant permissions – optional page, choose Add permissions.
    1. Grant the Admin user Super user permissions for Catalog permissions and Grantable permissions.
    2. Choose Add, and then choose Next.
  5. Review the details on the Review and create page and choose Create catalog.

Wait a few seconds for the catalog to show up.

  6. In the navigation pane, choose Catalogs.
  7. Verify that rl_link_container_ordersdb is created.
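The console flow above maps roughly onto the Glue CreateCatalog API. The request shape below is an assumption sketched from the fields the console asks for; verify the field names against the current Glue API reference before relying on it.

```python
PRODUCER_ACCOUNT_ID = "111122223333"  # placeholder

# Assumed shape of a Glue CreateCatalog request for a catalog link
# container pointing at the producer's shared federated catalog;
# check field names against the current API reference.
create_catalog_request = {
    "Name": "rl_link_container_ordersdb",
    "CatalogInput": {
        "FederatedCatalog": {
            # ARN of the producer's shared federated catalog
            "Identifier": (
                f"arn:aws:glue:us-west-2:{PRODUCER_ACCOUNT_ID}"
                ":catalog/redshiftserverless1-uswest2/ordersdb"
            ),
        },
    },
}

# import boto3
# boto3.client("glue").create_catalog(**create_catalog_request)
```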


Create a database under rl_link_container_ordersdb

Complete the following steps:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose rl_link_container_ordersdb.
  3. Choose Create database.

Alternatively, you can choose the Create dropdown menu and then choose Database.

  4. Provide details for the database:
    1. For Name, enter a name (for this post, public_db).
    2. For Catalog, choose rl_link_container_ordersdb.
    3. Leave Location – optional blank.
    4. Under Default permissions for newly created tables, deselect Use only IAM access control for new tables in this database.
    5. Choose Create database.


  5. Choose Catalogs in the navigation pane to verify that public_db is created under rl_link_container_ordersdb.


Create a table resource link for the shared federated catalog table

A resource link to a shared federated catalog table can reside only inside the database of a catalog link container. A resource link for such tables will not work if created inside the default catalog. For more details on resource links, refer to Creating a resource link to a shared Data Catalog table.

Complete the following steps to create a table resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Tables.
  2. On the Create dropdown menu, choose Resource link.


  3. Provide details for the table resource link:
    1. For Resource link name, enter a name (for this post, rl_orderstbl).
    2. For Destination catalog, choose rl_link_container_ordersdb.
    3. For Database, choose public_db.
    4. For Shared table's region, choose US West (Oregon).
    5. For Shared table, choose orderstbl.
    6. After the shared table is selected, Shared table's database and Shared table's catalog ID should be populated automatically.
    7. Choose Create.
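Programmatically, a table resource link is created with the Glue CreateTable API using a `TargetTable` instead of a storage descriptor. The sketch below assumes placeholder account IDs, and the destination `CatalogId` format for a catalog link container is an assumption worth checking against the documentation.

```python
PRODUCER_ACCOUNT_ID = "111122223333"  # placeholder
CONSUMER_ACCOUNT_ID = "444455556666"  # placeholder

# Table resource link rl_orderstbl in public_db of the catalog link
# container, pointing at the producer's shared federated table.
create_table_request = {
    # Destination catalog ID format is an assumption; verify in the docs.
    "CatalogId": f"{CONSUMER_ACCOUNT_ID}:rl_link_container_ordersdb",
    "DatabaseName": "public_db",
    "TableInput": {
        "Name": "rl_orderstbl",
        "TargetTable": {
            "CatalogId": f"{PRODUCER_ACCOUNT_ID}:redshiftserverless1-uswest2/ordersdb",
            "DatabaseName": "public",
            "Name": "orderstbl",
            "Region": "us-west-2",
        },
    },
}

# import boto3
# boto3.client("glue").create_table(**create_table_request)
```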


  4. In the navigation pane, choose Databases to verify that rl_orderstbl is created under public_db, inside rl_link_container_ordersdb.


Create a database resource link for the shared default catalog database

Now we create a database resource link in the default catalog to query the Amazon S3 based Iceberg table shared from the producer. For details on database resource links, refer to Creating a resource link to a shared Data Catalog database.

Though we are able to see the shared database in the default catalog of the consumer, a resource link is required to query it from analytics engines such as Athena, Amazon EMR, and AWS Glue. When using AWS Glue with Lake Formation tables, the resource link must be named identically to the source account's resource. For additional details on using AWS Glue with Lake Formation, refer to Considerations and limitations.

Complete the following steps to create a database resource link:

  1. On the Lake Formation console, under Data Catalog in the navigation pane, choose Databases.
  2. On the Choose catalog dropdown menu, choose the account ID to select the default catalog.
  3. Search for customerdb.

You should see the shared database name customerdb with the Owner account ID as that of your producer account ID.

  4. Select customerdb, and on the Create dropdown menu, choose Resource link.
  5. Provide details for the resource link:
    1. For Resource link name, enter a name (for this post, customerdb).
    2. The rest of the fields should be already populated.
    3. Choose Create.
  6. In the navigation pane, choose Databases and verify that customerdb is created under the default catalog. Resource link names appear in italicized font.
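The equivalent API call is Glue CreateDatabase with a `TargetDatabase` pointing at the shared source. A sketch, with a placeholder producer account ID; note the link name matches the source database name, as AWS Glue requires.

```python
PRODUCER_ACCOUNT_ID = "111122223333"  # placeholder

# Database resource link in the consumer's default catalog, named
# identically to the shared source database (required for AWS Glue).
create_database_request = {
    "DatabaseInput": {
        "Name": "customerdb",
        "TargetDatabase": {
            "CatalogId": PRODUCER_ACCOUNT_ID,
            "DatabaseName": "customerdb",
        },
    },
}

# import boto3
# boto3.client("glue").create_database(**create_database_request)
```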


Verify access as Admin using Athena

Now you can verify your access using Athena. Complete the following steps:

  1. Open the Athena console.
  2. Make sure an S3 bucket is provided to store the Athena query results. For details, refer to Specify a query result location using the Athena console.
  3. In the navigation pane, verify both the default catalog and federated catalog tables by previewing them.
  4. You can also run a join query as follows. Note the three-part notation for referring to the tables from two different catalogs:
SELECT
returns_tb.market AS Market,
sum(orders_tb.quantity) AS Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg AS returns_tb
ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market;
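The same join can also be submitted programmatically through the Athena StartQueryExecution API. A sketch, with the results bucket left as a placeholder:

```python
# Three-part join query across the catalog link container and the
# default catalog, as run in the Athena console above.
query = """
SELECT returns_tb.market AS Market,
       sum(orders_tb.quantity) AS Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN awsdatacatalog.customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

start_request = {
    "QueryString": query,
    # Placeholder results location; replace with your own bucket.
    "ResultConfiguration": {"OutputLocation": "s3://<your-athena-results-bucket>/"},
}

# import boto3
# boto3.client("athena").start_query_execution(**start_request)
```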


This verifies the new capability of SageMaker Lakehouse, which allows accessing Redshift cluster tables and Amazon S3 based Iceberg tables in the same query, across AWS accounts, through the Data Catalog, using Lake Formation permissions.

Grant permissions to Glue-execution-role

Now we will share the resources from the producer account with additional IAM principals in the consumer account. Usually, the data lake admin grants permissions to data analysts, data scientists, and data engineers in the consumer account to do their job functions, such as processing and analyzing the data.

We set up Lake Formation permissions on the catalog link container, databases, tables, and resource links for the AWS Glue job execution role Glue-execution-role that we created in the prerequisites.

Resource links allow only Describe and Drop permissions. You must use the Grant on target configuration to provide database Describe and table Select permissions.

Complete the following steps:

  1. On the Lake Formation console, choose Data permissions in the navigation pane.
  2. Choose Grant.
  3. Under Principals, select IAM users and roles.
  4. For IAM users and roles, enter Glue-execution-role.
  5. Under LF-Tags or catalog resources, select Named Data Catalog resources.
  6. For Catalogs, choose rl_link_container_ordersdb and the consumer account ID, which signifies the default catalog.
  7. Under Catalog permissions, select Describe for Catalog permissions.
  8. Choose Grant.


  9. Repeat these steps for the catalog rl_link_container_ordersdb:
    1. On the Databases dropdown menu, choose public_db.
    2. Under Database permissions, select Describe.
    3. Choose Grant.
  10. Repeat these steps again, but after choosing rl_link_container_ordersdb and public_db, on the Tables dropdown menu, choose rl_orderstbl.
    1. Under Resource link permissions, select Describe.
    2. Choose Grant.
  11. Repeat these steps to grant additional permissions to Glue-execution-role.
    1. For this iteration, grant Describe permissions on the default catalog databases public and customerdb.
    2. Grant Describe permission on the resource link customerdb.
    3. Grant Select permission on the tables returnstbl_iceberg and orderstbl.
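Because the same GrantPermissions call is repeated for each resource, it can be batched in a loop. A sketch covering two of the grants above (account ID is a placeholder, boto3 call commented out):

```python
CONSUMER_ACCOUNT_ID = "444455556666"  # placeholder
ROLE_ARN = f"arn:aws:iam::{CONSUMER_ACCOUNT_ID}:role/Glue-execution-role"

# (resource, permissions) pairs for the grants to Glue-execution-role;
# extend this list with the public database, rl_orderstbl, and orderstbl.
grants = [
    ({"Database": {"Name": "customerdb"}}, ["DESCRIBE"]),
    ({"Table": {"DatabaseName": "customerdb", "Name": "returnstbl_iceberg"}},
     ["SELECT"]),
]

requests = [
    {
        "Principal": {"DataLakePrincipalIdentifier": ROLE_ARN},
        "Resource": resource,
        "Permissions": permissions,
    }
    for resource, permissions in grants
]

# import boto3
# lf = boto3.client("lakeformation")
# for request in requests:
#     lf.grant_permissions(**request)
```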

The following screenshots show the configuration for database public and customerdb permissions.

The following screenshots show the configuration for resource link customerdb permissions.

The following screenshots show the configuration for table returnstbl_iceberg permissions.

The following screenshots show the configuration for table orderstbl permissions.

  12. In the navigation pane, choose Data permissions and verify the permissions on Glue-execution-role.


Run a PySpark job in AWS Glue 5.0

Download the PySpark script LakeHouseGlueSparkJob.py. This AWS Glue PySpark script runs Spark SQL, joining the producer shared federated orderstbl table and the Amazon S3 based returns table in the consumer account to analyze the data and determine the total orders placed per market.

Replace <<consumer_account_id>> in the script with your consumer account ID. Complete the following steps to create and run an AWS Glue job:

  1. On the AWS Glue console, in the navigation pane, choose ETL jobs.
  2. Choose Create job, then choose Script editor.


  3. For Engine, choose Spark.
  4. For Options, choose Start fresh.
  5. Choose Upload script.
  6. Browse to the location where you downloaded and edited the script, select the script, and choose Open.
  7. On the Job details tab, provide the following information:
    1. For Name, enter a name (for this post, LakeHouseGlueSparkJob).
    2. Under Basic properties, for IAM role, choose Glue-execution-role.
    3. For Glue version, select Glue 5.0.
    4. Under Advanced properties, for Job parameters, choose Add new parameter.
    5. Add the parameters --datalake-formats = iceberg and --enable-lakeformation-fine-grained-access = true.
  8. Save the job.
  9. Choose Run to execute the AWS Glue job, and wait for the job to complete.
  10. Review the job run details in the output logs.
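The downloaded script is not reproduced in this post. Its core is presumably a Spark SQL join like the Athena query shown earlier; the following is a hypothetical reconstruction of that statement (the actual script and its catalog naming may differ):

```python
# Hypothetical reconstruction of the join inside LakeHouseGlueSparkJob.py;
# three-part naming follows the Athena query used earlier in the post.
spark_sql = """
SELECT returns_tb.market AS Market,
       sum(orders_tb.quantity) AS Total_Quantity
FROM rl_link_container_ordersdb.public_db.rl_orderstbl AS orders_tb
JOIN customerdb.returnstbl_iceberg AS returns_tb
  ON orders_tb.order_id = returns_tb.order_id
GROUP BY returns_tb.market
"""

# Inside the Glue 5.0 job this would run as, for example:
# df = spark.sql(spark_sql)
# df.show()
```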


Clean up

To avoid incurring costs in your AWS accounts, clean up the resources you created:

  1. Delete the Lake Formation permissions, catalog link container, database, and tables in the consumer account.
  2. Delete the AWS Glue job in the consumer account.
  3. Delete the federated catalog, database, and table resources in the producer account.
  4. Delete the Redshift Serverless namespace in the producer account.
  5. Delete the S3 buckets you created as part of data transfer in both accounts, and the Athena query results bucket in the consumer account.
  6. Clean up the IAM roles you created for the SageMaker Lakehouse setup as part of the prerequisites.

Conclusion

In this post, we illustrated how to bring your existing Redshift tables to SageMaker Lakehouse and share them securely with external AWS accounts. We also showed how to query the shared data warehouse and data lakehouse tables in the same Spark session, from a recipient account, using Spark in AWS Glue 5.0.

We hope you find this helpful for integrating your Redshift tables with an existing data mesh and accessing the tables using AWS Glue Spark. Try this solution in your accounts and share feedback in the comments section. Stay tuned for more updates, and feel free to explore the features of SageMaker Lakehouse and AWS Glue versions.

Appendix: Table creation

Complete the following steps to create a returns table in the Amazon S3 based default catalog and an orders table in Amazon Redshift:

  1. Download the CSV format datasets orders and returns.
  2. Upload them to your S3 bucket under the corresponding table prefix path.
  3. Use the following SQL statements in Athena. First-time users of Athena should refer to Specify a query result location.
CREATE DATABASE customerdb;
CREATE EXTERNAL TABLE customerdb.returnstbl_csv(
  `returned` string, 
  `order_id` string, 
  `market` string)
ROW FORMAT DELIMITED 
  FIELDS TERMINATED BY ';' 
LOCATION
  's3://<your-S3-bucket>/<prefix-for-returns-table-data>/'
TBLPROPERTIES (
  'skip.header.line.count'='1'
);

select * from customerdb.returnstbl_csv limit 10;


  4. Create an Iceberg format table in the default catalog and insert data from the CSV format table:
CREATE TABLE customerdb.returnstbl_iceberg(
  `returned` string, 
  `order_id` string, 
  `market` string)
LOCATION 's3://<your-producer-account-bucket>/returnstbl_iceberg/' 
TBLPROPERTIES (
  'table_type'='ICEBERG'
);

INSERT INTO customerdb.returnstbl_iceberg
SELECT *
FROM returnstbl_csv;  

SELECT * FROM customerdb.returnstbl_iceberg LIMIT 10; 


  5. To create the orders table in the Redshift Serverless namespace, open Query Editor v2 on the Amazon Redshift console.
  6. Connect to the default namespace using your database admin user credentials.
  7. Run the following commands in the SQL editor to create the database ordersdb and the table orderstbl in it, then copy the data from your S3 location of the orders data into orderstbl:
create database ordersdb;
use ordersdb;

create table orderstbl(
  row_id int, 
  order_id VARCHAR, 
  order_date VARCHAR, 
  ship_date VARCHAR, 
  ship_mode VARCHAR, 
  customer_id VARCHAR, 
  customer_name VARCHAR, 
  segment VARCHAR, 
  city VARCHAR, 
  state VARCHAR, 
  country VARCHAR, 
  postal_code int, 
  market VARCHAR, 
  region VARCHAR, 
  product_id VARCHAR, 
  category VARCHAR, 
  sub_category VARCHAR, 
  product_name VARCHAR, 
  sales VARCHAR, 
  quantity bigint, 
  discount VARCHAR, 
  profit VARCHAR, 
  shipping_cost VARCHAR, 
  order_priority VARCHAR
  );

copy orderstbl
from 's3://<your-s3-bucket>/ordersdatacsv/orders.csv' 
iam_role 'arn:aws:iam::<producer-account-id>:role/service-role/<your-Redshift-Role>'
CSV 
DELIMITER ';'
IGNOREHEADER 1
;

select * from ordersdb.orderstbl limit 5;

About the Authors

Aarthi Srinivasan is a Senior Big Data Architect with Amazon SageMaker Lakehouse. She collaborates with the service team to enhance product features, works with AWS customers and partners to architect lakehouse solutions, and establishes best practices for data governance.

Subhasis Sarkar is a Senior Data Engineer with Amazon. Subhasis thrives on solving complex technological challenges with innovative solutions. He specializes in AWS data architectures, particularly data mesh implementations using AWS CDK components.
