Tuesday, July 15, 2025

Geospatial data lakes with Amazon Redshift


Data lake architectures help organizations offload data from premium storage systems without losing the ability to query and analyze the data. This architecture can be useful for geospatial data, where developers might have terabytes of infrequently accessed data in their databases that they want to maintain cost-effectively. However, this requires their data lake query engine to support geographic information system (GIS) data types and functions.

Amazon Redshift supports querying spatial data, including the GEOMETRY and GEOGRAPHY data types and the functions used in querying GIS systems. Additionally, Amazon Redshift lets you query geospatial data both in your data lakes on Amazon S3 and in your Redshift data warehouse, giving you the choice of how to access your data. Moreover, AWS Lake Formation and support for AWS Identity and Access Management (IAM) in Esri's ArcGIS Pro give you a way to securely bridge data between your geospatial data lakes and map visualization tools. You can set up, manage, and secure geospatial data lakes in the cloud with a few clicks.

In this post, we walk through how to set up a geospatial data lake using Lake Formation and query the data with ArcGIS Pro using Amazon Redshift Serverless.

Solution overview

In our example, a county public health department has used Lake Formation to secure its data lake, which contains public health information (PHI) data. Epidemiologists within the county want to create a map of the clinics providing vaccinations for their communities. The county's GIS analysts need access to the data lake to create the required maps without being able to access the PHI data.

This solution uses Lake Formation tags to allow column-level access in the database to the public information, which includes the clinic names, addresses, zip codes, and longitude/latitude coordinates, without allowing access to the PHI data in the same tables. We use Redshift Serverless and Amazon Redshift Spectrum to access this data from ArcGIS Pro, GIS mapping software from Esri, an AWS Partner.

The following diagram shows the architecture for this solution.

End-to-end architecture showing ArcGIS Pro data integration with AWS analytics services through Redshift connector

The following is a sample schema for this post.

Description            Column Name        Geoproperty Tag
Patient ID             patient_id         No
Clinic ID              clinic_id          Yes
Address of Clinic      clinic_address     Yes
Clinic Zip Code        clinic_zip         Yes
Clinic City            clinic_city        Yes
Patient First Name     first_name         No
Patient Last Name      last_name          No
Patient Address        patient_address    No
Patient Zip Code       patient_zip        No
Vaccination Type       vaccination_type   No
Latitude of Clinic     clinic_lat         Yes
Longitude of Clinic    clinic_long        Yes
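To make the column-level tagging scheme concrete, the following Python sketch (illustrative only, not part of the solution) models the schema above and shows which columns a principal granted access to geoproperty = true would see. Lake Formation enforces this server-side; the snippet is just a mental model of the filter.

```python
# Maps each column from the sample schema to whether it carries the
# geoproperty tag (True) or contains PHI/PII (False).
SCHEMA = {
    "patient_id": False,
    "clinic_id": True,
    "clinic_address": True,
    "clinic_zip": True,
    "clinic_city": True,
    "first_name": False,
    "last_name": False,
    "patient_address": False,
    "patient_zip": False,
    "vaccination_type": False,
    "clinic_lat": True,
    "clinic_long": True,
}

def visible_columns(schema):
    """Columns a principal scoped to geoproperty = true would be able to select."""
    return [col for col, tagged in schema.items() if tagged]

print(visible_columns(SCHEMA))
```

Only the six clinic_ columns survive the filter; the patient fields never reach the analyst.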

In the following sections, we walk through the steps to set up the solution:

  1. Deploy the solution infrastructure using AWS CloudFormation.
  2. Upload a CSV with sample data to an Amazon Simple Storage Service (Amazon S3) bucket and run an AWS Glue crawler to crawl the data.
  3. Set up Lake Formation permissions.
  4. Configure the Amazon Redshift Query Editor v2.
  5. Set up the schemas in Amazon Redshift.
  6. Create a view in Amazon Redshift.
  7. Create a local database user for ArcGIS Pro.
  8. Connect ArcGIS Pro to the Redshift database.

Prerequisites

You should have the following prerequisites:

Set up the infrastructure with AWS CloudFormation

To create the environment for the demo, complete the following steps:

  1. Log in to the AWS Management Console as an AWS account administrator and a Lake Formation data lake administrator; the account needs to be both an account admin and a data lake admin for the template to complete.
  2. Open the AWS CloudFormation console.
  3. Choose Launch Stack.

The CloudFormation template creates the following components:

  • S3 bucket: samp-clinic-db-{ACCOUNT_ID}
  • AWS Glue database: samp-clinical-glue-db
  • AWS Glue crawler: samp-glue-crawler
  • Redshift Serverless workgroup: samp-clinical-rs-wg
  • Redshift Serverless namespace: samp-clinical-rs-ns
  • IAM role for Amazon Redshift: demo-RedshiftIAMRole-{UNIQUE_ID}
  • IAM role for AWS Glue: samp-clinical-glue-role
  • Lake Formation tag: geoproperty

Upload a CSV to the S3 bucket and run the AWS Glue crawler

The next step is to create a data lake in our demo environment and then use an AWS Glue crawler to populate the AWS Glue database and update the schema and metadata in the AWS Glue Data Catalog.

The CloudFormation stack created the S3 bucket we will use as well as the AWS Glue database and crawler. We have provided a fictitious test dataset that represents the patient and clinical information. Download the file and complete the following steps:

  1. On the AWS CloudFormation console, open the stack you just launched.
  2. On the Resources tab, choose the link to the S3 bucket.
  3. Choose Upload and add the CSV file (data-with-geocode.csv), then choose Upload.
  4. On the AWS Glue console, choose Crawlers in the navigation pane.
  5. Select the crawler you created with the CloudFormation stack and choose Run.

The crawler run should only take a minute to complete, and will populate a table named clinic-sample-s3_ACCOUNT_ID with a fictitious dataset.

  1. Choose Tables in the navigation pane and open the table the crawler populated.

You will see that the dataset contains fields with PHI and personally identifiable information (PII).

AWS Glue table 'clinic-sample_s3' schema definition with patient and clinic fields, input/output formats, and database properties

We now have a database set up and the Data Catalog populated with the schema and metadata we will use for the rest of the demo.

Set up Lake Formation permissions

In this next set of steps, we demonstrate how to secure the PHI data to maintain compliance while empowering the GIS analysts to work effectively. To secure the data lake, we use AWS Lake Formation. To properly set up Lake Formation permissions, we need to gather details on how access to the data lake is established.

The Data Catalog provides the metadata and schema information that enables services to access data within the data lake. To access the data lake from ArcGIS Pro, we use the ArcGIS Pro Redshift connector, which allows a connection from ArcGIS Pro to Amazon Redshift. Amazon Redshift can access the Data Catalog and provide connectivity to the data lake. The CloudFormation template created a Redshift Serverless instance and namespace and an IAM role that we will use to configure this connection. We still need to set up Lake Formation permissions so that GIS analysts can only access publicly available fields and not those containing PHI or PII. We'll assign a Lake Formation tag to the columns containing the publicly available information and grant permissions to the GIS analysts to allow access to columns with this tag.

By default, the Lake Formation configuration allows Super access to IAMAllowedPrincipals; this is to maintain backward compatibility, as detailed in Changing the default settings for your data lake. To demonstrate a more secure configuration, we will remove this default.

  1. On the Lake Formation console, choose Administration in the navigation pane.
  2. In the Data Catalog settings section, make sure Use only IAM access control for new databases and Use only IAM access control for new tables in new databases are unchecked.

AWS Data Catalog settings interface showing unchecked IAM-only access control options for new databases and tables

  1. In the navigation pane, under Permissions, choose Data permissions.
  2. Select IAMAllowedPrincipals and choose Revoke.
  3. Choose Tables in the navigation pane.
  4. Open the table clinic-sample-s3_ACCOUNT_ID and choose Edit schema.
  5. Select the fields beginning with clinic_ and choose Edit LF-Tags.
  6. The CloudFormation stack created a Lake Formation tag named geoproperty. Assign geoproperty as the key and true as the value on all the clinic_ fields, then choose Save.

Next, we need to grant the Amazon Redshift IAM role permission to access fields tagged with geoproperty = true.

  1. Choose Data lake permissions, then choose Grant.
  2. For the IAM role, choose demo-RedshiftIAMRole-UNIQUE_ID.
  3. Select geoproperty for the key and true for the value.
  4. Under Database permissions, select Describe, and under Table permissions, select Select and Describe.

Configure the Amazon Redshift Query Editor v2

Next, we need to perform the initial configuration of Amazon Redshift required for database operations. We use an AWS Secrets Manager secret created by the template to make sure password access is managed securely in accordance with AWS best practices.

  1. On the Amazon Redshift console, choose Query editor v2.
  2. When you first start Amazon Redshift, a one-time configuration for the account appears. For this post, leave the options at their defaults and choose Configure account.

For more information about these options, refer to Configuring your AWS account.

Redshift query editor configuration interface with AWS KMS encryption settings and optional S3 bucket path input

The query editor will require credentials to connect to the serverless instance; these were created by the template and stored in Secrets Manager.

  1. Select Other ways to connect, then select AWS Secrets Manager.
  2. For Secret, select (Redshift-admin-credentials).
  3. Choose Save.

Redshift connection interface displaying IAM Identity Center and AWS Secrets Manager authentication methods with credential selector

Set up schemas in Amazon Redshift

An external schema in Amazon Redshift is a feature used to reference schemas that exist in external data sources. For information on creating external schemas, see External schemas in Amazon Redshift Spectrum. We use an external schema to provide access to the data lake in Amazon Redshift. From ArcGIS Pro, we will connect to Amazon Redshift to access the geospatial data.

The IAM role used in the creation of the external schema needs to be associated with the Redshift namespace. This has already been set up by the CloudFormation template, but it's good practice to verify that the role is configured correctly before proceeding.

  1. On the Redshift Serverless console, choose Namespace configuration in the navigation pane.
  2. Choose the namespace (sample-rs-namespace).

Amazon Redshift Serverless console displaying namespace configuration with status, workgroup and creation details

On the Security and encryption tab, you should see the IAM role created by CloudFormation. If this role or the namespace isn't present, verify the stack in AWS CloudFormation before proceeding.

  1. Copy the ARN of the role for use in a later step.

Redshift security configuration panel showing single synchronized IAM role with complete ARN and management options

  1. Choose Query data to return to the query editor.

Amazon Redshift Serverless interface displaying sample-rs-namespace configuration with management and query data controls

  1. In the query editor, enter the following SQL command; be sure to replace the example role ARN with your own. This SQL command creates an external schema that uses the Redshift role associated with our namespace to connect to the AWS Glue database.
CREATE EXTERNAL SCHEMA samp_clinic_sch_ext FROM DATA CATALOG
database 'sample-glue-database'
IAM_ROLE 'arn:aws:iam::{ACCOUNT_ID}:role/demo-RedshiftIAMRole-{UNIQUE_ID}';

  1. In the query editor, perform a select query on sample-glue-database:

SELECT * FROM "dev"."samp_clinic_sch_ext"."clinic-sample_s3_{ACCOUNT_ID}";

Because the associated role has been granted access to columns tagged with geoproperty = true, only those fields are returned, as shown in the following screenshot (the data in this example is fictionalized).

Query result displaying 20 medical clinics with details like name, address, and coordinates

  1. Use the following command to create a local schema in Amazon Redshift. The external schema can't be updated; we will use this local schema to add a geometry field with a Redshift function.

CREATE SCHEMA samp_clinic_sch_local;

Create a view in Amazon Redshift

For the data to be viewable from ArcGIS Pro, we need to create a view. Now that the schemas have been established, we can create the view that will be accessed from ArcGIS Pro.

Amazon Redshift provides many geospatial functions that can be used to create views with fields ArcGIS Pro uses to add points to a map. We'll use one of these functions because the dataset contains latitude and longitude.

Use the following SQL code in the Amazon Redshift Query Editor to create a new view named clinic_location_view. Replace {ACCOUNT_ID} with your own account ID.

CREATE OR REPLACE VIEW "samp_clinic_sch_local"."clinic_location_view" AS
SELECT
    clinic_id as id,
    clinic_lat as lat,
    clinic_long as long,
    ST_MAKEPOINT(long, lat) as geom
FROM
    "dev"."samp_clinic_sch_ext"."clinic-sample_s3_{ACCOUNT_ID}"
WITH NO SCHEMA BINDING;

The new view created under your local schema will have a column named geom containing map-based points that ArcGIS Pro can use to add points during map creation. The points in this example are for the clinics providing vaccines. In a real-world scenario, as new clinics are built and their data is added to the data lake, their locations would be added to the map created from this data.
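Note that ST_MAKEPOINT takes the x coordinate (longitude) first and the y coordinate (latitude) second, a common source of swapped points. As a quick sanity check of that axis order outside the database, here is a hypothetical Python helper that builds the equivalent well-known text (WKT) point; the Houston coordinates are made up for illustration:

```python
def make_point_wkt(longitude: float, latitude: float) -> str:
    """Build a WKT point the way ST_MAKEPOINT(long, lat) does:
    x (longitude) first, y (latitude) second."""
    return f"POINT({longitude} {latitude})"

# A point near Houston: longitude comes first, as in ST_MAKEPOINT.
print(make_point_wkt(-95.3698, 29.7604))
```

If the point lands in the wrong hemisphere on the map, swapped arguments to ST_MAKEPOINT are the first thing to check.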

Create a local database user for ArcGIS Pro

For this demo, we use a database user and group to provide access for ArcGIS Pro clients. Enter the following SQL code into the Amazon Redshift Query Editor to create a database user and group:

CREATE USER dbuser WITH PASSWORD 'SET_PASSWORD_HERE';
CREATE GROUP esri_developer_group;
ALTER GROUP esri_developer_group ADD USER dbuser;

After the commands are complete, use the following code to grant permissions to the group:

GRANT USAGE ON SCHEMA samp_clinic_sch_local TO GROUP esri_developer_group;
ALTER DEFAULT PRIVILEGES IN SCHEMA samp_clinic_sch_local GRANT SELECT ON TABLES TO GROUP esri_developer_group;
GRANT SELECT ON ALL TABLES IN SCHEMA samp_clinic_sch_local TO GROUP esri_developer_group;

Connect ArcGIS Pro to the Redshift database

To add the database connection to ArcGIS Pro, you need the endpoint for the Redshift Serverless workgroup. You can access the endpoint information on the sample-rs-wg workgroup details page on the Redshift Serverless console. The Redshift namespaces and workgroups are listed by default, as shown in the following screenshot.

Amazon Redshift Serverless namespace and workgroup status dashboard with performance metrics

You can copy the endpoint from the General information section. This endpoint needs to be modified; the :5439/dev must be removed when configuring the connector in ArcGIS Pro.
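The trimming can also be scripted. The following Python sketch (the endpoint value shown is made up for illustration) shows the transformation: everything from the port separator onward is dropped, leaving only the host name the connector expects.

```python
def arcgis_server_from_endpoint(endpoint: str) -> str:
    """Strip the ':5439/dev' (port/database) suffix from a Redshift
    Serverless endpoint, leaving the bare host name for ArcGIS Pro."""
    return endpoint.split(":", 1)[0]

# Hypothetical endpoint copied from the workgroup details page.
endpoint = "sample-rs-wg.123456789012.us-east-1.redshift-serverless.amazonaws.com:5439/dev"
print(arcgis_server_from_endpoint(endpoint))
```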

Amazon Redshift Serverless workgroup details showing configuration and connection information

  1. Open ArcGIS Pro with the project file you want to add the Redshift connection to.
  2. On the menu, choose Insert and then Connections, Database, and New Database Connection.
  3. For Database Platform, choose Amazon Redshift.
  4. For Server, insert the endpoint you copied (remove everything following .com from the endpoint).
  5. For Database, choose your database.

Amazon Redshift Serverless connection settings with server, authentication, and database fields

If your ArcGIS Pro client doesn't have access to the endpoint, you'll receive an error during this step. A network path must exist between the ArcGIS Pro client and the Redshift Serverless endpoint. You can set up the network path with AWS Direct Connect, AWS Site-to-Site VPN, or AWS Client VPN. Although it's not recommended for security reasons, you can also configure Amazon Redshift with a publicly accessible endpoint. Make sure you consult your security and network teams for best practices and policy guidance before allowing public access to your Redshift Serverless instance.

If a network path exists and you're still having issues connecting, verify that the security group rules allow inbound communication from your ArcGIS Pro subnet over the port your Redshift Serverless instance is running on. The default port is 5439, but you can configure a range of ports depending on your environment; see Connecting to Amazon Redshift Serverless for more information.
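Before digging into security group rules, a plain TCP probe from the ArcGIS Pro client can confirm whether the endpoint is reachable at all. The following Python sketch (the host name in the comment is hypothetical) is one way to run that check:

```python
import socket

def can_reach(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

# Example (hypothetical endpoint), probing the default Redshift port:
# can_reach("sample-rs-wg.123456789012.us-east-1.redshift-serverless.amazonaws.com", 5439)
```

A result of False usually points at a missing route or a security group that doesn't allow the client's subnet on that port.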

If connectivity is successful, ArcGIS Pro will add the Amazon Redshift connection under Connection File Name.

  1. Choose OK.
  2. Choose the connection to display the view that was created to include geometry (clinic_location_view).
  3. Right-click the view and choose Add To Current Map.

ArcGIS Pro will add the points from the view onto the map. The final map displayed has the symbology edited to use red crosses to represent the clinics instead of dots.

ArcGIS Pro interface showing Houston metropolitan vaccination clinics with topographic base map, toolbars, and database connectivity

Clean up

After you have finished the demo, complete the following steps to clean up your resources:

  1. On the Amazon S3 console, open the bucket created by the CloudFormation stack and delete the data-with-geocode.csv file.
  2. On the AWS CloudFormation console, delete the demo stack to remove the resources it created.

Conclusion

In this post, we reviewed how to set up Redshift Serverless to use geospatial data contained in a data lake to enhance maps in ArcGIS Pro. This technique helps developers and GIS analysts use available datasets in data lakes and transform them in Amazon Redshift to further enrich the data before presenting it on a map. We also showed how to secure a data lake using Lake Formation, crawl a geospatial dataset with AWS Glue, and visualize the data in ArcGIS Pro.

For more best practices for storing geospatial data in Amazon S3 and querying it with Amazon Redshift, see Partition your geospatial data lake for analysis with Amazon Redshift. We invite you to leave feedback in the comments section.


About the authors

Jeremy Spell is a Cloud Infrastructure Architect working with Amazon Web Services (AWS) Professional Services. He enjoys architecting and building solutions for customers. In his free time, Jeremy makes Texas-style BBQ and spends time with his family and church community.

Jeff Demuth is a solutions architect who joined Amazon Web Services (AWS) in 2016. He focuses on the geospatial community and is passionate about geographic information systems (GIS) and technology. Outside of work, Jeff enjoys traveling, building Internet of Things (IoT) applications, and tinkering with the latest gadgets.
