The rise of distributed data processing frameworks such as Apache Spark has revolutionized the way organizations process and analyze large-scale data. However, as the volume and complexity of data continue to grow, the need for fine-grained access control (FGAC) has become increasingly critical. This is particularly true in scenarios where sensitive or proprietary data must be shared across multiple teams or organizations, such as in the case of open data initiatives. Implementing robust access control mechanisms is crucial to maintain secure and controlled access to data stored in Open Table Format (OTF) within a modern data lake.
One approach to addressing this challenge is by using Amazon EMR on Amazon Elastic Kubernetes Service (Amazon EKS) and incorporating FGAC mechanisms. With Amazon EMR on EKS, you can run open source big data frameworks such as Spark on Amazon EKS. This integration provides the scalability and flexibility of Kubernetes, while also using the data processing capabilities of Amazon EMR.
On February 6, 2025, AWS launched fine-grained access control based on AWS Lake Formation for EMR on EKS, starting with Amazon EMR 7.7 and higher versions. You can now significantly enhance your data governance and security frameworks using this feature.
In this post, we demonstrate how to implement FGAC on Apache Iceberg tables using EMR on EKS with Lake Formation.
Data mesh use case
With FGAC in a data mesh architecture, domain owners can manage access to their data products at a granular level. This decentralized approach allows for greater agility and control, making sure data is accessible only to authorized users and services within or across domains. Policies can be tailored to specific data products, considering factors like data sensitivity, user roles, and intended use. This localized control enhances security and compliance while supporting the self-service nature of the data mesh.
FGAC is especially useful in business domains that deal with sensitive data, such as healthcare, finance, legal, human resources, and others. In this post, we focus on examples from the healthcare domain, showcasing how we can achieve the following:
- Share patient data securely – Data mesh enables different departments within a hospital to manage their own patient data as independent domains. FGAC makes sure only authorized personnel can access specific patient records or data elements based on their roles and on a need-to-know basis.
- Facilitate research and collaboration – Researchers can access de-identified patient data from various hospital domains through the data mesh architecture, enabling collaboration between multidisciplinary teams across different healthcare institutions, fostering knowledge sharing, and accelerating research and discovery. FGAC helps compliance with privacy regulations (such as HIPAA) by limiting access to sensitive data elements or allowing access only to aggregated, anonymized datasets.
- Improve operational efficiency – Data mesh can streamline data sharing between hospitals and insurance companies, simplifying billing and claims processing. FGAC makes sure only authorized personnel within each organization can access the necessary data, protecting sensitive financial information.
Solution overview
In this post, we explore how to implement FGAC on Iceberg tables within an EMR on EKS application, using the capabilities of Lake Formation. For details on how to implement FGAC on Amazon EMR Serverless, refer to Fine-grained access control in Amazon EMR Serverless with AWS Lake Formation.
The following components play critical roles in this solution design:
- Apache Iceberg OTF:
  - High-performance table format for large-scale analytics
  - Supports schema evolution, ACID transactions, and time travel
  - Compatible with Spark, Trino, Presto, and Flink
  - Amazon S3 Tables provide fully managed Iceberg tables for analytics workloads
- AWS Lake Formation:
  - FGAC for data lakes
  - Column-, row-, and cell-level security controls
- Data mesh producers and consumers:
  - Producers: Create and serve domain-specific data products
  - Consumers: Access and integrate data products
  - Enables self-service data consumption
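To make these Iceberg capabilities concrete, here is a minimal PySpark sketch, which is not part of the solution's code: the catalog name glue_catalog, the demo_db.events table, and the snapshot ID are hypothetical, and it assumes a Spark session already configured with an Iceberg catalog backed by the AWS Glue Data Catalog.

```python
# Minimal sketch of Iceberg capabilities (hypothetical names throughout).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("iceberg-features-demo").getOrCreate()

# ACID transaction: an atomic append to an Iceberg table
spark.sql("INSERT INTO glue_catalog.demo_db.events VALUES (1, 'admitted')")

# Schema evolution: add a column without rewriting existing data files
spark.sql("ALTER TABLE glue_catalog.demo_db.events ADD COLUMNS (source STRING)")

# Time travel: query the table as of an earlier snapshot ID (hypothetical)
spark.sql(
    "SELECT * FROM glue_catalog.demo_db.events VERSION AS OF 4348509021014821133"
).show()
```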
To demonstrate how you can use Lake Formation to implement cross-account FGAC within an EMR on EKS environment, we create tables in the AWS Glue Data Catalog in a central AWS account acting as the producer, and provision different user personas to reflect various roles and access levels in a separate AWS account acting as multiple consumers. In real-world scenarios, consumers can be spread across multiple accounts.
The following diagram illustrates the high-level solution architecture.
To demonstrate cross-account data sharing and data filtering with Lake Formation FGAC, the solution deploys two different Iceberg tables with varying access for different consumers. The permission mapping for consumers is implemented with cross-account table shares and data cell filters.
The solution has two teams with different levels of Lake Formation permissions to access the Patients and Claims Iceberg tables. The following table summarizes the solution's user personas.
| Persona/Table Name | Patients | Claims |
| --- | --- | --- |
| Patients Care Team (team1) | Filtered access (data cell filter: ssn column excluded; Texas and New York rows only) | Full table access |
| Claims Care Team (team2) | No access | Full table access |
Prerequisites
This solution requires an AWS account with an AWS Identity and Access Management (IAM) power user role that can create and interact with AWS services, including Amazon EMR, Amazon EKS, AWS Glue, Lake Formation, and Amazon Simple Storage Service (Amazon S3). Additional specific requirements for each account are detailed in the relevant sections.
Clone the project
To get started, download the project either to your computer or to the AWS CloudShell console:
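The download command itself isn't reproduced here; a generic sketch, with a placeholder instead of the real repository URL, looks like the following:

```bash
# Clone the sample project (replace the placeholder with the repository URL
# referenced in this post), then change into the project directory.
git clone <project-repo-url> emr-on-eks-fgac-blog
cd emr-on-eks-fgac-blog
```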
Set up infrastructure in the producer account
To set up the infrastructure in the producer account, you must have the following additional resources:
The setup script deploys the following infrastructure:
- An S3 bucket to store sample data in Iceberg table format, registered as a data location in Lake Formation
- An AWS Glue database named healthcare_db
- Two AWS Glue tables: the patients and claims Iceberg tables
- A Lake Formation data access IAM role
- Cross-account permissions enabled for the consumer account:
  - Allow the consumer to describe the database healthcare_db in the producer account
  - Allow access to the patients table using a data cell filter, based on row-level selection on state, excluding the ssn column
  - Allow full table access to the claims table
Run the following producer_iceberg_datalake_setup.sh script to create a development environment in the producer account. Update its parameters according to your requirements:
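The script's parameters live in the repository; as a hedged example, an invocation could look like the following (the variable names are assumptions, not the script's documented interface, so check the script header before running):

```bash
# Hypothetical invocation: variable names are assumptions.
export AWS_REGION=us-east-1              # Region to deploy the producer stack into
export CONSUMER_ACCOUNT_ID=111122223333  # AWS account ID of the consumer account
./producer_iceberg_datalake_setup.sh
```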
Enable cross-account Lake Formation access in the producer account
A consumer account ID and an EMR on EKS Engine session tag must be set in the producer's environment. This allows the consumer to access the producer's AWS Glue tables governed by Lake Formation. Complete the following steps to enable cross-account access:
- Open the Lake Formation console in the producer account.
- Choose Application integration settings under Administration in the navigation pane.
- Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
- For Session tag values, enter EMR on EKS Engine.
- For AWS account IDs, enter your consumer account ID.
- Choose Save.
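If you prefer the AWS CLI over the console, these settings map to the Lake Formation put-data-lake-settings API. The following is a sketch, not a drop-in command: the call replaces the entire settings document, so merge these fields into your current settings first.

```bash
# Inspect the current settings; put-data-lake-settings REPLACES the whole
# document, so merge the fields below into this output before applying.
aws lakeformation get-data-lake-settings > current-settings.json

# Fields corresponding to the console steps above (fragment, to be merged):
aws lakeformation put-data-lake-settings --data-lake-settings '{
  "AllowExternalDataFiltering": true,
  "AuthorizedSessionTagValueList": ["EMR on EKS Engine"],
  "ExternalDataFilteringAllowList": [
    {"DataLakePrincipalIdentifier": "<consumer-account-id>"}
  ]
}'
```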

Determine 2: Producer Account – Lake Formation third-party engine configuration display with session tags, account IDs, and information entry permissions.
Validate FGAC setup in the producer environment
To validate the FGAC setup in the producer account, check the Iceberg tables, the data filter, and the FGAC permission settings.
Iceberg tables
Two AWS Glue tables in Iceberg format were created by producer_iceberg_datalake_setup.sh. On the Lake Formation console, choose Tables under Data Catalog in the navigation pane to see the tables listed.

Determine 3: Lake Formation interface displaying claims and sufferers tables from healthcare_db with Apache Iceberg format.
The next screenshot reveals an instance of the sufferers desk information.
The next screenshot reveals an instance of the claims desk information.
Data cell filter against the patients table
After successfully running the producer_iceberg_datalake_setup.sh script, a new data cell filter named patients_column_row_filter was created in Lake Formation. This filter performs two functions (see the CLI sketch after this list):
- Excludes the ssn column from the patients table data
- Includes only rows where the state is Texas or New York
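For reference, a filter of this shape can also be defined through the Lake Formation API. The following CLI sketch is not the setup script's actual call; in particular, the state literals assume the column stores full state names.

```bash
# Sketch of an equivalent data cell filter definition.
cat > patients_filter.json <<'EOF'
{
  "TableCatalogId": "<producer-account-id>",
  "DatabaseName": "healthcare_db",
  "TableName": "patients",
  "Name": "patients_column_row_filter",
  "RowFilter": {"FilterExpression": "state IN ('Texas', 'New York')"},
  "ColumnWildcard": {"ExcludedColumnNames": ["ssn"]}
}
EOF

aws lakeformation create-data-cells-filter --table-data file://patients_filter.json
```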
To view the data cell filter, choose Data filters under Data Catalog in the navigation pane of the Lake Formation console, and open the filter. Choose View permission to view the permission details.
FGAC permissions allowing cross-account access
To view all the FGAC permissions, choose Data permissions under Permissions in the navigation pane of the Lake Formation console, and filter by the database name healthcare_db.
Make sure to revoke the data permissions associated with the IAMAllowedPrincipals principal on the healthcare_db tables, because they will cause cross-account data sharing to fail, particularly with AWS Resource Access Manager (AWS RAM).

Determine 7: Lake Formation information permissions interface displaying filtered healthcare database sources with granular entry controls
The following table summarizes the overall FGAC setup.
| Resource Type | Resource | Permissions | Grant Permissions |
| --- | --- | --- | --- |
| Database | healthcare_db | Describe | Describe |
| Data Cell Filter | patients_column_row_filter | Select | Select |
| Table | claims | Select, Describe | Select, Describe |
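These grants map to the Lake Formation grant-permissions API. As a sketch of the data cell filter row in the table (the database and table grants follow the same pattern), such a grant could look like the following:

```bash
# Sketch: grant the consumer account SELECT through the data cell filter,
# with the grant option so the consumer admin can re-grant to its own roles.
aws lakeformation grant-permissions \
  --principal DataLakePrincipalIdentifier=<consumer-account-id> \
  --resource '{
    "DataCellsFilter": {
      "TableCatalogId": "<producer-account-id>",
      "DatabaseName": "healthcare_db",
      "TableName": "patients",
      "Name": "patients_column_row_filter"
    }
  }' \
  --permissions SELECT \
  --permissions-with-grant-option SELECT
```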
Set up infrastructure in the consumer account
To set up the infrastructure in the consumer account, you must have the following additional resources:
- The eksctl and kubectl packages must be installed
- An IAM role in the consumer account must be a Lake Formation administrator to run the consumer_emr_on_eks_setup.sh script
- The Lake Formation admin must accept the AWS RAM resource share invitations using the AWS RAM console, if the consumer account is outside of the producer's organizational unit
The setup script deploys the following infrastructure:
- An EKS cluster called fgac-blog with two namespaces:
  - User namespace: lf-fgac-user
  - System namespace: lf-fgac-secure
- An EMR on EKS virtual cluster emr-on-eks-fgac-blog:
  - Set up with a security configuration emr-on-eks-fgac-sec-conifg
  - Two EMR on EKS job execution IAM roles:
    - Role for the Patients Care Team (team1): emr_on_eks_fgac_job_team1_execution_role
    - Role for the Claims Care Team (team2): emr_on_eks_fgac_job_team2_execution_role
  - A query engine IAM role used by the FGAC secure space: emr_on_eks_fgac_query_execution_role
- An S3 bucket to store PySpark job scripts and logs
- An AWS Glue local database named consumer_healthcare_db
- Two resource links to the cross-account shared AWS Glue tables: rl_patients and rl_claims
- Lake Formation permissions on the Amazon EMR IAM roles
Run the following consumer_emr_on_eks_setup.sh script to set up a development environment in the consumer account. Update the parameters according to your use case:
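Mirroring the producer setup, a hedged example invocation could look like the following (the variable names are assumptions, so check the script header for its real interface):

```bash
# Hypothetical invocation: variable names are assumptions.
export AWS_REGION=us-east-1              # Region of the consumer deployment
export PRODUCER_ACCOUNT_ID=444455556666  # AWS account ID of the producer account
./consumer_emr_on_eks_setup.sh
```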
Enable cross-account Lake Formation access in the consumer account
The consumer account must add the consumer account ID with an EMR on EKS Engine session tag in Lake Formation. This session tag will be used by the EMR on EKS job execution IAM roles to access Lake Formation tables. Complete the following steps:
- Open the Lake Formation console in the consumer account.
- Choose Application integration settings under Administration in the navigation pane.
- Select Allow external engines to filter data in Amazon S3 locations registered with Lake Formation.
- For Session tag values, enter EMR on EKS Engine.
- For AWS account IDs, enter your consumer account ID.
- Choose Save.

Determine 9: Client Account – Lake Formation third-party engine configuration display with session tags, account IDs, and information entry permissions
Validate FGAC setup in the consumer environment
To validate the FGAC setup in the consumer account, check the EKS cluster, the namespaces, and the Spark job scripts to test data permissions.
EKS cluster
On the Amazon EKS console, choose Clusters in the navigation pane and confirm that the EKS cluster fgac-blog is listed.
Namespaces in Amazon EKS
Kubernetes uses namespaces as a logical partitioning system for organizing objects such as Pods and Deployments. Namespaces also function as a privilege boundary in the Kubernetes role-based access control (RBAC) system. Multi-tenant workloads in Amazon EKS can be secured using namespaces.
This solution creates two namespaces:
- lf-fgac-user
- lf-fgac-secure
The StartJobRun API uses the backend workflows to submit a Spark job's user components (JobRunner, driver, and executors) in the user namespace, and the corresponding system components in the system namespace, to accomplish the desired FGAC behaviors.
You can verify the namespaces with the following command:

```
kubectl get namespace
```

The following screenshot shows an example of the expected output.
Spark job script to test the Patients Care Team's data permissions
Starting with Amazon EMR version 6.6.0, you can use Spark on EMR on EKS with the Iceberg table format. For more information on how Iceberg works in an immutable data lake, see Build a high-performance, ACID compliant, evolving data lake using Apache Iceberg on Amazon EMR.
The following script is a snippet of the PySpark job that retrieves filtered data from the Claims and Patients tables:
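The repository's snippet isn't reproduced here; what follows is a minimal sketch of what such a job could look like, assuming the Spark session's Glue catalog integration is handled by the job's EMR on EKS configuration and that the tables are read through the consumer's resource links.

```python
# Sketch of the Patients Care Team (team1) job: Lake Formation applies the
# data cell filter transparently, so the result set has no ssn column and
# contains only Texas and New York rows.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("patients-team-fgac-job").getOrCreate()

# Query the cross-account Patients table through its resource link.
patients_df = spark.sql("SELECT * FROM consumer_healthcare_db.rl_patients")
patients_df.show(truncate=False)

# team1 also has full table access to the Claims table.
claims_df = spark.sql("SELECT * FROM consumer_healthcare_db.rl_claims")
claims_df.show(truncate=False)
```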
Spark job script to test the Claims Care Team's data permissions
The following script is a snippet of the PySpark job that retrieves data from the Claims table:
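Again as a hedged sketch under the same assumptions, the Claims Care Team job could look like the following; the read of the Patients table is included to show the expected access denial.

```python
# Sketch of the Claims Care Team (team2) job.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("claims-team-fgac-job").getOrCreate()

# team2 has full table access to the Claims table.
spark.sql("SELECT * FROM consumer_healthcare_db.rl_claims").show(truncate=False)

# team2 has no Lake Formation grant on the Patients table, so this read
# is expected to fail with an access denied error.
try:
    spark.sql("SELECT * FROM consumer_healthcare_db.rl_patients").show()
except Exception as err:
    print(f"Expected access denial: {err}")
```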
Validate job execution roles for EMR on EKS
The Patients Care Team uses the emr_on_eks_fgac_job_team1_execution_role IAM role to run its PySpark job on EMR on EKS. This job execution role has permission to query both the Patients and Claims tables.
The Claims Care Team uses the emr_on_eks_fgac_job_team2_execution_role IAM role to run its jobs on EMR on EKS. This job execution role only has permission to access Claims data.
Both IAM job execution roles have the following permissions:
The following code is the job execution IAM role trust policy:
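The policy document shipped with the setup script isn't reproduced here; as an illustrative shape only (the OIDC provider ID and the service account pattern are placeholders), an EMR on EKS job execution role trust policy typically looks like the following:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::<consumer-account-id>:oidc-provider/oidc.eks.<region>.amazonaws.com/id/<oidc-provider-id>"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringLike": {
          "oidc.eks.<region>.amazonaws.com/id/<oidc-provider-id>:sub": "system:serviceaccount:lf-fgac-user:emr-containers-sa-*"
        }
      }
    }
  ]
}
```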
The following code is the query engine IAM role policy (emr_on_eks_fgac_query_execution_role-policy):
The following code is the query engine IAM role trust policy:
Run PySpark jobs on EMR on EKS with FGAC
For more details about how to work with Iceberg tables in EMR on EKS jobs, refer to Using Apache Iceberg with Amazon EMR on EKS. Complete the following steps to run the PySpark jobs on EMR on EKS with FGAC:
- Run the following commands to run the patients and claims jobs (see the StartJobRun sketch after these steps):
- Watch the application logs from the Spark driver pod:

```
kubectl logs <driver-pod-name> -c spark-kubernetes-driver -n lf-fgac-user -f
```

Alternatively, you can navigate to the Amazon EMR console, open your virtual cluster, and choose the open icon next to the job to open the Spark UI and monitor the job progress.
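The exact submission commands are in the repository. As a hedged sketch of what the first step's StartJobRun call could look like for the Patients Care Team job (the job name, S3 entry point, and release label are assumptions):

```bash
# Sketch: submit the Patients Care Team job to the EMR on EKS virtual
# cluster. IDs, ARNs, and the S3 entry point are placeholders.
aws emr-containers start-job-run \
  --virtual-cluster-id <virtual-cluster-id> \
  --name patients-team-fgac-job \
  --execution-role-arn arn:aws:iam::<consumer-account-id>:role/emr_on_eks_fgac_job_team1_execution_role \
  --release-label emr-7.7.0-latest \
  --job-driver '{
    "sparkSubmitJobDriver": {
      "entryPoint": "s3://<scripts-bucket>/patients_team_job.py"
    }
  }'
```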
View PySpark job output on EMR on EKS with FGAC
In Amazon S3, navigate to the Spark output logs folder:
The Patients Care Team PySpark job has query access to the Patients and Claims tables. The Patients table output has the SSN column filtered out and only shows records for Texas and New York, as specified in our FGAC setup.
The following screenshot shows the Claims table records for only Texas and New York.
The following screenshot shows the Patients table without the SSN column.
Similarly, navigate to the Spark output log folder for the Claims Care Team job:
As shown in the following screenshot, the Claims Care Team only has access to the Claims table, so when the job tried to access the Patients table, it received an access denied error.
Considerations and limitations
Although the approach discussed in this post provides valuable insights and practical implementation strategies, it's important to acknowledge the key considerations and limitations before you start using this feature. To learn more about using EMR on EKS with Lake Formation, refer to How Amazon EMR on EKS works with AWS Lake Formation.
Clean up
To avoid incurring future charges, delete the generated resources if you no longer need the solution. Run the following cleanup scripts (change the AWS Region if necessary).
Run the following script in the consumer account:
Run the following script in the producer account:
Conclusion
In this post, we demonstrated how to integrate Lake Formation with EMR on EKS to implement fine-grained access control on Iceberg tables. This integration offers organizations a modern approach to implementing detailed data permissions within a multi-account open data lake environment. By centralizing data management in a primary account and carefully regulating user access in secondary accounts, this strategy can simplify governance and enhance security.
For more information about Amazon EMR 7.7 in relation to EMR on EKS, see Amazon EMR on EKS 7.7.0 releases. To learn more about using Lake Formation with EMR on EKS, see Enable Lake Formation with Amazon EMR on EKS.
We encourage you to explore this solution for your specific use cases and share your feedback and questions in the comments section.












