
Petabyte-scale data migration made easy: AppsFlyer's best practice journey with Amazon EMR Serverless


This post is co-written with Roy Ninio from AppsFlyer.

Organizations worldwide aim to harness the power of data to drive smarter, more informed decision-making by embedding data at the core of their processes. Using data-driven insights enables you to respond more effectively to unexpected challenges, foster innovation, and deliver enhanced experiences to your customers. Data has transformed how organizations drive decision-making, but historically, managing the infrastructure to support it posed significant challenges and required specialized skill sets and dedicated personnel. The complexity of setting up, scaling, and maintaining large-scale data systems impacted agility and the pace of innovation. This reliance on experts and complex setups often diverted resources from innovation, slowed time-to-market, and hindered the ability to respond to changes in industry demands.

AppsFlyer is a leading analytics and attribution company designed to help businesses measure and optimize their marketing efforts across mobile, web, and connected devices. With a focus on privacy-first innovation, AppsFlyer empowers organizations to make data-driven decisions while respecting user privacy and compliance regulations. AppsFlyer provides tools for tracking user acquisition, engagement, and retention, delivering actionable insights to improve ROI and streamline marketing strategies.

In this post, we share how AppsFlyer successfully migrated their big data infrastructure from self-managed Hadoop clusters to Amazon EMR Serverless, detailing their best practices, the challenges they overcame, and lessons learned that can guide other organizations through similar transformations.

Why AppsFlyer embraced a serverless approach for big data

AppsFlyer manages one of the largest-scale data infrastructures in the industry, processing 100 PB of data daily, handling millions of events per second, and running thousands of jobs across nearly 100 self-managed Hadoop clusters. The AppsFlyer architecture comprises many open source data engineering technologies, including but not limited to Apache Spark, Apache Kafka, Apache Iceberg, and Apache Airflow. Although this setup powered operations for years, the growing complexity of scaling resources to meet fluctuating demands, coupled with the operational overhead of maintaining clusters, prompted AppsFlyer to rethink their big data processing strategy.

EMR Serverless is a modern, scalable solution that removes the need for manual cluster management while dynamically adjusting resources to match real-time workload requirements. With EMR Serverless, scaling up or down happens within seconds, minimizing idle time and interruptions like Spot terminations.

This shift has freed engineering teams to focus on innovation, improved resilience and high availability, and future-proofed the architecture to support their ever-increasing demands. By paying only for the compute and memory resources used during runtime, AppsFlyer also optimized costs and minimized charges for idle resources, marking a significant step forward in efficiency and scalability.

Solution overview

AppsFlyer's previous architecture was built around self-managed Hadoop clusters running on Amazon Elastic Compute Cloud (Amazon EC2) and handled the scale and complexity of the data workflows. Although this setup supported operational needs, it required substantial manual effort to maintain, scale, and optimize.

AppsFlyer orchestrated over 100,000 daily workflows with Airflow, managing both streaming and batch operations. Streaming pipelines used Spark Streaming to ingest real-time data from Kafka, writing raw datasets to an Amazon Simple Storage Service (Amazon S3) data lake while concurrently loading them into BigQuery and Google Cloud Storage to build logical data layers. Batch jobs then processed this raw data, transforming it into actionable datasets for internal teams, dashboards, and analytics workflows. Additionally, some processed outputs were ingested into external data sources, enabling seamless delivery of AppsFlyer insights to customers across the web.
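To make the shape of these pipelines concrete, the following is a minimal Spark Structured Streaming sketch of the Kafka-to-Amazon S3 raw ingestion step. It is an illustrative sketch only; the broker, topic, and bucket names are placeholders, not AppsFlyer's actual configuration.

from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kafka-to-s3-raw").getOrCreate()

# Read the raw event stream from Kafka (placeholder broker and topic).
events = (
    spark.readStream.format("kafka")
    .option("kafka.bootstrap.servers", "<broker-host>:9092")
    .option("subscribe", "<events-topic>")
    .load()
)

# Persist raw events to the S3 data lake; the checkpoint enables recovery.
(
    events.selectExpr("CAST(value AS STRING) AS raw_event", "timestamp")
    .writeStream.format("parquet")
    .option("path", "s3://<raw-data-lake-bucket>/events/")
    .option("checkpointLocation", "s3://<raw-data-lake-bucket>/checkpoints/events/")
    .start()
)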

For analytics and fast queries, real-time data streams were ingested into ClickHouse and Druid to power dashboards. Additionally, Iceberg tables were created from the Delta Lake raw data and made accessible through Amazon Athena for further data exploration and analytics.

With the migration to EMR Serverless, AppsFlyer replaced its self-managed Hadoop clusters, bringing significant improvements in scalability, cost-efficiency, and operational simplicity.

Spark-based workflows, including streaming and batch jobs, were migrated to run on EMR Serverless, taking advantage of its elasticity to scale dynamically and meet workload demands.

This transition has significantly reduced operational overhead, removing the need for manual cluster management so teams can focus more on data processing and less on infrastructure.

The following diagram illustrates the solution architecture.

This post reviews the main challenges and lessons learned by the team at AppsFlyer during this migration.

Challenges and lessons learned

Migrating a large-scale organization like AppsFlyer, with dozens of teams, from Hadoop to EMR Serverless was a significant challenge, especially because many R&D teams had limited or no prior experience managing infrastructure. To provide a smooth transition, AppsFlyer's Data Infrastructure (DataInfra) team developed a comprehensive migration strategy that empowered the R&D teams to seamlessly migrate their pipelines.

In this section, we discuss how AppsFlyer approached the challenge and achieved success across the entire organization.

Centralized preparation by the DataInfra team

To provide a seamless transition to EMR Serverless, the DataInfra team took the lead in centralizing preparation efforts:

  • Clear ownership – Taking full responsibility for the migration, the team planned, guided, and supported R&D teams throughout the process.
  • Structured migration guide – A detailed, step-by-step guide was created to streamline the transition from Hadoop, breaking down the complexities and making the process accessible to teams with limited infrastructure experience.

Building a strong support network

To make sure the R&D teams had the resources they needed, AppsFlyer established a robust support environment:

  • Data community – The primary resource for answering technical questions. It encouraged knowledge sharing across teams and was spearheaded by the DataInfra team.
  • Slack support channel – A dedicated channel where the DataInfra team actively responded to questions and guided teams through the migration process. This real-time support significantly reduced bottlenecks and helped teams resolve issues quickly.

Infrastructure templates with best practices

Recognizing the complexity of an organization-wide migration, the DataInfra team provided standardized templates to help teams start quickly and efficiently:

  • Infrastructure as code (IaC) templates – They developed Terraform templates with best practices for building applications on EMR Serverless. These templates included code examples and real production workflows already migrated to EMR Serverless. Teams could quickly bootstrap their projects by using these ready-made templates.
  • Cross-account access solutions – Working across multiple AWS accounts required managing secure access between EMR Serverless accounts (where jobs run) and data storage accounts (where datasets reside). To streamline this, a step-by-step module was developed for setting up cross-account access using AssumeRole permissions. Additionally, a dedicated repository was created so teams can define and automate role and policy creation, providing seamless and scalable access management.

Airflow integration

As AppsFlyer's primary workflow scheduler, Airflow plays a critical role, making it essential to provide a seamless transition for its users.

AppsFlyer developed a dedicated Airflow operator for executing Spark jobs on EMR Serverless, carefully designed to replicate the functionality of the existing Hadoop-based Spark operator. In addition, a Python package with the relevant operators was made available across all Airflow clusters. This approach minimized code changes, allowing teams to transition seamlessly with minimal modifications.

Solving common permission challenges

To streamline permissions management, AppsFlyer developed targeted solutions for frequent use cases:

  • Comprehensive documentation – Provided detailed instructions for handling permissions for services like Athena, BigQuery, Vault, Git, Kafka, and many more.
  • Standardized Spark defaults configuration for teams to apply to their applications – Included built-in features for collecting lineage from Spark jobs running on EMR Serverless, providing accountability and traceability.

Continuous engagement with R&D teams

To promote progress and maintain alignment across teams, AppsFlyer introduced the following measures:

  • Weekly meetings – Weekly status meetings to review each team's migration efforts. Teams shared updates, challenges, and commitments, fostering transparency and collaboration.
  • Support – Proactive support was provided for issues raised during meetings to minimize delays. This made sure the teams stayed on track and had the help they needed to meet their commitments.

By implementing these strategies, AppsFlyer transformed the migration process from a daunting challenge into a structured and well-supported journey. Key outcomes included:

  • Empowered teams – R&D teams with minimal infrastructure experience were able to confidently migrate their pipelines.
  • Standardized practices – Infrastructure templates and predefined solutions provided consistency and best practices across the organization.
  • Reduced downtime – The custom Airflow operator and detailed documentation minimized disruptions to existing workflows.
  • Cross-account compatibility – With seamless cross-account access, teams could run jobs and access data efficiently.
  • Improved collaboration – The data community and Slack support channel fostered a sense of collaboration and shared responsibility across teams.

Migrating an entire organization's data workflows to EMR Serverless is a complex undertaking, but by investing in preparation, templates, and support, AppsFlyer successfully streamlined the process for all R&D teams in the company.

This approach can serve as a model for organizations undertaking similar migrations.

Spark application code management and deployment

For AppsFlyer data engineers, developing and deploying Spark applications is a core daily responsibility. The Data Platform team focuses on identifying and implementing the right set of tools and safeguards that not only simplify the migration to EMR Serverless, but also streamline ongoing operations.

There are two different approaches available for running Spark code on EMR Serverless: custom container images, or JARs and Python files. At the start of the exploration, custom images looked promising because they allow greater customization than JARs, which should have given the DataInfra team a smoother migration path for existing workloads. After deeper evaluation, they realized that custom images are powerful but carry a cost that needs to be weighed at large scale. Custom images presented the following challenges:

  • Custom images are supported as of version 6.9.0, but some of AppsFlyer's workloads used earlier versions.
  • EMR Serverless resources are billed from the moment EMR Serverless starts downloading the image until the workers are stopped. This means charges accrue for aggregate vCPU, memory, and storage resources during the image download phase.
  • They require a different continuous integration and delivery (CI/CD) approach than compiling a JAR or Python file, leading to operational work that should be minimized as much as possible.

AppsFlyer decided to go all in with JARs and allow custom images only in exceptional cases where the required customization demanded them. Ultimately, they found that standard (non-custom) images were suitable for AppsFlyer's use cases.

CI/CD perspective

From a CI/CD perspective, AppsFlyer's DataInfra team decided to align with AppsFlyer's GitOps vision, making sure that both infrastructure and application code are version-controlled, built, and deployed using Git operations.

The following diagram illustrates the GitOps approach AppsFlyer adopted.

JAR continuous integration

For CI, the process in charge of building the application artifacts, several options were explored. The following key considerations drove the exploration:

  • Use Amazon S3 as the native JAR source for EMR Serverless
  • Support different versions of the same job
  • Support staging and production environments
  • Allow hotfixes, patches, and rollbacks

Using AppsFlyer's existing external package repository led to challenges, because it would have required building a custom delivery path into Amazon S3 or a complex runtime mechanism to fetch the code externally.

Using Amazon S3 directly also allowed several possible approaches:

  • Buckets – Use a single bucket vs. separate buckets for staging and production
  • Versions – Use Amazon S3 native object versioning vs. uploading a new file
  • Hotfix – Override the same job's JAR file vs. uploading a new one

Ultimately, the decision was to go with immutable builds for consistent deployment across the environments.

Each push to the main branch of a Spark job's Git repository triggers a CI process that validates the semantic versioning (semver) assignment, compiles the JAR artifact, and uploads it to Amazon S3. Each artifact is uploaded to three different paths according to the version of the JAR, and also carries a version tag on the S3 object:

  • <BucketName>/<SparkJobName>/<major>.<minor>.<patch>/app.jar
  • <BucketName>/<SparkJobName>/<major>.<minor>/app.jar
  • <BucketName>/<SparkJobName>/<major>/app.jar

AppsFlyer now has fine-grained control and can pin each EMR Serverless job to a precise version. Some jobs can run with the latest major version, while other stability- and SLA-sensitive jobs are locked to a specific patch version.
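A minimal sketch of that publishing step, assuming a boto3-based CI script (the bucket, job name, version, and JAR path are placeholders):

import boto3

def publish_jar(bucket: str, job_name: str, version: str, jar_path: str) -> None:
    """Upload one immutable build to the three version-granularity S3 paths."""
    major, minor, patch = version.split(".")
    s3 = boto3.client("s3")
    for prefix in (f"{major}.{minor}.{patch}", f"{major}.{minor}", major):
        with open(jar_path, "rb") as f:
            s3.put_object(
                Bucket=bucket,
                Key=f"{job_name}/{prefix}/app.jar",
                Body=f,
                Tagging=f"version={version}",  # version tag on the S3 object
            )

publish_jar("<jar-bucket>", "<spark-job-name>", "1.4.2", "target/app.jar")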

EMR Serverless continuous deployment

Uploading the files to Amazon S3 is the final step of the CI process, which then hands off to a separate CD process.

CD is done by changing the Terraform-based infrastructure code to point to the new JAR that was uploaded to Amazon S3. The staging or production application then starts using the newly uploaded code, and the process can be considered deployed.

Spark application rollbacks

If an application rollback is needed, AppsFlyer points the EMR Serverless job's IaC configuration from the current impaired JAR version back to the previous stable JAR version in the relevant Amazon S3 path.

AppsFlyer believes that every automation impacting production, like CD, requires a break-glass mechanism for emergencies. In such cases, AppsFlyer can manually override the needed S3 object (JAR file) while still using Amazon S3 versioning in order to retain visibility and manual version control.
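As an illustration of that break-glass path, the following sketch restores the previous version of a JAR object through S3 versioning. It assumes versioning is enabled on the bucket; the bucket and key names are placeholders.

import boto3

s3 = boto3.client("s3")
bucket, key = "<jar-bucket>", "<spark-job-name>/1.4/app.jar"

# Versions are returned newest first; pick the most recent non-latest one.
versions = s3.list_object_versions(Bucket=bucket, Prefix=key)["Versions"]
previous = next(v for v in versions if not v["IsLatest"])

# Copying the old version over the key creates a new, visible latest version,
# so the manual override itself stays auditable in the version history.
s3.copy_object(
    Bucket=bucket,
    Key=key,
    CopySource={"Bucket": bucket, "Key": key, "VersionId": previous["VersionId"]},
)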

Single-job vs. multi-job applications

When using EMR Serverless, one important architectural decision is whether to create a separate application for each Spark job or to share an automatically scaling application across multiple Spark jobs. The following table summarizes these considerations.

Aspect | Single-job application | Multi-job application
Logical nature | Dedicated application for each job. | Shared application for multiple jobs.
Shared configurations | Limited shared configurations; each application is independently configured. | Allows shared configurations through spark-defaults, including executors, memory settings, and JARs.
Isolation | Maximum isolation; each job runs independently. | Maintains job-level isolation through distinct IAM roles despite sharing the application.
Flexibility | Flexible for unique configurations or resource requirements. | Reduces overhead by reusing configurations and using automatic scaling.
Overhead | Higher setup and management overhead due to multiple applications. | Lower administrative overhead, but requires careful management of resource contention.
Use cases | Suitable for jobs with unique requirements or strict isolation needs. | Ideal for related workloads that benefit from shared settings and dynamic scaling.

By balancing these considerations, AppsFlyer tailored its EMR Serverless usage to efficiently meet the demands of diverse Spark workloads across their teams.

Airflow operator: Simplifying the transition to EMR Serverless

Before the migration to EMR Serverless, AppsFlyer's teams relied on a custom Airflow Spark operator created by the DataInfra team.

This operator, packaged as a Python library, was integrated into the Airflow environment and became a key component of the data workflows.

It provided essential capabilities, including:

  • Retries and alerts – Built-in retry logic and PagerDuty alert integration
  • AWS role-based access – Automatic fetching of AWS permissions based on role names
  • Custom defaults – Setting Spark configurations and package defaults tailored for each job
  • State management – Job state tracking

This operator streamlined running Spark jobs on Hadoop and was highly tailored to AppsFlyer's requirements.

When moving to EMR Serverless, the team chose to build a custom Airflow operator aligned with their existing Spark-based workflows. They already had dozens of Directed Acyclic Graphs (DAGs) in production, so with this approach, they could keep their familiar interface, including custom handling for retries, alerting, and configurations, all without requiring widespread changes.

This abstraction enabled a smoother migration by preserving the same development patterns and minimizing the effort of adapting to the native operator's semantics.

The DataInfra team developed a dedicated, custom EMR Serverless operator to support the following goals:

  • Seamless migration – The operator was designed to closely mimic the interface of the existing Spark operator on Hadoop. This made sure that teams could migrate with minimal code changes.
  • Feature parity – They added the features missing from the native operator:
    • Built-in retry logic.
    • PagerDuty integration for alerts.
    • Automatic role-based permission fetching.
    • Default Spark configurations and package support for each job.
  • Simplified integration – The operator is packaged as a Python library available in Airflow clusters, so teams could use it just like the previous Spark operator.

The custom operator abstracts some of the underlying configuration required to submit jobs to EMR Serverless, aligning with AppsFlyer's internal best practices and adding essential features.

The following is an excerpt from an example DAG using the operator:

return SparkBatchJobEmrServerlessOperator(
    task_id=task_id,  # Unique task identifier in the DAG

    jar_file=jar_file,  # Path to the Spark job JAR file on S3
    main_class="<main class path>",

    spark_conf=spark_conf,

    app_id=default_args["<emr_serverless_application_id>"],  # EMR Serverless application ID
    execution_role=default_args["<job_execution_role_arn>"],  # IAM role for job execution

    polling_interval_sec=120,  # How often to poll for job status
    execution_timeout=timedelta(hours=1),  # Max allowed runtime

    retries=5,  # Retry attempts for failed jobs
    app_args=[],  # Arguments to pass to the Spark job

    depends_on_past=True,  # Ensure sequential task execution

    tags={'owner': '<team_tag>'},  # Metadata for ownership
    aws_assume_role="<my_aws_role>",  # Role for cross-account access

    alerting_policy=ALERT_POLICY_CRITICAL.with_slack_channel(sc),  # Alerting integration
    owner="<team_owner>",

    dag=dag  # DAG this task belongs to
)

Cross-account permissions on AWS: Simplifying EMR workflows

AppsFlyer operates across multiple AWS accounts, creating a need for secure and efficient cross-account access. EMR Serverless jobs are executed in the production account, while the data they process resides in a separate data account. To enable seamless operation, AssumeRole permissions are used to verify that EMR Serverless jobs running in the production account can access the data and services in the data account. The following diagram illustrates this architecture.


Role management strategy

To manage cross-account access efficiently, three distinct roles were created and maintained:

  • EMR role – Used for executing and managing EMR Serverless applications in the production account. Integrated directly into Airflow workers to make it accessible to the DAGs on each team's dedicated Airflow cluster.
  • Execution role – Assigned to the Spark job running on EMR Serverless. Passed by the EMR role in the DAG code to provide seamless integration.
  • Data role – Resides in the data account and is assumed by the execution role to access data stored in Amazon S3 and other AWS services.

To enforce access boundaries, each role and policy is tagged with team-specific identifiers. This ensures that teams can only access their own data and roles, minimizing unauthorized access to other teams' resources.

Simplifying Airflow migration

A streamlined process was developed to make cross-account permissions transparent for teams migrating their workloads to EMR Serverless:

  1. The EMR role is embedded into Airflow workers, making it accessible to DAGs in the dedicated Airflow cluster for each team:
{
   "Version": "2012-10-17",
   "Statement": [
      ...
      {
         "Effect": "Allow",
         "Action": "iam:PassRole",
         "Resource": "arn:aws:iam::account-id:role/execution-role",
         "Condition": {
            "StringEquals": {
               "iam:ResourceTag/Team": "team-tag"
            }
         }
      }
   ]
}

  2. The EMR role automatically passes the execution role to the job through the DAG code:
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": "sts:AssumeRole",
      "Resource": "arn:aws:iam::data-account-id:role/data-role",
      "Condition": {
        "StringEquals": {
          "iam:ResourceTag/Team": "team-tag"
        }
      }
    }
  ]
}

  3. The execution role assumes the data role dynamically during job execution to access the required data and services in the data account. The following trust policy allows the execution role in the production account to assume the data role:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "AWS": "arn:aws:iam::production-account-id:role/execution-role"
      },
      "Action": "sts:AssumeRole"
    }
  ]
}

  4. Policies, trust relationships, and role definitions are managed in a dedicated GitLab repository. GitLab CI/CD pipelines automate the creation and integration of roles and policies, providing consistency and reducing manual overhead.
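To illustrate step 3, a job can obtain temporary credentials for the data role with a standard AWS STS call. This is a hedged sketch: the ARN and session name are placeholders, and in practice a Spark job might delegate this to S3A's assumed-role credential provider rather than calling STS by hand.

import boto3

# Placeholder ARN for the data role in the data account.
DATA_ROLE_ARN = "arn:aws:iam::data-account-id:role/data-role"

# This client runs under the execution role inside the EMR Serverless job.
sts = boto3.client("sts")
credentials = sts.assume_role(
    RoleArn=DATA_ROLE_ARN,
    RoleSessionName="emr-serverless-data-access",
)["Credentials"]

# Use the temporary credentials to reach data in the data account.
s3 = boto3.client(
    "s3",
    aws_access_key_id=credentials["AccessKeyId"],
    aws_secret_access_key=credentials["SecretAccessKey"],
    aws_session_token=credentials["SessionToken"],
)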

Benefits of AppsFlyer's approach

This approach provided the following benefits:

  • Seamless access – Teams no longer need to handle cross-account permissions manually, because these are automated through preconfigured roles and policies, providing seamless and secure access to resources across accounts.
  • Scalable and secure – Role-based and tag-based permissions provide security and scalability across multiple teams and accounts. Using roles and tags removes the need to create separate hardcoded policies for each team or account; instead, generalized policies scale automatically as new resources, accounts, or teams are added.
  • Automated management – GitLab CI/CD streamlines the deployment and integration of policies and roles, reducing manual effort while improving consistency. It also minimizes human error, improves change transparency, and simplifies version management.
  • Flexibility for teams – Teams have the flexibility to use their own or the native EMR Serverless operators while maintaining secure access to data.

By implementing a robust, automated cross-account permissions system, AppsFlyer has enabled secure and efficient access to data and services across multiple AWS accounts. This ensures that teams can focus on their workloads without worrying about infrastructure complexities, accelerating their migration to EMR Serverless.

Integrating lineage into EMR Serverless

AppsFlyer developed a robust solution for column-level lineage collection to provide comprehensive visibility into data transformations across pipelines. Lineage data is stored in Amazon S3 and subsequently ingested into DataHub, AppsFlyer's lineage and metadata management environment.

Currently, AppsFlyer collects column-level lineage from a variety of sources, including Amazon Athena, BigQuery, Spark, and more.

This section focuses on how AppsFlyer collects Spark column-level lineage specifically within the EMR Serverless infrastructure.

Collecting Spark lineage with Spline

To capture lineage from Spark jobs, AppsFlyer uses Spline, an open source tool designed for automated tracking of data lineage and pipeline structures.

AppsFlyer modified Spline's default behavior to output a customized Spline object that aligns with AppsFlyer's specific requirements, and adapted the Spline integration to both legacy and modern environments. In the pre-migration phase, they injected the Spline agent into Spark jobs through their customized Airflow Spark operator. In the post-migration phase, they integrated Spline directly into EMR Serverless applications.

The lineage workflow consists of the following steps:

  1. As Spark jobs execute, Spline captures detailed metadata about the queries and transformations performed.
  2. The captured metadata is exported as Spline object files to a dedicated S3 bucket.
  3. These Spline objects are processed into column-level lineage objects customized to fit AppsFlyer's data architecture and requirements.
  4. The processed lineage data is ingested into DataHub, providing a centralized and interactive view of data dependencies.

The following figure is an example of a lineage diagram from DataHub.

Challenges and how AppsFlyer addressed them

AppsFlyer encountered the following challenges:

  • Supporting different EMR Serverless applications – Each EMR Serverless application has its own Spark and Scala version requirements.
  • Diverse operator usage – Teams often use custom or native EMR Serverless operators, making uniform Spline integration difficult.
  • Confirming universal adoption – They needed to verify that Spark jobs across multiple accounts use the Spline agent for lineage tracking.

AppsFlyer addressed these challenges with the following solutions:

  • Version-specific Spline agents – AppsFlyer created a dedicated Spline agent for each EMR Serverless application version to match its Spark and Scala versions. For example, EMR Serverless application version 7.0.1 pairs with Spline 7.0.1.
  • Spark defaults integration – They integrated the Spline agent into each EMR Serverless application's Spark defaults, guaranteeing lineage collection for every job executed on the application, with no job-specific changes needed.
  • Automation for compliance – This process consists of the following steps (a sketch follows the list):
    • Detect newly created EMR Serverless applications across accounts.
    • Verify that Spline is properly defined in the application's Spark defaults.
    • Send a PagerDuty alert to the owning team if misconfigurations are detected.
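A condensed sketch of such a compliance check follows. It assumes the application's default spark-defaults are surfaced by the EMR Serverless get_application API; notify_pagerduty is a hypothetical alerting helper, not a real AppsFlyer function.

import boto3

# Spline's standard Spark listener class; the real check would match
# AppsFlyer's customized agent setup.
SPLINE_LISTENER = "za.co.absa.spline.harvester.listener.SplineQueryExecutionListener"

def spline_configured(app: dict) -> bool:
    """Return True if the application's spark-defaults reference the Spline listener."""
    for conf in app.get("runtimeConfiguration", []):
        if conf.get("classification") == "spark-defaults":
            listeners = conf.get("properties", {}).get("spark.sql.queryExecutionListeners", "")
            if SPLINE_LISTENER in listeners:
                return True
    return False

emr = boto3.client("emr-serverless")
for page in emr.get_paginator("list_applications").paginate():
    for summary in page["applications"]:
        app = emr.get_application(applicationId=summary["id"])["application"]
        if not spline_configured(app):
            notify_pagerduty(app["name"])  # hypothetical alerting helper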

Instance integration with Terraform

To automate Spline integration, AppsFlyer used Terraform with local-exec to define Spark defaults for EMR Serverless applications. With Amazon EMR, you can set unified Spark configuration properties through spark-defaults, which are then applied to Spark jobs.
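As a sketch of what that automation configures, the following boto3 call stands in for the Terraform local-exec step; the application ID and agent JAR path are placeholders, and the listener class is Spline's standard Spark agent entry point.

import boto3

emr = boto3.client("emr-serverless")

# Set application-level spark-defaults so every job picks up the Spline agent.
emr.update_application(
    applicationId="<application-id>",
    runtimeConfiguration=[
        {
            "classification": "spark-defaults",
            "properties": {
                "spark.jars": "s3://<agents-bucket>/spline/spark-spline-agent-bundle.jar",
                "spark.sql.queryExecutionListeners": "za.co.absa.spline.harvester.listener.SplineQueryExecutionListener",
            },
        }
    ],
)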

This configuration ensures the Spline agent is automatically applied to every Spark job without requiring changes to the Airflow operator or the job itself.

This robust lineage integration provides the following benefits:

  • Full visibility – Automatic lineage tracking provides detailed insights into data transformations
  • Seamless scalability – Version-specific Spline agents provide compatibility with EMR Serverless applications
  • Proactive monitoring – Automated compliance checks verify that lineage tracking is consistently enabled across accounts
  • Enhanced governance – Ingesting lineage data into DataHub provides traceability, supports audits, and fosters a deeper understanding of data dependencies

By integrating Spline with EMR Serverless applications, AppsFlyer has enabled comprehensive and automated lineage tracking, so teams can better understand their data pipelines while meeting compliance requirements. This scalable approach aligns with AppsFlyer's commitment to maintaining transparency and reliability throughout their data landscape.

Monitoring and observability

When embarking on a large migration, and as a day-to-day best practice, monitoring and observability are key to running workloads successfully in terms of stability, debugging, and cost.

AppsFlyer's DataInfra team set several KPIs for monitoring and observability in EMR Serverless:

  • Monitor infrastructure-level metrics and logs:
    • EMR Serverless resource usage, including cost
    • EMR Serverless API usage
  • Monitor Spark application-level metrics and logs:
    • stdout and stderr logs
    • Spark engine metrics
  • Centralized observability in the existing environment, Datadog

Metrics

Using EMR Serverless native metrics, AppsFlyer's DataInfra team set up several dashboards to support monitoring both the migration and the day-to-day usage of EMR Serverless across the company. The following are the main metrics that were monitored (a polling sketch follows the list):

  • Service quota usage metrics:
    • vCPU usage tracking (ResourceCount with the vCPU dimension)
    • API usage tracking (actual API usage vs. API limits)
  • Application status metrics:
    • RunningJobs, SuccessJobs, FailedJobs, PendingJobs, CancelledJobs
  • Resource limits monitoring:
    • MaxCPUAllowed vs. CPUAllocated
    • MaxMemoryAllowed vs. MemoryAllocated
    • MaxStorageAllowed vs. StorageAllocated
  • Worker-level metrics:
    • WorkerCpuAllocated vs. WorkerCpuUsed
    • WorkerMemoryAllocated vs. WorkerMemoryUsed
    • WorkerEphemeralStorageAllocated vs. WorkerEphemeralStorageUsed
  • Capacity allocation monitoring:
    • Metrics filtered by CapacityAllocationType (PreInitCapacity vs. OnDemandCapacity)
    • ResourceCount
  • Worker type distribution:
    • Metrics filtered by WorkerType (SPARK_DRIVER vs. SPARK_EXECUTORS)
  • Job success rates over time:
    • SuccessJobs vs. FailedJobs ratio
    • SubmittedJobs vs. PendingJobs
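For example, a dashboard or check can poll these metrics from CloudWatch. The following sketch sums job-status metrics for one application; it assumes the AWS/EMRServerless namespace and ApplicationId dimension, with a placeholder application ID.

from datetime import datetime, timedelta
import boto3

cloudwatch = boto3.client("cloudwatch")

def job_count(application_id: str, metric: str) -> float:
    """Sum one EMR Serverless job metric over the last hour for an application."""
    stats = cloudwatch.get_metric_statistics(
        Namespace="AWS/EMRServerless",
        MetricName=metric,  # e.g. "SuccessJobs" or "FailedJobs"
        Dimensions=[{"Name": "ApplicationId", "Value": application_id}],
        StartTime=datetime.utcnow() - timedelta(hours=1),
        EndTime=datetime.utcnow(),
        Period=300,
        Statistics=["Sum"],
    )
    return sum(point["Sum"] for point in stats["Datapoints"])

# Example: the failed-to-successful job ratio tracked over time.
failed = job_count("<application-id>", "FailedJobs")
succeeded = job_count("<application-id>", "SuccessJobs")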

The following screenshot shows an example of the tracked metrics.

Logs

For log management, AppsFlyer's DataInfra team explored several options.

Streamlining EMR Serverless log shipping to Datadog

Because AppsFlyer decided to keep their logs in an external logging environment, the DataInfra team aimed to reduce the number of components involved in the shipping process and minimize maintenance overhead. Instead of managing a Lambda-based log shipper, they developed a custom Spark plugin that seamlessly exports logs from EMR Serverless to Datadog.

Companies already storing logs in Amazon S3 or CloudWatch Logs can use EMR Serverless native support for those destinations. However, for teams needing a direct, real-time integration with Datadog, this approach removes the need for additional infrastructure, providing a more efficient and maintainable logging solution.

The custom Spark plugin offers the following capabilities:

  • Automated log export – Streams logs from EMR Serverless to Datadog
  • Fewer additional components – Removes the need for Lambda-based log shippers
  • Secure API key management – Uses Vault instead of hardcoding credentials
  • Customizable logging – Supports custom Log4j settings and log levels
  • Full integration with Spark – Works on both driver and executor nodes

How the plugin works

In this section, we walk through the components of the plugin and provide a pseudocode overview:

  • Driver plugin – LoggerDriverPlugin runs on the Spark driver to configure logging. The plugin fetches EMR job metadata, calls Vault to retrieve the Datadog API key, and configures logging settings.
initialize() {
  if (user provided log4j.xml) {
     use the custom log configuration
  } else {
     fetch EMR job metadata (application name, job ID, tags)
     retrieve the Datadog API key from Vault
     apply default logging settings
  }
}

  • Executor plugin – LoggerExecutorPlugin provides consistent logging across executor nodes. It inherits the driver's log configuration and makes sure the executors log uniformly.
initialize() {
   fetch logging config from the driver
   apply log settings (log4j, log levels)
}

  • Main plugin – LoggerSparkPlugin registers the driver and executor plugins with Spark. It serves as the entry point for Spark and applies custom logging settings dynamically.
function registerPlugin() {
  return (driverPlugin, executorPlugin);
}

loginToVault(role, vaultAddress) {
    create AWS signed request
    authenticate with Vault
    return vault token
}

getDatadogApiKey(vaultToken, secretPath) {
    fetch API key from Vault
    return key
}

Set up the plugin

To set up the plugin, complete the following steps:

  1. Add the following dependency to your project:
<dependency>
  <groupId>com.AppsFlyer.datacom</groupId>
  <artifactId>emr-serverless-logger-plugin</artifactId>
  <version><!-- insert version here --></version>
</dependency>

  2. Configure the Spark plugin. The following configuration enables the custom Spark plugin and assigns the Vault role used to access the Datadog API key:

--conf "spark.plugins=com.AppsFlyer.datacom.emr.plugin.LoggerSparkPlugin"

--conf "spark.datacom.emr.plugin.vaultAuthRole=your_vault_role"

  3. Use a custom or the default Log4j configuration:

--conf "spark.datacom.emr.plugin.location=classpath:my_custom_log4j.xml"

  4. Set environment variables for different log levels. This adjusts the logging for specific packages:

--conf "spark.emr-serverless.driverEnv.ROOT_LOG_LEVEL=WARN"

--conf "spark.executorEnv.ROOT_LOG_LEVEL=WARN"

--conf "spark.emr-serverless.driverEnv.LOG_LEVEL=DEBUG"

--conf "spark.executorEnv.LOG_LEVEL=DEBUG"

  5. Configure the Vault and Datadog API keys and verify secure Datadog API key retrieval.

By adopting this plugin, AppsFlyer was able to significantly simplify log shipping, reducing the number of moving parts while maintaining real-time log visibility in Datadog. This approach provides reliability, security, and ease of maintenance, making it an ideal solution for teams using EMR Serverless with Datadog.

Summary

Through their migration to EMR Serverless, AppsFlyer achieved a significant transformation in team autonomy and operational efficiency. Individual teams now have greater freedom to choose and build their own resources without relying on a central infrastructure team, and can work more independently and innovatively. The minimization of Spot interruptions, which were common in their previous self-managed Hadoop clusters, has significantly improved the stability and agility of their operations. Thanks to this autonomy and reliability, combined with the automatic scaling capabilities of EMR Serverless, the AppsFlyer teams can focus more on data processing and innovation rather than infrastructure management. The result is a more efficient, flexible, and self-sufficient development environment where teams can better respond to their specific needs while maintaining high performance standards.

Ruli Weisbach, AppsFlyer EVP of R&D, says,

"EMR Serverless is a game changer for AppsFlyer; we're able to significantly reduce our costs with remarkably lower management overhead and maximal elasticity."

If the AppsFlyer approach sparked your interest and you're considering implementing a similar solution in your organization, refer to the Amazon EMR Serverless documentation and related AWS resources.

Migrating to EMR Serverless can transform your organization's data processing capabilities, offering a fully managed, cloud-based experience that automatically scales resources and eases the operational complexity of traditional cluster management, while enabling advanced analytics and machine learning workloads with greater cost-efficiency.


About the authors

Roy Ninio is an AI Platform Lead with deep expertise in scalable data platforms and cloud-native architectures. At AppsFlyer, Roy led the design of a high-performance data lake handling petabytes of daily events, drove the adoption of EMR Serverless for dynamic big data processing, and architected lineage and governance systems across platforms.

Avichay Marciano is a Sr. Analytics Solutions Architect at Amazon Web Services. He has over a decade of experience in building large-scale data platforms using Apache Spark, modern data lake architectures, and OpenSearch. He is passionate about data-intensive systems, analytics at scale, and their intersection with machine learning.

Eitav Arditti is an AWS Senior Solutions Architect with 15 years in the AdTech industry, specializing in serverless, containers, platform engineering, and edge technologies. He designs cost-efficient, large-scale AWS architectures that use cloud-native and edge computing to deliver scalable, reliable solutions for business growth.

Yonatan Dolan is a Principal Analytics Specialist at Amazon Web Services. Yonatan is an Apache Iceberg evangelist, helping customers design scalable, open data lakehouse architectures and adopt modern analytics solutions across industries.
