
How Open Universities Australia modernized their data platform and significantly reduced their ETL costs with AWS Cloud Development Kit and AWS Step Functions


This is a guest post co-authored by Michael Davies from Open Universities Australia.

At Open Universities Australia (OUA), we empower students to explore a vast array of degrees from renowned Australian universities, all delivered through online learning. We offer students alternative pathways to achieve their educational aspirations, providing them with the flexibility and accessibility to reach their academic goals. Since our founding in 1993, we have supported over 500,000 students to achieve their goals by providing pathways to over 2,600 subjects at 25 universities across Australia.

As a not-for-profit organization, cost is a crucial consideration for OUA. While reviewing our contract for the third-party tool we had been using for our extract, transform, and load (ETL) pipelines, we realized that we could replicate much of the same functionality using Amazon Web Services (AWS) services such as AWS Glue, Amazon AppFlow, and AWS Step Functions. We also recognized that we could consolidate our source code (much of which was stored in the ETL tool itself) into a code repository that could be deployed using the AWS Cloud Development Kit (AWS CDK). By doing so, we had an opportunity not only to reduce costs but also to enhance the visibility and maintainability of our data pipelines.

In this post, we show you how we used AWS services to replace our existing third-party ETL tool, improving the team's productivity and producing a significant reduction in our ETL operational costs.

Our approach

The migration initiative consisted of two main parts: building the new architecture and migrating data pipelines from the existing tool to the new architecture. Often, we would work on both in parallel, testing one component of the architecture while developing another at the same time.

From early in our migration journey, we began to define a few guiding principles that we would apply throughout the development process. These were:

  • Simple and modular – Use simple, reusable design patterns with as few moving parts as possible. Structure the code base to prioritize ease of use for developers.
  • Cost-effective – Use resources in an efficient, cost-effective way. Aim to minimize situations where resources are running idly while waiting for other processes to complete.
  • Business continuity – As much as possible, make use of existing code rather than reinventing the wheel. Roll out updates in stages to minimize potential disruption to existing business processes.

Architecture overview

The following Diagram 1 shows the high-level architecture for the solution.

Diagram 1: Overall architecture of the solution, using AWS Step Functions, Amazon Redshift, and Amazon S3

The following AWS services were used to shape our new ETL architecture:

  • Amazon Redshift – A fully managed, petabyte-scale data warehouse service in the cloud. Amazon Redshift served as our central data repository, where we would store data, apply transformations, and make data available for use in analytics and business intelligence (BI). Note: The provisioned cluster itself was deployed separately from the ETL architecture and remained unchanged throughout the migration process.
  • AWS Cloud Development Kit (AWS CDK) – The AWS CDK is an open-source software development framework for defining cloud infrastructure in code and provisioning it through AWS CloudFormation. Our infrastructure was defined as code using the AWS CDK. As a result, we simplified the way we defined the resources we wanted to deploy while using our preferred coding language for development.
  • AWS Step Functions – With AWS Step Functions, you can create workflows, also called state machines, to build distributed applications, automate processes, orchestrate microservices, and create data and machine learning pipelines. AWS Step Functions can call over 200 AWS services including AWS Glue, AWS Lambda, and Amazon Redshift. We used AWS Step Functions state machines to define, orchestrate, and execute our data pipelines.
  • Amazon EventBridge – We used Amazon EventBridge, the serverless event bus service, to define the event-based rules and schedules that would trigger our AWS Step Functions state machines.
  • AWS Glue – A data integration service, AWS Glue consolidates major data integration capabilities into a single service. These include data discovery, modern ETL, cleansing, transforming, and centralized cataloging. It's also serverless, which means there's no infrastructure to manage, and it includes the ability to run Python scripts. We used it for executing long-running scripts, such as for ingesting data from an external API.
  • AWS Lambda – AWS Lambda is a highly scalable, serverless compute service. We used it for executing simple scripts, such as for parsing a single text file.
  • Amazon AppFlow – Amazon AppFlow enables simple integration with software as a service (SaaS) applications. We used it to define flows that would periodically load data from selected operational systems into our data warehouse.
  • Amazon Simple Storage Service (Amazon S3) – An object storage service offering industry-leading scalability, data availability, security, and performance. Amazon S3 served as our staging area, where we would store raw data prior to loading it into other services such as Amazon Redshift. We also used it as a repository for storing code that could be retrieved and used by other services.

Where practical, we made use of the file structure of our code base for defining resources. We set up our AWS CDK app to refer to the contents of a specific directory and define a resource (for example, an AWS Step Functions state machine or an AWS Glue job) for each file it found in that directory. We also made use of configuration files so we could customize the attributes of specific resources as required. A minimal sketch of this idea is shown below.
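The following is a minimal AWS CDK (Python) sketch of the directory-driven approach, not our actual code base: it assumes a hypothetical glue_jobs/ directory of Python scripts and creates one AWS Glue Python shell job per file it finds there. Names, paths, and job settings are illustrative.

```python
# Directory-driven CDK stack sketch: one AWS Glue Python shell job per script
# found in glue_jobs/. All names and paths are illustrative assumptions.
from pathlib import Path

import aws_cdk as cdk
from aws_cdk import aws_glue as glue, aws_iam as iam, aws_s3_assets as s3_assets
from constructs import Construct


class EtlStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Shared execution role for the generated Glue jobs
        glue_role = iam.Role(
            self, "GlueJobRole",
            assumed_by=iam.ServicePrincipal("glue.amazonaws.com"),
            managed_policies=[
                iam.ManagedPolicy.from_aws_managed_policy_name(
                    "service-role/AWSGlueServiceRole"
                )
            ],
        )

        # Define a Glue job for each script file found in the directory
        for script in sorted(Path("glue_jobs").glob("*.py")):
            asset = s3_assets.Asset(self, f"{script.stem}Asset", path=str(script))
            asset.grant_read(glue_role)
            glue.CfnJob(
                self, f"{script.stem}Job",
                name=script.stem,
                role=glue_role.role_arn,
                command=glue.CfnJob.CommandProperty(
                    name="pythonshell",
                    python_version="3.9",
                    script_location=asset.s3_object_url,
                ),
                glue_version="3.0",
                max_capacity=0.0625,
            )


app = cdk.App()
EtlStack(app, "EtlStack")
app.synth()
```

In practice, a per-resource configuration file (for example, a YAML file keyed by script name) can override attributes such as capacity or schedule for individual jobs.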

Details on specific patterns

In the preceding architecture (Diagram 1), we showed several flows by which data could be ingested into or unloaded from our Amazon Redshift data warehouse. In this section, we highlight four specific patterns in more detail that were used in the final solution.

Pattern 1: Data transformation, load, and unload

Several of our data pipelines included significant data transformation steps, which were primarily performed through SQL statements executed by Amazon Redshift. Others required ingestion or unloading of data from the data warehouse, which could be performed efficiently using COPY or UNLOAD statements executed by Amazon Redshift.

In line with our goal of using resources efficiently, we sought to avoid running these statements from within the context of an AWS Glue job or AWS Lambda function, because those processes would remain idle while waiting for the SQL statement to complete. Instead, we opted for an approach where SQL execution tasks would be orchestrated by an AWS Step Functions state machine, which would send the statements to Amazon Redshift and periodically check their progress before marking them as either successful or failed. The following Diagram 2 shows this workflow.

Diagram 2: Data transformation, load, and unload pattern using AWS Lambda and Amazon Redshift within an AWS Step Functions state machine
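As a sketch of how this submit-and-poll pattern can look, the following two Lambda handlers use the Amazon Redshift Data API (boto3 "redshift-data"), which lets a statement run asynchronously so nothing sits idle waiting on it; the cluster, database, and secret identifiers are illustrative assumptions, and the state machine would loop Wait → poll → Choice until the status is FINISHED or FAILED.

```python
# Submit-and-poll Lambda handlers for SQL run by Amazon Redshift.
# Cluster, database, and secret names are illustrative assumptions.
import boto3

redshift_data = boto3.client("redshift-data")


def submit_handler(event, context):
    """Send a SQL statement to Amazon Redshift without waiting for it to finish."""
    response = redshift_data.execute_statement(
        ClusterIdentifier=event["cluster_id"],  # e.g. "analytics-cluster"
        Database=event["database"],             # e.g. "warehouse"
        SecretArn=event["secret_arn"],          # credentials stored in Secrets Manager
        Sql=event["sql"],                       # e.g. "COPY staging.orders FROM 's3://...'"
    )
    return {"statement_id": response["Id"]}


def poll_handler(event, context):
    """Check the status of a previously submitted statement."""
    response = redshift_data.describe_statement(Id=event["statement_id"])
    status = response["Status"]  # SUBMITTED | PICKED | STARTED | FINISHED | FAILED | ABORTED
    if status == "FAILED":
        raise RuntimeError(f"Redshift statement failed: {response.get('Error')}")
    return {"statement_id": event["statement_id"], "status": status}
```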

Pattern 2: Data replication using AWS Glue

In cases where we needed to replicate data from a third-party source, we used AWS Glue to run a script that would query the relevant API, parse the response, and store the relevant data in Amazon S3. From here, we used Amazon Redshift to ingest the data using a COPY statement. The following Diagram 3 shows this workflow.

Diagram 3: Copying from an external API to Amazon Redshift with AWS Glue

Note: Another option for this step would be to use Amazon Redshift auto-copy, but this wasn't available at the time of development.
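The following is a sketch of what such an AWS Glue Python shell script might look like, assuming a hypothetical paginated JSON API and staging bucket (and that the requests library is available to the job); the subsequent COPY into Amazon Redshift is issued separately by the orchestrating state machine, as in Pattern 1.

```python
# Glue Python shell script sketch: query an external API, land the records in S3.
# The endpoint, bucket, key, and field names are illustrative assumptions.
import json

import boto3
import requests

API_URL = "https://api.example.com/v1/enrolments"  # hypothetical third-party API
BUCKET = "example-staging-bucket"                   # hypothetical staging bucket
KEY = "raw/enrolments/enrolments.jsonl"

s3 = boto3.client("s3")


def fetch_all_pages(url):
    """Walk the API's pagination and collect every record."""
    records, page = [], 1
    while True:
        response = requests.get(url, params={"page": page}, timeout=30)
        response.raise_for_status()
        body = response.json()
        records.extend(body["results"])
        if not body.get("next_page"):
            break
        page += 1
    return records


def main():
    records = fetch_all_pages(API_URL)
    # Newline-delimited JSON suits a later COPY ... FORMAT AS JSON 'auto'
    payload = "\n".join(json.dumps(record) for record in records)
    s3.put_object(Bucket=BUCKET, Key=KEY, Body=payload.encode("utf-8"))


if __name__ == "__main__":
    main()
```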

Pattern 3: Data replication using Amazon AppFlow

For certain applications, we were able to use Amazon AppFlow flows in place of AWS Glue jobs. As a result, we could abstract away some of the complexity of querying external APIs directly. We configured our Amazon AppFlow flows to store the output data in Amazon S3, then used an EventBridge rule based on an End Flow Run Report event (which is an event that is published when a flow run is complete) to trigger a load into Amazon Redshift using a COPY statement. The following Diagram 4 shows this workflow.

By using Amazon S3 as an intermediate data store, we gave ourselves greater control over how the data was processed when it was loaded into Amazon Redshift, compared with loading the data directly into the data warehouse using Amazon AppFlow.

Diagram 4: Using Amazon AppFlow to integrate external data into Amazon S3 and copy it into Amazon Redshift
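As a sketch of the trigger side of this pattern, the following AWS CDK (Python) snippet defines an EventBridge rule that matches the event AppFlow publishes when a flow run completes and starts a state machine that performs the COPY; the flow name, state machine ARN, and account details are illustrative assumptions.

```python
# EventBridge rule sketch: start a COPY state machine when an AppFlow run completes.
# Flow name, state machine ARN, and account/region are illustrative assumptions.
import aws_cdk as cdk
from aws_cdk import (
    aws_events as events,
    aws_events_targets as targets,
    aws_stepfunctions as sfn,
)
from constructs import Construct


class AppFlowTriggerStack(cdk.Stack):
    def __init__(self, scope: Construct, construct_id: str, **kwargs) -> None:
        super().__init__(scope, construct_id, **kwargs)

        # Existing state machine that runs the COPY into Amazon Redshift (Pattern 1)
        copy_state_machine = sfn.StateMachine.from_state_machine_arn(
            self, "CopyStateMachine",
            "arn:aws:states:ap-southeast-2:123456789012:stateMachine:copy-into-redshift",
        )

        events.Rule(
            self, "AppFlowRunCompleteRule",
            event_pattern=events.EventPattern(
                source=["aws.appflow"],
                detail_type=["AppFlow End Flow Run Report"],
                detail={"flow-name": ["salesforce-to-s3"]},  # hypothetical flow name
            ),
            targets=[targets.SfnStateMachine(copy_state_machine)],
        )
```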

Pattern 4: Reverse ETL

Although most of our workflows involve data being brought into the data warehouse from external sources, in some cases we needed data to be exported to external systems instead. This way, we could run SQL queries with complex logic drawing on multiple data sources and use this logic to support operational requirements, such as identifying which groups of students should receive specific communications.

In this flow, shown in the following Diagram 5, we start by running an UNLOAD statement in Amazon Redshift to unload the relevant data to files in Amazon S3. From here, each file is processed by an AWS Lambda function, which performs any necessary transformations and sends the data to the external application through one or more API calls.

Diagram 5: Reverse ETL workflow, sending data back out to external data sources
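The following is a sketch of such a Lambda handler, assuming the state machine (or an S3 event) passes the bucket and key of one unloaded file, the UNLOAD used the CSV and HEADER options, and the external application exposes a simple JSON endpoint; the URL and field names are illustrative.

```python
# Reverse ETL Lambda handler sketch: read one UNLOADed CSV file from S3 and push
# each row to an external API. Endpoint and event fields are illustrative assumptions.
import csv
import io
import json
import urllib.request

import boto3

EXTERNAL_API_URL = "https://api.example.com/v1/communications"  # hypothetical endpoint

s3 = boto3.client("s3")


def handler(event, context):
    """Process a single unloaded file and forward its rows to the external system."""
    obj = s3.get_object(Bucket=event["bucket"], Key=event["key"])
    body = obj["Body"].read().decode("utf-8")

    sent = 0
    for row in csv.DictReader(io.StringIO(body)):
        request = urllib.request.Request(
            EXTERNAL_API_URL,
            data=json.dumps(row).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(request, timeout=10):
            sent += 1

    return {"records_sent": sent}
```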

Results

The re-architecture and migration process took 5 months to complete, from the initial concept to the successful decommissioning of the previous third-party tool. Most of the architectural effort was completed by a single full-time employee, with others on the team primarily assisting with the migration of pipelines to the new architecture.

We achieved significant cost reductions, with final expenses on AWS native services representing only a small percentage of the projected costs of continuing with the third-party ETL tool. Moving to a code-based approach also gave us greater visibility of our pipelines and made the process of maintaining them quicker and easier. Overall, the transition was seamless for our end users, who were able to view the same data and dashboards both during and after the migration, with minimal disruption along the way.

Conclusion

By using the scalability and cost-effectiveness of AWS services, we were able to optimize our data pipelines, reduce our operational costs, and improve our agility.

Pete Allen, an analytics engineer from Open Universities Australia, says, "Modernizing our data architecture with AWS has been transformative. Transitioning from an external platform to an in-house, code-based analytics stack has vastly improved our scalability, flexibility, and performance. With AWS, we can now process and analyze data with much faster turnaround, lower costs, and higher availability, enabling rapid development and deployment of data solutions, leading to deeper insights and better business decisions."

About the Authors

Michael Davies is a Data Engineer at OUA. He has extensive experience across the education industry, with a particular focus on building robust and efficient data architecture and pipelines.

Emma Arrigo is a Solutions Architect at AWS, focusing on education customers across Australia. She specializes in leveraging cloud technology and machine learning to address complex business challenges in the education sector. Emma's passion for data extends beyond her professional life, as evidenced by her dog named Data.
