
Amazon Redshift announces history mode for zero-ETL integrations to simplify historical data tracking and analysis


In the ever-evolving landscape of cloud computing and data management, AWS has consistently been at the forefront of innovation. One of the groundbreaking developments in recent years is zero-ETL integration, a set of fully managed integrations by AWS that minimizes the need to build extract, transform, and load (ETL) data pipelines. This post will explore a brief history of zero-ETL, its significance for customers, and introduce an exciting new feature: history mode for Amazon Aurora PostgreSQL-Compatible Edition, Amazon Aurora MySQL-Compatible Edition, Amazon Relational Database Service (Amazon RDS) for MySQL, and Amazon DynamoDB zero-ETL integration with Amazon Redshift.

A brief history of zero-ETL integrations

The concept of zero-ETL integrations emerged as a response to the growing complexities and inefficiencies in traditional ETL processes. Traditional ETL processes are time-consuming and complex to develop, maintain, and scale. Although not all use cases can be replaced with zero-ETL, it simplifies replication and allows you to apply transformations post-replication. This eliminates the need for additional ETL technology between the source database and Amazon Redshift. We at AWS recognized the need for a more streamlined approach to data integration, particularly between operational databases and cloud data warehouses. The journey of zero-ETL began in late 2022 when we launched the feature for Aurora MySQL with Amazon Redshift. This feature marked a pivotal moment in streamlining complex data workflows, enabling near real-time data replication and analysis while eliminating the need for ETL processes.

Building on the success of our first zero-ETL integration, we've made continuous strides in this space by working backward from our customers' needs and launching features like data filtering, auto and incremental refresh of materialized views, refresh interval, and more. Additionally, we increased the breadth of sources to include Aurora PostgreSQL, DynamoDB, and Amazon RDS for MySQL to Amazon Redshift integrations, solidifying our commitment to making it seamless for you to run analytics on your data. The introduction of zero-ETL was not just a technological advancement; it represented a paradigm shift in how organizations could approach their data strategies. By removing the need for intermediate data processing steps, we opened up new possibilities for near real-time analytics and decision-making.

Introducing history mode: A new frontier in data analysis

Zero-ETL has already simplified data integration, and we're excited to further enhance its capabilities by announcing a new feature that takes it a step further: history mode with Amazon Redshift. Using history mode with zero-ETL integrations, you can streamline your historical data analysis by maintaining full change data capture (CDC) from the source in Amazon Redshift. History mode enables you to unlock the full potential of your data by seamlessly capturing and retaining historical versions of records across your zero-ETL data sources. You can perform advanced historical analysis, build look-back reports, perform trend analysis, and create slowly changing dimension (SCD) Type 2 tables on Amazon Redshift. This allows you to consolidate your core analytical assets and derive insights across multiple applications, gaining cost savings and operational efficiencies. History mode also allows organizations to comply with regulatory requirements for maintaining historical records, facilitating comprehensive data governance and informed decision-making.

Zero-ETL integrations provide a current view of records in near real time, meaning only the latest changes from source databases are retained on Amazon Redshift. With history mode, Amazon Redshift introduces a revolutionary approach to historical data analysis. You can now configure your zero-ETL integrations to track every version of your records in source tables directly in Amazon Redshift, along with a source timestamp on each record version indicating when the record was inserted, modified, or deleted. Because data changes are tracked and retained by Amazon Redshift, this can help you meet your compliance requirements without having to maintain duplicate copies in data sources. In addition, you don't need to maintain and manage partitioned tables to keep older data intact as separate partitions to version records, or keep historical data in source databases.

In a data warehouse, the most common dimensional modeling technique is a star schema, where there is a fact table at the center surrounded by a number of associated dimension tables. A dimension is a structure that categorizes facts and measures in order to enable users to answer business questions. To illustrate an example, in a typical sales domain, customer, time, or product are dimensions and sales transactions is a fact. An SCD is a data warehousing concept that contains relatively static data that can change slowly over a period of time. There are three major types of SCDs maintained in data warehousing: Type 1 (no history), Type 2 (full history), and Type 3 (limited history). CDC is a characteristic of a database that provides the ability to identify the data that changed between two database loads, so that an action can be performed on the changed data.

In this post, we demonstrate how to enable history mode for tables in a zero-ETL integration and capture the full historical data changes as an SCD2 table.

Solution overview

In this use case, we explore how a fictional national retail chain, AnyCompany, uses AWS services to gain valuable insights into their customer base. With multiple locations across the country, AnyCompany aims to enhance their understanding of customer behavior and improve their marketing strategies through two key initiatives:

  • Customer migration analysis – AnyCompany seeks to track and analyze customer relocation patterns, focusing on how geographical moves influence purchasing behavior. By monitoring these changes, the company can adapt its inventory, services, and local marketing efforts to better serve customers in their new locations.
  • Marketing campaign effectiveness – The retailer wants to evaluate the impact of targeted marketing campaigns based on customer demographics at the time of campaign execution. This analysis can help AnyCompany refine its marketing strategies, optimize resource allocation, and improve overall campaign performance.

By closely monitoring changes in customer profiles for both geographic movement and marketing responsiveness, AnyCompany is positioning itself to make more informed, data-driven decisions.

In this demonstration, we begin by loading a sample dataset into the source table, customer, in Aurora PostgreSQL-Compatible. To maintain historical records, we enable history mode on the customer table, which automatically tracks changes in Amazon Redshift.

When history mode is turned on, the following columns are automatically added to the target table, customer, in Amazon Redshift to keep track of changes in the source.

Column name         | Data type | Description
_record_is_active   | Boolean   | Indicates whether a record in the target is currently active in the source. True indicates the record is active.
_record_create_time | Timestamp | Starting time (UTC) when the source record is active.
_record_delete_time | Timestamp | Ending time (UTC) when the source record is updated or deleted.
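
To illustrate how these columns express SCD Type 2 history, the following standalone query sketches what two versions of one customer record look like after an address change; the customer ID, cities, and timestamps are hypothetical values:

-- Illustrative only: two versions of one customer record under history mode
SELECT *
FROM (
    SELECT 'CUST001' AS c_customer_id, 'Boston' AS ca_city,
           FALSE AS _record_is_active,
           TIMESTAMP '2024-01-01 00:00:00' AS _record_create_time,
           TIMESTAMP '2024-06-01 00:00:00' AS _record_delete_time
    UNION ALL
    SELECT 'CUST001', 'New York', TRUE,
           TIMESTAMP '2024-06-01 00:00:00', CAST(NULL AS TIMESTAMP)
) AS scd2_example;

The expired version's delete timestamp equals the create timestamp of the version that replaced it, mirroring the behavior demonstrated later in this post.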

Next, we create a dimension table, customer_dim, in Amazon Redshift with an additional surrogate key column to show an example of creating an SCD table. To optimize query performance for different queries, some of which might analyze only active or inactive records while others might analyze data as of a certain date, we defined a sort key consisting of the _record_is_active, _record_create_time, and _record_delete_time attributes in the customer_dim table.

The following figure shows the schema of the source table in Aurora PostgreSQL-Compatible, and the target table and target customer dimension table in Amazon Redshift.

To streamline the data population process, we developed a stored procedure named SP_Customer_Type2_SCD(). This procedure is designed to populate incremental data into the customer_dim table from the replicated customer table. It handles various data changes, including updates, inserts, and deletes in the source table, implementing an SCD2 approach.

Prerequisites

Before you get started, complete the following steps:

  1. Configure your Aurora DB cluster and your Redshift data warehouse with the required parameters and permissions. For instructions, refer to Getting started with Aurora zero-ETL integrations with Amazon Redshift.
  2. Create an Aurora zero-ETL integration with Amazon Redshift.
  3. From an Amazon Elastic Compute Cloud (Amazon EC2) terminal or using AWS CloudShell, run the following commands to install psql for connecting to the Aurora PostgreSQL cluster:
sudo dnf install postgresql15
psql --version

  4. Load the sample source data:
    • Download the TPC-DS sample dataset for the customer table onto the machine running psql.
    • From the EC2 terminal, run the following command to connect to the Aurora PostgreSQL DB using the default superuser postgres:
      psql -h <RDS Write Instance Endpoint> -p 5432 -U postgres

    • Run the following SQL command to create the database zetl:
      create database zetl template template1;

    • Switch the connection to the newly created database:
      \c zetl

    • Create the customer table (the following example creates it in the public schema):
      CREATE TABLE customer(
          c_customer_id char(16) NOT NULL PRIMARY KEY,
          c_salutation char(10),
          c_first_name char(20),
          c_last_name char(30),
          c_preferred_cust_flag char(1),
          c_birth_day int4,
          c_birth_month int4,
          c_birth_year int4,
          c_birth_country varchar(20),
          c_login char(13),
          c_email_address char(50),
          ca_street_number char(10),
          ca_street_name varchar(60),
          ca_street_type char(15),
          ca_suite_number char(10),
          ca_city varchar(60),
          ca_county varchar(30),
          ca_state char(2),
          ca_zip char(10),
          ca_country varchar(20),
          ca_gmt_offset numeric(5, 2),
          ca_location_type char(20)
      );

    • Run the following command to load customer data from the downloaded dataset after changing the highlighted location of the dataset to your directory path:
      \copy customer from '/home/ec2-user/customer_sample_data.dat' WITH DELIMITER '|' CSV;

    • Run the following query to validate the successful creation of the table and the loading of sample data:
      SELECT table_catalog, table_schema, table_name, n_live_tup AS row_count
      FROM information_schema.tables JOIN pg_stat_user_tables ON table_name = relname
      WHERE table_type = 'BASE TABLE'
      ORDER BY row_count DESC;

The SQL output should be as follows:

table_catalog | table_schema | table_name | row_count
---------------+--------------+------------+-----------
zetl          | public       | customer   |   1200585
(1 row)

Create a target database in Amazon Redshift

To replicate data from your source into Amazon Redshift, you must create a target database from your integration in Amazon Redshift. For this post, we already created a source database called zetl in Aurora PostgreSQL-Compatible as part of the prerequisites. Complete the following steps to create the target database:

  1. On the Amazon Redshift console, choose Query editor v2 in the navigation pane.
  2. Run the following commands to create a database called postgres in Amazon Redshift using the zero-ETL integration_id, with history mode turned on.
-- Amazon Redshift SQL commands to create database
SELECT integration_id FROM svv_integration; -- copy this result, use in the next SQL
CREATE DATABASE "postgres" FROM INTEGRATION '<result from above>' DATABASE "zetl" SET HISTORY_MODE = TRUE;

Turning on history mode at the time of target database creation on Amazon Redshift enables history mode for existing tables and for new tables created in the future.

  3. Run the following query to validate the successful replication of the initial data from the source into Amazon Redshift:
select is_history_mode, table_name, table_state, * from svv_integration_table_state;

The table customer should show table_state as Synced with is_history_mode as true.

Enable history mode for existing zero-ETL integrations

History mode can be enabled for your existing zero-ETL integrations using either the Amazon Redshift console or SQL commands. Based on your use case, you can turn on history mode at the database, schema, or table level. To use the Amazon Redshift console, complete the following steps:

  1. On the Amazon Redshift console, choose Zero-ETL integrations in the navigation pane.
  2. Choose your desired integration.
  3. Choose Manage history mode.

On this page, you can either enable or disable history mode for all tables or a subset of tables.

  4. Select Manage history mode for individual tables and select Turn on for history mode for the customer table.
  5. Choose Save changes.
  6. To confirm the changes, choose Table statistics and make sure History mode is On for the customer table.
  7. Optionally, you can run the following SQL command in Amazon Redshift to enable history mode for the customer table:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR TABLE public.customer;

  8. Optionally, you can enable history mode for all existing tables and tables created in the future in the database:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR ALL TABLES;

  9. Optionally, you can enable history mode for all existing and future tables in one or more schemas. The following query enables history mode for all existing and future tables in the public schema:
ALTER DATABASE "postgres" INTEGRATION SET HISTORY_MODE = TRUE FOR ALL TABLES IN SCHEMA public;

  10. Run the following query to validate that the customer table has been successfully changed to history mode, with the is_history_mode column shown as true, so that it can begin tracking every version (including updates and deletes) of all records changed in the source:
select is_history_mode, table_name, table_state, * from svv_integration_table_state;

Initially, the table will be in the ResyncInitiated state before changing to Synced.

  11. Run the following query in the zetl database of Aurora PostgreSQL-Compatible to modify a source record and observe the behavior of history mode in the Amazon Redshift target:
UPDATE customer
SET
    ca_suite_number = 'Suite 100',
    ca_street_number = '500',
    ca_street_name = 'Main',
    ca_street_type = 'St.',
    ca_city = 'New York',
    ca_county = 'Manhattan',
    ca_state = 'NY',
    ca_zip = '10001'
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';
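
Deletes are tracked the same way. As an illustrative aside (not part of the original walkthrough), deleting a source record keeps its final version in Amazon Redshift with _record_is_active set to false and _record_delete_time recording when the delete occurred. The customer ID below is hypothetical:

-- Run in the zetl database on Aurora PostgreSQL-Compatible; the ID is hypothetical
DELETE FROM customer
WHERE c_customer_id = 'AAAAAAAAAAALNAAA';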

  12. Now run the following query in the postgres database of Amazon Redshift to see all versions of the same record:
SELECT   
    c_customer_id,
    ca_street_number,
    ca_street_name,
    ca_suite_number,
    ca_city,
    ca_county,
    ca_state,
    ca_zip,
    _record_is_active,
    _record_create_time,
    _record_delete_time
FROM postgres.public.customer
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';

The zero-ETL integration with history mode has inactivated the old record, setting the _record_is_active column value to false, and created a new record with _record_is_active as true. You can also see how it maintains the _record_create_time and _record_delete_time column values for both records. The inactive record has a delete timestamp that matches the active record's create timestamp.
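
To see how many versions have accumulated, a query along the following lines (a sketch, not part of the original walkthrough) summarizes total and active record versions per customer in the replicated table:

-- Count total and active record versions per customer in the history mode table
SELECT
    c_customer_id,
    COUNT(*) AS total_versions,
    SUM(CASE WHEN _record_is_active THEN 1 ELSE 0 END) AS active_versions
FROM postgres.public.customer
GROUP BY c_customer_id
ORDER BY total_versions DESC
LIMIT 10;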

Load incremental data in an SCD2 table

Complete the following steps to create an SCD2 table and implement an incremental data load process in a regular database of Amazon Redshift, in this case dev:

  1. Create an empty customer SCD2 table called customer_dim with SCD fields. The table also has DISTSTYLE AUTO and the SORTKEY columns _record_is_active, _record_create_time, and _record_delete_time. When you define a sort key on a table, Amazon Redshift can skip reading entire blocks of data for that column. It can do so because it tracks the minimum and maximum column values stored on each block and can skip blocks that don't apply to the predicate range.
CREATE TABLE dev.public.customer_dim (
    c_customer_sk bigint NOT NULL DEFAULT 0 ENCODE raw distkey,
    c_customer_id character varying(19) DEFAULT ''::character varying ENCODE lzo,
    c_salutation character varying(12) ENCODE bytedict,
    c_first_name character varying(24) ENCODE lzo,
    c_last_name character varying(36) ENCODE lzo,
    c_preferred_cust_flag character varying(1) ENCODE lzo,
    c_birth_day integer ENCODE az64,
    c_birth_month integer ENCODE az64,
    c_birth_year integer ENCODE az64,
    c_birth_country character varying(24) ENCODE bytedict,
    c_login character varying(15) ENCODE lzo,
    c_email_address character varying(60) ENCODE lzo,
    ca_street_number character varying(12) ENCODE lzo,
    ca_street_name character varying(72) ENCODE lzo,
    ca_street_type character varying(18) ENCODE bytedict,
    ca_suite_number character varying(12) ENCODE bytedict,
    ca_city character varying(72) ENCODE lzo,
    ca_county character varying(36) ENCODE lzo,
    ca_state character varying(2) ENCODE lzo,
    ca_zip character varying(12) ENCODE lzo,
    ca_country character varying(24) ENCODE lzo,
    ca_gmt_offset numeric(5, 2) ENCODE az64,
    ca_location_type character varying(24) ENCODE bytedict,
    _record_is_active boolean ENCODE raw,
    _record_create_time timestamp without time zone ENCODE az64,
    _record_delete_time timestamp without time zone ENCODE az64,
    PRIMARY KEY (c_customer_sk)
) SORTKEY (
    _record_is_active,
    _record_create_time,
    _record_delete_time
);

Next, you create a stored procedure called SP_Customer_Type2_SCD() to populate incremental data in the customer_dim SCD2 table created in the preceding step. The stored procedure contains the following components:

    • First, it fetches the max _record_create_time and max _record_delete_time for each customer_id.
    • Then, it compares the output of the preceding step with the ongoing zero-ETL integration replicated table, looking for records created after the max creation time in the dimension table, or records in the replicated table with a _record_delete_time after the max _record_delete_time in the dimension table, for each customer_id.
    • The output of the preceding step captures the changed data between the replicated customer table and the target customer_dim dimension table. The interim data is staged to a customer_stg table, which will be merged with the target table.
    • During the merge process, records that need to be deleted are marked with _record_delete_time and _record_is_active set to false, while newly created records are inserted into the target table customer_dim with _record_is_active as true.
  2. Create the stored procedure with the following code:
CREATE OR REPLACE PROCEDURE public.sp_customer_type2_scd()
LANGUAGE plpgsql
AS $$
    BEGIN

    DROP TABLE IF EXISTS cust_latest;

    -- Create temp table with latest record timestamps
    CREATE TEMP TABLE cust_latest DISTKEY (c_customer_id)
    AS
        SELECT
            c_customer_id,
            max(_record_create_time) AS _record_create_time,
            max(_record_delete_time) AS _record_delete_time
        FROM customer_dim
        GROUP BY c_customer_id;

    DROP TABLE IF EXISTS customer_stg;

    -- Identify and stage changed records
    CREATE TEMP TABLE customer_stg
    AS
        SELECT
            ABS(fnv_hash(cust.c_customer_id)) AS customer_sk,
            cust.*
        FROM
            postgres.public.customer cust
        LEFT OUTER JOIN cust_latest ON cust.c_customer_id = cust_latest.c_customer_id
        WHERE (cust._record_create_time > NVL(cust_latest._record_create_time, '1099-01-01 01:01:01') AND cust._record_is_active IS TRUE)
           OR (cust._record_delete_time > NVL(cust_latest._record_delete_time, '1099-01-01 01:01:01') AND cust._record_is_active IS FALSE);

    -- Merge changes to customer dimension table
    MERGE INTO public.customer_dim
    USING customer_stg stg
    ON customer_dim.c_customer_id = stg.c_customer_id
        AND customer_dim._record_is_active = TRUE
        AND stg._record_is_active = FALSE
    WHEN MATCHED THEN
        UPDATE
        SET
            _record_is_active = stg._record_is_active,
            _record_create_time = stg._record_create_time,
            _record_delete_time = stg._record_delete_time
    WHEN NOT MATCHED THEN
        INSERT
        VALUES
            (
                stg.customer_sk,
                stg.c_customer_id,
                stg.c_salutation,
                stg.c_first_name,
                stg.c_last_name,
                stg.c_preferred_cust_flag,
                stg.c_birth_day,
                stg.c_birth_month,
                stg.c_birth_year,
                stg.c_birth_country,
                stg.c_login,
                stg.c_email_address,
                stg.ca_street_number,
                stg.ca_street_name,
                stg.ca_street_type,
                stg.ca_suite_number,
                stg.ca_city,
                stg.ca_county,
                stg.ca_state,
                stg.ca_zip,
                stg.ca_country,
                stg.ca_gmt_offset,
                stg.ca_location_type,
                stg._record_is_active,
                stg._record_create_time,
                stg._record_delete_time
            );

    END;
    $$;

  3. Run and schedule the stored procedure to load the initial and ongoing incremental data into the customer_dim SCD2 table:
CALL SP_Customer_Type2_SCD();

  4. Validate the data in the customer_dim table for the same customer with a changed address:
SELECT
    c_customer_id,
    ca_street_number,
    ca_street_name,
    ca_suite_number,
    ca_city,
    ca_county,
    ca_state,
    ca_zip,
    _record_is_active,
    _record_create_time,
    _record_delete_time
FROM customer_dim
WHERE c_customer_id = 'AAAAAAAAAAAKNAAA';


You have successfully implemented an incremental load strategy for the customer SCD2 table. Going forward, all changes to customer data will be tracked and maintained in this customer dimension table by running the stored procedure. This enables you to analyze customer data at a desired point in time for different use cases, for example, performing customer migration analysis to see how geographical moves influence purchasing behavior, or marketing campaign effectiveness analysis to evaluate the impact of targeted marketing campaigns on customer demographics at the time of campaign execution.

Industry use cases for history mode

The following are other industry use cases enabled by history mode between operational data stores and Amazon Redshift:

  • Financial auditing or regulatory compliance – Track changes in financial records over time to support compliance and audit requirements. History mode allows auditors to reconstruct the state of financial data at any point in time, which is crucial for investigations and regulatory reporting.
  • Customer journey analysis – Understand how customer data evolves to gain insights into behavior patterns and preferences. Marketers can analyze how customer profiles change over time, informing personalization strategies and lifetime value calculations.
  • Supply chain optimization – Analyze historical inventory and order data to identify trends and optimize stock levels. Supply chain managers can review how demand patterns have shifted over time, improving forecasting accuracy.
  • HR analytics – Track employee data changes over time for better workforce planning and performance analysis. HR professionals can analyze career progression, salary changes, and skill development trends across the organization.
  • Machine learning model auditing – Data scientists can use historical data to train models, compare predictions vs. actuals to improve accuracy, and help explain model behavior and identify potential biases over time.
  • Hospitality and airline industry use cases – For example:
    • Customer service – Access historical reservation data to swiftly address customer queries, enhancing service quality and customer satisfaction.
    • Crew scheduling – Track crew schedule changes to help comply with union contracts, maintaining positive labor relations and optimizing workforce management.
    • Data science applications – Use historical data to train models on multiple scenarios from different time periods. Compare predictions against actuals to improve model accuracy for key operations such as airport gate management, flight prioritization, and crew scheduling optimization.

Best practices

If your requirement is to separate active and inactive records, you can use _record_is_active as the first sort key. For other patterns where you want to analyze data as of a specific date in the past, regardless of whether it is active or inactive, _record_create_time and _record_delete_time can be added as sort keys.
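
For example, an as-of query against customer_dim might look like the following sketch. It assumes the active version of a record carries a NULL _record_delete_time, and the cutoff date is illustrative:

-- Reconstruct customer records as they existed at a given point in time
SELECT c_customer_id, ca_city, ca_state, ca_zip
FROM customer_dim
WHERE _record_create_time <= TIMESTAMP '2025-01-01 00:00:00'
  AND (_record_delete_time IS NULL
       OR _record_delete_time > TIMESTAMP '2025-01-01 00:00:00');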

History mode retains record versions, which can increase the table size in Amazon Redshift and might impact query performance. Therefore, periodically perform DML deletes of old record versions (delete data beyond a certain timeframe if it's not needed for analysis). When executing these deletions, maintain data integrity by deleting across all related tables. Vacuuming also becomes necessary after you perform DML deletes on records whose versioning is no longer required; Amazon Redshift auto vacuum delete is more efficient when operating on bulk deletes. You can track vacuum progress using the SYS_VACUUM_HISTORY table.
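
A retention job could look like the following sketch; the 7-year window is an illustrative assumption, and running the delete as one bulk operation helps auto vacuum delete work efficiently:

-- Prune inactive record versions older than the retention window (illustrative cutoff)
DELETE FROM customer_dim
WHERE _record_is_active = FALSE
  AND _record_delete_time < DATEADD(year, -7, GETDATE());

-- Optionally reclaim space immediately instead of waiting for auto vacuum
VACUUM DELETE ONLY customer_dim;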

Clean up

Complete the following steps to clean up your resources:

  1. Delete the Aurora PostgreSQL cluster.
  2. Delete the Redshift cluster.
  3. Delete the EC2 instance.

Conclusion

Zero-ETL integrations have already made significant strides in simplifying data integration and enabling near real-time analytics. With the addition of history mode, AWS continues to innovate, providing you with even more powerful tools to derive value from your data.

As businesses increasingly rely on data-driven decision-making, zero-ETL with history mode will be crucial for maintaining a competitive edge in the digital economy. These advancements not only streamline data processes but also open up new avenues for analysis and insight generation.

To learn more about zero-ETL integration with history mode, refer to Zero-ETL integrations and Limitations. Get started with zero-ETL on AWS by creating a free account today!


About the Authors

Raks Khare is a Senior Analytics Specialist Solutions Architect at AWS based out of Pennsylvania. He helps customers across varied industries and regions architect data analytics solutions at scale on the AWS platform. Outside of work, he likes exploring new travel and food destinations and spending quality time with his family.

Jyoti Aggarwal is a Product Management Lead for AWS zero-ETL. She leads the product and business strategy, including driving initiatives around performance, customer experience, and security. She brings expertise in cloud compute, data pipelines, analytics, artificial intelligence (AI), and data services including databases, data warehouses, and data lakes.

Gopal Paliwal is a Principal Engineer for Amazon Redshift, leading the software development of ZeroETL initiatives for Amazon Redshift.

Harman Nagra is a Principal Solutions Architect at AWS, based in San Francisco. He works with global financial services organizations to design, develop, and optimize their workloads on AWS.

Sumanth Punyamurthula is a Senior Data and Analytics Architect at Amazon Web Services with more than 20 years of experience in leading large analytical initiatives, including analytics, data warehousing, data lakes, data governance, security, and cloud infrastructure across the travel, hospitality, financial, and healthcare industries.
