
Unlock self-serve streaming SQL with Amazon Managed Service for Apache Flink


This post is co-written with Gal Krispel from Riskified.

Riskified is an ecommerce fraud prevention and risk management platform that helps businesses optimize online transactions by distinguishing legitimate customers from fraudulent ones.

Using artificial intelligence and machine learning (AI/ML), Riskified analyzes real-time transaction data to detect and prevent fraud while maximizing transaction approval rates. The platform provides a chargeback guarantee, protecting merchants from losses due to fraudulent transactions. Riskified’s solutions include account protection, policy abuse prevention, and chargeback management software, making it a comprehensive tool for reducing risk and improving customer experience. Businesses across various industries, including retail, travel, and digital goods, use Riskified to increase revenue while minimizing fraud-related losses. Riskified’s core business of real-time fraud prevention makes low-latency streaming technologies a fundamental part of its solution.

Businesses often can’t afford to wait for batch processing to make critical decisions. With real-time data streaming technologies like Apache Flink, Apache Spark, and Apache Kafka Streams, organizations can react instantly to emerging trends, detect anomalies, and enhance customer experiences. These technologies are powerful processing engines that perform analytical operations at scale. However, unlocking the full potential of streaming data often requires complex engineering efforts, limiting accessibility for analysts and business users.

Streaming pipelines are in high demand from Riskified’s Engineering division. Therefore, a user-friendly interface for creating streaming pipelines is a critical feature for increasing the analytical precision with which fraudulent transactions are detected.

In this post, we present Riskified’s journey toward enabling self-service streaming SQL pipelines. We walk through the motivations behind the shift from Confluent ksqlDB to Apache Flink, the architecture Riskified built using Amazon Managed Service for Apache Flink, the technical challenges they faced, and the solutions that helped them make streaming accessible, scalable, and production-ready.

Using SQL to create streaming pipelines

Customers have a range of open source data processing technologies to choose from, such as Flink, Spark, ksqlDB, and RisingWave. Each platform offers a streaming API for data processing. SQL streaming jobs offer a powerful and intuitive way to process real-time data with minimal complexity. These pipelines use SQL, a widely known and declarative language, to perform real-time transformations, filtering, aggregations, and joins on continuous data streams.

To illustrate the power of streaming SQL in ecommerce fraud prevention, consider the concept of velocity checks, which are a critical fraud detection pattern. Velocity checks are a type of security measure used to detect unusual or rapid activity by monitoring the frequency and volume of specific actions within a given timeframe. These checks help identify potential fraud or abuse by analyzing repeated behaviors that deviate from normal user patterns. Common examples include detecting multiple transactions from the same IP address in a short time span, monitoring bursts of account creation attempts, or tracking the repeated use of a single payment method across different accounts.

Use case: Riskified’s velocity checks

Riskified implemented a real-time velocity check using streaming SQL to monitor purchasing behavior based on a user identifier.

In this setup, transaction data is continuously streamed through a Kafka topic. Each message contains user agent information originating from the browser, together with the raw transaction data. Streaming SQL queries are used to aggregate the number of transactions originating from a single user identifier within short time windows.

For example, if the number of transactions from a given user identifier exceeds a certain threshold within a 10-second interval, this could signal fraudulent activity. When that threshold is breached, the system can automatically flag or block the transactions before they are completed. The following figure and accompanying code show a simplified example of the streaming SQL query used to detect this behavior.

Velocity check SQL flow

SELECT
  userIdentifier,
  TUMBLE_START(createdAt, INTERVAL '10' SECOND) AS windowStart,
  TUMBLE_END(createdAt, INTERVAL '10' SECOND) AS windowEnd,
  COUNT(*) AS paymentAttempts
FROM transactions
GROUP BY
  userIdentifier,
  TUMBLE(createdAt, INTERVAL '10' SECOND);

Although defining SQL queries over static datasets might appear straightforward, creating and maintaining robust streaming applications introduces unique challenges. Traditional SQL operates on bounded datasets, which are finite collections of data stored in tables. In contrast, streaming SQL is designed to process continuous, unbounded data streams while keeping the familiar SQL syntax.

To address these challenges at scale and make streaming job creation accessible to engineering teams, Riskified implemented a self-serve solution based on Confluent ksqlDB, using its SQL interface and built-in Kafka integration. Engineers could define and deploy streaming pipelines using SQL, chaining ksqlDB streams from source to sink. The system supported both stateless and stateful processing directly on Kafka topics, with Avro schemas used to define the structure of streaming data.

Although ksqlDB provided a fast and approachable starting point, it eventually revealed several limitations. These included challenges with schema evolution, difficulties in managing compute resources, and the absence of an abstraction for managing pipelines as a cohesive unit. As a result, Riskified began exploring alternative technologies that could better support its expanding streaming use cases. The following sections outline these challenges in more detail.

Evolving the stream processing architecture

In evaluating alternatives, Riskified focused on technologies that could handle the specific demands of fraud detection while preserving the simplicity that made the original approach appealing. The team encountered the following challenges in maintaining the previous solution:

  • Schemas are managed in Confluent Schema Registry, and the message format is Avro with FULL compatibility mode enforced. Schemas are constantly evolving in line with business requirements. They are version controlled using Git with a strict continuous integration and continuous delivery (CI/CD) pipeline. As schemas grew more complex, ksqlDB’s approach to schema evolution didn’t automatically incorporate newly added fields. This behavior required dropping streams and recreating them to add new fields instead of simply restarting the application to pick up the changes. The stream tear-down, in turn, caused inconsistencies with offset management.
  • ksqlDB enforces a TopicNameStrategy schema registration strategy, which creates 1:1 schema-to-topic coupling. This means the same schema definition has to be registered multiple times, once for each topic it is used with. Riskified’s schema registry deployment uses RecordNameStrategy for schema registration. It is an efficient schema registry strategy that allows sharing schemas across multiple topics, storing fewer schemas, and reducing registry management overhead. Having mixed strategies in the schema registry caused errors for Kafka consumer clients attempting to decode messages, because the client implementation expected RecordNameStrategy in line with Riskified’s standard (see the producer configuration sketch after this list).
  • ksqlDB internally registers schema definitions in specific ways: fields are interpreted as nullable, and Avro Enum types are converted to Strings. This behavior caused deserialization errors when attempting to migrate native Kafka consumer applications to the ksqlDB output topic. Riskified’s code base uses the Scala programming language, where optional fields in the schema are interpreted as Option. Treating every field in the schema definition as optional required heavy refactoring, treating all Enum fields as Strings, and handling the Option data type for every field that requires safe handling. This cascading effect made the migration process more involved, requiring more time and resources to achieve a smooth transition.
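
To illustrate the subject naming mismatch, the following is a minimal sketch of Kafka producer properties that register Avro schemas with RecordNameStrategy, so one schema subject can be shared across topics. The broker address, registry URL, and topic usage are placeholder assumptions, not Riskified's actual configuration:

import java.util.Properties;

// Sketch only: producer configuration that derives schema subjects from the record name,
// allowing a single schema to be reused across multiple topics.
Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "broker:9092");                    // placeholder
producerProps.put("key.serializer",
        "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer",
        "io.confluent.kafka.serializers.KafkaAvroSerializer");
producerProps.put("schema.registry.url", "http://schema-registry:8081");  // placeholder
// ksqlDB always registers subjects with TopicNameStrategy, so it cannot follow this convention
producerProps.put("value.subject.name.strategy",
        "io.confluent.kafka.serializers.subject.RecordNameStrategy");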

Managing resource contention in ksqlDB streaming workloads

ksqlDB queries are compiled into a Kafka Streams topology. The query definition determines the topology’s behavior.

Streaming query resources are shared rather than isolated. This approach typically leads to the overallocation of cluster resources. Processing tasks are distributed across the nodes of a ksqlDB cluster. This architecture means tasks run with no resource isolation, and a specific task can impact other tasks running on the same node.

Resource contention between tasks on the same node is common in an intensive production environment when using a cluster architecture. Operations teams often fine-tune cluster configurations to maintain acceptable performance, frequently mitigating issues by over-provisioning cluster nodes.

Challenges with ksqlDB pipelines

A ksqlDB pipeline is a chain of individual streams and lacks a flow-level abstraction. Consider a complex pipeline where a consumer publishes to multiple topics. In ksqlDB, each topic (both input and output) must be managed as a separate stream abstraction. However, there is no high-level abstraction to represent a complete pipeline that chains these streams together. As a result, engineering teams must manually assemble individual streams into a cohesive data flow, without built-in support for managing them as a single, complete pipeline.

This architectural approach particularly impacts operational tasks. Troubleshooting requires analyzing each stream individually, making it difficult to monitor and maintain pipelines that contain dozens of interconnected streams. When issues occur, the health of each stream needs to be checked separately, with no logical data flow component to help understand the relationships between streams or their role in the overall pipeline. The absence of a unified view of the data flow significantly increased operational complexity.

Flink as an alternative

Riskified began exploring alternatives for its streaming platform. The requirements were clear: a powerful processing technology that combines a rich low-level API with a streaming SQL engine, backed by a strong open source community, and proven to perform in the most demanding production environments.

Unlike the previous solution, which supported only Kafka-to-Kafka integration, Flink offers an array of connectors for various databases and streaming platforms. The team quickly recognized that Flink had the potential to handle complex streaming use cases.

Flink offers several deployment options, including standalone clusters, native Kubernetes deployments using operators, and Hadoop YARN clusters. For enterprises seeking a fully managed option, cloud providers like AWS offer managed Flink services, such as Managed Service for Apache Flink, that help alleviate operational overhead.

Benefits of using Managed Service for Apache Flink

Riskified decided to implement a solution using Managed Service for Apache Flink. This choice provided several key advantages:

  • It offers a fast and reliable way to run Flink applications and reduces the operational overhead of independently managing the infrastructure.
  • Managed Service for Apache Flink provides true job isolation by running every streaming application in its own dedicated cluster. This means you can manage resources separately for each job and reduce the risk of heavy streaming jobs causing resource starvation for other running jobs.
  • It offers built-in monitoring using Amazon CloudWatch metrics, application state backup with managed snapshots, and automatic scaling.
  • AWS offers comprehensive documentation and practical examples that help accelerate the implementation process.

With these features, Riskified could focus on what really matters: getting closer to the business goal and starting to write applications.

Using Flink’s streaming SQL engine

Developers can use Flink to build complex and scalable streaming applications, but Riskified saw it as more than just a tool for specialists. They wanted to democratize the power of Flink into a tool for the entire organization, to solve complex business challenges involving real-time analytics requirements without needing a dedicated data expert.

To replace their previous solution, they envisioned maintaining a “build once, deploy many” application, which encapsulates the complexity of Flink programming and lets users focus on the SQL processing logic.

Kafka was kept as the input and output technology for the initial migration use case, which is similar to the ksqlDB setup. They designed a single, flexible Flink application where end users can modify the input topics, SQL processing logic, and output destinations through runtime properties. Although ksqlDB primarily focuses on Kafka integration, Flink’s extensive connector ecosystem allows the solution to expand to various data sources and destinations in future phases.

Managed Service for Apache Flink provides a flexible way to configure streaming applications: by using runtime properties, you can change an application’s behavior without modifying its source code.

Using Managed Service for Apache Flink for this approach includes the following steps:

  1. Apply parameters for the input/output Kafka topic, a SQL query, and the input/output schema ID (assuming you’re using Confluent Schema Registry).
  2. Use AvroSchemaConverter to convert an Avro schema into a Flink table.
  3. Apply the SQL processing logic and save the output as a view.
  4. Sink the view results into Kafka.

The following diagram illustrates this workflow.
Streaming SQL system diagram
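
The following is a minimal sketch of steps 1 through 3 under a few assumptions: the runtime property group name (FlinkAppProperties) and property keys (input.topic, sql.query, and so on) are illustrative rather than a documented contract, and the Avro schema is passed as a string instead of being fetched by schema ID from the registry.

import java.util.Map;
import java.util.Properties;

import com.amazonaws.services.kinesisanalytics.runtime.KinesisAnalyticsRuntime;
import org.apache.flink.formats.avro.typeutils.AvroSchemaConverter;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.table.api.FormatDescriptor;
import org.apache.flink.table.api.Schema;
import org.apache.flink.table.api.Table;
import org.apache.flink.table.api.TableDescriptor;
import org.apache.flink.table.api.bridge.java.StreamTableEnvironment;
import org.apache.flink.table.types.DataType;

public class SelfServeSqlJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        StreamTableEnvironment tableEnv = StreamTableEnvironment.create(env);

        // 1. Read the user-supplied runtime properties (group and key names are assumptions)
        Map<String, Properties> groups = KinesisAnalyticsRuntime.getApplicationProperties();
        Properties props = groups.get("FlinkAppProperties");
        String inputTopic = props.getProperty("input.topic");
        String sqlQuery = props.getProperty("sql.query");
        String bootstrapServers = props.getProperty("bootstrap.servers");
        String schemaRegistryUrl = props.getProperty("schema.registry.url");
        String inputSchemaString = props.getProperty("input.avro.schema");

        // 2. Convert the Avro schema into a Flink table schema and register a Kafka-backed table
        DataType rowType = AvroSchemaConverter.convertToDataType(inputSchemaString);
        tableEnv.createTemporaryTable("InputTable",
                TableDescriptor.forConnector("kafka")
                        .schema(Schema.newBuilder().fromRowDataType(rowType).build())
                        .option("topic", inputTopic)
                        .option("properties.bootstrap.servers", bootstrapServers)
                        .option("scan.startup.mode", "latest-offset")
                        .format(FormatDescriptor.forFormat("avro-confluent")
                                .option("url", schemaRegistryUrl)
                                .build())
                        .build());

        // 3. Apply the user's SQL processing logic and save the result as a view (step 4 sinks it to Kafka)
        Table results = tableEnv.sqlQuery(sqlQuery);
        tableEnv.createTemporaryView("OutputView", results);
    }
}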

Performing Flink SQL query compilation without a Flink runtime environment

Giving end users significant control over defining their pipelines makes it critical to verify the SQL query defined by the user before deployment. This validation prevents failed or hanging jobs that would consume unnecessary resources and incur unnecessary costs.

A key challenge was validating Flink SQL queries without deploying the full Flink runtime. After investigating Flink’s SQL implementation, Riskified discovered its dependency on Apache Calcite, a dynamic data management framework that handles SQL parsing, optimization, and query planning independently of data storage. This insight enabled using Calcite directly for query validation before job deployment.

It is important to know how the data is structured in order to validate a Flink SQL query against a streaming source such as a Kafka topic. Otherwise, unexpected errors might occur when attempting to query the streaming source. Although a known schema is expected with relational databases, it is not enforced for streaming sources.

Schemas guarantee a deterministic structure for the data stored in a Kafka topic when using a schema registry. A schema can be materialized into a Calcite table that defines how data is structured in the Kafka topic. This allows inferring table structures directly from schemas (in this case, the Avro format was used), enabling thorough field-level validation, including type checking and field existence, all before job deployment. The table can later be used to validate the SQL query.

The following code is an example of supporting basic field type validation using Calcite’s AbstractTable:

import java.util.Collections;
import java.util.Properties;

import org.apache.avro.Schema;
import org.apache.calcite.config.CalciteConnectionConfigImpl;
import org.apache.calcite.jdbc.CalciteSchema;
import org.apache.calcite.prepare.CalciteCatalogReader;
import org.apache.calcite.rel.type.RelDataType;
import org.apache.calcite.rel.type.RelDataTypeFactory;
import org.apache.calcite.rel.type.RelDataTypeSystem;
import org.apache.calcite.sql.SqlNode;
import org.apache.calcite.sql.parser.SqlParser;
import org.apache.calcite.sql.type.SqlTypeFactoryImpl;
import org.apache.calcite.sql.type.SqlTypeName;
import org.apache.calcite.sql.validate.SqlValidator;
import org.apache.calcite.sql.validate.SqlValidatorUtil;
import org.apache.calcite.tools.Frameworks;

public class FlinkValidator {
    public static void validateSQL(String sqlQuery, Schema avroSchema) throws Exception {
        // Parse the user-supplied query with Calcite's SQL parser
        SqlParser.Config sqlConfig = SqlParser.config()
                .withCaseSensitive(true);
        SqlParser sqlParser = SqlParser.create(sqlQuery, sqlConfig);
        SqlNode parsedQuery = sqlParser.parseQuery();
        RelDataTypeFactory typeFactory = new SqlTypeFactoryImpl(RelDataTypeSystem.DEFAULT);
        // Build a catalog containing a table derived from the Avro schema
        CalciteSchema rootSchema = createSchemaWithAvro(avroSchema);
        CalciteCatalogReader catalogReader = new CalciteCatalogReader(
                rootSchema,
                Collections.emptyList(),
                typeFactory,
                new CalciteConnectionConfigImpl(new Properties()));
        SqlValidator validator = SqlValidatorUtil.newValidator(
                Frameworks.newConfigBuilder().build().getOperatorTable(),
                catalogReader,
                typeFactory,
                SqlValidator.Config.DEFAULT
        );
        // Throws if the query references unknown fields or uses incompatible types
        validator.validate(parsedQuery);
    }

    private static CalciteSchema createSchemaWithAvro(Schema avroSchema) {
        CalciteSchema rootSchema = CalciteSchema.createRootSchema(true);
        // The registered name must match the table referenced in the user's query
        rootSchema.add("TABLE", new SimpleAvroTable(avroSchema));
        return rootSchema;
    }

    private static class SimpleAvroTable extends org.apache.calcite.schema.impl.AbstractTable {
        private final Schema avroSchema;

        public SimpleAvroTable(Schema avroSchema) {
            this.avroSchema = avroSchema;
        }

        @Override
        public RelDataType getRowType(RelDataTypeFactory typeFactory) {
            // Map every Avro field to a Calcite column
            RelDataTypeFactory.Builder builder = typeFactory.builder();
            for (Schema.Field field : avroSchema.getFields()) {
                builder.add(field.name(), convertAvroType(field.schema(), typeFactory));
            }
            return builder.build();
        }

        private RelDataType convertAvroType(Schema schema, RelDataTypeFactory typeFactory) {
            switch (schema.getType()) {
                case STRING:
                    return typeFactory.createSqlType(SqlTypeName.VARCHAR);
                case INT:
                    return typeFactory.createSqlType(SqlTypeName.INTEGER);
                default:
                    return typeFactory.createSqlType(SqlTypeName.ANY);
            }
        }
    }
}

You can integrate this validation approach as an intermediate step before creating the application. You can create a streaming job programmatically with the AWS SDK, AWS Command Line Interface (AWS CLI), or Terraform. The validation occurs before submitting the streaming job.
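
As a simple illustration, a pre-deployment check might parse the topic's Avro schema and run the validator before any AWS API call is made (the schemaJson and userQuery variables here are hypothetical):

// Hypothetical pre-deployment gate: validate the user's SQL against the topic's Avro schema
org.apache.avro.Schema avroSchema = new org.apache.avro.Schema.Parser().parse(schemaJson);
try {
    FlinkValidator.validateSQL(userQuery, avroSchema);
    // Validation passed: proceed to create or update the Managed Service for Apache Flink application
} catch (Exception e) {
    // Validation failed: reject the submission and surface the error to the user instead of deploying
    throw new IllegalArgumentException("Invalid streaming SQL query: " + e.getMessage(), e);
}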

Flink SQL and Confluent Avro data type mapping limitation

Flink provides several APIs designed for different levels of abstraction and user expertise:

  • Flink SQL sits at the highest level, allowing users to express data transformations using familiar SQL syntax, which is ideal for analysts and teams comfortable with relational concepts.
  • The Table API offers a similar approach but is embedded in Java or Python, enabling type-safe and more programmatic expressions.
  • For more control, the DataStream API exposes low-level constructs to manage event time, stateful operations, and complex event processing.
  • At the most granular level, the ProcessFunction API provides full access to Flink’s runtime features. It is suitable for advanced use cases that demand detailed control over state and processing behavior.

Riskified initially used the Table API to define streaming transformations. However, when deploying their first Flink job to a staging environment, they encountered serialization errors related to the avro-confluent library and the Table API. Riskified’s schemas rely heavily on Avro Enum types, which the avro-confluent integration doesn’t fully support. As a result, Enum fields were converted to Strings, leading to mismatches during serialization and errors when attempting to sink processed data back to Kafka using Flink’s Table API.

Riskified developed an alternative approach to overcome the Enum serialization limitations while maintaining schema requirements. They found that Flink’s DataStream API could correctly handle Confluent’s Avro record serialization with Enum fields, unlike the Table API. They implemented a hybrid solution combining both APIs, because the pipeline only required SQL processing on the source Kafka topic and could sink to the output without any additional processing. The Table API is used for data processing and transformations, only converting to the DataStream API at the final output stage.

Managed Service for Apache Flink supports Flink’s APIs, so an application can switch between the Table API and the DataStream API.
A MapFunction can convert the Row type of the Table API into a DataStream of GenericRecord. The MapFunction maps Flink’s Row data type into GenericRecord types by iterating over the Avro schema fields and building the GenericRecord from the Flink Row type, casting the Row fields into the correct data type according to the Avro schema. This conversion is required to overcome the avro-confluent library limitation with Flink SQL.

The following diagram illustrates this workflow.

Flink Table and DataStream APIs

The following code is an example query:

// SQL query for filtering
Table queryResults = tableEnv.sqlQuery(
       "SELECT * FROM InputTable");
// 1. Convert query results from the Table API to a DataStream<Row> and use the DataStream API to sink query results to a Kafka topic
DataStream<Row> rowStream = tableEnv.toDataStream(queryResults);
// Fetch the schema string from the schema registry and parse it
String schemaString = fetchSchemaString(schemaRegistryURL, schemaSubjectName);
Schema avroSchema = new Schema.Parser().parse(schemaString);
// 2. Convert Row to GenericRecord with explicit TypeInformation, using the custom AvroMapper
TypeInformation<GenericRecord> typeInfo = new GenericRecordAvroTypeInfo(avroSchema);
DataStream<GenericRecord> genericRecordStream = rowStream
       .map(new AvroMapper(schemaString))
       .returns(typeInfo); // Explicitly set TypeInformation
// 3. Define the Kafka sink using ConfluentRegistryAvroSerializationSchema
KafkaSink<GenericRecord> kafkaSink = KafkaSink.<GenericRecord>builder()
       .setBootstrapServers(bootstrapServers)
       .setRecordSerializer(
               KafkaRecordSerializationSchema.builder()
                       .setTopic(sinkTopic)
                       .setValueSerializationSchema(
                               ConfluentRegistryAvroSerializationSchema.forGeneric(
                                       schemaSubjectName,
                                       avroSchema,
                                       schemaRegistryURL
                               )
                       )
                       .build()
       )
       .build();
// Sink to Kafka
genericRecordStream.sinkTo(kafkaSink);
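
The AvroMapper used above is not a built-in Flink class. The following is a minimal sketch of how such a mapper might be written, assuming the Row fields are named after the Avro schema fields and that Enum values arrive from the Table API as Strings; nested records and union types would need additional handling:

import org.apache.avro.Schema;
import org.apache.avro.generic.GenericData;
import org.apache.avro.generic.GenericRecord;
import org.apache.flink.api.common.functions.RichMapFunction;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.types.Row;

// Sketch of a custom mapper that rebuilds Avro GenericRecords (including Enum fields) from Table API Rows
public class AvroMapper extends RichMapFunction<Row, GenericRecord> {
    // Keep the schema as a String so the function serializes cleanly; parse it in open()
    private final String schemaString;
    private transient Schema schema;

    public AvroMapper(String schemaString) {
        this.schemaString = schemaString;
    }

    @Override
    public void open(Configuration parameters) {
        schema = new Schema.Parser().parse(schemaString);
    }

    @Override
    public GenericRecord map(Row row) {
        GenericRecord record = new GenericData.Record(schema);
        for (Schema.Field field : schema.getFields()) {
            Object value = row.getField(field.name());
            // The Table API surfaces Avro Enums as Strings; re-wrap them as EnumSymbols before sinking
            if (value != null && field.schema().getType() == Schema.Type.ENUM) {
                value = new GenericData.EnumSymbol(field.schema(), value.toString());
            }
            record.put(field.name(), value);
        }
        return record;
    }
}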

CI/CD with Managed Service for Apache Flink

With Managed Service for Apache Flink, you can run a job by selecting an Amazon Simple Storage Service (Amazon S3) key containing the application JAR. Riskified’s Flink code base was structured as a multi-module repository to support additional use cases beyond self-service SQL. Each Flink job’s source code in the repository is an independent Java module. The CI pipeline implemented a robust build and deployment process consisting of the following steps:

  1. Build and compile each module.
  2. Run tests.
  3. Package the modules.
  4. Upload the artifact to the artifacts bucket twice: one JAR under <module>-<version>.jar and a second as <module>-latest.jar, resembling a Docker registry such as Amazon Elastic Container Registry (Amazon ECR). Managed Service for Apache Flink jobs use the latest-tagged artifact in this case, while copies of past artifacts are kept for code rollbacks.

A CD process follows:

  1. When a change is merged, the pipeline lists all jobs for each module using the AWS CLI for Managed Service for Apache Flink.
  2. The application JAR location is updated for each application, which triggers a deployment.
  3. When the application reaches a running state with no errors, the pipeline continues with the next application.

To allow safe deployment, this process is performed progressively for each environment, starting with the staging environment.

Self-service interface for submitting SQL jobs

Riskified believes an intuitive UI is crucial for system adoption and efficiency. However, developing a dedicated UI for Flink job submission requires a pragmatic approach, because it might not be worth investing in unless a web interface for internal development operations already exists.

Investing in UI development should align with the organization’s existing tools and workflows. Riskified had an internal web portal for similar operations, which made the addition of Flink job submission capabilities a natural extension of the self-service infrastructure.

An AWS SDK was installed on the web server to allow interaction with AWS components. The client receives user input from the UI and translates it into runtime properties to control the behavior of the Flink application. The web server then uses the CreateApplication API action to submit the job to Managed Service for Apache Flink.
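
A minimal sketch of that submission step, using the AWS SDK for Java v2, might look like the following. The application name, role ARN, bucket ARN, JAR key, runtime version, and property keys are placeholder assumptions, and error handling is omitted:

import java.util.Map;

import software.amazon.awssdk.services.kinesisanalyticsv2.KinesisAnalyticsV2Client;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationCodeConfiguration;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.ApplicationConfiguration;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CodeContent;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.CreateApplicationRequest;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.EnvironmentProperties;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.PropertyGroup;
import software.amazon.awssdk.services.kinesisanalyticsv2.model.S3ContentLocation;

public class FlinkJobSubmitter {
    public void submit(KinesisAnalyticsV2Client client, String appName, String sqlQuery,
                       String inputTopic, String outputTopic) {
        // Runtime properties carry the user's SQL and topic choices into the "build once, deploy many" job
        PropertyGroup userProperties = PropertyGroup.builder()
                .propertyGroupId("FlinkAppProperties")                                   // assumed group name
                .propertyMap(Map.of(
                        "sql.query", sqlQuery,
                        "input.topic", inputTopic,
                        "output.topic", outputTopic))
                .build();

        CreateApplicationRequest request = CreateApplicationRequest.builder()
                .applicationName(appName)
                .runtimeEnvironment("FLINK-1_18")                                        // assumed runtime version
                .serviceExecutionRole("arn:aws:iam::123456789012:role/flink-app-role")   // placeholder
                .applicationConfiguration(ApplicationConfiguration.builder()
                        .applicationCodeConfiguration(ApplicationCodeConfiguration.builder()
                                .codeContentType("ZIPFILE")
                                .codeContent(CodeContent.builder()
                                        .s3ContentLocation(S3ContentLocation.builder()
                                                .bucketARN("arn:aws:s3:::flink-artifacts-bucket") // placeholder
                                                .fileKey("self-serve-sql-latest.jar")             // placeholder
                                                .build())
                                        .build())
                                .build())
                        .environmentProperties(EnvironmentProperties.builder()
                                .propertyGroups(userProperties)
                                .build())
                        .build())
                .build();

        client.createApplication(request);
    }
}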

Although an intuitive UI significantly enhances system adoption, it is not the only path to accessibility. Alternatively, a well-designed CLI tool or REST API endpoint can provide the same self-service capabilities.

The following diagram illustrates this workflow.

Flow sequence diagram

Production experience: Flink’s implementation upsides

The transition to Flink and Managed Service for Apache Flink proved efficient in numerous respects:

  • Schema evolution and data handling – Riskified can either periodically fetch updated schemas or restart applications when schemas evolve. They can use existing schemas without self-registration.
  • Resource isolation and management – Managed Service for Apache Flink runs each Flink job as an isolated cluster, reducing resource contention between jobs.
  • Resource allocation and cost-efficiency – Managed Service for Apache Flink enables minimal resource allocation with automatic scaling, proving to be more cost-efficient.
  • Job management and flow visibility – Flink provides a cohesive data flow abstraction through its job and task model. It manages the entire data flow in a single job and distributes the workload evenly over multiple nodes. This unified approach enables better visibility into the entire data pipeline, simplifying monitoring, troubleshooting, and optimizing complex streaming workflows.
  • Built-in recovery mechanism – Managed Service for Apache Flink automatically creates checkpoints and savepoints that enable stateful Flink applications to recover from failures and resume processing without data loss. With this feature, streaming jobs are durable and can recover safely from errors.
  • Comprehensive observability – Managed Service for Apache Flink exposes CloudWatch metrics that track Flink application performance and statistics. You can also create alarms based on these metrics. Riskified decided to use the CloudWatch Prometheus Exporter to export these metrics to Prometheus and build PrometheusRules to align Flink’s monitoring with the Riskified standard, which uses Prometheus and Grafana for monitoring and alerting.

Next steps

Although the initial focus was Kafka-to-Kafka streaming queries, Flink’s wide range of sink connectors opens the possibility of pluggable multi-destination pipelines. This versatility is on Riskified’s roadmap for future enhancements.

Flink’s DataStream API provides capabilities that extend far beyond self-serve streaming SQL, opening new avenues for more sophisticated fraud detection use cases. Riskified is exploring ways to use the DataStream API to enhance ecommerce fraud prevention strategies.

Conclusions

In this post, we shared how Riskified successfully transitioned from ksqlDB to Managed Service for Apache Flink for its self-serve streaming SQL engine. This move addressed key challenges like schema evolution, resource isolation, and pipeline management. Managed Service for Apache Flink offers features such as isolated job environments, automatic scaling, and built-in monitoring, which proved more efficient and cost-effective. Although Flink SQL limitations with Kafka required workarounds, using Flink’s DataStream API and custom mapping functions resolved these issues. The transition has paved the way for future expansion to multiple destinations and advanced fraud detection capabilities, solidifying Flink as a robust and scalable solution for Riskified’s streaming needs.

If Riskified’s journey has sparked your interest in building a self-service streaming SQL platform, here’s how to get started:

  • Learn more about Managed Service for Apache Flink:
  • Get hands-on experience:

About the authors

Gal Krispel is a Data Platform Engineer at Riskified, specializing in streaming technologies such as Apache Kafka and Apache Flink. He focuses on building scalable, real-time data pipelines that power Riskified’s core products. Gal is particularly interested in making complex data architectures accessible and efficient across the organization. His work spans real-time analytics, event-driven design, and the seamless integration of stream processing into large-scale production systems.

Sofia Zilberman works as a Senior Streaming Solutions Architect at AWS, helping customers design and optimize real-time data pipelines using open source technologies like Apache Flink, Kafka, and Apache Iceberg. With experience in both streaming and batch data processing, she focuses on making data workflows efficient, observable, and high-performing.

Lorenzo Nicora works as a Senior Streaming Solutions Architect at AWS, helping customers across EMEA. He has been building cloud-centered, data-intensive systems for over 25 years, working across industries both through consultancies and product companies. He has used open source technologies extensively, contributed to several projects, including Apache Flink, and is the maintainer of the Flink Prometheus connector.
