What to do when a Flink watermark gets stuck and event-time skew appears?

文摘 · 2024-08-30 07:00 · Chongqing
The previous article, "How does a watermark propagate downstream when Flink runs in parallel?", covered how watermarks are forwarded downstream in a parallel job. Sometimes a stream produces so few records that nothing arrives for a stretch of time; no watermarks are generated during that stretch, and downstream operations that depend on watermarks break. For example, when an operator has several upstream inputs, its watermark is the minimum of its inputs' watermarks. If one upstream input has gone quiet and has not produced a watermark for a long time, event-time skew appears and the downstream operator can never trigger its computation.
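To make the skew concrete, here is a minimal, hypothetical sketch (not from the article, values invented) of the rule a multi-input operator follows: its event-time clock advances to the minimum of the watermarks last received on its input channels, so a single idle input pins the clock down.

public class MinWatermarkSketch {
    public static void main(String[] args) {
        // Watermarks last seen on two input channels (illustrative values).
        long watermarkFromBusyInput = 1_725_000_000_000L; // keeps advancing as data flows
        long watermarkFromIdleInput = Long.MIN_VALUE;     // no records yet, hence no watermark

        // A multi-input operator takes the MINIMUM of its inputs' watermarks.
        long operatorEventTimeClock = Math.min(watermarkFromBusyInput, watermarkFromIdleInput);

        // Stuck at Long.MIN_VALUE: no event-time window downstream can fire.
        System.out.println("operator event-time clock = " + operatorEventTimeClock);
    }
}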
To handle this, Flink provides the WatermarkStrategy.withIdleness() method, which lets you mark a stream as idle when no records arrive within a configured timeout. Downstream operators then stop waiting for a watermark from that stream.
As soon as a new watermark is generated and emitted downstream, the stream switches back to active.
In Flink, we use withIdleness to deal with an idle source.
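Distilled to its essentials, the fix is one extra call when building the watermark strategy. A minimal sketch, assuming a hypothetical MyEvent POJO and illustrative durations (the full demo below applies the same two calls to a Tuple3 stream):

import java.time.Duration;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;

public class IdlenessStrategySketch {
    // Hypothetical event type, only here to give the strategy a concrete generic parameter.
    public static class MyEvent {
        public long eventTime;
    }

    public static WatermarkStrategy<MyEvent> strategy() {
        return WatermarkStrategy
                .<MyEvent>forBoundedOutOfOrderness(Duration.ofSeconds(5)) // tolerate up to 5 s of out-of-order events
                .withIdleness(Duration.ofSeconds(15))                     // mark the stream idle after 15 s without records
                .withTimestampAssigner((event, ts) -> event.eventTime);   // take event time from the record itself
    }
}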
Full code example:
package org.bigdatatechcir.learn_flink.part5_flink_watermark;

import org.apache.flink.api.common.eventtime.WatermarkStrategy;
import org.apache.flink.api.common.functions.MapFunction;
import org.apache.flink.api.common.typeinfo.Types;
import org.apache.flink.api.java.tuple.Tuple2;
import org.apache.flink.api.java.tuple.Tuple3;
import org.apache.flink.configuration.Configuration;
import org.apache.flink.configuration.RestOptions;
import org.apache.flink.streaming.api.TimeCharacteristic;
import org.apache.flink.streaming.api.datastream.DataStream;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.api.functions.source.RichParallelSourceFunction;
import org.apache.flink.streaming.api.functions.windowing.ProcessWindowFunction;
import org.apache.flink.streaming.api.watermark.Watermark;
import org.apache.flink.streaming.api.windowing.assigners.TumblingEventTimeWindows;
import org.apache.flink.streaming.api.windowing.time.Time;
import org.apache.flink.streaming.api.windowing.windows.TimeWindow;
import org.apache.flink.util.Collector;

import java.time.Duration;
import java.time.Instant;
import java.time.ZoneId;
import java.time.ZonedDateTime;
import java.time.format.DateTimeFormatter;
import java.util.Random;

public class WithIdLenessDemo {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        conf.setString(RestOptions.BIND_PORT, "8081");
        final StreamExecutionEnvironment env = StreamExecutionEnvironment.createLocalEnvironmentWithWebUI(conf);
        env.setParallelism(1);
        env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);

        DataStream<String> text = env.addSource(new RichParallelSourceFunction<String>() {
            private volatile boolean running = true;
            private volatile long count = 0; // counter tracking how many records have been generated
            private final Random random = new Random();

            @Override
            public void run(SourceContext<String> ctx) throws Exception {
                while (running) {
                    int randomNum = random.nextInt(5) + 1;
                    long timestamp = System.currentTimeMillis();
                    ctx.collectWithTimestamp("key" + randomNum + "," + 1 + "," + timestamp, timestamp);

                    if (++count % 200 == 0) { // emit a watermark every 200 records
                        ctx.emitWatermark(new Watermark(timestamp));
                        System.out.println("Manual Watermark emitted: " + timestamp);
                    }

                    ZonedDateTime generateDataDateTime = Instant.ofEpochMilli(timestamp).atZone(ZoneId.systemDefault());
                    DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
                    String formattedGenerateDataDateTime = generateDataDateTime.format(formatter);
                    System.out.println("Generated data: " + "key" + randomNum + "," + 1 + "," + timestamp + " at " + formattedGenerateDataDateTime);
                    Thread.sleep(1000);
                }
            }

            @Override
            public void cancel() {
                running = false;
            }
        });

        DataStream<Tuple3<String, Integer, Long>> tuplesWithTimestamp = text.map(new MapFunction<String, Tuple3<String, Integer, Long>>() {
            @Override
            public Tuple3<String, Integer, Long> map(String value) {
                String[] words = value.split(",");
                return new Tuple3<>(words[0], Integer.parseInt(words[1]), Long.parseLong(words[2]));
            }
        }).returns(Types.TUPLE(Types.STRING, Types.INT, Types.LONG));

        // Configure the watermark strategy
        DataStream<Tuple3<String, Integer, Long>> withWatermarks = tuplesWithTimestamp.assignTimestampsAndWatermarks(
                WatermarkStrategy.<Tuple3<String, Integer, Long>>forBoundedOutOfOrderness(Duration.ofSeconds(5))
                        // handle idle sources
                        .withIdleness(Duration.ofSeconds(15))
                        .withTimestampAssigner((element, recordTimestamp) -> element.f2)
        );

        // Window logic
        DataStream<Tuple2<String, Integer>> keyedStream = withWatermarks
                .keyBy(value -> value.f0)
                .window(TumblingEventTimeWindows.of(Time.seconds(5)))
                .process(new ProcessWindowFunction<Tuple3<String, Integer, Long>, Tuple2<String, Integer>, String, TimeWindow>() {
                    @Override
                    public void process(String s, Context context, Iterable<Tuple3<String, Integer, Long>> elements, Collector<Tuple2<String, Integer>> out) throws Exception {
                        int count = 0;
                        for (Tuple3<String, Integer, Long> element : elements) {
                            count++;
                        }

                        long start = context.window().getStart();
                        long end = context.window().getEnd();

                        ZonedDateTime startDateTime = Instant.ofEpochMilli(start).atZone(ZoneId.systemDefault());
                        ZonedDateTime endDateTime = Instant.ofEpochMilli(end).atZone(ZoneId.systemDefault());

                        DateTimeFormatter formatter = DateTimeFormatter.ofPattern("yyyy-MM-dd HH:mm:ss.SSS");
                        String formattedStart = startDateTime.format(formatter);
                        String formattedEnd = endDateTime.format(formatter);

                        System.out.println("Tumbling Window [start " + formattedStart + ", end " + formattedEnd + ") for key " + s);

                        // Print the watermark observed when the window fires
                        long windowEndWatermark = context.currentWatermark();
                        ZonedDateTime windowEndDateTime = Instant.ofEpochMilli(windowEndWatermark).atZone(ZoneId.systemDefault());
                        String formattedWindowEndWatermark = windowEndDateTime.format(formatter);
                        System.out.println("Watermark at the end of window: " + formattedWindowEndWatermark);

                        out.collect(new Tuple2<>(s, count));
                    }
                });

        // Print the results
        keyedStream.print();

        // Run the job
        env.execute("With Id Leness Demo");
    }
}

An open-source project that might be useful to you: data-warehouse-learning is a real-time and offline data warehouse (data lake) system built on MySQL + Kafka + Hadoop + Hive + Dolphinscheduler + Doris + Seatunnel + Paimon + Hudi + Iceberg + Flink + Dinky + DataRT + SuperSet. Taking the familiar e-commerce domain as its starting point, it explains and implements the full data pipeline: data generation, synchronization, data modeling, warehouse (lake) construction, data services, and BI reporting.

https://gitee.com/wzylzjtn/data-warehouse-learning

https://github.com/Mrkuhuo/data-warehouse-learning

https://bigdatacircle.top/
