Eureka fast refresh and Kafka configuration (2025)


Expanding Kafka partitions requires migrating data.
Producers do not need to specify a concrete partition; one is assigned automatically.
Consumers do not need to specify a concrete partition either; a single consumer automatically pulls from multiple partitions.

A single consumer can also pull from multiple topics at the same time.
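As a minimal sketch of the multi-topic point above (the topic names are hypothetical, and the broker-dependent call is shown only as a comment since it needs a running cluster), `subscribe()` takes a whole list of topics and the group coordinator then assigns partitions from all of them automatically:

```java
import java.util.Arrays;
import java.util.List;

public class MultiTopicSubscribe {
    public static void main(String[] args) {
        // Hypothetical topic names; any number of topics can go in one subscribe() call
        List<String> topics = Arrays.asList("my-topic3", "my-topic4");

        // With a running broker this would be:
        //   KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        //   consumer.subscribe(topics); // partitions of all listed topics are assigned automatically
        System.out.println("subscribing to " + topics.size() + " topics: " + topics);
    }
}
```

Note that each `subscribe()` call replaces the previous subscription, so all topics must be passed in one list rather than in repeated calls.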

Kafka configuration (Maven dependency):

<dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka_2.12</artifactId>
    <version>2.3.0</version>
</dependency>

Properties producerProps = new Properties();
producerProps.put("bootstrap.servers", "192.168.31.234:59092");
producerProps.put("acks", "all");
producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Producer<String, String> producer = new KafkaProducer<>(producerProps);
for (int i = 0; i < 9999; i++) {
    // No partition is specified; the producer assigns one automatically from the key
    producer.send(new ProducerRecord<>("my-topic3", Integer.toString(i), Integer.toString(i)));
}
producer.close();

Properties props = new Properties();
props.put("bootstrap.servers", "192.168.31.234:59092");
props.put("group.id", "test6");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", "1000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
// No partition is specified; subscribe() lets the group coordinator assign partitions
consumer.subscribe(Collections.singletonList("my-topic3"));

while (true) {
    ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
    for (ConsumerRecord<String, String> record : records) {
        System.out.printf("offset = %d, key = %s, value = %s%n",
                record.offset(), record.key(), record.value());
    }
}

Eureka fast-refresh configuration

eureka:
  server:
    #Disable self-preservation mode
    enable-self-preservation: false
    #Don't serve the registry from the read-only response cache, which only refreshes every 30s; read from the read-write cache instead
    use-read-only-response-cache: false
    #Enable active eviction, running the eviction check every 3s
    eviction-interval-timer-in-ms: 3000
  instance:
    hostname: localhost
    #Lease expiration: if no heartbeat arrives within this window, Eureka Server evicts the instance
    #Note: eureka.server.eviction-interval-timer-in-ms must also be set on the server, otherwise this has no effect; usually set to 3x the renewal interval
    #Default is 90s
    lease-expiration-duration-in-seconds: 15
    #Lease renewal interval: the instance sends a heartbeat every this many seconds
    #Default is 30s
    lease-renewal-interval-in-seconds: 5
  client:
    #Client side: the local registry cache also refreshes every 30s by default; shorten it with registry-fetch-interval-seconds (in seconds)
    registry-fetch-interval-seconds: 5
    register-with-eureka: false
    fetch-registry: false
    service-url:
      defaultZone: http://${eureka.instance.hostname}:${server.port}/eureka/

Source: https://bianchenghao.cn/hz/139584.html