
buffer flush failed by unexpected Zlib::DataError #1903

Closed
google238 opened this issue Mar 19, 2018 · 28 comments

@google238

google238 commented Mar 19, 2018

Fluentd v1.0. This is my config:

<match myprofile.*.*>
  @type file
  path /var/log/td-agent/${tag[1]}/%Y%m/${tag[2]}/myprofile.${tag[2]}.%Y%m%d
  compress gzip
  append true
  <buffer tag, time>
    @type memory
    flush_mode interval
    flush_interval 30s 
  </buffer>
  <format>
    @type single_value
  </format>
</match>

There are many warn logs like this:
2018-03-19 00:18:51 +0900 [warn]: #0 failed to flush the buffer. retry_time=0 next_retry_seconds=2018-03-19 00:18:52 +0900 chunk="567b15a13fd0f003c72c7e0e534dbf3e" error_class=Zlib::DataError error="data error"
2018-03-19 00:18:51 +0900 [warn]: #0 suppressed same stacktrace
2018-03-19 00:18:52 +0900 [warn]: #0 retry succeeded. chunk_id="567b15a13fd0f003c72c7e0e534dbf3e"

The resulting file cannot be opened correctly:
gzip: invalid compressed data--crc error
gzip: invalid compressed data--length error
What happened?

@repeatedly
Member

What happened?

This is a very difficult problem because zlib doesn't report detailed error information; that is a drawback of zlib...
For safety, it is better to disable gzip compression for now.

I will check your configuration with test traffic in my environment.
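As a sketch, that workaround is just the configuration above with the compress line removed (assuming nothing else needs to change):

<match myprofile.*.*>
  @type file
  path /var/log/td-agent/${tag[1]}/%Y%m/${tag[2]}/myprofile.${tag[2]}.%Y%m%d
  append true
  <buffer tag, time>
    @type memory
    flush_mode interval
    flush_interval 30s
  </buffer>
  <format>
    @type single_value
  </format>
</match>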

@repeatedly self-assigned this Mar 20, 2018
@repeatedly added the v1 label Mar 20, 2018
@felixzh2020

Hi, I get this error too. Any suggestions? @repeatedly

@jl2005
Contributor

jl2005 commented May 7, 2018

Hi, I get this error too. Any suggestions? @repeatedly

2018-05-07 03:24:44 +0800 [warn]: failed to flush the buffer. retry_time=0 next_retry_seconds=2018-05-07 03:24:45 +0800 chunk="56b8e81424d18ec86e8a0c384c3c098b" error_class=Zlib::DataError error="data error"
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/gzip_codec.rb:17:in `close'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/gzip_codec.rb:17:in `compress'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/compressor.rb:54:in `block in compress_data'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/instrumenter.rb:21:in `instrument'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/compressor.rb:53:in `compress_data'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/compressor.rb:37:in `compress'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:89:in `block (2 levels) in send_buffered_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/message_buffer.rb:44:in `block (2 levels) in each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/message_buffer.rb:43:in `each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/message_buffer.rb:43:in `block in each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/message_buffer.rb:42:in `each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/message_buffer.rb:42:in `each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:87:in `block in send_buffered_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:81:in `each'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:81:in `send_buffered_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:47:in `block in execute'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/instrumenter.rb:21:in `instrument'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/produce_operation.rb:41:in `execute'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/producer.rb:298:in `block in deliver_messages_with_retries'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/producer.rb:286:in `loop'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/producer.rb:286:in `deliver_messages_with_retries'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/producer.rb:236:in `block in deliver_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/instrumenter.rb:21:in `instrument'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/ruby-kafka-0.5.5/lib/kafka/producer.rb:229:in `deliver_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-kafka-0.7.2/lib/fluent/plugin/out_kafka_buffered.rb:277:in `deliver_messages'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluent-plugin-kafka-0.7.2/lib/fluent/plugin/out_kafka_buffered.rb:340:in `write'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.1.0/lib/fluent/compat/output.rb:131:in `write'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1094:in `try_flush'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:1319:in `flush_thread_run'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.1.0/lib/fluent/plugin/output.rb:439:in `block (2 levels) in start'
  2018-05-07 03:24:44 +0800 [warn]: /var/lib/gems/2.3.0/gems/fluentd-1.1.0/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'

@repeatedly
Member

@nurse Do you have any insight into debugging this zlib error?

@jl2005
Contributor

jl2005 commented May 8, 2018

Using @google238's config, I can reproduce the problem. @repeatedly can try it.

@nurse
Contributor

nurse commented May 8, 2018

Just an idea, but maybe the chunk is very big, beyond 2GB/4GB?
No, compressing 4GB of random data works fine...

@jl2005
Contributor

jl2005 commented May 10, 2018

@repeatedly @nurse This may be because of

https://github.com/ruby/zlib/blob/e529c315e1b568535b2e9439f82249cf4cf6c6ae/ext/zlib/zlib.c#L1025

    err = (int)(VALUE)rb_thread_call_without_gvl(zstream_run_func, (void *)&args,
                                                 zstream_unblock_func, (void *)&args);

In Fluentd 1.0.0, a call to zstream_unblock_func will interrupt zstream_run_func.

@nurse
Contributor

nurse commented May 11, 2018

Hmm, the code you pointed out does seem wrong...
I'm considering how to fix it...

@jl2005
Contributor

jl2005 commented May 11, 2018

@nurse Do you know how this happened?

@nurse
Contributor

nurse commented May 11, 2018

@jl2005 If a signal is received or the thread is killed, zstream_unblock_func will be called.

@nurse
Contributor

nurse commented May 11, 2018

I tried the scenario but it doesn't reproduce Zlib::DataError on Zlib::GzipWriter#close.

require 'zlib'
require 'stringio'
data = IO.read("/dev/urandom", 20_000_000)
buffer = StringIO.new
buffer.set_encoding(Encoding::BINARY)

writer = Zlib::GzipWriter.new(buffer, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)
puts "write"
begin
  writer.write(data)
rescue Interrupt
  p $!
end
puts "done"
writer.close

p Zlib.gunzip(buffer.string).bytesize
% ./test.rb
write
967: zstream_run_func:
^C1018: interrupt?:1
Interrupt
done
967: zstream_run_func:
977: z->func->run: 1
1018: interrupt?:0
1070: 1
967: zstream_run_func:
977: z->func->run: 1
1018: interrupt?:0
1070: 1
Traceback (most recent call last):
	1: from ./test.rb:17:in `<main>'
./test.rb:17:in `gunzip': invalid compressed data -- crc error (Zlib::GzipFile::CRCError)

@jl2005
Contributor

jl2005 commented May 11, 2018

@nurse Sending a signal or killing the thread does NOT call zstream_unblock_func.

I changed ext/zlib/zlib.c:

static void *
zstream_run_func(void *ptr)
{
    struct zstream_run_args *args = (struct zstream_run_args *)ptr;
    int err, state, flush = args->flush;
    struct zstream *z = args->z;
    uInt n;

    err = Z_OK;
    while (!args->interrupt) {
        n = z->stream.avail_out;
        err = z->func->run(&z->stream, flush);
        rb_str_set_len(z->buf, ZSTREAM_BUF_FILLED(z) + (n - z->stream.avail_out));

        zstream_unblock_func(args);   /* <---- call zstream_unblock_func */

When I run this code:

require 'zlib'
require 'stringio'
data = IO.read("/dev/urandom", 20_000_000)
buffer = StringIO.new
buffer.set_encoding(Encoding::BINARY)

writer = Zlib::GzipWriter.new(buffer, Zlib::DEFAULT_COMPRESSION, Zlib::DEFAULT_STRATEGY)

begin
  writer.write(data)
  writer.close
rescue Interrupt
  p $!
end

I got:

tail.rb:10:in `close': data error (Zlib::DataError)
	from tail.rb:10:in `<main>'

@repeatedly changed the title from "failed to flush the buffer" to "buffer flush failed by unexpected Zlib::DataError" May 11, 2018
@repeatedly
Member

The weird point is that the retry succeeds.
Fluentd sends the same data, but Zlib compression only sometimes fails.

@jl2005
Contributor

jl2005 commented May 12, 2018

zstream_unblock_func is only called when an event occurs, which is why the retry succeeds. The event may be triggered by the on_notify event.

@nurse
Contributor

nurse commented May 12, 2018

@jl2005

Sending a signal or killing the thread does NOT call zstream_unblock_func.

The timer thread calls zstream_unblock_func.
You can confirm this with the patch below.

diff --git a/ext/zlib/zlib.c b/ext/zlib/zlib.c
index 6bd344465b..d6ea44df5d 100644
--- a/ext/zlib/zlib.c
+++ b/ext/zlib/zlib.c
@@ -1016,6 +1016,7 @@ static void
 zstream_unblock_func(void *ptr)
 {
     struct zstream_run_args *args = (struct zstream_run_args *)ptr;
+    rb_bug("zstream_unblock_func");

     args->interrupt = 1;
 }
...
-- C level backtrace information -------------------------------------------
0   ruby                                0x00000001062faff7 rb_vm_bugreport + 135
1   ruby                                0x000000010616daf8 rb_bug + 472
2   zlib.bundle                         0x00000001065dfcd2 zstream_unblock_func + 18
3   ruby                                0x00000001062ae476 thread_timer + 326
4   libsystem_pthread.dylib             0x00007fff542526c1 _pthread_body + 340
5   libsystem_pthread.dylib             0x00007fff5425256d _pthread_body + 0
...

@jl2005
Contributor

jl2005 commented May 13, 2018

@nurse How can we fix this bug?

@jl2005
Contributor

jl2005 commented May 15, 2018

@nurse @repeatedly
I've submitted a pull request.

@repeatedly
Member

In fluentd's output plugins, "Thread#kill" is not called until stop/restart.
Do sleep and Thread#run interrupt other threads' processing?

@nurse
Contributor

nurse commented May 16, 2018

Do sleep and Thread#run interrupt other threads' processing?

Bingo!
rb_thread_wakeup -> rb_thread_wakeup_alive -> rb_threadptr_ready -> rb_threadptr_interrupt -> rb_threadptr_interrupt_common -> RUBY_VM_SET_INTERRUPT sets the interrupt flag!
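A minimal sketch of that interaction (a hypothetical reproduction, not the fluentd code path; whether it actually raises depends on the Ruby/zlib version): one thread compresses while another repeatedly calls Thread#run on it, which follows exactly the call chain above.

require 'zlib'
require 'stringio'

data = IO.read("/dev/urandom", 20_000_000)

worker = Thread.new do
  buffer = StringIO.new
  buffer.set_encoding(Encoding::BINARY)
  writer = Zlib::GzipWriter.new(buffer)
  writer.write(data)
  writer.close  # on affected versions this can raise Zlib::DataError
  buffer.string.bytesize
end

# Thread#run goes through rb_thread_wakeup and sets the worker's interrupt
# flag even though the worker is busy compressing, mimicking what fluentd's
# threads can do to each other.
100.times do
  worker.run if worker.alive?  # Thread#run raises ThreadError on a dead thread
  sleep 0.001
end

p worker.value  # joins; re-raises the worker's exception if one occurred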

@jl2005
Contributor

jl2005 commented May 17, 2018

@nurse What should we do?

repeatedly added a commit that referenced this issue May 21, 2018
… ref #1903

Direct append causes a broken gzipped file when Zlib::DataError happens.
This patch uses a Tempfile for gzip compression to avoid broken compressed output.

Signed-off-by: Masahiro Nakagawa <repeatedly@gmail.com>
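A rough sketch of the idea behind the patch (a hypothetical helper, not the actual change, which lives in fluentd's out_file plugin): compress the chunk into a Tempfile first, and append to the destination only after GzipWriter#close has succeeded.

require 'zlib'
require 'tempfile'

# Hypothetical helper illustrating the Tempfile approach: if compression is
# interrupted and GzipWriter#close raises Zlib::DataError, it raises before
# anything touches the destination, so no truncated gzip member is appended.
def append_gzipped(path, chunk_data)
  Tempfile.create('gzip-chunk') do |tmp|
    tmp.binmode
    gz = Zlib::GzipWriter.new(tmp)
    gz.write(chunk_data)
    gz.close  # raises here if the compressed stream is corrupted
    File.open(path, 'ab') { |dest| IO.copy_stream(tmp.path, dest) }
  end
end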
@repeatedly
Member

Temporary fix for this: #1995
The error still happens, but broken files should no longer be generated.

@jl2005
Contributor

jl2005 commented May 21, 2018

@repeatedly When will the real fix land?

repeatedly added a commit that referenced this issue May 22, 2018
…gzip-append

out_file: Temporal fix for broken gzipped files with gzip and append. ref #1903
@hlakshmi

hlakshmi commented Jul 19, 2018

@repeatedly
We ran into the same bug in v1.0.2, even when append is set to false in the config file. Here is the exception stack trace:

2018-07-18 22:45:21 -0700 [warn]: failed to flush the buffer. retry_time=0 next_retry_seconds=2018-07-18 22:45:22 -0700 chunk="57153ac4d99d9e0b0bc579cb8dfebe46" error_class=Zlib::DataError error="data error"
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:219:in `close'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:219:in `block in write_gzip_with_compression'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:216:in `open'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:216:in `write_gzip_with_compression'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:201:in `call'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:201:in `block in write'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:300:in `find_filepath_available'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/out_file.rb:200:in `write'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/output.rb:1093:in `try_flush'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/output.rb:1318:in `flush_thread_run'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin/output.rb:439:in `block (2 levels) in start'
  2018-07-18 22:45:21 -0700 [warn]: /opt/illumio-pce/external/lib/ruby/gems/2.3.0/gems/fluentd-1.0.2/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'
2018-07-18 22:45:22 -0700 [warn]: retry succeeded. chunk_id="57153ac4d99d9e0b0bc579cb8dfebe46"

Any suggestions would be helpful. Also, the log says the retry succeeded; does that mean the retry created a new file with the same data?

@repeatedly
Member

@hlakshmi Yes, and the latest version should no longer produce broken files with append true.
We will resolve the underlying problem in v1.3.

tarokkk added a commit to kube-logging/logging-operator that referenced this issue Aug 23, 2018
tarokkk added a commit to kube-logging/logging-operator that referenced this issue Aug 27, 2018
@repeatedly
Member

v1.3 should resolve this problem. If you hit the same problem with v1.3 or later, reopen the issue.

@rverma-jm

Facing the same issue with 1.10.4:

2020-05-25 08:47:30 +0000 [info]: starting fluentd-1.10.4 pid=7 ruby="2.6.6"
2020-05-25 08:47:30 +0000 [info]: spawn command to main:  cmdline=["/usr/local/bin/ruby", "-Eascii-8bit:ascii-8bit", "/usr/local/bundle/bin/fluentd", "-c", "/fluentd/etc/fluent.conf", "-p", "/fluentd/plugins", "--under-supervisor"]
2020-05-25 08:47:33 +0000 [info]: adding match in @FLUENT_LOG pattern="fluent.*" type="null"
2020-05-25 08:47:33 +0000 [info]: #0 adding filter pattern="input.s3" type="split_array"
2020-05-25 08:47:33 +0000 [info]: #0 adding match pattern="input.s3" type="copy"
2020-05-25 08:47:35 +0000 [warn]: #0 [out_es] Detected ES 7.x: `_doc` will be used as the document `_type`.
2020-05-25 08:47:35 +0000 [info]: adding source type="http"
2020-05-25 08:47:35 +0000 [info]: #0 adding source type="s3"
2020-05-25 08:47:35 +0000 [info]: #0 starting fluentd worker pid=17 ppid=7 worker=0
2020-05-25 08:47:35 +0000 [debug]: #0 [firehose_ok] restoring buffer file: path = /var/log/fluentd/buffers/audit.cloudtrail/buffer.b5a674c207933c014d749862efbcff042.log
2020-05-25 08:47:35 +0000 [debug]: #0 [firehose_ok] buffer started instance=69939659610400 stage_size=40621244 queue_size=0
2020-05-25 08:47:35 +0000 [debug]: #0 [firehose_ok] flush_thread actually running
2020-05-25 08:47:35 +0000 [debug]: #0 [firehose_ok] enqueue_thread actually running
2020-05-25 08:47:37 +0000 [info]: #0 fluentd worker is now running worker=0
2020-05-25 08:47:37 +0000 [warn]: #0 [in_s3]  error_class=Zlib::GzipFile::Error error="not in gzip format"
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:361:in `initialize'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:361:in `wrap'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:361:in `block in extract'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:359:in `loop'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:359:in `extract'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:294:in `process'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:187:in `block in run'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:413:in `block in yield_messages'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:412:in `each'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:412:in `yield_messages'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:405:in `block in process_messages'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:404:in `catch'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:404:in `process_messages'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:336:in `block (2 levels) in poll'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:331:in `loop'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:331:in `block in poll'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:330:in `catch'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/aws-sdk-sqs-1.25.0/lib/aws-sdk-sqs/queue_poller.rb:330:in `poll'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluent-plugin-s3-1.3.1/lib/fluent/plugin/in_s3.rb:181:in `run'
  2020-05-25 08:47:37 +0000 [warn]: #0 /usr/local/lib/ruby/gems/2.6.0/gems/fluentd-1.10.4/lib/fluent/plugin_helper/thread.rb:78:in `block in thread_create'

@repeatedly Shall we reopen the issue?

@repeatedly
Member

No. This issue is about the zlib library and a thread-interruption problem.
Your case is a "not in gzip format" error, so it is unrelated: the S3 object being read is not gzip data at all.
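To confirm that diagnosis, a quick hypothetical check (the file name here is a placeholder): gzip streams always begin with the magic bytes 0x1f 0x8b.

# Sketch: verify a downloaded S3 object is really gzip before suspecting
# the compressor; gzip streams always start with the bytes 0x1f 0x8b.
magic = File.binread('downloaded-object', 2)
if magic == "\x1f\x8b".b
  puts 'looks like gzip'
else
  puts 'not gzip -- check how the objects were written to S3'
end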

@xidiandb

@repeatedly How does the "not in gzip format" problem arise, and how do I solve it?
