
error="bignum too big to convert into `unsigned long long'" #1713

Closed
kkniffin opened this issue Oct 9, 2017 · 2 comments
kkniffin commented Oct 9, 2017


fluentd 0.14.21
Linux

I am having an issue where data sent to Fluentd throws an error, after which the system is unrecoverable: logs no longer flow until I stop and restart the service. I am unsure which specific data message is causing the issue, and also why one message would cause the system to stop responding to future messages entirely. I would expect it to error on just that one message but continue to process subsequent ones.

The error I eventually receive is:

2017-10-09 14:13:57 +0000 [warn]: #0 fluent/event_router.rb:87:emit: suppressed same stacktrace
fluentd-nxlog_1   | 2017-10-09 14:13:57 +0000 [warn]: #0 fluent/log.rb:336:warn: emit transaction failed: error_class=RangeError error="bignum too big to convert into `unsigned long long'" tag="nxlog"
fluentd-nxlog_1   |   2017-10-09 14:13:57 +0000 [warn]: #0 fluent/event_router.rb:87:emit: suppressed same stacktrace
fluentd-nxlog_1   | 2017-10-09 14:13:57 +0000 [error]: #0 fluent/log.rb:356:error: unexpected error on reading data host="x.x.x.x" port=60051 error_class=RangeError error="bignum too big to convert into `unsigned long long'"
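For reference, this RangeError is what Ruby raises when an integer larger than 2**64 - 1 is packed into an unsigned 64-bit field (for example during MessagePack serialization of a record value), so I suspect one incoming record carries such an oversized number. A minimal Ruby sketch that reproduces the same message (the oversized field is an assumption on my part, not something confirmed in the data):

# Hypothetical reproduction: packing an integer one past the
# unsigned long long maximum raises the same RangeError.
value = 2**64
[value].pack('Q')
# => RangeError: bignum too big to convert into `unsigned long long'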

This is the Fluentd config I have:

<system>
  log_level error
</system>


####################
###### SOURCES #####
####################

<source>
  @type tcp
  format json
  port 5140
  tag nxlog
</source>


#####################
##### MANIPULATE ####
#####################

#### Add Server Received Time to Records
#<filter nxlog>
#  @type record_modifier
#  <record>
#    FluentDReceived ${Time.at(time).to_s}
#  </record>
#</filter>

######################################
##### Beat Processing ################
######################################

<match nxlog>
  @type rewrite_tag_filter
  rewriterule1 NXLogFileType ^dhcp$ nxlog.tagged.dhcp
  rewriterule2 NXLogFileType ^nps$ nxlog.tagged.nps
  rewriterule3 NXLogFileType ^dns$ nxlog.tagged.dns
  rewriterule4 NXLogFileType ^wineventlog$ nxlog.tagged.wineventlog
  rewriterule5 NXLogFileType ^iis$ nxlog.tagged.iis
  rewriterule6 NXLogFileType ^nxlog$ nxlog.tagged.nxlog
  rewriterule7 NXLogFileType ^fluent$ nxlog.tagged.fluent
  rewriterule8 NXLogFileType ^sysmon$ nxlog.tagged.sysmon
  rewriterule9 NXLogFileType .+ nxlog.tagged.unmatched
</match>

######################
##### OUTPUT #########
######################

<match nxlog.tagged.**>
  @type copy
  <store>
    @type azure-loganalytics
    customer_id XXXX
    shared_key XXXX
    log_type NXLogDNS
    add_time_field true
    time_field_name LogSentTime
    time_format %s
    localtime true
    add_tag_field true
    tag_field_name nxlogDNS
  </store>
  <store>
    @type stdout
  </store>
</match>

Any help troubleshooting this, and making it so that one bad message doesn't cause the whole system to fail, would be appreciated. Perhaps there is a way to just ignore the one bad message?

@repeatedly
Member

I tested with a simple TCP script, and it worked after the error happened.

require 'socket'

# Send one well-formed JSON record to the tcp input on port 5140.
log = '{"k":"v"}'
TCPSocket.open('localhost', 5140) do |s|
  s.write(log + "\n")
end

So if the sigdump result doesn't show stuck behaviour, the nxlog output may be keeping a broken connection.

And to rescue invalid records, <label @ERROR> may help.
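For example, a minimal sketch of such a label (assuming fluentd v0.14+, where records that raise during emit are re-routed to the built-in @ERROR label; the output path below is hypothetical):

<label @ERROR>
  <match **>
    # Dump failed records to disk for later inspection instead of
    # blocking the pipeline (path is hypothetical; adjust as needed).
    @type file
    path /var/log/fluent/error
  </match>
</label>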

@ganmacs
Member

ganmacs commented Dec 23, 2019

This is a stale issue, so I'm closing it. If you still have the problem, updating fluentd might help.

ganmacs closed this as completed Dec 23, 2019