
No new logs being created #51

Open

paul-uz opened this issue Jan 30, 2024 · 11 comments

Comments

@paul-uz

paul-uz commented Jan 30, 2024

I was able to get an initial log file showing in S3, but now I'm not seeing any new log files, and the original file isn't being updated, despite seeing the output in the console (AWS CloudWatch logs).

@autopulated
Member

autopulated commented Jan 30, 2024

The file in S3 is updated based on the upload_every and buffer_size options (defaulting to 20 seconds and 10 KB), so the file in S3 will be updated after buffered data is 20 seconds old, or there is 10 KB of buffered data (whichever comes first).
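For reference, a minimal sketch of those two options using the defaults described above (the bucket name is hypothetical; values are in milliseconds and bytes respectively):

const { S3StreamLogger } = require('s3-streamlogger');

const s3Stream = new S3StreamLogger({
  bucket: 'logs-bucket',    // hypothetical bucket name
  upload_every: 20 * 1000,  // flush buffered data once it is 20 seconds old (the default)
  buffer_size: 10 * 1024    // ...or as soon as 10 KB has accumulated (the default)
});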

If those options don't explain the behaviour you're seeing, then please share a small complete program that reproduces the problem.

FWIW, if you're using CloudWatch logs there isn't really much point in using this module: it was written before CloudWatch logs existed, and I'm only maintaining it for the convenience of existing users.

@paul-uz
Author

paul-uz commented Jan 30, 2024

so the file in S3 will be updated after buffered data is 20 seconds old, or there is 10 KB of buffered data (whichever comes first).

What does this mean exactly? What do I need to set these options to for the file to get updated every time?

I understand what you're saying about CloudWatch, but I'd like the option to create the log file as well, so this package is much appreciated!

@paul-uz
Author

paul-uz commented Jan 30, 2024

I have tried setting these both really low, but still no new logs.

@autopulated
Member

What do I need to set these options to for the file to get updated every time?

Setting buffer_size: 0 would do it, but it is not recommended, since this will cause a large amount of traffic to S3.
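To illustrate the trade-off, a sketch of that (not recommended) configuration, again with a hypothetical bucket name:

const s3Stream = new S3StreamLogger({
  bucket: 'logs-bucket',  // hypothetical
  buffer_size: 0          // every write is sent to S3 immediately: one upload per log line
});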

Please can you provide a small example program that reproduces the problem?

@paul-uz
Author

paul-uz commented Jan 30, 2024

Sadly I can only share my winston setup:

const s3Transport = new winston.transports.Stream({
  stream: new S3StreamLogger({
    bucket: 'logs-bucket',
    buffer_size: 1,
    folder: 'foo',
    region: REGION,
    rotate_every: 1000,
    upload_every: 500,
    access_key_id: AWS_ACCESS_KEY_ID,
    secret_access_key: AWS_SECRET_ACCESS_KEY,
  }),
});

s3Transport.on('error', (err) => {
  console.error(err);
});

const logger = winston.createLogger({
  exitOnError: false,
  format: winston.format.json(),
  level: NODE_ENV === 'production' ? 'error' : 'debug',
  transports: [
    new winston.transports.Console(),
    s3Transport,
  ],
});

@paul-uz
Author

paul-uz commented Jan 30, 2024

I tried setting buffer_size and upload_every to 0, and it made no difference

@autopulated
Member

rotate_every: 1000 means that a new file name will be used every second (all times in the options are specified in milliseconds), so I'd expect your new logs to be written to new files.
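For comparison, a sketch with more typical values (all times in milliseconds; these particular numbers are illustrative, not maintainer recommendations):

const s3Stream = new S3StreamLogger({
  bucket: 'logs-bucket',         // hypothetical
  rotate_every: 60 * 60 * 1000,  // rotate to a new file name every hour
  upload_every: 20 * 1000,       // flush buffered data every 20 seconds
  buffer_size: 10 * 1024         // or whenever 10 KB has accumulated
});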

@paul-uz
Author

paul-uz commented Jan 30, 2024 via email

@autopulated
Member

autopulated commented Jan 30, 2024

Oh, if you are using AWS Lambda then you need to call flushFile (call it with a callback, and wait for the callback) before finishing your lambda function.

s3-streamlogger is especially not suitable for use in Lambda though, as this will add a substantial delay to your lambda. CloudWatch logs work much better.

@paul-uz
Author

paul-uz commented Jan 30, 2024 via email

@autopulated
Member

autopulated commented Jan 30, 2024

const s3Stream = new S3StreamLogger({
  // ... options; auth should not be specified here, it should come from the lambda function role
});
const s3Transport = new winston.transports.Stream({
  stream: s3Stream
});

// ...

doMyLambdaWork((err1) => {
  // flush any buffered log data to S3 before the lambda terminates
  s3Stream.flushFile((err2) => {
    callLambdaDoneHandlerHere(err1 || err2);
  });
});

To reiterate: s3-streamlogger is especially not suitable for use in Lambda, as this will add a substantial delay to your lambda, which might incur a significant increase in cost. CloudWatch logs work much better.
