
socket hang up #142

Open
DannyJoris opened this issue Aug 24, 2016 · 7 comments
@DannyJoris

I'm experiencing this error quite often and I'm not sure why. I have over 4,000 URLs imported, and I've noticed the dashboard slows down significantly. Would a queue that size be too large? It ran fine with 800 URLs for a while.

```
Error: socket hang up
    at createHangUpError (_http_client.js:211:15)
    at Socket.socketOnEnd (_http_client.js:303:23)
    at emitNone (events.js:72:20)
    at Socket.emit (events.js:166:7)
    at endReadableNT (_stream_readable.js:921:12)
    at nextTickCallbackWith2Args (node.js:442:9)
    at process._tickDomainCallback (node.js:397:17)
```
@DannyJoris

One thing I found is that `app.webservice.tasks.get({lastres: true}, function(err, tasks) {` is really slow with this many items.

@DannyJoris

I tried adding `forever: true` to the request call, as suggested in a few places, but that didn't work either: https://github.com/pa11y/webservice-client-node/blob/master/lib/client.js#L112-L118

@DannyJoris

DannyJoris commented Aug 26, 2016

Digging deeper, I found that fetching all the result objects is what causes the slowness: `model.result.getAll({}, function(err, results) {` in pa11y-webservice/route/tasks.js. I ran the query manually from the Mongo CLI and it's slow there as well. I'm starting to realise I had the wrong perception of MongoDB's performance before this. Storing a reference in each Task object to its latest Result object could be one way to improve performance, I think.
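A rough sketch of what I mean by that last idea (the field names here are hypothetical, not what pa11y-webservice actually stores): whenever a new result is saved, copy its id and a small summary onto the task, so the dashboard never has to load full result documents just to render the list.

```javascript
// Hypothetical denormalisation: keep the latest result's id and a
// compact summary on the task document itself.
function attachLatestResult(task, result) {
  task.lastResultId = result.id;
  task.lastResult = {          // summary only, never the full result body
    date: result.date,
    count: {
      error: result.count.error,
      warning: result.count.warning,
      notice: result.count.notice
    }
  };
  return task;
}
```

The trade-off is a small write on every run in exchange for the dashboard read no longer scanning the whole results collection.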

@DannyJoris

If I limit the `from` time to 3 days instead of 30, that helps significantly!

```javascript
model.result.getAll({
  from: (new Date(Date.now() - (1000 * 60 * 60 * 24 * 3))).getTime()
}, function(err, results) {
```
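For anyone copying this: the cutoff is just "N days ago, in epoch milliseconds". A tiny helper (my own naming, not part of pa11y-webservice) makes the arithmetic clearer:

```javascript
// Returns the epoch-milliseconds timestamp for `days` days ago.
// 1000 ms * 60 s * 60 min * 24 h = milliseconds per day.
function daysAgo(days) {
  return new Date(Date.now() - (1000 * 60 * 60 * 24 * days)).getTime();
}
```

So the call above becomes `model.result.getAll({from: daysAgo(3)}, ...)`.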

@rowanmanning

Hi Danny, ouch! Yeah, I guess we're not really geared up for that many URLs quite yet. Your suggestion of adding a reference to the last run's result on the task object might be sensible, but whatever happens, Node.js is still loading a lot of data into memory.

I'd like to be able to fix this, and it'll definitely inform development on Sidekick.

@brakon

brakon commented Aug 15, 2019

It seems this happens not only when you have that many URLs, but also when you have a lot of "notices" or any other kind of issue.

For example, I have 56 result objects, some of which have ~900 sub-results (individual errors/warnings/notices), and that's enough for the pa11y-dashboard homepage to time out.

I'm surprised people are able to open the dashboard with weeks' worth of data... they must not have many accessibility errors.

@ahmedansari153

This issue seems to be plaguing me as well. Hopefully a fix can be found soon.
