
download cancelled event? #13

Open
Spongman opened this issue Aug 17, 2016 · 74 comments
@Spongman

Is there a way to know if the download has been cancelled by the user?

@jimmywarting
Owner

jimmywarting commented Aug 17, 2016

It should be, right about here:

console.log('user aborted')

But it doesn't get triggered... I think it's a bug or a missing feature.
Someone should report this to Chromium.
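For context, here is a minimal sketch (outside any service worker) of the hook being discussed: when a consumer cancels an unlocked ReadableStream, the underlying source's `cancel(reason)` callback is invoked. The stream and reason below are illustrative, not StreamSaver code.

```javascript
// Minimal sketch of the cancel hook under discussion: cancelling an
// unlocked ReadableStream invokes the underlying source's cancel(reason).
let cancelReason = null;

const rs = new ReadableStream({
  pull(ctrl) {
    // keep producing data until the consumer cancels
    ctrl.enqueue(new Uint8Array([97])); // 'a'
  },
  cancel(reason) {
    // this is where the service worker would log 'user aborted'
    cancelReason = reason;
  },
});

rs.cancel('user aborted'); // invokes the cancel(reason) callback above
```

Whether the browser actually routes a user-initiated download cancellation into this callback is exactly the bug discussed in this thread.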

@Spongman
Author

Yeah, thanks, that's what I figured...

i reported it here: w3c/ServiceWorker#957 (comment)
(cc'd, here: https://bugs.chromium.org/p/chromium/issues/detail?id=638494)

@jimmywarting
Owner

Thanks for that 👍

@jimmywarting
Owner

jimmywarting commented Feb 17, 2019

Now, with transferable streams, there is a way to detect when the bucket (queuing strategy) is full, meaning the client paused the stream. You can also detect if the user aborted the request.
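The "bucket is full" signal referred to here is ordinary Streams backpressure. A minimal illustration (not StreamSaver code) using a `CountQueuingStrategy` with a high-water mark of 1:

```javascript
// Backpressure sketch: with a high-water mark of 1, the "bucket" holds one
// chunk. desiredSize tells the producer whether to keep enqueueing.
let ctrl;
const rs = new ReadableStream(
  { start(c) { ctrl = c; } },
  new CountQueuingStrategy({ highWaterMark: 1 })
);

const before = ctrl.desiredSize; // 1: bucket empty, producer may enqueue
ctrl.enqueue('chunk');
const after = ctrl.desiredSize;  // 0: bucket full, a well-behaved producer pauses
```

With transferable streams, the service worker observes this same backpressure through its end of the transferred stream, which is how a paused or aborted client becomes visible to the producer.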

@TexKiller
Contributor

TexKiller commented Mar 3, 2019

@jimmywarting

Just a heads up: Firefox already does notify the Stream passed to respondWith when the download is cancelled... This line is executed:

console.log('user aborted')

@jimmywarting
Owner

I consider the abort event a minor issue, and it would automatically be resolved once all browsers start supporting transferable streams.

However, it would be nice to solve this abort event in Firefox.
I'll push this missing abort event to a later release.

@eschaefer

Hey all, this was a "must" for me in Firefox, so here's my solution: #105

Would love feedback.

@allengordon011

Is it possible to access the abort() to do more than just console.log('user aborted')?
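It is. `cancel(reason)` is a plain callback, so it can do arbitrary cleanup. A hedged sketch (the `upstream` controller is illustrative) that aborts an upstream source instead of only logging:

```javascript
// Sketch: cancel(reason) aborting an upstream source instead of just logging.
// 'upstream' stands in for whatever feeds the download (e.g. a fetch()).
const upstream = new AbortController();

const rs = new ReadableStream({
  pull(ctrl) {
    ctrl.enqueue(new Uint8Array([97]));
  },
  cancel(reason) {
    // stop producing: abort the upstream request that feeds this stream
    upstream.abort(reason);
  },
});

rs.cancel('user aborted'); // upstream.signal.aborted is now true
```

In a real service worker the same callback could also notify the page (e.g. over a MessageChannel) that the download was cancelled.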

@M0aB0z

M0aB0z commented Oct 28, 2020

> Now, with transferable streams, there is a way to detect when the bucket (queuing strategy) is full, meaning the client paused the stream. You can also detect if the user aborted the request.

Hello guys,
Thanks a lot jimmywarting for the amazing work you did on this project, it's very much appreciated.
Unfortunately, the user events (Pause & Cancel) seem like critical features in order to stop the write operations according to the user's intent.

Is there any news on this point?

Thanks

@jimmywarting
Owner

sry, got some bad news.

Transferable streams are still only supported in Blink behind an experimental flag. https://chromestatus.com/feature/5298733486964736

The 2nd issue is about cancellation... Chrome never emits the cancel event (here), but it can fill up the bucket to the point where it stops calling pull(ctrl) {...} (asking for more data).
Here is the (now old) Chromium bug about cancellation: https://bugs.chromium.org/p/chromium/issues/detail?id=638494 - pls star it to make it important
Only FF emits this cancel event.

The 3rd issue is that StreamSaver lacks the concept of buckets when talking to the service worker over a MessageChannel: it doesn't use the pull system and just eagerly enqueues more data without any respect for a bucket or the pull request - which can lead to memory issues if you enqueue data faster than you are able to write it to disk.
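The missing backpressure described here can be sketched as a pull-driven producer: data is only requested when `pull()` asks for it, instead of being eagerly enqueued. `nextChunk` is a hypothetical stand-in for asking the main thread for the next chunk over the MessageChannel.

```javascript
// Pull-driven producer sketch: a chunk is only produced when the consumer
// is ready for it, so the internal queue can never grow without bound.
const sent = [];
function nextChunk() {
  // hypothetical: in StreamSaver this would request a chunk from the page
  return `chunk-${sent.length}`;
}

const rs = new ReadableStream(
  {
    pull(ctrl) {
      const chunk = nextChunk();
      sent.push(chunk);
      ctrl.enqueue(chunk); // one chunk per pull, no eager buffering
    },
  },
  new CountQueuingStrategy({ highWaterMark: 1 })
);
```

Reading from `rs` drives `pull()` one chunk at a time; a slow consumer simply stops being asked for data, which is the behaviour the eager MessageChannel path lacks.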


I have written a 2nd stream-saving library based on native file system access that kind of acts like an adapter for different storages (such as writing to a sandboxed fs, IndexedDB, cache storage and memory). It also comes with an adapter for writing data to disk using the same technique as StreamSaver with a service worker. However, my adapter does it slightly differently and behaves more like a .pipe should with respect to cancel, and only asks for more data when it needs it. It also properly reports back with a promise when data has been sent from the main thread over to the service worker streams (which StreamSaver totally lacks: it just resolves writer.write(data) directly).
Using a service worker is optional too, in which case it will build up a Blob in memory and later download it using a[download] instead. I have made it optional since a few people want to host the service worker themselves, so there is more manual work to set it up properly.

I think that in the future native file system access will supersede FileSaver and my own StreamSaver lib once it gets more adoption, at which point I will maybe deprecate StreamSaver in favor of my 2nd file system adapter - but not yet.

Maybe you would like to try it out instead?

One thing that native file system access does differently is that it can enqueue other types of data such as strings, Blobs, and any typed array or ArrayBuffer - so saving a large blob/file is more beneficial since the browser doesn't need to read the blob.

Oh, and give this issue a 👍 as well ;)

@M0aB0z

M0aB0z commented Oct 28, 2020

> sry, got some bad news. [full reply quoted above]

Thanks for your detailed answer, I'll have a look at your file system lib; it looks very interesting and may solve my problem.
Thanks again for all your quality work.

@guest271314

@jimmywarting Is there a minimal, verifiable, complete example of this issue?

@jimmywarting
Owner

Hmm, I tried to create a minimal plnkr example here: https://plnkr.co/edit/I27Dl0chuMCuaoHD?open=lib%2Fscript.js&preview

Basically: wait 2 s until the iframe pops up and save the never-ending file download, then cancel the download from the browser UI and expect the cancel event to be called - but it never happens.

I'm 100% sure that this used to work in Firefox, but I can't get the cancel event to fire anymore in Firefox. 😕
I also tried my own examples, but I didn't get the "user aborted" console message there either.

@guest271314

cancel is not an event. The underlying source's cancel() method is called after cancel(reason) is executed on the stream, if the stream is not locked. The stream becomes locked momentarily after respondWith() is executed. You can step through this by changing the placement of rs.cancel():

self.addEventListener('activate', (event) => {
  event.waitUntil(clients.claim());
});

var _;
onfetch = async (evt) => {
  console.log(evt.request.url);
  if (evt.request.url.endsWith('ping')) {
    try {
      var rs = new ReadableStream({
        async start(ctrl) {
          return (_ = ctrl);
        },
        async pull() {
          _.enqueue(new Uint8Array([97]));
          await new Promise((r) => setTimeout(r, 250));
        },
        cancel(reason) {
          console.log('user aborted the download', reason);
        },
      });

      const headers = {
        'content-disposition': 'attachment; filename="filename.txt"',
      };
      var res = new Response(rs, { headers });
      // rs.cancel(0);
      evt.respondWith(res);
      // rs.cancel(0);
      setTimeout(() => {
        // rs.cancel(0);
        console.log(rs, res, _);
      }, 3000);
    } catch (e) {
      console.error(e);
    }
  }
};

console.log('que?');

sw.js:11 Uncaught (in promise) TypeError: Failed to execute 'cancel' on 'ReadableStream': Cannot cancel a locked stream
at sw.js:11

sw.js:30 Uncaught (in promise) DOMException: Failed to execute 'fetch' on 'WorkerGlobalScope': The user aborted a request.
    at onfetch (https://run.plnkr.co/preview/ckh2vkij700082z6y9i3qqrz3/sw.js:30:21)

The FetchEvent for "https://run.plnkr.co/preview/ckh2vkij700082z6y9i3qqrz3/ping" resulted in a network error response: the promise was rejected.
Promise.then (async)
onfetch @ VM4 sw.js:27
VM4 sw.js:1 Uncaught (in promise) DOMException: The user aborted a request.

user aborted the download 0
run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:35 TypeError: Failed to construct 'Response': Response body object should not be disturbed or locked
    at onfetch (run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:27)
onfetch @ run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/sw.js:35
VM2582 script.js:6 GET https://run.plnkr.co/preview/ckh2y3vu2000a2z6ym2r13z5o/ping 404
(anonymous) @ VM2582 script.js:6
setTimeout (async)

user aborted the download 0
The FetchEvent for "https://run.plnkr.co/preview/ckh2y9o8r000d2z6yucmc7bz4/ping" resulted in a network error response: a Response whose "bodyUsed" is "true" cannot be used to respond to a request.
Promise.then (async)
onfetch @ sw.js:28
script.js:6 GET https://run.plnkr.co/preview/ckh2y9o8r000d2z6yucmc7bz4/ping net::ERR_FAILED
(anonymous) @ script.js:6
setTimeout (async)
(anonymous) @ script.js:3
Promise.then (async)
(anonymous) @ script.js:2
TypeError: Failed to fetch
(anonymous) @ VM2582 script.js:3


sw.js:31 Uncaught (in promise) TypeError: Failed to execute 'cancel' on 'ReadableStream': Cannot cancel a locked stream
    at sw.js:31
Promise.then (async)
(anonymous) @ VM2582 script.js:2
VM2582 script.js:14 <link rel="stylesheet" href="//fonts.googleapis.com/css?family=Roboto:300,300italic,700,700italic">
<link rel="stylesheet" href="//unpkg.com/normalize.css/normalize.css">
<link rel="stylesheet" href="//unpkg.com/milligram/dist/milligram.min.css">
<h1>Oh dear, something didn't go quite right</h1>
<h2>Not Found</h2>

On the client side, AbortController can be used (see the logged messages above):

var controller, signal;
navigator.serviceWorker.register('sw.js', {scope: './'}).then(reg => {
  setTimeout(() => {
    controller = new AbortController();
    signal = controller.signal;
    fetch('./ping', {signal})
    .then(r => {
      var reader = r.body.getReader();
      reader.read().then(function process({value, done}) {
          if (done) {
            console.log(done);
            return reader.closed;
          }
          console.log(new TextDecoder().decode(value));
          return reader.read().then(process)
      })
    })
    .catch(console.error)

    document.querySelector('h1')
    .onclick = e => controller.abort();

  }, 2000)
})

See also the code at "Is it possible to write to WebAssembly.Memory in PHP that is exported to and read in JavaScript in parallel?" at background.js in an extension, where we stream raw PCM audio (without a definitive end) via fetch() from PHP passthru() and stop the stream using abort(). This can be achieved at Chromium using QuicTransport https://github.com/guest271314/quictransport without Native Messaging.

const id = 'native_messaging_stream';
let externalPort, controller, signal;

chrome.runtime.onConnectExternal.addListener(port => {
  console.log(port);
  externalPort = port;
  externalPort.onMessage.addListener(message => {
    if (message === 'start') {
      chrome.runtime.sendNativeMessage(id, {}, async _ => {
        console.log(_);
        if (chrome.runtime.lastError) {
          console.warn(chrome.runtime.lastError.message);
        }
        controller = new AbortController();
        signal = controller.signal;
        // wait until bash script completes, server starts
        for await (const _ of (async function* stream() {
          while (true) {
            try {
              if ((await fetch('http://localhost:8000', { method: 'HEAD' })).ok)
                break;
            } catch (e) {
              console.warn(e.message);
              yield;
            }
          }
        })());
        try {
          const response = await fetch('http://localhost:8000?start=true', {
            cache: 'no-store',
            mode: 'cors',
            method: 'get',
            signal
          });
          console.log(...response.headers);
          const readable = response.body;
          readable
            .pipeTo(
              new WritableStream({
                write: async value => {
                  // value is a Uint8Array, postMessage() here only supports cloning, not transfer
                  externalPort.postMessage(JSON.stringify(value));
                },
              })
            )
            .catch(err => {
              console.warn(err);
              externalPort.postMessage('done');
            });
        } catch (err) {
          console.error(err);
        }
      });
    }
    if (message === 'stop') {
      controller.abort();
      chrome.runtime.sendNativeMessage(id, {}, _ => {
        if (chrome.runtime.lastError) {
          console.warn(chrome.runtime.lastError.message);
        }
        console.log('everything should be done');
      });
    }
  });
});

ServiceWorker does not appear to be well-suited for the task. We can stream the file without a ServiceWorker using fetch(); see this answer at "How to solve Uncaught RangeError when download large size json", where I successfully streamed and downloaded a 189 MB file, and, as you indicated, use File System Access, something like

(async () => {
  const dir = await showDirectoryPicker();
  const status = await dir.requestPermission({ mode: 'readwrite' });
  const url = 'https://fetch-stream-audio.anthum.com/72kbps/opus/house--64kbs.opus?cacheBust=1';
  // the current File System Access spec names this getFileHandle(), not getFile()
  const handle = await dir.getFileHandle('house--64kbs.opus', { create: true });
  const wfs = await handle.createWritable();
  const response = await fetch(url);
  const body = response.body; // a ReadableStream, not a promise
  console.log('starting write');
  await body.pipeTo(wfs, { preventCancel: true });
  const file = await (await dir.getFileHandle('house--64kbs.opus')).getFile();
  console.log(file);
})();

(BTW, created several screenshot workarounds, two of which are published at the linked repository https://gist.github.com/guest271314/13739f7b0343d6403058c3dbca4f5580)

@jimmywarting
Owner

> cancel is not an event.

I didn't know what to call it; it is kind of like an event that happens when it gets aborted by the user... but whatever.

> ServiceWorker does not appear to be well-suited for the task. We can stream the file without ServiceWorker using fetch() see this answer at How to solve Uncaught RangeError when download large size json where successfully streamed and downloaded a 189MB file, and, as you indicated, use File System Access, something like

I know a service worker isn't the best solution, but it's currently the only/best client-side solution at the moment, until native file system access becomes more widely adopted in more browsers without an experimental flag. It too comes with its drawbacks:

  • it lacks support for a suggested filename, so you are required to ask for a directory and write the file yourself, or let the user choose the name.
  • it isn't associated with any native browser UI element where you can see the progress and cancel the download.

I'm using a service worker to mimic a normal download that occurs when downloading something from a server, so that I don't have to build a Blob in memory and later download the whole file at once, which is a wasteful use of memory when downloading large files. Plus, there are better ways to solve that 4-year-old issue: he could just do response.blob() and hope that the browser offloads large blobs to disk instead (see Chrome's Blob Storage System Design),
or, if he really needed a JSON response, call response.json() - it seems to be much more performant to just do
new Response(str).json().then(...) instead of JSON.parse(str)
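The Response-based parsing suggested above, as a quick sketch (the string is illustrative; the comment above reports it as more performant, but measure for your own workload):

```javascript
// Sketch of parsing JSON via a Response wrapper instead of JSON.parse.
const str = '{"citylots": 3}';

const viaResponse = await new Response(str).json();
// same result as JSON.parse(str), parsed through the fetch body machinery
```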

And, as always, use the server to serve the download if the file comes from the cloud, if you can,
or download the file directly, without fetch and Blob:

var a = document.createElement("a")
a.download = "citylots.json"
// mostly only work for same origin
a.href = "/citylots.json"
document.body.appendChild(a)
a.click()

@guest271314

For this specific issue, you can rearrange the placement of cancel(reason) to get the reason 0 at the cancel(reason) {} method.

With AbortController on the client side, a MessagePort can be utilized to send the message to the ServiceWorker to cancel the stream - before the stream is locked. I'm not seeing releaseLock() defined at Chromium 88. Either way the data is read into memory. If you fetch() on the client side you can precisely count progress and abort the request - then just download the file. Using Native Messaging, which is available at Chromium and Firefox, you can write the file or files directly to disk at a shell, or a combination of browser and shell.

I'm sure we could build a custom HTML element and implement progress events, either by estimation https://stackoverflow.com/a/41215449 or counting every byte How to read and echo file size of uploaded file being written at server in real time without blocking at both server and client?.

@guest271314

The OS froze at plnkr during tests, registering and un-registering ServiceWorkers. We should be able to achieve something similar to what is described here. Will continue testing.

@guest271314

The plnkr throws error at Nightly 84

Failed to register/update a ServiceWorker for scope ‘https://run.plnkr.co/preview/ckh4cy0qq00071w6pnxnhpr3v/’: 
Storage access is restricted in this context due to user settings or private browsing mode. 
script.js:1:24
Uncaught (in promise) DOMException: The operation is insecure.

Some observations running the below code https://plnkr.co/edit/P2op0uo5YBA5eEEm?open=lib%2Fscript.js at Chromium 88, which might not be the exact requirement, though is a start and extensible.

  • AbortController does not cancel the ReadableStream passed to Response
  • Download Cancel UI does not communicate messages to ServiceWorker or vice versa
  • Once Response is passed to respondWith() the ReadableStream is locked - AFAICT there does not appear to be any way to unlock the stream for the purpose of calling cancel() without an error being thrown

Utilizing client-side code we can get progress of bytes enqueued, post messages containing download status to the main thread using MessageChannel or BroadcastChannel, and call ReadableStreamDefaultController.close() and AbortController.abort() when the appropriate message is received from the client document.

index.html

<!DOCTYPE html>

<html>
  <head>
    <script src="lib/script.js"></script>
  </head>

  <body>
    <button id="start">Start download</button>

    <button id="abort">Abort download</button>
  </body>
</html>

lib/script.js

const unregisterServiceWorkers = async (_) => {
  const registrations = await navigator.serviceWorker.getRegistrations();
  for (const registration of registrations) {
    console.log(registration);
    try {
      await registration.unregister();
    } catch (e) {
      throw e;
    }
  }
  return `ServiceWorker's unregistered`;
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  console.log(e.data);
  if (e.data.aborted) {
    unregisterServiceWorkers()
      .then((_) => {
        console.log(_);
        bc.close();
      })
      .catch(console.error);
  }
};

onload = (_) => {
  document.querySelector('#abort').onclick = (_) =>
    bc.postMessage({ abort: true });

  document.querySelector('#start').onclick = (_) => {
    const iframe = document.createElement('iframe');
    iframe.src = './ping';
    document.body.append(iframe);
  };
};

navigator.serviceWorker.register('sw.js', { scope: './' }).then((reg) => {});

sw.js

self.addEventListener('activate', (event) => {
  event.waitUntil(clients.claim());
});

let rs;

let bytes = 0;

let n = 0;

let abort = false;

let aborted = false;

const controller = new AbortController();

const signal = controller.signal;

signal.onabort = (e) => {
  try {
    console.log(e);
    console.log(source, controller, rs);
    ({ aborted } = e.currentTarget);
    bc.postMessage({ aborted });
  } catch (e) {
    console.error(e);
  }
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  if (e.data.abort) {
    abort = true;
  }
};

const source = {
  controller: new AbortController(),
  start: async (ctrl) => {
    console.log('starting download');
    return;
  },
  pull: async (ctrl) => {
    ++n;
    if (abort) {
      ctrl.close();
      controller.abort();
    } else {
      const data = new TextEncoder().encode(n + '\n');
      bytes += data.buffer.byteLength;
      ctrl.enqueue(data);
      bc.postMessage({ bytes, aborted });
      await new Promise((r) => setTimeout(r, 50));
    }
  },
  cancel: (reason) => {
    console.log('user aborted the download', reason);
  },
};

onfetch = (evt) => {
  console.log(evt.request);

  if (evt.request.url.endsWith('ping')) {
    rs = new ReadableStream(source);
    const headers = {
      'content-disposition': 'attachment; filename="filename.txt"',
    };

    const res = new Response(rs, { headers, signal });
    console.log(controller, res);

    evt.respondWith(res);
  }
};

console.log('que?');

@guest271314

guest271314 commented Nov 8, 2020

Once the ReadableStream is passed to Response, the stream is locked and AFAICT cannot be cancelled; thus await rs.cancel(reason) will throw an error and cancel(reason) { console.log(reason) } will not be executed.

Response actually does not expect a signal property per https://bugs.chromium.org/p/chromium/issues/detail?id=823697#c14

> You may have included a signal attribute in your Response constructor options dictionary, but it's not read. The spec only supports adding a signal to the Request.
>
> Also, it's not clear what the signal on a Response would accomplish. If you want to abort the Response body you can just error the body stream, no?

pipeTo() and pipeThrough() do expect optional signal properties https://streams.spec.whatwg.org/#ref-for-rs-pipe-to%E2%91%A1.
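A minimal sketch of that `signal` option on `pipeTo()` (illustrative streams, not the thread's service-worker pipeline): the pipe can be torn down from outside the streams themselves.

```javascript
// pipeTo() accepts an AbortSignal, unlike the Response constructor, so a
// download pipeline can be aborted externally.
const aborter = new AbortController();

const rs = new ReadableStream({
  pull(ctrl) {
    ctrl.enqueue('data'); // endless producer
  },
});

const ws = new WritableStream({
  write(chunk) {
    // a real sink would write the chunk to disk here
  },
});

const piping = rs.pipeTo(ws, { signal: aborter.signal });
aborter.abort(); // rejects the pipe promise with an AbortError
```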

Firefox does not support pipeTo() and pipeThrough(). We need to adjust the code to branch at a condition, e.g., 'pipeTo' in readable, then utilize only getReader() and read() instead of AbortController with WritableStream, which is still behind a flag at Nightly 84.

We tee() a ReadableStream to read bytes and wait for an abort signal or message to cancel the download, by cancelling or closing all streams, initial and derived tee'd pairs. If the paired stream is not aborted, the unlocked pair is passed to Response.

Tested several hundred runs at Chromium 88 to derive the current working example, which still requires independent verification. The main issues encountered when testing were the ServiceWorker "life-cycle", or ServiceWorkers that remain after page reload and re-run code that has changed; determining exactly when all service workers are unregistered; storage messages; and inconsistent behaviour between reloads of the tab.

Running the code at Firefox or Nightly at localhost logs exception

Failed to get service worker registration(s): 
Storage access is restricted in this context 
due to user settings or private browsing mode. script.js:2:54
Uncaught (in promise) DOMException: The operation is insecure. script.js:2

Have not yet successfully run the code at Mozilla browsers. The working Chromium version provides a template of how the code can work at Firefox, given similar implementations and support.

From what I can gather from the entirety of the issue, this is the resulting interpretation of a potential solution to handle both aborting the download and notifying the client of the state of the download. Kindly verify that the code produces the expected output and handles the use cases described, based on my own interpretation of the issue, above.

index.html

<!DOCTYPE html>

<html>
  <head>
    <script src="lib/script.js"></script>
  </head>

  <body>
    <button id="start">Start download</button>

    <button id="abort">Abort download</button>
  </body>
</html>

lib/script.js

const unregisterServiceWorkers = async (_) => {
  const registrations = await navigator.serviceWorker.getRegistrations();
  for (const registration of registrations) {
    console.log(registration);
    try {
      await registration.unregister();
    } catch (e) {
      throw e;
    }
  }
  return `ServiceWorker's unregistered`;
};

const bc = new BroadcastChannel('downloads');

bc.onmessage = (e) => {
  console.log(e.data);
};

onload = async (_) => {
  console.log(await unregisterServiceWorkers());

  document.querySelector('#abort').onclick = (_) =>
    bc.postMessage({ abort: true });

  document.querySelector('#start').onclick = async (_) => {
    console.log(await unregisterServiceWorkers());
    console.log(
      await navigator.serviceWorker.register('sw.js', { scope: './' })
    );
    let node = document.querySelector('iframe');
    if (node) document.body.removeChild(node);
    const iframe = document.createElement('iframe');
    iframe.onload = async (e) => {
      console.log(e);
    };
    document.body.append(iframe);
    iframe.src = './ping';
  };
};

sw.js

// https://stackoverflow.com/a/34046299
self.addEventListener('install', (event) => {
  // Bypass the waiting lifecycle stage,
  // just in case there's an older version of this SW registration.
  event.waitUntil(self.skipWaiting());
});

self.addEventListener('activate', (event) => {
  // Take control of all pages under this SW's scope immediately,
  // instead of waiting for reload/navigation.
  event.waitUntil(self.clients.claim());
});

self.addEventListener('fetch', (event) => {
  console.log(event.request);

  if (event.request.url.endsWith('ping')) {
    var encoder = new TextEncoder();

    var bytes = 0;

    var n = 0;

    var abort = false;

    let aborted = false;

    var res;

    const bc = new BroadcastChannel('downloads');

    bc.onmessage = (e) => {
      console.log(e.data);
      if (e.data.abort) {
        abort = true;
      }
    };

    var controller = new AbortController();
    var signal = controller.signal;
    console.log(controller, signal);
    signal.onabort = (e) => {
      console.log(
        `Event type:${e.type}\nEvent target:${e.target.constructor.name}`
      );
    };
    var readable = new ReadableStream({
      async pull(c) {
        if (n === 10 && !abort) {
          c.close();
          return;
        }
        const data = encoder.encode(n + '\n');
        bytes += data.buffer.byteLength;
        c.enqueue(data);
        bc.postMessage({ bytes, aborted });
        await new Promise((r) => setTimeout(r, 1000));
        ++n;
      },
      cancel(reason) {
        console.log(
          `readable cancel(reason):${reason.join(
            '\n'
          )}\nreadable ReadableStream.locked:${readable.locked}\na locked:${
            a.locked
          }\nb.locked:${b.locked}`
        );
      },
    });

    var [a, b] = readable.tee();
    console.log({ readable, a, b });

    async function cancelable() {
      if ('pipeTo' in b) {
        var writeable = new WritableStream({
          async write(v, c) {
            console.log(v);
            if (abort) {
              controller.abort();
              try {
                console.log(await a.cancel('Download aborted!'));
              } catch (e) {
                console.error(e);
              }
            }
          },
          abort(reason) {
            console.log(
              `abort(reason):${reason}\nWritableStream.locked:${writeable.locked}`
            );
          },
        });
        return b
          .pipeTo(writeable, { preventCancel: false, signal })
          .catch((e) => {
            console.log(
              `catch(e):${e}\nReadableStream.locked:${readable.locked}\nWritableStream.locked:${writeable.locked}`
            );
            bc.postMessage({ aborted: true });
            return 'Download aborted.';
          });
      } else {
        var reader = b.getReader();
        return reader.read().then(async function process({ value, done }) {
          if (done) {
            if (abort) {
              reader.releaseLock();
              reader.cancel();
              console.log(await a.cancel('Download aborted!'));
              bc.postMessage({ aborted: true });
            }
            return reader.closed.then((_) => 'Download aborted.');
          }

          return reader.read().then(process).catch(console.error);
        });
      }
    }

    var downloadable = cancelable().then((result) => {
      console.log({ result });
      const headers = {
        'content-disposition': 'attachment; filename="filename.txt"',
      };
      try {
        bc.postMessage({ done: true });
        bc.close();
        res = new Response(a, { headers, cache: 'no-store' });
        console.log(res);
        return res;
      } catch (e) {
        console.error(e);
      } finally {
        console.assert(res, { res });
      }
    });

    event.respondWith(downloadable);
  }
});

console.log('que?');

Updated plnkr https://plnkr.co/edit/P2op0uo5YBA5eEEm

@jimmywarting
Owner

I have another, older (original) idea in mind that didn't work earlier. Blink (v85) has recently gotten support for streaming upload. Example:

await fetch('https://httpbin.org/post', {
  method: 'POST',
  body: new ReadableStream({
    start(ctrl) {
      ctrl.enqueue(new Uint8Array([97])) // a
      ctrl.close()
    }
  })
}).then(r => r.json()).then(j=>j.data) // "a"

None of the other browsers support it yet, but it can simplify stuff quite a bit.
You can just echo back everything and pipe the download iframe and the ajax request to each other:

// oversimplified (you need two fetch events for this to work)
// canceling the ajax with a AbortSignal or interrupt the readableStream from the main thread can abort the download (aka: iframeEvent).
// canceling the downloading iframe body (from browser UI) can abort the ajax (aka: ajaxEvent)

iframeEvent.respondWith(new Response(ajaxEvent.request.body, { headers: ajaxEvent.request.headers }))
ajaxEvent.respondWith(new Response(iframeEvent.request.body))
  • You don't need any MessageChannel to transfer chunks (means less overhead)
  • It's more tightly coupled, the way a writable stream pipeline (with its bucket) should be. StreamSaver currently lacks any backpressure algorithm: writable.write(chunk) just resolves directly
  • you don't need to ping the service worker to keep it alive since it doesn't have to do any more work.
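One reason this pairing is attractive is that cancellation propagates through the stream itself. A standalone sketch (runnable in Node 18+ or a modern browser console; illustrative only, not StreamSaver code) showing that cancelling the body of a Response built from a ReadableStream invokes the source's cancel() hook, which is exactly what aborting one side of the echo pair does to the other:

```javascript
// Cancelling the consumer of a Response whose body is a ReadableStream
// reaches the underlying source's cancel() hook. In the echo setup above,
// that source is the other fetch event's request body, so aborting the
// download cancels the ajax upload (and vice versa).
let cancelReason = null

const body = new ReadableStream({
  pull (ctrl) { ctrl.enqueue(new Uint8Array([97])) }, // endless producer
  cancel (reason) { cancelReason = reason }           // fires on abort
})

const res = new Response(body)

// Simulate the browser cancelling the download:
res.body.cancel('user aborted').then(() => {
  console.log(cancelReason) // 'user aborted'
})
```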

@Spongman
Copy link
Author

I do not wait on any individual or institution to solve problems

i think at this point it may be worth moving this discussion to another issue, as whatever problem you're solving here isn't the one that was originally reported.

@guest271314
Copy link

I suggest breaking the issue into parts, as I have done.

The entire problem statement should be in the OP of each and every issue or bug filed. Instead of appending additional core requirements in subsequent posts.

Yes, the problem is solved, here. At Chromium you have the ability to transfer the stream to the service worker, for the sole purpose of initiating a download via iframe. It is not necessary to commence the stream in the service worker.

Again, your Chromium issue is valid, but this repository cannot solve that problem or fix that bug, either the chrome://downloads aspect or cancel(){} being fired in the service worker. That is the purpose of the Chromium bug. My sole intent is to Fix WontFix, to provide workarounds until implementers actually fix the bug. If that is not viable for you, right now, in order to implement your own application, then I suggest the proper venue is the Chromium bug - as again, whatever fixes for this issue, in this repository, AFAIK, are not binding on Chromium authors to implement.

@JounQin
Copy link

JounQin commented Sep 20, 2021

I met a similar situation today.

And I tried to change streamsaver:

diff --git a/node_modules/streamsaver/StreamSaver.js b/node_modules/streamsaver/StreamSaver.js
index 018ddc3..acc9288 100644
--- a/node_modules/streamsaver/StreamSaver.js
+++ b/node_modules/streamsaver/StreamSaver.js
@@ -154,6 +154,9 @@
     } else {
       opts = options || {}
     }
+
+	let stream
+	
     if (!useBlobFallback) {
       loadTransporter()
 
@@ -210,7 +213,23 @@
         channel.port1.postMessage({ readableStream }, [ readableStream ])
       }
 
+	  let aborted
+
       channel.port1.onmessage = evt => {
+		if (aborted) {
+			return
+		}
+
+		if (evt.data.aborted) {
+			channel.port1.onmessage = null
+			aborted = true
+			if (stream._writer) {
+				stream._writer.abort()
+				stream._writer = undefined
+			}
+			return stream.abort()
+		}
+
         // Service worker sent us a link that we should open.
         if (evt.data.download) {
           // Special treatment for popup...
@@ -249,7 +268,7 @@
 
     let chunks = []
 
-    return (!useBlobFallback && ts && ts.writable) || new streamSaver.WritableStream({
+    stream = (!useBlobFallback && ts && ts.writable) || new streamSaver.WritableStream({
       write (chunk) {
         if (!(chunk instanceof Uint8Array)) {
           throw new TypeError('Can only wirte Uint8Arrays')
@@ -302,6 +321,8 @@
         channel = null
       }
     }, opts.writableStrategy)
+
+	return stream
   }
 
   return streamSaver

And add the following into sw.js:

cancel() {
  port.postMessage({ aborted: true })
  console.log('user aborted')
}

It seems to work on Firefox in most cases for me, while there are still two problems:

  1. When the download dialog is not ready, cancel will sometimes not be fired at all

(screenshot)

  2. I want to cancel the stream request by res.body.cancel(), but an error is thrown: TypeError: 'cancel' can't be called on a locked stream. A try/catch will just work (the stream requests in the main thread and the iframe are aborted correctly; the file is about 600MB).

(screenshot)
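A minimal reproduction of that locked-stream error, and the usual workaround: once getReader() has locked the stream, cancellation must go through reader.cancel() (or the lock must be released first). This is an illustrative sketch for Node 18+ or a modern browser, not the StreamSaver code itself:

```javascript
// Demonstrates: stream.cancel() rejects with a TypeError while a reader
// holds the lock; reader.cancel() is the correct way to abort in that case.
(async () => {
  const stream = new ReadableStream({
    pull (ctrl) { ctrl.enqueue(new Uint8Array(1024)) }
  })
  const reader = stream.getReader() // locks the stream

  try {
    await stream.cancel() // rejects: 'cancel' can't be called on a locked stream
  } catch (e) {
    console.log(e.constructor.name) // TypeError
  }

  await reader.cancel('aborted') // works: cancel via the lock holder
})()
```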

@JounQin
Copy link

JounQin commented Sep 20, 2021

My related source codes:

import * as streamSaver from 'streamsaver'
import { WritableStream } from 'web-streams-polyfill/ponyfill'

export const pipeStream = async <T = unknown>(
  reader: ReadableStreamDefaultReader<T>,
  writer: WritableStreamDefaultWriter<T>,
  signal?: AbortSignal,
) => {
  let chunkResult: ReadableStreamDefaultReadResult<T>

  let aborted: boolean | undefined

  while (!signal?.aborted && !(chunkResult = await reader.read()).done) {
    try {
      await writer.write(chunkResult.value)
    } catch (err) {
      if (signal?.aborted) {
        break
      }

      if (!err) {
        aborted = true
        break
      }

      throw err
    }
  }

  if (signal?.aborted || aborted) {
    await Promise.all([reader.cancel(), writer.abort()])
    throw new DOMException('aborted', 'AbortError')
  }

  return writer.close()
}

export const downloadFile = async <T = unknown>(
  readStream: ReadableStream<T>,
  fileName: string,
  signal?: AbortSignal,
) => {
  if (
    (__DEV__ || location.protocol === 'https:') &&
    window.showSaveFilePicker
  ) {
    const handle = await window.showSaveFilePicker({
      suggestedName: fileName,
    })
    return pipeStream(
      readStream.getReader(),
      await handle.createWritable<T>(),
      signal,
    )
  }

  if (streamSaver.mitm !== '/streamsaver/mitm.html') {
    Object.assign(streamSaver, {
      // eslint-disable-next-line @typescript-eslint/no-unnecessary-condition
      WritableStream: streamSaver.WritableStream || WritableStream,
      mitm: '/streamsaver/mitm.html',
    })
  }

  const writeStream = streamSaver.createWriteStream(fileName)

  // Safari
  if (typeof readStream.pipeTo === 'function') {
    return readStream.pipeTo(writeStream, { signal })
  }

  // Firefox
  return pipeStream(readStream.getReader(), writeStream.getWriter(), signal)
}

@gwdp
Copy link
Contributor

gwdp commented Jan 14, 2022

Chrome issue (#638494) got merged a few days ago into upstream (https://chromium-review.googlesource.com/c/chromium/src/+/3347484).
Still, long road to stable/end-users, but it's something.

Getting a different behaviour on Canary: write(xx) is throwing undefined; digging into it to see if I can squeeze something out of it, although no console.log('user aborted') is being fired :/

@gwdp
Copy link
Contributor

gwdp commented Jan 14, 2022

Okay, below are my findings:

For Chrome, Canary version 99.0.4828.0 I was able to handle user-level cancellation by simply checking if write would throw (not the best handling, but it does work).
Example:

const streamSaver = StreamSaver.createWriteStream(`abc.zip`);
this.fileStream = streamSaver.getWriter();
readable.on('data', (d) => {
  if (this.cancelled) return;
  this.fileStream.write(d)
    .catch((e) => { this.cancelled = true; this.abort(); });
});

However, testing on Firefox, the service worker does print user aborted, but nothing was implemented there.
To replicate the prospective Chrome behaviour on Firefox, and possibly other browsers, I had to send the abort request to the main thread and then abort there, as a manual abort would.

@jimmywarting do you believe this commit could be merged or there are any other cases I'm not handling properly?
gwdp@f9e375e

@jimmywarting
Copy link
Owner

jimmywarting commented Jan 14, 2022

@gwdp hmm, yea maybe.

However I would like to rewrite the whole main-thread <-> service-worker communication.
When I wrote my native-file-system-adapter, I took what I learned from StreamSaver and fixed the problems that I had in StreamSaver.

When readable streams are not transferable...

...Then we use a MessageChannel to send each written chunk off to the service worker, where I have created a ReadableStream in order to save it. The current problem is that when we do this.fileStream.write(d), the chunk is immediately sent off to the service worker via postMessage and fileStream.write(d) resolves right away. This causes a problem because you do not know when, or if, it has been written, so it can't handle the backpressure, and you will start to have a memory problem if you write data faster than it can be written to the disk

So rather than pushing data from the main thread to the service worker, I instead built a ReadableStream in the service worker that pulls data, asking the main thread for more. That way you know for sure that data has been written to the disk: fileStream.write(d) only resolves once the service worker asks for more data.

The source is inspired by @MattiasBuelens remote-web-streams
I would like to use this 👆 it is a tighter glue between two worker threads and much closer to how a transferable stream is supposed to work, communicating back and forth with backpressure

edit: i wish he would have named it transferable-stream-shim or polyfill so it could be easier to google :P
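To make the pull-based idea concrete, here is a self-contained, illustrative sketch (not the remote-web-streams source; the name waiting is made up, and both "threads" run in one file for brevity). It runs in Node 18+, where MessageChannel and ReadableStream are globals, or in a browser:

```javascript
// Pull-based transfer over a MessageChannel: the consumer side builds a
// ReadableStream whose pull() requests exactly one chunk from the producer,
// so the producer can never run ahead of the sink (backpressure for free).
const { port1, port2 } = new MessageChannel()

// Producer ("main thread"): answers each pull request with one chunk.
const chunks = [new Uint8Array([1]), new Uint8Array([2]), new Uint8Array([3])]
port1.addEventListener('message', () => {
  const chunk = chunks.shift()
  port1.postMessage(chunk ? { chunk } : { done: true })
})
port1.start()

// Consumer ("service worker"): resolves the pending pull when data arrives.
const waiting = []
port2.addEventListener('message', (evt) => waiting.shift()(evt.data))
port2.start()

const readable = new ReadableStream({
  async pull (controller) {
    port2.postMessage({ pull: true }) // ask for one more chunk
    const data = await new Promise((resolve) => waiting.push(resolve))
    if (data.done) controller.close()
    else controller.enqueue(data.chunk)
  }
})

// Drain the stream; each read() triggers one pull round-trip.
const received = []
;(async () => {
  const reader = readable.getReader()
  for (let r; !(r = await reader.read()).done;) received.push(r.value[0])
  console.log(received) // [ 1, 2, 3 ]
  port1.close()
  port2.close()
})()
```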

@gwdp
Copy link
Contributor

gwdp commented Jan 14, 2022

Wow 🤯. If I understood properly, that would involve a major refactor of the code (a simplification as well); sounds like the next release of StreamSaver? 🤓

I have been using StreamSaver for a while now and I found about native-file-system-adapter only yesterday; After a good read on the code and its usage, I still believe stream saver has its own growing use case that is not entirely replaceable by native fs adapter.
In my case, I have been using it for compressing stuff before sending it to the fs; however, on my to-do list is customizing the worker to do the compression there and not in the client, since the current setup causes a double backpressure problem when bandwidth spikes and disk/CPU are busy.
If your proposal is to use https://github.com/MattiasBuelens/remote-web-streams to refactor the communication, I might be able to draft something out in the upcoming week since I need to do something about this compression problem anyways..

For the download cancelled event issue I believe this will need to be handled anyways; Opening a PR for that so other folks can have this fixed in the current version :)

@MattiasBuelens
Copy link

edit: i wish he would have named it transferable-stream-shim or polyfill so it could be easier to google :P

I know, I know. 😛 Granted, I made that library before transferable streams were defined in the spec, so I didn't know what the spec would end up calling it.

Also, I'm hesitant to call it a "polyfill". For it to be a proper polyfill, it would need to patch postMessage() and the message event, and then transform the message object to find and replace any streams that also appear in the transferList. That means partially re-implementing StructuredSerializeInternal, so you can replace a ReadableStream in a message like { deeply: { nested: { stream: readableStream } } }. And I really couldn't be bothered with that. 😅

@guest271314
Copy link

For Chrome, Canary version 99.0.4828.0 I was able to handle user-level cancellation by simply checking if write would throw (not the best of the handlings but it does work).

One way to check when close() is called on writer, for example, in a different thread

try {
  if (writable.locked) {
    await writer.ready;
    await writer.write(value);
  }
} catch (e) {
  console.warn(e.message);
}

if necessary, handle "cannot write to a closing writable stream" error(s).

@jimmywarting

This comment has been minimized.

@MattiasBuelens

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@MattiasBuelens

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@guest271314
Copy link

@gwdp For completeness: when closing the writable side in the pattern, release the lock as well, so the if condition holds true at that instance

    await writer.close();
    await writer.closed;
    console.log('Writer closed.');
    writer.releaseLock();

If a handler is attached where write() is called multiple errors can be handled in the interim between handler being dispatched multiple times in the same span of time, though will still, in my case, enqueue all necessary chunks.

@MattiasBuelens

This comment has been minimized.

@guest271314

This comment has been minimized.

@guest271314

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@guest271314

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@guest271314

This comment has been minimized.

@jimmywarting

This comment has been minimized.

@guest271314
Copy link

Native Messaging isn't really a good option as it requires installing a extension and using native application and don't work in all browsers

Native Messaging works in Firefox and Chrome.

The user installs the extension locally. On Chromium simply select "Developer mode" at chrome://extensions then click "Load unpacked". No external resources or requests or Chrome Web Store are required.

Then you can use the same loading of <iframe> approach with a local file listed in "web_accessible_resources" and postMessage() to the parent, without using ServiceWorker (or CDN) at all. If you do want to use ServiceWorker for some reason, e.g., for user to click on the UI and commence download, you can still do that.

That allows you to use ServiceWorker on any origin, securely, without necessarily being on an HTTPS page online.

If you are talking about insecure pages, and requesting CDN's, all of those concerns go away when you are only running code already on your own machine.

Alternatively, use the Native Messaging host to download directly to the users' local file system.

I already filed a PR for this capability. You appeared to think unpacking a local extension involves external code or requests, it does not, and is at least as secure as the MITM code and requesting CDN's you are already using.

@jimmywarting
Copy link
Owner

The user installs the extension locally. On Chromium simply select "Developer mode" at chrome://extensions then click "Load unpacked". No external resources or requests or Chrome Web Store are required.

No visitor on your web site is ever going to do this... and you shouldn't force the user to do this either

@guest271314
Copy link

I think you misunderstand what I am conveying. I am not talking about visiting websites, or StreamSaver used by websites. I am talking about the end user who wants to use your StreamSaver code on their own machine. Native Messaging is useful for me and other users to, for example, stream speech synthesis engine output from a local speech synthesis engine as a ReadableStream piped to MediaStreamTrackGenerator, or stream live system audio output to the user on any origin. Chrome prevents both of these: it does not output the audio of speechSynthesis.speak() on the tab, and it does not capture monitor devices on *nix OS's.

From my perspective the StreamSaver concept is based on streaming downloads of files in any browser. So you can basically do the opposite of what I do here https://github.com/guest271314/captureSystemAudio/tree/master/native_messaging/capture_system_audio (using postMessage(transfer, [transfer]) for Firefox)

onload = () => {
  const { readable, writable } = new TransformStream();
  const writer = writable.getWriter();
  const id = 'capture_system_audio';
  let port = chrome.runtime.connectNative(id);
  let handleMessage = async (value) => {
    try {
      if (writable.locked) {
        await writer.ready;
        await writer.write(new Uint8Array(JSON.parse(value)));
      }
    } catch (e) {
      // handle cannot write to a closing stream 
      console.warn(e.message);
    }
  };
  port.onDisconnect.addListener(async (e) => {
    console.warn(e.message);
  });
  port.onMessage.addListener(handleMessage);
  onmessage = async (e) => {
    const { type, message } = e.data;
    if (type === 'start') {
      port.postMessage({
        message,
      });
      parent.postMessage(readable, name, [readable]);
    }
    if (type === 'stop') {
      try {
        await writer.close();
        await writer.closed;
        console.log('Writer closed.');
        writer.releaseLock();
        port.onMessage.removeListener(handleMessage);
        port.disconnect(id);
        port = null;
        parent.postMessage(0, name);
        onmessage = null;
        await chrome.storage.local.clear();
      } catch (err) {
        console.warn(err.message);
      }
    }
  };
  parent.postMessage(1, name);
};
  • stream from the browser to a local file, with the option to save or delete the file; stream to a native messaging host locally on any origin while browsing the web, or offline. If you want to cancel the download you can do so, both in the browser and at the built-in level.

Native Messaging allows you to deploy your concept locally, to your precise specification, without reliance on any external code - for end users.

Perhaps I misunderstand, and your code is written primarily for web site developers exclusively

StreamSaver.js is the solution to saving streams on the client-side. It is perfect for webapps

not individual users that want to run your gear locally on any site they happen to be visiting.

Again, good luck.


10 participants