This is the wrong level at which to solve this problem. We shouldn't go around duplicating every API on the platform that produces ArrayBuffers to create a second parallel version that creates a SharedArrayBuffer. If nothing else, the cost of keeping all these APIs in sync is very high. Instead, what you need is a feature (probably an ECMAScript-level feature) for creating a SharedArrayBuffer from a non-shared ArrayBuffer, without memory copies. (Presumably this would detach the original ArrayBuffer.)
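For context, today the only way to get non-shared ArrayBuffer contents into a SharedArrayBuffer is an explicit copy. A minimal sketch of that workaround (`toShared` is a hypothetical helper name, not a platform API) shows exactly the copy the proposed zero-copy conversion would eliminate:

```javascript
// Today's workaround: copy a (non-shared) ArrayBuffer into a
// SharedArrayBuffer byte by byte. This is the copy a zero-copy
// ArrayBuffer -> SharedArrayBuffer conversion would avoid.
// `toShared` is a hypothetical helper, not a platform API.
function toShared(arrayBuffer) {
  const shared = new SharedArrayBuffer(arrayBuffer.byteLength);
  new Uint8Array(shared).set(new Uint8Array(arrayBuffer));
  return shared;
}

const ab = new Uint8Array([1, 2, 3, 4]).buffer;
const sab = toShared(ab); // O(n) copy; ab stays usable (not detached)
```

For a large file this copy doubles peak memory and costs time proportional to the file size, which is what motivates the zero-copy proposal.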
Also, [citation needed] on the claim that structured-cloning a File is expensive. It should be extremely cheap, especially for Files (which behind the scenes are basically a filename).
@domenic About File structured cloning: I misinterpreted a benchmark; it's very cheap, as you mentioned. Regarding readAsSharedArrayBuffer, maybe adding a param to readAsArrayBuffer could be an option:
It would be backward compatible and would just change the underlying class used. Besides that, I agree with you that it would be great to allow creating a SharedArrayBuffer from an ArrayBuffer. Maybe something like https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/ArrayBuffer/transfer, but for SharedArrayBuffers.
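Not part of the thread, but a sketch of the detach semantics such a conversion would presumably follow: for non-shared buffers, `structuredClone` with a transfer list already moves (rather than copies) an ArrayBuffer and detaches the original, which is the closest existing analogue to the proposed behavior:

```javascript
// Existing detach-on-transfer semantics for non-shared buffers:
// structuredClone with a transfer list moves the ArrayBuffer
// without copying its contents and detaches the original.
const ab = new Uint8Array([10, 20, 30]).buffer;
const moved = structuredClone(ab, { transfer: [ab] });

// The original is now detached: its byteLength reports 0.
console.log(ab.byteLength);    // 0
console.log(moved.byteLength); // 3
```

A hypothetical `transferToShared`-style feature would presumably behave the same way, except that the destination would be a SharedArrayBuffer.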
topicus commented Mar 3, 2017
To allow workers to play with file contents, you currently have three options (maybe more):
If you need to parallelize tasks over file contents (e.g. parsing a big CSV file), there is no way to do it without performing a copy, which is a costly operation when you deal with large files.
It would be great to have a readAsSharedArrayBuffer to read the file straight into a shared array buffer and thus avoid the copy. It would also be cool to allow reading a file in parallel, but that's another issue.
Is there any other way to perform this with the current specs that I'm missing?