Bad performance when deserializing long built-in-type arrays #118
Comments
As far as I'm aware, the uint8 arrays for pre-generated messages are already specially deserialized, so we'd just be extending that case.
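For context, a minimal sketch (plain Node.js, not rosnodejs source) of what "specially deserialized" means for uint8 arrays: instead of reading each element through a primitive deserializer, the payload can be taken in one slice. The function names and wire layout (uint32 length prefix, little-endian) are illustrative assumptions:

```javascript
// Hand-built payload: uint32 length prefix (5) followed by 5 bytes.
const buf = Buffer.from([5, 0, 0, 0, 10, 20, 30, 40, 50]);

// Generic path: read each element one by one.
function deserializeUint8Slow(buffer, offset) {
  const len = buffer.readUInt32LE(offset);
  const out = new Array(len);
  for (let i = 0; i < len; i++) out[i] = buffer.readUInt8(offset + 4 + i);
  return out;
}

// Special-cased path: one subarray, no per-element loop.
// Note: a Node.js Buffer is itself a Uint8Array view.
function deserializeUint8Fast(buffer, offset) {
  const len = buffer.readUInt32LE(offset);
  return buffer.subarray(offset + 4, offset + 4 + len);
}

console.log(deserializeUint8Slow(buf, 0)); // [ 10, 20, 30, 40, 50 ]
console.log(deserializeUint8Fast(buf, 0)); // <Buffer 0a 14 1e 28 32>
```

The fast path is also zero-copy: the returned Buffer shares memory with the input.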
Is this a duplicate of #117?
@chfritz Not the same. I thought we should use TypedArrays everywhere they're supported; it would be a great performance improvement. But this may lead to compatibility issues.
A few things I was trying to determine (and hadn't managed to yet)...
We'd also just need some perf tests to show the improvement we expect (mostly to guard against bugs in the implementation, assuming TypedArrays would be a performance improvement in general).
To be clear, TypedArray is not a superset of normal Arrays; it was introduced for the binary operations Node.js lacked. Replacing all arrays with TypedArrays is impossible, especially object arrays. A good option, I think, is to introduce a flag so that programmers can manually handle heavy binary data like images/video/sound. Besides, heavy object arrays are still tough, typically LaserScan/PointCloud messages in ROS.
I started looking into doing this but discovered that TypedArray views require the byte offset to be a multiple of the type size (see: StackOverflow). In specific circumstances (e.g. for certain message definitions) it can be guaranteed that this holds, but turning on TypedArray deserialization for specific topics or messages and not others would probably be a more invasive change than what I'd been imagining... The general way around the byte-offset issue for ArrayBuffers seems to be using a DataView, but with that API we'd end up with basically the same code we have now using Node.js Buffers (which may just be a DataView at this point??). I'll look into it a bit more... I could be missing something in TypedArrays that would get around this, or overcomplicating the per-topic deserialization case.
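A minimal illustration of the constraint described above (plain Node.js, not rosnodejs code): a TypedArray view must start at an offset that is a multiple of its element size, while a DataView accepts any offset.

```javascript
const ab = new ArrayBuffer(16);
new DataView(ab).setFloat32(1, 3.5); // DataView writes at any byte offset

// A Float32Array view requires the offset to be a multiple of 4:
let threw = false;
try {
  new Float32Array(ab, 1, 1); // offset 1 is not 4-byte aligned
} catch (e) {
  threw = true; // RangeError: start offset must be a multiple of 4
}
console.log(threw); // true

// DataView reads back fine from the unaligned offset:
console.log(new DataView(ab).getFloat32(1)); // 3.5
```

This is why a bulk TypedArray path over an incoming network buffer generally needs either a guaranteed-aligned payload or a copy into a fresh, aligned ArrayBuffer first.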
Any plans? This would be a substantial performance improvement, and Node.js seems to support it. Another idea is maybe a config option to rosnodejs for this.
On my end, I am doing the following, which is similar to what rosnodejs could do. Note that in my example it doesn't make rosnodejs any more efficient, it only makes my code more efficient, but it shows how a single `ArrayBuffer` can be re-used:

```ts
let imageData: Uint8Array
let imageBlob: Blob
let imageUrl: string
let imageBlobParts: BlobPart[] = []
const img = document.getElementById('cameraImage') as HTMLImageElement
let buffer = new ArrayBuffer(INITIAL_SIZE)

const callback = (msg) => {
  if (imageUrl) URL.revokeObjectURL(imageUrl)
  const dataLength = msg.data.length
  if (dataLength > buffer.byteLength) buffer = new ArrayBuffer(dataLength + 10000)
  // TODO make an object pool of Uint8Array objects so we don't have to
  // re-allocate them when we encounter an image that has the same size as
  // a previous image.
  imageData = new Uint8Array(buffer, 0, dataLength)
  imageData.set(msg.data)
  imageBlobParts[0] = imageData // re-use objects as much as possible!
  imageBlob = new Blob(imageBlobParts, {type: 'image/jpeg'})
  imageUrl = URL.createObjectURL(imageBlob)
  img.src = imageUrl // render the image
}
```

As you can tell, there is a single spot in memory (a single `ArrayBuffer`) that each incoming image is copied into. If I do the following instead, the performance is much worse (400% slower):

```ts
let imageData: Uint8Array
let imageBlob: Blob
let imageUrl: string
let imageBlobParts: BlobPart[] = []
const img = document.getElementById('cameraImage') as HTMLImageElement

const callback = (msg) => {
  if (imageUrl) URL.revokeObjectURL(imageUrl)
  imageData = new Uint8Array(msg.data) // new one every time, very bad
  imageData.set(msg.data)
  imageBlobParts[0] = imageData // re-use objects as much as possible!
  imageBlob = new Blob(imageBlobParts, {type: 'image/jpeg'})
  imageUrl = URL.createObjectURL(imageBlob)
  img.src = imageUrl // render the image
}
```

In this last example we are not re-using memory, and the callback is 4x slower. I think it would be great if rosnodejs would allow us to re-use memory and greatly improve performance.
Excluding on-the-fly messages, rosnodejs already avoids allocating additional memory for uint8 arrays. For most other message types, I think the current API does what users expect, though point clouds and some other data structures may have issues. The easiest way around this is to allow users to do custom deserialization from the buffer. A longer-term solution could be another deserialization option that wraps DataView.
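One hypothetical shape for such a custom-deserialization hook (nothing here is an existing rosnodejs API; `subscribeRaw` and `rawDeserializer` are invented names for illustration): hand the raw serialized payload to user code, which can wrap it in a DataView without copying.

```javascript
// Hypothetical: bypass the generated deserializer and give user code
// the raw serialized payload. Invented API, for illustration only.
function subscribeRaw(messageBuffer, rawDeserializer) {
  // In a real client this would be invoked from the TCPROS stream handler.
  return rawDeserializer(messageBuffer, 0);
}

// User side: a zero-copy view over the payload via DataView,
// which tolerates any byte offset.
const payload = Buffer.alloc(8);
payload.writeUInt32LE(42, 0);  // e.g. a header field
payload.writeFloatLE(1.25, 4); // e.g. one float32 field

const parsed = subscribeRaw(payload, (buf, offset) => {
  const view = new DataView(buf.buffer, buf.byteOffset + offset, buf.length - offset);
  return { seq: view.getUint32(0, true), value: view.getFloat32(4, true) };
});
console.log(parsed); // { seq: 42, value: 1.25 }
```

The appeal of this shape is that alignment and copying decisions move to the user, who knows whether a given topic carries heavy binary data worth special-casing.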
I will open a PR tomorrow with my approach; it is 5x faster on topic listen.
rosnodejs deserializes built-in-type array data one by one using deserializePrimitive(); can we use TypedArrays instead?