q5-webgpu: Instanced rendering #52
Comments
Here's ChatGPT's explanation of instanced drawing:

Instanced drawing in WebGPU allows you to draw multiple instances of the same geometry with a single draw call, which can significantly improve performance when rendering many objects. Here's a step-by-step explanation of how it works:

1. Setup buffers: Create vertex and instance buffers. The vertex buffer contains the geometry data, while the instance buffer contains per-instance data (e.g., positions, colors).
2. Create bind group layout: Define the layout of the bind groups, which describe how the shaders will access resources like buffers and textures.
3. Create pipeline: Create a render pipeline that includes the vertex and fragment shaders. The vertex shader will use both the vertex and instance buffers.
4. Encode commands: Encode the commands to draw the instances. This includes setting the pipeline, binding the resources, and issuing the draw call.
5. Submit commands: Submit the encoded commands to the GPU for execution.

Here's an example in JavaScript using WebGPU:

// Assuming you have a WebGPU context (device, queue, etc.)
// Vertex data for a simple triangle
const vertexData = new Float32Array([
0.0, 0.5, // Vertex 1
-0.5, -0.5, // Vertex 2
0.5, -0.5 // Vertex 3
]);
// Instance data (e.g., positions for each instance)
const instanceData = new Float32Array([
-0.5, 0.5, // Instance 1 position
0.5, 0.5, // Instance 2 position
-0.5, -0.5, // Instance 3 position
0.5, -0.5 // Instance 4 position
]);
// Create buffers
const vertexBuffer = device.createBuffer({
size: vertexData.byteLength,
usage: GPUBufferUsage.VERTEX,
mappedAtCreation: true
});
new Float32Array(vertexBuffer.getMappedRange()).set(vertexData);
vertexBuffer.unmap();
const instanceBuffer = device.createBuffer({
size: instanceData.byteLength,
usage: GPUBufferUsage.VERTEX,
mappedAtCreation: true
});
new Float32Array(instanceBuffer.getMappedRange()).set(instanceData);
instanceBuffer.unmap();
// Create bind group layout and pipeline
const bindGroupLayout = device.createBindGroupLayout({
entries: []
});
const pipeline = device.createRenderPipeline({
vertex: {
module: device.createShaderModule({
code: `
@vertex
fn main(@location(0) position: vec2<f32>, @location(1) instancePos: vec2<f32>) -> @builtin(position) vec4<f32> {
return vec4<f32>(position + instancePos, 0.0, 1.0);
}
`
}),
entryPoint: 'main',
buffers: [
{
arrayStride: 2 * 4,
attributes: [{ shaderLocation: 0, offset: 0, format: 'float32x2' }]
},
{
arrayStride: 2 * 4,
stepMode: 'instance',
attributes: [{ shaderLocation: 1, offset: 0, format: 'float32x2' }]
}
]
},
fragment: {
module: device.createShaderModule({
code: `
@fragment
fn main() -> @location(0) vec4<f32> {
return vec4<f32>(1.0, 1.0, 1.0, 1.0);
}
`
}),
entryPoint: 'main',
targets: [{ format: 'bgra8unorm' }]
},
primitive: {
topology: 'triangle-list'
},
layout: device.createPipelineLayout({ bindGroupLayouts: [bindGroupLayout] })
});
// Encode commands
const commandEncoder = device.createCommandEncoder();
const passEncoder = commandEncoder.beginRenderPass({
colorAttachments: [{
view: context.getCurrentTexture().createView(),
clearValue: { r: 0, g: 0, b: 0, a: 1 },
loadOp: 'clear',
storeOp: 'store'
}]
});
passEncoder.setPipeline(pipeline);
passEncoder.setVertexBuffer(0, vertexBuffer);
passEncoder.setVertexBuffer(1, instanceBuffer);
passEncoder.draw(3, 4); // 3 vertices, 4 instances
passEncoder.end();
// Submit commands
device.queue.submit([commandEncoder.finish()]);

In this example, the vertex shader takes both vertex positions and instance positions as inputs.
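The example above hand-writes four instance positions, but instanced rendering pays off at much larger counts, where the per-instance data is generated programmatically. A minimal sketch (plain JavaScript, no GPU required; `makeGridInstanceData` is a hypothetical helper, not part of any library) that lays an n-by-n grid of instance positions out across clip space:

```javascript
// Build a flat Float32Array of 2D positions for an n-by-n grid of instances,
// spread evenly across clip space ([-1, 1] on both axes).
function makeGridInstanceData(n) {
  const data = new Float32Array(n * n * 2);
  for (let row = 0; row < n; row++) {
    for (let col = 0; col < n; col++) {
      const i = (row * n + col) * 2;
      // Center of each grid cell, mapped from [0, n) to (-1, 1).
      data[i] = ((col + 0.5) / n) * 2 - 1;     // x
      data[i + 1] = ((row + 0.5) / n) * 2 - 1; // y
    }
  }
  return data;
}

// 1000 x 1000 = one million instances, still a single draw call:
// upload this buffer, then passEncoder.draw(3, 1_000_000).
const instanceData1M = makeGridInstanceData(1000);
```

The draw-call count stays at one no matter how large n gets; only the instance buffer upload grows.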
I might come back to this later, but for now I'm going to mark it as not planned, since it would require a total rewrite of the q5-webgpu-drawing module and I'm not sure it'd even be worth it given how fast it is just to create new vertices every frame.
There's an opportunity with q5's upcoming WebGPU renderer to implement friendly p5 style wrappers for batch rendering, or do it behind the scenes.
Batch rendering could draw many shapes with the same fill and stroke colors or shader in the same render pass. The goal would be to take advantage of the GPU's strength at drawing in parallel, which is faster than issuing many separate draw calls.
Proposed use example: between the start and end of a batch, drawing functions like rect would add vertices to a vertex buffer behind the scenes. I hope this will give users an easy and performant way to render millions of 2D shapes.
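As a rough sketch of what such a wrapper could do internally (all names here are hypothetical; q5 has no beginBatch/endBatch API), each rect call inside a batch would append its two triangles to a growing CPU-side array, and ending the batch would upload the whole array and issue a single draw:

```javascript
// Hypothetical batching sketch: rect() calls accumulate vertices on the CPU,
// and endBatch() would upload them and issue one draw call for all of them.
const batch = { vertices: [] };

function beginBatch() {
  batch.vertices.length = 0;
}

// Append a rectangle as two triangles (6 vertices, 2 floats each).
function rect(x, y, w, h) {
  batch.vertices.push(
    x, y,      x + w, y,      x, y + h,     // triangle 1
    x + w, y,  x + w, y + h,  x, y + h      // triangle 2
  );
}

function endBatch() {
  const data = new Float32Array(batch.vertices);
  // In a real renderer this is where queue.writeBuffer and a single
  // draw(data.length / 2) call would happen; here we just return the data.
  return data;
}
```

One buffer upload and one draw call then cover every rect in the batch, instead of one draw call per shape.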