@antv/a8

A library for audio visualization using GPU rendering techniques.

We provide the following effects now:

  • GPU Sine
  • GPU Stardust
  • GPU BlackHole

Getting Started

Install from NPM.

npm install @antv/a8

Create an Audio instance, set an effect, and start playing.

import { Audio, Sine } from '@antv/a8';

const audio = new Audio({
  canvas: $canvas,
});
audio.data($audio).effect(new Sine()).play();

API

Constructor

new Audio({
  canvas: $canvas,
});
  • canvas HTMLCanvasElement

data()

Pass in an HTMLAudioElement; an AudioContext and an AnalyserNode will be created from it later.

audio.data($audio);
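
Under the hood this kind of wiring is typically done with the Web Audio API. The snippet below is only a minimal sketch of that idea, not necessarily the library's exact internals; it uses standard Web Audio calls only.

const ctx = new AudioContext();
const source = ctx.createMediaElementSource($audio);
const analyser = ctx.createAnalyser();

// Route the element through the analyser so playback stays audible.
source.connect(analyser);
analyser.connect(ctx.destination);

// Each frame, read the current frequency data to drive the visualization.
const bins = new Uint8Array(analyser.frequencyBinCount);
analyser.getByteFrequencyData(bins);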

effect()

Mount an effect.

import { Sine } from '@antv/a8';

audio.effect(new Sine());
audio.effect(new Sine()); // calling effect() again switches to the new effect

style()

Update style options of effect.

audio.style({ blur: 1 });

play()

Start visualizing the audio.

audio.play();

destroy()

Cancel the rAF loop and destroy GPU resources (if any).

audio.destroy();

Effects

We provide the following effect now.

  • GPU Particles

GPU Particles

When creating GPU particle effects, a WASM module is used to compile shader chunks. For more information, see https://observablehq.com/@antv/compute-toys#cell-712

const shaderCompilerPath = new URL(
  '/public/glsl_wgsl_compiler_bg.wasm',
  import.meta.url,
).href;
const effect = new Stardust(shaderCompilerPath, {});
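
For completeness, here is a sketch of mounting this effect on an Audio instance with the API described above. It assumes Stardust is exported from the package entry just like Sine; the empty options object is taken from the snippet above.

import { Audio, Stardust } from '@antv/a8';

const shaderCompilerPath = new URL(
  '/public/glsl_wgsl_compiler_bg.wasm',
  import.meta.url,
).href;

const audio = new Audio({
  canvas: $canvas,
});
audio
  .data($audio)
  .effect(new Stardust(shaderCompilerPath, {}))
  .play();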

Let me briefly describe the implementation. The whole per-frame process, most of it inside compute shaders, can be divided into the following stages (a WebGPU-style sketch of encoding them follows the list):

  • Simulate particles
  • Clear
  • Rasterize
  • Output to storage buffer
  • Blit to screen
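
The sketch below is not the library's actual code; it only illustrates, with the raw WebGPU API, how these stages could be encoded each frame. Pipelines and bind groups are assumed to be created elsewhere.

interface StardustPasses {
  simulate: GPUComputePipeline; // Simulate particles
  clear: GPUComputePipeline; // Clear
  rasterize: GPUComputePipeline; // Rasterize
  output: GPUComputePipeline; // Output to storage buffer
  blit: GPURenderPipeline; // Blit to screen
  computeBindGroup: GPUBindGroup;
  blitBindGroup: GPUBindGroup;
}

function encodeFrame(
  device: GPUDevice,
  context: GPUCanvasContext,
  p: StardustPasses,
  workgroups: [number, number],
) {
  const encoder = device.createCommandEncoder();

  // Run the compute stages in order; they communicate through storage textures/buffers.
  const compute = encoder.beginComputePass();
  for (const pipeline of [p.simulate, p.clear, p.rasterize, p.output]) {
    compute.setPipeline(pipeline);
    compute.setBindGroup(0, p.computeBindGroup);
    compute.dispatchWorkgroups(workgroups[0], workgroups[1]);
  }
  compute.end();

  // Blit the accumulated image to the canvas with a fullscreen triangle.
  const render = encoder.beginRenderPass({
    colorAttachments: [
      {
        view: context.getCurrentTexture().createView(),
        loadOp: 'clear',
        storeOp: 'store',
        clearValue: { r: 0, g: 0, b: 0, a: 1 },
      },
    ],
  });
  render.setPipeline(p.blit);
  render.setBindGroup(0, p.blitBindGroup);
  render.draw(3);
  render.end();

  device.queue.submit([encoder.finish()]);
}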

The particle structure is really simple: it consists of two properties, position and velocity. Particles are loaded from and stored to storage textures as shown below.

struct Particle {
  position: float4,
  velocity: float4,
}

// Positions live in layer 0 of the storage texture, velocities in layer 1.
fn LoadParticle(pix: int2) -> Particle {
  var p: Particle;
  p.position = textureLoad(pass_in, pix, 0, 0);
  p.velocity = textureLoad(pass_in, pix, 1, 0);
  return p;
}

fn SaveParticle(pix: int2, p: Particle) {
  textureStore(pass_out, pix, 0, p.position);
  textureStore(pass_out, pix, 1, p.velocity);
}

At the first frame, we assign the initial position & velocity for each particle.

@compute @workgroup_size(16, 16)
fn SimulateParticles(@builtin(global_invocation_id) id: uint3) {
  var p = LoadParticle(int2(id.xy));

  if (time.frame == 0u) {
    let rng = rand4();

    // Normalize from [0, 1] to [-1, 1].
    p.position = float4(2.0 * rng.xyz - 1.0, 0.0);
    p.velocity = float4(0.0, 0.0, 0.0, 0.0);
  }

  SaveParticle(int2(id.xy), p);
}

In each subsequent frame, the velocity is updated by a force field (with decay) and the position is advanced by the velocity:

let dt = custom.Speed * custom.TimeStep;
p.velocity += (ForceField(p.position.xyz, t) - custom.VelocityDecay * p.velocity) * dt;
p.position += p.velocity * dt;

GPU Sine


Online DEMO

  • radius number
  • sinea number
  • sineb number
  • speed number
  • blur number
  • samples number
  • mode number
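
The options above can be updated at runtime through style(). The sketch below uses purely illustrative values, not defaults:

import { Audio, Sine } from '@antv/a8';

const audio = new Audio({ canvas: $canvas });
audio.data($audio).effect(new Sine()).play();

// Illustrative values only; see the option list above for the available keys.
audio.style({
  radius: 0.5,
  sinea: 1,
  sineb: 1,
  speed: 0.3,
  blur: 1,
  samples: 4,
  mode: 0,
});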

GPU Stardust


Online DEMO

  • radius number
  • timeStep number
  • samples number
  • blurRadius number
  • velocityDecay number
  • speed number
  • blurExponentA number
  • blurExponentB number
  • animatedNoise number
  • accumulation number
  • exposure number

GPU BlackHole

This effect simulates a black hole based on the Kerr–Newman metric: https://en.wikipedia.org/wiki/Kerr%E2%80%93Newman_metric

Online DEMO

  • radius number
  • timeStep number
  • samples number
  • animatedNoise number
  • accumulation number
  • exposure number
  • blurExponentA number
  • blurExponentB number
  • blurRadius number
  • kerrA number
  • kerrQ number
  • initSpeed number
  • initThick number
  • steps number
  • focalPlane number
  • motionBlur number
  • gamma number

Appendix