WebGPU 3.4.1. Timelines

A computer system with a user agent at the front-end and GPU at the back-end has components working on different timelines in parallel:

Content timeline
Associated with the execution of the Web script. It includes calling all methods described by this specification.
Steps executed on the content timeline look like this.

Device timeline
Associated with the GPU device operations that are issued by the user agent. It includes creation of adapters, devices, and GPU resources and state objects, which are typically synchronous operations from the point of view of the user agent part that controls the GPU, but can live in a separate OS process.
Steps executed on the device timeline look like this.

Queue timeline
Associated with the execution of operations on the compute units of the GPU. It includes actual draw, copy, and compute jobs that run on the GPU.
Steps executed on the queue timeline look like this.
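
To make the split concrete, here is a minimal, non-normative TypeScript sketch (WebGPU type declarations such as @webgpu/types are assumed, and `device` is a placeholder for a previously obtained GPUDevice) annotating which timeline each call involves:

```typescript
// Sketch only: `device` is assumed to have been obtained earlier via
// navigator.gpu.requestAdapter() / adapter.requestDevice().
declare const device: GPUDevice;

// Content timeline: the createBuffer() call itself is just script invoking a
// method; it returns a GPUBuffer handle immediately.
const buffer = device.createBuffer({
  size: 4,
  usage: GPUBufferUsage.COPY_DST | GPUBufferUsage.MAP_READ,
});

// Device timeline: the user agent (possibly in a separate OS process) performs
// the actual low-level buffer allocation when it processes that call.

// Queue timeline: only work submitted to the queue runs on the GPU's compute units.
const encoder = device.createCommandEncoder();
device.queue.submit([encoder.finish()]);
```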

In this specification, asynchronous operations are used when the result value depends on work that happens on any timeline other than the Content timeline. They are represented by callbacks and promises in JavaScript.
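
For instance (a non-normative sketch; top-level await in a module and WebGPU type declarations are assumed):

```typescript
// Promise-returning WebGPU calls; assumes a browser exposing navigator.gpu.
const adapter = await navigator.gpu.requestAdapter();   // depends on Device timeline work
if (!adapter) throw new Error("WebGPU is not available");
const device = await adapter.requestDevice();           // also resolved from the Device timeline

// onSubmittedWorkDone() depends on the Queue timeline: its promise resolves only
// after previously submitted GPU work has completed.
device.queue.submit([]);
await device.queue.onSubmittedWorkDone();
```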

EXAMPLE 1
GPUComputePassEncoder.dispatch():

  1. User encodes a dispatch command by calling a method of the GPUComputePassEncoder, which happens on the Content timeline.
  2. User issues GPUQueue.submit() that hands over the GPUCommandBuffer to the user agent, which processes it on the Device timeline by calling the OS driver to do a low-level submission.
  3. The submit gets dispatched by the GPU invocation scheduler onto the actual compute units for execution, which happens on the Queue timeline.
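
A hedged TypeScript sketch of the steps above (assuming a `device` and a compute `pipeline` created elsewhere; method names follow the current API, in which the dispatch() method quoted above has been renamed to dispatchWorkgroups()):

```typescript
// Non-normative sketch; `device` and `pipeline` are assumed to exist already.
declare const device: GPUDevice;
declare const pipeline: GPUComputePipeline;

// 1. Content timeline: script encodes the dispatch into a command buffer.
const encoder = device.createCommandEncoder();
const pass = encoder.beginComputePass();
pass.setPipeline(pipeline);
pass.dispatchWorkgroups(64);
pass.end();
const commandBuffer = encoder.finish();

// 2. Device timeline: submit() hands the command buffer to the user agent, which
//    performs the low-level submission through the OS driver.
device.queue.submit([commandBuffer]);

// 3. Queue timeline: the GPU runs the dispatch on its compute units; this promise
//    resolves once the submitted work has finished.
await device.queue.onSubmittedWorkDone();
```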

EXAMPLE 2
GPUDevice.createBuffer():

  1. User fills out a GPUBufferDescriptor and creates a GPUBuffer with it, which happens on the Content timeline.
  2. User agent creates a low-level buffer on the Device timeline.
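
A corresponding sketch (again non-normative, with `device` assumed to be an existing GPUDevice):

```typescript
declare const device: GPUDevice;

// 1. Content timeline: fill out a GPUBufferDescriptor and create the buffer.
const descriptor: GPUBufferDescriptor = {
  size: 1024,
  usage: GPUBufferUsage.VERTEX | GPUBufferUsage.COPY_DST,
};
const buffer: GPUBuffer = device.createBuffer(descriptor);

// 2. Device timeline: the user agent allocates the underlying low-level buffer
//    when it processes the call; script already holds the GPUBuffer handle.
```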

EXAMPLE 3
GPUBuffer.mapAsync():

  1. User requests to map a GPUBuffer on the Content timeline and gets a promise in return.
  2. User agent checks if the buffer is currently used by the GPU and makes a reminder to itself to check back when this usage is over.
  3. After the GPU, operating on the Queue timeline, is done using the buffer, the user agent maps it to memory and resolves the promise.
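
A sketch of this flow (non-normative; `readbackBuffer` is a placeholder for a buffer created with MAP_READ | COPY_DST usage that the GPU may still be using):

```typescript
declare const readbackBuffer: GPUBuffer;

// 1. Content timeline: request the mapping and get a promise back.
const mapping = readbackBuffer.mapAsync(GPUMapMode.READ);

// 2./3. The user agent waits for any GPU work that uses the buffer on the Queue
//       timeline to finish, then maps the buffer and resolves the promise.
await mapping;
const bytes = new Uint8Array(readbackBuffer.getMappedRange());
console.log(bytes[0]);

// Unmap when done so the GPU can use the buffer again.
readbackBuffer.unmap();
```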