I'm currently writing a compute shader that does a simple multiplication of a float and a uint3, and the results I'm getting are very inaccurate.
For example, uint3(1, 2, 3) * 0.25
Expected result: float3(0.25, 0.5, 0.75)
Actual result: float3(0.3, 0.5, 0.8)
Any idea how I can improve the precision?
The compute shader:
[numthreads(X_THREADS, Y_THREADS, Z_THREADS)]
void CSMain (uint3 id : SV_DispatchThreadID)
{
// Fill the array
uint tid = id.x;
while (tid < octree_length)
{
// Normalize tid
uint depth = OctreeDepth(tid);
uint mid = tid - depth_offsets[depth];
// Compute node data
float node_size = bounds_size / pow(2, depth);
float3 node_min = MortonToXYZ(mid) * node_size;
// Build node
octree[tid].anchor = node_min;
octree[tid].size = node_size;
// Move tid
tid += total_num_threads;
}
}
The inaccurate multiplication:
float3 node_min = MortonToXYZ(mid) * node_size;
- mid: this thread's Morton code
- MortonToXYZ: returns a uint3 with elements in [0, infinity], i.e. the decoded Morton code (see the sketch below)
- node_size: a float in [0, infinity]
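MortonToXYZ is not shown in the question, so as a point of reference, here is a minimal C# sketch of a standard 3D Morton decode (de-interleaving every third bit of the code); the name, the 10-bit-per-axis layout, and the use of Vector3Int are assumptions for illustration, not the actual shader code:

static uint Compact1By2(uint x)
{
    // Keep every third bit of x and pack them into the low 10 bits.
    x &= 0x09249249;
    x = (x ^ (x >> 2)) & 0x030C30C3;
    x = (x ^ (x >> 4)) & 0x0300F00F;
    x = (x ^ (x >> 8)) & 0xFF0000FF;
    x = (x ^ (x >> 16)) & 0x000003FF;
    return x;
}

static Vector3Int MortonToXYZ(uint mid)
{
    // x, y and z come from bit lanes 0, 1 and 2 of the Morton code respectively.
    return new Vector3Int((int)Compact1By2(mid),
                          (int)Compact1By2(mid >> 1),
                          (int)Compact1By2(mid >> 2));
}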
The C# code that launches the shader and reads back the data:
ComputeBuffer BuildOctree(Vector3 boundsMin, float boundsSize, int octreeDepth, Vector3Int numThreadGroups)
{
// Compute constants
int totalNumThreads = numThreadGroups.x * numThreadGroups.y * numThreadGroups.z * THREADS_PER_GROUPS;
// Compute depth offsets and octree length
int[] depthOffsets = new int[octreeDepth];
int length = 0;
for (int i = 0; i < octreeDepth; i++)
{
depthOffsets[i] = length;
length += (int) Mathf.Pow(8, i);
}
// Prepare buffers
ComputeBuffer octree = new ComputeBuffer(length, 4 * sizeof(float), ComputeBufferType.Structured);
ComputeBuffer depthOffsetsGPU = new ComputeBuffer(octreeDepth, sizeof(int), ComputeBufferType.Structured);
depthOffsetsGPU.SetData(depthOffsets);
octree.SetData(new OctreeNode[length]);
// Load data into shader
OCTREE_BUILDER.SetBuffer(0, "octree", octree);
OCTREE_BUILDER.SetBuffer(0, "depth_offsets", depthOffsetsGPU);
OCTREE_BUILDER.SetInt("total_num_threads", totalNumThreads);
OCTREE_BUILDER.SetInt("octree_length", length);
OCTREE_BUILDER.SetFloat("bounds_size", boundsSize);
// Launch kernel
OCTREE_BUILDER.Dispatch(0, numThreadGroups.x, numThreadGroups.y, numThreadGroups.z);
OctreeNode[] output = new OctreeNode[length];
octree.GetData(output);
for (int i = 0; i < output.Length; i++)
    Debug.Log("cell[" + i + "]: " + output[i].anchor + ", " + output[i].size);
// Return octree buffer
return octree;
}
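For context, neither the HLSL node struct nor the C# OctreeNode type is shown in the question. Given the 4 * sizeof(float) stride and the anchor/size fields used above, the C# side is presumably something like the following (a guess at the layout, not the actual type):

public struct OctreeNode
{
    public Vector3 anchor; // minimum corner of the node (node_min written by the shader)
    public float size;     // edge length of the node (node_size written by the shader)
}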
Notes:
- I tried a minimal example that only computes uint3(1, 1, 1) * 0.25
  Expected result: float3(0.25, 0.25, 0.25)
  Actual result: float3(0.3, 0.3, 0.3)
- I'm using an RTX 2070
Answer:
As user @bart pointed out, the problem was in the printing: Unity's default float formatting uses only 2 digits, which rounded my values. Using ToString("F5") in my prints showed the correct values.
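For reference, a minimal sketch of the adjusted read-back print, assuming OctreeNode holds a Vector3 anchor and a float size as sketched above (only the "F5" format string comes from the answer; the loop is just the one from BuildOctree):

for (int i = 0; i < output.Length; i++)
    Debug.Log("cell[" + i + "]: " + output[i].anchor.ToString("F5") + ", " + output[i].size.ToString("F5"));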
Tags: unity3d hlsl unity3d-shaders