Editor's note: Drawing on the NeRF paper and related reference code, the author walks through the details and process of reproducing a simple NeRF (Neural Radiance Field) from scratch with PyTorch.
Before explaining the code, let's briefly review what NeRF (Neural Radiance Fields) is and how it works. The NeRF paper describes the algorithm as follows:

"We present a method that achieves state-of-the-art results for synthesizing novel views of complex scenes by optimizing an underlying continuous volumetric scene function using a sparse set of input views. Our algorithm represents a scene using a fully-connected (non-convolutional) deep network, whose input is a single continuous 5D coordinate (spatial location (x, y, z) and viewing direction (θ, φ)) and whose output is the volume density and view-dependent emitted radiance at that spatial location. We synthesize views by querying 5D coordinates along camera rays and use classic volume rendering techniques to project the output colors and densities into an image. Because volume rendering is naturally differentiable, the only input required to optimize our representation is a set of images with known camera poses. We describe how to effectively optimize neural radiance fields to render photorealistic novel views of scenes with complicated geometry and appearance, and demonstrate results that outperform prior work on neural rendering and view synthesis."
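In the paper's notation, the entire scene is represented by a single MLP with weights Θ:

F_Θ : (x, y, z, θ, φ) → (c, σ)

mapping a 3D position and a 2D viewing direction to an emitted RGB color c and a volume density σ. Everything that follows builds the pieces needed to query and train this function.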
With the principle covered, this section turns to the concrete implementation. First, import the Python libraries the algorithm needs.
import os
from typing import Optional, Tuple, List, Union, Callable

import numpy as np
import torch
from torch import nn
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import axes3d
from tqdm import trange

# Select GPU if available, otherwise CPU
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
1 Inputs
As the paper describes, NeRF's input is a 5D coordinate combining a spatial position and a viewing direction. However, the dataset used to build NeRF in PyTorch here is an ordinary set of 2D images of a 3D scene, together with the camera parameters: poses and focal length. The steps below therefore convert this dataset into the input form the model expects.

This walkthrough uses images of a LEGO bulldozer as the dataset for a simple NeRF, as shown in Figure 2 (see the data link at the end of the article).

The tiny LEGO dataset used here consists of 106 images of a LEGO bulldozer, along with pose data and a shared focal length value. As with other datasets, the first 100 images are kept for training and one test image is held out for validation. The loading code is as follows:
data = np.load('tiny_nerf_data.npz')  # Load the dataset
images = data['images']  # Image data
poses = data['poses']    # Pose data
focal = data['focal']    # Focal length

print(f'Images shape: {images.shape}')
print(f'Poses shape: {poses.shape}')
print(f'Focal length: {focal}')

height, width = images.shape[1:3]
near, far = 2., 6.

n_training = 100   # Number of training images
testimg_idx = 101  # Index of the test image
testimg, testpose = images[testimg_idx], poses[testimg_idx]

plt.imshow(testimg)
print('Pose')
print(testpose)
2 Data Processing
Recall from the paper that the input required here is a single 5D coordinate: a spatial position (x, y, z) and a viewing direction (θ, φ). The tiny LEGO dataset therefore needs some processing first.

Generally speaking, collecting these particular inputs amounts to inverse-rendering the input images: casting a projection ray through each pixel into 3D space and drawing samples along it.

To sample input points in 3D space beyond the images themselves, we first obtain the initial pose of each camera from the LEGO photo collection, then use some vector math to convert each 4x4 pose matrix into a 3D coordinate representing the origin and a 3D vector representing the direction. Together, these two pieces of information describe a vector that characterizes where the camera was pointing when the photo was taken.
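Concretely, writing the camera-to-world matrix as c2w = [R | t], with R the 3x3 rotation block and t the translation column, the ray origin is simply

o = t,    d = R · (0, 0, −1)ᵀ

where d, the viewing direction, is the camera's −z axis rotated into world coordinates. This is exactly the vector math performed by the plotting code below, and by the get_rays function later on.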
The code below visualizes this by drawing one arrow per frame, showing each image's origin and direction:
# Direction data
dirs = np.stack([np.sum([0, 0, -1] * pose[:3, :3], axis=-1) for pose in poses])
# Origin data
origins = poses[:, :3, -1]

# Plot setup
ax = plt.figure(figsize=(12, 8)).add_subplot(projection='3d')
_ = ax.quiver(
    origins[..., 0].flatten(),
    origins[..., 1].flatten(),
    origins[..., 2].flatten(),
    dirs[..., 0].flatten(),
    dirs[..., 1].flatten(),
    dirs[..., 2].flatten(), length=0.5, normalize=True)
ax.set_xlabel('X')
ax.set_ylabel('Y')
ax.set_zlabel('Z')
plt.show()
The resulting arrows are shown in the figure below.

With these camera poses in hand, we can cast a projection ray through every pixel of an image; each ray is defined jointly by its origin (x, y, z) and its direction. Within one image the origins are the same for every pixel, but the directions differ: each is slightly offset from the center, so no two direction rays are parallel, as illustrated in Figure 4.

Following the idea in Figure 4, we can determine the direction and origin of each ray with the code below:
def get_rays(
    height: int,          # Image height
    width: int,           # Image width
    focal_length: float,  # Focal length
    c2w: torch.Tensor     # Camera-to-world pose matrix
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Find the origin and direction of each ray from every pixel and the camera origin.
    """
    # Apply the pinhole camera model to gather directions for each pixel
    i, j = torch.meshgrid(
        torch.arange(width, dtype=torch.float32).to(c2w),
        torch.arange(height, dtype=torch.float32).to(c2w),
        indexing='ij')
    i, j = i.transpose(-1, -2), j.transpose(-1, -2)

    # Direction data
    directions = torch.stack([(i - width * .5) / focal_length,
                              -(j - height * .5) / focal_length,
                              -torch.ones_like(i)
                              ], dim=-1)

    # Rotate the directions into world space using the camera pose
    rays_d = torch.sum(directions[..., None, :] * c2w[:3, :3], dim=-1)

    # All rays share the same origin by default
    rays_o = c2w[:3, -1].expand(rays_d.shape)
    return rays_o, rays_d
Once we have the direction and origin of the ray through each pixel, we have the 5D inputs the NeRF algorithm needs. The data is now arranged into the model's input format:
# Convert to PyTorch tensors
images = torch.from_numpy(data['images'][:n_training]).to(device)
poses = torch.from_numpy(data['poses']).to(device)
focal = torch.from_numpy(data['focal']).to(device)
testimg = torch.from_numpy(data['images'][testimg_idx]).to(device)
testpose = torch.from_numpy(data['poses'][testimg_idx]).to(device)

# Get the rays for one image
height, width = images.shape[1:3]
with torch.no_grad():
    ray_origin, ray_direction = get_rays(height, width, focal, testpose)

print('Ray Origin')
print(ray_origin.shape)
print(ray_origin[height // 2, width // 2, :])
print('')
print('Ray Direction')
print(ray_direction.shape)
print(ray_direction[height // 2, width // 2, :])
print('')
2.1 Stratified Sampling
Once the input module has the data NeRF needs, i.e. rays defined by an origin and a direction vector, we can sample points along those rays. This follows a coarse-to-fine strategy, starting with stratified sampling.

Specifically, stratified sampling splits each ray into evenly spaced bins and draws one random sample within each bin. A perturbation flag decides whether to sample uniformly inside each bin or to simply use the bin centers as sample points.
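In the paper's notation, with near and far bounds t_n and t_f and N samples per ray, the i-th sample is drawn uniformly from its bin:

t_i ~ U[ t_n + (i−1)/N · (t_f − t_n),  t_n + i/N · (t_f − t_n) ]

The function below implements this, plus an optional inverse-depth mode: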
# Stratified sampling function
def sample_stratified(
    rays_o: torch.Tensor,            # Ray origins
    rays_d: torch.Tensor,            # Ray directions
    near: float,
    far: float,
    n_samples: int,                  # Number of samples
    perturb: Optional[bool] = True,  # Perturbation flag
    inverse_depth: bool = False      # Inverse depth
) -> Tuple[torch.Tensor, torch.Tensor]:
    """
    Sample along rays from regularly spaced bins.
    """
    # Grab the sample positions along the ray
    t_vals = torch.linspace(0., 1., n_samples, device=rays_o.device)
    if not inverse_depth:
        # Sample linearly between `near` and `far`
        z_vals = near * (1. - t_vals) + far * (t_vals)
    else:
        # Sample linearly in inverse depth (disparity)
        z_vals = 1. / (1. / near * (1. - t_vals) + 1. / far * (t_vals))

    # Draw uniform samples from within each bin along the ray
    if perturb:
        mids = .5 * (z_vals[1:] + z_vals[:-1])
        upper = torch.concat([mids, z_vals[-1:]], dim=-1)
        lower = torch.concat([z_vals[:1], mids], dim=-1)
        t_rand = torch.rand([n_samples], device=z_vals.device)
        z_vals = lower + (upper - lower) * t_rand
    z_vals = z_vals.expand(list(rays_o.shape[:-1]) + [n_samples])

    # Apply the scales to the rays
    pts = rays_o[..., None, :] + rays_d[..., None, :] * z_vals[..., :, None]
    return pts, z_vals
Next comes a visual check of these sample points. As Figure 5 shows, the unperturbed blue points are the bin "centers", while the red points are the perturbed samples. Note that the red points are slightly offset from the blue points above them, but all lie between the near and far bounds. The code is as follows:
# Draw stratified samples from the example rays computed above
rays_o = ray_origin.view([-1, 3])
rays_d = ray_direction.view([-1, 3])
n_samples = 8
perturb = True
inverse_depth = False
with torch.no_grad():
    pts, z_vals = sample_stratified(rays_o, rays_d, near, far, n_samples,
                                    perturb=perturb, inverse_depth=inverse_depth)

y_vals = torch.zeros_like(z_vals)

# Call the sampler again without perturbation
_, z_vals_unperturbed = sample_stratified(rays_o, rays_d, near, far, n_samples,
                                          perturb=False, inverse_depth=inverse_depth)

# Plotting
plt.plot(z_vals_unperturbed[0].cpu().numpy(), 1 + y_vals[0].cpu().numpy(), 'b-o')
plt.plot(z_vals[0].cpu().numpy(), y_vals[0].cpu().numpy(), 'r-o')
plt.ylim([-1, 2])
plt.title('Stratified Sampling (blue) with Perturbation (red)')
ax = plt.gca()
ax.axes.yaxis.set_visible(False)
plt.grid(True)
3 Positional Encoding
Like Transformers, NeRF makes use of a positional encoder, which maps the inputs to a higher-frequency space to compensate for the bias neural networks have toward learning low-frequency functions.
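The paper's encoding applies, to each input component p, the map

γ(p) = (sin(2⁰πp), cos(2⁰πp), …, sin(2^(L−1)πp), cos(2^(L−1)πp))

with L frequencies. Note that the implementation below follows common reference code in two small ways: it also keeps the raw input p as the first component, and it omits the factor of π from the frequency bands.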
This step builds a simple torch.nn.Module for positional encoding; the same encoder class can be used to encode both the input samples and the viewing directions, instantiated with different parameters for each. The code is as follows:
# Positional encoder class
class PositionalEncoder(nn.Module):
    """
    Sine-cosine positional encoder for input points.
    """
    def __init__(
        self,
        d_input: int,
        n_freqs: int,
        log_space: bool = False
    ):
        super().__init__()
        self.d_input = d_input
        self.n_freqs = n_freqs
        self.log_space = log_space
        self.d_output = d_input * (1 + 2 * self.n_freqs)
        self.embed_fns = [lambda x: x]

        # Define the frequencies in linear or log scale
        if self.log_space:
            freq_bands = 2.**torch.linspace(0., self.n_freqs - 1, self.n_freqs)
        else:
            freq_bands = torch.linspace(2.**0., 2.**(self.n_freqs - 1), self.n_freqs)

        # Alternate sin and cos at each frequency
        for freq in freq_bands:
            self.embed_fns.append(lambda x, freq=freq: torch.sin(x * freq))
            self.embed_fns.append(lambda x, freq=freq: torch.cos(x * freq))

    def forward(
        self,
        x
    ) -> torch.Tensor:
        """
        Apply the positional encoding to the input.
        """
        return torch.concat([fn(x) for fn in self.embed_fns], dim=-1)
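As a quick shape check (a hypothetical toy usage, not part of the training code): an encoder with d_input=3 and n_freqs=10 maps each point to 3 * (1 + 2 * 10) = 63 dimensions.

encoder = PositionalEncoder(d_input=3, n_freqs=10, log_space=True)
pts = torch.rand(5, 3)     # five random 3D points
print(encoder(pts).shape)  # torch.Size([5, 63])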
4 The NeRF Model
Here we define the NeRF model, which consists mainly of a ModuleList of linear layers, together with nonlinear activation functions and residual (skip) connections. The model takes an optional viewing-direction input; if a direction dimension is provided at instantiation, the model structure changes accordingly.

(This implementation is based on Section 3 of the original paper, "NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis", and uses the same default settings.)

The code is as follows:
# NeRF model definition
class NeRF(nn.Module):
    """
    Neural Radiance Field module.
    """
    def __init__(
        self,
        d_input: int = 3,
        n_layers: int = 8,
        d_filter: int = 256,
        skip: Tuple[int] = (4,),
        d_viewdirs: Optional[int] = None
    ):
        super().__init__()
        self.d_input = d_input         # Input dimension
        self.skip = skip               # Residual (skip) connections
        self.act = nn.functional.relu  # Activation function
        self.d_viewdirs = d_viewdirs   # Viewing-direction dimension

        # Create the model's layers
        self.layers = nn.ModuleList(
            [nn.Linear(self.d_input, d_filter)] +
            [nn.Linear(d_filter + self.d_input, d_filter) if i in skip
             else nn.Linear(d_filter, d_filter) for i in range(n_layers - 1)]
        )

        # Bottleneck layers
        if self.d_viewdirs is not None:
            # If using view directions, split alpha and RGB
            self.alpha_out = nn.Linear(d_filter, 1)
            self.rgb_filters = nn.Linear(d_filter, d_filter)
            self.branch = nn.Linear(d_filter + self.d_viewdirs, d_filter // 2)
            self.output = nn.Linear(d_filter // 2, 3)
        else:
            # If not using view directions, use a simple output
            self.output = nn.Linear(d_filter, 4)

    def forward(
        self,
        x: torch.Tensor,
        viewdirs: Optional[torch.Tensor] = None
    ) -> torch.Tensor:
        r"""
        Forward pass with optional view directions.
        """
        # Check whether view directions were configured
        if self.d_viewdirs is None and viewdirs is not None:
            raise ValueError('Cannot input x_direction if d_viewdirs was not given.')

        # Run the layers before the bottleneck
        x_input = x
        for i, layer in enumerate(self.layers):
            x = self.act(layer(x))
            if i in self.skip:
                x = torch.cat([x, x_input], dim=-1)

        # Run the bottleneck
        if self.d_viewdirs is not None:
            # Split alpha from the network output
            alpha = self.alpha_out(x)

            # Pass the result through the RGB filters
            x = self.rgb_filters(x)
            x = torch.concat([x, viewdirs], dim=-1)
            x = self.act(self.branch(x))
            x = self.output(x)

            # Concatenate alpha back into the output
            x = torch.concat([x, alpha], dim=-1)
        else:
            # Simple output without view directions
            x = self.output(x)
        return x
5 Volume Rendering
Given the raw NeRF output above, we still need to turn it into an image. The rendering module takes a weighted sum of all samples along each pixel's ray to estimate that pixel's color, with each RGB sample weighted by its alpha value. A higher alpha indicates a higher probability that the sampled region is opaque, so points farther along the ray are more likely to be occluded; the cumulative product ensures those farther points are suppressed accordingly.
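This weighted sum is the quadrature form of the volume rendering integral from the paper. With density σ_i and color c_i at sample i, and δ_i = t_{i+1} − t_i the distance between adjacent samples along the ray:

Ĉ(r) = Σ_i T_i · α_i · c_i,   where α_i = 1 − exp(−σ_i · δ_i) and T_i = Π_{j<i} (1 − α_j)

The transmittance T_i is exactly what cumprod_exclusive computes below, and α_i is the alpha value mentioned above. The code is as follows: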
# Volume rendering
def cumprod_exclusive(
    tensor: torch.Tensor
) -> torch.Tensor:
    """
    (Courtesy of https://github.com/krrish94/nerf-pytorch)

    Mimics the functionality of tf.math.cumprod(..., exclusive=True).

    Args:
        tensor (torch.Tensor): Tensor whose cumprod (cumulative product, see
            `torch.cumprod`) along dim=-1 is to be computed.
    Returns:
        cumprod (torch.Tensor): cumprod of the tensor along dim=-1, mimicking
            tf.math.cumprod(..., exclusive=True) (see `tf.math.cumprod` for details).
    """
    # First compute the regular cumprod
    cumprod = torch.cumprod(tensor, -1)
    # Shift the products one step to the right along the last dim
    cumprod = torch.roll(cumprod, 1, -1)
    # Replace the first element with 1
    cumprod[..., 0] = 1.
    return cumprod
# Convert raw NeRF output into images
def raw2outputs(
    raw: torch.Tensor,
    z_vals: torch.Tensor,
    rays_d: torch.Tensor,
    raw_noise_std: float = 0.0,
    white_bkgd: bool = False
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Convert the raw NeRF output into an RGB map and other outputs.
    """
    # Differences between consecutive elements along `z_vals`.
    dists = z_vals[..., 1:] - z_vals[..., :-1]
    dists = torch.cat([dists, 1e10 * torch.ones_like(dists[..., :1])], dim=-1)

    # Multiply each distance by the norm of its ray direction to convert
    # to real-world distances (accounting for non-unit directions).
    dists = dists * torch.norm(rays_d[..., None, :], dim=-1)

    # Add noise to the model's density predictions. Can be used to regularize
    # the network during training (prevents floater artifacts).
    noise = 0.
    if raw_noise_std > 0.:
        noise = torch.randn(raw[..., 3].shape) * raw_noise_std

    # Predict the density of each sample along each ray. Higher values imply
    # higher likelihood of being absorbed at this point. [n_rays, n_samples]
    alpha = 1.0 - torch.exp(-nn.functional.relu(raw[..., 3] + noise) * dists)

    # Compute the weight for the RGB of each sample along each ray. The higher
    # the alpha, the more subsequent weights are suppressed. [n_rays, n_samples]
    weights = alpha * cumprod_exclusive(1. - alpha + 1e-10)

    # Compute the weighted RGB map.
    rgb = torch.sigmoid(raw[..., :3])  # [n_rays, n_samples, 3]
    rgb_map = torch.sum(weights[..., None] * rgb, dim=-2)  # [n_rays, 3]

    # Estimated depth map is the expected distance.
    depth_map = torch.sum(weights * z_vals, dim=-1)

    # Disparity map is inverse depth.
    disp_map = 1. / torch.max(1e-10 * torch.ones_like(depth_map),
                              depth_map / torch.sum(weights, -1))

    # Sum of weights along each ray.
    acc_map = torch.sum(weights, dim=-1)

    # To composite onto a white background, use the accumulated alpha map.
    if white_bkgd:
        rgb_map = rgb_map + (1. - acc_map[..., None])

    return rgb_map, depth_map, acc_map, weights
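Before moving on, a quick sanity check of cumprod_exclusive (a hypothetical toy example, not part of the pipeline): the exclusive cumulative product shifts the running product one step to the right and seeds the first entry with 1, which is exactly the transmittance accumulation T_i = Π_{j<i} (1 − α_j) described above.

t = torch.tensor([0.5, 0.5, 0.5])
print(torch.cumprod(t, -1))  # tensor([0.5000, 0.2500, 0.1250])
print(cumprod_exclusive(t))  # tensor([1.0000, 0.5000, 0.2500])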
6 Hierarchical Volume Sampling
In practice, occupied regions of a 3D scene are very sparse, so most points contribute little to the rendered image. It therefore pays to oversample the regions that actually contribute to the integral. Here, the normalized weights from the first set of samples are used to build a probability density function along each ray, and inverse transform sampling is then applied to that density to collect a second set of samples.
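In equation form, the coarse weights are normalized into a piecewise-constant PDF along the ray,

ŵ_i = w_i / Σ_j w_j

whose cumulative sum gives a CDF; drawing uniform samples u ∈ [0, 1] and mapping them through the inverse CDF concentrates the new samples where the weights, and hence the scene content, are. The code is as follows: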
# Sample from a probability density function
def sample_pdf(
    bins: torch.Tensor,
    weights: torch.Tensor,
    n_samples: int,
    perturb: bool = False
) -> torch.Tensor:
    """
    Apply inverse transform sampling to a weighted set of points.
    """
    # Normalize the weights to get a probability density function.
    pdf = (weights + 1e-5) / torch.sum(weights + 1e-5, -1, keepdim=True)  # [n_rays, weights.shape[-1]]

    # Convert the PDF into a cumulative distribution function.
    cdf = torch.cumsum(pdf, dim=-1)  # [n_rays, weights.shape[-1]]
    cdf = torch.concat([torch.zeros_like(cdf[..., :1]), cdf], dim=-1)  # [n_rays, weights.shape[-1] + 1]

    # Take sample positions to grab from the CDF. Linear when perturb == 0.
    if not perturb:
        u = torch.linspace(0., 1., n_samples, device=cdf.device)
        u = u.expand(list(cdf.shape[:-1]) + [n_samples])  # [n_rays, n_samples]
    else:
        u = torch.rand(list(cdf.shape[:-1]) + [n_samples], device=cdf.device)  # [n_rays, n_samples]

    # Find the indices along the CDF where the u values fall.
    u = u.contiguous()  # Returns a contiguous tensor with the same values.
    inds = torch.searchsorted(cdf, u, right=True)  # [n_rays, n_samples]

    # Clamp out-of-bounds indices.
    below = torch.clamp(inds - 1, min=0)
    above = torch.clamp(inds, max=cdf.shape[-1] - 1)
    inds_g = torch.stack([below, above], dim=-1)  # [n_rays, n_samples, 2]

    # Sample from the CDF and the corresponding bin centers.
    matched_shape = list(inds_g.shape[:-1]) + [cdf.shape[-1]]
    cdf_g = torch.gather(cdf.unsqueeze(-2).expand(matched_shape), dim=-1,
                         index=inds_g)
    bins_g = torch.gather(bins.unsqueeze(-2).expand(matched_shape), dim=-1,
                          index=inds_g)

    # Convert the samples to ray lengths.
    denom = (cdf_g[..., 1] - cdf_g[..., 0])
    denom = torch.where(denom < 1e-5, torch.ones_like(denom), denom)
    t = (u - cdf_g[..., 0]) / denom
    samples = bins_g[..., 0] + t * (bins_g[..., 1] - bins_g[..., 0])

    return samples  # [n_rays, n_samples]
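One note before moving on: the forward pass in the next section calls a sample_hierarchical helper that wraps sample_pdf, which the original text does not show. Below is a minimal sketch consistent with the reference code this walkthrough follows: it uses the midpoints of the coarse z-values as bins, drops the endpoint weights to match, and merges the new samples with the coarse ones.

def sample_hierarchical(
    rays_o: torch.Tensor,
    rays_d: torch.Tensor,
    z_vals: torch.Tensor,
    weights: torch.Tensor,
    n_samples: int,
    perturb: bool = False
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
    """
    Apply hierarchical sampling to the rays (sketch based on the reference code).
    """
    # Draw new z-samples from the PDF defined by the coarse weights, using the
    # midpoints of the coarse z-values as bins.
    z_vals_mid = .5 * (z_vals[..., 1:] + z_vals[..., :-1])
    new_z_samples = sample_pdf(z_vals_mid, weights[..., 1:-1], n_samples,
                               perturb=perturb)
    new_z_samples = new_z_samples.detach()  # No gradients through the sampling

    # Merge the coarse and fine samples, then recompute the 3D query points.
    z_vals_combined, _ = torch.sort(torch.cat([z_vals, new_z_samples], dim=-1),
                                    dim=-1)
    pts = rays_o[..., None, :] + rays_d[..., None, :] * z_vals_combined[..., :, None]
    return pts, z_vals_combined, new_z_samples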
7 The Full Forward Pass
At this point, everything above can be combined to compute a single forward pass through the model.

Because of potential memory issues, the forward pass is computed in "chunks", which are then aggregated across a single batch. Gradients are only propagated after the whole batch has been processed, hence the distinction between "chunks" and "batches". Chunking matters especially in memory-constrained environments with fewer resources than those cited in the original paper. The code is as follows:
def get_chunks(
    inputs: torch.Tensor,
    chunksize: int = 2**15
) -> List[torch.Tensor]:
    """
    Split the inputs into chunks.
    """
    return [inputs[i:i + chunksize] for i in range(0, inputs.shape[0], chunksize)]

def prepare_chunks(
    points: torch.Tensor,
    encoding_function: Callable[[torch.Tensor], torch.Tensor],
    chunksize: int = 2**15
) -> List[torch.Tensor]:
    """
    Encode and chunk points to prepare them for the NeRF model.
    """
    points = points.reshape((-1, 3))
    points = encoding_function(points)
    points = get_chunks(points, chunksize=chunksize)
    return points

def prepare_viewdirs_chunks(
    points: torch.Tensor,
    rays_d: torch.Tensor,
    encoding_function: Callable[[torch.Tensor], torch.Tensor],
    chunksize: int = 2**15
) -> List[torch.Tensor]:
    r"""
    Encode and chunk view directions to prepare them for the NeRF model.
    """
    viewdirs = rays_d / torch.norm(rays_d, dim=-1, keepdim=True)
    viewdirs = viewdirs[:, None, ...].expand(points.shape).reshape((-1, 3))
    viewdirs = encoding_function(viewdirs)
    viewdirs = get_chunks(viewdirs, chunksize=chunksize)
    return viewdirs
def nerf_forward(
    rays_o: torch.Tensor,
    rays_d: torch.Tensor,
    near: float,
    far: float,
    encoding_fn: Callable[[torch.Tensor], torch.Tensor],
    coarse_model: nn.Module,
    kwargs_sample_stratified: dict = None,
    n_samples_hierarchical: int = 0,
    kwargs_sample_hierarchical: dict = None,
    fine_model: Optional[nn.Module] = None,
    viewdirs_encoding_fn: Optional[Callable[[torch.Tensor], torch.Tensor]] = None,
    chunksize: int = 2**15
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor, dict]:
    """
    Compute one forward pass through the model(s).
    """
    # Set up parameters
    if kwargs_sample_stratified is None:
        kwargs_sample_stratified = {}
    if kwargs_sample_hierarchical is None:
        kwargs_sample_hierarchical = {}

    # Sample query points along each ray.
    query_points, z_vals = sample_stratified(
        rays_o, rays_d, near, far, **kwargs_sample_stratified)

    # Prepare batches.
    batches = prepare_chunks(query_points, encoding_fn, chunksize=chunksize)
    if viewdirs_encoding_fn is not None:
        batches_viewdirs = prepare_viewdirs_chunks(query_points, rays_d,
                                                   viewdirs_encoding_fn,
                                                   chunksize=chunksize)
    else:
        batches_viewdirs = [None] * len(batches)

    # Coarse model pass.
    predictions = []
    for batch, batch_viewdirs in zip(batches, batches_viewdirs):
        predictions.append(coarse_model(batch, viewdirs=batch_viewdirs))
    raw = torch.cat(predictions, dim=0)
    raw = raw.reshape(list(query_points.shape[:2]) + [raw.shape[-1]])

    # Perform differentiable volume rendering to re-synthesize the RGB image.
    rgb_map, depth_map, acc_map, weights = raw2outputs(raw, z_vals, rays_d)
    outputs = {
        'z_vals_stratified': z_vals
    }

    if n_samples_hierarchical > 0:
        # Save the previous outputs to return.
        rgb_map_0, depth_map_0, acc_map_0 = rgb_map, depth_map, acc_map

        # Apply hierarchical sampling for the fine query points.
        query_points, z_vals_combined, z_hierarch = sample_hierarchical(
            rays_o, rays_d, z_vals, weights, n_samples_hierarchical,
            **kwargs_sample_hierarchical)

        # Prepare the inputs as before.
        batches = prepare_chunks(query_points, encoding_fn, chunksize=chunksize)
        if viewdirs_encoding_fn is not None:
            batches_viewdirs = prepare_viewdirs_chunks(query_points, rays_d,
                                                       viewdirs_encoding_fn,
                                                       chunksize=chunksize)
        else:
            batches_viewdirs = [None] * len(batches)

        # Forward pass of the new samples through the fine model.
        fine_model = fine_model if fine_model is not None else coarse_model
        predictions = []
        for batch, batch_viewdirs in zip(batches, batches_viewdirs):
            predictions.append(fine_model(batch, viewdirs=batch_viewdirs))
        raw = torch.cat(predictions, dim=0)
        raw = raw.reshape(list(query_points.shape[:2]) + [raw.shape[-1]])

        # Perform differentiable volume rendering to re-synthesize the RGB image.
        rgb_map, depth_map, acc_map, weights = raw2outputs(raw, z_vals_combined, rays_d)

        # Store the coarse outputs.
        outputs['z_vals_hierarchical'] = z_hierarch
        outputs['rgb_map_0'] = rgb_map_0
        outputs['depth_map_0'] = depth_map_0
        outputs['acc_map_0'] = acc_map_0

    # Store the outputs.
    outputs['rgb_map'] = rgb_map
    outputs['depth_map'] = depth_map
    outputs['acc_map'] = acc_map
    outputs['weights'] = weights
    return outputs
At this point, we have almost every module needed to train the model. Now let's set up a simple training procedure, defining the hyperparameters and helper functions, and then train the model.
7.1 Hyperparameters
All training hyperparameters are set here. Defaults are taken from the original paper except where compute is a constraint; in those cases the values used here are reasonable defaults for a limited budget.
# Encoders
d_input = 3            # Input dimension
n_freqs = 10           # Number of encoding functions for sample points
log_space = True       # If set, frequencies are scaled in log space
use_viewdirs = True    # If set, use view directions as input
n_freqs_views = 4      # Number of encoding functions for view directions

# Sampling strategy
n_samples = 64         # Number of spatial samples per ray
perturb = True         # If set, apply noise to the sample positions
inverse_depth = False  # If set, sample points linearly in inverse depth

# Model
d_filter = 128         # Width of the linear layers
n_layers = 2           # Number of layers in the network bottleneck
skip = []              # Layers at which to apply the input residual
use_fine_model = True  # If set, create a fine model
d_filter_fine = 128    # Width of the fine network's linear layers
n_layers_fine = 6      # Number of layers in the fine network bottleneck

# Hierarchical sampling
n_samples_hierarchical = 64   # Number of samples per ray
perturb_hierarchical = False  # If set, apply noise to the sample positions

# Optimizer
lr = 5e-4  # Learning rate

# Training
n_iters = 10000
batch_size = 2**14         # Number of rays per gradient step (power of 2)
one_image_per_step = True  # One image per gradient step (disables batching)
chunksize = 2**14          # Modify as needed to fit in GPU memory
center_crop = True         # Crop the center of the image
center_crop_iters = 50     # Stop cropping the center after this many iterations
display_rate = 25          # Display the test output every X iterations

# Early stopping
warmup_iters = 100         # Number of warmup iterations
warmup_min_fitness = 10.0  # Minimum PSNR at warmup_iters required to continue training
n_restarts = 10            # Number of times to restart if training stalls

# Bundle the function arguments so they can be passed at once.
kwargs_sample_stratified = {
    'n_samples': n_samples,
    'perturb': perturb,
    'inverse_depth': inverse_depth
}
kwargs_sample_hierarchical = {
    'perturb': perturb
}
7.2 Training Classes and Functions
This step creates a few helper functions for training. NeRF is prone to local minima, in which training quickly stalls and produces blank outputs. When necessary, EarlyStopping is used to restart the training.
# Sample-plotting function
def plot_samples(
    z_vals: torch.Tensor,
    z_hierarch: Optional[torch.Tensor] = None,
    ax: Optional[np.ndarray] = None):
    r"""
    Plot stratified and (optionally) hierarchical samples.
    """
    y_vals = 1 + np.zeros_like(z_vals)
    if ax is None:
        ax = plt.subplot()
    ax.plot(z_vals, y_vals, 'b-o')
    if z_hierarch is not None:
        y_hierarch = np.zeros_like(z_hierarch)
        ax.plot(z_hierarch, y_hierarch, 'r-o')
    ax.set_ylim([-1, 2])
    ax.set_title('Stratified Samples (blue) and Hierarchical Samples (red)')
    ax.axes.yaxis.set_visible(False)
    ax.grid(True)
    return ax

def crop_center(
    img: torch.Tensor,
    frac: float = 0.5
) -> torch.Tensor:
    r"""
    Crop the center square out of an image.
    """
    h_offset = round(img.shape[0] * (frac / 2))
    w_offset = round(img.shape[1] * (frac / 2))
    return img[h_offset:-h_offset, w_offset:-w_offset]

class EarlyStopping:
    r"""
    Early stopping helper based on a fitness criterion.
    """
    def __init__(
        self,
        patience: int = 30,
        margin: float = 1e-4
    ):
        self.best_fitness = 0.0
        self.best_iter = 0
        self.margin = margin
        self.patience = patience or float('inf')  # Iterations to wait after fitness stops improving

    def __call__(
        self,
        iter: int,
        fitness: float
    ):
        r"""
        Check whether the stopping criterion is met.
        """
        if (fitness - self.best_fitness) > self.margin:
            self.best_iter = iter
            self.best_fitness = fitness
        delta = iter - self.best_iter
        stop = delta >= self.patience  # Stop training once patience is exceeded
        return stop
def init_models():
    r"""
    Initialize the models, encoders, and optimizer for NeRF training.
    """
    # Encoder
    encoder = PositionalEncoder(d_input, n_freqs, log_space=log_space)
    encode = lambda x: encoder(x)

    # View-direction encoder
    if use_viewdirs:
        encoder_viewdirs = PositionalEncoder(d_input, n_freqs_views,
                                             log_space=log_space)
        encode_viewdirs = lambda x: encoder_viewdirs(x)
        d_viewdirs = encoder_viewdirs.d_output
    else:
        encode_viewdirs = None
        d_viewdirs = None

    # Models
    model = NeRF(encoder.d_output, n_layers=n_layers, d_filter=d_filter, skip=skip,
                 d_viewdirs=d_viewdirs)
    model.to(device)
    model_params = list(model.parameters())
    if use_fine_model:
        fine_model = NeRF(encoder.d_output, n_layers=n_layers, d_filter=d_filter, skip=skip,
                          d_viewdirs=d_viewdirs)
        fine_model.to(device)
        model_params = model_params + list(fine_model.parameters())
    else:
        fine_model = None

    # Optimizer
    optimizer = torch.optim.Adam(model_params, lr=lr)

    # Early stopping
    warmup_stopper = EarlyStopping(patience=50)

    return model, fine_model, encode, encode_viewdirs, optimizer, warmup_stopper
7.3 Training Loop

Below is the training loop itself:
def train():
    r"""
    Launch NeRF training.
    """
    # Shuffle rays across all images.
    if not one_image_per_step:
        height, width = images.shape[1:3]
        all_rays = torch.stack([torch.stack(get_rays(height, width, focal, p), 0)
                                for p in poses[:n_training]], 0)
        rays_rgb = torch.cat([all_rays, images[:, None]], 1)
        rays_rgb = torch.permute(rays_rgb, [0, 2, 3, 1, 4])
        rays_rgb = rays_rgb.reshape([-1, 3, 3])
        rays_rgb = rays_rgb.type(torch.float32)
        rays_rgb = rays_rgb[torch.randperm(rays_rgb.shape[0])]
        i_batch = 0

    train_psnrs = []
    val_psnrs = []
    iternums = []
    for i in trange(n_iters):
        model.train()

        if one_image_per_step:
            # Randomly pick one image as the target.
            target_img_idx = np.random.randint(images.shape[0])
            target_img = images[target_img_idx].to(device)
            if center_crop and i < center_crop_iters:
                target_img = crop_center(target_img)
            height, width = target_img.shape[:2]
            target_pose = poses[target_img_idx].to(device)
            rays_o, rays_d = get_rays(height, width, focal, target_pose)
            rays_o = rays_o.reshape([-1, 3])
            rays_d = rays_d.reshape([-1, 3])
        else:
            # Sample randomly over all images.
            batch = rays_rgb[i_batch:i_batch + batch_size]
            batch = torch.transpose(batch, 0, 1)
            rays_o, rays_d, target_img = batch
            height, width = target_img.shape[:2]
            i_batch += batch_size
            # Shuffle after one epoch
            if i_batch >= rays_rgb.shape[0]:
                rays_rgb = rays_rgb[torch.randperm(rays_rgb.shape[0])]
                i_batch = 0
        target_img = target_img.reshape([-1, 3])

        # Run one iteration of TinyNeRF to get the rendered RGB image.
        outputs = nerf_forward(rays_o, rays_d,
                               near, far, encode, model,
                               kwargs_sample_stratified=kwargs_sample_stratified,
                               n_samples_hierarchical=n_samples_hierarchical,
                               kwargs_sample_hierarchical=kwargs_sample_hierarchical,
                               fine_model=fine_model,
                               viewdirs_encoding_fn=encode_viewdirs,
                               chunksize=chunksize)

        # Check for any numerical issues.
        for k, v in outputs.items():
            if torch.isnan(v).any():
                print(f"! [Numerical Alert] {k} contains NaN.")
            if torch.isinf(v).any():
                print(f"! [Numerical Alert] {k} contains Inf.")

        # Backpropagation
        rgb_predicted = outputs['rgb_map']
        loss = torch.nn.functional.mse_loss(rgb_predicted, target_img)
        loss.backward()
        optimizer.step()
        optimizer.zero_grad()
        psnr = -10. * torch.log10(loss)
        train_psnrs.append(psnr.item())

        # Evaluate the test image at the given display rate.
        if i % display_rate == 0:
            model.eval()
            height, width = testimg.shape[:2]
            rays_o, rays_d = get_rays(height, width, focal, testpose)
            rays_o = rays_o.reshape([-1, 3])
            rays_d = rays_d.reshape([-1, 3])
            outputs = nerf_forward(rays_o, rays_d,
                                   near, far, encode, model,
                                   kwargs_sample_stratified=kwargs_sample_stratified,
                                   n_samples_hierarchical=n_samples_hierarchical,
                                   kwargs_sample_hierarchical=kwargs_sample_hierarchical,
                                   fine_model=fine_model,
                                   viewdirs_encoding_fn=encode_viewdirs,
                                   chunksize=chunksize)

            rgb_predicted = outputs['rgb_map']
            loss = torch.nn.functional.mse_loss(rgb_predicted, testimg.reshape(-1, 3))
            print("Loss:", loss.item())
            val_psnr = -10. * torch.log10(loss)
            val_psnrs.append(val_psnr.item())
            iternums.append(i)

            # Plot example outputs
            fig, ax = plt.subplots(1, 4, figsize=(24, 4), gridspec_kw={'width_ratios': [1, 1, 1, 3]})
            ax[0].imshow(rgb_predicted.reshape([height, width, 3]).detach().cpu().numpy())
            ax[0].set_title(f'Iteration: {i}')
            ax[1].imshow(testimg.detach().cpu().numpy())
            ax[1].set_title(f'Target')
            ax[2].plot(range(0, i + 1), train_psnrs, 'r')
            ax[2].plot(iternums, val_psnrs, 'b')
            ax[2].set_title('PSNR (train=red, val=blue)')
            z_vals_strat = outputs['z_vals_stratified'].view((-1, n_samples))
            z_sample_strat = z_vals_strat[z_vals_strat.shape[0] // 2].detach().cpu().numpy()
            if 'z_vals_hierarchical' in outputs:
                z_vals_hierarch = outputs['z_vals_hierarchical'].view((-1, n_samples_hierarchical))
                z_sample_hierarch = z_vals_hierarch[z_vals_hierarch.shape[0] // 2].detach().cpu().numpy()
            else:
                z_sample_hierarch = None
            _ = plot_samples(z_sample_strat, z_sample_hierarch, ax=ax[3])
            ax[3].margins(0)
            plt.show()

        # Check PSNR for issues and stop the run if any are found.
        if i == warmup_iters - 1:
            if val_psnr < warmup_min_fitness:
                print(f'Val PSNR {val_psnr} below warmup_min_fitness {warmup_min_fitness}. Stopping...')
                return False, train_psnrs, val_psnrs
        elif i < warmup_iters:
            if warmup_stopper is not None and warmup_stopper(i, psnr):
                print(f'Train PSNR flatlined at {psnr} for {warmup_stopper.patience} iters. Stopping...')
                return False, train_psnrs, val_psnrs

    return True, train_psnrs, val_psnrs
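The original text does not show the call that kicks off training. A minimal driver consistent with the n_restarts and warmup hyperparameters above, and with the reference code this walkthrough follows, would be:

# Run training session(s), restarting whenever the warmup check fails.
for _ in range(n_restarts):
    model, fine_model, encode, encode_viewdirs, optimizer, warmup_stopper = init_models()
    success, train_psnrs, val_psnrs = train()
    if success and val_psnrs[-1] >= warmup_min_fitness:
        print('Training successful!')
        break
print('Done!')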
The final results are shown in the figure below.
References:

[1] https://www.matthewtancik.com/nerf
[2] http://cseweb.ucsd.edu/~viscomp/projects/LF/papers/ECCV20/nerf/tiny_nerf_data.npz
[3] https://towardsdatascience.com/its-nerf-from-nothing-build-a-vanilla-nerf-with-pytorch-7846e4c45666
[4] https://medium.com/@rparikshat1998/nerf-from-scratch-fe21c08b145d