# Flitter Kinect

This package provides a plugin for using a Kinect v2 device in Flitter. It makes use of the `freenect2` Python package, which requires you to install the `libfreenect2` library. This is left as an exercise for the reader.

The plugin scans for the presence of a Kinect device before trying to connect to it, and tries to deal gracefully with the device being unplugged while in use. Only a single attached device is supported.

The additional nodes provided by this plugin are:

## `!window`/`!offscreen` tree nodes

### `!kinect`

This provides access to the raw frames from the Kinect as an image. In addition to the standard attributes (`size`, etc.), it supports the following:

- `output=` [ `:color` | `:depth` | `:registered` | `:combined` ]

  Whether to output the raw frame from the color camera, the raw frame from the depth camera, the registered color image or a combined image. The default is `:combined`.

- `flip_x=` [ `true` | `false` ]

  Whether to flip the image horizontally. Default is `false`.

- `flip_y=` [ `true` | `false` ]

  Whether to flip the image vertically. Default is `false`.

- `near=` *DISTANCE*

  The near time-of-flight clip sphere of the depth camera, in metres. Distances smaller than this will be considered to be invalid. Default is `0.5`.

- `far=` *DISTANCE*

  The far time-of-flight clip sphere of the depth camera, in metres. Distances larger than this will be considered to be invalid. Default is `4.5`.

- `near_value=` *VALUE*

  The output channel value to use for distances at `near`. Default is `1`.

- `far_value=` *VALUE*

  The output channel value to use for distances at `far`. Default is `0`.

- `invalid_value=` *VALUE*

  The value to use for the depth channel if the distance is nearer than `near` or further than `far`. Default is `0`.

In `:depth` output mode, the result will be a 512x424 image with each of the RGB channels set to the distance through that pixel and the A channel set to `1`. Distances in the range `near` to `far` will be mapped linearly to grey values between `near_value` and `far_value`, with the grey value being `invalid_value` for distances outside of that range.
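In other words, a valid distance `d` maps to the grey value `near_value + (far_value - near_value) * (d - near) / (far - near)`.
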
In `:color` output mode, the result image will be the 1920x1080 color frame as received from the Kinect visible light camera.

For `:registered` or `:combined` output, the color image will be cropped and aligned to the undistorted depth camera's view. With `:combined`, the A channel will contain the depth value, as described above. The RGB channels will not be premultiplied by this value (it's not a real alpha). With `:registered`, the A channel will be `1`.

The `!kinect` window node can be used multiple times in a view without problem. Each will show data from the same device (see the `monitor.fl` example).

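As a rough sketch of how this might look in a Flitter program (the window size and specific values below are purely illustrative, not defaults of the plugin):

```
-- Show the depth camera image in a window, mirrored horizontally, with
-- everything between 0.5m and 3m mapped onto a white-to-black ramp.
!window size=512;424
    !kinect output=:depth flip_x=true near=0.5 far=3
```
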
## `!canvas3d` model nodes

### `!kinect`

This provides access to the output of the depth camera as a live 3D surface.
The surface is constructed from the camera's point of view with the camera at
the origin and the Z axis pointing towards the camera, so the entire surface
exists on the negative-Z side of the origin, with normals (/windings) on the
camera side of the surface. There are no back faces, so the surface is
invisible from the far side (unless inverted with `invert=true`). The model
units are in metres. Invalid depth values will translate to holes in the
surface. The model will be automatically updated as new depth frames are
processed (up to 30fps).
The node supports the following attributes:
- `average=` *NFRAMES*

  The depth camera output is pretty noisy. Set this to average together the last *NFRAMES*. A value of `3` is pretty decent, but any higher will cause visible spacetime smearing of any moving objects. The default is `1`, i.e., do no averaging.

- `tear=` *DISTANCE*

  Set to a difference in depth (in metres) at which parts of the surface will be torn apart instead of joined. This is useful to differentiate near objects from far ones. The default is `0`, which means to not tear the surface.

- `near=` *DISTANCE*

  A near Z-axis clip-plane, measured in (positive) metres from the camera. Points closer than this will be considered invalid. Default is `0.5`.

- `far=` *DISTANCE*

  A far Z-axis clip-plane, measured in (positive) metres from the camera. Points further than this will be considered invalid. Default is `4.5`.

Multiple instances of the `!kinect` model – with different settings – may be used in a scene/program (along with instances of the `!kinect` window node). They will all use the same underlying data from the device.

The surface has UV coordinates matching the `:registered` color output of the camera (as described above) and therefore the color camera output can be texture mapped onto the surface (see the `mesh.fl` example).

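As a rough sketch of the model node in use (the `!canvas3d`, `!light` and camera attributes here are standard Flitter rather than part of this plugin, and the specific values are purely illustrative; see the `mesh.fl` example for a complete, texture-mapped program):

```
-- View the live depth surface from the Kinect's position at the origin,
-- looking down the negative Z axis, averaging a few frames to reduce noise
-- and tearing the surface apart across depth steps bigger than 20cm.
!window size=1280;720
    !canvas3d viewpoint=0;0;0 focus=0;0;-1
        !light color=1 direction=0;0;-1
        !kinect average=3 tear=0.2 near=0.5 far=3
```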