For SIGGRAPH, KDAB has been working on a new Qt 3D based demo. We decided that instead of using C++, it would be interesting to try out PySide2 and harness Python to drive the application.
The idea behind this demo has to do with data acquisition of a vehicle's surrounding environment. Once the data is acquired, it can be processed and used to display a 3D scene.
The application is structured in two main parts. On the one hand, we use Qt Quick and the Qt 3D QML API to declare the UI and instantiate the 3D scene. On the other hand, we use Python for the backend logic, the data processing and models, and the definition of the custom Qt 3D mesh elements we need in the UI.
Since this is a demo, we simulate the data that we acquire rather than rely on real data acquisition through sensors.
We simulate only two things, and the information for both is obtained by looping around a generated set of road sections.
To define a fake road track, we've used cubic Bézier curves, each curve defining one road section.
A cubic Bézier curve is defined by 2 end points and 2 control points. This allows for a rather compact description of the road sections we want our vehicle to travel on.
Using this tool (http://www.victoriakirst.com/beziertool/), we generated the Bézier curves with these values:
bezier_curves = [
[(318, 84), (479, 18), (470, 233), (472, 257)],
[(472, 257), (473, 272), (494, 459), (419, 426)],
[(419, 426), (397, 417), (354, 390), (324, 396)],
[(324, 396), (309, 399), (217, 416), (202, 415)],
[(202, 415), (157, 412), (116, 278), (114, 263)],
[(114, 263), (119, 219), (151, 190), (182, 192)],
[(182, 192), (277, 192), (216, 128), (318, 84)]
]
Notice how each Bézier curve starts at the last point of the previous curve. That's because we want no discontinuity between our road sections.
On each curve, we sample 250 subdivisions to generate raw position data. Given we have 7 curves, that gives us a total of 1750 positions.
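The sampling step can be sketched in plain Python. This is our own illustration, not the demo's actual code; `cubic_bezier` and `sample_road` are names we made up, while the 250 subdivisions and 7 curves match the numbers above. Sampling with `t` stopping just short of 1.0 avoids duplicating the shared end/start points between consecutive curves, which is what gives exactly 7 × 250 = 1750 positions.

```python
def cubic_bezier(p0, p1, p2, p3, t):
    """Evaluate a cubic Bezier curve at parameter t in [0, 1]."""
    mt = 1.0 - t
    x = mt**3 * p0[0] + 3 * mt**2 * t * p1[0] + 3 * mt * t**2 * p2[0] + t**3 * p3[0]
    y = mt**3 * p0[1] + 3 * mt**2 * t * p1[1] + 3 * mt * t**2 * p2[1] + t**3 * p3[1]
    return (x, y)

def sample_road(bezier_curves, subdivisions=250):
    """Sample each road section; 7 curves x 250 subdivisions -> 1750 positions."""
    positions = []
    for p0, p1, p2, p3 in bezier_curves:
        for i in range(subdivisions):
            t = i / subdivisions  # stops before 1.0: the next curve starts there
            positions.append(cubic_bezier(p0, p1, p2, p3, t))
    return positions
```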
In real life our vehicle is only aware of the immediately surrounding environment. In our case, we've decided that would be about 100 positions in front of the vehicle and 50 positions at the rear.
Every 16 ms, we increase a global index (which wraps around from 0 to 1749) and select 150 entries starting at that index. From these 150 positions we extrude 4 lines (to make 3 road lanes).
The 50th entry we've selected is where we assume our vehicle is.
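The per-tick selection might look like the following sketch. The names (`select_window`, `tick`, the `state` dict) are our own illustration, not the demo's code; the 1750 positions, the 150-entry window, and the vehicle at the 50th entry come from the article. The modulo makes the window wrap around the track so the loop never ends.

```python
TOTAL_POSITIONS = 1750   # 7 curves x 250 subdivisions
WINDOW = 150             # ~50 positions behind the vehicle + ~100 ahead

def select_window(positions, index):
    """Return the 150 positions around the vehicle, wrapping at the track end."""
    return [positions[(index + i) % TOTAL_POSITIONS] for i in range(WINDOW)]

def tick(state, positions):
    """Called every 16 ms (~60 fps): advance the index and refresh the window."""
    state["index"] = (state["index"] + 1) % TOTAL_POSITIONS
    window = select_window(positions, state["index"])
    vehicle_position = window[49]   # the 50th selected entry is the vehicle
    return window, vehicle_position
```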
In the 3D view we assume the vehicle is placed at (0, 0, 0). The camera is placed slightly behind the vehicle, with its view center on the vehicle. So if positions[49] is where our vehicle actually is in the real world, we need to translate all our positions by -positions[49]. We also want our vehicle and camera to rotate as we travel along the curves. We know the camera looks toward -Z (0, 0, -1), so we can compute a vector u (vehicle position - road section start) and find the angle between u and -Z using the dot product.
In code this translates to simply creating a transform matrix:

    from math import acos, degrees
    from PySide2.QtGui import QMatrix4x4, QVector3D

    road_start_position = self.m_points_at_position[0]
    screen_origin_position = self.m_points_at_position[50]

    def compute_angle_between_road_section_and_z():
        # We want to look toward -Z
        target_dir = QVector3D(0.0, 0.0, -1.0)
        # Our current direction along the road section
        current_dir = (screen_origin_position - road_start_position).normalized()
        # The angle between our two vectors is acos(dot(current_dir, target_dir))
        dot = QVector3D.dotProduct(target_dir, current_dir)
        # Clamp against floating-point drift before calling acos
        return acos(max(-1.0, min(1.0, dot)))

    rot_angle = compute_angle_between_road_section_and_z()
    self.m_road_to_world_matrix = QMatrix4x4()
    # Rotate by rot_angle around +Y
    self.m_road_to_world_matrix.rotate(degrees(rot_angle), QVector3D(0.0, 1.0, 0.0))
    # Translate points back to the origin
    self.m_road_to_world_matrix.translate(-screen_origin_position)
Then, it's just a matter of transforming all these positions using the transformation matrix.
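For illustration, here is a pure-Python equivalent of what that matrix does to each point (the demo itself uses QMatrix4x4; the helper name here is ours). Note the order: because `rotate()` and then `translate()` each post-multiply the matrix, the translation is applied to the point first, then the rotation.

```python
from math import cos, sin

def road_to_world(point, origin, rot_angle):
    """Shift origin to (0, 0, 0), then rotate by rot_angle around +Y."""
    x, y, z = point
    ox, oy, oz = origin
    # Translation is applied to the point first (post-multiplied matrix) ...
    tx, ty, tz = x - ox, y - oy, z - oz
    # ... then the rotation about +Y: [c 0 s; 0 1 0; -s 0 c]
    c, s = cos(rot_angle), sin(rot_angle)
    return (c * tx + s * tz, ty, -s * tx + c * tz)
```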
To render the road lines, we have created a new Qt 3D QGeometry subclass.
The Python backend generates new buffer data for the road every frame, based on the 150 transformed positions that have been computed. Basically, for every 2 consecutive positions, a quad made up of 2 triangles is generated to form one segment of a road line. This process is repeated 4 times, with an offset on the x-axis for each road line, and in turn repeated for all 150 positions, so that we have quads for each position and each line, making up our 3 road lanes.
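That quad generation can be sketched like this. This is our own illustration, not the demo's code, and the line width and lane width values are made-up placeholders (the article doesn't give them):

```python
def quads_for_line(positions, x_offset, width=0.2):
    """Build triangles for one road line from consecutive (x, y, z) positions."""
    vertices = []
    for (x0, y0, z0), (x1, y1, z1) in zip(positions, positions[1:]):
        # Left/right edge of the quad at each of the two positions
        a = (x0 + x_offset - width / 2.0, y0, z0)
        b = (x0 + x_offset + width / 2.0, y0, z0)
        c = (x1 + x_offset - width / 2.0, y1, z1)
        d = (x1 + x_offset + width / 2.0, y1, z1)
        vertices += [a, b, c,  b, d, c]   # two triangles per quad
    return vertices

def road_vertices(positions, lane_width=1.0):
    """Four lines (three lanes), each offset on the x-axis."""
    offsets = [-1.5 * lane_width, -0.5 * lane_width,
               0.5 * lane_width, 1.5 * lane_width]
    return [v for off in offsets for v in quads_for_line(positions, off)]
```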
We just upload these buffers to the GPU by using a Qt 3D QBuffer and setting its data property.
    from PySide2.QtCore import Property, Signal, QByteArray
    from PySide2.Qt3DCore import Qt3DCore
    from PySide2.Qt3DRender import Qt3DRender
    from array import array

    class RoadLineGeometry(Qt3DRender.QGeometry):
        def __init__(self, parent=None):
            Qt3DRender.QGeometry.__init__(self, parent)
            self.m_position_buffer = Qt3DRender.QBuffer(self)
            self.m_position_buffer.setUsage(Qt3DRender.QBuffer.StaticDraw)
            self.m_position_attribute = Qt3DRender.QAttribute(self)
            self.m_position_attribute.setAttributeType(Qt3DRender.QAttribute.VertexAttribute)
            self.m_position_attribute.setDataType(Qt3DRender.QAttribute.Float)
            self.m_position_attribute.setDataSize(3)
            self.m_position_attribute.setName(Qt3DRender.QAttribute.defaultPositionAttributeName())
            self.m_position_attribute.setBuffer(self.m_position_buffer)
            self.addAttribute(self.m_position_attribute)

        def update(self, data, width):
            # Data is a QByteArray of floats packed as vec3
            float_data = array('f', data.data())
            # For each position, emit a left and a right point, width apart on x
            transformed_point = [v for i in range(0, len(float_data), 3)
                                 for v in [float_data[i] - width / 2.0, 0.0, float_data[i + 2],
                                           float_data[i] + width / 2.0, 0.0, float_data[i + 2]]]
            self.m_position_buffer.setData(QByteArray(array('f', transformed_point).tobytes()))
            self.m_position_attribute.setCount(len(transformed_point) // 3)
Note: in Python the QGeometry::geometryFactory API is unavailable, meaning we need to update our QBuffer data directly from the frontend.
As for the bike, it is a .obj model we load and then scale and rotate with a transformation matrix. The scale and rotations are updated as we move through the track so that the bike always aligns with the road.
To load the geometry, a custom .obj loader was written. Regular wireframing techniques usually display triangles if the mesh was exported as triangles (which was the case for us). Our custom loader works around that by analyzing faces described in the .obj file and generating lines to match the faces (we have square faces in our case).
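The core idea of that loader can be sketched as follows. This is our own illustration, not the demo's loader: it reads the `f` (face) records of an .obj file and emits each face's edges as line segments, so a square face yields 4 lines rather than the 5 you'd get from its triangulated form. Deduplicating shared edges keeps the wireframe from drawing interior lines twice.

```python
def face_edges(obj_text):
    """Return sorted (start, end) vertex-index pairs for every face edge."""
    edges = set()
    for line in obj_text.splitlines():
        if not line.startswith("f "):
            continue
        # 'f v/vt/vn ...' -> keep only the vertex index (1-based in .obj)
        indices = [int(tok.split("/")[0]) for tok in line.split()[1:]]
        for i in range(len(indices)):
            a, b = indices[i], indices[(i + 1) % len(indices)]
            edges.add((min(a, b), max(a, b)))   # dedupe edges shared by faces
    return sorted(edges)
```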
In addition, the bike uses a special FrameGraph. I won't describe it in detail, but the Tron-like appearance is simply a result of the Bloom effect this FrameGraph implements.
Using Python and PySide2 instead of C++ to create a Qt 3D-based application has quite a few advantages, but also a couple of disadvantages.
Overall it was an enjoyable experience and with the bindings, Python is a real alternative to C++ for most of what regular users might wish to do with Qt 3D.
You can download Qt for Python here.
6 Comments
26 - Jul - 2018
GIto
Impressive work! Could you upload the video file for this?
27 - Jul - 2018
David Murphy
"Using this tool, we generated the bezier curves with these values:" — I may have missed something, but what did you use for this?
27 - Jul - 2018
Paul Lemire
Sorry, the link on "this" might not have worked. Full link is http://www.victoriakirst.com/beziertool/
23 - Mar - 2021
Nigel Brown
Where can we find the source to this demo?
19 - May - 2021
Paul Lemire
Sorry, we usually don't share the source code of our demos.
1 - Jan - 2024
Juliusz Kaczmarek
Why is that, if I may ask? You already provide snippets of code in the above article. Qt3d for Python documentation is so poor - there's never too many examples...