Trusted Software Excellence across Desktop and Embedded
Take a glance at the areas of expertise where KDAB excels, ranging from swift troubleshooting, ongoing consulting and training to multi-year, large-scale software development projects.
Find out why customers from innovative industries - including Medical, Biotech, Science, Renewable Energy, Transportation, Mobility, Aviation, Automation, Electronics, Agriculture and Defense - rely on our extensive expertise.
High-quality Embedded Engineering across the Stack
To successfully develop an embedded device that meets your expectations regarding quality, budget and time to market, all parts of the project need to fit perfectly together.
Learn more about KDAB's expertise in embedded software development.
Where the capabilities of modern mobile devices or web browsers fall short, KDAB engineers help you expertly architect and build high-functioning desktop and workstation applications.
Extensible, Safety-compliant Software for the Medical Sector
Create intelligent, patient-focused medical software and devices and stay ahead with technology that adapts to your needs.
KDAB offers you expertise in developing a broad spectrum of clinical and home-healthcare devices, including, but not limited to, internal imaging systems, robotic surgery devices, ventilators and non-invasive monitoring systems.
Building digital dashboards and cockpits with fluid animations and gesture-controlled touchscreens is a big challenge.
In over two decades of developing intricate UI solutions for cars, trucks, tractors, scooters, ships, airplanes and more, the KDAB team has gained market-leading expertise in this realm.
Build on Advanced Expertise when creating Modern UIs
KDAB assists you in the creation of user-friendly interfaces designed specifically for industrial process control, manufacturing, and fabrication.
Our specialties encompass the custom design and development of HMIs, making your products accessible from embedded systems, remote desktops, and mobile devices on the move.
Legacy software is a growing but often ignored problem across all industries. KDAB helps you elevate your aging code base to meet the dynamic needs of the future.
Whether you want to migrate from an old to a modern GUI toolkit, update to a more recent version, or modernize your code base, you can rely on over 25 years of modernization experience.
KDAB offers a wide range of services to address your software needs including consulting, development, workshops and training tailored to your requirements.
Our expertise spans cross-platform desktop, embedded and 3D application development, using the proven technologies for the job.
When working with KDAB, the first-ever Qt consultancy, you benefit from a deep understanding of Qt internals that allows us to provide effective solutions, irrespective of the depth or scale of your Qt project.
Qt services include developing applications, building runtimes, mixing native and web technologies, and solving performance and porting problems.
KDAB helps create commercial, scientific or industrial desktop applications from scratch, or update their code or framework to benefit from modern features.
Discover clean, efficient solutions that precisely meet your requirements.
Boost your team's programming skills with in-depth, constantly updated, hands-on training courses delivered by active software engineers who love to teach and share their knowledge.
Our courses cover Modern C++, Qt/QML, Rust, 3D programming, Debugging, Profiling and more.
The collective expertise of KDAB's engineering team is at your disposal to help you choose the software stack for your project or master domain-specific challenges.
Our particular focus is on software technologies you use for cross-platform applications or for embedded devices.
Since 1999, KDAB has been the largest independent Qt consultancy worldwide and today is a Qt Platinum partner. Our experts can help you with any aspect of software development with Qt and QML.
KDAB specializes in Modern C++ development, with a focus on desktop applications, GUI, embedded software, and operating systems.
Our experts are industry-recognized contributors and trainers, leveraging C++'s power and relevance across these domains to deliver high-quality software solutions.
KDAB can guide you in incorporating Rust into your project, from an element overlapping with your existing C++ codebase to a complete replacement of your legacy code.
Unique Expertise for Desktop and Embedded Platforms
Whether you are using Linux, Windows, macOS, Android, iOS or a real-time OS, KDAB helps you create performance-optimized applications on your preferred platform.
If you are planning to create projects with Slint, a lightweight alternative to standard GUI frameworks especially suited to low-end hardware, you can rely on the expertise of KDAB as one of the earliest adopters and an official service partner of Slint.
KDAB has deep expertise in embedded systems which, coupled with Flutter proficiency, allows us to provide comprehensive support throughout the software development lifecycle.
Our engineers are constantly contributing to the Flutter ecosystem, for example by developing flutter-pi, one of the most used embedders.
KDAB invests significant time in exploring new software technologies to maintain its position as a software authority. Benefit from this research and incorporate it into your own projects.
Start here to browse information on the KDAB website(s) and take advantage of useful developer resources like blogs, publications and videos about Qt, C++, Rust, 3D technologies like OpenGL and Vulkan, the KDAB developer tools and more.
The KDAB YouTube channel has become a go-to source for developers looking for high-quality tutorials and information material around software development with Qt/QML, C++, Rust and other technologies.
Click to browse all KDAB videos directly on this website.
In over 25 years, KDAB has served hundreds of customers from various industries, many of whom have become long-term customers who value our unique expertise and dedication.
Learn more about KDAB as a company, understand why we are considered a trusted partner by many and explore project examples in which we have proven to be the right supplier.
The KDAB Group is a globally recognized provider of software consulting, development and training, specializing in embedded devices and complex cross-platform desktop applications.
Read more about the history, the values, the team and the founder of the company.
When working with KDAB you can expect quality software and the desired business outcomes thanks to decades of experience gathered in hundreds of projects of different sizes in various industries.
Have a look at selected examples where KDAB has helped customers to succeed with their projects.
KDAB is committed to developing high-quality and high-performance software, and helping other developers deliver to the same high standards.
We create software with pride to improve your engineering and your business, making your products more resilient and maintainable with better performance.
KDAB was the first certified Qt consulting and software development company in the world, and continues to deliver quality processes that meet or exceed the highest expectations.
At KDAB, we value practical software development experience and skills more highly than academic degrees. We strive to ensure equal treatment of all our employees regardless of age, ethnicity, gender, sexual orientation, or nationality.
Interested? Read more about working at KDAB and how to apply for a job in software engineering or business administration.
When someone with an OpenGL background begins using Vulkan, one of the very common outcomes - beyond the initial one of "OMG how much code does it take to draw a triangle?" - is that the resulting image is upside down.
Searching the web for this will give many hits on discussions about flipped coordinate systems, with suggested solutions such as:
Invert all of your gl_Position.y coordinates in all of your vertex shaders.
Provide a negative height viewport to flip the viewport transformation applied by Vulkan.
Perform some magic incantation on your transformation matrices such as negating the y-axis.
All of these approaches have downsides, such as needing to touch all of your vertex shaders; requiring hardware that supports negative viewport heights; not really understanding the implications of arbitrarily flipping an axis in a transformation matrix; or having to invert your geometry winding order.
This post will aim to explain what is different between OpenGL and Vulkan transformations and how we can adapt our code to get the desired results with the bonus of actually understanding what is going on. This final point is crucial when it comes time to make changes later so that you don't end up in the common situation of randomly flipping axes until you get what you want but which probably breaks something else.
Left- vs Right-handed Coordinate Systems
As a quick aside, it is important in what follows to know if we are dealing with a left-handed or right-handed coordinate system at any given time. First of all what does it even mean for a coordinate system to be left-handed or right-handed?
Well, it's just a way of defining the relative orientations of the coordinate axes. In the following pictures we can use our thumb, first finger, and middle finger to represent the x, y, and z axes (or basis vectors if you prefer).
In a right-handed coordinate system we use those digits on our right hand so that the x-axis points to the right say, the y-axis points up, leaving the z-axis (middle finger) to point towards us.
Conversely, in a left-handed coordinate system we can still have the x-axis pointing to the right and the y-axis pointing up, but this time the z-axis increases away from us.
Converting from a right-handed coordinate system to a left-handed coordinate system or vice versa can be achieved by simply flipping the sign of a single axis (or any odd number of axes).
As we shall see, different graphics APIs use left- or right-handed coordinate systems at various stages of processing. This stuff can be a major source of confusion for graphics developers if they do not keep track of coordinate systems and often results in "oh hey, it works if I flip the sign of this column but I have no idea why".
Common Coordinate Systems in 3D Graphics
Let's take a quick tour of the coordinate systems used in 3D graphics at various stages of the (extended) pipeline. We will begin with OpenGL and then go on to discuss Vulkan and its differences. Note that the uses of the coordinate systems are the same in both systems, but as we shall see, there are some small but important changes between the two APIs. It is these differences that we need to be aware of in order to make our applications behave the way we want them to.
Here is a quick summary of the coordinate systems, what they are used for and where they occur.
Model Space or Object Space
This is any coordinate system that a 3D artist chooses to use when creating a particular asset. If modelling a chair, they may use units of cm, perhaps. If modelling a mountain range, a more suitable choice of unit may be km. Different tools also have different conventions for the orientation of axes. Blender, for example, uses a z-up convention whereas, as we shall see later, many real-time 3D applications choose to use y-up as their chosen orientation. Ultimately, it does not matter just so long as we know which conventions are used. Objects in model space are also often located close to the origin for convenience when being modelled and for when we later wish to position them.
Model space is often right-handed but it is usually decided by the tool author or generative code author.
World Space
World space is what we are most familiar with and is typically what you create using game engine editors. World space is a coordinate system where everything is brought into consistent units whether the units we choose are microns, centimeters, meters, kilometers etc. How we define world space in our applications is up to us. It may well differ depending upon what it is we are trying to simulate. Cellular microscopy applications probably make more sense using suitable units such as microns or perhaps even nanometers. Whereas a space simulation is probably better off using kilometers or maybe something even larger – whatever allows you to make best use of the limited precision of floating point numbers.
World space is also where we would rotate objects coming from various definitions of model space so that they make sense in the larger scene. For example, if a chair was modeled with the z-up convention and it wasn't rotated when it was exported, then when we place it into world space we would also apply the rotation here, so that it looks correct in a y-up convention.
To create a consistent scene, we scale, rotate and translate our various 3D assets so that they are positioned relative to each other as we wish. The way we do this is to pre-multiply the vertex positions of the 3D assets by a "Model Matrix" for that asset. The Model Matrix, or just M for short, is a 4x4 matrix that encodes the scaling, rotation and translation operations needed to correctly position the asset.
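As a hedged illustration using glm (the math library mentioned later in this post), a model matrix for a single asset might be assembled like this; the translation, rotation and scale values are made up for the example:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Illustrative values only: place an asset modelled in cm into a world measured in meters.
glm::mat4 model = glm::mat4(1.0f);
model = glm::translate(model, glm::vec3(2.0f, 0.0f, -5.0f));                  // move into position
model = glm::rotate(model, glm::radians(90.0f), glm::vec3(0.0f, 1.0f, 0.0f)); // orient about the y-axis
model = glm::scale(model, glm::vec3(0.01f));                                  // cm -> m
```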
World space is often right-handed but it is up to the application developer to decide.
Camera or View or Eye Space
This next space goes by various names in the literature and online such as eye space, camera space or view space. Ultimately they all mean the same thing which is that the objects in our 3D world are transformed to be relative to our virtual camera. Wait, our what?
Well, in order to be able to visualize our virtual 3D worlds on a display device, we must choose a position and orientation from which to view it. This is typically achieved by placing a virtual camera into the world. Yes, the camera entity is also positioned in world space by way of a transformation just like the assets mentioned above. View space is often defined to be a right-handed coordinate system where:
the x-axis points to the right;
the y-axis points upwards;
and the z-axis is such that we are looking down the negative z-axis.
Typically a camera is only rotated and translated to place it into world space and so the units of measurement are still whatever you decided upon for World space. Therefore, the transformation to get our 3D entities from world space and into view space consists only of a translation and rotation. The matrix for transforming from World space to View space is typically called the "View Matrix" or just V.
View space is often right-handed but it is up to the developer to decide.
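A hedged sketch of building such a view matrix with glm's lookAt helper; the eye position, target and up vector are arbitrary example values:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Example values: camera at (0, 2, 10) in world space, looking at the origin, with +y as "up".
glm::mat4 view = glm::lookAt(glm::vec3(0.0f, 2.0f, 10.0f),  // eye position
                             glm::vec3(0.0f, 0.0f, 0.0f),   // point being looked at
                             glm::vec3(0.0f, 1.0f, 0.0f));  // up direction
```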
Clip Space
In addition to a position and orientation, our virtual camera also needs to provide some additional information that helps us convert from a purely mathematical model of our 3D world to how it should appear on screen. We need a way to map points in View space onto specific pixel coordinates on the display.
The first step towards this is the conversion from View space to "Clip Space", which is achieved by multiplying the View space positions by a so-called "Projection Matrix", abbreviated to P. There are various ways to calculate a projection matrix, P, depending upon whether you wish to use an orthographic projection or a perspective projection (a short glm sketch follows the list below).
Orthographic projection: Often used in CAD applications as parallel lines in the world remain parallel on screen. Angles are preserved. The view volume (portion of scene that will appear on screen) is a cuboid.
Perspective projection: Often used in games and other applications as this mimics the way our eyes work. Distant objects appear smaller. Angles are not preserved. The view volume for a perspective projection is a frustum (truncated rectangular pyramid).
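As a hedged sketch, here is how the two kinds of projection matrix are typically created with glm (note that glm defaults to OpenGL conventions, which matters later in this post); the parameter values are illustrative only:

```cpp
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Perspective projection: 60 degree vertical field of view, 16:9 aspect ratio,
// near and far clip planes at 0.1 and 100 world units.
glm::mat4 perspectiveProj = glm::perspective(glm::radians(60.0f), 16.0f / 9.0f, 0.1f, 100.0f);

// Orthographic projection: a 20 x 20 unit view volume centred on the view axis.
glm::mat4 orthoProj = glm::ortho(-10.0f, 10.0f, -10.0f, 10.0f, 0.1f, 100.0f);
```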
Ultimately, the projection matrix transforms the view volume into a cuboid in clip space with a characteristic size of w. Thanks to the way that perspective projection matrices are constructed, the w component is equal to the z-depth in eye space of the point being transformed. This is so that we can later use this to perform the perspective divide operation and get perspective-correct interpolation of our geometry's attributes (see below).
Don't worry too much about the details of this. Conceptually, it squashes things around so that anything that was inside the view volume (cuboid or frustum) ends up inside a cuboidal volume. The exact details of this depend upon which graphics API you are using (see even further below).
Normalised Device Coordinates
The next step along our path to getting something to appear on screen involves the use of NDC space or Normalized Device Coordinates. This step is easy though. All we do to get from Clip space to NDC space is to divide the x, y, and z components of each vertex by the 4th component, w (and then discard the 4th component, which is now guaranteed to be exactly 1) - a process known as homogenization or the perspective divide.
Why even do this? Well as the name suggests, clip space is used by the fixed function parts of the GPU to clip geometry so that it only has to rasterize parts that will actually be visible on the display. Any coordinate that has a magnitude exceeding the value w will be clipped.
It is this step that "bakes in" the perspective effect if using a perspective transformation.
The end result is that our visible part of the scene is now contained within a cuboid with characteristic length of 1. Again, see below for the differences between graphics APIs.
NDC space is a nice simple, normalized coordinate system to reason about. We're now just a small step away from getting our 3D world to appear at the correct set of pixels on the display.
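To make the clip-to-NDC step concrete, here is the arithmetic the fixed-function hardware performs, sketched in glm terms; mvp and modelPos are assumed inputs, and we never write this divide ourselves:

```cpp
// Conceptual sketch only - the GPU performs this after clipping.
glm::vec4 clipPos = mvp * glm::vec4(modelPos, 1.0f); // output of the vertex shader (clip space)
glm::vec3 ndcPos  = glm::vec3(clipPos) / clipPos.w;  // perspective divide (homogenization)
```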
Framebuffer or Window Space
The final step of the process is to convert from NDC to Window Space or Framebuffer Space or Viewport Space. Again more names for the same thing. It’s basically the pixel coordinates in your application window.
The conversion from NDC to Framebuffer space is controlled by the viewport transformation that you can configure in your graphics API of choice. This transformation is just a bias (offset) and scaling operation. This makes intuitive sense when you consider that we are converting from the normalized coordinates of NDC space to pixels. The scale and bias are controlled by which portion of the window you wish to display to - specifically, its offset and dimensions. The details of how to set the viewport transformation vary between graphics APIs.
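For illustration, a hedged sketch of setting a full-window viewport in each API; windowWidth, windowHeight and commandBuffer are assumed to exist in the application:

```cpp
// OpenGL: the viewport origin is the lower-left corner of the window.
glViewport(0, 0, windowWidth, windowHeight);

// Vulkan: the viewport origin is the upper-left corner, and the depth range is explicit.
VkViewport viewport{};
viewport.x        = 0.0f;
viewport.y        = 0.0f;
viewport.width    = static_cast<float>(windowWidth);
viewport.height   = static_cast<float>(windowHeight);
viewport.minDepth = 0.0f;
viewport.maxDepth = 1.0f;
vkCmdSetViewport(commandBuffer, 0, 1, &viewport);
```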
Coordinate Systems in Practice
The above descriptions sound very scary and intimidating, but in practice they are not so bad once we understand what is going on. Spending a little time to understand the sequence of operations is very worthwhile and is infinitely better than randomly changing the signs of various elements to make something work in your one particular case. It's only a matter of time until such a random tweak breaks something else.
Take a look at the following diagram that summarizes the path that data takes through the graphics pipeline and the transformations/operations at each stage:
A few things to note:
The transformations from Model Space to Clip Space are performed in the programmable Vertex Shader stage of the graphics pipeline.
Rather than doing 3 distinct matrix multiplications for every vertex, we often combine the P, V, and M matrices into a single matrix on the CPU and pass the result into the vertex shader. This allows a vertex to be transformed all the way to Clip Space with a single matrix multiplication (see the sketch after this list).
The clipping and perspective divide operations are fixed function (hardwired in silicon) operations. Each graphics API specifies the coordinate systems in which these happen.
The scale and bias transformation to go from NDC to Framebuffer Space is fixed function too but is controlled via API calls such as glViewport() or vkCmdSetViewport().
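As a hedged glm sketch of the matrix-combination point above; projection, view and model are the matrices described earlier:

```cpp
// CPU side: combine once per object per frame, then upload as a uniform or push constant.
glm::mat4 mvp = projection * view * model;

// GLSL vertex shader side (shown as a comment to keep a single language in this post):
//   gl_Position = mvp * vec4(vertexPosition, 1.0);
```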
The upshot of all of this is that we need to create the Model, View and Projection matrices to get our vertex data correctly into Clip Space. How we do this differs subtly between the different graphics APIs such as OpenGL vs Vulkan, as we shall see now. These differences are what often lead to issues when migrating from OpenGL to Vulkan, especially when using helper libraries that were written with the expectation of only ever being used with OpenGL.
As stated above, the Model, World and View spaces are defined by the content creation tools (Model Space) or by us as application/library developers (World and View spaces). It is only when we get to clip space that we have to be concerned about what the graphics API we are using expects to receive.
OpenGL Coordinate Systems
With OpenGL, the fixed function parts of the pipeline all use left-handed coordinate systems as shown here:
If we stick with the common conventions of using a right-handed set of coordinate systems for Model Space, World Space and View Space, then the transformation from View Space to Clip Space must also flip the handedness of the coordinate system somehow.
Recall that to go from View Space to Clip Space, we multiply our View Space vertex by the projection matrix P. Usually we would use some library to create a projection matrix for us, such as glm, or even glFrustum if you are still using OpenGL 1.x!
There are various ways to parameterize a perspective projection matrix but to keep it simple let's stick with the left (l), right (r), top (t), bottom (b), near (n) and far (f) parameterisation as per the glFrustum specification. This assumes the virtual camera (or eye) is at the origin and that the near and far values are the distances to the near and far clip planes along the negative z-axis. The near plane is the plane onto which our scene will be projected. The left, right, top and bottom values specify the positions on the near plane used to define the clip planes that form the view volume - a frustum in the case of a perspective transform.
With this parameterisation, the projection matrix for OpenGL looks like this:
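For reference, the glFrustum form of this matrix, as given in the OpenGL documentation, is:

$$
P_{GL} =
\begin{pmatrix}
\frac{2n}{r-l} & 0 & \frac{r+l}{r-l} & 0 \\
0 & \frac{2n}{t-b} & \frac{t+b}{t-b} & 0 \\
0 & 0 & -\frac{f+n}{f-n} & -\frac{2fn}{f-n} \\
0 & 0 & -1 & 0
\end{pmatrix}
$$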
Do not blindly use this as your projection matrix! It is specifically for OpenGL!
OK, that looks reasonable and matches various texts on OpenGL programming. It works perfectly well for OpenGL because it not only performs the perspective projection transform but it also bakes in the flip from right-handed coordinates to left-handed coordinates. This last little fact seems to be something that many texts gloss over and so goes unnoticed by many graphics developers. So where does this happen? That pesky little -1 in the 3rd column of the 4th row is what does it. This has the effect of flipping the z-axis and using -z as the w component causing the change in handedness.
If we then blindly use the same matrix to calculate a perspective projection for use with Vulkan, which does not need the handedness flip, we end up in trouble. This is typically followed by web searches leading to one of the many hacks to provide a "fix".
Instead, let's use our understanding of the problem domain to now come up with a proper correction for use with Vulkan.
Vulkan Coordinate Systems
Conversely to OpenGL, the fixed function coordinate systems used in Vulkan remain as right-handed in keeping with the earlier coordinate systems as shown here:
Notice that even though z increases into the distance and y increases downwards, it is still in fact a right-handed coordinate system. You can convince yourself of this with some flexible rotations of your right hand, similar to the photographs above.
Let's think about what we need conceptually without getting bogged down in the math - for now at least, we will save that for next time. With the OpenGL perspective projection matrix we have something that takes care of the transformation of the view frustum into a cube in clip space. The problem we have when using it with Vulkan is the flip in the handedness of the coordinate system thanks to that -1 we mentioned in the previous section. Setting that perspective component to 1 instead of -1 prevents the flip - there's a bit more to it, as we will see in part 2, but that takes care of the change in handedness.
We still need to reorient our coordinate axes from View Space (x-right, y-up, looking down the negative z-axis) to Vulkan's Clip Space (x-right, y-down, looking down the positive z-axis). Since the start and end coordinate systems are both right-handed, this does not involve an axis flip as in the OpenGL case. Instead, all we need to do is to perform a rotation of 180 degrees about the x-axis. This gives us exactly the change in orientation that we need.
This means that, before we get to constructing a projection matrix, we should reorient our coordinate axes to already be aligned with the desired clip space orientation. To do this, we inject a 180 degree rotation of the eye space coordinates around the x-axis before we later apply the actual projection. This rotation is shown here:
Recall from high school maths that a rotation matrix about the x-axis basis vector by 180° (π radians) is easily constructed.
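In homogeneous 4x4 form, and substituting θ = 180°, this standard construction reduces to:

$$
R_x(\theta) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & \cos\theta & -\sin\theta & 0 \\
0 & \sin\theta & \cos\theta & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
\qquad
X = R_x(180^\circ) =
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 \\
0 & 0 & -1 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}
$$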
This also makes sense intuitively as the y and z components of the matrix will both be negated by the -1 elements. Note that we have two "axis flips", so it still maintains the right-handedness of the coordinate system as desired.
So, in the end all we need to do is to include this "correction matrix", X, into our usual chain of matrices when calculating the combined model-view-projection matrix that gets passed to the vertex shader. With the correction included, our combined matrix is calculated as P · X · V · M. That means the transforms applied in order (right to left) are:
Model to World
World to Eye/View
Eye/View to Rotated Eye/View
Rotated Eye/View to Clip
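As a hedged glm sketch of this chain, assuming projection is a Vulkan-appropriate projection matrix (derived in the follow-up post) and view and model are as before:

```cpp
// The 180 degree rotation about the x-axis (X) that re-orients view space for Vulkan.
const glm::mat4 X = glm::rotate(glm::mat4(1.0f),
                                glm::radians(180.0f),
                                glm::vec3(1.0f, 0.0f, 0.0f));

// Combined matrix, applied right to left: Model -> World -> View -> Rotated View -> Clip.
glm::mat4 clipFromModel = projection * X * view * model;
```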
With the above in place, we can transform vertices all the way from Model Space through to Vulkan's Clip Space and beyond. All that remains for us next time, is to see how to actually construct the perspective projection matrix. However, we are now in a good position (and orientation) to derive the perspective projection matrix as our source (rotated eye space) and destination (clip space) coordinate systems are now aligned. All we have to worry about is the actual projection of vertices onto the near plane.
Once we complete this next step, we will be able to avoid any of the ugly hacks mentioned at the start of this article and we will have a full understanding of how our vertices are transformed all the way from Blender through to appearing on our screens. Thanks for reading!
Trusted software excellence across embedded and desktop platforms
The KDAB Group is a globally recognized provider of software consulting, development and training, specializing in embedded devices and complex cross-platform desktop applications. In addition to being leading experts in Qt, C++ and 3D technologies for over two decades, KDAB provides deep expertise across the stack, including Linux, Rust and modern UI frameworks. With 100+ employees from 20 countries and offices in Sweden, Germany, the USA, France and the UK, we serve clients around the world.
Sean Harmer
Managing Director KDAB UK
Dr Sean Harmer is a senior software engineer at KDAB where he heads up our UK office and also leads the 3D R&D team. He has been developing with C++ and Qt since 1998 and is Qt 3D Maintainer and lead developer in the Qt Project. Sean has broad experience and a keen interest in scientific visualization and animation in OpenGL and Qt. He holds a PhD in Astrophysics along with a Masters in Mathematics and Astrophysics.