
I recently had to implement a drag and drop camera feature for a college course. The OpenGL program we had to submit had to contain a camera that could be controlled by moving the mouse around. The camera itself should always look at a fixed point in 3D space while being rotated on two different axes, as if it were stuck to the inside of a sphere.

We were not allowed to use a lookAt function, and the tricky bit is that the camera doesn’t only have to rotate, it also has to move while doing so.

Anyway, while I’m sure there are several other, maybe better and more efficient ways to solve this problem, I decided to try a very simple and easy-to-understand approach. Instead of dealing with two different rotations and movements on my camera, I decided to only rotate and move the camera itself up and down, and to use a global world matrix to rotate the entire world around the Y-axis (without movement).

## The mouse event handlers

This program uses a single mouse button event handler that gets called whenever a mouse button is pressed or released. In my case, I only had to listen for the left-mouse button (button 0). Whenever the user presses it, the current mouse position is recorded and stored in a global variable together with a flag that indicates that a movement has to be converted to a rotation. Whenever the user releases the button, the flag is reset.

```
void mouseCallback(GLFWwindow *window, int button, int action, int mods)
{
    // Whenever the left mouse button is pressed, the
    // mouse cursor's position is stored for the arc-
    // ball camera as a reference.
    if (button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_PRESS)
    {
        double curr_x = 0, curr_y = 0;
        glfwGetCursorPos(window, &curr_x, &curr_y);
        // last is a global vec3 variable
        last = vec3(curr_x, curr_y, -1);
        // This is another global variable
        ballEnabled = true;
    }
    // When the user releases the left mouse button,
    // all we have to do is reset the flag.
    if (button == GLFW_MOUSE_BUTTON_LEFT && action == GLFW_RELEASE)
        ballEnabled = false;
}
```

Don’t forget to register your newly created event handler!

```
glfwSetMouseButtonCallback(window, mouseCallback);
```

## Creating the rotation matrices

For this to work, I had to define the following three matrices:

```
mat4 world = mat4(1);
mat4 projection = glm::perspective( /* ... */ );
mat4 view = translate(mat4(1), vec3(0, 0, -radius));
```

The world matrix is what we’ll use to rotate every object in our scene around the Y-axis. Right now, it’s nothing more than a 4×4 identity matrix. The view matrix will be used to move the camera up and down inside the sphere. As you can see, it starts at the origin of the X and Y axes and is pushed back along the Z-axis to the outside of the sphere. radius is another global variable that can be used to move the camera farther away from the origin. I initialized mine with a default radius of 6.0.

Every object in the scene will have a unique object matrix so that it can be transformed independently from the other objects. Once all the transformations are done, the necessary matrices are multiplied:

```
tp1_model = translate(mat4(1), vec3(-1.5f, -1.0f, 0.0f));
tp1_model = rotate(tp1_model, (float)radians(180.0f), Y_AXIS);
mat4 tp1_mvp = projection * view * world * tp1_model;
```

As you can see, tp1_model is one of those individual model matrices and I used it to move and rotate an object in my scene. The global projection, view, and world matrices are then multiplied to form the final transformation matrix that gets passed to the vertex shader.

## Make the camera rotate and move

So now we have everything set up. But how exactly can we rotate and move the camera? Well, inside the main-game loop, add the following code:

```
if (ballEnabled)
{
    double curr_x = 0, curr_y = 0;
    glfwGetCursorPos(window, &curr_x, &curr_y);
    // Calculate the distance the mouse moved
    // between the last and the current frame
    double dx = curr_x - last.x;
    double dy = last.y - curr_y;
    // Tweak these values to change the sensitivity
    float scale_x = (float)(fabs(dx) / VIEWPORT_WIDTH);
    float scale_y = (float)(fabs(dy) / VIEWPORT_HEIGHT);
    float rotSpeed = 350.0f;
    // Horizontal rotation (on the Y-axis)
    // This is simple because no clamping is needed
    if (dx < 0)
    {
        // As discussed earlier, the entire world is rotated
        world = rotate(world, (float)radians(-rotSpeed * scale_x), Y_AXIS);
        x_rot -= rotSpeed * scale_x;
    }
    else if (dx > 0)
    {
        world = rotate(world, (float)radians(rotSpeed * scale_x), Y_AXIS);
        x_rot += rotSpeed * scale_x;
    }
    // The user wants to rotate the camera this much
    float rot = rotSpeed * scale_y;
    if (dy < 0)
    {
        // Upper rotation limit (+90 deg)
        if (y_rot + rot > y_rot_clamp)
            rot = y_rot_clamp - y_rot;
        view = rotate(view, (float)radians(rot), X_AXIS);
        y_rot += rot;
    }
    else if (dy > 0)
    {
        // Limit the rotation in the other direction too (-90 deg)
        if (y_rot - rot < -y_rot_clamp)
            rot = y_rot + y_rot_clamp;
        view = rotate(view, (float)radians(-rot), X_AXIS);
        y_rot -= rot;
    }
    last.x = curr_x;
    last.y = curr_y;
}
```

As you can see, the if-condition checks whether the aforementioned flag is set. If it is, every movement of the mouse is translated into a rotation inside the OpenGL app.

For that to happen, we first get the current mouse position and then use the previously stored coordinates to determine how far the mouse cursor moved between the last and the current frame. Next, both rotations are calculated individually.

I started with the world matrix. This case is simple because the mouse movement can directly be translated to a horizontal rotation in the world space.

The other direction (up and down) was a bit more tricky. First of all, I had to make sure that the user is unable to tip the camera over. Otherwise, the world would be upside down. Therefore, the rotation was limited to almost 90 degrees. y_rot_clamp is another global variable that’s close to 90 degrees (89.999).

y_rot and x_rot are two global variables that store the current rotation of the camera. This is needed for the clamping and other parts of my application, which are irrelevant for this tutorial. Furthermore, I defined a few vectors (for example X_AXIS = {1,0,0}) that simply represent the axes in my world to make it easier to develop a consistent application.

And, last but not least, I updated the x and y values that will get used in the next frame to calculate the deltas.

## Conclusion

As you just saw, it can be really easy to create an arcball camera in OpenGL. However, I cannot guarantee that this implementation is 100% correct and will always work. It worked for me, and it was much easier to understand than working with spherical coordinates or quaternions, which is why I chose it for my assignment.

## Alternative solutions

I found a few other solutions online, but they were not as easy to understand or didn’t work for me. Anyway, if my solution didn’t work for you (or you don’t like it), I recommend taking a look at these resources:

- Unity3D tutorial using spherical coordinates
- Tutorial of Arcball without quaternions
- ArcBall Rotation
- OpenGL Programming: Arcball Tutorial
