In this introductory tutorial, we’ll break down how movement works in Unity 2D. Not by copying code, but by understanding how space, time, and frame rhythm affect how an object moves in a game.
A common beginner mistake is copying ready-made movement code without understanding why it works. As a result, the game starts behaving strangely: the character moves faster diagonally, behaves differently on different PCs, falls through objects, or “jitters” for no obvious reason. These kinds of bugs are almost impossible to fix without understanding the fundamentals.
We’ll gradually move from simple position changes to a conscious choice between different movement approaches, so you understand not only how something works, but also why.
This tutorial also helps reveal the connection between the engine’s internal logic and what you see on the screen. Numbers, vectors, and time stop being abstract concepts — they become part of a single space in which an object exists, a space you begin to feel rather than just calculate.
After you create and open a 2D project in Unity, the scene will initially look empty. But even “emptiness” in Unity is already a space. Any game object you add will always have a position defined by coordinates.
In 2D games, we are primarily interested in two axes: X and Y. The X axis controls movement left and right, and the Y axis controls movement up and down. An object’s position is simply a point in this space.
💡 It’s important to remember that Unity is a 3D engine, even when you’re making a 2D game. In reality, every object also has a Z coordinate. This axis points along the camera’s view direction and most of the time remains at 0 in 2D. However, in some cases the Z value can be exactly why an object “disappears” or ends up somewhere you didn’t expect.
Below is a typical coordinate grid. This is not abstract mathematics — it’s the exact space your Unity scene lives in. When you move an object in the Scene view, you’re actually changing its coordinates in this system.
For now, there’s no need to calculate anything or memorize formulas. It’s enough to feel that the Unity scene and this grid are the same world, simply shown in different ways.
We’ve already mentioned that an object’s location in the coordinate system — and therefore in the Unity scene — is defined by its position. A position is written as a set of numbers (x, y, z), where x controls movement left and right, y controls movement up and down, and z moves the object closer to or farther from the camera.
In 2D games, the Z coordinate is usually not used or remains 0. Even so, it still exists and can sometimes affect how objects are rendered.
Below are a few examples of positions in this space. The red circle is at (1, 1), the green one at (3, -2), and the blue one at (-3.5, -2.5). These are simply different coordinates within the same world.
For now, it’s important to think of position as a place in space — the point where an object currently is. We’re not talking about movement or direction yet, only about where the object is located.
💡 An object’s position doesn’t have to be an integer. For precise placement, Unity uses floating-point numbers — values with a fractional part, such as -3.5 or 0.25.
Now we can introduce an important concept that Unity works with all the time — a vector. For now, without formulas and without complexity. In the simplest sense, a vector is a directed line segment.
A vector has a direction and a length. You can imagine it as an arrow: it starts at one point and points to another. Such an arrow doesn’t just show where something is — it describes the relationship between two points in space.
When talking about an object’s position, a vector can be thought of as a segment that starts at the origin (0, 0) and ends at the point where the object is located. This is how Unity “sees” position in space — not as separate numbers, but as a direction and a distance from the origin.
In the image below, object positions are shown not only as points, but also as arrows starting from the origin. These arrows are the vectors that describe the position of each object in 2D space.
💡 The same points on the coordinate grid in Unity would be written like this:
```csharp
// Red point (1, 1)
redCircle.transform.position = new Vector2(1f, 1f);

// Green point (3, -2)
greenCircle.transform.position = new Vector2(3f, -2f);

// Blue point (-3.5, -2.5)
blueCircle.transform.position = new Vector2(-3.5f, -2.5f);
```
These are simply different positions in the same space, written in Unity’s language.
The key idea to remember is simple: a vector is a way to describe position and direction in space.
Up to this point, we’ve been implicitly using one important thing — a reference point. On a coordinate grid, this is the point (0, 0), from which all positions and directions are measured.
This kind of space is called global. In it, all objects exist in the same shared world and use the same coordinate system.
But in games, it’s often useful to look at an object not from the entire world, but relative to another object — or even relative to itself. In that case, a local space appears, with its own reference point.
Imagine a simple example: a car drives past you from left to right. For you, its direction of motion is to the right, but for the driver sitting inside the car, it’s moving straight ahead. This shows that the directions of the axes in global and local coordinate systems don’t have to match.
Another example is object hierarchy. If an object is a child, its local position is measured not from the center of the world, but from its parent’s position. From the world’s point of view, the object may be in one place, and from the parent’s point of view, in another.
Imagine a cup standing on a table, and you want to move it from the center of the table to one of the corners. You don’t need to calculate the cup’s position relative to the entire kitchen — you simply move it relative to the surface of the table.
In this image, you can see that the position coordinates of the green child object depend on the reference point. Relative to the world origin, its position is (2, 3), and relative to its parent — (-1, -2). The object’s actual position on the screen doesn’t change — only the coordinate system we use to describe it does.
The key takeaway: a position is always measured from some reference point. Which point that is depends on the space in which we’re viewing the object.
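In code, both views of the same object are available at once. A small sketch:

```csharp
// position in global (world) space, measured from the scene origin
Vector3 world = transform.position;

// position in local space, measured from the parent's position
Vector3 local = transform.localPosition;

// for an object without a parent, the two values are identical
```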
A vector has two key characteristics: direction and length. If you imagine a vector as an arrow, the direction shows where the arrow is pointing, and the length shows how far it reaches.
That’s exactly why vectors are convenient to use as an offset. We take the object’s current position (as a reference point) and add an offset vector to it. As a result, we get a new position.
💡 You can think of this very simply: position + offset = new position. For now, this isn’t about movement over time — it’s just about shifting a point in space.
For example, if we want to move an object to the right, we add a vector with a positive X value. If we want to move it to the left, we add a vector with a negative X value. The same applies to Y: a positive Y moves upward, a negative Y moves downward.
```csharp
// current object position (point). Example: (1, -1)
Vector2 p = transform.position;

// offset 2 units to the right → (1, -1) + (2, 0)
p = p + new Vector2(2f, 0f);

// offset 1 unit left and 3 units up → (3, -1) + (-1, 3)
p = p + new Vector2(-1f, 3f);

// apply the new position. Final position: (2, 2)
transform.position = p;
```
Another useful idea: offsets can be added together. If you add one offset and then another, it’s the same as adding their sum once. For example, when the player presses “right” and “up” at the same time, Unity simply combines both offsets into a single diagonal vector.
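A small sketch of that idea, using the same kind of offsets as above:

```csharp
Vector2 rightStep = new Vector2(2f, 0f);
Vector2 upStep = new Vector2(0f, 3f);

// applying the offsets one after another...
Vector2 a = (Vector2)transform.position + rightStep + upStep;

// ...gives the same point as applying their sum once
Vector2 b = (Vector2)transform.position + (rightStep + upStep);

// a and b are identical: the current position shifted by (2, 3)
```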
Horizontal and vertical directions are used so often that Unity already provides predefined names for them. This isn’t anything new — just convenient names for familiar vectors.
```csharp
// standard directions in Unity 2D
Vector2.right // (1, 0)
Vector2.left  // (-1, 0)
Vector2.up    // (0, 1)
Vector2.down  // (0, -1)
```
These vectors don’t depend on an object’s position. They always point in a direction and have a length of 1. Essentially, they are pre-made arrows that are convenient to reuse again and again.
In addition to global directions, every object also has its own local directions. These depend on how the object is rotated and oriented in space.
```csharp
// local object directions
transform.right    // right relative to the object
-transform.right   // left relative to the object (there is no transform.left)
transform.up       // up relative to the object
-transform.up      // down relative to the object (there is no transform.down)
```
The `transform.right` and `transform.up` axes rotate together with the object. Unlike `Vector2.right` or `Vector2.up`, these directions can change: if the object is rotated, its local axes rotate with it. Remember: a direction can be global, shared by the entire world, or local, relative to a specific object.
Up to this point, we’ve talked about simple and intuitive directions — up, down, left, and right. But in games, you often need something different: a direction not “in general”, but toward a specific target.
This is where vector subtraction comes in. We take the target’s position and subtract the object’s current position from it. The result is a direction vector — an arrow pointing directly from the object to the target.
```csharp
// object position and target position
Vector2 current = transform.position;
Vector2 target = targetPosition;

// direction vector toward the target
Vector2 direction = target - current;

// vector length (distance to the target)
float distance = direction.magnitude;

// squared vector length (squared distance)
float distanceSqr = direction.sqrMagnitude;
```
💡 `magnitude` returns the actual distance to the target, but it requires calculating a square root, which is a relatively expensive operation. That’s why in situations with many such checks, developers often use the squared distance, comparing it to the square of the desired threshold. `sqrMagnitude` is faster and commonly used when you only need to compare distances.
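Here is a brief sketch of that trick; the `attackRange` threshold is a hypothetical example, and `targetPosition` is assumed to be a `Vector2` as above:

```csharp
// hypothetical range check: is the target close enough to attack?
Vector2 toTarget = targetPosition - (Vector2)transform.position;
float attackRange = 3f;

// comparing squared values avoids the square root entirely
if (toTarget.sqrMagnitude <= attackRange * attackRange)
{
    // the target is within attackRange units
}
```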
This kind of vector is especially important in games. It immediately contains two things: the direction toward the target and the distance to it, expressed as the vector’s length.
For now, it’s enough to understand the core idea: subtracting positions gives you a vector that answers the question “which way, and how far?”.
Movement in a game doesn’t exist on its own. It emerges as a result of a sequence of state changes that are displayed on the screen one after another.
The Unity engine runs in a loop. On each step, it first executes code and calculates a new state of the scene: object positions, rotations, and variable values. Then this state is rendered to the screen as a single frame. After that, the cycle repeats.
```csharp
// called once per frame
void Update()
{
    // current object position
    Vector2 p = transform.position;

    // small offset to the right
    p = p + new Vector2(1f, 0f);

    // apply the new position
    transform.position = p;
}
```
This code runs every frame. Each time, the object’s position changes slightly, and Unity renders the updated scene state.
If you change the position only once, the object will simply appear in a new place. But when such small changes happen from frame to frame, the eye begins to perceive them as continuous movement.
In this way, movement is not a separate entity, but the result of successive changes to an object’s position between frames relative to the observer (the camera).
We’ve already seen that movement appears as a sequence of frames. But this raises an important question: what happens if the number of frames per second differs across computers?
Imagine two computers. One renders 50 frames per second, the other 100. On the more powerful system, the image will look smoother — but if we move an object by the same distance every frame, a problem appears.
On the computer running at 100 FPS, the code executes twice as many times per second as on the one running at 50 FPS. As a result, the object on the faster machine will travel twice the distance in the same amount of time.
To avoid this, movement must be tied not to the number of frames, but to real time. In Unity, this is done using `Time.deltaTime`.

`Time.deltaTime` shows how much time has passed since the previous frame. At high FPS this value is smaller, and at low FPS it’s larger. By using it, we can scale the offset based on the frame update rate.
```csharp
// called once per frame
void Update()
{
    // current object position
    Vector2 p = transform.position;

    // offset scaled by real time
    p = p + new Vector2(1f, 0f) * Time.deltaTime;

    // apply the new position
    transform.position = p;
}
```
Now the offset depends not on the number of frames, but on how much time has passed between them. At high FPS the steps are smaller, at low FPS they are larger — but over the course of one second, the object travels the same distance.
As a result, movement looks smoother on fast computers and may appear more choppy on slower ones, but the distance traveled per second remains the same on all systems.
Up to this point, we’ve talked about movement principles in general: frames, positions, offsets, and time. Now we can connect these ideas directly to how movement is implemented in Unity.
In most cases, Unity uses one of two approaches to movement: positional geometry and physical simulation. There is no “good” or “bad” option here — only the one that fits a specific task.
With positional movement, we work directly with an object’s position by changing its coordinates in space. We’ve already used this approach by modifying the global position via `transform.position`; the same idea powers local movement via `transform.Translate`, shown below.
The main advantages of positional movement are simplicity, full control, and predictability. We know exactly where the object will be at the next moment in time.
However, this approach also has a limitation. By changing position directly, we ignore physical simulation. If an object has a collider and a `Rigidbody`, directly setting its position can lead to unnatural behavior or missed collisions.
In situations where physics and interaction with the environment matter, movement is usually implemented through a `Rigidbody`. This approach allows the object to move more naturally, but it also reduces the amount of direct control from code.
Instead of moving the object directly, we influence its state: velocity, impulse, or other physical parameters.
Next, we’ll look at both approaches separately and see in which situations each one is most appropriate.
| Criterion | Transform (positional) | Rigidbody (physics) |
|---|---|---|
| Concept | Direct position changes | Movement as a physical body |
| Control | Full and predictable | Partially delegated to physics |
| Collisions | May be ignored | Handled correctly |
| Inertia and mass | Absent | Present by default |
| Typical use cases | UI, logic, simple objects | Characters, action games, platformers |
If an object does not participate in physical simulation and has no physical body, movement is usually implemented directly via position.
Example: changing position directly
```csharp
// move to the right by changing position
void Update()
{
    transform.position += Vector3.right * 2f * Time.deltaTime;
}
```
Example: local movement using Translate
```csharp
// move to the right in the object's local space
void Update()
{
    transform.Translate(Vector3.right * 2f * Time.deltaTime);
}
```
The `Translate` method is convenient because, by default, it moves the object in its local space — relative to its own orientation, not the world origin.
If an object needs to properly interact with the environment — including collisions, mass, and inertia — it requires a physical body: a `Collider` and a `Rigidbody`.
It’s important to understand that when switching to physical movement, vectors don’t disappear. They are still used to define direction, speed, and force — they just no longer change the object’s position directly. Instead, they describe how the physical body should move.

Instead of “placing the object at a point,” we tell the physics system in which direction and with what intensity to affect the body. Position becomes the result of this process, not a value we overwrite ourselves.

Example: movement using force

```csharp
// apply force to a physical body (rb is a Rigidbody2D reference)
void FixedUpdate()
{
    rb.AddForce(Vector2.right * 10f);
}
```
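By default, `AddForce` applies a continuous push; a second parameter changes how the force is interpreted. A brief sketch, again assuming `rb` is a `Rigidbody2D`:

```csharp
// continuous push: a small effect each physics step, speed builds up over time
rb.AddForce(Vector2.right * 10f);

// one-off kick: ForceMode2D.Impulse applies the whole change at once,
// useful for jumps and explosions
rb.AddForce(Vector2.up * 5f, ForceMode2D.Impulse);
```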
In some cases, more precise control is required even when using physics. Then you can work directly with the body’s state, such as its velocity.
Example: controlling body velocity
```csharp
// directly setting velocity (Unity 6+)
void FixedUpdate()
{
    rb.linearVelocity = new Vector2(2f, 0f);
    // in older Unity versions, rb.velocity is used
}
```
💡 Note: physics in Unity runs on its own rhythm, at a fixed timestep tied to real time rather than the frame rate. That’s why movement via `Rigidbody` is handled in `FixedUpdate`, and additional scaling with `Time.deltaTime` is usually not required.
Both approaches have their strengths. The important thing is not to choose the “correct” method, but to understand which one fits the current task.
Each of these approaches will be explored in more detail in separate tutorials on the site.
Up to this point, we’ve looked at movement as the result of changes in position, time, and physical properties. What remains is to connect this with who controls the movement and how that control is applied.
In general, there are two main approaches here as well: external control and internal control.
Internal control is movement driven by algorithms, animation, or game logic. We’ve already seen examples where an object moves on its own, without player involvement. This approach depends on the game’s design goals and does not have a single “correct” solution.
External control is control coming from the player. The source of this control can be a keyboard, gamepad, touchscreen, or any other controller. The game’s task is to receive this input, process it, and use it in the required form.
Example: control via Transform
```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerMovement : MonoBehaviour
{
    void Update()
    {
        float x = 0f;
        float y = 0f;

        if (Keyboard.current != null)
        {
            if (Keyboard.current.leftArrowKey.isPressed)
                x -= 1f;
            if (Keyboard.current.rightArrowKey.isPressed)
                x += 1f;
            if (Keyboard.current.downArrowKey.isPressed)
                y -= 1f;
            if (Keyboard.current.upArrowKey.isPressed)
                y += 1f;
        }

        // movement control via position
        Vector2 direction = new Vector2(x, y);
        transform.Translate(direction * 2f * Time.deltaTime);
    }
}
```
In this case, we directly offset the object, using player input as the movement direction. This approach is simple and works well for objects that do not participate in physical simulation.
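One detail worth noting: when both axes are pressed at once, the direction vector (1, 1) has a length of about 1.41, which is exactly the “faster diagonal movement” bug mentioned at the start. A common fix, sketched here for the `direction` vector built from keyboard input as above, is to normalize it:

```csharp
Vector2 direction = new Vector2(x, y);

// (1, 1) is longer than (1, 0); normalizing keeps the direction
// but caps the length at 1, so diagonal speed matches straight speed
if (direction.sqrMagnitude > 1f)
    direction = direction.normalized;

transform.Translate(direction * 2f * Time.deltaTime);
```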
Example: control via Rigidbody
```csharp
using UnityEngine;
using UnityEngine.InputSystem;

public class PlayerPhysicsMovement : MonoBehaviour
{
    // reference to the object's Rigidbody2D, assigned in the Inspector
    [SerializeField] Rigidbody2D rb;

    void FixedUpdate()
    {
        float x = 0f;
        float y = 0f;

        if (Keyboard.current != null)
        {
            if (Keyboard.current.leftArrowKey.isPressed)
                x -= 1f;
            if (Keyboard.current.rightArrowKey.isPressed)
                x += 1f;
            if (Keyboard.current.downArrowKey.isPressed)
                y -= 1f;
            if (Keyboard.current.upArrowKey.isPressed)
                y += 1f;
        }

        Vector2 direction = new Vector2(x, y);
        // movement control via a physical body
        rb.linearVelocity = direction * 2f;
    }
}
```
Here, player input is used to control the state of the physical body. The object responds to collisions, inertia, and other physical properties of the scene.
The main takeaway from this step is simple: control is not movement by itself. It’s just a source of data that is then transformed into direction, speed, or other movement parameters.
Different input and control methods will be covered in more detail in separate tutorials.
In this tutorial, we didn’t teach “the correct code.” Step by step, we explored what movement in a game is actually built from: space, positions, vectors, frames, time, and the ways we influence an object.
You’ve seen that movement is not magic and not a ready-made function, but a sequence of simple decisions. By changing position, we change state. By displaying states frame by frame, we get movement. By accounting for time and context, we make it consistent across different systems.
Unity doesn’t enforce a single path. You can move an object directly, you can delegate movement to physics, or you can control it via player input or algorithms. What matters is not choosing the “best” method, but understanding why you choose a particular one.
If after this tutorial the code starts to look not like a collection of lines, but like a description of what’s happening in the world, then the goal has been achieved.
Next, we’ll dive deeper into individual topics: physics, input, cameras, and game feel. But the foundation is already in place.