Even in 2D you still need a projection matrix to go from world space to NDC. If you need a moving camera, you could add a view matrix as well, but in 2D you can optimize that down to a single vector offset.
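To make the idea concrete, here is a minimal Python sketch (illustrative only, no particular library assumed) of a 2D orthographic projection plus a camera expressed as a plain offset vector subtracted before projecting:

```python
# Orthographic projection for a 2D world: maps world coordinates in
# [left, right] x [bottom, top] to NDC in [-1, 1] x [-1, 1].
def ortho_2d(left, right, bottom, top):
    # Row-major 3x3 matrix acting on homogeneous 2D points (x, y, 1).
    sx = 2.0 / (right - left)
    sy = 2.0 / (top - bottom)
    tx = -(right + left) / (right - left)
    ty = -(top + bottom) / (top - bottom)
    return [[sx, 0.0, tx],
            [0.0, sy, ty],
            [0.0, 0.0, 1.0]]

def transform(m, p):
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][2],
            m[1][0] * x + m[1][1] * y + m[1][2])

# A moving camera folded in as a plain vector offset applied before
# projecting, instead of a full view matrix.
camera = (100.0, 50.0)
proj = ortho_2d(0.0, 800.0, 0.0, 600.0)

world_point = (500.0, 350.0)
view_point = (world_point[0] - camera[0], world_point[1] - camera[1])
ndc = transform(proj, view_point)  # (0.0, 0.0): the screen center
```

Subtracting the camera position is exactly what a view matrix built from a pure translation would do, just without the matrix multiply.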

Now, considering the model matrix (I'd rather call it a sprite matrix here): if all you need is to position objects, there is no need for a matrix at all; a 2D offset vector is enough.

```glsl
uniform mat4 projView;
uniform vec2 offset;

layout (location = 0) in vec2 position;
```
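The reason a `vec2` offset is enough: adding it to every vertex produces exactly the same result as multiplying by a translation matrix, without carrying the other 14 matrix entries around. A quick Python sketch of that equivalence (illustrative only):

```python
# Translating by a 4x4 matrix vs. adding a 2D offset: same result.
offset = (3.0, -2.0)

def translate_mat4(tx, ty):
    # Column-vector convention: translation sits in the last column.
    return [[1.0, 0.0, 0.0, tx],
            [0.0, 1.0, 0.0, ty],
            [0.0, 0.0, 1.0, 0.0],
            [0.0, 0.0, 0.0, 1.0]]

def apply_mat4(m, p):
    # Treat p as the homogeneous point (x, y, 0, 1) and return (x', y').
    x, y = p
    return (m[0][0] * x + m[0][1] * y + m[0][3],
            m[1][0] * x + m[1][1] * y + m[1][3])

p = (10.0, 20.0)
via_matrix = apply_mat4(translate_mat4(*offset), p)
via_offset = (p[0] + offset[0], p[1] + offset[1])
assert via_matrix == via_offset  # both (13.0, 18.0)
```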

You then add the offset, multiply by the projView matrix, and get your output in NDC. If you also have rotation (a 2D rotation is just a rotation about the z-axis), you can get away with a 2×2 rotation matrix:

    R = | cos θ  -sin θ |
        | sin θ   cos θ |

You can simply multiply this matrix with your position to make it rotated. So the final shader might look like this:

```glsl
#version 410

uniform mat4 projView;
uniform vec2 offset;
uniform float rotation;

layout (location = 0) in vec2 position;

void main() {
    float sinR = sin(rotation);
    float cosR = cos(rotation);
    mat2 rot = mat2(cosR, sinR, -sinR, cosR);

    vec2 vPos = offset + (rot * position);
    gl_Position = projView * vec4(vPos, 0.0, 1.0);
}
```
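Note that GLSL matrix constructors are column-major, so `mat2(cosR, sinR, -sinR, cosR)` has first column `(cosR, sinR)` and second column `(-sinR, cosR)`: the standard counter-clockwise rotation matrix. A small Python sketch of the same math, just to verify:

```python
import math

def rotate(position, rotation):
    # Mirrors the shader's rot * position: with the column-major
    # mat2(cosR, sinR, -sinR, cosR), the product works out to
    # (cosR*x - sinR*y, sinR*x + cosR*y).
    sin_r = math.sin(rotation)
    cos_r = math.cos(rotation)
    x, y = position
    return (cos_r * x - sin_r * y,
            sin_r * x + cos_r * y)

# Rotating (1, 0) by 90 degrees counter-clockwise gives (0, 1).
rotated = rotate((1.0, 0.0), math.pi / 2)
```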

However, I recommend against taking this much further: squeezing out every last cycle usually isn't necessary, and premature optimization is the root of all evil. If your hardware has plenty of power, why not use it?