It doesn't matter what kind of projection you're using; you can reconstruct the view space position either way. The idea is to upload the inverse of the projection matrix to the shader, reconstruct the NDC (normalized device coordinates) of the pixel, and "unproject" it using that inverse matrix, which is why it works with any kind of projection matrix.
NDC coordinates are coordinates that go from -1 to +1 in all 3 axes. When you multiply the view space position by the projection matrix in the vertex shader while filling the Gbuffer, you produce clip coordinates; the GPU hardware divides by W to get NDC, then maps XY to the viewport and Z to the depth buffer. We can undo the projection, but first we need to gather all the data to do that.
First of all, you need the XY coordinates. These are easy to calculate: they go from (-1, -1) in the bottom left corner to (+1, +1) in the top right corner. The easiest way is to start from gl_FragCoord.xy, which gives you the position of the pixel in pixels. Divide by the size of the screen and you have coordinates going from (0, 0) to (+1, +1); remapping that to the (-1, -1) to (+1, +1) range is easy. The Z coordinate is the depth buffer value of that pixel, but the depth buffer value also goes from 0 to +1 and needs the same remapping. With this, we have the NDC coordinates of the pixel. Now it's just a matter of multiplying the NDC coordinates by the inverse projection matrix and dividing by the resulting W coordinate.
uniform sampler2D depthBuffer;
uniform vec2 inverseScreenResolution;
uniform mat4 inverseProjectionMatrix;

...

vec2 texCoords = gl_FragCoord.xy * inverseScreenResolution;
float depthValue = texture(depthBuffer, texCoords).r;
vec3 ndc = vec3(texCoords, depthValue) * 2.0 - 1.0;
vec4 unprojectResult = inverseProjectionMatrix * vec4(ndc, 1.0);
vec3 viewSpacePosition = unprojectResult.xyz / unprojectResult.w;
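If you want to convince yourself the math works, here's a small CPU-side sketch in Python (helper names are mine; it assumes the standard gluPerspective-style matrix) that projects a view space point the way the vertex shader and hardware do, then recovers it with the exact remap / unproject / W-divide steps described above:

```python
import math

def perspective(fovy_deg, aspect, zn, zf):
    # Standard gluPerspective-style projection matrix (row-major).
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    return [
        [f / aspect, 0.0, 0.0, 0.0],
        [0.0, f, 0.0, 0.0],
        [0.0, 0.0, (zf + zn) / (zn - zf), 2.0 * zf * zn / (zn - zf)],
        [0.0, 0.0, -1.0, 0.0],
    ]

def inverse_perspective(fovy_deg, aspect, zn, zf):
    # Analytic inverse of the matrix above (solved row by row).
    f = 1.0 / math.tan(math.radians(fovy_deg) / 2.0)
    b = (zf + zn) / (zn - zf)
    c = 2.0 * zf * zn / (zn - zf)
    return [
        [aspect / f, 0.0, 0.0, 0.0],
        [0.0, 1.0 / f, 0.0, 0.0],
        [0.0, 0.0, 0.0, -1.0],
        [0.0, 0.0, 1.0 / c, b / c],
    ]

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

# Forward path: what happens between the vertex shader and the Gbuffer.
view_pos = [1.5, -0.7, -10.0]                # a point in front of the camera
proj = perspective(60.0, 16.0 / 9.0, 0.1, 100.0)
clip = mul(proj, view_pos + [1.0])
ndc = [clip[i] / clip[3] for i in range(3)]  # perspective divide done by the hardware
window = [(n + 1.0) * 0.5 for n in ndc]      # what texCoords and the depth buffer hold

# Reverse path: the exact steps from the fragment shader.
ndc_again = [w * 2.0 - 1.0 for w in window]
unproject = mul(inverse_perspective(60.0, 16.0 / 9.0, 0.1, 100.0), ndc_again + [1.0])
recovered = [unproject[i] / unproject[3] for i in range(3)]
print(recovered)  # approximately [1.5, -0.7, -10.0]
```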

An example Gbuffer layout for deferred shading is:
COLOR_ATTACHMENT0: GL_RGBA16F: (diffuse.r, diffuse.g, diffuse.b, <unused>)
COLOR_ATTACHMENT1: GL_RGBA16F: (packedNormal.x, packedNormal.y, specularIntensity, specularExponent)
DEPTH_ATTACHMENT: GL_DEPTH_COMPONENT24: (depth)
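The second attachment stores only two normal components. The layout above doesn't say how they're packed; one common scheme (an assumption on my part, and it only works for normals whose view-space Z is non-negative) is to store the normal's X and Y and rebuild Z from the unit-length constraint:

```python
import math

def pack_normal(n):
    # Assumes n is a unit-length view-space normal with n[2] >= 0
    # (true for surfaces facing the camera; back-facing normals need a fancier encoding).
    return (n[0], n[1])

def unpack_normal(p):
    x, y = p
    # x^2 + y^2 + z^2 = 1, so z = sqrt(1 - x^2 - y^2); the clamp guards rounding error.
    z = math.sqrt(max(0.0, 1.0 - x * x - y * y))
    return (x, y, z)

n = (0.6, 0.0, 0.8)                    # a unit-length normal
print(unpack_normal(pack_normal(n)))   # approximately (0.6, 0.0, 0.8)
```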
EDIT: Actually, if you're only using an orthographic projection, you don't need the W-divide (but it doesn't hurt to keep it there).
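The reason: the bottom row of an orthographic projection matrix is (0, 0, 0, 1), and so is the bottom row of its inverse, so the unprojected W is always exactly 1. A quick Python sketch (assuming a standard glOrtho-style matrix; names are mine):

```python
def inverse_ortho(l, r, b, t, zn, zf):
    # Analytic inverse of the standard glOrtho matrix (row-major): it just
    # undoes the scale and translation, and its bottom row stays (0, 0, 0, 1).
    return [
        [(r - l) / 2.0, 0.0, 0.0, (r + l) / 2.0],
        [0.0, (t - b) / 2.0, 0.0, (t + b) / 2.0],
        [0.0, 0.0, -(zf - zn) / 2.0, -(zf + zn) / 2.0],
        [0.0, 0.0, 0.0, 1.0],
    ]

def mul(m, v):
    return [sum(m[r][c] * v[c] for c in range(4)) for r in range(4)]

result = mul(inverse_ortho(-10.0, 10.0, -10.0, 10.0, 0.1, 100.0), [0.25, -0.5, 0.3, 1.0])
print(result[3])  # -> 1.0, so dividing by W is a no-op
```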
EDIT2: Also, there are lots of optimizations you can do to this. I opted to just give you the basics before diving into those. I can answer whatever questions you have about deferred shading.