Start by figuring out which matrix isn't working. Work through the steps below in order; stop when you find a problem, fix it, then continue to the next step once you've solved it.
1. Make the fragment shader output currentPosition.xyz to gl_FragColor, to verify that you're basing the calculation on the correct coordinates. This is a bit hard to read, since most of the screen has negative X and/or Y values (which display as 0, of course), but the depth should be fairly easy to read in the blue channel. Note: if your texture coordinates have (0, 0) in the top-left corner, you DO need to invert Y.
vec3 currentPosition = vec3(texCoord0, z) * 2.0 - 1.0;
currentPosition.y = -currentPosition.y;
If you find an error here: The error occurs before the matrices are even used.
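Put together, step 1 might look like this minimal sketch; texCoord0 and the depth sampler name (R_depthTexture here) are assumptions, so substitute your own:

```glsl
// Step 1 debug output: visualize the reconstructed clip-space position.
float z = texture2D(R_depthTexture, texCoord0).r;
vec3 currentPosition = vec3(texCoord0, z) * 2.0 - 1.0;
currentPosition.y = -currentPosition.y; // only if (0, 0) is the top-left corner
gl_FragColor = vec4(currentPosition, 1.0); // negative X/Y clamp to black on screen
```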
2. Make it output the following to check if the inverted matrix is correct:
vec4 worldPos = T_MVInverse * vec4(currentPosition, 1.0);
gl_FragColor = worldPos / worldPos.w;
This will output the WORLD position of each pixel. The components will most likely be far above 1.0, so you may want to divide them by 10 or 100 or so to get readable values. The key check: world-position values for static objects must stay stable while the camera moves and rotates.
If you find an error here: The problem lies in the T_MVInverse matrix.
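As a sketch, the step 2 debug output could look like this; here T_MVInverse is assumed to be the inverse of the combined view-projection matrix, and 0.01 is an arbitrary display scale since world coordinates usually exceed 1.0:

```glsl
// Step 2 debug output: visualize the reconstructed world position.
vec4 worldPos = T_MVInverse * vec4(currentPosition, 1.0);
worldPos /= worldPos.w; // undo the perspective divide
gl_FragColor = vec4(worldPos.xyz * 0.01, 1.0); // scale down into a readable range
```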
3. Make it output previousPosition, which should look identical to currentPosition when the camera is not moving.
If you find an error here: The problem lies in the T_previousMVP matrix.
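For completeness, here's a sketch of step 3; T_previousMVP is assumed to be last frame's view-projection matrix, and with a stationary camera this should render identically to the step 1 output:

```glsl
// Step 3 debug output: reproject the world position with last frame's matrix.
vec4 worldPos = T_MVInverse * vec4(currentPosition, 1.0);
vec4 previousPosition = T_previousMVP * (worldPos / worldPos.w);
previousPosition /= previousPosition.w; // back to normalized device coordinates
gl_FragColor = vec4(previousPosition.xyz, 1.0);
```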
If you feel like going all out and implementing something much more complicated, you can go for this motion blur algorithm: http://graphics.cs.williams.edu/papers/MotionBlurI3D12/McGuire12Blur.pdf
I've implemented it myself, and it works great. It relies on per-pixel motion vectors, so it can handle any kind of motion; I calculate accurate motion vectors that take camera movement, object movement and skeleton animation into account. The cool thing about this algorithm is that it can actually blur over edges: it relies on a second, low-resolution motion vector texture that keeps track of the dominant motion vector of the pixels each of its texels covers. It does have trouble when motion in different directions causes the blur vectors to "overlap", though. I get a feeling that this is a bit too advanced, but it's a cool paper nonetheless.
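The low-resolution "dominant motion" pass could be sketched roughly like this; R_velocityTexture, u_texelSize and TILE_SIZE are assumed names, and this is only an outline of the idea, not the paper's exact implementation:

```glsl
// Each low-resolution texel stores the largest-magnitude motion vector
// among the full-resolution pixels it covers.
const int TILE_SIZE = 20;
uniform sampler2D R_velocityTexture; // full-resolution motion vectors
uniform vec2 u_texelSize;            // 1.0 / full-resolution texture size
varying vec2 texCoord0;              // top-left corner of this tile

void main() {
    vec2 maxVelocity = vec2(0.0);
    for (int y = 0; y < TILE_SIZE; y++) {
        for (int x = 0; x < TILE_SIZE; x++) {
            vec2 v = texture2D(R_velocityTexture,
                               texCoord0 + vec2(x, y) * u_texelSize).xy;
            if (dot(v, v) > dot(maxVelocity, maxVelocity))
                maxVelocity = v; // keep the longest vector in the tile
        }
    }
    gl_FragColor = vec4(maxVelocity, 0.0, 1.0);
}
```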