Let me rephrase the second question. What I meant to ask is: you can pass data to the GPU with VBOs, and I realise that it's possible to keep passing in new data with VBOs. However, since shaders can modify data that was passed in previously, why would anyone do that? Are there any limitations to modifying that data with a shader which would mean you'd have to pass new data in with a VBO? Also, what would be the point of passing in colour data in the first place when you can just change it directly on the GPU using a fragment shader?
i.e.: Is there any reason to use GL_STREAM_DRAW over shaders?
VBO updating and shaders are two completely different mechanisms designed to accomplish different goals. VBOs are permanent or temporary memory buffers stored in the GPU's video RAM. Shaders are small programs that can do whatever you want to each vertex, pixel or piece of geometry, but their results are not written back into the VBO (at least not without features like transform feedback), so any change a shader makes only lasts for that draw call. They cannot replace each other.
To answer your main question: GL_STREAM_DRAW is very useful for positioning things. A prime example is particle rendering. You have a few thousand particles that you move around on the CPU each frame, meaning that the updated particle data has to be re-uploaded each frame. That's exactly the usage pattern GL_STREAM_DRAW describes, so it's a great fit. Note that GL_STREAM_DRAW in itself is only a hint: as far as I know, most drivers take the value into account, but the final decision about how the VBO is stored is made with heuristics based on how you actually use it.

It is also possible to run the particle simulation on the GPU, in which case the data is already in VRAM and no uploading is necessary, but this can be very hard to do once more advanced stuff like collision detection comes into play.
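For the CPU-side case, here's a minimal sketch of the per-frame re-upload (the names `Particle`, `particleVbo` and `uploadAndDraw` are made up for illustration; the `glBufferData(..., NULL, ...)` call is the common "orphaning" idiom that asks the driver for fresh storage instead of stalling on the previous frame):

```cpp
#include <GL/gl.h> // in practice, your extension loader's header (e.g. GLEW/glad)

struct Particle { float position[3]; /* velocity, color, ... */ };

// Assumes the VBO was created once with glGenBuffers and that a VAO /
// attribute pointers for it are already set up.
void uploadAndDraw(GLuint particleVbo, const Particle* particles, GLsizei count)
{
    glBindBuffer(GL_ARRAY_BUFFER, particleVbo);
    // Orphan the old storage: the GPU can keep reading last frame's data
    // while we fill a fresh buffer. GL_STREAM_DRAW hints that we intend
    // to overwrite the contents every frame.
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Particle), nullptr, GL_STREAM_DRAW);
    glBufferSubData(GL_ARRAY_BUFFER, 0, count * sizeof(Particle), particles);
    glDrawArrays(GL_POINTS, 0, count);
}
```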
Shaders, on the other hand, can be used to complement particle rendering. For instance, geometry shaders can be used to expand points into point sprites (quads facing the screen with a texture). Let's say your particles are simple: they have a position, a color, and they also need texture coordinates for texturing. Without any shaders you'd need 4 vertices per particle (one for each corner), each containing a 3D position (3 floats), an RGBA color (4 bytes) and texture coordinates (only 1 or 2 bytes, but padded to 4 bytes for alignment). In total that's 12+4+4 = 20 bytes per corner, or 80 bytes per particle. That's a LOT, especially since the color is duplicated 4 times per particle and the texture coordinates are identical for every particle. This is where geometry shaders can help a lot: by instead uploading a single vertex that contains all the information we need to construct a quad in the geometry shader, we both save a lot of memory and offload much work from the CPU.
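To make the arithmetic concrete, the fat per-corner vertex might look like this (a hypothetical layout that matches the byte counts above):

```cpp
#include <cstdint>

// One corner of a particle quad in the naive, shader-less layout.
struct CornerVertex {
    float   position[3]; // 12 bytes
    uint8_t rgba[4];     //  4 bytes
    uint8_t uv[2];       //  2 bytes of texture coordinates...
    uint8_t pad[2];      //  ...padded to 4 bytes for alignment
};
static_assert(sizeof(CornerVertex) == 20, "20 bytes per corner, 80 per particle");
```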
To construct our quad we'd only need a 3D position (3 floats), a 2D size for the generated sprite/quad (2 floats) and a color (4 bytes), for a total of 12+8+4 = 24 bytes. The geometry shader then outputs 4 corners, each with generated texture coordinates. You could even throw in a rotation variable and calculate a rotation matrix in the geometry shader.
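Here's a sketch of what that could look like. The struct and the GLSL are illustrative, not the only way to do it; in particular, the in/out variable names are invented, and the quad is expanded in clip space for brevity (with a perspective projection you'd normally expand in view space before projecting):

```cpp
#include <cstdint>

// One compact vertex per particle: 24 bytes instead of 80.
struct ParticleVertex {
    float   position[3]; // 12 bytes
    float   size[2];     //  8 bytes (width/height of the sprite)
    uint8_t rgba[4];     //  4 bytes
};
static_assert(sizeof(ParticleVertex) == 24, "24 bytes per particle");

// Geometry shader: one point in, a 4-vertex triangle strip (the quad) out.
const char* kGeometryShaderSrc = R"(
#version 150
layout(points) in;
layout(triangle_strip, max_vertices = 4) out;

in  vec2 vSize[];  // passed through by the vertex shader
in  vec4 vColor[];
out vec2 gTexCoord;
out vec4 gColor;

void main() {
    vec4 center   = gl_in[0].gl_Position;
    vec2 halfSize = vSize[0] * 0.5;
    for (int i = 0; i < 4; ++i) {
        // Corners in strip order: (-1,-1), (1,-1), (-1,1), (1,1).
        vec2 corner = vec2((i & 1) == 0 ? -1.0 : 1.0,
                           (i & 2) == 0 ? -1.0 : 1.0);
        gl_Position = center + vec4(corner * halfSize, 0.0, 0.0);
        // Texture coordinates are generated here, so they never
        // have to be stored in the VBO.
        gTexCoord = corner * 0.5 + 0.5;
        gColor    = vColor[0];
        EmitVertex();
    }
    EndPrimitive();
}
)";
```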
TL;DR: VBOs are useful for static data stored permanently on the GPU and for CPU-generated data uploaded each frame. Shaders are useful for heavy work like lighting and other effects that have to be recalculated each frame, or generated from static data (or from a relatively small amount of dynamic data like a matrix, a skeleton or a light's position).