I cannot agree with that. A point stands for a specific location in space, whereas a vector is a direction that has no particular origin and can be located anywhere. That's also how mathematics and physics treat these objects.
A thing like a position is very different from a thing like a speed. A thing like a vertex coordinate is very different from a thing like a normal.
And because they transform differently (applying a translation to a vector makes absolutely no sense), they have to be treated differently.
Libraries that ignore the difference sometimes end up with very ugly code when it comes to transforming normals...
When you use different classes for them you have additional code to maintain...
Making a distinction between points and vectors by using different classes is actually new to me. I know two solutions; both make no distinction at all, both call the thing (which is essentially a 3-tuple) a vector, and neither produces any 'dirty' code if you know what you are doing.
Solution no. 1 (which I'm using in my engine): There is very, very seldom a case where you do not know whether a vector means a position or a direction. Actually I have not yet seen any such case. There are simply two transformation functions, one for points and one for directions.
Solution no. 2 (which OpenGL uses): Very nice, but requires additional CPU time. All vectors are stored as 4-tuples, where the fourth value is 1 for positions and 0 for directions. A transformation is stored as a 4x4 matrix where the rightmost column contains the translation part of the transform. Advantage: You can store the translation in the matrix as well. Also, computing the signed distance of a point to a plane means simply computing the dot product of two 4-tuples.
BTW the ugly code when transforming normals may have another reason: You cannot transform normals the same way as you transform vertices. That only works for orthogonal transformations (pure rotations); in general you have to use the inverse transpose of the matrix, otherwise the normals are tilted towards the surface during transformation.