No, these files are not merged by Hadoop: you get one output file per reduce task.
If you need the output as input for a subsequent job, don't worry about the separate files; simply specify the entire output directory as the input path of the next job.
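As a sketch, reusing the output path from the command further down: the jar and class names here (next-job.jar, NextJob) are hypothetical placeholders, and the exact arguments depend on how your driver class parses them.

```shell
# Hypothetical jar/class names; the output directory of the previous
# job is passed unchanged as the input path of the next one.
hadoop jar next-job.jar NextJob \
    /some/where/on/hdfs/job-output \
    /some/where/on/hdfs/next-job-output
```

By default, FileInputFormat picks up every part-r-* file in that directory and skips hidden entries such as _SUCCESS, so no manual merging is needed between jobs.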
If you do need the data outside of the cluster, I usually merge the files at the receiving end while pulling them off the cluster, i.e. something like this:
hadoop fs -cat /some/where/on/hdfs/job-output/part-r-* > TheCombinedResultOfTheJob.txt