HDFS applications need a write-once-read-many access model for files. A file once created, written, and closed need not be changed. This assumption simplifies data coherency issues and enables high throughput data access. A MapReduce application or a web crawler application fits perfectly with this model.
(Source: HDFS Design)
HDFS is built around the idea that files are rarely updated. Rather, they are read as data for some calculation, and possibly additional data is appended to files from time to time. For example, an airline reservation system would not be suitable for a DFS, even if the data were very large, because the data is changed so frequently.
(Source: Mining of Massive Datasets)
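To make the access pattern the two quotes describe concrete, here is a minimal sketch using the Hadoop FileSystem API in Java: a file is created, written, and closed once; after that it can be read any number of times, and the only supported mutation is appending new data at the end. The file path and record contents are made up for illustration, and append() is only supported on file systems that implement it (such as HDFS with appends enabled).

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class WriteOnceReadMany {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        FileSystem fs = FileSystem.get(conf);
        Path path = new Path("/logs/crawl-2024-01-01.txt"); // hypothetical path

        // 1. Write once: create the file, stream records into it, close it.
        //    After close() the contents are fixed.
        try (FSDataOutputStream out = fs.create(path)) {
            out.writeBytes("page-1\thttp://example.com\n");
        }

        // 2. Read many: the closed file can be opened and scanned any number
        //    of times (this is what MapReduce tasks do with their input splits).
        try (BufferedReader in =
                 new BufferedReader(new InputStreamReader(fs.open(path)))) {
            System.out.println(in.readLine());
        }

        // 3. The only mutation offered is appending new data at the end;
        //    there is no call for overwriting bytes in the middle of a file.
        try (FSDataOutputStream out = fs.append(path)) {
            out.writeBytes("page-2\thttp://example.org\n");
        }
    }
}
```

Because existing bytes are never rewritten in place, block replicas stay consistent without coordinating concurrent writers, which is the data-coherency simplification the HDFS design notes refer to.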
Also see: Why HDFS is write once and read multiple times?