They mentioned using Hadoop for file storage - perhaps they are just using HDFS and not MapReduce.
Otherwise, Spark is relatively new, so they might have some older infra/jobs in Hadoop.
Storm and Spark Streaming work a little differently (true record-at-a-time streaming vs. "micro-batching") and apparently have different use cases, but I'm not totally sure what the practical differences are here either.
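To make the micro-batching idea concrete, here's a minimal sketch of a Spark Streaming word count. The batch interval passed to StreamingContext is the key difference from Storm: records are grouped into small batches (one RDD every 2 seconds here) instead of being processed one at a time as they arrive. The socket source on localhost:9999 and the 2-second interval are just illustrative assumptions, not anything from their setup.

```python
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext("local[2]", "MicroBatchSketch")
# Spark Streaming collects incoming records into small batches -- here,
# one RDD every 2 seconds -- rather than handling each record individually.
ssc = StreamingContext(sc, batchDuration=2)

# Hypothetical text source; swap in Kafka/Flume/etc. in a real pipeline.
lines = ssc.socketTextStream("localhost", 9999)

counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))
counts.pprint()  # print each micro-batch's counts as it completes

ssc.start()
ssc.awaitTermination()
```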