Hadoop, XML and Spark
Hadoop is predominantly known for its Hadoop Distributed File System (HDFS), in which data is distributed across multiple systems, and for its code for running MapReduce tasks (Rathbone, 2013). MapReduce has two phases: a map phase, which transforms the input data into an intermediate format and splits it across a group of compute nodes, and a reduce phase, which aggregates the data on each node so that combining the results from all the nodes provides the answer sought (Eini, 2010).
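The two phases above can be illustrated with a minimal pure-Python word-count sketch (the canonical MapReduce example); this is an illustration of the model, not Hadoop's actual Java implementation:

```python
from collections import defaultdict

def map_phase(lines):
    """Map: transform each input line into intermediate (word, 1) pairs,
    which Hadoop would then split across compute nodes."""
    for line in lines:
        for word in line.split():
            yield (word, 1)

def reduce_phase(pairs):
    """Reduce: aggregate the counts for each word into the final answer."""
    counts = defaultdict(int)
    for word, count in pairs:
        counts[word] += count
    return dict(counts)

lines = ["big data", "big hadoop data"]
result = reduce_phase(map_phase(lines))
# result == {"big": 2, "data": 2, "hadoop": 1}
```

In a real cluster the intermediate pairs are shuffled so that all pairs with the same key land on the same node before the reduce phase runs.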
XML documents represent a whole data file, which contains markup, elements, and nodes (Lublinsky, Smith, & Yakubovich, 2013; Myer, 2005):
- XML markups are tags that help describe the data's start and end points as well as its properties/attributes, and are enclosed by < and >
- XML elements are data values, encapsulated by an opening <tag> and a closing </tag>
- XML nodes are part of the hierarchical structure of a document that contains a data element and its tags
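The three concepts above can be seen in a small, hypothetical XML document parsed with Python's standard library:

```python
import xml.etree.ElementTree as ET

# A hypothetical document: <patient id='1'> is markup carrying an
# attribute, "Jane" is an element's data value, and each tag pair
# forms a node in the document's hierarchical structure.
doc = "<patients><patient id='1'><name>Jane</name></patient></patients>"

root = ET.fromstring(doc)          # root node of the hierarchy
patient = root.find("patient")     # a child node
print(patient.get("id"))           # attribute from the markup: "1"
print(patient.find("name").text)   # the element's data value: "Jane"
```

Note how every piece of data is wrapped in an opening and closing tag, which is exactly the redundancy discussed next.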
Unfortunately, the syntax and tags are redundant, which can consume huge numbers of bytes and slow down processing speeds (Hiroshi, 2007).
Five questions must be asked before designing an XML data document (Font, 2010):
- Will this document be part of a solution?
- Will this document have design standards that must be followed?
- What part may change over time?
- To what extent is human readability or machine readability important?
- Will there be a massive amount of data? Does file size matter?
All XML data documents should be versioned, and key stakeholders should be involved in the XML data design process (Font, 2010). XML is a machine- and human-readable data format (Smith, 2012). With the goal of using XML for MapReduce, we need to assume that we must map and reduce huge files (Eini, 2010; Smith, 2012). Unfortunately, XML doesn't include sync markers in its data format, and therefore MapReduce doesn't support XML natively (Smith, 2012). However, Smith (2012) and Rohit (2013) used the XmlInputFormat class from Mahout to read XML input data into HBase. Smith (2012) stated that Mahout's code needs to know the exact sequence of XML start and end tags that will be searched for, and that elements with attributes are hard for Mahout's XML library to detect and parse.
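The idea behind that approach is to scan the raw input for a configured start tag and end tag and treat each matching span as one record. A minimal pure-Python sketch of that idea (not Mahout's actual Java code; the `<event>` tags are hypothetical):

```python
def split_xml_records(text, start_tag, end_tag):
    """Yield each span between a configured start tag and end tag,
    mimicking the record-splitting idea behind Mahout's XmlInputFormat."""
    pos = 0
    while True:
        start = text.find(start_tag, pos)
        if start == -1:
            return
        end = text.find(end_tag, start)
        if end == -1:
            return
        end += len(end_tag)
        yield text[start:end]
        pos = end

xml = "<log><event>a</event><event>b</event></log>"
records = list(split_xml_records(xml, "<event>", "</event>"))
# records == ["<event>a</event>", "<event>b</event>"]
```

The sketch also makes Smith's (2012) limitation concrete: a start tag written with attributes, such as `<event type='x'>`, would not match the fixed string `<event>`, which is why elements with attributes are hard for this style of tag matching.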
Apache Spark started from a working group inside and outside of UC Berkeley searching for an open-source batch processing model that, unlike MapReduce, supports multi-pass algorithms (Zaharia et al., 2012). Spark is faster than Hadoop on iterative operations by 25x-40x for very small datasets and 3x-5x for relatively large datasets; however, Spark is more memory intensive, and its speed advantage disappears as available memory approaches zero with very large datasets (Gu & Li, 2013). Apache Spark, on its website, boasts that it can run programs 100x faster than Hadoop's MapReduce in memory (Spark, n.d.). Spark outperforms Hadoop by 10x on iterative machine learning jobs (Gu & Li, 2013). Also, Spark runs 10x faster than Hadoop on disk (Spark, n.d.). Gu and Li (2013) recommend that if speed to the solution is not an issue but memory is, then Spark shouldn't be prioritized over Hadoop; however, if speed to the solution is critical and the job is iterative, Spark should be prioritized.
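The trade-off Gu and Li describe comes down to where intermediate data lives between passes: Hadoop re-reads its input from disk on every MapReduce pass, while Spark can keep the dataset cached in memory. A minimal pure-Python sketch of the idea (an illustration only, not either system's implementation; `load_dataset` is a hypothetical stand-in for an expensive disk read):

```python
def load_dataset():
    """Stand-in for an expensive read-and-parse from disk."""
    return [int(x) for x in "1 2 3 4 5".split()]

def iterate_without_cache(passes):
    """Hadoop-style: every pass re-loads the input from disk."""
    total = 0
    for _ in range(passes):
        data = load_dataset()      # repeated I/O cost per pass
        total += sum(data)
    return total

def iterate_with_cache(passes):
    """Spark-style: load once, keep the dataset in memory across passes."""
    data = load_dataset()          # one I/O cost, then cached
    total = 0
    for _ in range(passes):
        total += sum(data)         # each pass reuses the cached dataset
    return total

# Both produce the same answer; only the number of loads differs,
# which is why iterative jobs favor the in-memory approach.
assert iterate_without_cache(3) == iterate_with_cache(3) == 45
```

This also shows why Spark's advantage shrinks as memory runs out: once the cached dataset no longer fits, it must spill back to disk and the repeated-load cost returns.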
Big data can be defined as any set of data that has high velocity, volume, and variety, also known as the 3Vs (Davenport & Dyche, 2013; Fox & Do, 2013; Podesta, Pritzker, Moniz, Holdren, & Zients, 2014). What is considered big data changes over time: what counted as big data in 2002 is not considered big data in 2016, due to advancements made in technology (Fox & Do, 2013). Then there is data-in-motion, which can be defined as a part of data velocity that deals with the speed of data coming in from multiple sources as well as the speed of data traveling between systems (Katal, Wazid, & Goudar, 2013). Essentially, data-in-motion can encompass data streaming, data transfer, or real-time data. However, there are challenges and issues that have to be addressed before conducting real-time analysis on data streams (Katal et al., 2013; Tsinoremas et al., n.d.).
For data-driven decisions, it is not enough to analyze the relevant data; one must also select relevant visualizations of that data to enable those decisions (eInfochips, n.d.). There are many ways to visualize data that highlight key facts succinctly and with style: tables and rankings, bar charts, line graphs, pie charts, stacked bar charts, tree maps, choropleth maps, cartograms, pinpoint maps, and proportional symbol maps (CHCF, 2014). These plots, charts, maps, and graphs can be animated, static, or interactive, and can be presented as standalone images, dashboards, scorecards, or infographics (CHCF, 2014; eInfochips, n.d.).
Artificial Intelligence (AI)
Artificial Intelligence (AI) is an embedded technology built on current infrastructure (i.e., supercomputers), big data, and machine learning algorithms (Cyranoski, 2015; Power, 2015). AI can provide tremendous value because it builds thousands of models and correlations automatically in one week, which used to take a few quantitative data scientists years to do (Dewey, 2013; Power, 2015). Unfortunately, the rules AI creates from 50K variables lack substantive human meaning, or the "why" behind them, making the results hard to interpret (Power, 2015).
“Machines can excel at frequent high-volume tasks. Humans can tackle novel situations,” said Anthony Goldbloom (2016). Thus, the fundamental question decision makers need to ask is how much of a decision reduces to frequent high-volume tasks and how much to novel situations (Goldbloom, 2016). If the ratio is skewed toward high-volume tasks, then AI could be a candidate to replace decision makers; if the ratio is evenly split, then AI could augment and assist decision makers; and if the ratio is skewed toward novel situations, then AI wouldn't help decision makers. These novel situations are equivalent to today's tough challenges (McAfee, 2013). Finally, Meetoo (2016) warned that it doesn't matter how intelligent or strategic a job may be: if there is enough data on that job to create accurate rules, it can be automated as well, because machine learning can run millions of simulations against itself to generate huge volumes of data to learn from.
- California HealthCare Foundation. (2014). Worth a thousand words: How to display health data. Retrieved from http://www.chcf.org/~/media/MEDIA%20LIBRARY%20Files/PDF/PDF%20W/PDF%20WorthThousandWordsDataViz.pdf
- Cyranoski, D. (2015). Exclusive: Genomics Pioneer Jun Wang on his new AI venture. Nature. Retrieved from http://www.nature.com/news/exclusive-genomics-pioneer-jun-wang-on-his-new-ai-venture-1.18091
- Davenport, T. H., & Dyche, J. (2013). Big Data in Big Companies. International Institute for Analytics, (May), 1–31.
- Dewey, D. (2013). The long-term future of AI (and what we can do about it). TEDxVienna. Retrieved from https://www.youtube.com/watch?v=CK5w3wh4G-M
- eInfochips. (n.d.). Revolutionizing the healthcare industry with big data, analytics and visualizations. Retrieved from https://www.einfochips.com/whitepaper/Revolutionizing-the-Healthcare-Industry-with-Big-Data-Analytics-and-Visualization.pdf
- Eini, O. (2010). Map/Reduce- a visual explanation. Retrieved from https://ayende.com/blog/4435/map-reduce-a-visual-explanation
- Font, F. (2010). Designing XML data documents. Room4me.com Software LLC. Retrieved from http://www.room4me.com/whitepapers/Designing_XML_Data_Documents_FrankFont1a.pdf
- Fox, S., & Do, T. (2013). Getting real about Big Data: applying critical realism to analyse Big Data hype. International Journal of Managing Projects in Business, 6(4), 739–760. http://doi.org/10.1108/IJMPB-08-2012-0049
- Goldbloom, A. (2016). The jobs we’ll lose to machines – and the ones we won’t. TED Talks. Retrieved from https://www.youtube.com/watch?v=gWmRkYsLzB4
- Gu, L., & Li, H. (2013). Memory or time: Performance evaluation for iterative operation on Hadoop and Spark. In High Performance Computing and Communications & 2013 IEEE International Conference on Embedded and Ubiquitous Computing (HPCC_EUC), 2013 IEEE 10th International Conference on (pp. 721-727). IEEE.
- Hiroshi (2007). Advantages & disadvantages of XML. Retrieved from http://www.techmynd.com/advantages-disadvantages-of-xml/
- Katal, A., Wazid, M., & Goudar, R. H. (2013, August). Big data: Issues, challenges, tools and good practices. In Contemporary Computing (IC3), 2013 Sixth International Conference on (pp. 404-409). IEEE.
- Lublinsky, B., Smith, K., & Yakubovich, A. (2013). Professional Hadoop Solutions. Wrox. VitalBook file.
- McAfee, A. (2013). What will future jobs look like? TED Talks. Retrieved from https://www.youtube.com/watch?v=cXQrbxD9_Ng
- Meetoo, A. (2016). Jobs of the future and how we can prepare for them. TEDx Talks. Retrieved from https://www.youtube.com/watch?v=OI5eO2CSib8
- Myer, T. (2005). A really, really, really good introduction to XML. Retrieved from https://www.sitepoint.com/really-good-introduction-xml/
- Podesta, J., Pritzker, P., Moniz, E. J., Holdren, J., & Zients, J. (2014). Big Data: Seizing Opportunities. Executive Office of the President of USA, 1–79.
- Power, B. (2015). Artificial Intelligence is almost ready for business. Harvard Business Review. Retrieved from https://hbr.org/2015/03/artificial-intelligence-is-almost-ready-for-business
- Rathbone, M. (2013). A beginners guide to Hadoop. Retrieved from http://blog.matthewrathbone.com/2013/04/17/what-is-hadoop.html
- Smith, C. (2012). Hadoop in Practice. Retrieved from https://dzone.com/articles/hadoop-practice
- Spark. (n.d.). Lightning-fast cluster computing. Retrieved from http://spark.apache.org
- Tsinoremas, N. F., Zysman, J., Mader, C., Kirtma, B., & Blaire, J. (n.d.). Data in motion: A new paradigm in research data lifecycle management. Center for Computational Science: University of Miami.
- Zaharia, M., Chowdhury, M., Das, T., Dave, A., Ma, J., Mccauley, M., … & Stoica, I. (2012). Fast and interactive analytics over Hadoop data with Spark. USENIX Login, 37(4), 45-51.