IN CASE YOU MISSED IT!

BigData News Wednesday, May 16: Advisor, Data & more…

[vc_row] [vc_column] [vc_column_text]

What’s new?

[/vc_column_text] [/vc_column][/vc_row] [vc_row el_id=”__edition_id_4b09eff0_5923_11e8_a287_00″] [vc_column width=”1/2″] [vc_separator] [vc_column_text]

Enterprise Architecture Advisor

[/vc_column_text] [vc_column_text el_class=”topfeed-summary-list”]

[/vc_column_text] [vc_column_text el_class=”topfeed-tags”] Tags: Advisor, Enterprise Architecture, [/vc_column_text] [/vc_column] [vc_column width=”1/2″] [vc_separator] [vc_column_text el_class=”topfeed-tweet”]

[/vc_column_text] [vc_column_text el_class=”topfeed-embedly”] Enterprise Architecture Advisor [/vc_column_text] [/vc_column] [/vc_row] [vc_row el_id=”_content_factoid_give_big_data_perspective__”] [vc_column width=”1/2″] [vc_separator] [vc_column_text]

Factoid to Give Big-Data a Perspective

[/vc_column_text] [vc_column_text el_class=”topfeed-summary-list”]

  • What will Big-Data convey with some of its startling factoids?
  • I recently searched Google to find some good factoids.

[/vc_column_text] [vc_column_text el_class=”topfeed-tags”] Tags: data, big data, poor data, current global storage, global storage capacity [/vc_column_text] [/vc_column] [vc_column width=”1/2″] [vc_separator] [vc_column_text el_class=”topfeed-tweet”]

[/vc_column_text] [vc_column_text el_class=”topfeed-embedly”] Factoid to Give Big-Data a Perspective [/vc_column_text] [/vc_column] [/vc_row] [vc_row el_id=”_datacenter_more_data_refining_capacity_needed_”] [vc_column width=”1/2″] [vc_separator] [vc_column_text]

More Data Refining Capacity Needed

[/vc_column_text] [vc_column_text el_class=”topfeed-summary-list”]

  • While the exponents make sense to me mathematically, it is difficult to comprehend the magnitude of the data explosion coming in the next few years.
  • While only 1.3 zettabytes will make it back to the data center, the amount of information will easily overwhelm the best efforts to make sense of the data glut.
  • Artificial intelligence, machine learning, and deep learning are viewed as innovative ways to absorb large amounts of data.
  • In fact, even within the last few months, numerous organizations have submitted results to Stanford DAWNBench, a recently proposed deep learning benchmark, where deep learning training time has been reduced by several orders of magnitude.
  • Unfortunately, the data sets for these benchmarks are simply tiny, especially when compared with the anticipated data glut indicated in the Cisco Global Cloud Index.
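To make the 1.3-zettabyte figure from the article above more concrete, a quick back-of-the-envelope calculation helps. The 1.3 ZB number comes from the summary; the world-population figure (~8 billion) is an assumption added here purely for scale:

```python
# Back-of-the-envelope sketch: how much is 1.3 zettabytes per person?
# The 1.3 ZB figure is from the article; the population is an assumed
# round number for illustration only.

ZETTABYTE = 10**21   # bytes, decimal (SI) definition
GIGABYTE = 10**9     # bytes, decimal (SI) definition

data_center_bytes = 1.3 * ZETTABYTE
world_population = 8_000_000_000  # assumed ~8 billion, for scale only

per_person_gb = data_center_bytes / world_population / GIGABYTE
print(f"{per_person_gb:.1f} GB per person")  # → 162.5 GB per person
```

Even spread across every person on the planet, that is over 160 GB each, which gives some sense of why existing "data refining" capacity falls short.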

[/vc_column_text] [vc_column_text el_class=”topfeed-tags”] Tags: Cisco Global Cloud, Global Cloud Index, data, deep learning, data glut [/vc_column_text] [/vc_column] [vc_column width=”1/2″] [vc_separator] [vc_column_text el_class=”topfeed-tweet”]

[/vc_column_text] [vc_column_text el_class=”topfeed-embedly”] More Data Refining Capacity Needed [/vc_column_text] [/vc_column] [/vc_row]

Top Big Data Courses

The Ultimate Hands-On Hadoop - Tame your Big Data! (31,889 students enrolled)

By Sundog Education by Frank Kane
  • Design distributed systems that manage "big data" using Hadoop and related technologies.
  • Use HDFS and MapReduce for storing and analyzing data at scale.
  • Use Pig and Spark to create scripts to process data on a Hadoop cluster in more complex ways.
  • Analyze relational data using Hive and MySQL.
  • Analyze non-relational data using HBase, Cassandra, and MongoDB.
  • Query data interactively with Drill, Phoenix, and Presto.
  • Understand how Hadoop clusters are managed by YARN, Tez, Mesos, Zookeeper, Zeppelin, Hue, and Oozie.
  • Publish data to your Hadoop cluster using Kafka, Sqoop, and Flume.
  • Consume streaming data using Spark Streaming, Flink, and Storm.

Learn more.


Taming Big Data with MapReduce and Hadoop - Hands On! (13,894 students enrolled)

By Sundog Education by Frank Kane
  • Understand how MapReduce can be used to analyze big data sets.
  • Write your own MapReduce jobs using Python and MRJob.
  • Run MapReduce jobs on Hadoop clusters using Amazon Elastic MapReduce.
  • Chain MapReduce jobs together to analyze more complex problems.
  • Analyze social network data using MapReduce.
  • Analyze movie ratings data using MapReduce and produce movie recommendations with it.
  • Understand other Hadoop-based technologies, including Hive, Pig, and Spark.
  • Understand what Hadoop is for, and how it works.

Learn more.