While the exponents make sense to me mathematically, it is difficult to comprehend the magnitude of the data explosion coming in the next few years.
Even though only 1.3 zettabytes of that traffic will make it back to the data center, the volume of information will easily overwhelm the best efforts to make sense of the data glut.
Artificial intelligence, machine learning, and deep learning are viewed as innovative ways to absorb large amounts of data.
In fact, within just the last few months, numerous organizations have submitted results to Stanford DAWNBench, a recently proposed deep learning benchmark, cutting deep learning training times by several orders of magnitude.
Unfortunately, the data sets for these benchmarks are simply tiny, especially when compared with the data glut anticipated in the Cisco Global Cloud Index.
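To put that mismatch in numbers, here is a minimal back-of-the-envelope sketch in Python. It assumes an ImageNet-scale benchmark dataset of roughly 150 GB (that figure, and treating ImageNet as representative of the DAWNBench workloads, are illustrative assumptions, not numbers from the index itself):

```python
# Back-of-the-envelope comparison of the Cisco data-center figure
# against a single benchmark-scale training set.
ZETTABYTE = 10**21   # bytes
GIGABYTE = 10**9     # bytes

data_center_traffic = 1.3 * ZETTABYTE   # the 1.3 ZB figure cited above
benchmark_dataset = 150 * GIGABYTE      # assumed ImageNet-scale dataset size

ratio = data_center_traffic / benchmark_dataset
print(f"Projected data glut is roughly {ratio:.1e}x one benchmark dataset")
# -> on the order of 10^10, i.e. billions of ImageNet-sized datasets
```

Even under these rough assumptions, the gap is about ten orders of magnitude, which is the scale problem the benchmark results do not yet address.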