Whether for security, privacy, network and infrastructure resilience, or quality-of-service analysis, processing large amounts of data in various formats and from diverse sources is a growing challenge. This is further compounded by research challenges including (a) establishing the significance of the analysis results, (b) validating those results (ensuring the analysis has not missed any crucial parameter or data), (c) understanding the relationship between the different actions, parameters, and measurement values that cause an alarm (the sequence of events), and (d) designing predictive and pre-emptive algorithms that can detect and remedy problems before a negative event occurs. This research thread looks not only at deploying deep learning and machine learning in large-scale enterprise environments, where substantial resources can be dedicated, but also in edge-computing frameworks, where resources are restricted. The objective is to build an ecosystem in which most autonomous decisions are taken locally and only a small subset of the analysis is pushed to the back office. Furthermore, we focus on collaborative autonomous systems, where the participating systems learn from one another: if one system comes under attack, neighbouring systems are notified of the attack and of possible countermeasures.
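The collaborative scheme above can be sketched in miniature. The code below is a hedged illustration, not the project's actual implementation: the class and method names (`EdgeNode`, `observe`, `receive`), the anomaly-score thresholds, and the "block" countermeasure are all hypothetical, chosen only to show local decisions, escalation of a small subset, and neighbour notification.

```python
from dataclasses import dataclass


@dataclass
class Alert:
    """Attack notification shared between neighbouring systems (hypothetical format)."""
    source: str          # name of the node that detected the attack
    signature: str       # identifier of the observed attack pattern
    countermeasure: str  # action the detecting node took


class EdgeNode:
    """Hypothetical edge node: decides locally, escalates only high-severity
    events to the back office, and shares alerts with its neighbours."""

    def __init__(self, name, anomaly_threshold=0.8, escalate_threshold=0.95):
        self.name = name
        self.anomaly_threshold = anomaly_threshold    # local mitigation trigger
        self.escalate_threshold = escalate_threshold  # back-office escalation trigger
        self.neighbours = []
        self.blocklist = set()   # countermeasures adopted so far
        self.escalated = []      # the small subset pushed to the back office

    def connect(self, other):
        """Link two nodes so they exchange alerts in both directions."""
        self.neighbours.append(other)
        other.neighbours.append(self)

    def observe(self, signature, anomaly_score):
        """Local autonomous decision: most events are handled entirely here."""
        if signature in self.blocklist:
            return "blocked"  # pre-emptively blocked thanks to a peer's alert
        if anomaly_score >= self.anomaly_threshold:
            self.blocklist.add(signature)
            alert = Alert(self.name, signature, "block")
            for n in self.neighbours:       # inform neighbouring systems
                n.receive(alert)
            if anomaly_score >= self.escalate_threshold:
                self.escalated.append(alert)  # only high-severity cases go upstream
            return "mitigated"
        return "ok"

    def receive(self, alert):
        """Learn from a neighbour's experience before being attacked oneself."""
        if alert.countermeasure == "block":
            self.blocklist.add(alert.signature)
```

In use, a node that mitigates an attack locally (`a.observe("sig1", 0.9)`) leaves its neighbour `b` already holding `"sig1"` in its blocklist, so `b` blocks the same pattern pre-emptively even at a low anomaly score; only events above the escalation threshold reach `a.escalated`.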