Returning to parallel computing with Apache Spark has been insightful, especially seeing the McColl and Valiant BSP (Bulk Synchronous Parallel) model gain mainstream adoption beyond GPUs. This structured approach to parallel computation, with its emphasis on synchronized supersteps, offers a practical framework for diverse parallel architectures.

While setting up Spark on clusters takes effort and introduces overhead, ongoing optimizations should improve its efficiency over time: improvements in data handling, memory management, and query execution all aim to streamline parallel processing.

A GitHub repository of Spark snippets has been created as a resource for practical examples. As Apache Spark continues to evolve alongside HDFS (the Hadoop Distributed File System), the repository is intended to showcase solutions that leverage their combined strengths for scalable data processing.
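The superstep idea at the heart of BSP can be sketched in plain Python with a barrier between phases. This is a toy tree reduction, not how Spark actually schedules work (Spark inserts its synchronization at shuffle and stage boundaries), and the helper name `bsp_sum` is made up for illustration:

```python
import threading

# Toy BSP sketch: each "worker" thread computes locally, reads a partner's
# value (the communication phase), then waits at a barrier before the next
# superstep. bsp_sum is an illustrative helper, not a Spark API.
def bsp_sum(values):
    n = len(values)
    cells = list(values)              # one cell of shared state per worker
    barrier = threading.Barrier(n)

    def worker(i):
        step = 1
        while step < n:
            # Communication phase: read the partner's current value.
            incoming = None
            if i % (2 * step) == 0 and i + step < n:
                incoming = cells[i + step]
            barrier.wait()            # everyone finishes reading first
            # Local computation phase.
            if incoming is not None:
                cells[i] += incoming
            barrier.wait()            # superstep boundary: global sync
            step *= 2

    threads = [threading.Thread(target=worker, args=(i,)) for i in range(n)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return cells[0]                   # worker 0 ends up with the total

print(bsp_sum([1, 2, 3, 4, 5, 6, 7, 8]))  # 36
```

Each pass of the `while` loop is one superstep: all workers read, all synchronize, all write, all synchronize again. That lock-step structure is what makes BSP algorithms easy to reason about across very different parallel architectures.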
Looking at more resources online for Python for Data Science; there are many good ones available. The main tools, of course, are NumPy, Pandas, Matplotlib, and Scikit-Learn, which has some amazing tools. Kaggle, for instance, hosts Data Science contests, but it is worth installing a local system like Jupyter Notebook to speed things up, since the Kaggle editor can lag and take some time to run even on small datasets. The newer DataCamp has some neat tutorials and a simple app for doing daily exercises on your mobile device. Here is the Python Data Science Handbook. Really useful. A short tutorial: Learn Python for Data Science, a fun read. A list of cool DataSci tutorials is here, and another on how to get started with Python for DS. Will add more later.
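As a tiny taste of the first two of those tools working together (the dataset and column names here are invented purely for illustration):

```python
import numpy as np
import pandas as pd

# NumPy supplies the fast array; Pandas wraps it in a labeled table.
# All names and values below are made up for the example.
scores = pd.DataFrame({
    "student": ["ana", "ben", "ana", "ben"],
    "subject": ["math", "math", "lit", "lit"],
    "score": np.array([90, 75, 85, 80]),
})

# Average score per student -- the kind of one-liner Pandas makes easy.
means = scores.groupby("student")["score"].mean()
print(means["ana"])  # 87.5
```

A few lines like this in a local Jupyter Notebook run instantly, which is exactly why a local setup beats waiting on a laggy online editor for small experiments.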