
My Growing Collection of Tech Notes: Drupal, PHP, Linux, Symfony, and More

I’ve been keeping a running set of technical notes on Tumblr as I work through different web development projects. Over time it has turned into a personal reference library that covers Drupal development, PHP programming, Linux server setup, and the Symfony framework. Most of these notes come from real problems I’ve solved while building or maintaining websites, so the collection keeps expanding as I learn new tools and techniques.

A large portion of my notes focuses on Drupal because I spend a lot of time working with Drupal 7, Drupal 8, and the transition toward Drupal 9. I’ve documented everything from module development and data migration to caching, performance optimization, Varnish configuration, and headless Drupal workflows. Since Drupal 8 and Drupal 9 are built on Symfony, I also keep notes on Symfony concepts and PHP best practices that help improve development speed and code quality.
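
To give a sense of what those Drupal notes contain, here is a minimal sketch of the Drupal 8 Cache API pattern that most of my caching and performance entries revolve around. The module name mymodule, the cache ID, and the placeholder data are hypothetical; the point is the check-the-cache, rebuild-on-miss, tag-for-invalidation flow.

```php
<?php

// A minimal sketch of the Drupal 8 Cache API. The module name "mymodule",
// the cache ID, and the placeholder data are hypothetical; only the API
// calls themselves are real.

use Drupal\Core\Cache\Cache;

/**
 * Returns a report, caching it until node content changes.
 */
function mymodule_get_report() {
  $cid = 'mymodule:report';

  // Serve the cached copy if it exists and has not been invalidated.
  if ($cache = \Drupal::cache()->get($cid)) {
    return $cache->data;
  }

  // Cache miss: rebuild the data (placeholder for a real query).
  $data = [
    'generated' => \Drupal::time()->getRequestTime(),
  ];

  // Store it permanently, tagged so that creating, saving, or deleting
  // any node invalidates it automatically.
  \Drupal::cache()->set($cid, $data, Cache::PERMANENT, ['node_list']);

  return $data;
}
```

Tagging the entry with node_list means Drupal clears it whenever any node is created, saved, or deleted, which is usually what a listing-style report needs.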

I also write down what I learn while setting up and managing Linux servers. Many of these entries involve Ubuntu 16.04, including installing essential software, configuring GNOME Flashback, setting up Webmin, enabling SSL on Apache, and improving performance with Memcached and PHP OpCode caching. As I explore more DevOps tools, I’ve added notes on Docker, Composer, Drush, and other utilities that make modern development smoother.
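
As a small example of the Memcached side of those notes, here is a rough PHP sketch of reading through a cache backed by a local memcached daemon, using the memcached extension. The host, port, key, and placeholder value are assumptions for illustration rather than anything from a specific server.

```php
<?php

// A rough sketch of using the memcached PECL extension against a local
// daemon on its default port. The key and the placeholder value are made
// up for illustration.
$memcached = new Memcached();
$memcached->addServer('127.0.0.1', 11211);

$key = 'expensive_result';
$value = $memcached->get($key);

if ($value === false && $memcached->getResultCode() === Memcached::RES_NOTFOUND) {
  // Cache miss: compute the value (placeholder work) and keep it for 10 minutes.
  $value = str_repeat('x', 1024);
  $memcached->set($key, $value, 600);
}

echo strlen($value) . " bytes served\n";
```

PHP OpCode caching, by contrast, needs no code at all: OPcache is switched on in php.ini (opcache.enable=1) and caches compiled scripts transparently.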

This list keeps growing as I continue learning about backend development, server optimization, and emerging technologies like augmented reality toolkits. Here are the topics I’ve documented so far:

- Drupal 7 and Drupal 8 module development and data migration, plus the upgrade path to Drupal 9
- Caching, performance optimization, Varnish configuration, and headless Drupal workflows
- Symfony concepts and PHP best practices
- Ubuntu 16.04 server setup: essential software, GNOME Flashback, Webmin, and SSL on Apache
- Memcached and PHP opcode caching
- Docker, Composer, Drush, and other DevOps utilities