
Drupal 8 Performance Testing with Drush and the Site Audit Module

This blog post, written by Darrell Ulm in May 2016, walks through how to run a Drupal 8 performance test using Drush and the Drupal site_audit module. It is a helpful resource for anyone who wants to understand the health, configuration, and overall performance of a Drupal installation.

The main idea is that Drupal 8 can use the same Drush-driven tool as earlier versions, the site_audit module, to analyze how well your site is running. The module generates detailed reports that cover best practices, caching configuration, unused content types, and database statistics. It also provides insights into installed modules, security settings, user accounts, views, and Drupal Watchdog log entries.
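As a rough sketch of the workflow, the commands below show how such an audit is typically run from Drush. The exact command names and aliases depend on the site_audit and Drush versions installed, so treat them as assumptions rather than a definitive recipe:

```shell
# Download and enable the site_audit module (Drush 8 style):
drush dl site_audit
drush en site_audit -y

# Run every audit report at once ('aa' is the audit-all alias
# in the site_audit branches of this era -- an assumption here):
drush aa

# Write a detailed, styled HTML report instead of plain terminal text:
drush aa --html --bootstrap --detail > site-audit-report.html

# Or run a single check, for example the caching report ('ac' alias):
drush ac
```

The HTML report is handy for sharing with a team, while the plain terminal output works well in cron jobs or CI checks.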

The site_audit module is useful for developers and site administrators because it gives a clear overview of potential issues and performance bottlenecks. Many major Drupal hosting platforms offer similar reporting features, but having this tool available locally or on any server makes it easy to review your site’s status at any time.

For most Drupal sites, running site_audit is a smart step in profiling performance, identifying configuration problems, and ensuring the site follows recommended practices. It is one of those tools that quickly becomes an automatic part of any Drupal workflow.
