I was recently checking out ResearcherID and created a page for Darrell Ulm: http://www.researcherid.com/rid/Y-5083-2018. It looks like another really useful site for listing research work, much like ORCID, which also has a profile for Darrell Ulm: https://orcid.org/0000-0002-0513-0416 . I am still working out the nuances between ResearcherID and ORCID, since they are quite similar in aim: providing a unique identifier for researchers and their publications. The ResearcherID page, however, has some interesting connections to other resources, specifically mentioning reviewing efforts. It is fascinating to see how these platforms interconnect and contribute to the broader ecosystem of scholarly communication and recognition, and I want to explore further how the different systems integrate and what unique benefits each offers to researchers like Darrell. It is all part of navigating the evolving landscape of research visibility.
Technical notes about past publications and work by Darrell Ulm, including Apache Spark, software development, computer programming, parallel computing, algorithms, Koha, and Drupal. Includes source code snippets, such as Python for Spark, and a retrospective of projects.
ResearcherID Site for Darrell Ulm
Discovering ORCID.org and Revisiting My (Darrell Ulm) Research in Parallel Processing and Associative Computing
ORCID.org is a research publication database (mine: Darrell Ulm)
I recently came across ORCID.org, a platform that helps researchers organize and present their scholarly work in a structured and reliable way. It surprised me that I had not used it earlier because it offers a level of control and clarity that is incredibly useful when managing decades of publications. As I began adding my research history, I found myself reflecting on the themes that have shaped my work in parallel processing, associative computing, and algorithmic problem solving. It felt a bit like rediscovering old tools in a workshop that I somehow forgot I built.
A Look Back at My Research Contributions
Much of my work has focused on high performance computing, data parallelism, and innovative approaches to classic optimization problems. ORCID gave me a chance to revisit these contributions and understand how they fit together across time.
Parallel and Distributed Processing
Several of my publications appeared in the International Parallel and Distributed Processing Symposium. These works explored new ways to model and simulate parallel computation.
Stream PRAM. Presented at the 19th International Parallel and Distributed Processing Symposium (IPDPS 2005), this work examined a streaming approach to the Parallel Random Access Machine model and how it can be adapted for modern architectures.
Solving a 2D Knapsack Problem Using a Hybrid Data Parallel and Control Style of Computing. Presented at IPDPS 2004, this research combined data parallelism with control-driven techniques to tackle a complex two-dimensional knapsack optimization problem.
Distributed Systems and Global Knowledge
World Wide Wisdom. Published in IEEE Distributed Systems Online in 2004, this article explored early ideas about distributed knowledge systems and how global information sharing could reshape computing. Looking back, it feels like a precursor to many of the collaborative systems we take for granted today. To be clear, it is a book review; I reviewed the book because it seemed important.
Associative Computing and Simulation Models
My earlier work focused heavily on associative computing models and how they could simulate or enhance traditional parallel architectures.
Simulating PRAM with an MSIMD Model (ASC). Presented at the 1998 International Conference on Parallel Processing, this paper demonstrated how a Multiple Single Instruction Multiple Data model could simulate PRAM behavior with efficiency and scalability.
Solving a 2D Knapsack Problem on an Associative Computer Augmented with a Linear Network. Presented at PDPTA 1996, this work extended associative computing techniques by integrating a linear network to improve communication and problem-solving performance.
Virtual Parallelism by Self Simulation of the Multiple Instruction Stream Associative Model. Also presented at PDPTA 1996, this research introduced a method for achieving virtual parallelism through self-simulation, allowing complex instruction streams to be executed more efficiently.
Mesh and SIMD Based Optimization
Some of my earliest work focused on solving optimization problems on mesh and SIMD architectures.
Solving a Two Dimensional Knapsack Problem on a Mesh with Multiple Buses. Presented at the 1995 International Conference on Parallel Processing, this paper explored how mesh-based systems with multiple communication buses could accelerate knapsack computations.
Solving a Two Dimensional Knapsack Problem on SIMD Computers. Presented at the 1992 International Conference on Parallel Processing, this was one of my foundational works, showing how SIMD architectures could be used to solve complex optimization problems that traditionally required more flexible computing models.
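Since the two-dimensional knapsack problem runs through so many of these papers, here is a minimal serial sketch of one common formulation (unbounded pieces, guillotine cuts), purely for illustration. This is my own toy dynamic program, not the SIMD, mesh, or associative algorithms from the publications above; the function name and the exact recurrence are assumptions for the example.

```python
def knapsack_2d(X, Y, pieces):
    """Toy 2D (guillotine) knapsack: cut pieces from an X-by-Y sheet.

    pieces: list of (width, height, value) tuples, each usable any
    number of times. Returns an (X+1) x (Y+1) table F where F[x][y]
    is the best total value obtainable from an x-by-y sub-sheet.
    """
    F = [[0] * (Y + 1) for _ in range(X + 1)]
    for x in range(1, X + 1):
        for y in range(1, Y + 1):
            best = 0
            # Base case: the most valuable single piece that fits.
            for w, h, v in pieces:
                if w <= x and h <= y:
                    best = max(best, v)
            # Try every vertical guillotine cut of the sheet.
            for x1 in range(1, x // 2 + 1):
                best = max(best, F[x1][y] + F[x - x1][y])
            # Try every horizontal guillotine cut of the sheet.
            for y1 in range(1, y // 2 + 1):
                best = max(best, F[x][y1] + F[x][y - y1])
            F[x][y] = best
    return F
```

For instance, with a 2-by-2 sheet and pieces (1, 1, value 1) and (2, 2, value 5), the table gives F[2][2] = 5, since placing the single large piece beats any combination of cuts. The inner max operations over cut positions are exactly the kind of independent work the SIMD and mesh papers parallelize.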
Why ORCID Matters for Researchers
Organizing all of these publications in one place reminded me how valuable it is to have a persistent and authoritative record of scholarly work. ORCID makes it easier to present research clearly, connect publications to identifiers like DOIs, and maintain a consistent academic identity across platforms. It also helps highlight the evolution of a research career, something that is easy to lose track of when your work spans many years and many conferences.
As I continue refining my ORCID profile, I am finding it to be a surprisingly helpful tool. It brings structure to a long timeline of ideas, experiments, and problem solving approaches. Maybe I should have used it earlier, but better late than never. My brain probably just took a small detour somewhere along the way.
Python for Data Science
There are many good resources available.
Of course the main tools are NumPy, pandas, Matplotlib, and scikit-learn, the last of which has some amazing tools.
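As a quick taste of how those tools fit together, here is a minimal sketch: NumPy generates some synthetic data (made up for the example), pandas wraps it in a DataFrame, and scikit-learn fits a simple linear model to it.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression

# Synthetic data: y is roughly 3x + 1 with a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=100)
df = pd.DataFrame({"x": x,
                   "y": 3.0 * x + 1.0 + rng.normal(0, 0.1, size=100)})

# Fit a linear regression; the model should recover the slope and intercept.
model = LinearRegression().fit(df[["x"]], df["y"])
print(model.coef_[0], model.intercept_)
```

Running this prints a slope near 3 and an intercept near 1, which is the kind of quick sanity check that is much snappier in a local Jupyter Notebook than in an online editor.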
Kaggle, for instance, hosts data science contests, but it is good to install a local system like Jupyter Notebook to speed things up, since the Kaggle editor can lag and take time to run even on small data sets.
The newer DataCamp has some neat tutorials, plus a simple app for doing daily exercises on your mobile device.
Here is the Python Data Science Handbook. Really useful.
A short tutorial: Learn Python for Data Science, a fun read.
A list of cool data science tutorials is here, and another covers how to get started with Python for data science.
Will add more later.
My Growing Collection of Tech Notes: Drupal, PHP, Linux, Symfony, and More
I’ve been keeping a running set of technical notes on Tumblr as I work through different web development projects. Over time it has turned into a personal reference library that covers Drupal development, PHP programming, Linux server setup, and the Symfony framework. Most of these notes come from real problems I’ve solved while building or maintaining websites, so the collection keeps expanding as I learn new tools and techniques.
A large portion of my notes focuses on Drupal because I spend a lot of time working with Drupal 7, Drupal 8, and the transition toward Drupal 9. I’ve documented everything from module development and data migration to caching, performance optimization, Varnish configuration, and headless Drupal workflows. Since Drupal 8 and Drupal 9 are built on Symfony, I also keep notes on Symfony concepts and PHP best practices that help improve development speed and code quality.
I also write down what I learn while setting up and managing Linux servers. Many of these entries involve Ubuntu 16.04, including installing essential software, configuring GNOME Flashback, setting up Webmin, enabling SSL on Apache, and improving performance with Memcached and PHP OpCode caching. As I explore more DevOps tools, I’ve added notes on Docker, Composer, Drush, and other utilities that make modern development smoother.
This list keeps growing as I continue learning about backend development, server optimization, and emerging technologies like augmented reality toolkits. Here are the topics I’ve documented so far:
- Drupal 8 alpha release for Google Books Text Filter Module
- Ubuntu 16.04 Setup Gnome Flashback
- Gnome Flashback Move Menu Bar to Bottom of Screen
- Install Webmin to Ubuntu 16.04
- Install Google Chrome amd64 on Ubuntu 16.04 Linux
- Install PHP on Linux
- Install Drush for Drupal via Composer
- Install Composer on Linux
- Download and Install Drupal 8 with Details
- Install MySQL Workbench on Ubuntu 16.04
- Turn on the PHP 7 OpCode Cache
- Nice tutorial for SSL for Apache2 on Ubuntu 16.04
- Memcached on Ubuntu 16.04
- Drupal 7 Performance Optimizations
- Varnish Setup Instructions for Drupal 7 and more Performance Links
- Fast Switching Between PHP Versions
- Drupal 7 CKEditor Module with Simple Image Upload
- Drupal 7 Block Caching API, Configurations and Modules
- Drupal 8 Data Migration Information
- Drupal 8 API: Custom Module Development in PHP
- HTML Archive Methods and Software
- PHP 5.6 to PHP 7.0 Upgrade Compatibility Check
- Kalabox, 1 click local development, Drupal, WordPress
- Installing Docker on Linux
- Drupal AdvAgg Advanced Aggregation Module
- Symfony PHP Framework Tutorials
- Drupal Permissions and Access Control
- Headless Drupal
- Twig Theme Coding with Drupal 8
- Drupal 8 and Backwards Compatibility in Drupal 9
- Augmented Reality (AR) Toolkits
- Drupal 7 Varnish and Page Cache
- PHP Programming Optimization Methods