Drupal multi-domain data collection database

At the end of last year, the Samara group contacted us to build a new Drupal website that would allow various cities across the United States to collect and track biodiversity data within and around their borders.
Fountain City, Inc. assembled a team of two Drupal developers, an Information Architect, and a Project Manager to pull it all together. Design work was coordinated with Chuck Simply, a local Drupal themer in Portland, OR.
Each project brings new challenges. In this case we needed a system that would let us spin up new city sites, each with its own city manager and user base, but built on a common framework of features and a cross-city data structure to support charting, reporting, and shared taxonomic terminology.
Multiple subdomains, each with separate user permissions

Alain, the project’s tech lead, proposed the Drupal Domain module as an appropriate solution to the cities’ use case. Domain lets us set up unique user access per subdomain while still sharing data across them as needed, and it lets power users and admins retain access rights to all of the subdomains.
After this post is up, we will publish a follow-up article where Alain goes much deeper into the solution we employed and the family of modules we used together to achieve the desired result.
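Until that article is out, here is a minimal Drupal 7-style sketch of the general idea: a custom access hook that hides content belonging to other city sites from regular users while leaving power users untouched. The module name and the field_city_domain field are hypothetical, and the sketch assumes Domain Access's domain_get_domain() helper; the real build relies on the Domain family of modules' own grant system rather than a hand-rolled hook.

```php
<?php

/**
 * Implements hook_node_access().
 *
 * Hypothetical sketch: deny ordinary users the ability to view nodes that
 * belong to a different city site than the one currently being served.
 */
function mymodule_node_access($node, $op, $account) {
  // Power users and admins keep access rights to all subdomains.
  if (user_access('administer nodes', $account)) {
    return NODE_ACCESS_IGNORE;
  }
  // For the 'create' operation $node is a content type string, not a node.
  if (!is_object($node) || $op != 'view') {
    return NODE_ACCESS_IGNORE;
  }
  // Domain Access helper returning the currently active domain record.
  $current_domain = domain_get_domain();
  // Hypothetical text field recording which city site the node belongs to.
  $items = field_get_items('node', $node, 'field_city_domain');
  if (!empty($items) && $items[0]['value'] != $current_domain['subdomain']) {
    return NODE_ACCESS_DENY;
  }
  return NODE_ACCESS_IGNORE;
}
```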
Automatic geolocation detection

Since much of the data collection happens on-site, the website also needed an easy way for data collectors to enter their data. We added drop-downs and autocomplete fields where appropriate, and for the user’s location we set up the Drupal Geolocation module, backed by Let’s Encrypt SSL certificates on all subdomains (browsers restrict automatic geolocation to HTTPS pages), so that field agents’ locations are captured automatically with each piece of data collected.
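As a rough illustration, here is a minimal Drupal 7-style sketch of reading those captured coordinates back out of a node server-side, for maps, charts, or exports. The field_collection_point name is a made-up stand-in for the site's geolocation field, and the lat/lng keys assume the Geolocation module's latitude/longitude field storage.

```php
<?php

/**
 * Hypothetical sketch: pull the coordinates captured by the Geolocation
 * field widget off an observation node for later mapping or export.
 */
function mymodule_get_observation_coordinates($node) {
  // field_get_items() is core Field API; field_collection_point is a
  // hypothetical field name standing in for the site's geolocation field.
  $items = field_get_items('node', $node, 'field_collection_point');
  if (empty($items)) {
    return NULL;
  }
  // Assumes the field stores latitude/longitude under lat/lng keys.
  return array(
    'lat' => $items[0]['lat'],
    'lng' => $items[0]['lng'],
  );
}
```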
Low cost, zero renewal fee SSL solution

Let’s Encrypt is a trending solution for sites that need many SSL certificates, and a major cost savings anywhere the price of a wildcard certificate is restrictive. You spend the time to configure the certs once, and afterwards they renew automatically with no subscription fees (Let’s Encrypt certificates are valid for 90 days and are simply re-issued on a schedule). In our case we even installed the cPanel Let’s Encrypt plugin, which lets you create and manage all your certs directly from the cPanel UI. Couldn’t be easier.
But be warned: we found out the hard way that Let’s Encrypt won’t issue more than 20 certificates per registered domain per week unless you obtain an exemption. In our case the delay was easy enough to work around, but it could be a major project roadblock if you need to register 100+ subdomains, all with SSL, and you only have a week or two to make it happen…
Lightweight Drupal theming

We work on projects with many different priorities. In the case of ubif.us the priority was first and foremost functionality, not form, but with the knowledge that a themer would still need to do some minimal layout work to improve table presentation, title sizes, and some basic branding requirements.
We went with a basic framework, in this case Bootstrap. This gave us a workable, responsive theme and a Sass underlayer, letting theming happen iteratively on top of the functionality as it was developed.
Maps and charts

At its core, data wants to be visualized; that’s how you create meaning from all the numbers. We laid extensive groundwork in ubif.us for charting, maps, and various types of reports. In release 1 of the tool, which at the time of this writing shipped a couple of weeks ago, the site doesn’t yet have everything on our wish list of charting capabilities. But we did have time to lay the groundwork for upcoming maps and got far enough along to build maps with ArcGIS data, area charts, and maps with dynamically loading data points.
The focus of release 1, though, was on data table displays and on importing and exporting data.
The data being imported for this site is multi-dimensional: each piece of content references not only location-specific parameters but also data-collector information from a secondary lookup table. For the import to work smoothly it would ideally need to check for, and then auto-create, the implied taxonomy terms and referenced data objects alongside the specific rows being imported. In other words, it creates not only the data for each row being processed, but also any new nodes or taxonomy terms referenced in the row that are not yet present in the system.
Naturally, however, this raises usability concerns: if you just create new data from each row, you could be creating unintentional name variations, such as a Robert K. Richardson when a Robert Keith Richardson already exists. While it is possible to write software that checks for common variations like this and warns the user, building and testing a smart interpreter is a time-consuming endeavor for a project with a short turnaround and limited budget.
The quickest short-term option is simply to report back to the user every new data reference each import creates, and then let the user decide on their own whether those new references should have been created.
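To make that concrete, here is a minimal Drupal 7-style sketch of the look-up-then-create step for a taxonomy reference, with each newly created term recorded so it can be reported back to the user after the import run. Function, vocabulary, and variable names are illustrative rather than the project's actual code.

```php
<?php

/**
 * Hypothetical sketch: reuse an existing taxonomy term by name, or create it
 * and note the creation so it can be reported back after the import.
 */
function mymodule_import_get_term($name, $vocabulary_machine_name, array &$report) {
  // Reuse an existing term when the name already exists in the vocabulary.
  $existing = taxonomy_get_term_by_name($name, $vocabulary_machine_name);
  if (!empty($existing)) {
    return reset($existing);
  }
  // Otherwise create the term on the fly, as the import expects.
  $vocabulary = taxonomy_vocabulary_machine_name_load($vocabulary_machine_name);
  $term = (object) array(
    'name' => $name,
    'vid' => $vocabulary->vid,
  );
  taxonomy_term_save($term);
  // Record the creation so the user can review it after the run.
  $report[] = t('Created new term %name in %vocab', array(
    '%name' => $name,
    '%vocab' => $vocabulary_machine_name,
  ));
  return $term;
}
```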
Conclusion

That’s it for now! Join us in our next article, where we outline the Domain module setup.