Faculty Publication System

Peer-reviewed journal publications are an important academic indicator for any institution. To replace a prohibitively manual, ad hoc process for reporting published research, we built an end-to-end system for capturing, managing, and delivering faculty publication data.

Tools
  • React
  • MongoDB
  • Node.js

The Challenge

McIntire faculty members are at the forefront of their fields as researchers and thought leaders, often publishing in top-tier journals multiple times per year. This scholarly work is an outstanding mechanism for communicating academic expertise, yet without a formalized infrastructure for collecting and delivering this information, external reporting is nearly impossible. To address this challenge head-on, we needed a system that would give departments (Marketing and Communications, Corporate Relations, Development) direct access to the groundbreaking research happening at the school, without adding to anyone's administrative burden.

System Design

Streamlined and simplified

First, we built a REST API with Node.js, Express, and MongoDB. This service supports granular permissions, in-memory caching, and moderately complex filtering, sorting, and pagination, all foundational pieces of a snappy, functional data management system. For the frontend, we bolted onto the existing school portal: a custom React application that already included a number of robust components for performing permission-based CRUD operations. With just a few modifications, we were able to reuse those components and leverage an interface that users already knew.
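As a rough illustration of the filtering, sorting, and pagination layer, a helper like the following could translate incoming query-string parameters into MongoDB find options. The parameter names (`year`, `q`, `sort`, `page`, `limit`) and the page-size cap are assumptions for the sketch, not the actual API contract.

```javascript
// Sketch: turn query-string params into MongoDB filter/sort/skip/limit options.
// All field names and the 100-item cap are illustrative assumptions.
function buildQueryOptions(params) {
  const filter = {};
  if (params.year) filter.year = Number(params.year);
  if (params.author) filter.authors = params.author;
  if (params.q) filter.title = { $regex: params.q, $options: 'i' };

  // "-date" sorts descending by date; "title" sorts ascending by title.
  const sort = {};
  if (params.sort) {
    const desc = params.sort.startsWith('-');
    sort[desc ? params.sort.slice(1) : params.sort] = desc ? -1 : 1;
  }

  const limit = Math.min(Number(params.limit) || 25, 100); // cap page size
  const page = Math.max(Number(params.page) || 1, 1);
  const skip = (page - 1) * limit;

  return { filter, sort, limit, skip };
}
```

The resulting object can be handed straight to a Mongo query, e.g. `collection.find(filter).sort(sort).skip(skip).limit(limit)`.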

System Feature

Database population

With that setup in place, we had a mechanism for capturing publication data going forward, but what about everything published before? Our next task was seeding the database with all of the legacy publications. Google Scholar accounted for roughly 95% of the records, so we built an app to do exactly three things: scrape the data (using Puppeteer), transform it to fit our schema, and post it to the API. As with all scraping endeavors, this took some trial and error, but in the end we were able to automatically pull in more than 1,400 publications. The scraper is ready to grab new content at predefined intervals or on demand for an individual faculty member.
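The middle step of that pipeline, transforming scraped records to fit the schema, might look something like this. The field names on both sides (`venue`, `link`, `journal`, and so on) are assumptions for illustration; the real schema is not shown in this write-up.

```javascript
// Sketch of the transform step in the scrape -> transform -> post pipeline.
// Input/output field names are hypothetical, not the actual schema.
function transformScholarRecord(raw) {
  // Google Scholar typically lists authors as a comma-separated string.
  const authors = raw.authors
    .split(',')
    .map((a) => a.trim())
    .filter(Boolean);

  return {
    title: raw.title.trim(),
    authors,
    journal: raw.venue || null,
    year: Number.parseInt(raw.year, 10) || null,
    url: raw.link || null,
    source: 'google-scholar', // provenance flag for seeded records
  };
}
```

Each transformed record would then be POSTed to the publications endpoint of the REST API described above.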

System Feature

Custom data exports

Looking in the other direction, we also built an export feature that allows admins to download a formatted file for import into a separate Annual Review tool. Every faculty member completes an annual review, so having all of their publication data pre-populated is a genuine time-saver.
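To give a flavor of what the export step involves, here is a minimal CSV formatter with proper quoting. The column set is hypothetical; the real export matches whatever template the Annual Review tool expects.

```javascript
// Sketch: format publication records as CSV for a downstream import.
// Columns are illustrative; quoting follows the common CSV convention
// of doubling quotes and wrapping fields that contain commas/quotes/newlines.
function toCsv(publications) {
  const escape = (value) => {
    const s = String(value ?? '');
    return /[",\n]/.test(s) ? `"${s.replace(/"/g, '""')}"` : s;
  };
  const header = ['title', 'journal', 'year'];
  const rows = publications.map((p) => header.map((k) => escape(p[k])).join(','));
  return [header.join(','), ...rows].join('\n');
}
```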

UI Elements

Easy and accessible

The management interface is designed to be functional and straightforward. Individual faculty members can manage their own publications, and they can read, but not modify, other faculty members' publications. With such a large pool of users and a range of comfort levels, simplicity was the key to widespread adoption; tons of bells and whistles wouldn't dazzle, they would confuse. To that end, we spent a lot of time designing components that remove the guesswork: robust filtering and search inputs, straightforward Microsoft-Word-style WYSIWYG editors, bold action buttons, and clickable tooltips for every field. Whether you're trying to find a publication or edit one, the steps are easy to identify, understand, and follow.
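The read/write rule described above (edit your own publications, read everyone else's) reduces to a couple of small checks. The role names and the `ownerId` field here are assumptions for the sketch, not the system's actual authorization model.

```javascript
// Sketch of the permission rule: faculty edit only their own publications;
// admins edit anything; all authenticated roles can read.
// Role names and the ownerId field are illustrative assumptions.
function canEdit(user, publication) {
  return user.role === 'admin' || publication.ownerId === user.id;
}

function canRead(user) {
  // Reporting departments (e.g., Marketing and Communications) would have
  // a read-capable role alongside faculty and admins.
  return ['admin', 'faculty', 'staff'].includes(user.role);
}
```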
