Tim Andrews

Experienced IT Professional and Full Stack Web Developer

My Work Experiences

This is my "Experiences" Blog, for lack of a better term. I will post about the many jobs, projects, and skills I have taken on over the years. I may also throw in a few posts about personal experiences that I'd like to share. This is still a work in progress, so bear with me as I add features and content to this blog.

Constitution Annotated Modernization (CONAN)

Tim Andrews | March 20, 2022, 3:04 p.m.

This is a project that I'm very proud to have worked on.  The Constitution Annotated (CONAN) is a document maintained by the Library of Congress that provides a comprehensive overview of how the Constitution has been interpreted over time.  When this project started, CONAN consisted of an XML document that was sporadically updated and published in book form once a decade.  The source XML document was available online, but search capabilities were limited.

The CONAN Modernization project's goals were to transform CONAN into a modern website with advanced search capabilities and publishing tools that would allow CONAN to be updated on a more consistent basis.  You can view the completed website at constitution.congress.gov.

The team consisted of me and one other developer from Artemis Consulting, along with a talented cross-functional team from the Library of Congress made up of content experts, designers, project managers, and other technical staff.  The other developer and I did the bulk of the development work on this project, with excellent support and guidance provided by the rest of the team from the Library.

The project began with some minor updates to the XML structure of the source document to increase consistency.  The development responsibilities on this project naturally split down the middle: I would build the tools to convert the source XML document, transform the data, and store it in an Apache Solr data store (the ETL), while my development partner would build the tools to retrieve and display the data from Solr (the frontend).

My work consisted of two main areas.  First, I built an ETL (Extract, Transform, Load) application that would read the original XML documents and parse them into individual searchable elements.  It would then write the contents and metadata for each element into the Apache Solr data store.  This process relied heavily on the lxml.etree library as well as extensive use of regular expressions (regex) to parse the data into individual elements.
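To give a feel for the approach, here's a minimal sketch of that kind of pipeline.  The element names, field names, and Solr core here are hypothetical stand-ins (the project's actual schema isn't shown), but the lxml/regex/Solr flow is the same shape:

```python
import re

import pysolr
from lxml import etree

# Hypothetical Solr core name for illustration only.
SOLR_URL = "http://localhost:8983/solr/conan"


def extract_elements(xml_path):
    """Parse the source XML and yield one dict per searchable element."""
    tree = etree.parse(xml_path)
    # "section" and "heading" are assumed element names, not the real schema.
    for section in tree.iter("section"):
        text = " ".join(section.itertext())
        # Regex cleanup, e.g. collapsing whitespace left behind by the markup.
        text = re.sub(r"\s+", " ", text).strip()
        yield {
            "id": section.get("id"),
            "title": section.findtext("heading", default=""),
            "content": text,
        }


def load(xml_path):
    """Write the parsed elements and their metadata into Solr."""
    solr = pysolr.Solr(SOLR_URL, always_commit=True)
    solr.add(list(extract_elements(xml_path)))


if __name__ == "__main__":
    load("conan-source.xml")
```

The real parsing was far more involved (the regex work in particular), but the extract/transform/load stages map onto this structure.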

In addition to building the ETL application, I also helped design and implement a content management system for the Library staff that allowed them to make and track updates to the source XML documents using git integrations with their XML editing tools.  They could publish updates to a development environment to review changes and then schedule updates to the production environment.  The publishing tools made use of Jenkins (jenkins.io) for automating publishing tasks.
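For example, a publish step can be kicked off from Python against Jenkins' remote API.  This is just a sketch of the idea, assuming the python-jenkins package; the job name, parameter, and credentials below are made up for illustration and aren't the project's actual configuration:

```python
import jenkins  # the python-jenkins package

# Hypothetical server URL and credentials.
server = jenkins.Jenkins(
    "https://ci.example.gov", username="publisher", password="api-token"
)

# Trigger a parameterized publish job, e.g. once content is approved.
# "conan-publish" and "TARGET_ENV" are illustrative names.
server.build_job("conan-publish", {"TARGET_ENV": "development"})

# Check on the most recent build of the job.
info = server.get_job_info("conan-publish")
print(info["lastBuild"]["number"])
```

Hanging the automation off Jenkins jobs like this is what let the staff review changes in the development environment first and then schedule the same publish against production.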

The project was originally built on a very tight schedule.  We went from concept to launch in approximately nine months and met our deadline of September 17 (Constitution Day) to have the site live and fully functional.  The new site was very well received, and we were able to continue working on and expanding the functionality of the site afterward.