
Blueprint for Effective Software and Database Dependency Management

Posted by Sebastian Holst on May 6, 2020 4:45:14 PM

In our last installment, Simplifying Software and Database Dependency Management, we explored how IT complexity, data reliability, and institutional knowledge gaps drive up dependency management cost and confusion. Our rationale was simple: there is no way to reliably assess proposed dependency management solutions or practices without first establishing a comprehensive understanding of these underlying factors and their relative impact on software quality, development velocity and operational resilience.

Dependency Management Requirements

What can we infer from the past three posts? What core software dependency management capabilities are required to consistently and effectively accelerate development velocity, improve software quality, and reduce risk and expense across both development and operations?


I.  Do no harm

Even as we’ve established the significant risks that stem from poor software and database dependency management, the compensating controls, technologies and practices cannot bring with them even greater risk. Effective dependency management cannot degrade system performance or resiliency, organizational security or compliance, or development productivity. While this principle may appear self-evident, implementation hazards and operational side effects are not always as obvious.

Consider:

  • System resource requirements
  • Implementation time and required skills
  • License fees and/or consulting fees
  • Integration complexity and/or incompatibility
  • Supply chain risk and/or vendor viability

II.  Inventory

One of the most common causes of incomplete dependency mapping initiatives is the inability to peer across architectural, development and/or runtime boundaries. Dependencies are missed because they literally cannot be seen.

Consider:

  • Implementation coverage
    • Architecture
    • OS
    • Language
    • Database
    • API/Library

While an enterprise may be wholly dependent on a particular component, it may not have the ability – or permission – to make (or even detect) relevant changes due to the component’s provenance (origin and ownership). A simple sketch of what such an inventory record might capture follows the list below.

Consider:

  • Provenance
    • Internally developed
    • 3rd-party (proprietary)
    • 3rd-party (open source)
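
To make the inventory requirement concrete, here is a minimal sketch, in Python and purely for illustration, of what a single inventory record could capture across both coverage and provenance. The names (DependencyRecord, Provenance) and the fields are assumptions made for this example, not the schema of any particular tool.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Optional


class Provenance(Enum):
    INTERNAL = "internally developed"
    THIRD_PARTY_PROPRIETARY = "3rd-party (proprietary)"
    THIRD_PARTY_OPEN_SOURCE = "3rd-party (open source)"


@dataclass
class DependencyRecord:
    source: str                  # the asset that depends on something, e.g. a service
    target: str                  # the asset it depends on, e.g. a library or database column
    layer: str                   # architecture, OS, language, database, or API/library
    provenance: Provenance       # who owns the target and whether we may change it
    version: Optional[str] = None


# Example: an internally developed service that reads a specific database column.
record = DependencyRecord(
    source="billing-service",
    target="orders_db.customer.email",
    layer="database",
    provenance=Provenance.INTERNAL,
)
print(record)
```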

III.  Detection

Not all “dependencies” are equal. Many are inconsequential – or, more literally, immaterial. However, some may equate to a severe security vulnerability, a system failure, a regulatory incident or a breach of a commercial license. Given that an enterprise system will have literally millions of dependencies – most of them perfectly appropriate at any given time – dependency detection solutions must be able to improve the “signal-to-noise” ratio – to find “the needle in the haystack” – to “separate the wheat from the chaff” (pick your favorite metaphor here). A simple illustration of such filtering follows the list below.

Consider:

  • Granularity
    • Software
      • Application, component and method
    • Database
      • Database, table and column
      • Version
  • Semantics
    • Quality
      • How many versions behind is a given component?
    • Security
      • Are there known vulnerabilities with a given version?
    • Governance
      • Who has control and/or liability for a given asset?
    • Compliance
      • Is a given dependency relevant to privacy, financial or other highly regulated activities?

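As a rough illustration of separating signal from noise, the hypothetical sketch below applies a few semantic criteria (staleness, known vulnerabilities, regulated data) to reduce a set of dependency records to the handful that warrant action. Every name, field and threshold here is an assumption made for the example.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class DetectedDependency:
    name: str
    versions_behind: int = 0                                          # quality
    known_vulnerabilities: List[str] = field(default_factory=list)    # security
    owner: str = "unknown"                                            # governance
    regulated_data: bool = False                                      # compliance


def is_material(dep: DetectedDependency) -> bool:
    """Separate signal from noise: keep only dependencies that warrant action."""
    return (
        dep.versions_behind > 2
        or bool(dep.known_vulnerabilities)
        or dep.regulated_data
    )


deps = [
    DetectedDependency("log-formatter", versions_behind=1),
    DetectedDependency("tls-library", known_vulnerabilities=["CVE-0000-0000"]),
    DetectedDependency("customer-table-reader", regulated_data=True),
]
actionable = [d for d in deps if is_material(d)]
print([d.name for d in actionable])  # -> ['tls-library', 'customer-table-reader']
```
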
IV.  User Experience

Even if one assumes that the dependency management solution is able to detect and contextually map relevant dependencies, the resulting model is likely to be overwhelming and unnecessarily verbose for any one development, operational or governance activity or role. Filtering and presenting relevant information at the right time and in an actionable format is an essential finishing capability in any system – and dependency management is no exception. A brief sketch of role-based filtering follows the list below.

Consider:

  • Role-based workflows and interfaces
    • DevOps
    • Security
    • Governance
        
  • Discovery: Activity-based utilities and familiar or intuitive UX “idioms”
    • Navigation
    • Reporting
    • Auditability
        
  • Collaboration: Resolving dependency-related incidents almost always triggers a cross-functional workflow.
  • Notification: Given the scale and distribution of today’s computing environments, no organization can rely upon proactive, human inspection.
  • Administration: Managing access to all of the above must be readily configured without requiring additional FTE hires.
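
Here is a minimal sketch of role-based filtering, assuming a few illustrative roles and record fields; it is not meant to represent any product’s actual interface.

```python
# Illustrative sketch: each role sees only the slice of the dependency model
# relevant to its own workflow. Role names and record fields are assumptions.

ROLE_FILTERS = {
    "devops": lambda dep: dep.get("breaks_build", False),
    "security": lambda dep: bool(dep.get("known_vulnerabilities")),
    "governance": lambda dep: dep.get("regulated_data", False),
}


def view_for_role(dependencies, role):
    """Return only the dependencies a given role needs to act on."""
    keep = ROLE_FILTERS[role]
    return [dep for dep in dependencies if keep(dep)]


deps = [
    {"name": "tls-library", "known_vulnerabilities": ["CVE-0000-0000"]},
    {"name": "customer-table-reader", "regulated_data": True},
]
print(view_for_role(deps, "security"))    # the vulnerable library only
print(view_for_role(deps, "governance"))  # the regulated asset only
```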

V.  Proactive and predictive

Scaling dependency management must move beyond reducing the technical steps required. Scaling must reduce – if not eliminate – heavy reliance on highly skilled professionals being on call 24 hours a day, 7 days a week to evaluate and amend the criteria by which dependencies are assessed and acted upon. An effective dependency management solution must be able to encode that expertise within its own operations, scaling judgment along with the volume of computing assets and dependencies.

Consider: A rule-driven framework
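
One way to picture a rule-driven framework: expert judgment is written down once as declarative rules, and the system applies those rules automatically as the inventory of assets and dependencies grows. The rule names, thresholds and actions in the sketch below are illustrative assumptions.

```python
# A minimal sketch of a rule-driven framework: criteria are encoded once as
# declarative rules and evaluated automatically at scale. All rule names,
# thresholds and actions are illustrative assumptions.

RULES = [
    # (rule name, predicate over a dependency record, action to trigger)
    ("stale-component", lambda d: d.get("versions_behind", 0) > 3, "open an upgrade ticket"),
    ("known-vulnerability", lambda d: bool(d.get("known_vulnerabilities")), "page the security on-call"),
    ("regulated-schema-change", lambda d: d.get("regulated_data", False), "require compliance sign-off"),
]


def evaluate(dependency: dict) -> list:
    """Apply every rule to a dependency record and return the actions it triggers."""
    return [action for name, predicate, action in RULES if predicate(dependency)]


print(evaluate({"name": "tls-library", "known_vulnerabilities": ["CVE-0000-0000"]}))
# -> ['page the security on-call']
```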

VI.  Responsive & resilient

The actual definition of “getting the right information in the right format to the right person (or system) in order to ensure the right decisions are made” will likely vary quite significantly from industry to industry and from organization to organization (and again by specific scenario within each organization). Faster is not always better (because it typically comes at a price of some sort).

Consider your definition for: On-demand (just-in-time)

VII.  Automated

Last – but certainly not least – comes automation, a topic so broad that it must be applied to each and every evaluation criterion already listed. There is no single definition of automation that can be used; the concept must be reapplied and interpreted at each step of a dependency management workflow.

Producing this information cannot be accomplished manually; trusted and resilient automation is required.

Consider:

  • Automation: Scalability – memory, storage, connections, users, …
  • Automation: Reliability – fail-over, logging, security
  • Automation: Agility – wide variations in usage patterns
  • Automation: Consistency – wherever possible, taking the human factor out of the equation
  • Automation: Efficiency – predictable and transparent
  • Automation: Compliance – not only doing the right thing but also generating credible evidence that the right thing has been done.
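
As a small illustration of the compliance point above, the sketch below runs an automated check and, in the same step, appends an evidence record of what was scanned and what was found. The function name, log format and file path are assumptions made for the example.

```python
# Illustrative only: automation that performs a check and, in the same step,
# records evidence that the check ran and what it found. Names are assumptions.
import json
from datetime import datetime, timezone


def run_dependency_scan(dependencies, audit_log_path="audit.log"):
    """Scan for vulnerable dependencies and append an evidence record."""
    findings = [d["name"] for d in dependencies if d.get("known_vulnerabilities")]
    evidence = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "scanned": len(dependencies),
        "findings": findings,
    }
    with open(audit_log_path, "a") as log:
        log.write(json.dumps(evidence) + "\n")
    return findings


print(run_dependency_scan([{"name": "tls-library", "known_vulnerabilities": ["CVE-0000-0000"]}]))
# -> ['tls-library']
```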

Measure twice, cut once: getting the right fit for your organization

While this post ended up being considerably longer than any of the others in this series, it also hasn’t come close to meeting the stated objective of providing a usable “blueprint for effective software and database dependency management.” I’d say we may have hit the “template” mark – but not the “blueprint” mark. The reason should be reasonably obvious – interpreting the above evaluation criteria in the context of a specific organization’s implementation details (alongside its appetite for risk) is where most of the work will need to be done.

How can the above “template” be applied to a specific use case?

Is there any commonality across technology stacks or is every organization entirely distinct from the next?

 


Topics: applicationmodernization, softwarearchitecture, Technicaldebt