
THE 2024 STATE OF SOFTWARE PRODUCTION READINESS





INTRODUCTION

Production readiness—or, more specifically, the production readiness review
(PRR)—is a set of checks used to mark when software is considered secure,
scalable, and reliable enough for use. While PRRs are unique to the engineering
teams compiling them, most include items like adequate testing and security
coverage, connection to CI/CD tools, and a detailed rollback protocol.

Of course, the need for safe, scalable, and reliable systems doesn’t end at
initial launch. Code and the standards governing it change, which means software
needs to be regularly evaluated for alignment. While tools like code scanners
and APMs make it easy to assess some indicators of software health, rapidly
expanding software ecosystems make historically manual checks like ownership,
on-call information, and SLOs much harder to maintain without sacrificing
quality or time to market.

To better understand how teams are addressing new challenges in production
readiness, Cortex conducted a survey of 50 engineering leaders at companies with
more than 500 employees in North America, Europe, UK&I, and AsiaPac. The survey
included free-text and multiple choice questions pertaining to production
readiness standards, tools, struggles, and desired future state. This report
contains an analysis of these results as well as suggestions for building or
improving your own program.


SURVEY DEMOGRAPHICS









KEY THEMES

There are five key themes to unpack in the survey results, which reflect broader
market trends and conversations.


1. STANDARDS VARY WIDELY ACROSS AND EVEN WITHIN ORGANIZATIONS.

While we expect to see variation in production readiness checklists by org size
and industry, survey results actually show zero duplication; no two leaders
selected the same set of standards from a choose-all-that-apply list.
Additionally, when asked about obstacles to ensuring production readiness for
new and existing software, the #1 response (66%) was lack of consistency in how
standards are defined across teams within their own company.


2. EVEN THE MOST CONFIDENT LEADERS STRUGGLE WITH SOFTWARE OWNERSHIP, MANUAL
FOLLOW-UP, AND ON-GOING ALIGNMENT TO STANDARDS.

When asked about blockers to production readiness, 56% said manual follow-up,
and 36% said unclear ownership. Even among those reporting the highest
confidence in their program, these two options still tied as the #1 pain point
(42% each). More than 30% of this cohort also reported struggling with
continuous checks like ensuring adequate code coverage and enforcing
SLOs—highlighting a solution gap in continuous rather than point-in-time
assessments.


3. A THIRD OF LEADERS SURVEYED REPORT HAVING NO PROCESS FOR CONTINUALLY
MONITORING SOFTWARE FOR ALIGNMENT TO STANDARDS.

When asked about ensuring alignment to standards post-initial launch, 32% of
engineering leaders confessed to having no formal process outside of addressing
incidents. While still a relatively new technology, Internal Developer Portals
(IDPs) now offer a way to centralize “always on” standards, which may be why the
25% of survey participants that have adopted IDPs also report higher confidence
in their production readiness programs.


4. 98% HAVE WITNESSED AT LEAST ONE NEGATIVE CONSEQUENCE OF FAILING TO MEET
PRODUCTION READINESS STANDARDS AT THEIR ORGANIZATION.

While overall program confidence hovers around 6/10, 98% of participants have
witnessed at least one significant consequence of failing to meet production
readiness standards, with top responses all relating to downstream loss in
revenue or time to market. 62% saw an increase in change failure rate, 56% saw
an increase in mean time to resolve/remediate, 54% saw a decrease in developer
productivity, and 52% said they’ve seen software not ship on time.


5. TOOLING USED TO TRACK PRODUCTION READINESS IS MORE HIGHLY CORRELATED WITH
PROGRAM CONFIDENCE THAN THE CHECKS THEMSELVES.

While we see variation across production readiness lists, the top three checks
are the same for the most and least confident cohorts—CI/CD integrations,
connection to logging & monitoring tools, and peer review. So the mode of
management, rather than the individual steps, may be a better indicator of
effectiveness. The most confident cohort is 4x less likely to use spreadsheets
than the least confident, and 2x more likely than all participants to use IDPs.


THE PRODUCTION READINESS REVIEW: DEFINITIONS, COMPOSITION, AND CONCERNS

We asked survey participants to briefly describe what production readiness
looks like at their organization before inquiring about their confidence in the
process and the challenges they face.


HOW ENGINEERING LEADERS DESCRIBE THEIR PRODUCTION READINESS PROCESS

While no two participants share the exact same process, survey results show
clear trends in basic production readiness requirements:

67% MENTIONED TESTING AND QUALITY ASSURANCE PROCESS

The majority of respondents highlighted the importance of a thorough testing
process, including unit tests, integration tests, and user acceptance tests, as
part of their PRR. Ensuring code quality and meeting security standards were
also frequently mentioned.

“We have a series of checks, for code quality, test coverage, etc. Then we
deploy to a staging environment and perform load testing and scalability
responsiveness.  We also look at logging and alerting by killing the service and
making sure alerts trigger.  Then we do a dashboard and monitoring review. If
all is good it can go to production using a standardized tool chain that logs
each deploy.”



Sr. Engineering Manager
10,000+ person company

“Our Production Readiness Review consists of a couple of phases. Usually we will
conduct UAT which functions as training. When that is complete we will rapidly
fix release issues, and get sign-off. Then we will coordinate what rollback
looks like for the team, and release into production.”



VP of Application Development
1,000-5,000 person company


50% MENTIONED AUTOMATION AND CONTINUOUS INTEGRATION/DEPLOYMENT

Many respondents are already using automation in their PRR processes or aiming
to integrate more. This includes automated code reviews, deployment pipelines,
and monitoring systems that streamline the deployment process and reduce manual
intervention.

“Our testing is largely automated, we run CI pipelines for all projects, and
have upstream releases that are on the training system. They come out on a
certain date regardless. Then we have integration testing rounds which identify
cross product bugs and following a successful round, we will release the
downstream or commercial version.”



Director, Field Engineering
1,000-5,000 person company

“We have a continuous integration process that includes security protocols like
SAST and vulnerability scanning. Only if the software or code passes all the
mandatory scanning steps will the binary be published to the repository manager.
We have a CD in place which is controlled via a change management process with
all the testing validations attached and only the approved changes are allowed
to be deployed to prod.”



Engineering Lead
10,000+ person company


42% MENTIONED DOCUMENTATION, RUNBOOKS, AND ROLLBACK PROTOCOL

A number of respondents emphasized the need for detailed documentation as part
of the PRR process. This includes maintaining runbooks, deployment and rollback
plans, and ensuring that all necessary documentation is up to date and
accessible.

“We have a partly automated process that includes gated criteria, such as QA
complete, code reviews, documentation reviews, infra assessment, monitoring &
observability confirmation, operational readiness & training, and DR & rollback
readiness. Once a change is identified, the Change Advisory board meets to
ensure all the above gates are passed and then signs off the release. For
hotfixes and smaller changes, a very slim version of the above process is used.”



Senior Vice President, Engineering
1,000-5,000 person company

“PRR involves an initial code review, testing, merge, and deploy to staging.
After that, acceptance testing is done on the staging platform. When everything
is clear, then code is deployed to production, it is also checked what would be
the rollback in case of bugs in production. When deployed to production,
monitoring is done on the deployed code and also this entire process is
documented.”



Software Engineering Lead
1,000-5,000 person company


COMPOSITION OF PRODUCTION READINESS CHECKLISTS

Survey participants were asked to select all items included in their standard
production readiness checklist (or add others). Additionally, they were asked to
identify which were challenging to enforce.

MOST FREQUENTLY CITED REQUIREMENTS

The most frequently cited checks in production readiness reviews include things
that are typically automated or achieved by default in the software development
process, like connection to a git repo and integration with CI/CD tools. Other
frequently cited items are closely associated with security or reliability,
like testing and code coverage.




MOST FREQUENTLY CITED PRODUCTION READINESS ACTIVITIES

Activity                                     % of total respondents
CI/CD integrations                           76%
Connection to logging and monitoring tools   75%
Peer review complete                         75%
Acceptable test failure rate                 61%
Adequate code coverage                       59%

MOST CHALLENGING AREAS TO ENFORCE

Both before and after the initial push to production, organizations report
struggling to enforce quite a few steps in their production readiness checklist.
Topping the list are items that historically have not been easy to observe on a
continuous basis without manual intervention, or that are often omitted from
the preliminary launch process because they require some observation time, like
runbooks and SLOs.

ACTIVITIES NOTED AS CHALLENGING TO ENFORCE BEFORE AND AFTER PRODUCTION

Activity                                           % of total respondents
Ensuring SLOs are defined                          41%
Maintaining an acceptable test failure rate        37%
Ensuring connection to vuln management solutions   37%
Adding appropriate documentation                   27%
Defining service owners                            27%
Capping allowable vulns                            27%
Using up to date packages                          25%


TOOLS USED TO MANAGE PRODUCTION READINESS

The production readiness process spans multiple tools and teams. It’s therefore
critical to centralize this information in order to ensure consistency of terms
and timelines. We asked survey participants where their organization tracks
alignment to production readiness standards. Participants could choose multiple
answers if their organization uses multiple tools for this work.




Interestingly, results do not vary significantly by organization size, though
larger organizations are slightly more likely to use Internal Developer Portals,
and smaller organizations are more likely to either have no process or to use
project management software. Project management software is still a popular
choice for companies of all sizes, possibly due to the rise in software
engineering-specific project management tools. However, we’ll see later that
perceived program effectiveness tends to be lower for those attempting to use
only project management software to track on-going alignment to standards.


FREQUENCY OF ASSESSING PRODUCTION READINESS

Production readiness is—perhaps mistakenly or short-sightedly—often framed as
whether software is ready for initial launch. This is especially true for
organizations that lack technology that can provide continuous assessment. We
asked survey respondents how often they review software for alignment to
standards like persistent ownership and up-to-date packages after initial
release into production.




Across all respondents, an alarming 32% have no formal process for reviewing
software outside of mitigating issues as they arise. Of those that do have a
process in place, once a quarter is the most popular cadence (27%), with the
next most cited interval actually being continuous (16%). Larger organizations
are more likely to review more frequently, but they’re also more likely to
employ Internal Developer Portals, which often provide this capability.


CHALLENGES IN TRACKING PRODUCTION READINESS

As illustrated in the section above, ideal production readiness can be
difficult both to achieve in full and to track in perpetuity. We asked
participants what made this process difficult.




Results show that the #1 blocker to ensuring production readiness is lack of
consistent standards across teams. Runners up on the list of challenges include
manual follow-up for actions required, inconsistent ownership information, and
lack of templates for building new software according to standards.

Cross-referencing tools by problems faced, we can draw a few connections. While
spreadsheets and wikis can help track information about software, they require
manual effort to reflect up-to-date ownership and status. While project
management tools can automate prioritization and follow-up, they lack the
integrations needed to continually assess when either is necessary.


CONSEQUENCES OF FAILING TO MEET PRODUCTION READINESS STANDARDS

Failing to meet production readiness standards as code evolves can have negative
consequences like higher risk from unmitigated vulnerabilities, or delayed time
to market. We asked our survey participants which negative consequences—if
any—they’ve witnessed at their current organization.




The most frequently reported consequences were those that could have a direct
downstream impact on organizational revenue and time to market goals. 62% saw an
increase in change failure rate, 56% saw an increase in mean time to
resolve/remediate, 54% saw a decrease in developer productivity, and 52% said
they’ve seen software not ship on time. When looking at the same data sliced by
frequency of evaluations, we see that 94% of those without a process for
on-going evaluation saw change failure rate go up, compared to just 38% for
those employing continuous assessment.

Even the best production readiness checklist can’t completely prevent
vulnerabilities, but ensuring connection to vuln scanners and setting timeline
expectations for remediation can significantly reduce risk. While exploitation
is a rare worst-case scenario for languishing vulnerabilities, a surprising 22%
of respondents reported witnessing it as a consequence of untimely response.


TRENDS BY CONFIDENCE: COMPARING PROGRAM CHARACTERISTICS BY PERCEIVED
EFFECTIVENESS

Just looking at how various organizations structure their production readiness
programs can’t tell us much about the effectiveness of those programs. After
all, the longest checklist isn’t necessarily the best managed. To add a little
more color to these responses, we asked leaders to rate the perceived
effectiveness of their program on a scale of 1-10, with 10 denoting they
“strongly agree” that their production readiness review process is “highly
effective.”

The average confidence rating landed at 6.4, with confidence by org size and
industry fairly uniform. The highest average confidence level reported by any
sector was 7.5 (Advertising), with the lowest average confidence level for any
sector coming in at 5.6 (Ecommerce), though it’s important to note that the
sample sizes for both were relatively small. By far the most interesting trends
were uncovered when comparing confidence to areas of concern, challenges in
enforcement, tools used to manage production readiness, and the frequency at
which it is assessed.


PROGRAM CONFIDENCE X PRODUCTION READINESS CHECKLISTS

Teams with the highest confidence (those reporting an 8, 9, or 10 on the
effectiveness scale) exhibit similar trends to the wider audience in terms of
the activities that comprise their production readiness checklist, but diverge
in which activities cause them the most trouble to enforce.




Compared to the general population, the most confident cohort does not report
struggling with code quality metrics, on-call setup, or accounting of
dependencies. This may be due to use of additional tools that specialize in each
of those activities. However, more than 30% of the most confident cohort does
still struggle with ensuring ownership is assigned, ensuring documentation is
attached, ensuring adequate code coverage, maintaining connection to
vulnerability tools, and enforcing SLOs. Importantly, these activities undergo
more continuous change than the activities mentioned earlier. Without technology
designed to enforce both point-in-time and continuous standards, all
organizations will struggle with these checks.


PROGRAM CONFIDENCE X TOOLS FOR TRACKING PRODUCTION READINESS

When we compare program confidence to where teams track alignment to production
readiness, we see that 75% of the least confident leaders (those providing a 1,
2, or 3 score) are using spreadsheets as part of their production readiness
review, compared to just 17% of the most confident. Conversely, 50% of the most
confident teams use an Internal Developer Portal, while 0% of the least
confident do. Unsurprisingly, no one in the most confident cohort lacks a
process for managing production readiness.





PROGRAM CONFIDENCE X FREQUENCY OF ASSESSMENT

When we look at program confidence by how often teams perform production
readiness assessments, we see a few interesting correlations.




First, those that employ continuous monitoring report higher confidence than
others (median of 7). Second, 83% of those reporting an 8 or higher confidence
score employ some form of regular monitoring. And third, those reporting no
formal process have a wider range of confidence than all others.

While we don’t know what contributed to each leader’s rating, if this last
cohort believes regular reviews require more effort than they return in upside,
they may be less likely to consider the lack of post-deployment reviews
detrimental to program effectiveness. This may be particularly true of leaders
that have not yet adopted ways to abstract away the manual effort of ensuring
alignment.


PROGRAM CONFIDENCE X CHALLENGES

When looking at program confidence by the challenges faced when trying to
employ production readiness standards, we see the least confident leaders
struggle most with a lack of consistent standards. Interestingly, all
leaders—regardless of program confidence—struggle with tracking ownership and
ensuring follow-up on actions required. In fact, these two challenges show the
smallest difference between the most and least confident cohorts.




Universal problems like these tend to be indicative of a tooling—rather than
people or process—gap. We now have Internal Developer Portals to help, but these
solutions must include capabilities that make it easy to automatically refresh
information about owners and health, continually assess alignment to standards,
and automatically serve notifications for action required.


PRODUCTION READINESS AUTOMATION

We know that production readiness needs to be a continuous process, but a lack
of appropriate tooling can make that difficult to achieve—as exemplified by
many of the responses in this survey. So we asked our survey participants which
parts of the production readiness lifecycle should be automated.

While responses trended into four key buckets outlined below, a few participants
noted that the bigger issue was a need for a more holistic approach—connecting
automation and workflows across tools.

“Most of what we do for production readiness is already automated, but by
different systems. I would like a more holistic approach. I would like to see
everything in a single dashboard because reliability is critical for our
industry.”



Senior Engineering Manager
10,000+ person company

“We need to automate everything after you deploy the software. Drift is very
difficult to handle.”



Director of System Services & System Engineering
10,000+ person company

“Honestly I’d like to automate as much as possible. Having even just a workflow
for dates and the person acknowledging things would be better than we have now.
Connection between pre-established norms, their execution, and a dashboard would
be the holy grail.”



Director of Engineering
500-1,000 person company


42% OF RESPONDENTS MENTIONED TESTING, VALIDATION, AND CODE COVERAGE

Respondents expressed a strong desire for more automation in testing and
validation processes. This includes automating unit tests, integration tests,
and performance tests to ensure code quality. Many organizations also mentioned
already employing tools designed to help with this.

“I would like to increase automated testing and automation to create change
management tickets with a standard template. This is only possible if every team
starts following a standard change template and process.”



Engineering Lead
10,000+ person company


38% OF RESPONDENTS MENTIONED OWNERSHIP, ON-CALL, RUNBOOKS, OR DOCUMENTATION

Unlike with security and testing tools, participants noted a gap in automating
things like assigning ownership and attaching documentation or runbooks. There
is a clear desire for technology that could automatically follow up with
developers to make these requirements obvious.

“I would like to automate the documentation process—where the new feature is
documented, how it fits the overall architecture,
its dependencies, and the run book of both what could go wrong and what to do.”



Software Engineering Lead
1,000-5,000 person company


28% OF RESPONDENTS MENTIONED SECURITY, COMPLIANCE, RISK, OR VULNERABILITY
ASSESSMENTS

Security and compliance are critical aspects of the PRR process that respondents
want to see automated. This includes automating security checks, vulnerability
scans, and compliance with data privacy policies to ensure the software is
secure and adheres to regulations.

“Currently, we automate security package vulnerability scanning, testing (at
least some), and dependency inspection. We don't automate business SLOs (some
services have health SLOs monitored, such as uptime, number of requests, etc)
which is a big one. There is a custom tool being built for business metrics
monitoring, which is the most important thing, as it's not directly related to
common health metrics, which are monitored in Grafana with Prometheus.”



Software Engineering Manager
1,000-5,000 person company


28% OF RESPONDENTS MENTIONED INFRA PROVISIONING OR SLOS

Automation of deployment processes and monitoring setups was also frequently
mentioned. Respondents are looking for ways to automate deployment validation,
monitoring, and alerting setup to ensure a smooth and reliable production
release.

“Automated testing, deployment validation, and infrastructure checks are key
areas for efficient production readiness assessment automation.”



Software Engineering Manager
10,000+ person company


HOW INTERNAL DEVELOPER PORTALS IMPROVE THE PRODUCTION READINESS PROCESS

The results of this survey highlighted the disparity in production readiness
approach, tooling, and perceived effectiveness. But there were a few noteworthy
trends that united all participants:

 1. Alignment is hard. As organizations adopt frameworks designed to help
    developers ship faster, an inability to manage the knock-on effect of
    information entropy has introduced new risk, hampered velocity, and degraded
    productivity.
 2. Automation is key. Most leaders identified activities in their production
    readiness checklist that they have not been able to automate. This has led
    to either lack of attention to these tasks, lack of continuity, or lack of
    efficient management.
 3. Assessment must be continuous. Leaders that reported the highest levels of
    confidence spoke of the importance of continuous standards enforcement, and
    have taken steps to make this a regular part of their production readiness
    lifecycle.

Internal Developer Portals were designed to address all three dimensions of
production readiness, enabling developers to reduce time to find, time to fix,
and time to build. By centralizing access to the tools and information
developers need to build, while automatically tracking alignment to standards
over time, IDPs are the most efficient way to improve productivity without
compromising quality.

Here are three ways IDPs drive better production readiness:


CENTRALIZE DATA AND STANDARDS ACROSS ALL TOOLS

Many organizations employ different software standards in tools specific to each
set of requirements. For example, teams may have observability standards managed
in their APM tools, or security standards managed in vulnerability management
tools. This siloed approach might seem easier than manually collating data
defined differently everywhere, but that’s no longer the only option.

Internal developer portals connect to all engineering and identity tools to
unite information about software owners, composition, health, and efficiency.
Beyond serving as a central system of record for engineering data, IDPs also
unite once-siloed standards, so teams can manage security standards where
testing standards are tracked, or build compliance standards where
observability standards live.

Cortex’s IDP was designed to facilitate maximum flexibility without excessive
overhead. Fully custom catalogs, 50+ out-of-the-box integrations, the ability
to bring in custom data, and a rich plugin architecture enable teams to build
new data experiences that best fit developer workflows. So anything that
details how software is built, by whom, when, and how can be captured by
catalogs segmented by software type like services, resources, APIs,
infrastructure, etc. Any standard that governs how code is written, tested,
secured, reviewed, or deployed can be unified, and even re-segmented by team,
type, domain, or exemption status, to ensure all components are managed in
context.





APPLY ALWAYS-ON STANDARDS FOR CONTINUOUS ALIGNMENT

Code repos, project management tools, and wikis are all indisputably useful
tools for engineering teams. But none have a live view of the rest of your
software ecosystem, which means none can tell you when software falls out of
alignment with critical requirements for security, compliance, or efficiency.

Internal Developer Portals fill this gap with scorecards. Because IDPs are a
one-stop-shop for all your data and documentation, they can also serve as a
means of continuously monitoring alignment to all software standards you define
in a scorecard. So if code is updated, ownership changes, new tools are adopted,
or old packages hit end-of-life, your IDP makes it easy to see what needs
attention.

Of course, not all IDPs provide this capability. Cortex is the only IDP that
provides the level of data model flexibility to define any rule, for any data,
targeting any audience.

This means users can create Scorecards with rule types like the following (see
the sketch after this list):

 * Binary: Check for connection to a vulnerability scanner or use of the most up
   to date package
 * Target: Require at least 70% code coverage or two reviewers
 * Threshold: Allow no more than one p0 vulnerability, or five open JIRA tickets
   per service
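
Cortex defines these rules in its own scorecard configuration; purely to
illustrate the three rule shapes, here is a minimal Python sketch in which
every field name and rule name is hypothetical rather than Cortex’s actual
data model:

from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class ServiceFacts:
    # Hypothetical facts pulled from integrated tools; names are illustrative.
    has_vuln_scanner: bool
    code_coverage_pct: float
    reviewer_count: int
    p0_vuln_count: int
    open_jira_tickets: int

# A rule maps a service's facts to pass/fail.
Rule = Callable[[ServiceFacts], bool]

RULES: Dict[str, Rule] = {
    # Binary: the property simply holds or it doesn't.
    "connected_to_vuln_scanner": lambda s: s.has_vuln_scanner,
    # Target: a metric must meet or exceed a floor.
    "code_coverage_at_least_70": lambda s: s.code_coverage_pct >= 70.0,
    "at_least_two_reviewers": lambda s: s.reviewer_count >= 2,
    # Threshold: a metric must stay at or below a ceiling.
    "at_most_one_p0_vuln": lambda s: s.p0_vuln_count <= 1,
    "at_most_five_open_tickets": lambda s: s.open_jira_tickets <= 5,
}

def evaluate(service: ServiceFacts) -> Dict[str, bool]:
    """Return pass/fail for every rule in the scorecard."""
    return {name: rule(service) for name, rule in RULES.items()}

svc = ServiceFacts(has_vuln_scanner=True, code_coverage_pct=64.0,
                   reviewer_count=2, p0_vuln_count=0, open_jira_tickets=7)
for name, passed in evaluate(svc).items():
    print(f"{name}: {'PASS' if passed else 'FAIL'}")

Run against the sample facts, this flags the coverage and ticket rules while
the rest pass—exactly the shape of signal a scorecard surfaces.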

In order to ensure scorecards aren’t just passive assessment systems, Cortex
also enables teams to drive meaningful action (sketched in code after this
list) by:

 * Defining deadlines—so certain measures are actioned before a set date and
   time
 * Pushing alerts—via Slack, Teams, or email to ensure devs know what’s needed,
   when
 * Uploading exemption lists—to target only the folks that need to take action
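
As a companion to the rule sketch above, here is an equally hypothetical
follow-up loop: the webhook URL, exemption list, and message format are
invented placeholders, not Cortex’s actual alerting API.

import json
from datetime import date
from urllib import request

SLACK_WEBHOOK = "https://hooks.slack.com/services/T000/B000/XXXX"  # placeholder
EXEMPT = {"legacy-billing"}   # hypothetical exemption list for this initiative
DEADLINE = date(2024, 6, 30)  # the date by which actions must land

def notify(service, failed_rules):
    # Exempted services and fully passing services generate no noise.
    if service in EXEMPT or not failed_rules:
        return
    text = (f"{service} fails {len(failed_rules)} production readiness "
            f"rule(s): {', '.join(failed_rules)}. Please fix by {DEADLINE}.")
    body = json.dumps({"text": text}).encode()
    req = request.Request(SLACK_WEBHOOK, data=body,
                          headers={"Content-Type": "application/json"})
    request.urlopen(req)  # fires the alert into the team channel

notify("payments-api", ["code_coverage_at_least_70", "at_most_five_open_tickets"])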





PROVIDE APPROVED TEMPLATES TO SHIP QUALITY CODE QUICKLY

Enabling self-service is a top priority for engineering leaders who care about
both developer productivity and happiness. But self-service also contributes to
higher quality software, and therefore should be part of any robust production
readiness process. Internal Developer Portals enable teams to create
pre-approved templates with health checks built in, so developers can reduce
time spent looking for standards and context-switching across applications.

Cortex enables teams to not only build production readiness standards into
templates with boilerplate code, but also initiate any workflow from within the
platform using a customizable HTTP request. So developers can do things like:

 * Make an API call to AWS
 * Deploy a service
 * Provision a resource
 * Assign a temporary access key
 * Create a JIRA ticket, or take any other action that would help the developer
   complete a task, or in this case, their production readiness checklist (see
   the sketch after this list)
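
Jira’s public REST API does expose a create-issue endpoint (POST
/rest/api/2/issue), so as a rough sketch of what one such customizable HTTP
request could look like, here is a standalone Python example; the instance URL,
credentials, and project key are placeholders, and this is not Cortex’s
built-in integration:

import base64
import json
from urllib import request

JIRA_BASE = "https://example.atlassian.net"  # placeholder instance
AUTH = base64.b64encode(b"bot@example.com:API_TOKEN").decode()  # placeholder

def create_jira_ticket(summary, description):
    payload = {"fields": {
        "project": {"key": "OPS"},      # placeholder project key
        "summary": summary,
        "description": description,
        "issuetype": {"name": "Task"},
    }}
    req = request.Request(
        f"{JIRA_BASE}/rest/api/2/issue",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json",
                 "Authorization": f"Basic {AUTH}"},
        method="POST",
    )
    with request.urlopen(req) as resp:
        return json.load(resp)["key"]   # e.g. "OPS-123"

ticket = create_jira_ticket(
    "Add runbook for payments-api",
    "Production readiness check failed: runbook missing.",
)
print(f"Created {ticket}")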

Self-serve templates and workflows are also especially useful for onboarding
new developers who would normally need time to ramp up on your chosen
production readiness protocol. By centralizing the tools and templates needed
to align with standards, time to impact can be drastically reduced.





CONCLUSIONS

Production readiness is a necessary process for any organization developing
software. While every team will have a slightly different set of requirements
and order of execution based on their business, there are clear trends in our
study that should give any engineering leader a few things to think about when
designing or updating their own program.

First, on-going ownership is vital to ensuring on-going software health. If a
component’s ownership information is stale by even a week, the downstream
impact on incident response, security, and compliance can be severe. Finding a
solution that connects to your identity management system is the quickest way
to avoid orphaned software that can introduce risk.

Second, regardless of which items comprise your production readiness checklist,
having a way to continuously monitor alignment to those standards is key.
Having up-to-date information in one place reduces the need to manually
cross-reference disparate data, and can help eliminate unnecessary status
stand-ups that burn manager and developer hours.

Finally, having a strong production readiness program has many benefits beyond
software security and reliability. If using tools like Internal Developer
Portals, additional benefits to developers include:

 * Reduced time to find the right set of standards
 * Reduced time spent fixing avoidable issues
 * Reduced time to code and time in approvals when building new software, and
 * Reduced time context switching by planning routine health upgrades around
   current work.

All of these savings add up to more efficient use of resources, higher
developer productivity, faster time to market, and happier teams.

For more information on how Cortex’s Internal Developer Portal can improve your
production readiness process, check out our self-guided tour, or connect with us
for a personalized demonstration.
