2 changes: 1 addition & 1 deletion content/en/administrators_guide/getting_started.md
@@ -116,7 +116,7 @@ To successfully create a new Datadog installation, review the [plan][11] page. Y
[7]: /getting_started/tagging/
[8]: https://app.datadoghq.com/logs/pipelines/pipeline/add
[9]: https://app.datadoghq.com/apm/service-setup
-[10]: https://app.datadoghq.com/monitors/recommended
+[10]: https://app.datadoghq.com/monitors/templates
[11]: /administrators_guide/plan
[12]: /administrators_guide/plan/#resource-tagging
[13]: https://github.com/DataDog/datadog-agent/tree/main/examples
2 changes: 1 addition & 1 deletion content/en/data_jobs/_index.md
@@ -64,4 +64,4 @@ To determine why a stage is taking a long time to complete, you can use the **Sp

{{< partial name="whats-next/whats-next.html" >}}

-[1]: https://app.datadoghq.com/monitors/recommended?q=jobs%20&only_installed=true&p=1
+[1]: https://app.datadoghq.com/monitors/templates
14 changes: 3 additions & 11 deletions content/en/internal_developer_portal/scorecards/_index.md
@@ -37,7 +37,7 @@ Scorecards help your team measure and continuously improve the health and perfor

You have full control over how scorecards are defined. In addition to the three sets of core Scorecards that the Datadog platform provides around Production Readiness, Observability Best Practices, and Documentation & Ownership, you can customize default rules or create new ones to match your team's priorities and reflect your own operational standards. This flexibility lets you tailor Scorecards to your organization's engineering culture and maturity.

-Datadog evaluates the default Scorecards every 24 hours for all registered entities in the Software Catalog against a set of pass-fail criteria. You can turn off these default evaluations any time. You can configure the data input, evaluation criteria, and evaluation cadence for any customized rules using the [Scorecards API][10] or [Datadog Workflow Automation][9].
+Datadog evaluates the default Scorecards every 24 hours for all registered entities in the Software Catalog against a set of pass-fail criteria. You can turn off these default evaluations at any time. You can configure the data input, evaluation criteria, and evaluation cadence for any customized rules using the [Scorecards API][1] or [Datadog Workflow Automation][2].
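
As a minimal sketch of the API route, the following Python snippet creates a custom rule. The endpoint path and payload shape are assumed from the Scorecards API reference, and the rule name, scorecard name, and environment variable names are hypothetical examples, not fixed values:

```python
import os

import requests

# Sketch, not an authoritative client: create a custom scorecard rule.
# Endpoint and payload shape are assumed from the Scorecards API reference;
# the rule and scorecard names below are hypothetical.
resp = requests.post(
    "https://api.datadoghq.com/api/v2/scorecard/rules",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "data": {
            "type": "rule",
            "attributes": {
                "name": "Has a runbook linked",      # hypothetical rule name
                "scorecard_name": "Team Standards",  # hypothetical scorecard
                "enabled": True,
            },
        }
    },
)
resp.raise_for_status()
rule_id = resp.json()["data"]["id"]  # needed later when reporting outcomes
```

The evaluation cadence for a custom rule is then set by whatever schedules its outcome submissions, for example a recurring Workflow Automation job.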

Datadog can summarize Scorecard results into automated reports and deliver them directly through Slack, helping your team stay aligned, track improvements, and efficiently address gaps.

@@ -56,13 +56,5 @@

{{< partial name="whats-next/whats-next.html" >}}

-[1]: https://app.datadoghq.com/services
-[2]: /service_management/service_level_objectives/
-[3]: https://app.datadoghq.com/monitors/recommended
-[4]: /tracing/services/deployment_tracking/
-[5]: /tracing/other_telemetry/connect_logs_and_traces/
-[6]: /tracing/software_catalog/
-[7]: /getting_started/tagging/unified_service_tagging/
-[8]: https://app.datadoghq.com/services/scorecard
-[9]: /service_management/workflows/
-[10]: /api/latest/service-scorecards/
+[1]: /api/latest/service-scorecards/
+[2]: /service_management/workflows/
@@ -10,8 +10,8 @@ further_reading:
tag: "Documentation"
text: "Software Catalog"
- link: /api/latest/service-scorecards/
tag: "Documentation"
text: "Scorecards API"
tag: "Documentation"
text: "Scorecards API"
- link: "https://www.datadoghq.com/blog/service-scorecards/"
tag: "Blog"
text: "Prioritize and promote service observability best practices with Scorecards"
@@ -20,17 +20,17 @@
text: "Formalize best practices with custom Scorecards"
- link: "/continuous_integration/dora_metrics/"
tag: "Documentation"
text: "Track DORA Metrics with Datadog"
text: "Track DORA Metrics with Datadog"
---

-Datadog provides the following out-of-the-box scorecards based on a default set of rules: Production Readiness, Observability Best Practices, and Ownership & Documentation.
+Datadog provides the following out-of-the-box scorecards based on a default set of rules: Production Readiness, Observability Best Practices, and Ownership & Documentation.

## Set up default scorecards

To select which of the out-of-the-box rules are evaluated for each of the default scorecards:

1. Open the [Scorecards page][1] in Software Catalog.
-2. Enable or disable rules to customize how the scores are calculated.
+2. Enable or disable rules to customize how the scores are calculated.
3. Click **View your scores** to start tracking your progress toward the selected rules across your defined entities.

{{< img src="/tracing/software_catalog/scorecards-setup.png" alt="Scorecards setup page" style="width:90%;" >}}
@@ -43,24 +43,24 @@ After the default scorecards are set up, the Scorecards page in the Software Cat

The Production Readiness score for all entities (unless otherwise indicated) is based on these rules:

-Have any SLOs defined
+Have any SLOs defined
: [Service Level Objectives (SLOs)][2] provide a framework for defining clear targets around application performance, which helps you provide a consistent customer experience, balance feature development with platform stability, and improve communication with internal and external users.

Have any monitors defined
: Monitors reduce downtime by helping your team quickly react to issues in your environment. Review [monitor templates][3].

Specified on-call
-: Improve the on-call experience for everyone by establishing clear ownership of your services. This gives your on-call engineers the correct point of contact during incidents, reducing the time it takes to resolve your incidents.
+: Improve the on-call experience for everyone by establishing clear ownership of your services. This gives your on-call engineers the correct point of contact during incidents, reducing the time it takes to resolve your incidents.

Last deployment occurred within the last 3 months
-: For services monitored by APM or USM. Agile development practices give you the ability to quickly address user feedback and pivot to developing the most important functionality for your end users.
+: For services monitored by APM or USM. Agile development practices give you the ability to quickly address user feedback and pivot to developing the most important functionality for your end users.

### Observability Best Practices

The Observability Best Practices score is based on the following rules:

Deployment tracking is active
-: For services monitored by APM or USM. [Ensure smooth rollouts by implementing a version tag with Unified Service Tagging][4]. As you roll out new versions of your functionality, Datadog captures and alerts on differences between the versions in error rates, number of requests, and more. This can help you understand when to roll back to previous versions to improve end user experience.
+: For services monitored by APM or USM. [Ensure smooth rollouts by implementing a version tag with Unified Service Tagging][4]. As you roll out new versions of your functionality, Datadog captures and alerts on differences between the versions in error rates, number of requests, and more. This can help you understand when to roll back to previous versions to improve end user experience.

Logs correlation is active
: For APM services, evaluated based on the past hour of logs detected. [Correlation between APM and Logs][5] improves the speed of troubleshooting for end users, saving you time during incidents and outages.
@@ -70,7 +70,7 @@ Logs correlation is active
The Ownership & Documentation score is based on the following rules:

Team defined
-: Defining a Team makes it easier for your on-call staff to know which team to escalate to in case a service they are not familiar with is the root cause of an issue.
+: Defining a Team makes it easier for your on-call staff to know which team to escalate to in case a service they are not familiar with is the root cause of an issue.

Contacts defined
: Defining contacts reduces the time it takes for your on-call staff to escalate to the owner of another service, helping you recover your services faster from outages and incidents.
@@ -85,15 +85,15 @@ Docs defined

Each out-of-the-box scorecard (Production Readiness, Observability Best Practices, Ownership & Documentation) is made up of a default set of rules. These reflect pass-fail conditions and are automatically evaluated once per day. An entity's score against custom rules is based on outcomes sent using the [Scorecards API][8] or [Workflow Automation][9]. To exclude a particular custom rule from an entity's score calculation, set its outcome to `skip` in the Scorecards API.
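
For example, a minimal sketch of reporting outcomes in Python, assuming the batch-outcomes endpoint and field names from the Scorecards API reference; the rule ID and entity reference are placeholders:

```python
import os

import requests

# Sketch: report custom-rule outcomes in a batch. Field names are assumed
# from the Scorecards API reference; rule_id and entity_reference are
# placeholders. A "skip" state excludes the rule from the entity's score.
resp = requests.post(
    "https://api.datadoghq.com/api/v2/scorecard/outcomes/batch",
    headers={
        "DD-API-KEY": os.environ["DD_API_KEY"],
        "DD-APPLICATION-KEY": os.environ["DD_APP_KEY"],
        "Content-Type": "application/json",
    },
    json={
        "data": {
            "type": "outcomes-batch-request",
            "attributes": {
                "results": [
                    {
                        "rule_id": "abc123xyz",                  # placeholder
                        "entity_reference": "service:checkout",  # placeholder
                        "state": "skip",  # pass | fail | skip
                    }
                ]
            },
        }
    },
)
resp.raise_for_status()
```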

-Individual rules may have restrictions based on data availability. For example, deployment-related rules rely on the availability of version tags through APM [Unified Service Tagging][6].
+Individual rules may have restrictions based on data availability. For example, deployment-related rules rely on the availability of version tags through APM [Unified Service Tagging][6].

Each rule lists a score for the percentage of entities that are passing. Each scorecard has an overall score percentage that totals how many entities are passing, across all rules—**not** how many entities are passing all rules. Skipped and disabled rules are not included in this calculation.
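
A small worked example of that distinction, with hypothetical entities and rules:

```python
# Hypothetical evaluations: the overall score counts passing (entity, rule)
# pairs across all rules, not entities that pass every rule. Skipped rules
# are excluded from the denominator.
evaluations = {
    ("checkout", "Have any SLOs defined"): "pass",
    ("checkout", "Have any monitors defined"): "fail",
    ("inventory", "Have any SLOs defined"): "pass",
    ("inventory", "Have any monitors defined"): "pass",
    ("billing", "Have any SLOs defined"): "skip",
}
counted = [state for state in evaluations.values() if state != "skip"]
overall = 100 * counted.count("pass") / len(counted)
print(f"{overall:.0f}%")  # 75%: 3 of 4 counted evaluations pass, although
                          # only "inventory" passes all of its rules
```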

-Scores for each rule can also be viewed **By Kind** and **By Team**. These tabs aggregate scores across an entity's kind (for example, `service`, `queue`, `datastore`, or `api`) or team as defined in Software Catalog. This score is calculated by averaging each entity's individual score within each kind or team.
+Scores for each rule can also be viewed **By Kind** and **By Team**. These tabs aggregate scores across an entity's kind (for example, `service`, `queue`, `datastore`, or `api`) or team as defined in Software Catalog. This score is calculated by averaging each entity's individual score within each kind or team.

## Group rules into levels

-You can group rules into levels to categorize them by their criticality. There are three predefined levels:
+You can group rules into levels to categorize them by their criticality. There are three predefined levels:

- **Level 1 - Basic rules:** These rules reflect the baseline expectations for every production entity, such as having an on-call owner, monitoring in place, or a team defined.
- **Level 2 - Intermediate rules:** These rules reflect strong engineering practices that should be adopted across most entities. Examples might include defining SLOs or linking documentation within Software Catalog.
@@ -103,15 +103,15 @@ You can set levels for any out-of-the-box or custom rules. By default, rules wit

{{< img src="/tracing/software_catalog/scorecard-levels.png" alt="Scorecards UI grouped by levels" style="width:90%;" >}}

-You can group rules by scorecard or level in the Scorecards UI. In the Software Catalog, you can track how a specific entity is progressing through each level. Each entity starts at Level 0. The entity progresses to Level 1 once it passes all level 1 rules until it reaches a Level 3 status.
+You can group rules by scorecard or level in the Scorecards UI. In the Software Catalog, you can track how a specific entity is progressing through each level. Each entity starts at Level 0, advances to Level 1 once it passes every Level 1 rule, and continues level by level until it reaches Level 3.
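
A short sketch of that progression, under the assumption that an entity holds the highest level whose rules, and all lower levels' rules, it fully passes:

```python
# Assumed semantics of level progression: an entity sits at the highest
# level N such that it passes every rule at levels 1 through N; otherwise
# it stays at Level 0.
def entity_level(passed: dict[int, list[bool]]) -> int:
    level = 0
    for n in (1, 2, 3):
        if passed.get(n) and all(passed[n]):
            level = n
        else:
            break
    return level

# Passes all Level 1 and Level 2 rules, fails a Level 3 rule -> Level 2
print(entity_level({1: [True, True], 2: [True], 3: [True, False]}))  # 2
```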

{{< img src="/tracing/software_catalog/scorecard-levels-software-catalog.png" alt="Scorecards view in Software Catalog showing service's status by level" style="width:90%;" >}}

-## Scope scorecard rules
+## Scope scorecard rules

-Scopes allow you to define which entities a rule applies to, using metadata from entity definitions in Software Catalog. Without a scope defined, a rule applies to all defined entities in the catalog. You can scope by a `kind` of entity as well as any field within an entity definition, including `team`, `tier`, and custom tags.
+Scopes allow you to define which entities a rule applies to, using metadata from entity definitions in Software Catalog. Without a scope defined, a rule applies to all defined entities in the catalog. You can scope by a `kind` of entity as well as any field within an entity definition, including `team`, `tier`, and custom tags.

-By default, an entity must match all specified conditions to be evaluated against the rule. You can use `OR` statements to include multiple values for the same field.
+By default, an entity must match all specified conditions to be evaluated against the rule. You can use `OR` statements to include multiple values for the same field.
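
A sketch of those matching semantics, with a hypothetical scope and entity metadata:

```python
# Assumed scope semantics: AND across scoped fields, OR across the values
# listed for a single field. An empty scope matches every entity.
def in_scope(entity: dict[str, str], scope: dict[str, list[str]]) -> bool:
    return all(entity.get(field) in values for field, values in scope.items())

scope = {"kind": ["service"], "tier": ["1", "2"]}  # hypothetical scope
print(in_scope({"kind": "service", "tier": "2"}, scope))  # True
print(in_scope({"kind": "queue", "tier": "1"}, scope))    # False: kind mismatch
```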

{{< img src="/tracing/software_catalog/scorecard-edit-scope.png" alt="Scorecards setup page" style="width:90%;" >}}

@@ -123,7 +123,7 @@ You can set scopes for both out-of-the-box and custom rules. When you add a scop

[1]: https://app.datadoghq.com/services/scorecard
[2]: /service_management/service_level_objectives/
-[3]: https://app.datadoghq.com/monitors/recommended
+[3]: https://app.datadoghq.com/monitors/templates
[4]: /tracing/services/deployment_tracking/
[5]: /tracing/other_telemetry/connect_logs_and_traces/
[6]: /getting_started/tagging/unified_service_tagging/
4 changes: 2 additions & 2 deletions content/en/monitors/_index.md
@@ -45,7 +45,7 @@ Monitor critical changes by checking metrics, integration availability, and netw

## Get started

-The fastest way to start with Datadog Monitors is with [Recommended Monitors][1]. These are a collection of monitors within Datadog that are preconfigured by Datadog and integration partners.
+The fastest way to start with Datadog Monitors is with [Monitor templates][1]. These are a collection of monitors preconfigured by Datadog and integration partners.

You can also build your own monitors from scratch in lab environments in the Learning Center, or in your application by following the Getting Started with Monitors guide.

@@ -79,7 +79,7 @@ Monitors and alerts are essential tools for ensuring the reliability, performanc

{{< partial name="whats-next/whats-next.html" >}}

-[1]: https://app.datadoghq.com/monitors/recommended
+[1]: https://app.datadoghq.com/monitors/templates
[2]: /monitors/notify
[3]: /monitors/downtimes
[4]: /monitors/downtimes/?tab=bymonitorname