audit-log



Enterprise- and organization-level audit log events are now created when a code scanning alert is created, fixed, dismissed, reopened, or appears in a new branch:
code_scanning.alert_created – a code scanning alert was seen for the first time;
code_scanning.alert_appeared_in_branch – an existing code scanning alert appeared in a branch;
code_scanning.alert_closed_became_fixed – a code scanning alert was fixed;
code_scanning.alert_reappeared – a code scanning alert that was previously fixed reappeared;
code_scanning.alert_closed_by_user – a code scanning alert was manually dismissed;
code_scanning.alert_reopened_by_user – a code scanning alert that was previously dismissed was reopened.

The new functionality, which will be included in GHES 3.17, provides more insight into the history of a code scanning alert for easier troubleshooting and analysis.
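
Once these events start flowing, you can locate them with the audit log's action qualifier. For example, the illustrative phrases below work in the audit log search box and, if your plan includes API access, in the audit log API's phrase parameter (the actor value is a placeholder):

    action:code_scanning.alert_created
    action:code_scanning.alert_closed_by_user actor:octocat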

For more information:
Learn more about code scanning
Learn more about audit log events for your enterprise
Learn more about audit log events for your organization


Audit logs play a critical role in keeping enterprises secure and in auditing enterprise activity for compliance. Since becoming generally available in January 2022, audit log streaming has been used by over 2,000 enterprises to transmit audit logs to their preferred streaming endpoints. We are excited to announce three new features that will help you programmatically configure audit log streaming to multiple endpoints of your choosing. In doing so, we aim to empower you to select and employ the tools that best support your security and compliance objectives.

Audit log streaming to a user-defined HTTPS event collector

You can now enroll in a private preview that allows you to stream your audit logs to a user-defined HTTPS event collector. This allows audit logs to be written to any endpoint that can accept an HTTP POST and meets our requirements for streaming GitHub audit logs. By introducing a user-defined HTTPS event collector, we empower you to stream your audit logs to the tool you feel best supports your enterprise's needs.
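
To make the idea concrete, here is a minimal Python sketch of a collector that accepts POSTed events. It is illustrative only: the payload shape assumed below, any verification handshake, and the HTTPS and certificate requirements are defined in GitHub's streaming documentation, and plain HTTP is used purely to keep the sketch self-contained.

    # Minimal, illustrative event collector: accepts POSTed audit log events
    # and hands them to your own storage/analysis pipeline.
    import json
    from http.server import BaseHTTPRequestHandler, HTTPServer

    class AuditLogCollector(BaseHTTPRequestHandler):
        def do_POST(self):
            length = int(self.headers.get("Content-Length", 0))
            body = self.rfile.read(length)
            try:
                events = json.loads(body)  # payload format is an assumption here
            except json.JSONDecodeError:
                self.send_response(400)
                self.end_headers()
                return
            count = len(events) if isinstance(events, list) else 1
            print(f"received {count} event(s)")  # replace with real processing
            self.send_response(200)
            self.end_headers()

    if __name__ == "__main__":
        # In production this must sit behind TLS; plain HTTP keeps the sketch short.
        HTTPServer(("0.0.0.0", 8443), AuditLogCollector).serve_forever()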

Configure audit log streaming to an HTTPS event collector in the log streaming settings page for your enterprise audit log

This private preview is only available to GitHub Enterprise Cloud customers. Enterprise administrators interested in participating should reach out to their GitHub account manager or contact our sales team to have this feature enabled for their enterprise. Let us know what you think by providing feedback on our community discussion post.

Enterprise audit logs can be streamed to two endpoints

You can participate in a public preview to stream your Enterprise’s audit log to two of GitHub’s supported streaming endpoints. You can stream your audit log to two endpoints of the same type, or you can stream to two different providers.

Log streaming settings page showing two configured streams, one to Datadog and the other to Splunk

This update allows you to use your preferred tools for log storage and analysis. When managing your enterprise, you may need to employ multiple tools to ensure compliance and maintain a strong security posture. This can involve different teams that require different levels of access and use different technology to meet your enterprise's security and compliance requirements. By streaming your audit logs to two endpoints, you can employ multiple log storage and analysis tools without a complex log routing architecture or the increased latency it can introduce.

This public preview is available to all GitHub Enterprise Cloud customers. We plan to ship this feature to GitHub Enterprise Server once it is generally available. To set up multiple streams, follow the instructions for each provider for setting up audit log streaming.

Configure audit log streaming via GitHub’s REST API

You can now configure audit log streaming via the REST API. This release adds new API endpoints for the following audit log streaming actions:

  • GET Endpoint Configuration: Retrieve the audit log streaming configuration for your Enterprise.
  • Stream Key Endpoint: Retrieve the audit log streaming key used to encrypt your secrets before sending them via an API call.
  • POST Endpoint: Create new audit log stream configurations.
  • PUT Endpoint: Update existing audit log stream configurations.
  • DELETE Endpoint: Delete existing audit log stream configurations.

With the introduction of these new REST API endpoints, enterprise owners can programmatically create, update, delete, and list their enterprise's audit log streams. By allowing programmatic updates to the audit log streaming configuration, customers can automate tasks like rotating their audit log streaming secrets.

These new audit log streaming endpoints impose a rate limit of 15 API requests per hour to protect the availability of the audit log streaming service. For the time being, these endpoints are only accessible via a personal access token (classic) or an OAuth token with the admin:enterprise scope.
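
As an illustration, the Python sketch below calls the GET configuration endpoint to list an enterprise's stream configurations. The enterprise slug and token are placeholders, and the endpoint path follows the documented pattern for enterprise audit log streams; confirm the exact route and response shape in the REST API reference.

    import requests

    ENTERPRISE = "acme-corp"   # placeholder enterprise slug
    TOKEN = "ghp_example"      # PAT (classic) or OAuth token with admin:enterprise scope

    resp = requests.get(
        f"https://api.github.com/enterprises/{ENTERPRISE}/audit-log/streams",
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Accept": "application/vnd.github+json",
        },
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json())  # the configured audit log streams for the enterprise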

This feature is generally available on GitHub Enterprise Cloud (GHEC) and will be included in the release of GitHub Enterprise Server (GHES) version 3.16. To learn more, check out our documentation for the REST API endpoints for enterprise audit logs.


You can now stream your Enterprise’s audit log to two of GitHub’s supported streaming endpoints.

This update allows you, as an enterprise owner, to easily employ your choice of tools for log storage and analysis. When managing your enterprise, you may need to employ multiple tools to ensure compliance and maintain a strong security posture. This can involve different teams that require different levels of access and use different technology to meet your enterprise's security and compliance requirements. By streaming your audit logs to two endpoints, you can employ multiple log storage and analysis tools without a complex log routing architecture or the increased latency it can introduce.

Interested in signing up? Please reach out to your GitHub account manager or contact our sales team to have this feature enabled for your enterprise. Once enabled, you can follow our documentation on setting up audit log streaming to set up a second stream.


Enterprise Owners on GitHub Enterprise Cloud (GHEC) can join a private beta allowing them to configure audit log streaming via the REST API. This private beta grants access to new API endpoints for the following audit log streaming actions:

  • GET Endpoint Configuration: Retrieve the audit log streaming configuration for your Enterprise.
  • Stream Key Endpoint: Retrieve the audit log streaming key used to encrypt your secrets before sending them via an API call.
  • POST Endpoint: Create new audit log stream configurations.
  • PUT Endpoint: Update existing audit log stream configurations.
  • DELETE Endpoint: Delete existing audit log stream configurations.

With the introduction of these new REST API endpoints, enterprise owners can programmatically create, update, delete, and list their enterprise's audit log streams. By allowing programmatic updates to the audit log streaming configuration, customers can automate tasks like rotating their audit log streaming secrets.

These new audit log streaming endpoints will impose a rate limit of 15 API requests per hour to protect the availability of the audit log streaming service. For the time being, these endpoints are only accessible via a personal access token (classic) or an OAuth token with the admin:enterprise scope.

Enterprise owners interested in participating in the private beta should reach out to their GitHub account manager or contact our sales team to have this feature enabled for their enterprise. Once enabled, enterprise owners can follow the instructions for these API endpoints and provide feedback on their experience in our community discussion.


GitHub’s audit log streaming health check is now generally available! The purpose of the audit log health check is to ensure audit log streams do not fail silently. Every 24 hours, a health check runs for each stream. If a stream is set up incorrectly, an email will be sent to the enterprise owners as notification that their audit log stream is not properly configured.

Example email notification for misconfigured stream

Streamed audit logs are stored for up to seven days on GitHub.com. To avoid audit log events being dropped from the stream, a misconfigured stream must be fixed within six days of email notification. To fix your streaming configuration, follow the steps outlined in “Setting up audit log streaming.”


Starting today for GitHub Enterprise Cloud and as part of GitHub Enterprise Server version 3.13, enterprise and organization audit log events will include the applicable SAML and SCIM identity data associated with the user. This data provides increased visibility into the identity of the user and enables logs from multiple systems to quickly and easily be linked using a common corporate identity. The SAML identity information will be displayed in the external_identity_nameid field and the SCIM identity data will be displayed in the external_identity_username field within the audit log payloads.
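
For illustration, a hypothetical event payload might carry the new fields alongside the usual ones. All values below are made up; only the two field names come from this change.

    {
      "action": "repo.create",
      "actor": "octocat",
      "org": "acme-corp",
      "external_identity_nameid": "octocat@acme-corp.com",
      "external_identity_username": "octocat_acme"
    }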

In GitHub Enterprise Cloud Classic, SAML SSO gives organization and enterprise owners a way to control and secure access to resources like repositories, issues, and pull requests. Organization owners can invite GitHub users to join an organization backed by SAML SSO, allowing users to become members of the organization while retaining their existing identity and contributions on GitHub.

If your Enterprise Cloud Classic organization uses SAML SSO, you can use SCIM to add, manage, and remove organization members’ access to your organization. For example, an administrator can deprovision an organization member using SCIM and automatically remove the member from the organization.

To learn more, read our documentation about SAML SSO authentication data in our audit logs.


GitHub enterprise and organization owners now have improved visibility into authentication activity performed with a personal access token (classic), fine-grained personal access token, OAuth token, SSH key, or deploy key. The audit log may now contain a hashed rendering of the token or key used for authentication, along with a programmatic_access_type field describing the type of token or key. Enterprise and organization owners can query by a specific token or key to identify and track its activity.
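
For example, assuming the documented hashing scheme (a SHA-256 digest of the raw token, Base64 encoded), you could derive the value to search for as sketched below; the token shown is a placeholder, and the scheme should be verified against the current documentation.

    import base64
    import hashlib

    def hashed_token(token: str) -> str:
        # Assumed scheme: SHA-256 of the raw token, Base64 encoded.
        digest = hashlib.sha256(token.encode("utf-8")).digest()
        return base64.b64encode(digest).decode("ascii")

    # Paste the resulting phrase into the audit log search box or the API's phrase parameter.
    print(f'hashed_token:"{hashed_token("ghp_example")}"')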

To learn more, read our documentation on identifying audit log events performed by an access token.


GitHub Enterprise Cloud customers can now participate in a public beta displaying SAML single sign-on (SSO) identities for relevant users in audit log events.

SAML SSO gives organization and enterprise owners a way to control and secure access to resources like repositories, issues, and pull requests. Organization owners can invite GitHub users to join an organization backed by SAML SSO, allowing users to become members of the organization while retaining their existing identity and contributions on GitHub.

With the addition of SAML SSO identities in the audit log, organization and enterprise owners can easily link audit log activity with the user's corporate identity used to SSO into GitHub.com. This provides increased visibility into the identity of the user and enables logs from multiple systems to quickly and easily be linked using a common SAML identity.

To learn more, read our documentation about SAML SSO authentication data in our audit logs. Enterprise and organization owners can provide feedback at the logging SAML SSO authentication data for enterprise and org audit log events community discussion page.


In October 2022, we released a private beta adding linked SAML single sign-on (SSO) identities for relevant users to GitHub Enterprise audit log events.

We are expanding the private beta to now include linked identities within git events, making this information available across all relevant events.

Enterprise owners interested in participating in the private beta should reach out to their GitHub account manager or contact our sales team to have this feature enabled for their enterprise. Once enabled, enterprise and organization owners can provide feedback at the logging SAML SSO authentication data for enterprise and org audit log events community discussion page.


In early July, GitHub announced a new rate limit was coming for the audit log API endpoints. Starting today, each audit log API endpoint will impose a rate limit of 1,750 queries per hour per user, IP address, enterprise, or organization. This is higher than the previously stated change to 15 queries per minute, in order to allow integrators more time to adjust workflows and scripts which programmatically query the audit log API. We intend to enforce a limit of 15 queries per minute on or after November 1st, 2023.

This rate limit will be enforced on each combination of an individual user, IP address and entity path (/orgs/<org_name>/audit_log or /enterprises/<enterprise_name>/audit_log) independently.

To adapt to these changes and avoid rate limiting, programs or integrations querying the audit log API should query at a maximum frequency of 1,750 queries per hour. Additionally, applications querying the audit log API should be updated to honor HTTP 403 and 429 responses to dynamically adjust to the back-pressure exerted by GitHub.
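
As a sketch of what honoring that back-pressure can look like, the Python snippet below queries the enterprise entity path named above, retries on 403/429 responses using the Retry-After header when present, and spaces requests to stay within the hourly budget. The enterprise slug and token are placeholders.

    import time
    import requests

    URL = "https://api.github.com/enterprises/acme-corp/audit_log"  # entity path as above; placeholder enterprise
    HEADERS = {"Authorization": "Bearer ghp_example", "Accept": "application/vnd.github+json"}

    def fetch_page(params):
        while True:
            resp = requests.get(URL, headers=HEADERS, params=params, timeout=30)
            if resp.status_code in (403, 429):
                # Back off: respect Retry-After when present, otherwise wait a minute.
                time.sleep(int(resp.headers.get("Retry-After", 60)))
                continue
            resp.raise_for_status()
            return resp.json()

    events = fetch_page({"phrase": "action:repo.create", "per_page": 100})
    print(len(events), "events")
    time.sleep(3600 / 1750)  # ~2 s between calls keeps you at or below 1,750 queries per hour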

For additional information, please consult our documentation on handling rate limits for requests from personal accounts and rate limits for GitHub Apps. Alternatively, enterprises seeking access to near real-time data should consider streaming your enterprise audit log.


In April, we announced that GitHub Enterprise Cloud customers could join a public beta for streaming API request events as part of their enterprise audit log. As part of that release, REST API calls against an enterprise's private and internal repositories could be streamed to one of GitHub's supported streaming endpoints.

However, we've discovered the need to expand our API call coverage for private and internal repositories in order to capture other security-significant API routes. Additionally, we've determined that several API routes targeting internal and private repositories generate significant event volumes with little auditing or security value. To address these concerns, we partnered with GitHub's security team to define a set of auditing- and security-significant controllers to serve as the basis for the public beta. These adjustments to the beta should increase the signal and decrease the noise generated by the API request events being streamed.

Note: hashed_token and token_id have been redacted for security reasons.

Enterprise owners interested in the public beta can still follow the instructions in our docs for enabling audit log streaming of API requests. We welcome feedback on the changes made to this feature on our beta feedback community discussion post.


GitHub provides Enterprise customers with the ability to programmatically retrieve enterprise and organization audit log events in near real-time using the audit log API. A high-quality audit log is an essential tool used by enterprises to ensure compliance, maintain security, investigate issues, and promote accountability. To support these objectives, the audit log API needs to be highly reliable, consistently available, and extremely scalable.

Recognizing the audit log API's importance as a data source to enterprises, each audit log API endpoint will impose a rate limit of 15 queries per minute per enterprise or org starting August 1st, 2023. Based on a thorough analysis of event generation data, we are confident that the new rate limit will continue to support customers in accessing near real-time data via the audit log API. Additionally, query cost is a crucial consideration, and in the future, the audit log may impose further rate limiting for high-cost queries that place significant strain on our data stores.

What can you do to prepare for these changes? First, programs or integrations querying the audit log API should be adjusted to query at a maximum frequency of 15 queries per minute. Additionally, applications querying the audit log API should be updated to be capable of honoring HTTP 429 responses, enabling them to dynamically adjust to the back-pressure exerted by our systems. Alternatively, Enterprises seeking access to near real-time data should consider streaming your enterprise audit log.


The Enterprise and Organization audit log UI and user security logs UI now include an expandable view that displays the full audit log payload of each event.

Customers can now see the same event metadata when searching the audit log via the UI, exporting audit logs to a JSON file, querying the audit log API, or streaming audit logs to one of our supported streaming endpoints.


Starting today, Dependabot will be able to auto-dismiss npm alerts that have limited impact (e.g. long-running tests) or are unlikely to be exploitable. With this ship, Dependabot will cut false positives and reduce alert fatigue substantially.

On by default for public repositories and opt-in for private repositories, this feature will result in 15% of low-impact npm alerts being auto-dismissed moving forward – so you can focus on the alerts that matter, without worrying about the ones that don't.

What’s changing?

When the feature is enabled, Dependabot will auto-dismiss certain types of vulnerabilities that are found in npm dependencies used in development (npm devDependency alerts with scope:development). This feature will help you proactively filter out false positives on development-scoped (non-production or runtime) alerts without compromising on high risk devDependency alerts.

Dependabot alerts auto-dismissal list view

Frequently asked questions

Why is GitHub making this change?

At GitHub, we’ve been thinking deeply about how to responsibly address long-running issues around alert fatigue and false positives. Rather than over-indexing on one criterion like reachability or dependency scope, we believe that a responsibly-designed solution should be able to detect and reason on a rich set of complex, contextual alert metadata.

That’s why, moving forward, we’re releasing a series of ships powered by an underlying, all-new, flexible and powerful alert rules engine. Today’s ship, our first application, leverages GitHub-curated vulnerability patterns to help proactively filter out false positive alerts.

Why auto-dismissal, rather than purely suppressing these alerts?

Auto-dismissing ensures any ignored alerts are 1) able to be reintroduced if alert metadata changes, 2) caught by existing reporting systems and workflows, and 3) extensible as a whole to future rules-based actions, where Dependabot can make decisions on subsets of alerts and do things like reopen an alert for a patch, open a Dependabot pull request, or even auto-merge when the risk is low.

How does GitHub identify and detect low impact alerts?

Auto-dismissed alerts match GitHub-curated vulnerability patterns. These patterns take into account contextual information about how you’re using the dependency and the level of risk they may pose to your repository. To learn more, see our documentation on covered classes of vulnerabilities.

How will this activity be reported?

Auto-dismissal activity is supported across webhooks, REST, GraphQL, and the audit log for Dependabot alerts. In addition, you can review your closed alert list with the resolution:auto-dismissed filter.

How will this experience look and feel?

Alerts identified as false positives will be automatically dismissed without a notification or new pull request, and will appear as a special timeline event. As these alerts are closed, you'll still be able to review any auto-dismissed alerts with the resolution:auto-dismissed filter.

How do I reopen an automatically dismissed alert?

Like any manually dismissed alert, you can reopen an auto-dismissed alert from the alert list view or details page. This specific alert won’t be auto-dismissed again.

What happens if alert metadata changes or advisory information is withdrawn?

Dependabot recognizes and immediately responds to any changes to metadata which void auto-dismissal logic. For example, if you change the dependency scope and the alert no longer meets the criteria to be auto-dismissed, the alert will automatically reopen.

How can I enable or disable the feature?

This feature is on-by-default for public repositories and opt-in for private repositories. Repository admins can opt in or out from your Dependabot alerts settings in the Code Security page.

Is this feature available for enterprise?

Yes! In addition to all free repositories, this feature will ship immediately to GHEC and to GHES in version 3.10.

What’s next?

Next, we’ll expose our underlying engine – which enables Dependabot to perform actions based on a rich set of contextual alert metadata – so you can write your own custom rules to better manage your alerts, too.

How do I learn more?

How do I provide feedback?

Let us know what you think by providing feedback — we’re listening!
