
Commit

Merge remote-tracking branch 'upstream/main' into add-weekly-job-elastic-agent-ubuntu
mrodm committed Aug 26, 2024
2 parents 980a890 + 32fc06f commit e959100
Showing 184 changed files with 29,840 additions and 844 deletions.
1 change: 1 addition & 0 deletions .github/CODEOWNERS
@@ -206,6 +206,7 @@
/packages/gcp/data_stream/vpcflow @elastic/security-service-integrations
/packages/gcp_metrics @elastic/obs-ds-hosted-services
/packages/gcp_pubsub @elastic/security-service-integrations
/packages/gigamon @elastic/security-service-integrations
/packages/github @elastic/security-service-integrations
/packages/gitlab @elastic/security-service-integrations
/packages/golang @elastic/obs-infraobs-integrations
17 changes: 13 additions & 4 deletions packages/amazon_security_lake/_dev/build/docs/README.md
@@ -4,8 +4,9 @@ This [Amazon Security Lake](https://aws.amazon.com/security-lake/) integration h

Security Lake automates the collection of security-related log and event data from integrated AWS services and third-party services. It also helps you manage the lifecycle of data with customizable retention and replication settings. Security Lake converts ingested data into Apache Parquet format and a standard open-source schema called the Open Cybersecurity Schema Framework (OCSF). With OCSF support, Security Lake normalizes and combines security data from AWS and a broad range of enterprise security data sources.

The Amazon Security Lake integration currently supports only one mode of log collection:
The Amazon Security Lake integration can be used in two different modes to collect data:
- AWS S3 polling mode: Amazon Security Lake writes data to S3, and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
- AWS S3 SQS mode: Amazon Security Lake writes data to S3, S3 sends a notification of a new object to SQS, the Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple agents can be used in this mode.
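
As a rough illustration, the two modes above differ mainly in whether the underlying aws-s3 input is given a bucket to poll or an SQS queue to subscribe to. A minimal sketch with placeholder values (the actual keys are rendered from this package's aws-s3.yml.hbs template):

```yaml
# S3 polling mode: Elastic Agent lists the bucket on an interval (placeholder values).
bucket_arn: "arn:aws:s3:::example-security-lake-bucket"
bucket_list_interval: 300s
---
# S3 SQS mode: Elastic Agent reads new-object notifications from a queue (placeholder values).
queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/example-security-lake-queue"
max_number_of_messages: 5
```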

## Compatibility

@@ -37,6 +38,7 @@ The Amazon Security Lake integration collects logs from both [Third-party servic
- For **Log and event sources**, choose which sources the subscriber is authorized to consume.
- For **Data access method**, choose **S3** to set up data access for the subscriber.
- For **Subscriber credentials**, provide the subscriber's **AWS account ID** and **external ID**.
- For **Notification details**, select **SQS queue**.
- Choose Create.
3. The above steps create and provide the details required to configure the Amazon Security Lake integration, such as the IAM role/AWS role ID, the external ID, and the queue URL.

@@ -48,11 +50,18 @@ The Amazon Security Lake integration collects logs from both [Third-party servic
3. Click on the "Amazon Security Lake" integration from the search results.
4. Click on the Add Amazon Security Lake Integration button to add the integration.
![Home Page](../img/home_page.png)
5. The integration currently only supports collecting logs via AWS S3.
6. While adding the integration, you have to configure the following details:
- bucket arn
5. By default, the "Collect logs via S3 Bucket" toggle is off and logs are collected via AWS SQS. In this mode, configure the following details:
- queue url
![Queue URL](../img/queue_url.png)
- collect logs via S3 Bucket toggled off
- role ARN
- external id
![Role ARN and External ID](../img/role_arn_and_external_id.png)

6. If you want to collect logs via AWS S3, provide the following details:
- bucket arn
- role ARN
- external id

**NOTE**:

5 changes: 5 additions & 0 deletions packages/amazon_security_lake/changelog.yml
@@ -1,4 +1,9 @@
# newer versions go on top
- version: "1.5.0"
changes:
- description: Re-added SQS notification settings which were removed due to a prior update error.
type: bugfix
link: https://github.com/elastic/integrations/pull/10854
- version: "1.4.1"
changes:
- description: "Remove confusing documentation remaining from previous change."
@@ -1,3 +1,4 @@
{{#if collect_s3_logs}}

{{#if bucket_arn}}
bucket_arn: {{bucket_arn}}
@@ -11,10 +12,32 @@ bucket_list_interval: {{interval}}
{{#if bucket_list_prefix}}
bucket_list_prefix: {{bucket_list_prefix}}
{{/if}}

{{else}}

{{#if queue_url}}
queue_url: {{queue_url}}
{{/if}}
sqs.notification_parsing_script.source: {{event_parsing_script}}
{{#if region}}
region: {{region}}
{{/if}}
{{#if visibility_timeout}}
visibility_timeout: {{visibility_timeout}}
{{/if}}
{{#if api_timeout}}
api_timeout: {{api_timeout}}
{{/if}}
{{#if max_number_of_messages}}
max_number_of_messages: {{max_number_of_messages}}
{{/if}}
{{#if file_selectors}}
file_selectors:
{{file_selectors}}
{{/if}}

{{/if}}

{{#if access_key_id}}
access_key_id: {{access_key_id}}
{{/if}}
@@ -56,6 +79,11 @@ proxy_url: {{proxy_url}}
ssl: {{ssl}}
{{/if}}
tags:
{{#if collect_s3_logs}}
- collect_s3_logs
{{else}}
- collect_sqs_logs
{{/if}}
{{#if preserve_original_event}}
- preserve_original_event
{{/if}}
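A minimal sketch of roughly what the SQS branch of this template renders to when the S3 bucket toggle is off, assuming default and placeholder values:

```yaml
# Rendered from the {{else}} (SQS) branch; queue URL and region are placeholders.
queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/example-security-lake-queue"
# sqs.notification_parsing_script.source is filled in from the event_parsing_script variable.
region: us-east-1
visibility_timeout: 300s
api_timeout: 120s
max_number_of_messages: 5
tags:
  - collect_sqs_logs
```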
72 changes: 70 additions & 2 deletions packages/amazon_security_lake/data_stream/event/manifest.yml
@@ -7,6 +7,14 @@ streams:
description: Collect Amazon Security Lake Events via AWS S3 input.
template_path: aws-s3.yml.hbs
vars:
- name: collect_s3_logs
required: true
show_user: true
title: Collect logs via S3 Bucket
description: To collect logs via the S3 bucket, enable the toggle switch. By default, logs are collected via the SQS queue.
type: bool
multi: false
default: false
- name: access_key_id
type: password
title: Access Key ID
@@ -77,13 +85,56 @@ streams:
show_user: true
default: 5
description: Number of workers that will process the S3 objects listed. It is a required parameter for collecting logs via the AWS S3 Bucket.
- name: queue_url
type: text
title: "[SQS] Queue URL"
multi: false
required: false
show_user: true
description: URL of the AWS SQS queue that messages will be received from. It is a required parameter for collecting logs via the AWS SQS.
- name: visibility_timeout
type: text
title: "[SQS] Visibility Timeout"
multi: false
required: false
show_user: true
default: 300s
description: The duration that the received messages are hidden from subsequent retrieve requests after being retrieved by a ReceiveMessage request. The maximum is 12 hours. Supported units for this parameter are h/m/s.
- name: api_timeout
type: text
title: "[SQS] API Timeout"
multi: false
required: false
show_user: true
default: 120s
description: The maximum duration that an AWS API call can take. The maximum is half of the visibility timeout value. Supported units for this parameter are h/m/s.
- name: max_number_of_messages
type: integer
title: "[SQS] Maximum Concurrent SQS Messages"
required: false
show_user: true
default: 5
description: The maximum number of SQS messages that can be inflight at any time.
- name: file_selectors
type: yaml
title: "File Selectors"
title: "[SQS] File Selectors"
multi: false
required: false
show_user: false
description: If the S3 bucket will have events that correspond to files that this integration shouldn't process, file_selectors can be used to limit the files that are downloaded. This is a list of selectors which are made up of regex and expand_event_list_from_field options. The regex should match the S3 object key, and the optional expand_event_list_from_field is the same as the global setting. If file_selectors is given, then any global expand_event_list_from_field value is ignored in favor of the ones specified in the file_selectors. Regexes use [RE2 syntax](https://pkg.go.dev/regexp/syntax). Files that don’t match one of the regexes will not be processed.
description: If the SQS queue will have events that correspond to files that this integration shouldn't process, file_selectors can be used to limit the files that are downloaded. This is a list of selectors which are made up of regex and expand_event_list_from_field options. The regex should match the S3 object key in the SQS message, and the optional expand_event_list_from_field is the same as the global setting. If file_selectors is given, then any global expand_event_list_from_field value is ignored in favor of the ones specified in the file_selectors. Regexes use [RE2 syntax](https://pkg.go.dev/regexp/syntax). Files that don’t match one of the regexes will not be processed.
default: |
# Example: if you want to consume events that contain 'CloudTrail' in the S3 object key and apply parquet decoding to the events.
# - regex: '/CloudTrail/'
# decoding.codec.parquet.enabled: true
# decoding.codec.parquet.batch_size: 100
# decoding.codec.parquet.process_parallel: true
- name: region
type: text
title: "[SQS] Region"
multi: false
required: false
show_user: true
description: The name of the AWS region of the end point. If this option is given it takes precedence over the region name obtained from the queue_url value.
- name: fips_enabled
type: bool
title: Enable S3 FIPS
@@ -128,6 +179,23 @@ streams:
show_user: false
default: ""
description: Default region to use prior to connecting to region specific services/endpoints if no AWS region is set from environment variable, credentials or instance profile. If none of the above are set and no default region is set as well, `us-east-1` is used. A region, either from environment variable, credentials or instance profile or from this default region setting, needs to be set when using regions in non-regular AWS environments such as AWS China or US Government Isolated.
- name: event_parsing_script
type: yaml
title: Event Notification Parsing Script
multi: false
required: true
show_user: false
description: The JS script used to parse the custom format of SQS Event notifications.
default: |
function parse(notification) {
var evts = [];
var m = JSON.parse(notification);
var evt = new S3EventV2();
evt.SetS3BucketName(m.detail.bucket.name);
evt.SetS3ObjectKey(m.detail.object.key);
evts.push(evt);
return evts;
}
- name: proxy_url
type: text
title: Proxy URL
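The default event_parsing_script above reads only detail.bucket.name and detail.object.key from each notification, so an EventBridge-style message shaped like the hypothetical example below (shown as YAML for readability) is enough for it to emit an S3 event:

```yaml
# Hypothetical notification body; only detail.bucket.name and detail.object.key are used by the script.
detail:
  bucket:
    name: example-security-lake-bucket
  object:
    key: ext/example-source/region=us-east-1/accountId=123456789012/example.gz.parquet
```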
17 changes: 13 additions & 4 deletions packages/amazon_security_lake/docs/README.md
@@ -4,8 +4,9 @@ This [Amazon Security Lake](https://aws.amazon.com/security-lake/) integration h

Security Lake automates the collection of security-related log and event data from integrated AWS services and third-party services. It also helps you manage the lifecycle of data with customizable retention and replication settings. Security Lake converts ingested data into Apache Parquet format and a standard open-source schema called the Open Cybersecurity Schema Framework (OCSF). With OCSF support, Security Lake normalizes and combines security data from AWS and a broad range of enterprise security data sources.

The Amazon Security Lake integration currently supports only one mode of log collection:
The Amazon Security Lake integration can be used in two different modes to collect data:
- AWS S3 polling mode: Amazon Security Lake writes data to S3, and Elastic Agent polls the S3 bucket by listing its contents and reading new files.
- AWS S3 SQS mode: Amazon Security Lake writes data to S3, S3 sends a notification of a new object to SQS, the Elastic Agent receives the notification from SQS, and then reads the S3 object. Multiple agents can be used in this mode.
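
As a rough illustration, the two modes above differ mainly in whether the underlying aws-s3 input is given a bucket to poll or an SQS queue to subscribe to. A minimal sketch with placeholder values (the actual keys are rendered from this package's aws-s3.yml.hbs template):

```yaml
# S3 polling mode: Elastic Agent lists the bucket on an interval (placeholder values).
bucket_arn: "arn:aws:s3:::example-security-lake-bucket"
bucket_list_interval: 300s
---
# S3 SQS mode: Elastic Agent reads new-object notifications from a queue (placeholder values).
queue_url: "https://sqs.us-east-1.amazonaws.com/123456789012/example-security-lake-queue"
max_number_of_messages: 5
```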

## Compatibility

@@ -37,6 +38,7 @@ The Amazon Security Lake integration collects logs from both [Third-party servic
- For **Log and event sources**, choose which sources the subscriber is authorized to consume.
- For **Data access method**, choose **S3** to set up data access for the subscriber.
- For **Subscriber credentials**, provide the subscriber's **AWS account ID** and **external ID**.
- For **Notification details**, select **SQS queue**.
- Choose Create.
3. The above steps create and provide the details required to configure the Amazon Security Lake integration, such as the IAM role/AWS role ID, the external ID, and the queue URL.

@@ -48,11 +50,18 @@ The Amazon Security Lake integration collects logs from both [Third-party servic
3. Click on the "Amazon Security Lake" integration from the search results.
4. Click on the Add Amazon Security Lake Integration button to add the integration.
![Home Page](../img/home_page.png)
5. The integration currently only supports collecting logs via AWS S3.
6. While adding the integration, you have to configure the following details:
- bucket arn
5. By default, the "Collect logs via S3 Bucket" toggle is off and logs are collected via AWS SQS. In this mode, configure the following details:
- queue url
![Queue URL](../img/queue_url.png)
- collect logs via S3 Bucket toggled off
- role ARN
- external id
![Role ARN and External ID](../img/role_arn_and_external_id.png)

6. If you want to collect logs via AWS S3, provide the following details:
- bucket arn
- role ARN
- external id

**NOTE**:

6 changes: 3 additions & 3 deletions packages/amazon_security_lake/manifest.yml
@@ -1,7 +1,7 @@
format_version: "3.0.3"
name: amazon_security_lake
title: Amazon Security Lake
version: "1.4.1"
version: "1.5.0"
description: Collect logs from Amazon Security Lake with Elastic Agent.
type: integration
categories: ["aws", "security"]
@@ -62,8 +62,8 @@ policy_templates:
description: Collect logs from Amazon Security Lake instances.
inputs:
- type: aws-s3
title: Collect Amazon Security Lake logs via AWS S3
description: Collecting logs from Amazon Security Lake via AWS S3.
title: Collect Amazon Security Lake logs via AWS S3 or AWS SQS
description: Collecting logs from Amazon Security Lake via AWS S3 or AWS SQS.
owner:
github: elastic/security-service-integrations
type: elastic
5 changes: 5 additions & 0 deletions packages/auditd/changelog.yml
@@ -1,4 +1,9 @@
# newer versions go on top
- version: "3.20.1"
changes:
- description: "Preserve auditd.log.record_type and fallback to auditd.log.SYSCALL"
type: bugfix
link: https://github.com/elastic/integrations/pull/10829
- version: "3.20.0"
changes:
- description: "Allow @custom pipeline access to event.original without setting preserve_original_event."
@@ -1,3 +1,4 @@
type=SOCKADDR msg=audit(1666825569.818:23260118): saddr=02000000000000000000000000000000SADDR={ saddr_fam=inet laddr=0.0.0.0 lport=0 }
type=SOCKADDR msg=audit(1666825569.435:23260106): saddr=0A00DE9900000000000000000000000000002a02cf40000000000000SADDR={ saddr_fam=inet6 laddr=2a02:cf40:: lport=56985 }
type=SOCKADDR msg=audit(1666825568.865:23260105): saddr=0100SADDR={ saddr_fam=local sockaddr len too short }
node=praorem001 type=SYSCALL msg=audit(1723109482.048:4981103): arch=c000003e syscall=87 success=yes exit=0 a0=7f1118081d10 a1=7f1118081d10 a2=242 a3=180 items=2 ppid=560201 pid=560348 auid=1561577791 uid=2012 gid=2007 euid=2012 suid=2012 fsuid=2012 egid=2007 sgid=2007 fsgid=2007 tty=(none) ses=126 comm="httpd" exe="/app/ogc101/app/dllogc/product/13.5.0/mw_100/ohs/bin/httpd" key="delete"ARCH=x86_64 SYSCALL=unlink AUID="na-uoradbdba03" UID="dllogc" GID="oinstall" EUID="dllogc" SUID="dllogc" FSUID="dllogc" EGID="oinstall" SGID="oinstall" FSGID="oinstall"