Add config option to deploy custom elastic-agents as test services #786
Changes from 28 commits
Original file line number | Diff line number | Diff line change |
---|---|---|
@@ -64,6 +64,7 @@ or the data stream's level:

`<service deployer>` - a name of the supported service deployer:
* `docker` - Docker Compose
* `agent` - Custom `elastic-agent` with Docker Compose
* `k8s` - Kubernetes
* `tf` - Terraform
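The deployer names above can be thought of as a simple dispatch table. The following is an illustrative sketch in Go, not elastic-package's actual factory code; the function name `deployerDescription` is hypothetical, while the deployer names come from the list above:

```go
package main

import "fmt"

// deployerDescription maps a service deployer name to what it runs.
// Illustrative sketch only; elastic-package constructs concrete
// deployer types rather than returning descriptions.
func deployerDescription(name string) (string, error) {
	switch name {
	case "docker":
		return "Docker Compose", nil
	case "agent":
		return "Custom elastic-agent with Docker Compose", nil
	case "k8s":
		return "Kubernetes", nil
	case "tf":
		return "Terraform", nil
	default:
		return "", fmt.Errorf("unsupported service deployer: %s", name)
	}
}

func main() {
	desc, err := deployerDescription("agent")
	if err != nil {
		panic(err)
	}
	fmt.Println(desc) // Custom elastic-agent with Docker Compose
}
```

An unknown name yields an error, mirroring how an unsupported `<service deployer>` value would be rejected.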
@@ -106,6 +107,58 @@ volumes:

```
  mysqldata:
```

### Agent service deployer

When using the Agent service deployer, the `elastic-agent` provided by the stack
will not be used. Instead, an agent is deployed as a Docker Compose service named `docker-custom-agent`,
whose base configuration is provided [here](../../internal/install/_static/docker-custom-agent-base.yml).
This configuration is merged with the one provided in the `custom-agent.yml` file.
This is useful if you need capabilities different from those provided by the
`elastic-agent` used by the `elastic-package stack` command.

`custom-agent.yml`
```
version: '2.3'
services:
  docker-custom-agent:
    pid: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
    user: root
```
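Docker Compose performs this merge itself when the deployer passes both YAML files to the same project: nested mappings are combined key by key, and values from the override file win. A simplified sketch of that merge rule using plain Go maps (not elastic-package's code, which delegates the merge to Compose):

```go
package main

import "fmt"

// mergeCompose recursively merges override into base, the way multiple
// docker-compose files are combined: nested maps are merged key by key,
// and scalar values from the override win. Simplified illustration only.
func mergeCompose(base, override map[string]any) map[string]any {
	out := map[string]any{}
	for k, v := range base {
		out[k] = v
	}
	for k, v := range override {
		if bv, ok := out[k].(map[string]any); ok {
			if ov, ok := v.(map[string]any); ok {
				out[k] = mergeCompose(bv, ov)
				continue
			}
		}
		out[k] = v
	}
	return out
}

func main() {
	base := map[string]any{
		"services": map[string]any{
			"docker-custom-agent": map[string]any{
				"image":    "docker.elastic.co/beats/elastic-agent-complete:8.2.0",
				"hostname": "docker-custom-agent",
			},
		},
	}
	override := map[string]any{
		"services": map[string]any{
			"docker-custom-agent": map[string]any{
				"pid":  "host",
				"user": "root",
			},
		},
	}
	merged := mergeCompose(base, override)
	svc := merged["services"].(map[string]any)["docker-custom-agent"].(map[string]any)
	fmt.Println(svc["image"], svc["user"]) // keys from both files survive in the merged service
}
```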

This will result in an agent configuration such as:

```
version: '2.3'
services:
  docker-custom-agent:
    hostname: docker-custom-agent
    image: "docker.elastic.co/beats/elastic-agent-complete:8.2.0"
    pid: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
    user: root
    healthcheck:
      test: "elastic-agent status"
      retries: 180
      interval: 1s
    environment:
      FLEET_ENROLL: "1"
      FLEET_INSECURE: "1"
      FLEET_URL: "http://fleet-server:8220"
```

> **Review discussion:**
>
> Umm, this makes me think that we might need some way to pass variables to these configs. These connection settings depend on the stack and how it is started. For example in #789 I am changing these options, and this would break scenarios with these environment variables. An option can be to create a different configuration file, expected for example in […] This way every package using this deployer only needs to configure the minimal relevant set of settings. And we can control the settings needed for enrollment, or other settings that may be dependent of […] Another middle-ground option could be to use […]
>
> It looks like you're referring to this issue (Extend "profiles" with local patches). Do you think that we need a proposal to sketch the final look and then iterate on that? Maybe we should focus on this specific issue. It might be tricky if we want "patches" to be backward compatible with older stacks.
>
> I would consider this deployer a separate thing from possible general patches for profiles or the stack subcommand. I think that something like this "agent" deployer has value in itself, even if later we consider more advanced configurations for profiles or the stack subcommand. The problem I see with general local patches for […] I think it is fine to look for a way to start/patch specialized agents at test time as is being done in this PR. These agents are disposable, and developers have more awareness of when they are being started. Packages could also select the version of the agent to use, limiting the problems of supporting multiple versions. Variants could help to test with multiple versions or configurations if there are differences. And if some day we also have stack-level patches, I think it would be OK if those patches are different from the ones used for this "agent" deployer; in the end they are different things.
>
> What about unpatching? Did you plan for this too, or is it like the Kubernetes agent: once installed, it stays there?
>
> No need to unpatch. The current implementation starts this patched agent as a Docker Compose service and destroys it on tear down, so it doesn't stay. I think this is a good approach. (If I understand it correctly; please @marc-gr correct me if I am wrong 🙂)
>
> It is correct 👍

And in the test config:

```
data_stream:
  vars:
    # ...
```

### Terraform service deployer

@@ -0,0 +1,15 @@

```
version: "2.3"
services:
  docker-custom-agent:
    image: "${ELASTIC_AGENT_IMAGE_REF}"
    healthcheck:
      test: "elastic-agent status"
      retries: 180
      interval: 1s
    hostname: docker-custom-agent
    environment:
      - FLEET_ENROLL=1
      - FLEET_INSECURE=1
      - FLEET_URL=http://fleet-server:8220
    volumes:
      - ${SERVICE_LOGS_DIR}:/tmp/service_logs/
```
@@ -0,0 +1,151 @@

```
// Copyright Elasticsearch B.V. and/or licensed to Elasticsearch B.V. under one
// or more contributor license agreements. Licensed under the Elastic License;
// you may not use this file except in compliance with the Elastic License.

package servicedeployer

import (
	_ "embed"
	"fmt"

	"github.com/pkg/errors"

	"github.com/elastic/elastic-package/internal/compose"
	"github.com/elastic/elastic-package/internal/configuration/locations"
	"github.com/elastic/elastic-package/internal/docker"
	"github.com/elastic/elastic-package/internal/files"
	"github.com/elastic/elastic-package/internal/install"
	"github.com/elastic/elastic-package/internal/kibana"
	"github.com/elastic/elastic-package/internal/logger"
	"github.com/elastic/elastic-package/internal/stack"
)

const dockerCustomAgentName = "docker-custom-agent"

// CustomAgentDeployer knows how to deploy a custom elastic-agent defined via
// a Docker Compose file.
type CustomAgentDeployer struct {
	cfg string
}

// NewCustomAgentDeployer returns a new instance of a CustomAgentDeployer.
func NewCustomAgentDeployer(cfgPath string) (*CustomAgentDeployer, error) {
	return &CustomAgentDeployer{
		cfg: cfgPath,
	}, nil
}

// SetUp sets up the service and returns any relevant information.
func (d *CustomAgentDeployer) SetUp(inCtxt ServiceContext) (DeployedService, error) {
	logger.Debug("setting up service using Docker Compose service deployer")

	appConfig, err := install.Configuration()
	if err != nil {
		return nil, errors.Wrap(err, "can't read application configuration")
	}

	kibanaClient, err := kibana.NewClient()
	if err != nil {
		return nil, errors.Wrap(err, "can't create Kibana client")
	}

	stackVersion, err := kibanaClient.Version()
	if err != nil {
		return nil, errors.Wrap(err, "can't read Kibana injected metadata")
	}

	env := append(
		appConfig.StackImageRefs(stackVersion).AsEnv(),
		fmt.Sprintf("%s=%s", serviceLogsDirEnv, inCtxt.Logs.Folder.Local),
	)

	ymlPaths, err := d.loadComposeDefinitions()
	if err != nil {
		return nil, err
	}

	service := dockerComposeDeployedService{
		ymlPaths: ymlPaths,
		project:  "elastic-package-service",
		sv: ServiceVariant{
			Name: dockerCustomAgentName,
			Env:  env,
		},
	}

	outCtxt := inCtxt

	p, err := compose.NewProject(service.project, service.ymlPaths...)
	if err != nil {
		return nil, errors.Wrap(err, "could not create Docker Compose project for service")
	}

	// Verify the Elastic stack network
	err = stack.EnsureStackNetworkUp()
	if err != nil {
		return nil, errors.Wrap(err, "Elastic stack network is not ready")
	}

	// Clean service logs
	err = files.RemoveContent(outCtxt.Logs.Folder.Local)
	if err != nil {
		return nil, errors.Wrap(err, "removing service logs failed")
	}

	inCtxt.Name = dockerCustomAgentName
	serviceName := inCtxt.Name
	opts := compose.CommandOptions{
		Env:       env,
		ExtraArgs: []string{"--build", "-d"},
	}
	err = p.Up(opts)
	if err != nil {
		return nil, errors.Wrap(err, "could not boot up service using Docker Compose")
	}

	// Connect service network with stack network (for the purpose of metrics collection)
	err = docker.ConnectToNetwork(p.ContainerName(serviceName), stack.Network())
	if err != nil {
		return nil, errors.Wrapf(err, "can't attach service container to the stack network")
	}

	err = p.WaitForHealthy(opts)
	if err != nil {
		processServiceContainerLogs(p, compose.CommandOptions{
			Env: opts.Env,
		}, outCtxt.Name)
		return nil, errors.Wrap(err, "service is unhealthy")
	}

	// Build service container name
	outCtxt.Hostname = p.ContainerName(serviceName)

	logger.Debugf("adding service container %s internal ports to context", p.ContainerName(serviceName))
	serviceComposeConfig, err := p.Config(compose.CommandOptions{Env: env})
	if err != nil {
		return nil, errors.Wrap(err, "could not get Docker Compose configuration for service")
	}

	s := serviceComposeConfig.Services[serviceName]
	outCtxt.Ports = make([]int, len(s.Ports))
	for idx, port := range s.Ports {
		outCtxt.Ports[idx] = port.InternalPort
	}

	// Shortcut to first port for convenience
	if len(outCtxt.Ports) > 0 {
		outCtxt.Port = outCtxt.Ports[0]
	}
```

> **Comment on lines +136 to +138:**
>
> nit: I don't remember if this condition is still required.
>
> If we keep it, then the port can be referenced from the test config, and that can be useful in some cases; not a hard requirement though, so we can remove it if there is any concern.

```
	outCtxt.Agent.Host.NamePrefix = inCtxt.Name
	service.ctxt = outCtxt
	return &service, nil
}

func (d *CustomAgentDeployer) loadComposeDefinitions() ([]string, error) {
	locationManager, err := locations.NewLocationManager()
	if err != nil {
		return nil, errors.Wrap(err, "can't locate Docker Compose file for Custom Agent deployer")
	}
	return []string{locationManager.DockerCustomAgentDeployerYml(), d.cfg}, nil
}
```
@@ -0,0 +1,3 @@

```
dependencies:
  ecs:
    reference: [email protected]
```
@@ -0,0 +1,6 @@

```
# newer versions go on top
- version: "999.999.999"
  changes:
    - description: Initial draft of the package
      type: enhancement
      link: https://github.com/elastic/integrations/pull/1
```
@@ -0,0 +1,7 @@

```
services:
  docker-custom-agent:
    pid: host
    cap_add:
      - AUDIT_CONTROL
      - AUDIT_READ
    user: root
```
@@ -0,0 +1,5 @@

```
data_stream:
  vars:
    audit_rules:
      - "-a always,exit -F arch=b64 -S execve,execveat -k exec"
    preserve_original_event: true
```

> **Review comment:** As the `custom-agent.yml` will be part of a package, we need to cover it with package-spec. You will need to open one more PR.