
could not fetch for browser source job #981

Open

fabry00 opened this issue Nov 8, 2024 · 4 comments
Labels
question Further information is requested

Comments


fabry00 commented Nov 8, 2024

I have pushed two very simple monitors: a lightweight monitor, which works fine, and a journey monitor:

import { journey, step, monitor, expect } from "@elastic/synthetics"

journey("test", ({ page, params, request }) => {
  const monitorSettings = { id: "test", tags: [] }
  console.log("Monitor settings:", monitorSettings)
  monitor.use(monitorSettings)

  step("Call API", async () => {
    const response = await request.get(`https://myur.net`, {})
    expect(response.status()).toBe(200)
  })
})

If I execute it locally using npx @elastic/synthetics journeys, the test succeeds:

Journey: test
Monitor settings: { id: 'test', tags: [] }
   ✓  Step: 'Call API' succeeded (170 ms)

 1 passed (1164 ms) 

In Elasticsearch, however, it fails with the following error and no further details:

journey did not finish executing, 0 steps ran (attempt: 2): could not fetch for browser source job: setting up project dir failed: exit status 255

(Screenshots: the Kibana Monitors view showing the failing journey monitor, and the successful local execution output.)

My Elasticsearch/Kibana/Fleet version is "8.15.0", while my @elastic/synthetics version is "^1.16.0".

vigneshshanmugam (Member) commented Nov 8, 2024

I just tested a similar script on our Elastic managed global locations, and it runs without any issues.

(Screenshot: the monitor running successfully on Elastic managed locations.)

I believe you are running this test on a Synthetics private location. Make sure you have set up the monitors against the elastic-agent-complete image with the required flags, as suggested in the docs: https://www.elastic.co/guide/en/observability/current/synthetics-private-location.html#synthetics-private-location-connect

In addition, have you tried upgrading the stack to 8.15.3 to see if that helps? Thanks.
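For reference, the linked docs run a private location roughly like the sketch below (hedged: the image tag is taken from this thread, and the Fleet URL and enrollment token are placeholders you must substitute). The two --cap-add flags are the capabilities the thread later turns on:

```shell
# Sketch of the documented docker run for a Synthetics private location.
# {fleet-server-host-url} and {enrollment-token} are placeholders.
docker run \
  --env FLEET_ENROLL=1 \
  --env FLEET_URL={fleet-server-host-url} \
  --env FLEET_ENROLLMENT_TOKEN={enrollment-token} \
  --cap-add=NET_RAW \
  --cap-add=SETUID \
  --rm docker.elastic.co/beats/elastic-agent-complete:8.15.3
```

This is a deployment config fragment, not something to copy verbatim; consult the docs page above for the current flags for your stack version.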

vigneshshanmugam added the question (Further information is requested) label Nov 8, 2024
fabry00 (Author) commented Nov 9, 2024

@vigneshshanmugam yes, I am running Synthetics private locations and have deployed elastic-agent-complete on OpenShift.
Unfortunately I cannot set the following capabilities in the Fleet deployment, since they are restricted by our security policy (these come from the link you shared):

  securityContext:
    capabilities:
      add:
        - NET_RAW
        - SETUID

Are those capabilities mandatory for synthetics monitors to work? The lightweight monitors work perfectly in the same Fleet pod.

Also, I've just upgraded to 8.15.3 and I get the same error.

vigneshshanmugam (Member) commented

@fabry00 I believe the error you are seeing is related to file permissions: when running browser monitors, we set up a directory to install the configured monitors and then execute them. See elastic/beats#30869.

  • Could you share how you are running the private locations on OpenShift? Have you configured a user ID for this container?
  • Is there enough memory on the instance you are running?
  • Are you seeing the agent in a degraded status? Could you share the elastic-agent logs?

fabry00 (Author) commented Nov 12, 2024

@vigneshshanmugam here is my Pod YAML:

kind: Pod
apiVersion: v1
metadata:
  name: fleetserver-75f8bbdd4c-l269w
spec:
  restartPolicy: Always
  serviceAccountName: monitoring
  securityContext:
    seLinuxOptions:
      level: 's0:c71,c5'
    fsGroup: 1004980000
    seccompProfile:
      type: RuntimeDefault
  containers:
    - resources:
        limits:
          cpu: '2'
          memory: 2Gi
        requests:
          cpu: 100m
          memory: 1Gi
      terminationMessagePath: /dev/termination-log
      name: fleetserver
      env:
        - name: FLEET_SERVER_ENABLE
          value: '1'
        - name: FLEET_SERVER_ELASTICSEARCH_HOST
          value: 'https://client:9200'
        - name: FLEET_SERVER_INSECURE_HTTP
          value: '0'
        - name: FLEET_SERVER_CERT
          value: /usr/share/fleet/config/certs/elastic-certificate.pem
        - name: FLEET_SERVER_CERT_KEY
          value: /usr/share/fleet/config/certs/elastic-certificate.key
        - name: FLEET_SERVER_POLICY_ID
          value: fleet-server-policy-dev
        - name: FLEET_SERVER_HOST
          value: 0.0.0.0
        - name: FLEET_SERVER_PORT
          value: '8220'
        - name: FLEET_ENROLL
          value: '1'
        - name: FLEET_URL
          value: 'https://fleetserver-dev.monitoring-dev.svc.cluster.local:8220'
        - name: FLEET_INSECURE
          value: 'false'
        - name: ELASTICSEARCH_HOST
          value: 'https://client:9200'
        - name: ELASTICSEARCH_USERNAME
          valueFrom:
            secretKeyRef:
              name: elk-admin-user
              key: username
        - name: ELASTICSEARCH_PASSWORD
          valueFrom:
            secretKeyRef:
              name: elk-admin-user
              key: password
        - name: ELASTICSEARCH_CA
          value: /usr/share/fleet/config/certs/elastic-ca-certificate.pem
        - name: ELASTIC_AGENT_KUBERNETES_RESOURCES_NODE_ENABLED
          value: 'false'
        - name: ELASTIC_AGENT_LOG_LEVEL
          value: info
        - name: METRICBEAT_MODULES
      securityContext:
        capabilities:
          drop:
            - ALL
        runAsUser: 1004980000
        runAsNonRoot: true
        allowPrivilegeEscalation: false
      volumeMounts:
        - name: elastic-certificate-pem
          readOnly: true
          mountPath: /usr/share/fleet/config/certs
        - name: elastic-agent-cert
          readOnly: true
          mountPath: /usr/share/agent/config/certs
        - name: fleetserver-certificate
          readOnly: true
          mountPath: /usr/share/fleet-server/certs
        - name: service-ca-cert
          readOnly: true
          mountPath: /usr/share/agent/config/service-ca
        - name: ca-cert
          readOnly: true
          mountPath: /usr/share/agent/config/certs
      terminationMessagePolicy: File
      envFrom:
        - secretRef:
            name: elk-fleet-server
        - secretRef:
            name: elk-agent
      image: 'elastic-agent-complete:8.15.3'
  serviceAccount: monitoring
  volumes:
    - name: elastic-certificate-pem
      secret:
        secretName: elk-elastic-certs
        defaultMode: 420
    - name: elastic-agent-cert
      secret:
        secretName: elastic-agent-cert
        defaultMode: 420
    - name: fleetserver-certificate
      secret:
        secretName: fleetserver-cert
        defaultMode: 420
    - name: service-ca-cert
      configMap:
        name: service-ca-cert
        defaultMode: 420
    - name: ca-cert
      configMap:
        name: ca-cert
        defaultMode: 420

To answer your questions:

  • Could you share how you are running the private locations on OpenShift? Have you configured a user ID for this container? Yes, we have to configure the user ID for the container, and the user ID range is fixed by the OpenShift administrators.
  • Is there enough memory on the instance you are running? Yes, the metrics reported by the agent to ELK show: CPU 4.00 %, memory 409 MB.
  • Are you seeing the agent in a degraded status? No, there is no degradation.
  • Could you share the elastic-agent logs? I won't be able to upload the logs anywhere due to restrictions, but if you tell me what you are looking for, I can extract a portion of the logs and add them here as a comment.

Deploying the agent as a Docker container in a VM, it works.
