
Problem with great amount of entries. #215

Closed
nudgegoonies opened this issue Jan 15, 2020 · 14 comments

@nudgegoonies

nudgegoonies commented Jan 15, 2020

We have another problem that occurs with 1.2.7 through 1.2.11. We mirror a lot of images from Docker Hub, and the YAML file with the images/tags is large. The problem is that lstags stops working after a certain number of images/tags. We enabled trace logging, but it does not show anything useful: the batches all work fine and the state is shown, but during "[PULL/PUSH] PUSHING" there is a point where the time until the next "[PULL/PUSH] PUSHING" line increases heavily from image to image, until lstags waits forever on some image. We looked at the process: it has not crashed, but it no longer does anything.

We found a workaround by splitting the Docker Hub YAML file, but there seems to be some limit on the number of images, tags, or total size beyond which lstags stops working.

We also found that the lstags process does not terminate when run from gitlab-ci and killed through a pipeline timeout; we found hanging lstags processes on the runner. Sending them a regular SIGTERM does terminate them, though.

@nudgegoonies
Author

I investigated a bit more. lstags does not process the YAML file entries strictly sequentially, right? In the log I see that the last image/tags entry in the sorted YAML is not pushed last: two more entries that come before it alphabetically in the YAML file are processed after it.

If I remember correctly, we have never actually missed an image after an lstags sync. So I think the problem is in exiting once lstags is finished. That would also explain the many hanging lstags processes.

@ivanilves
Owner

Could you please try:

  -w, --wait-between=         Time to wait between batches of requests (incl. pulls and pushes) (default: 0) [$WAIT_BETWEEN]

Yes, lstags processes YAML file entries in an async, non-sequential manner. 😉
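The out-of-order pushes in the log are exactly what concurrent workers produce. A toy shell sketch (not lstags internals, just an illustration of the effect) of why the last YAML entry need not finish last:

```shell
# Toy illustration, not lstags code: three backgrounded "pushes" with
# arbitrary delays complete in arbitrary order, so log order need not
# match the (sorted) YAML order.
results=$(
  for tag in 6.8.1 6.8.2 6.8.3; do
    ( sleep "0.$((RANDOM % 5))"; echo "pushed ${tag}" ) &
  done
  wait
)
echo "${results}"
```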

So i think the problem is exiting when lstags is finished.

This is really interesting. For research's sake, could you provide your YAML file here (if it's not confidential, of course!)?

@nudgegoonies
Author

We have now tried 1.2.12 and have the same problems with authentication.

I will check whether we can provide the yaml file.

@ivanilves
Owner

We have now tried 1.2.12 and have the same problems with authentication.

And rolling back to 1.2.10 fixes authentication again, right?

What kind of authentication are you using with Artifactory?

BASIC or TOKEN?

@nudgegoonies
Author

Sorry for the long delay. We use BASIC auth with username/password, not the Artifactory token.
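For reference, a BASIC auth Authorization header is just `Basic` plus the base64 of `user:password` (RFC 7617); Docker's config.json stores the same base64 string in its "auths" section, which is why decoding it reveals the raw credentials. A quick sketch with placeholder credentials:

```shell
# Build a Basic auth header by hand (placeholder credentials, not real ones).
# Docker's config.json keeps the identical base64("user:password") value
# under "auths", so decoding that entry exposes the password.
user='alice'
password='s3cret'
auth_header="Basic $(printf '%s:%s' "${user}" "${password}" | base64)"
echo "${auth_header}"
```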

@ivanilves
Owner

This is strange indeed. There have been no changes on the BASIC auth side of lstags since almost the very beginning 🤔

Call it a blunt, naive suggestion, but from the machine/VM/container you are running lstags on, could you please ensure you can actually do docker pull / docker push to your Artifactory?

@nudgegoonies
Author

Yes, I can pull and push images normally.

@ivanilves
Owner

ivanilves commented Feb 6, 2020

Just released a new version https://github.com/ivanilves/lstags/releases/tag/v1.2.13 with better tracing.

Could you please run it and share the [improved] tracing output here? 🙏

@nudgegoonies
Author

Sorry for the delay.

I cherry-picked your two better-tracing commits onto the latest working version, 1.2.10, with the following output. My comments about the redacted parts appear between the ### markers:

<STATE>      <DIGEST>                                      <(local) ID>    <Created At>              <IMAGE>:<TAG>
ABSENT       sha256:8ab8291e47460c686529dcbc1efedeb48      n/a             2018-07-20T08:07:57       docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.2
ABSENT       sha256:220a9a988288baf446e36d74aa93eb747      n/a             2018-12-17T22:53:36       docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.4
ABSENT       sha256:f59f7936d018e9d2329dfa0705b305e2c      n/a             2019-01-24T12:31:25       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.0
ABSENT       sha256:789e22d8b2a7ea1aacc3992da0157370e      n/a             2019-02-13T18:13:42       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
ABSENT       sha256:9c5ef68d57c747a0277f2276493057c22      n/a             2019-03-06T16:20:44       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.2
PRESENT      sha256:5021b5feb63f642565d19213504d86a38      170c6c1bc829    2019-06-18T15:21:56       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.1
PRESENT      sha256:316be55cedc4a1d301e57344ebed8424a      e73d08f153d8    2019-07-24T17:30:47       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
PRESENT      sha256:eadd80cfc04c7ae59f050eb23ae40931f      5756084a46f1    2019-08-29T21:12:58       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.3
-
|@URL: https://external.docker.mamdev.server.lan/v2/elasticsearch/elasticsearch-oss/tags/list
|@REQ-HEADER: Authorization                            = [Bearer ### exactly the password part to the right of the ':' in the base64-decoded auth entry from config.json ###]
|@REQ-HEADER: Accept                                   = [application/json application/vnd.docker.distribution.manifest.v2+json]
|@RESP-HEADER: Connection                               = [keep-alive]
|@RESP-HEADER: Server                                   = [Artifactory/6.16.0]
|@RESP-HEADER: X-Artifactory-Id                         = [################:-########:###########:-####]
|@RESP-HEADER: Docker-Distribution-Api-Version          = [registry/2.0]
|@RESP-HEADER: Date                                     = [Wed, 12 Feb 2020 14:04:10 GMT]
|@RESP-HEADER: Content-Type                             = [application/json]
|--- BODY BEGIN ---
|{
|  "name" : "elasticsearch/elasticsearch-oss",
|  "tags" : [ "6.3.2", "6.5.4", "6.6.0", "6.6.1", "6.6.2", "6.8.1", "6.8.2", "6.8.3" ]
|}
|--- BODY END ---

This is the output from your non-working version 1.2.13. Again, my comments about the redacted parts appear between the ### markers:

<STATE>      <DIGEST>                                      <(local) ID>    <Created At>              <IMAGE>:<TAG>
ABSENT       sha256:8ab8291e47460c686529dcbc1efedeb48      n/a             2018-07-20T08:07:57       docker.elastic.co/elasticsearch/elasticsearch-oss:6.3.2
ABSENT       sha256:220a9a988288baf446e36d74aa93eb747      n/a             2018-12-17T22:53:36       docker.elastic.co/elasticsearch/elasticsearch-oss:6.5.4
ABSENT       sha256:f59f7936d018e9d2329dfa0705b305e2c      n/a             2019-01-24T12:31:25       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.0
ABSENT       sha256:789e22d8b2a7ea1aacc3992da0157370e      n/a             2019-02-13T18:13:42       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.1
ABSENT       sha256:9c5ef68d57c747a0277f2276493057c22      n/a             2019-03-06T16:20:44       docker.elastic.co/elasticsearch/elasticsearch-oss:6.6.2
PRESENT      sha256:5021b5feb63f642565d19213504d86a38      170c6c1bc829    2019-06-18T15:21:56       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.1
PRESENT      sha256:316be55cedc4a1d301e57344ebed8424a      e73d08f153d8    2019-07-24T17:30:47       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.2
PRESENT      sha256:eadd80cfc04c7ae59f050eb23ae40931f      5756084a46f1    2019-08-29T21:12:58       docker.elastic.co/elasticsearch/elasticsearch-oss:6.8.3
-
|@URL: https://external.docker.mamdev.server.lan/v2/elasticsearch/elasticsearch-oss/tags/list
|@REQ-HEADER: Authorization                            = [Bearer ### Broken base64 string. That is the readable part: {"typ":"JWT","alg":"RS256","kid":"#:#:#:#:#:#:#:#:#:#:#:#"} ###]
|@REQ-HEADER: Accept                                   = [application/json application/vnd.docker.distribution.manifest.v2+json]
|@RESP-HEADER: Connection                               = [keep-alive]
|@RESP-HEADER: Server                                   = [Artifactory/6.16.0]
|@RESP-HEADER: X-Artifactory-Id                         = [################:-########:###########:-####]
|@RESP-HEADER: Www-Authenticate                         = [Basic realm="Artifactory Realm"]
|@RESP-HEADER: Date                                     = [Wed, 12 Feb 2020 13:27:43 GMT]
|@RESP-HEADER: Content-Type                             = [application/json;charset=ISO-8859-1]
|@RESP-HEADER: Content-Length                           = [101]
|--- BODY BEGIN ---
|{
|  "errors" : [ {
|    "status" : 401,
|    "message" : "Token failed verification: signature"
|  } ]
|}
|--- BODY END ---

I used the command-line base64 tool to decode the string and got an "invalid input" error. But to me the JSON looks complete; at least there is a { and a }.
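The "invalid input" error is consistent with the string being a JWT segment: JWTs are base64url-encoded with the '=' padding stripped (RFC 7515), and plain `base64 -d` typically rejects unpadded input even when the data is intact. A sketch of re-padding and decoding, using an illustrative header value rather than the real token from the trace:

```shell
# Decode a base64url JWT segment whose padding was stripped.
# The JSON below is an illustrative stand-in, not the real token.
json='{"typ":"JWT","alg":"RS256","kid":"x"}'
segment=$(printf '%s' "${json}" | base64 | tr '+/' '-_' | tr -d '=')

# Map base64url characters back to standard base64 and restore padding:
b64=$(printf '%s' "${segment}" | tr '_-' '/+')
case $(( ${#b64} % 4 )) in
  2) b64="${b64}==" ;;
  3) b64="${b64}=" ;;
esac
decoded=$(printf '%s' "${b64}" | base64 -d)
echo "${decoded}"
```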

Note also the missing Docker-Distribution-Api-Version header and the added Www-Authenticate header.

@nudgegoonies
Author

I think our recent conversation belongs more in #214.

@ivanilves
Owner

True! Moved to #214.

@ivanilves
Owner

Hi @nudgegoonies

I would like to ask you whether the original issue, "Problem with great amount of entries", is still a thing?

Thank U!

@ivanilves ivanilves added this to the v1.3 milestone Mar 22, 2020
@ivanilves ivanilves self-assigned this Mar 22, 2020
@nudgegoonies
Author

@ivanilves We haven't experienced hanging lstags processes anymore after syncing a large number of entries.

@ivanilves
Owner

Good, closing 😄
