Freeze fixes and v1 kludges #2545

Open
wants to merge 3 commits into base: criu-dev

Conversation

kolyshkin
Contributor

@kolyshkin kolyshkin commented Dec 13, 2024

1. freeze_processes: fix logic

There are a few issues with the freeze_processes logic:

  1. Commit 9fae23f grossly (by 1000x) miscalculated the number of
    attempts required, as a result, we are seeing something like this:

(00.000340) freezing processes: 100000 attempts with 100 ms steps
(00.000351) freezer.state=THAWED
(00.000358) freezer.state=FREEZING
(00.100446) freezer.state=FREEZING
...close to 100 lines skipped...
(09.915110) freezer.state=FREEZING
(10.000432) Error (criu/cr-dump.c:1467): Timeout reached. Try to interrupt: 0
(10.000563) freezer.state=FREEZING

For 10s with 100ms steps we only need 100 attempts, not 100000 (a sketch of the corrected calculation follows this description).

  2. When the timeout is hit, the "failed to freeze cgroup" error is not
    printed, and the log_unfrozen_stacks is not called either.

  3. The nanosleep at the last iteration is useless (this was hidden by
    issue 1 above, as the timeout was hit first).

Fix all these.

While at it,

  4. Amend the error message with the number of attempts, sleep duration,
    and timeout.

  5. Modify the "freezing cgroup" debug message to be in sync with the
    above error.

    Was:

    freezing processes: 100000 attempts with 100 ms steps

    Now:

    freezing cgroup some/name: 100 x 100ms attempts, timeout: 10s

2. freeze_processes: implement kludges for cgroup v1

Cgroup v1 freezer has always been problematic, failing to freeze a
cgroup.

In runc, we have implemented a few kludges to increase the chance of
succeeding, but those are used when runc freezes a cgroup for its own
purposes (for "runc pause" and to modify device properties for cgroup
v1).

When criu is used, it fails to freeze a cgroup from time to time
(see opencontainers/runc#4273 and opencontainers/runc#4457). Let's try adding
kludges similar to the ones in runc.

Alas, I have absolutely no way to test this, so please review carefully.
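
For reference, a minimal sketch of the corrected attempt calculation and the new debug message from the first commit (a sketch only, built from the identifiers quoted in the review snippets below; the actual patch may differ):

	static const unsigned long step_ms = 100;
	/* opts.timeout is in seconds; for the default 10s timeout this
	 * gives 10 * 1000 / 100 = 100 attempts, not 100000. */
	unsigned long nr_attempts = (opts.timeout * 1000) / step_ms;

	pr_debug("freezing cgroup %s: %lu x %lums attempts, timeout: %us\n",
		 opts.freeze_cgroup, nr_attempts, step_ms, opts.timeout);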

criu/seize.c (resolved)
criu/seize.c (outdated, resolved)
criu/seize.c (outdated)
	 */
	switch (i%32) {
		case 30:
			freezer_write_state(fd, THAWED);
Member

I think it is better to do that before nanosleep, otherwise we waste one time slice.

Contributor Author

We don't waste anything; we just spend one time slice in the THAWED state, letting the processes run for a while, in the hope that this changes their state to one in which they have a better chance to freeze.

The only "wasteful" check is the get_freezer_state call after this, since it will return THAWED, but I don't consider it heavy enough to be worth optimizing out at the cost of making the code more complicated than it should be.

Or, the code can be changed like this, to be more straightforward in what we're trying to do here:

@@ -620,10 +621,11 @@ static int freeze_processes(void)
                        switch (i%32) {
                                case 30:
                                        freezer_write_state(fd, THAWED);
+                                       nanosleep(&req, NULL);
+                                       freezer_write_state(fd, FROZEN);
                                        break;
                                case 9:
                                case 20:
-                               case 31:
                                        freezer_write_state(fd, FROZEN);
                                        break;
                        }

Member

			nanosleep(&req, NULL); <---- we should not sleep here if we are going to thaw the cgroup.

			if (cgroup_v2)
				continue;

			/* As per older kernel docs (freezer-subsystem.txt before

Contributor Author

Yes, my bad. Moved the nanosleep to before checking the status -- the difference is that the minimum sleep time is now 100ms (was 0). Not sure if it makes any practical difference. If it does, we can do

if (i != 0)
	nanosleep(&req, NULL);

Member

You can do 1M syscalls in 100ms. We probably need to reconsider this timeout. In many cases, a full container dump should take less time than that.

Let's do something like this:

+static void cgroupv1_freezer_kludges(int fd, int i, const struct timespec *req)
+{
+       /* As per older kernel docs (freezer-subsystem.txt before
+        * the kernel commit ef9fe980c6fcc1821), if FREEZING is seen,
+        * userspace should either retry or thaw. While current
+        * kernel cgroup v1 docs no longer mention a need to retry,
+        * even recent kernels can't reliably freeze a cgroup v1.
+        *
+        * Let's keep asking the kernel to freeze from time to time.
+        * In addition, do occasional thaw/sleep/freeze.
+        *
+        * This is still a game of chances (the real fix belongs to the kernel)
+        * but these kludges might improve the probability of success.
+        *
+        * Cgroup v2 does not have this problem.
+        */
+       switch (i % 32) {
+       case 9:
+       case 20:
+               freezer_write_state(fd, FROZEN);
+               break;
+       case 31:
+               freezer_write_state(fd, THAWED);
+               nanosleep(req, NULL);
+               freezer_write_state(fd, FROZEN);
+               break;
+       }
+}
+
 static int freeze_processes(void)
 {
        int fd, exit_code = -1;
        enum freezer_state state = THAWED;
 
        static const unsigned long step_ms = 100;
+       /* Since opts.timeout is in seconds, multiply it by 1000 to convert to milliseconds. */
        unsigned long nr_attempts = (opts.timeout * 1000) / step_ms;
        unsigned long i = 0;
 
@@ -598,6 +628,10 @@ static int freeze_processes(void)
                                pr_err("Unable to freeze cgroup %s (timed out)\n", opts.freeze_cgroup);
                                goto err;
                        }
+
+                       if (!cgroup_v2)
+                               cgroupv1_freezer_kludges(fd, i, &req);
+
                        nanosleep(&req, NULL);
               

Contributor Author

Makes sense, although in this case we have a wasteful nanosleep when i reaches nr_attempts and the timeout is not hit. I.e. we check the state, sleep, and then print an error. Not a new problem, but let me fix this, too.
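
A rough sketch of the loop shape I have in mind, which sleeps at the top of every iteration except the first so there is no useless sleep after the final state check (a sketch only, built from the snippets above; it assumes get_freezer_state(fd) returns the current freezer state, and the real patch may be structured differently):

	for (i = 0; i < nr_attempts; i++) {
		/* Sleep before re-checking, except on the very first pass. */
		if (i > 0)
			nanosleep(&req, NULL);

		state = get_freezer_state(fd);
		if (state == FROZEN)
			break;

		/* Cgroup v1 may need an occasional re-freeze/thaw nudge. */
		if (!cgroup_v2)
			cgroupv1_freezer_kludges(fd, i, &req);
	}

	if (state != FROZEN) {
		pr_err("Unable to freeze cgroup %s: %lu x %lums attempts failed (timeout: %us)\n",
		       opts.freeze_cgroup, nr_attempts, step_ms, opts.timeout);
		goto err;
	}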

@kolyshkin
Contributor Author

Addressed review comments; rebased.

criu/seize.c (outdated, resolved)

Done using clang-format 19.1.5 with .clang-format obtained via
scripts/fetch-clang-format.sh.

Signed-off-by: Kir Kolyshkin <[email protected]>
There are a few issues with the freeze_processes logic:

1. Commit 9fae23f grossly (by 1000x) miscalculated the number of
   attempts required, as a result, we are seeing something like this:

> (00.000340) freezing processes: 100000 attempts with 100 ms steps
> (00.000351) freezer.state=THAWED
> (00.000358) freezer.state=FREEZING
> (00.100446) freezer.state=FREEZING
> ...close to 100 lines skipped...
> (09.915110) freezer.state=FREEZING
> (10.000432) Error (criu/cr-dump.c:1467): Timeout reached. Try to interrupt: 0
> (10.000563) freezer.state=FREEZING

   For 10s with 100ms steps we only need 100 attempts, not 100000.

2. When the timeout is hit, the "failed to freeze cgroup" error is not
   printed, and the log_unfrozen_stacks is not called either.

3. The nanosleep at the last iteration is useless (this was hidden by
   issue 1 above, as the timeout was hit first).

Fix all these.

While at it,

4. Amend the error message with the number of attempts, sleep duration,
   and timeout.

5. Modify the "freezing cgroup" debug message to be in sync with the
   above error.

   Was:

   > freezing processes: 100000 attempts with 100 ms steps

   Now:

   > freezing cgroup some/name: 100 x 100ms attempts, timeout: 10s

Signed-off-by: Kir Kolyshkin <[email protected]>
Cgroup v1 freezer has always been problematic, failing to freeze a
cgroup.

In runc, we have implemented a few kludges to increase the chance of
succeeding, but those are used when runc freezes a cgroup for its own
purposes (for "runc pause" and to modify device properties for cgroup
v1).

When criu is used, it fails to freeze a cgroup from time to time
(see [1], [2]). Let's try adding kludges similar to the ones in runc.

Alas, I have absolutely no way to test this, so please review carefully.

[1]: opencontainers/runc#4273
[2]: opencontainers/runc#4457

Signed-off-by: Kir Kolyshkin <[email protected]>
@kolyshkin
Contributor Author

Refactored to fix some more issues.

@kolyshkin
Contributor Author

kolyshkin commented Dec 17, 2024

  1. Commit 9fae23f grossly (by 1000x) miscalculated the number of attempts required
    <...>
  2. When the timeout is hit, the "failed to freeze cgroup" error is not printed, and the log_unfrozen_stacks is not called either.

As this is now fixed, we may find additional issues with log_unfrozen_stacks (which, I guess, was never called before due to the nr_attempts miscalculation).

@kolyshkin
Contributor Author

Testing this in opencontainers/runc#4559, no luck so far (can't reproduce the issue).

@kolyshkin
Contributor Author

> Testing this in opencontainers/runc#4559

I have concluded the testing; these kludges definitely help (i.e. I can easily reproduce the issue without the fix, and cannot reproduce it with the fix). See opencontainers/runc#4559 for more details.
