
Too forgiving test in rpn-calculator-inspection #1133

Open
hrubi opened this issue May 5, 2022 · 2 comments

Comments

@hrubi
Contributor

hrubi commented May 5, 2022

Tests for RPNCalculatorInspection.reliability_check/2 are too forgiving for calculations that take longer than 100ms. If the solution starts the checks in parallel and then awaits the results one by one, all the tests pass, but they should not. When the solution awaits the first check and it does not complete within 100ms, it adds :timeout to the results. But by the time it awaits the second check, that check has already had a 100ms "bonus" from the previous await. So if the second check takes 150ms to finish, the await receives the response after only 50ms and adds :ok to the results, whereas it should add :timeout.

Here is a test which covers this case:

    test "returns :timeout for all calculations lasting longer than 100ms" do
      inputs = [200, 150, 0]
      calculator = &:timer.sleep(&1)

      outputs = %{
        200 => :timeout,
        150 => :timeout,
        0 => :ok
      }

      assert RPNCalculatorInspection.reliability_check(calculator, inputs) == outputs
    end
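
The solutions below rely on start_reliability_check/2 and await_reliability_check_result/2, the helpers implemented in earlier tasks of the exercise. Their exact shape depends on the student's solution; a typical naive version (an assumed sketch, included only to make the examples self-contained) looks something like this:

    def start_reliability_check(calculator, input) do
      # Run the calculation in a linked process; the caller is expected
      # to trap exits so it receives the :EXIT message when it finishes.
      %{input: input, pid: spawn_link(fn -> calculator.(input) end)}
    end

    def await_reliability_check_result(%{input: input, pid: pid}, results) do
      # Each call opens its own fresh 100ms receive window.
      receive do
        {:EXIT, ^pid, :normal} -> Map.put(results, input, :ok)
        {:EXIT, ^pid, _reason} -> Map.put(results, input, :error)
      after
        100 -> Map.put(results, input, :timeout)
      end
    end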

And here is a solution which passes the existing tests but not the above one:

    def reliability_check(calculator, inputs) do
      trap_orig = Process.flag(:trap_exit, true)

      # All checks start in parallel, but the awaits run one after another,
      # so each receive's 100ms window only opens once the previous await
      # has returned.
      results =
        inputs
        |> Enum.map(&start_reliability_check(calculator, &1))
        |> Enum.reduce(%{}, &await_reliability_check_result/2)

      Process.flag(:trap_exit, trap_orig)
      results
    end

And here's a solution which passes the above test as well:

    def reliability_check(calculator, inputs) do
      trap_orig = Process.flag(:trap_exit, true)
      caller = self()

      results =
        inputs
        |> Enum.map(fn input ->
          # Start a dedicated awaiter per input, so every check's 100ms
          # window is measured from the moment that check starts.
          await_pid =
            spawn_link(fn ->
              # The awaiter, not the caller, receives the :EXIT message,
              # so it has to trap exits itself.
              Process.flag(:trap_exit, true)
              check = start_reliability_check(calculator, input)
              results = await_reliability_check_result(check, %{})
              [{^input, result}] = Map.to_list(results)
              send(caller, {self(), result})
            end)

          {input, await_pid}
        end)
        |> Enum.reduce(%{}, fn {input, await_pid}, results ->
          receive do
            {^await_pid, result} -> Map.put(results, input, result)
          after
            # Every awaiter resolves within ~100ms, so 200ms is a safety net.
            200 -> raise "Timeout"
          end
        end)

      Process.flag(:trap_exit, trap_orig)
      results
    end
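
With a dedicated awaiter per input, each check's 100ms window starts the moment the check itself starts, so the stricter expectations hold, e.g. (mirroring the test above):

    iex> RPNCalculatorInspection.reliability_check(&:timer.sleep/1, [200, 150, 0])
    %{0 => :ok, 150 => :timeout, 200 => :timeout}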

Hardening the test would lead to more complicated solutions which are probably beyond the educational scope of this exercise. But the existing tests do not sufficiently cover the goals set in the exercise. So I'm not sure what exactly to do here.

@angelikatyborska
Member

You're completely right. The same problem applies to the second part of the exercise, with tasks. I was aware of this issue when creating the exercise. It's described in the tasks concept's about.md document: https://github.com/exercism/elixir/blob/main/concepts/tasks/about.md#timeouts.

The initial idea was to have extended concept descriptions that students see after completing the exercise, but in practice the platform doesn't tell you that there might be more content after completing an exercise, and most concepts don't have any extra content anyway. Even with this limitation, I wouldn't want to bring this timeout explanation into the concept introduction, because it isn't necessary for solving the exercise in a naive way.
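
For illustration only (this is not the exercise code, just the same pitfall reproduced with tasks): yielding to tasks sequentially gives each task a fresh 100ms window, so later tasks inherit the time already spent waiting on earlier ones.

    inputs
    |> Enum.map(fn input -> Task.async(fn -> :timer.sleep(input) end) end)
    |> Enum.map(fn task ->
      # Each Task.yield/2 call gets its own fresh 100ms window, so a task
      # taking 150ms can still come back {:ok, _} if an earlier yield
      # already waited out its full 100ms.
      case Task.yield(task, 100) || Task.shutdown(task) do
        {:ok, _result} -> :ok
        {:exit, _reason} -> :error
        nil -> :timeout
      end
    end)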

If we wanted to make deep understanding of timeouts part of this exercise, we would need to redesign it and make it a bit more complicated. @jiegillet @neenjaw any opinions?

@jiegillet
Contributor

That's a good observation.
From what I've gathered about this exercise, it's already pretty challenging for the students, and this would ramp it up quite a bit, like you said, @hrubi. It hurts me to say, but I'm leaning towards not adding it in.
Thinking outside the box here, but maybe we could do a second-pass kind of thing through the analyzer? Some informative message saying "we noticed your solution is too forgiving, bla bla, please look here (link to about.md) for more info". It's not standard at all...
