Requirements specification #7

Open
zsunberg opened this issue Jun 16, 2020 · 11 comments
Labels: decision (A decision that needs to be made), help wanted (Extra attention is needed)

Comments

@zsunberg (Member) commented Jun 16, 2020

In #1, several people talked about a way for algorithm writers to specify requirements. This seems like a good idea, but designing it will be challenging. Here are a few important questions we need to answer:

  1. Are requirements just functions (e.g. clone) or method signatures (i.e. with the argument types, e.g. clone(::YourCommonEnv))?
    There are several sub-issues with this:
    1. If we have any cases of the same function with different possible signatures, then the requirements will need to be signatures.
    2. If we use signatures, it is a huge pain to infer the right types to ask for. E.g. suppose we had interact!(env, a); to check the requirement we would have to infer (or ask the user to supply) the type of a. It might be possible to do something like Base.return_types(env -> eltype(actions(env)), Tuple{typeof(env)}) (see the sketch after this list).
  2. Do we evaluate requirements all at once so we can show a checklist, or allow algorithm writers to write their own code and go through each requirement one by one?
  3. Should algorithms automatically check requirements when they start, or should we provide a mechanism for an environment writer to test what methods are needed before running the algorithm?
  4. How should we handle cases where there are "or" requirements? E.g. MCTS could use clone or setstate!.
  5. In the interface checking, should we use Base.show_method_candidates to make a report like a MethodError, or just wait until a missing method is erroneously called?
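
For concreteness, here is a minimal sketch of what a signature check with type inference could look like, assuming requirements are signatures and that the interface functions actions and interact! from above are in scope (the helper name is made up):

# Hypothetical: check the interact!(env, a) requirement by inferring the
# action type from the return type of actions(env).
function interact_requirement_met(env)
    rts = Base.return_types(actions, Tuple{typeof(env)})
    A = isempty(rts) ? Any : eltype(first(rts))  # eltype of the action container
    return hasmethod(interact!, Tuple{typeof(env), A})
end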

Please weigh in!

@zsunberg zsunberg added the decision A decision that needs to be made label Jun 16, 2020
@zsunberg zsunberg added this to the v0.2 milestone Jun 16, 2020
@zsunberg (Member, Author) commented
My preferences are:

  1. Go with method signatures
  2. All at once
  3. Ambivalent - probably leave that up to the person who writes the algorithm. I think we should just provide some way to easily create and print requirements checklists.
  4. I don't know - this is a bit tricky
  5. Show it in the report

@zsunberg (Member, Author) commented
@findmyway it looked like you had some ideas in #1 - if you're up for it, you could take a stab at implementing something and then we could discuss it.

@findmyway (Member) commented
My preferences:

  1. Both are OK with me.
  2. All at once. The solvers may need this info to decide which algorithms to use.
  3. Leave it to algorithms. I think we need to suppose environment writers are ignorant of the solvers and generally should implement optional interfaces as much as possible. (And most can be defined by default implementations.)
  4. I think the solver should provide a trait for users to decide which implementation to use, like MCTSPreferedStyle(env) -> StateBased/CopyEnvBased. By default we stick with whichever one is available (if both are provided, we can use either one by default and still tell the user that by defining MCTSPreferedStyle they can choose a more efficient solution). See the sketch after this list.
  5. I have no preference here.
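
A rough sketch of how such a trait could look (illustrative only; none of these names are part of any interface):

# Hypothetical solver-side trait for choosing between implementations.
abstract type MCTSPreferedStyle end
struct StateBased <: MCTSPreferedStyle end    # uses state/setstate!
struct CopyEnvBased <: MCTSPreferedStyle end  # uses clone

# Default preference; users can override this for their environment type.
MCTSPreferedStyle(env) = CopyEnvBased()

# The solver dispatches on the trait:
simulate(env) = simulate(MCTSPreferedStyle(env), env)
simulate(::CopyEnvBased, env) = nothing  # ... would work on clone(env)
simulate(::StateBased, env) = nothing    # ... would use state(env)/setstate!(env, s)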

@jbrea commented Jun 17, 2020

  1. Either is fine. Method signatures seem more stringent but trickier to implement. The requirements specification will be in the docs anyway; I think it is sufficient if these tools give informative hints/reports to developers.
  2. All at once.
  3. For environment authors it may be nice to know in advance which algorithms would work with the currently implemented interface functions and which ones would fail. So I would definitely go for a simple way to get a checklist/report for environment authors. I think it should be up to the algorithm author to decide whether or not to check the environment at the start.
  4. If possible, I would try to avoid "or" requirements; see e.g. my comment in #1. Otherwise (if we decide to check method signatures in 1.), couldn't we go with something like hasmethod(clone, (env_type,)) || hasmethod(setstate!, (env_type, state_type)) || @warn ...? See the sketch after this list.
  5. Showing it in the report looks like a nice idea.
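
Spelled out, that "or" check might look like this (a sketch assuming signature checking; env_type and state_type would have to be supplied by the algorithm writer, and the function name is made up):

# Hypothetical: warn unless at least one of the two alternatives has a method.
function check_clone_or_setstate(env_type, state_type)
    hasmethod(clone, (env_type,)) ||
        hasmethod(setstate!, (env_type, state_type)) ||
        @warn "This algorithm needs either clone(env) or setstate!(env, s)."
end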

@zsunberg (Member, Author) commented Jun 18, 2020

@findmyway I like your idea for (4).

> generally should implement optional interfaces as much as possible

If these are "industrial-grade" environment writers, like people wrapping gym or something, then I agree, but we should be careful - if it's someone in environmental engineering who has a cool model of a water system, we should not accidentally make them think they need to implement state or setstate! if they don't need it for the solver they want to use :)

> most can be defined by default implementations

Make sure to weigh in on #6 :)


Ok. Given that we agree on (2), I think this package should contain:

requirements_met(list) # returns a boolean
show_requirements(list; show_method_candidates=true) # prints a nice list with check marks

where list is an iterable that contains items of either the (function, arg, arg) form or the (function, Tuple{argtype, argtype}) form, or some kind of Or item that we create to deal with "or" cases. We could also try to make it handle plain functions.
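
Here is a minimal sketch of what those two functions might look like, assuming requirements come in the (function, Tuple{argtypes...}) form and Or simply wraps alternatives (all of this is up for discussion):

# Hypothetical wrapper for "or" requirements.
struct Or
    alternatives::Tuple
end

satisfied((f, sig)) = hasmethod(f, sig)            # (function, Tuple{argtypes...}) form
satisfied(req::Or) = any(satisfied, req.alternatives)

requirements_met(list) = all(satisfied, list)      # returns a boolean

function show_requirements(list; show_method_candidates=true)
    for req in list
        println(satisfied(req) ? "[✓] " : "[✗] ", req)
        # with show_method_candidates=true, a near-miss report in the style of
        # a MethodError could be printed here via Base.show_method_candidates
    end
end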

Anyone want to take a crack at it?

@jbrea commented Jun 18, 2020

Given the importance that provided will have, I think we should include checks for it, e.g. in requirements_met. It may be too easy for an environment author to forget to put an @provided in front of an optional interface function.
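
For example, the Tuple method of satisfied in the sketch above could also consult provided (assuming provided can be queried with a Tuple type; its exact form is still being worked out in #6):

# Hypothetical: a requirement only counts as satisfied if the method exists
# AND the environment author declared it with @provided.
satisfied((f, sig)) = hasmethod(f, sig) && provided(f, sig)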

@darsnack commented Jun 18, 2020

Maybe it's just me, but I am getting confused here.

> Given the importance that provided will have, I think we should include checks for it, e.g. in requirements_met.

I think we should be more clear about who this check tool is intended for. Are we agreed on the role of the environment writer here? I agree with @findmyway that environments should be written w/o knowledge of algorithms/solvers. I was under the impression the requirements check was for algorithm writers:

> In #1, several people talked about a way for algorithm writers to specify requirements.

@zsunberg the functions you listed in your last comment don't take algorithms into account in any way. Don't we need some information from the algorithm writer to say whether requirements are met? Or are you describing some kind of interface for algorithm writers?

EDIT: Sorry, I think I found the source of my confusion. Is list provided by the algorithm writer?

@zsunberg (Member, Author) commented
@darsnack Thanks for bringing this up - I think it is useful to talk about.

Yes, you are right - list would be provided by the algorithm writer (maybe one day we can do some fancy analysis that automatically generates it).

There are really three roles and two contexts that we should be thinking about for this package:

Roles:

  • environment writer
  • algorithm writer
  • experimenter (wrote neither the algorithm nor the environment, but wants to run experiments with them)

Contexts:

  • seriously developing a package meant to be used widely (developers)
  • experimenting/learning (tinkerers)

> environments should be written w/o knowledge of algorithms/solvers

Environment developers should certainly do this, but environment tinkerers who only want to start out with one algorithm should probably begin by implementing only the optional functions needed for the algorithm they want to use.

I see provided as a mechanism for these communication flows:

  • environment developers/tinkerers -> algorithm developers/tinkerers
  • environment developers/tinkerers -> experiment developers/tinkerers

Requirements are a mechanism for these communication flows:

  • algorithm developers -> environment tinkerers
  • algorithm developers -> experiment developers/tinkerers
  • experiment developers -> environment tinkerers

In other words, I don't think algorithm tinkerers will ever specify requirements and I don't think environment developers will ever look at them. They are mostly for communication from developers to tinkerers, whereas provided will be used between developers and tinkerers.

@zsunberg (Member, Author) commented Jun 18, 2020

> the functions you listed in your last comment don't take algorithms into account in any way? Don't we need some information from the algorithm writer to say whether requirements are met? Or are you describing some kind of interface for algorithm writers?

Yeah, I'm not sure exactly what the usage would look like. I was imagining that a friendly algorithm writer might write something like this:

function learn(env::AbstractEnv)
    # construct requirements list
    # ...

    if !requirements_met(list)
        @warn("There were missing requirements for <algorithm name>")
        show_requirements(list)
    end

    # run the algorithm
end

and/or they might write a function called determine_requirements and put in their documentation:

> To see if your environment meets the requirements for the algorithm, run CommonRLInterface.show_requirements(determine_requirements(env))
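
Such a function might look something like this (purely illustrative; the actual requirements would depend on the algorithm, and the state type in setstate! would need to be inferred or supplied, per question 1):

# Hypothetical requirements list for some algorithm, using the Or wrapper
# sketched earlier in the thread.
function determine_requirements(env)
    E = typeof(env)
    return [
        (reset!, Tuple{E}),
        (actions, Tuple{E}),
        Or(((clone, Tuple{E}), (setstate!, Tuple{E, Any}))),
    ]
end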

I'm not sure we can do anything more than that, because we haven't specified (and shouldn't specify) how an algorithm should look in this package.

Maybe the requirements functionality should go in another package, e.g. RLEnvironmentLinter.jl?

@darsnack commented
Okay, that clears a lot up. Based on that description, I think we need args + types as part of requirements for the requirements_met check to be reliable. If these functions were purely tools (e.g. for linting only), then we could get away with checking only function names and printing which function we detected as matching. But to be usable within developed code, we need types.

@jbrea commented Jun 20, 2020

> Maybe it's just me, but I am getting confused here.
>
> > Given the importance that provided will have, I think we should include checks for it, e.g. in requirements_met.

Sorry, I wasn't clear. I imagined the situation where a student or researcher, unfamiliar with the RL ecosystem in Julia, has a custom environment and wants to run some algorithms on it. The researcher figured out that step!, reset!, actions, and the optional clone should be implemented for the custom environment, but forgets to decorate clone with @provided. An algorithm that checks for clone with the provided mechanism will then complain that the environment doesn't meet the requirements, potentially leaving the researcher pretty confused ("But I did implement clone!?" "Yes, but you did not decorate it with @provided.")
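
A linter could catch exactly this case: a method exists but was never declared with @provided (again assuming provided can be queried this way; see #6):

# Hypothetical lint check for a forgotten @provided declaration.
if hasmethod(clone, Tuple{typeof(env)}) && !provided(clone, Tuple{typeof(env)})
    @warn "clone has a method for this environment but is not marked with @provided."
end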

I like the idea of RLEnvironmentLinter.jl. This could also help environment authors follow the maxim

> generally [you] should implement optional interfaces as much as possible.

@zsunberg zsunberg added the help wanted Extra attention is needed label Jun 30, 2020
@zsunberg zsunberg removed this from the v0.2 milestone Jul 1, 2020