
Giving options for required capabilities #166

Open

appukuttan-shailesh opened this issue Sep 1, 2020 · 9 comments

appukuttan-shailesh (Contributor) commented Sep 1, 2020

Currently we use required_capabilities to list all the capabilities required for a particular test to be run on a given model. I was wondering whether there is a way (and, if not, whether it would be useful to implement one) to specify one or more alternative sets of capabilities, such that each set individually suffices for the test to be run on the model.

As a potential example, something like:

self.required_capabilities = [(cap_A, cap_B), (cap_C,)]

Here the model should either implement cap_A AND cap_B, or just cap_C.
Either of these sets of capabilities would suffice for the test.
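A minimal sketch of the intended semantics (a hypothetical helper, not existing sciunit API; it mimics sciunit's isinstance-based capability checking):

```python
def model_satisfies(model, required_capabilities):
    """Hypothetical semantics for the proposal above: the model passes
    if it implements every capability in at least one of the sets."""
    for cap_set in required_capabilities:
        # Allow a bare capability to stand in for a one-element set.
        if not isinstance(cap_set, tuple):
            cap_set = (cap_set,)
        if all(isinstance(model, cap) for cap in cap_set):
            return True
    return False
```

With this, [(cap_A, cap_B), (cap_C,)] would accept a model implementing cap_A and cap_B together, or one implementing cap_C alone.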

To get into some more detail...

I would normally split the capabilities into more elemental units, so that a seemingly composite capability such as cap_C isn't created. But in my current use case, I am creating a model loader for models produced by BluePyOpt (BPO) (https://github.com/BlueBrain/BluePyOpt). Since a number of teams develop models using this package, and its output is produced in a standardized format, we were keen to allow several of our tests to be easily run on these models.

For this purpose we created a model loader class in Python that takes as input the path to the zip file generated by BPO; the class unzips it, implements all the required capabilities for these models, and returns a sciunit.Model object as output. Currently, I inherit this model loader class individually from all the individual capabilities that I want implemented.

I was wondering if I could instead create a new capability, e.g. one named BPO_compatible, in our base validation package; any test module (in other packages such as 'hippounit') could then specify required_capabilities for its tests as:

self.required_capabilities = [(cap_A, cap_B), (BPO_compatible,)]

whereby the BPO model loader class will simply inherit from BPO_compatible (defined in our base validation module), while non-BPO models will need to implement cap_A AND cap_B (more elemental capabilities defined in the test module). Effectively, BPO_compatible will eventually be a large composite capability that allows BPO models to implicitly implement various capabilities, and thereby undertake several validation tests 'out of the box'.

So in summary:

  1. BPO_compatible would be a large composite capability
  2. BPO_compatible would be defined and implemented within our base validation module (while other, elemental capabilities would be defined within their test modules)

To some extent, I am thinking out loud here, and so might have missed a few details.

@rgerkin : not sure if there is already a method to implement this?

rgerkin (Contributor) commented Sep 1, 2020

There is not currently an implementation for this. To summarize, is the main use case "test requires capability set 1 or capability set 2, both of which ultimately implement the same set of required things, but in different ways"? And ideally the capabilities would be refactored so that each required method occurs in only one place, but for various reasons that isn't possible at the moment? And BPO_compatible does not directly inherit any other capabilities, but only "shims" them in on demand somehow via the model loader? This last point is what I am most unclear about. What is it that BPO_compatible will actually do? Will it simply signal that the capabilities will be there by the time the test is actually executed?

appukuttan-shailesh (Contributor, Author) commented Sep 2, 2020

> is the main use case "test requires capability set 1 or capability set 2, both of which ultimately implement the same set of required things, but in different ways"? And ideally the capabilities would be refactored so that each required method occurs in only one place, but for various reasons that isn't possible at the moment?

That's correct.

> What is it that BPO_compatible will actually do?

In the form that I currently envision it, this capability will implement all the necessary methods (such as injecting current, recording Vm, etc.), and have no "unimplemented" methods (of the kind we would normally implement on the model). As all BPO models have a similar internal structure, these implementations within the capability should work for all BPO models.
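For illustration, such a fully implemented capability might look like the sketch below (the method names and the _bpo_cell attribute are hypothetical placeholders, not actual BPO API; only sciunit.Capability itself is real):

```python
import sciunit

class BPO_compatible(sciunit.Capability):
    """Hypothetical composite capability: every method is implemented
    here, relying on the standardized internal structure of BPO models,
    so any BPO model inheriting it gets these behaviours for free."""

    def inject_current(self, amplitude, delay, duration):
        # Assumes every BPO model exposes its cell via a standard
        # attribute (illustrative name only).
        self._bpo_cell.inject_step(amplitude, delay, duration)

    def get_membrane_potential(self):
        # Assumes a standard recording layout in BPO output.
        return self._bpo_cell.recordings["v_soma"]
```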

rgerkin (Contributor) commented Sep 2, 2020

If it implements all the necessary methods, why can't it also subclass them? Then any model that inherits from it will also pass all the capability checks. Possibly related: there is an extra_capability_checks option that goes beyond just using isinstance checks and actually does some code inspection, via a user-provided hook.

appukuttan-shailesh (Contributor, Author) commented

> If it implements all the necessary methods, why can't it also subclass them?

I'm not sure I follow this entirely... were you asking why the capability BPO_compatible doesn't sub-class the other capabilities (e.g. cap_A, cap_B), given that it implements the same methods? If so, then yes, this could be done. But I was trying to see if the capability BPO_compatible and the BPO model loader (both of which would be developed in a base package) could be devised without having to keep track of the requirements posed by tests, so that they needn't be updated (to sub-class a new capability) with each new test.

Rather, any new test being developed (in its own package) could check whether the requirements it demands are already satisfied by the BPO capability (in the base package), and if so it could set BPO_compatible as one option for the required capabilities. The intention is for tests to be developed in a way that accommodates BPO models (if they wish to provide implicit support for BPO models).

I realize that in some sense this appears to contradict the model agnosticism that we aim for, but it could also be considered simply an alternate capability-based interface from models to tests. The overall motivation is to provide implicit support for standard model formats (the BPO format has the potential to become one such standard), by offering automatic implementation of commonly required capabilities. HBP already has some tools that automatically interpret such model formats. This should give more users an incentive to adopt such standard formats going forward.

P.S. I hope I don't sound too confusing here; like I said, I am thinking out loud to some extent, so I might not be seeing the potential downsides to this approach at the moment.

appukuttan-shailesh (Contributor, Author) commented

I am not too familiar with extra_capability_checks. Is there a working example anywhere that I could refer to for understanding how it comes into use?

rgerkin (Contributor) commented Sep 3, 2020

Every capability method that BPO would ever try to implement is originally defined in the corresponding capability. Presumably you already know what and where these capabilities are; if you don't, I'm not sure how BPO could implement them. So you would just do something like:

```python
from a.capabilities import A1, A2, A3
from b.capabilities import B1, B2, B3

classes = [A1, A2, A3, B1, B2, B3]

class BPO_compatible(*classes):
    pass
```

but with the rest of the stuff that BPO_compatible does in place of pass. You could even automate the importing of the classes with importlib, pointing it to wherever the capabilities are located. Any model inheriting from BPO_compatible will then be testable by any test requiring any of those capabilities.
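For example, the importlib-based automation might look roughly like this (the module paths and class names are placeholders):

```python
import importlib

# Placeholder locations of the capability classes to aggregate.
CAPABILITY_SOURCES = {
    "a.capabilities": ["A1", "A2", "A3"],
    "b.capabilities": ["B1", "B2", "B3"],
}

classes = []
for module_name, class_names in CAPABILITY_SOURCES.items():
    module = importlib.import_module(module_name)
    classes.extend(getattr(module, name) for name in class_names)

# Build the composite capability from whatever was collected.
BPO_compatible = type("BPO_compatible", tuple(classes), {})
```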

Would this not work?

appukuttan-shailesh (Contributor, Author) commented

> Every capability method that BPO would ever try to implement is originally defined in the corresponding capability.

This is going to be true for every capability corresponding to tests that have already been developed.
But the above plan is directed more towards new tests yet to be developed.

We are in some ways putting the onus on the test developers (who would be more adept with the SciUnit framework) to offer implicit support for certain standard model formats (e.g. BPO here). So the BPO capability will be defined and implemented (in a base package, not the test package), with methods that will allow, e.g.:

  1. inject DC current of specified stimulus
  2. inject AC current of specified stimulus
  3. record somatic potential
  4. record potential from oblique dendrites
  5. record potential at N synapses at specified distances from the soma
    ... and so on

Now, when a new test is being developed, if the methods made available by the BPO capability suffice for the test to be run on BPO models, then the test can specify:

self.required_capabilities = [BPO_compatible]

(not all tests will be satisfiable with the methods available in BPO_compatible, and BPO models will not be able to run those tests automatically)

In addition, to support non-BPO models, where these methods would need to be custom implemented, the test can specify these via other/new capabilities, such as cap_A, cap_B:

self.required_capabilities = [(cap_A, cap_B), (BPO_compatible,)]

The test developer could either opt to employ the same naming conventions for methods within these new capabilities as in BPO_compatible, or handle BPO models vs. other models appropriately internally (e.g. if isinstance(model, BPO_compatible): ... else: ...), as in the sketch below.
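Sketched out, that internal dispatch might look like this (it presumes the proposed 'alternative sets' semantics for required_capabilities; the import path and model method names are placeholders, while generate_prediction is sciunit's real hook):

```python
import sciunit
# Placeholder import path for the capabilities discussed above.
from base_validation.capabilities import cap_A, cap_B, BPO_compatible

class SomaticResponseTest(sciunit.Test):
    """Hypothetical test accepting either capability route."""
    required_capabilities = [(cap_A, cap_B), (BPO_compatible,)]

    def generate_prediction(self, model):
        if isinstance(model, BPO_compatible):
            # BPO models: use the pre-implemented composite interface.
            return model.record_somatic_potential()
        # Other models: use the elemental capabilities cap_A / cap_B.
        model.inject_dc_current(amplitude=0.5, delay=100, duration=500)
        return model.record_somatic_potential()
```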


> Would this not work?

Your suggestion certainly works. I actually do currently have an implementation that does something like this (not final; just an early prototype), with dynamic sub-classing of capabilities in the model loader:
https://github.com/appukuttan-shailesh/hbp-validation-client/blob/api-v2_modelLoaderBPO/hbp_validation_framework/bpoloader.py

What I didn't like here were the links to capabilities in other test modules. E.g. if we have a DC current injection capability in two different test modules (say HippoUnit and BasalUnit), I would need to install and load both of these modules, even if I require just one of them (dynamic sub-classing, as in the sketch below, could overcome this). But additionally, if these two modules contain capabilities that name the DC current injection method differently, the model loader will need to handle both.
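As a rough illustration of that dynamic sub-classing (not the prototype code above; module paths and class names are placeholders):

```python
import importlib
import sciunit

def load_bpo_model(capability_specs):
    """Hypothetical loader sketch: build a sciunit.Model subclass on
    the fly from only the capability classes actually requested, so
    that unneeded test modules never have to be installed or imported."""
    bases = [sciunit.Model]
    for module_name, class_name in capability_specs:
        # e.g. ("hippounit.capabilities", "SomeDCInjectionCapability")
        module = importlib.import_module(module_name)
        bases.append(getattr(module, class_name))
    BPOModel = type("BPOModel", tuple(bases), {})
    return BPOModel()
```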

I believe it comes down to whether it's useful to define a capability interface with a number of pre-implemented methods for models developed in a standard format. Tests can then be developed to consume these capabilities. For tests that require more capability methods than are available in the BPO_compatible capability, the BPO model loader simply does not come into use.

rgerkin (Contributor) commented Sep 4, 2020

If this is directed towards new tests that will be developed, is there a problem with having a common package or module where all capabilities are written? Note that this doesn't require that they be implemented there, merely that their names, organization, method signatures (arguments and return types), and basic purpose be provided there (call this hbp_capabilities). Different packages (e.g. hippounit) could then implement these in different ways, by subclassing these unimplemented capabilities and implementing them. required_capabilities for all tests should then contain only the capabilities in hbp_capabilities, so that models can implement them in whatever way they want without concern for implementation details.
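Concretely, that layering might look like the following sketch (hbp_capabilities is the hypothetical shared package named above; class and method names are illustrative):

```python
# In the shared package (e.g. hbp_capabilities): interface only --
# names, signatures, and purpose live here, with no implementation.
import sciunit

class InjectsDCCurrent(sciunit.Capability):
    def inject_dc_current(self, amplitude, delay, duration):
        """Inject a DC current step into the model."""
        raise NotImplementedError("Implementing packages must override this.")

# In an implementing package (e.g. hippounit): one concrete version.
class HippoUnitDCCurrent(InjectsDCCurrent):
    def inject_dc_current(self, amplitude, delay, duration):
        ...  # package-specific implementation details

# Tests everywhere then depend only on the shared interface:
# required_capabilities = (InjectsDCCurrent,)
```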

Otherwise, it sounds like what you want is to support two different versions of the same capability that have the same ultimate purpose, but are located in two different places where Python has no way of recognizing that they serve the same purpose. And that seems like non-ideal code design.

appukuttan-shailesh (Contributor, Author) commented

Apologies! I was away for a bit and just realized that I hadn't returned to this discussion. I have had to shift to another task in the meantime, so I shall return to this discussion a bit later (in all likelihood, I will try to rework the design).
