Giving options for required capabilities #166
Comments
There is not currently an implementation for this. To summarize, is the main use case "test requires capability set 1 or capability set 2, both of which ultimately implement the same set of required things, but in different ways"? And ideally the capabilities would be refactored so that each required method occurs in only one place, but for various reasons that isn't possible at the moment? And BPO_compatible does not directly inherit any other capabilities, but only "shims" them in on demand somehow via the model loader? This last point is what I am most unclear about. What is it that BPO_compatible will actually do? Will it simply send the signal that the capabilities will be there by the time the test is actually executed?
That's correct.
In the current form that I envision it, this capability will implement all the necessary methods (such as inject current, record Vm, etc.), and have no "unimplemented" methods (that we would normally implement on the model). As all BPO models would have similar internal structures, these implementations within the capability should work for all BPO models.
If it implements all the necessary methods, why can't it also subclass them? Then any model that inherits from it will also pass all the capability checks. Possibly related, there is an […]
Not sure if I follow this entirely... were you asking why the capability doesn't simply subclass the others? Rather, any new test being developed (in its own package) could check whether the requirements demanded by it are already satisfied by the BPO capability (in the base package), and if so it could list `BPO_compatible` in its `required_capabilities`.

I realize that in some sense this appears to be in contradiction to the model agnosticism that we aim for, but it could also possibly be considered just an alternate capability-based interface from models to tests. The overall motivation is to provide implicit support for standard model formats (the BPO format has the potential to become one such standard) by offering automatic implementation of commonly required capabilities. HBP already has some tools that automatically interpret such model formats. This should give some incentive to more users to adopt such standard formats going forward.

P.S. I hope I don't sound too confusing here; like I said, I am thinking out loud here to some extent, so I might not be seeing the potential downsides to this approach at the moment.
I am not too familiar with […]
Every capability method that BPO would ever try to implement is originally defined in the corresponding capability. Presumably you already know what and where these capabilities are; if you don't, I'm not sure how BPO could implement them. So you would just do something like:
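(A hedged reconstruction of the elided snippet; the capability and method names below are placeholders, not an actual package API.)

```python
import sciunit

# Placeholder elemental capabilities; in practice these live in test packages.
class cap_A(sciunit.Capability):
    def inject_current(self, amplitude):
        return self.unimplemented()

class cap_B(sciunit.Capability):
    def record_vm(self):
        return self.unimplemented()

# BPO_compatible subclasses the capabilities it implements, so any model
# inheriting from it also passes the isinstance-based capability checks.
class BPO_compatible(cap_A, cap_B):
    def inject_current(self, amplitude):
        ...  # generic implementation against the standard BPO model structure

    def record_vm(self):
        ...  # generic implementation against the standard BPO model structure
```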
but with the rest of the stuff that it needs. Would this not work?
So this is going to be true for every capability corresponding to tests that have already been developed. We are in some ways putting the onus on the test developers (who would be more adept with the SciUnit framework) to offer implicit support for certain standard model formats (e.g. BPO here). So the BPO capability will be defined and implemented (in a base package, not the test package), with methods that will allow, e.g.:
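(A hypothetical sketch of such methods; the thread mentions current injection and Vm recording, but the actual names and signatures in the base package may differ.)

```python
class BPO_compatible(sciunit.Capability):
    def inject_current(self, amplitude, delay, duration):
        """Inject a step current into the loaded BPO model."""
        ...

    def record_vm(self, location="soma"):
        """Return the membrane potential trace recorded at `location`."""
        ...
```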
Now, when a new test is being developed, if the methods made available by the BPO capability suffice for the test to be run on the BPO models, then the test can specify:
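(Sketch of the test declaration; `SomeNewTest` is a placeholder name.)

```python
class SomeNewTest(sciunit.Test):
    # BPO_compatible alone suffices for this test.
    required_capabilities = (BPO_compatible,)
```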
(Not all tests will be satisfied by the methods available in `BPO_compatible`, and BPO models will not be able to run those tests automatically.) In addition, to support non-BPO models, where these methods would need to be custom implemented, the test can specify them via other/new capabilities, such as `cap_A`, `cap_B`:
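(Sketch of the proposed "either/or" form from this issue, which sciunit does not currently support: either `cap_A` AND `cap_B` together, or `BPO_compatible` alone.)

```python
class SomeNewTest(sciunit.Test):
    required_capabilities = [(cap_A, cap_B), (BPO_compatible,)]
```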
The test developer could either opt to employ similar naming conventions for methods within these new capabilities as in `BPO_compatible`, or handle BPO models vs. other models appropriately internally (e.g. by checking whether the model is an instance of `BPO_compatible`; a sketch follows).
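(One hypothetical way to branch inside a test; `record_vm` and `record_vm_custom` are illustrative method names.)

```python
def generate_prediction(self, model):
    if isinstance(model, BPO_compatible):
        trace = model.record_vm()         # pre-implemented for BPO models
    else:
        trace = model.record_vm_custom()  # per-model implementation via cap_B
    ...
```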
Your suggestion certainly works. I actually do currently have an implementation which does something like this (not final; just an early prototype) with dynamic sub-classing of capabilities in the model loader (see the sketch after this comment).

What I didn't like here was the links to capabilities in other test modules. E.g. if we have a DC current injection capability in two different test modules (say HippoUnit, BasalUnit), I would need to install and load both these modules even if I just require one of them (dynamic sub-classing as above could overcome this). But additionally, if these two modules contain capabilities that name the DC current injection method differently, the model loader will need to handle both.

I believe it comes down to whether it's useful to define a capability interface with a number of pre-implemented methods for models developed in a standard format. Tests can then be developed to consume these capabilities. For tests that require more capability methods than are available in the `BPO_compatible` capability, the BPO model loader does not come into use.
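(A minimal sketch of what dynamic sub-classing in the loader might look like; `build_model_from_zip` is a hypothetical helper, not the actual prototype.)

```python
import sciunit

def load_bpo_model(zip_path, capabilities):
    """Build a sciunit.Model from a BPO zip, then graft the requested
    capability classes onto its class at runtime so that isinstance-based
    capability checks succeed."""
    model = build_model_from_zip(zip_path)  # hypothetical unzip/build step
    model.__class__ = type(
        model.__class__.__name__ + "_WithCaps",
        (model.__class__,) + tuple(capabilities),
        {},
    )
    return model
```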
If it is directed towards new tests that will be developed, is there a problem with having a common package or module where all capabilities are written? Note that this doesn't require that they be implemented there, merely that their names, organization, method signatures (arguments and return types), and basic purpose be provided there. Otherwise, it sounds like what you want is to support two different versions of the same capability that have the same ultimate purpose, but located in two different places where Python has no way of recognizing that they serve the same purpose. And that seems like non-ideal code design.
Apologies! I was away for a bit, and just realized that I hadn't returned to this discussion. I have had to shift to another task in the meantime, and so shall return to this discussion a bit later (in all likelihood, I will try to rework the design).
Currently we use `required_capabilities` to list all the capabilities that are required for a particular test to be undertaken on a given model. I was wondering if there is a way (and if not, whether it would be useful to implement one) to allow specification of one or more sets of capabilities, such that each set individually suffices for the test to be undertaken by the model. As a potential example, something like:
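(A sketch of the requested syntax; `cap_A`, `cap_B`, `cap_C` are placeholder capabilities, and each inner tuple is one set that by itself suffices.)

```python
self.required_capabilities = [(cap_A, cap_B), (cap_C,)]
```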
Here the model should either implement `cap_A` AND `cap_B`, or just `cap_C`. Either of these sets of capabilities would suffice for the test.
To get into some more detail...
I would normally split the capabilities into more elemental units, such that a seemingly composite capability such as `cap_C` isn't created. But in my current use case, I am creating a model loader for models output via BluePyOpt (BPO) (https://github.com/BlueBrain/BluePyOpt). As we have a number of teams developing models using this package, with the output produced in a standardized format, we were keen to allow several of our tests to be easily undertaken on these models.

For this purpose we created a model loader class in Python that takes as input the path to the zip file generated by BPO, unzips it, implements all the required capabilities for these models, and returns a `sciunit.Model` object as output. Currently, I make this model loader class inherit individually from all the capabilities that I want implemented.

I was wondering if I could maybe create a new capability, named `BPO_compatible`, in our base validation package, and then any test module (other packages such as 'hippounit') could specify `required_capabilities` for their tests as:

```python
self.required_capabilities = [(cap_A, cap_B), (BPO_compatible,)]
```
whereby the BPO model loader class will simply inherit from `BPO_compatible` (defined in our base validation module), and non-BPO models will be required to implement `cap_A` AND `cap_B` (which would be more elemental capabilities defined in the test module). Effectively, `BPO_compatible` will eventually be a large composite capability that allows BPO models to implicitly implement various capabilities, and thereby undertake several validation tests 'out of the box'.

So in summary:
- `BPO_compatible` would be a large composite capability
- `BPO_compatible` would be defined and implemented within our base validation module (while other, more elemental capabilities would be defined within their test modules)

To some extent, I am thinking out loud here, and so might have missed a few details.
@rgerkin: not sure if there is already a method to implement this?