Support experiment results #15

Open
nhocki opened this issue Jan 30, 2013 · 2 comments
nhocki (Member) commented Jan 30, 2013

As we've been discussing, we need to figure out a way to support experiment results. The approach we are proposing right now is the following:

Each RLMS module will have an interface (similar to the ManagerClass) that will query the results of an experiment, given an experiment id or an equivalent unique id inside the RLMS. The modules will then process this information and present it to the Labmanager in a standard format, hopefully something like Activity Streams.
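A rough sketch of what such an interface could look like (the names `BaseResultsManager`, `get_experiment_results` and the record fields are made up for illustration, not the actual Labmanager API):

```python
import abc


class BaseResultsManager(abc.ABC):
    """Hypothetical results counterpart of the ManagerClass in each RLMS module."""

    @abc.abstractmethod
    def get_experiment_results(self, experiment_id):
        """Return the stored results for experiment_id as a list of
        Activity Streams-style dictionaries."""


class ExampleRLMSResultsManager(BaseResultsManager):
    def get_experiment_results(self, experiment_id):
        # Ask the RLMS for its own (lab-specific) records, then translate
        # each record into the common format the Labmanager understands.
        raw_records = self._fetch_raw_results(experiment_id)
        return [self._to_activity(record) for record in raw_records]

    def _to_activity(self, record):
        # Minimal Activity Streams-like shape: who did what, on which
        # experiment, when, and with which result payload.
        return {
            "actor": {"objectType": "person", "id": record["student_id"]},
            "verb": "completed",
            "object": {"objectType": "experiment", "id": record["experiment_id"]},
            "published": record["finished_at"],
            "result": record.get("result", {}),
        }

    def _fetch_raw_results(self, experiment_id):
        # Placeholder: this is where the concrete RLMS API would be queried.
        raise NotImplementedError("depends on the concrete RLMS")
```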

For now, teachers and students would be able to see the results of an experiment and manually enter the grades in the LMS, contact the student, or do whatever else they want with the results. This way we are not depending on LMS functionality for results.

Also, we are not depending on the LTI standard, which can only send a grade back (v1.1.1). If the standard matures enough in the future, we will already have all our data in the Activity Streams format and we can convert from that to the standard.
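To make that "convert later" idea concrete: LTI 1.1 Basic Outcomes only carries a decimal score between 0.0 and 1.0, so the conversion from a stored activity could be as small as the sketch below (the `score` / `max_score` fields inside `result` are assumptions about our own format, not anything mandated by LTI):

```python
def activity_to_lti_score(activity):
    """Reduce an Activity Streams-style entry to the 0.0-1.0 score
    that LTI 1.1 Basic Outcomes can carry."""
    result = activity.get("result", {})
    score = result.get("score")
    max_score = result.get("max_score", 1.0)
    if score is None or not max_score:
        return None  # nothing gradeable in this activity
    # Clamp to the range LTI accepts.
    return max(0.0, min(1.0, float(score) / float(max_score)))
```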

Problems with this approach

There may be some problems with this approach. One is the actual translation from the experiments' data to our standard format. This may get tricky when the information stored in the labs differs from lab to lab. For example, in iLabs, each lab has its own data and it is not standardised across iLabs. This is something we're working on and hope to fix soon.

Regarding this issue, I think that while iLabs works on this, we could start defining how to do this for other lab systems.

Implementation

We need to define the standard format we'll accept. Once we have this, we could start working on a dashboard that accepts the JSON data and builds itself up. We could look into Dashing: even though it's written in Ruby, we could copy the way it works, since it is Batman.js underneath.
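As a toy illustration of what the dashboard would consume, here is a sketch of a JSON endpoint it could poll (Flask, the URL, and the hard-coded sample data are all assumptions made just for this example):

```python
from flask import Flask, jsonify

app = Flask(__name__)

# Hard-coded placeholder; in the real system this would come from the
# RLMS results interface described above.
SAMPLE_RESULTS = [
    {
        "actor": {"objectType": "person", "id": "student-42"},
        "verb": "completed",
        "object": {"objectType": "experiment", "id": "example-experiment"},
        "published": "2013-01-30T03:35:00Z",
        "result": {"score": 8, "max_score": 10},
    }
]


@app.route("/experiments/<experiment_id>/results.json")
def experiment_results(experiment_id):
    # The dashboard polls this endpoint and builds its widgets from the JSON.
    return jsonify(experiment=experiment_id, results=SAMPLE_RESULTS)


if __name__ == "__main__":
    app.run(debug=True)
```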

/cc @porduna @sergiobuj @phbailey

phbailey (Member) commented:

The comment about iLabs, that 'each lab has its own data and is not standardized', describes a design goal of the iLab Shared Architecture and will not be 'fixed'.
It would be possible to have iLab clients (lab interfaces) define metadata about the data that a client produces, and use that metadata to package a client's data structure into a standard format that includes the metadata specification, so that a target grading environment would be able to parse the experiment results.
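For illustration, such a package might look like the sketch below; the field names and helper are entirely invented, the point is only that the client ships a description of its own record structure alongside the raw records.

```python
def package_ilab_results(client_id, records):
    """Wrap lab-specific records together with the metadata needed to
    interpret them, so a grading environment can parse the results
    without prior knowledge of this particular lab client."""
    metadata = {
        "client": client_id,
        "fields": [
            # Example field descriptions; a real client would generate these.
            {"name": "voltage", "type": "float", "unit": "V"},
            {"name": "current", "type": "float", "unit": "A"},
            {"name": "measured_at", "type": "datetime"},
        ],
    }
    return {"metadata": metadata, "records": records}
```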

phbailey reopened this Jan 30, 2013
nhocki (Member, Author) commented Jan 30, 2013

Oh, the metadata is what I meant by 'fix': having a way of knowing how to interpret the data for each experiment.

