# Using the staging server to try your experiments

### A sandbox environment

The staging versions of [Lookit](https://staging-lookit.osf.io) and [Experimenter](https://staging-experimenter.osf.io) allow you to write and try out your studies without posting them on the main site and collecting data. On Experimenter, you can create a new study; edit details about it such as the summary and age range; edit the JSON schema that defines what happens in the study; and start/stop data collection. Lookit accesses that data to show the studies that are currently collecting data and parses each study description. The staging-lookit application is separate from the production version of Lookit at lookit.mit.edu; account data isn't shared. Any data stored on staging-lookit is expected to be temporary test data.

> Note: Technically, staging-lookit is public - anyone can create an account there and see the studies being developed. It is therefore not a good place for your super-secret experimental design, working perpetual motion machine plans, etc. But in practice, no one's going to stumble on it.

It's also possible to run Lookit and/or Experimenter locally, so that you can edit the code that's used. In this case, they still talk to either the staging or the production server to fetch the definitions of available studies (Lookit) or save those definitions (Experimenter). For now, the plan is that you do NOT need to edit any of the code - if you want frames to work differently, in ways that aren't possible to achieve by adjusting the data you pass to them using the JSON schema, you'll contact MIT.

### Getting access

To access staging-experimenter, you'll need an account on [staging.osf.io](https://staging.osf.io/). Once you've created an account, go to [your profile](https://staging.osf.io/profile/) and send MIT the 5-character profile ID from the end of your "public profile" link (e.g. `staging.osf.io/72qxr`) to get access to Experimenter.

Once you can log in to Experimenter, you may be prompted to select a namespace. Select 'lookit'.
# Preparing your stimuli

### Audio and video files

Most experiments will involve using audio and/or video files! You are responsible for hosting these somewhere (contact MIT if you need help finding a place to put them).

For basic editing of audio files, if you don't already have a system in place, we highly recommend [Audacity](http://www.audacityteam.org/). You can create many "tracks" or select portions of a longer recording using labels and export them all at once; you can easily adjust volume so it's similar across your stimuli; and the simple "noise reduction" filter works well.

### File formats

To have your media play properly across various web browsers, you will generally need to provide multiple file formats. For a comprehensive overview of this topic, see [MDN](https://developer.mozilla.org/en-US/docs/Web/HTML/Supported_media_formats).

MIT's standard practice is to provide mp3 and ogg formats for audio, and webm and mp4 (H.264 video codec + AAC audio codec) for video, to cover modern browsers. The easiest way to create the appropriate files, especially if you have a lot to convert, is the command-line tool [ffmpeg](https://ffmpeg.org/). It's a bit of a pain to get used to, but then you can do almost anything you might want to with audio and video files.

Here's an example command to convert a video file INPUTPATH to an mp4 file OUTPUTPATH with reasonable quality/filesize, using the H.264 and AAC codecs:

```ffmpeg -i INPUTPATH -c:v libx264 -preset slow -b:v 1000k -maxrate 1000k -bufsize 2000k -c:a libfdk_aac -b:a 128k OUTPUTPATH.mp4```

And to make a webm file:

```ffmpeg -i INPUTPATH -c:v libvpx -b:v 1000k -maxrate 1000k -bufsize 2000k -c:a libvorbis -b:a 128k -speed 2 OUTPUTPATH.webm```

Converting all your audio and video files can easily be automated in Python. Here's an example script that uses ffmpeg to convert all the m4a and wav files in a directory to mp3 and ogg files:

```python
import os
import subprocess as sp

audioPath = '/Users/kms/Dropbox (MIT)/round 2/ingroupobligations/lookit stimuli/audio clips/'

audioFiles = os.listdir(audioPath)

for audio in audioFiles:
    (shortname, ext) = os.path.splitext(audio)
    print(shortname)
    # Skip subdirectories; convert only m4a and wav files
    if not os.path.isdir(os.path.join(audioPath, audio)) and ext in ['.m4a', '.wav']:
        sp.call(['ffmpeg', '-i', os.path.join(audioPath, audio),
                 os.path.join(audioPath, 'mp3', shortname + '.mp3')])
        sp.call(['ffmpeg', '-i', os.path.join(audioPath, audio),
                 os.path.join(audioPath, 'ogg', shortname + '.ogg')])
```

### Directory structure

For convenience, several of the newer frames allow you to define a base directory (`baseDir`) as part of the frame definition, so that instead of providing full paths to your stimuli (including multiple file formats) you can give relative paths and specify the audio and/or video formats to expect (`audioTypes` and `videoTypes`).

**Images**: Anything without `://` in the string is assumed to be a relative image source.

**Audio/video sources**: You will be providing a list of objects describing the sources, like this:

```json
[
    {
        "src": "http://stimuli.org/myAudioFile.mp3",
        "type": "audio/mp3"
    },
    {
        "src": "http://stimuli.org/myAudioFile.ogg",
        "type": "audio/ogg"
    }
]
```

Instead of listing multiple sources, which are generally the same file in different formats, you can alternately list a single source like this:

```json
[
    {
        "stub": "myAudioFile"
    }
]
```

If you use this option, your stimuli will be expected to be organized into directories based on type:

- **baseDir/img/**: all images (any file format; include the file format when specifying the image path)
- **baseDir/ext/**: all audio/video media files with extension `ext`

**Example**: Suppose you set `baseDir: 'http://stimuli.org/mystudy/'` and then specified an image source as `train.jpg`. That image location would be expanded to `http://stimuli.org/mystudy/img/train.jpg`. If you specified that the audio types you were using were `mp3` and `ogg` (the default) by setting `audioTypes: ['mp3', 'ogg']`, and specified an audio source as `[{"stub": "honk"}]`, then audio files would be expected to be located at `http://stimuli.org/mystudy/mp3/honk.mp3` and `http://stimuli.org/mystudy/ogg/honk.ogg`.
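
The expansion convention described above can be sketched in Python. This is only an illustration of the rules as documented here, not Lookit's actual implementation; the function names `expand_stub` and `expand_image` are made up for this example:

```python
def expand_stub(base_dir, stub, audio_types=('mp3', 'ogg')):
    """Expand a {"stub": ...} source into one full source object per
    expected audio type, following the baseDir/ext/stub.ext convention."""
    base = base_dir.rstrip('/')
    return [{'src': f'{base}/{ext}/{stub}.{ext}', 'type': f'audio/{ext}'}
            for ext in audio_types]

def expand_image(base_dir, image):
    """Treat anything without '://' as an image path relative to baseDir/img/."""
    if '://' in image:
        return image
    return base_dir.rstrip('/') + '/img/' + image

# Reproduces the example above:
print(expand_image('http://stimuli.org/mystudy/', 'train.jpg'))
# http://stimuli.org/mystudy/img/train.jpg
print(expand_stub('http://stimuli.org/mystudy/', 'honk'))
# [{'src': 'http://stimuli.org/mystudy/mp3/honk.mp3', 'type': 'audio/mp3'},
#  {'src': 'http://stimuli.org/mystudy/ogg/honk.ogg', 'type': 'audio/ogg'}]
```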