Probe geometry doc #660

Merged
merged 8 commits on Oct 24, 2023
168 changes: 168 additions & 0 deletions examples/loading_data/loading_raw_audio_data.ipynb
@@ -0,0 +1,168 @@
{
"cells": [
{
"cell_type": "markdown",
"id": "5683982d",
"metadata": {},
"source": [
"# Loading Raw Audio Data"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "6b2485da",
"metadata": {
"nbsphinx": "hidden"
},
"outputs": [],
"source": [
"# Turn off logging; this is a hidden cell on the docs page\n",
"import logging\n",
"logger = logging.getLogger('ibllib')\n",
"logger.setLevel(logging.CRITICAL)"
]
},
{
"cell_type": "markdown",
"id": "16345774",
"metadata": {},
"source": [
"The audio file is recorded from the microphone during the session. Plotting its spectrogram is a useful check that the sounds played during the task were indeed audible."
]
},
{
"cell_type": "markdown",
"id": "8d62c890",
"metadata": {},
"source": [
"## Relevant datasets\n",
"* _iblrig_micData.raw.flac\n"
]
},
{
"cell_type": "markdown",
"id": "bc23fdf7",
"metadata": {},
"source": [
"## Loading"
]
},
{
"cell_type": "markdown",
"id": "9103084d",
"metadata": {},
"source": [
"### Loading raw audio file"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "2b807296",
"metadata": {
"ibl_execute": false
},
"outputs": [],
"source": [
"from one.api import ONE\n",
"import soundfile as sf\n",
"\n",
"one = ONE()\n",
"eid = '4ecb5d24-f5cc-402c-be28-9d0f7cb14b3a'\n",
"\n",
"# -- Get raw data\n",
"filename = one.load_dataset(eid, '_iblrig_micData.raw.flac', download_only=True)\n",
"wav, fs = sf.read(filename)"
]
},
{
"cell_type": "markdown",
"id": "203d23c1",
"metadata": {},
"source": [
"## Plot the spectrogram"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "811e3533",
"metadata": {
"ibl_execute": false
},
"outputs": [],
"source": [
"from ibllib.io.extractors.training_audio import welchogram\n",
"import numpy as np\n",
"import matplotlib.pyplot as plt\n",
"\n",
"# -- Compute spectrogram over first 2 minutes\n",
"t_idx = 120 * fs\n",
"tscale, fscale, W, detect = welchogram(fs, wav[:t_idx])\n",
"\n",
"# -- Put data into single variable\n",
"TF = {}\n",
"\n",
"TF['power'] = W.astype(np.single)\n",
"TF['frequencies'] = fscale[None, :].astype(np.single)\n",
"TF['onset_times'] = detect\n",
"TF['times_mic'] = tscale[:, None].astype(np.single)\n",
"\n",
"# -- Plot spectrogram\n",
"tlims = TF['times_mic'][[0, -1]].flatten()\n",
"flims = TF['frequencies'][0, [0, -1]].flatten()\n",
"fig = plt.figure(figsize=[16, 7])\n",
"ax = plt.axes()\n",
"im = ax.imshow(20 * np.log10(TF['power'].T), aspect='auto', cmap=plt.get_cmap('magma'),\n",
" extent=np.concatenate((tlims, flims)),\n",
" origin='lower')\n",
"ax.set_xlabel(r'Time (s)')\n",
"ax.set_ylabel(r'Frequency (Hz)')\n",
"plt.colorbar(im)\n",
"plt.show()"
]
},
{
"cell_type": "markdown",
"id": "bef6702e",
"metadata": {},
"source": [
"## More details\n",
"* [Description of audio datasets](https://docs.google.com/document/d/1OqIqqakPakHXRAwceYLwFY9gOrm8_P62XIfCTnHwstg/edit#heading=h.n61f0vdcplxp)"
]
},
{
"cell_type": "markdown",
"id": "4e9dd4b9",
"metadata": {},
"source": [
"## Useful modules\n",
"* [ibllib.io.extractors.training_audio](https://int-brain-lab.github.io/iblenv/_autosummary/ibllib.io.extractors.training_audio.html#module-ibllib.io.extractors.training_audio)"
]
}
],
"metadata": {
"celltoolbar": "Edit Metadata",
"kernelspec": {
"display_name": "Python [conda env:iblenv] *",
"language": "python",
"name": "conda-env-iblenv-py"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.9.7"
}
},
"nbformat": 4,
"nbformat_minor": 5
}
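A note on the spectrogram cell above: `welchogram` averages windowed periodograms over time (Welch's method). A minimal standalone sketch of the same idea using `scipy.signal.spectrogram` on a synthetic tone — the sampling rate, tone frequency, and window length here are illustrative values, not taken from the notebook or the `welchogram` implementation:

```python
import numpy as np
from scipy import signal

# Synthetic "microphone" signal: a 5 kHz tone in noise, 2 s at 44.1 kHz (assumed values)
fs = 44100
rng = np.random.default_rng(0)
t = np.arange(2 * fs) / fs
wav = np.sin(2 * np.pi * 5000 * t) + 0.1 * rng.standard_normal(t.size)

# Spectrogram from windowed periodograms, roughly what welchogram computes
fscale, tscale, W = signal.spectrogram(wav, fs=fs, nperseg=1024)

# The dominant frequency bin should sit near the 5 kHz tone
peak_freq = fscale[W.mean(axis=1).argmax()]
print(peak_freq)
```

The same time-averaging trick is what makes the task tones stand out against broadband noise in the notebook's plot.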
58 changes: 57 additions & 1 deletion examples/loading_data/loading_raw_ephys_data.ipynb
@@ -152,7 +152,7 @@
"\n",
"# Use spikeglx reader to read in the whole raw data\n",
"sr = spikeglx.Reader(bin_file)\n",
"sr.shape\n"
"print(sr.shape)"
]
},
{
@@ -326,6 +326,62 @@
"destriped = destripe(raw, fs=sr.fs)"
]
},
{
"cell_type": "markdown",
"source": [
"## Get the probe geometry"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Using the `eid` and `probe` information"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"from brainbox.io.one import load_channel_locations\n",
"channels = load_channel_locations(eid, probe)\n",
"print(channels[probe].keys())\n",
"# Use the axial and lateral coordinates; print the first 4 channels as an example\n",
"print(channels[probe][\"axial_um\"][0:4])\n",
"print(channels[probe][\"lateral_um\"][0:4])"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"source": [
"### Using the reader and the `.cbin` file"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "code",
"execution_count": null,
"outputs": [],
"source": [
"# The Reader is assumed to be instantiated as in the loading example above:\n",
"# sr = spikeglx.Reader(bin_file)\n",
"sr.geometry"
],
"metadata": {
"collapsed": false
}
},
{
"cell_type": "markdown",
"id": "9851b10d",
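As a footnote to the new probe-geometry cells: `channels[probe]["axial_um"]` and `channels[probe]["lateral_um"]` give per-channel coordinates along and across the probe shank. A self-contained sketch of what such coordinate arrays look like for a staggered 384-channel layout — the pitches and offsets below are assumed for illustration only, not read from spikeglx or ONE:

```python
import numpy as np

# Illustrative staggered probe layout: two sites per row, rows 20 um apart,
# alternate rows offset laterally (assumed values, not the spikeglx API)
n_channels = 384
row = np.arange(n_channels) // 2               # two sites share each row
axial_um = row * 20.0                          # distance along the shank
stagger = np.where(row % 2 == 0, 16.0, 0.0)    # alternate-row lateral offset
lateral_um = stagger + (np.arange(n_channels) % 2) * 32.0  # distance across the shank

print(axial_um[:4], lateral_um[:4])
```

Plotting `lateral_um` against `axial_um` as a scatter is a quick way to visualize any geometry returned by `sr.geometry` or `load_channel_locations`.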