[DATALAD RUNCMD] run codespell throughout fixing typo automagically
=== Do not change lines below ===
{
 "chain": [],
 "cmd": "codespell -w",
 "exit": 0,
 "extra_inputs": [],
 "inputs": [],
 "outputs": [],
 "pwd": "."
}
^^^ Do not change lines above ^^^
yarikoptic committed Sep 20, 2023
1 parent 2236a8c commit bffa54f
Showing 10 changed files with 25 additions and 25 deletions.
@@ -732,7 +732,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]
12 changes: 6 additions & 6 deletions completed_tutorials/03-Calcium Imaging Computed Tables.ipynb
@@ -770,7 +770,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"There are multiple ways to perform the segementation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
+"There are multiple ways to perform the segmentation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
 ]
 },
 {
@@ -1027,7 +1027,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of paremeters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
+"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of parameters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
 ]
 },
 {
@@ -1160,7 +1160,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmenation` table. "
+"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmentation` table. "
 ]
 },
 {
@@ -1342,7 +1342,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"And for the part table `Segmenation.Roi`, there was an additional primary key attribute `roi_idx`:`"
+"And for the part table `Segmentation.Roi`, there was an additional primary key attribute `roi_idx`:`"
 ]
 },
 {
@@ -1721,7 +1721,7 @@
 }
 ],
 "source": [
-"# ENTER YOUR CODE! - populate the Segmenation table for real!\n",
+"# ENTER YOUR CODE! - populate the Segmentation table for real!\n",
 "Segmentation.populate()"
 ]
 },
@@ -2177,7 +2177,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {
@@ -523,7 +523,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convetion!"
+"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convention!"
 ]
 },
 {
@@ -977,7 +977,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimesion)."
+"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimension)."
 ]
 },
 {
@@ -1068,7 +1068,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]
@@ -3333,7 +3333,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {
4 changes: 2 additions & 2 deletions short_tutorials/University.ipynb
@@ -490,7 +490,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Millenials\n",
+"# Millennials\n",
 "millennials = Student & 'date_of_birth between \"1981-01-01\" and \"1996-12-31\"'"
 ]
 },
@@ -519,7 +519,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# Millenials who have never enrolled\n",
+"# Millennials who have never enrolled\n",
 "millennials - Enroll"
 ]
 },
2 changes: 1 addition & 1 deletion tutorials/01-DataJoint Basics.ipynb
@@ -51,7 +51,7 @@
 "If you visit the [documentation for DataJoint](https://docs.datajoint.io/introduction/Data-pipelines.html), we define a data pipeline as follows:\n",
 "> A data pipeline is a sequence of steps (more generally a directed acyclic graph) with integrated storage at each step. These steps may be thought of as nodes in a graph.\n",
 "\n",
-"While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependecies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
+"While this is an accurate description, it may not be the most intuitive definition. Put succinctly, a data pipeline is a listing or a \"map\" of various \"things\" that you work with in a project, with line connecting things to each other to indicate their dependencies. The \"things\" in a data pipeline tends to be the *nouns* you find when describing a project. The \"things\" may include anything from mouse, experimenter, equipment, to experiment session, trial, two-photon scans, electric activities, to receptive fields, neuronal spikes, to figures for a publication! A data pipeline gives you a framework to:\n",
 "\n",
 "1. define these \"things\" as tables in which you can store the information about them\n",
 "2. define the relationships (in particular the dependencies) between the \"things\"\n",
2 changes: 1 addition & 1 deletion tutorials/02-Calcium Imaging Imported Tables.ipynb
@@ -342,7 +342,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]
12 changes: 6 additions & 6 deletions tutorials/03-Calcium Imaging Computed Tables.ipynb
@@ -250,7 +250,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"There are multiple ways to perform the segementation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
+"There are multiple ways to perform the segmentation. To keep it simple, we just detect the cells by setting up the threshold on the average image."
 ]
 },
 {
@@ -397,7 +397,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of paremeters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
+"We would like to perform the segmentation for a **combination** of `AverageFrame`s and different set of parameters of `threshold` and `size_cutoff` values. To do this while still taking advantage of the `make` and `populate` logic, you would want to define a table to house parameters for segmentation in a `Lookup` table!"
 ]
 },
 {
@@ -506,7 +506,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmenation` table. "
+"The `Computed` table is labeled as a pink oval and the `Part` table is bare text. We see that `Segmentation` is a `Computed` table that depends on **both AverageFrame and SegmentationParam**. Finally, let's go ahead and implement the `make` method for the `Segmentation` table. "
 ]
 },
 {
@@ -597,7 +597,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"And for the part table `Segmenation.Roi`, there was an additional primary key attribute `roi_idx`:`"
+"And for the part table `Segmentation.Roi`, there was an additional primary key attribute `roi_idx`:`"
 ]
 },
 {
@@ -693,7 +693,7 @@
 "metadata": {},
 "outputs": [],
 "source": [
-"# ENTER YOUR CODE! - populate the Segmenation table for real!\n"
+"# ENTER YOUR CODE! - populate the Segmentation table for real!\n"
 ]
 },
 {
@@ -804,7 +804,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SegmentationParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {
6 changes: 3 additions & 3 deletions tutorials/04-Electrophysiology Imported Tables.ipynb
@@ -190,7 +190,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convetion!"
+"Let's take the first key, and generate the file name that corresponds to this session. Remember the `data_{mouse_id}_{session_date}.npy` filename convention!"
 ]
 },
 {
@@ -267,7 +267,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimesion)."
+"So this particular file contains a NumPy array of size 1 x 1000. This represents a (simulated) recording of raw electric activity from neuron(s) (1st dimension) over 1000 time bins (2nd dimension)."
 ]
 },
 {
@@ -345,7 +345,7 @@
 "source": [
 "In DataJoint, the tier of the table indicates **the nature of the data and the data source for the table**. So far we have encountered two table tiers: `Manual` and `Imported`, and we will encounter the two other major tiers in this session. \n",
 "\n",
-"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beggining of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
+"DataJoint tables in `Manual` tier, or simply **Manual tables** indicate that its contents are **manually** entered by either experimenters or a recording system, and its content **do not depend on external data files or other tables**. This is the most basic table type you will encounter, especially as the tables at the beginning of the pipeline. In the Diagram, `Manual` tables are depicted by green rectangles.\n",
 "\n",
 "On the other hand, **Imported tables** are understood to pull data (or *import* data) from external data files, and come equipped with functionalities to perform this importing process automatically, as we will see shortly! In the Diagram, `Imported` tables are depicted by blue ellipses."
 ]
2 changes: 1 addition & 1 deletion tutorials/05-Electrophysiology Computed Tables.ipynb
@@ -1030,7 +1030,7 @@
 "cell_type": "markdown",
 "metadata": {},
 "source": [
-"We can simply delete the unwanted paramter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
+"We can simply delete the unwanted parameter from the `SpikeDetectionParam` table, and let DataJoint cascade the deletion:"
 ]
 },
 {