Dataset columns: partition (string, 3 distinct values) | func_name (string, 1-134 chars) | docstring (string, 1-46.9k chars) | path (string, 4-223 chars) | original_string (string, 75-104k chars) | code (string, 75-104k chars) | docstring_tokens (list, 1-1.97k items) | repo (string, 7-55 chars) | language (string, 1 distinct value) | url (string, 87-315 chars) | code_tokens (list, 19-28.4k items) | sha (string, 40 chars)
test
|
Dataset.get_studies
|
Get IDs or data for studies that meet specific criteria.
If multiple criteria are passed, the set intersection is returned. For
example, passing expression='emotion' and mask='my_mask.nii.gz' would
return only those studies that are associated with emotion AND report
activation within the voxels indicated in the passed image.
Args:
features (list or str): The name of a feature, or a list of
features, to use for selecting studies.
expression (str): A string expression to pass to the PEG for study
retrieval.
mask: the mask image (see Masker documentation for valid data
types).
peaks (ndarray or list): Either an n x 3 numpy array, or a list of
lists or tuples (e.g., [(-10, 22, 14)]) specifying the world
(x/y/z) coordinates of the target location(s).
frequency_threshold (float): For feature-based or expression-based
selection, the threshold for selecting studies--i.e., the
cut-off for a study to be included. Must be a float in range
[0, 1].
activation_threshold (int or float): For mask-based selection,
threshold for a study to be included based on amount of
activation displayed. If an integer, represents the absolute
number of voxels that must be active within the mask in order
for a study to be selected. If a float, it represents the
proportion of voxels that must be active.
func (Callable): The function to use when aggregating over the list
of features. See documentation in FeatureTable.get_ids() for a
full explanation. Only used for feature- or expression-based
selection.
return_type (str): A string specifying what data to return. Valid
options are:
'ids': returns a list of IDs of selected studies.
'images': returns a voxel x study matrix of data for all
selected studies.
'weights': returns a dict where the keys are study IDs and the
values are the computed weights. Only valid when performing
feature-based selection.
r (int): For peak-based selection, the distance cut-off (in mm)
for inclusion (i.e., only studies with one or more activations
within r mm of one of the passed foci will be returned).
Returns:
    When return_type is 'ids' (default), returns a list of IDs of the
    selected studies. When return_type is 'images', returns a 2D numpy
    array, with voxels in rows and studies in columns. When return_type
    is 'weights' (valid only for feature-based selection), returns
    a dict, where the keys are study IDs, and the values are the
    computed weights.
Examples
--------
Select all studies tagged with the feature 'emotion':
>>> ids = dataset.get_studies(features='emotion')
Select all studies that activate at least 20% of voxels in an amygdala
mask, and retrieve activation data rather than IDs:
>>> data = dataset.get_studies(mask='amygdala_mask.nii.gz',
                               activation_threshold=0.2,
                               return_type='images')
Select studies that report at least one activation within 12 mm of at
least one of three specific foci:
>>> ids = dataset.get_studies(peaks=[[12, -20, 30], [-26, 22, 22],
[0, 36, -20]], r=12)
|
neurosynth/base/dataset.py
|
def get_studies(self, features=None, expression=None, mask=None,
                peaks=None, frequency_threshold=0.001,
                activation_threshold=0.0, func=np.sum, return_type='ids',
                r=6):
""" Get IDs or data for studies that meet specific criteria.
If multiple criteria are passed, the set intersection is returned. For
example, passing expression='emotion' and mask='my_mask.nii.gz' would
return only those studies that are associated with emotion AND report
activation within the voxels indicated in the passed image.
    Args:
features (list or str): The name of a feature, or a list of
features, to use for selecting studies.
expression (str): A string expression to pass to the PEG for study
retrieval.
mask: the mask image (see Masker documentation for valid data
types).
peaks (ndarray or list): Either an n x 3 numpy array, or a list of
lists or tuples (e.g., [(-10, 22, 14)]) specifying the world
(x/y/z) coordinates of the target location(s).
frequency_threshold (float): For feature-based or expression-based
selection, the threshold for selecting studies--i.e., the
cut-off for a study to be included. Must be a float in range
[0, 1].
activation_threshold (int or float): For mask-based selection,
threshold for a study to be included based on amount of
activation displayed. If an integer, represents the absolute
number of voxels that must be active within the mask in order
for a study to be selected. If a float, it represents the
proportion of voxels that must be active.
func (Callable): The function to use when aggregating over the list
of features. See documentation in FeatureTable.get_ids() for a
full explanation. Only used for feature- or expression-based
selection.
return_type (str): A string specifying what data to return. Valid
options are:
'ids': returns a list of IDs of selected studies.
'images': returns a voxel x study matrix of data for all
selected studies.
'weights': returns a dict where the keys are study IDs and the
values are the computed weights. Only valid when performing
feature-based selection.
r (int): For peak-based selection, the distance cut-off (in mm)
for inclusion (i.e., only studies with one or more activations
within r mm of one of the passed foci will be returned).
    Returns:
        When return_type is 'ids' (default), returns a list of IDs of the
        selected studies. When return_type is 'images', returns a 2D numpy
        array, with voxels in rows and studies in columns. When return_type
        is 'weights' (valid only for feature-based selection), returns
        a dict, where the keys are study IDs, and the values are the
        computed weights.
Examples
--------
Select all studies tagged with the feature 'emotion':
>>> ids = dataset.get_studies(features='emotion')
Select all studies that activate at least 20% of voxels in an amygdala
mask, and retrieve activation data rather than IDs:
    >>> data = dataset.get_studies(mask='amygdala_mask.nii.gz',
                                   activation_threshold=0.2,
                                   return_type='images')
Select studies that report at least one activation within 12 mm of at
least one of three specific foci:
>>> ids = dataset.get_studies(peaks=[[12, -20, 30], [-26, 22, 22],
[0, 36, -20]], r=12)
"""
results = []
# Feature-based selection
if features is not None:
# Need to handle weights as a special case, because we can't
# retrieve the weights later using just the IDs.
if return_type == 'weights':
if expression is not None or mask is not None or \
peaks is not None:
raise ValueError(
"return_type cannot be 'weights' when feature-based "
"search is used in conjunction with other search "
"modes.")
return self.feature_table.get_ids(
features, frequency_threshold, func, get_weights=True)
else:
results.append(self.feature_table.get_ids(
features, frequency_threshold, func))
# Logical expression-based selection
if expression is not None:
_ids = self.feature_table.get_ids_by_expression(
expression, frequency_threshold, func)
results.append(list(_ids))
# Mask-based selection
if mask is not None:
mask = self.masker.mask(mask, in_global_mask=True).astype(bool)
num_vox = np.sum(mask)
prop_mask_active = self.image_table.data.T.dot(mask).astype(float)
if isinstance(activation_threshold, float):
prop_mask_active /= num_vox
indices = np.where(prop_mask_active > activation_threshold)[0]
results.append([self.image_table.ids[ind] for ind in indices])
# Peak-based selection
if peaks is not None:
r = float(r)
found = set()
for p in peaks:
xyz = np.array(p, dtype=float)
x = self.activations['x']
y = self.activations['y']
z = self.activations['z']
dists = np.sqrt(np.square(x - xyz[0]) + np.square(y - xyz[1]) +
np.square(z - xyz[2]))
found |= set(self.activations[dists <= r]['id'].unique())
results.append(found)
# Get intersection of all sets
ids = list(reduce(lambda x, y: set(x) & set(y), results))
    if return_type == 'ids':
        return ids
    elif return_type in ('images', 'data'):
        # 'images' is the documented option; 'data' is kept for
        # backwards compatibility with older callers.
        return self.get_image_data(ids)
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L241-L374
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
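A short usage sketch for Dataset.get_studies (the feature name, mask path, and pickle path are hypothetical; assumes a previously saved Dataset):
>>> from neurosynth.base.dataset import Dataset
>>> dataset = Dataset.load('dataset.pkl')
>>> # Intersection of two criteria: tagged with 'emotion' AND activating the mask
>>> ids = dataset.get_studies(features='emotion', mask='my_mask.nii.gz')
>>> # Weights instead of IDs (feature-based selection only)
>>> weights = dataset.get_studies(features='emotion', return_type='weights')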
test
|
Dataset.add_features
|
Add new features to the Dataset, constructing a new FeatureTable if needed.
Args:
features: Feature data to add. Can be:
(a) A text file containing the feature data, where each row is
a study in the database, with features in columns. The first
column must contain the IDs of the studies to match up with the
image data.
(b) A pandas DataFrame, where studies are in rows, features are
in columns, and the index provides the study IDs.
append (bool): If True, adds new features to existing ones
incrementally. If False, replaces old features.
merge, duplicates, min_studies, threshold: Additional arguments
passed to FeatureTable.add_features().
|
neurosynth/base/dataset.py
|
def add_features(self, features, append=True, merge='outer',
duplicates='ignore', min_studies=0.0, threshold=0.001):
""" Construct a new FeatureTable from file.
Args:
features: Feature data to add. Can be:
(a) A text file containing the feature data, where each row is
a study in the database, with features in columns. The first
column must contain the IDs of the studies to match up with the
image data.
(b) A pandas DataFrame, where studies are in rows, features are
in columns, and the index provides the study IDs.
append (bool): If True, adds new features to existing ones
incrementally. If False, replaces old features.
merge, duplicates, min_studies, threshold: Additional arguments
passed to FeatureTable.add_features().
"""
if (not append) or not hasattr(self, 'feature_table'):
self.feature_table = FeatureTable(self)
self.feature_table.add_features(features, merge=merge,
duplicates=duplicates,
min_studies=min_studies,
threshold=threshold)
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L376-L399
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
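A minimal sketch of Dataset.add_features (the filename and DataFrame are hypothetical; per the docstring, the file has studies in rows, features in columns, and study IDs in the first column):
>>> dataset.add_features('features.txt', append=True, duplicates='replace')
>>> dataset.add_features(features_df, append=False)  # features_df: a pandas DataFrame; replaces all existing features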
test
|
Dataset.get_image_data
|
A convenience wrapper for ImageTable.get_image_data().
Args:
ids (list, array): A list or 1D numpy array of study ids to
return. If None, returns data for all studies.
voxels (list, array): A list or 1D numpy array of voxel indices
    (i.e., rows) to return. If None, returns data for all voxels.
dense (bool): Optional boolean. When True (default), convert the
    result to a dense array before returning. When False, keep as
    sparse matrix.
|
neurosynth/base/dataset.py
|
def get_image_data(self, ids=None, voxels=None, dense=True):
""" A convenience wrapper for ImageTable.get_image_data().
Args:
ids (list, array): A list or 1D numpy array of study ids to
return. If None, returns data for all studies.
        voxels (list, array): A list or 1D numpy array of voxel indices
            (i.e., rows) to return. If None, returns data for all voxels.
        dense (bool): Optional boolean. When True (default), convert the
            result to a dense array before returning. When False, keep as
            sparse matrix.
"""
return self.image_table.get_image_data(ids, voxels=voxels, dense=dense)
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L401-L410
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
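A sketch of the wrapper in use (the feature name is hypothetical; reuses the dataset instance from the earlier sketch):
>>> ids = dataset.get_studies(features='emotion')
>>> img_data = dataset.get_image_data(ids=ids)  # dense voxel x study array
>>> img_sparse = dataset.get_image_data(ids=ids, dense=False)  # sparse matrix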
test
|
Dataset.get_feature_names
|
Returns the names of features. If features is None, returns the names of
all features. Otherwise, returns the passed features in the order in
which they appear in the FeatureTable.
|
neurosynth/base/dataset.py
|
def get_feature_names(self, features=None):
""" Returns names of features. If features is None, returns all
features. Otherwise assumes the user is trying to find the order of the
features. """
if features:
return self.feature_table.get_ordered_names(features)
else:
return self.feature_table.feature_names
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L416-L423
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
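A sketch (feature names are hypothetical):
>>> all_names = dataset.get_feature_names()
>>> ordered = dataset.get_feature_names(['pain', 'emotion'])  # returned in table order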
test
|
Dataset.get_feature_counts
|
Returns a dictionary, where the keys are the feature names
and the values are the number of studies tagged with the feature.
|
neurosynth/base/dataset.py
|
def get_feature_counts(self, threshold=0.001):
""" Returns a dictionary, where the keys are the feature names
and the values are the number of studies tagged with the feature. """
counts = np.sum(self.get_feature_data() >= threshold, 0)
return dict(zip(self.get_feature_names(), list(counts)))
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L425-L429
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
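A sketch (the feature name is hypothetical; the result is a plain dict):
>>> counts = dataset.get_feature_counts(threshold=0.001)
>>> counts.get('emotion')  # number of studies tagged with 'emotion', if present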
test
|
Dataset.load
|
Load a pickled Dataset instance from file.
|
neurosynth/base/dataset.py
|
def load(cls, filename):
""" Load a pickled Dataset instance from file. """
    try:
        with open(filename, 'rb') as f:
            dataset = pickle.load(f)
    except UnicodeDecodeError:
        # Python 3 cannot unpickle Python 2 pickles without an encoding
        with open(filename, 'rb') as f:
            dataset = pickle.load(f, encoding='latin')
if hasattr(dataset, 'feature_table'):
dataset.feature_table._csr_to_sdf()
return dataset
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L432-L442
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
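A sketch (the path is hypothetical; load() takes cls, so it is called on the class itself, and Python 2 pickles are handled by the encoding fallback shown above):
>>> from neurosynth.base.dataset import Dataset
>>> dataset = Dataset.load('dataset.pkl')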
test
|
Dataset.save
|
Pickle the Dataset instance to the provided file.
|
neurosynth/base/dataset.py
|
def save(self, filename):
""" Pickle the Dataset instance to the provided file.
"""
if hasattr(self, 'feature_table'):
self.feature_table._sdf_to_csr()
    with open(filename, 'wb') as f:
        pickle.dump(self, f, -1)
if hasattr(self, 'feature_table'):
self.feature_table._csr_to_sdf()
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L444-L453
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
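A sketch of a save/load round trip (the path is hypothetical; note the FeatureTable is converted to CSR form before pickling and back afterwards):
>>> dataset.save('dataset.pkl')
>>> restored = Dataset.load('dataset.pkl')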
test
|
ImageTable.get_image_data
|
Slices and returns a subset of image data.
Args:
ids (list, array): A list or 1D numpy array of study ids to
return. If None, returns data for all studies.
voxels (list, array): A list or 1D numpy array of voxel indices
(i.e., rows) to return. If None, returns data for all voxels.
dense (bool): Optional boolean. When True (default), convert the
result to a dense array before returning. When False, keep as
sparse matrix.
Returns:
A 2D numpy array with voxels in rows and studies in columns.
|
neurosynth/base/dataset.py
|
def get_image_data(self, ids=None, voxels=None, dense=True):
""" Slices and returns a subset of image data.
Args:
ids (list, array): A list or 1D numpy array of study ids to
return. If None, returns data for all studies.
voxels (list, array): A list or 1D numpy array of voxel indices
(i.e., rows) to return. If None, returns data for all voxels.
dense (bool): Optional boolean. When True (default), convert the
result to a dense array before returning. When False, keep as
sparse matrix.
Returns:
A 2D numpy array with voxels in rows and studies in columns.
"""
if dense and ids is None and voxels is None:
logger.warning(
"Warning: get_image_data() is being called without specifying "
"a subset of studies or voxels to retrieve. This may result in"
" a very large amount of data (several GB) being read into "
"memory. If you experience any problems, consider returning a "
"sparse matrix by passing dense=False, or pass in a list of "
"ids of voxels to retrieve only a portion of the data.")
result = self.data
if ids is not None:
idxs = np.where(np.in1d(np.array(self.ids), np.array(ids)))[0]
result = result[:, idxs]
if voxels is not None:
result = result[voxels, :]
return result.toarray() if dense else result
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L507-L537
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
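A sketch of calling the ImageTable directly (voxel indices are hypothetical; the table is normally reached through dataset.image_table):
>>> table = dataset.image_table
>>> sub = table.get_image_data(voxels=[0, 10, 100], dense=False)  # sparse, three voxel rows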
test
|
ImageTable.trim
|
Trim ImageTable to keep only the passed studies. This is a
convenience method, and should generally be avoided in favor of
non-destructive alternatives that don't require slicing (e.g.,
matrix multiplication).
|
neurosynth/base/dataset.py
|
def trim(self, ids):
""" Trim ImageTable to keep only the passed studies. This is a
convenience method, and should generally be avoided in favor of
non-destructive alternatives that don't require slicing (e.g.,
matrix multiplication). """
self.data = self.get_image_data(ids, dense=False) # .tocoo()
idxs = np.where(np.in1d(np.array(self.ids), np.array(ids)))[0]
self.ids = [self.ids[i] for i in idxs]
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L539-L546
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
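A sketch (the feature name is hypothetical; trim() is destructive, so per the docstring prefer non-destructive alternatives where possible):
>>> keep = dataset.get_studies(features='emotion')
>>> dataset.image_table.trim(keep)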
test
|
FeatureTable.add_features
|
Add new features to FeatureTable.
Args:
features (str, DataFrame): A filename to load data from, or a
pandas DataFrame. In either case, studies are in rows and
features are in columns. Values in cells reflect the weight of
the intersecting feature for the intersecting study. Feature
names and study IDs should be included as the first column
and first row, respectively.
merge (str): The merge strategy to use when merging new features
with old. This is passed to pandas.merge, so can be 'left',
'right', 'outer', or 'inner'. Defaults to outer (i.e., all data
in both new and old will be kept, and missing values will be
        assigned zeros).
duplicates (str): string indicating how to handle features whose
name matches an existing feature. Valid options:
'ignore' (default): ignores the new feature, keeps old data
'replace': replace the old feature's data with the new data
'merge': keeps both features, renaming them so they're
different
min_studies (int): minimum number of studies that pass threshold in
order to add feature.
threshold (float): minimum frequency threshold each study must
exceed in order to count towards min_studies.
|
neurosynth/base/dataset.py
|
def add_features(self, features, merge='outer', duplicates='ignore',
min_studies=0, threshold=0.0001):
""" Add new features to FeatureTable.
Args:
features (str, DataFrame): A filename to load data from, or a
pandas DataFrame. In either case, studies are in rows and
features are in columns. Values in cells reflect the weight of
the intersecting feature for the intersecting study. Feature
names and study IDs should be included as the first column
and first row, respectively.
merge (str): The merge strategy to use when merging new features
with old. This is passed to pandas.merge, so can be 'left',
'right', 'outer', or 'inner'. Defaults to outer (i.e., all data
in both new and old will be kept, and missing values will be
            assigned zeros).
duplicates (str): string indicating how to handle features whose
name matches an existing feature. Valid options:
'ignore' (default): ignores the new feature, keeps old data
'replace': replace the old feature's data with the new data
'merge': keeps both features, renaming them so they're
different
min_studies (int): minimum number of studies that pass threshold in
order to add feature.
threshold (float): minimum frequency threshold each study must
exceed in order to count towards min_studies.
"""
if isinstance(features, string_types):
if not os.path.exists(features):
raise ValueError("%s cannot be found." % features)
        try:
            features = pd.read_csv(features, sep='\t', index_col=0)
        except Exception as e:
            logger.error("%s cannot be parsed: %s" % (features, e))
            raise
if min_studies:
valid = np.where(
(features.values >= threshold).sum(0) >= min_studies)[0]
features = features.iloc[:, valid]
# Warn user if no/few IDs match between the FeatureTable and the
# Dataset. This most commonly happens because older database.txt files
# used doi's as IDs whereas we now use PMIDs throughout.
n_studies = len(features)
n_common_ids = len(
set(features.index) & set(self.dataset.image_table.ids))
if float(n_common_ids) / n_studies < 0.01: # Minimum 1% overlap
msg = "Only %d" % n_common_ids if n_common_ids else "None of the"
logger.warning(
msg + " studies in the feature file matched studies currently "
"the Dataset. The most likely cause for this is that you're "
"pairing a newer feature set with an older, incompatible "
"database file. You may want to try regenerating the Dataset "
"instance from a newer database file that uses PMIDs rather "
"than doi's as the study identifiers in the first column.")
old_data = self.data.to_dense()
# Handle features with duplicate names
common_features = list(set(old_data.columns) & set(features.columns))
if duplicates == 'ignore':
features = features.drop(common_features, axis=1)
elif duplicates == 'replace':
old_data = old_data.drop(common_features, axis=1)
data = old_data.merge(
features, how=merge, left_index=True, right_index=True)
self.data = data.fillna(0.0).to_sparse()
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L569-L634
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
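A sketch of merging a second feature file into an existing table (the filename and min_studies value are hypothetical; name collisions keep the old data under the default duplicates='ignore'):
>>> ft = dataset.feature_table
>>> ft.add_features('new_features.txt', merge='outer', duplicates='ignore', min_studies=5)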
test
|
FeatureTable.get_feature_data
|
Slices and returns a subset of feature data.
Args:
ids (list, array): A list or 1D numpy array of study ids to
return rows for. If None, returns data for all studies
(i.e., all rows in array).
features (list, array): A list or 1D numpy array of named features
to return. If None, returns data for all features (i.e., all
columns in array).
dense (bool): Optional boolean. When True (default), convert the
result to a dense array before returning. When False, keep as
sparse matrix. Note that if ids is not None, the returned array
will always be dense.
Returns:
    A pandas DataFrame with study IDs in rows and features in columns.
|
neurosynth/base/dataset.py
|
def get_feature_data(self, ids=None, features=None, dense=True):
""" Slices and returns a subset of feature data.
Args:
ids (list, array): A list or 1D numpy array of study ids to
return rows for. If None, returns data for all studies
(i.e., all rows in array).
features (list, array): A list or 1D numpy array of named features
to return. If None, returns data for all features (i.e., all
columns in array).
dense (bool): Optional boolean. When True (default), convert the
result to a dense array before returning. When False, keep as
sparse matrix. Note that if ids is not None, the returned array
will always be dense.
Returns:
        A pandas DataFrame with study IDs in rows and features in columns.
"""
result = self.data
if ids is not None:
result = result.ix[ids]
if features is not None:
result = result.ix[:, features]
return result.to_dense() if dense else result
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L640-L665
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
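A sketch (the study IDs and feature name are hypothetical):
>>> ft = dataset.feature_table
>>> df = ft.get_feature_data(ids=[9065511, 17029760], features=['emotion'])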
test
|
FeatureTable.get_ordered_names
|
Given a list of features, returns the features in the order in which
they appear in the database.
Args:
features (list): A list or 1D numpy array of named features to
return.
Returns:
    A list of the features in the order in which they appear in the
    database.
|
neurosynth/base/dataset.py
|
def get_ordered_names(self, features):
""" Given a list of features, returns features in order that they
appear in database.
Args:
features (list): A list or 1D numpy array of named features to
return.
Returns:
        A list of the features in the order in which they appear in the
        database.
"""
idxs = np.where(
np.in1d(self.data.columns.values, np.array(features)))[0]
return list(self.data.columns[idxs].values)
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L667-L681
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
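A sketch (feature names are hypothetical; ft is the FeatureTable from the sketches above, and the result follows column order, not input order):
>>> ft.get_ordered_names(['reward', 'pain', 'emotion'])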
test
|
FeatureTable.get_ids
|
Returns a list of all studies in the table that meet the desired
feature-based criteria.
Will most commonly be used to retrieve studies that use one or more
features with some minimum frequency; e.g.,:
get_ids(['fear', 'anxiety'], threshold=0.001)
Args:
    features (list or str): a feature name, or a list of feature names,
        to search on.
threshold (float): optional float indicating threshold features
must pass to be included.
func (Callable): any numpy function to use for thresholding
(default: sum). The function will be applied to the list of
features and the result compared to the threshold. This can be
        used to change the meaning of the query in powerful ways. E.g.,:
max: any of the features have to pass threshold
(i.e., max > thresh)
min: all features must each individually pass threshold
(i.e., min > thresh)
sum: the summed weight of all features must pass threshold
(i.e., sum > thresh)
    get_weights (bool): if True, returns study weights rather than just
        IDs.
Returns:
    When get_weights is False (default), returns a list of study IDs.
    When True, returns a pandas Series with study IDs in the index and
    feature weights as values.
|
neurosynth/base/dataset.py
|
def get_ids(self, features, threshold=0.0, func=np.sum, get_weights=False):
""" Returns a list of all studies in the table that meet the desired
feature-based criteria.
Will most commonly be used to retrieve studies that use one or more
features with some minimum frequency; e.g.,:
get_ids(['fear', 'anxiety'], threshold=0.001)
Args:
        features (list or str): a feature name, or a list of feature names,
            to search on.
threshold (float): optional float indicating threshold features
must pass to be included.
func (Callable): any numpy function to use for thresholding
(default: sum). The function will be applied to the list of
features and the result compared to the threshold. This can be
            used to change the meaning of the query in powerful ways. E.g.,:
max: any of the features have to pass threshold
(i.e., max > thresh)
min: all features must each individually pass threshold
(i.e., min > thresh)
sum: the summed weight of all features must pass threshold
(i.e., sum > thresh)
        get_weights (bool): if True, returns study weights rather than
            just IDs.
    Returns:
        When get_weights is False (default), returns a list of study IDs.
        When True, returns a pandas Series with study IDs in the index
        and feature weights as values.
"""
if isinstance(features, str):
features = [features]
features = self.search_features(features) # Expand wild cards
feature_weights = self.data.ix[:, features]
weights = feature_weights.apply(func, 1)
above_thresh = weights[weights >= threshold]
    return above_thresh if get_weights else list(above_thresh.index)
|
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L683-L720
|
948ce7edce15d7df693446e76834e0c23bfe8f11
|
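A sketch of how func changes the query semantics (feature names are hypothetical; ft is the FeatureTable from the sketches above). With np.sum (the default) the summed weight must pass the threshold; np.max requires any one feature to pass; np.min requires every feature to pass:
>>> import numpy as np
>>> any_ids = ft.get_ids(['fear', 'anxiety'], threshold=0.001, func=np.max)
>>> all_ids = ft.get_ids(['fear', 'anxiety'], threshold=0.001, func=np.min)
>>> weights = ft.get_ids(['fear', 'anxiety'], threshold=0.001, get_weights=True)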
test
|
FeatureTable.search_features
|
Returns all features that match any of the elements in the input
list.
Args:
search (str, list): A string or list of strings defining the query.
Returns:
A list of matching feature names.
|
neurosynth/base/dataset.py
|
def search_features(self, search):
''' Returns all features that match any of the elements in the input
list.
Args:
search (str, list): A string or list of strings defining the query.
Returns:
A list of matching feature names.
'''
if isinstance(search, string_types):
search = [search]
search = [s.replace('*', '.*') for s in search]
cols = list(self.data.columns)
results = []
for s in search:
results.extend([f for f in cols if re.match(s + '$', f)])
return list(set(results))
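    # Usage sketch (editor's addition): '*' wildcards are rewritten to the
    # regex '.*', so 'emo*' matches any feature starting with 'emo'. The
    # feature names below are illustrative.
    #
    #   matches = feature_table.search_features(['emo*', 'reward'])
    #   # e.g. ['emotion', 'emotional', 'reward']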
|
[
"Returns",
"all",
"features",
"that",
"match",
"any",
"of",
"the",
"elements",
"in",
"the",
"input",
"list",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L722-L739
|
[
"def",
"search_features",
"(",
"self",
",",
"search",
")",
":",
"if",
"isinstance",
"(",
"search",
",",
"string_types",
")",
":",
"search",
"=",
"[",
"search",
"]",
"search",
"=",
"[",
"s",
".",
"replace",
"(",
"'*'",
",",
"'.*'",
")",
"for",
"s",
"in",
"search",
"]",
"cols",
"=",
"list",
"(",
"self",
".",
"data",
".",
"columns",
")",
"results",
"=",
"[",
"]",
"for",
"s",
"in",
"search",
":",
"results",
".",
"extend",
"(",
"[",
"f",
"for",
"f",
"in",
"cols",
"if",
"re",
".",
"match",
"(",
"s",
"+",
"'$'",
",",
"f",
")",
"]",
")",
"return",
"list",
"(",
"set",
"(",
"results",
")",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
FeatureTable.get_ids_by_expression
|
Use a PEG to parse expression and return study IDs.
|
neurosynth/base/dataset.py
|
def get_ids_by_expression(self, expression, threshold=0.001, func=np.sum):
""" Use a PEG to parse expression and return study IDs."""
lexer = lp.Lexer()
lexer.build()
parser = lp.Parser(
lexer, self.dataset, threshold=threshold, func=func)
parser.build()
return parser.parse(expression).keys().values
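    # Usage sketch (editor's addition): the expression grammar is defined by
    # the lexer/parser module (lp); the operators below are assumptions
    # based on typical Neurosynth feature expressions.
    #
    #   ids = feature_table.get_ids_by_expression('emotion &~ pain',
    #                                             threshold=0.001)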
|
[
"Use",
"a",
"PEG",
"to",
"parse",
"expression",
"and",
"return",
"study",
"IDs",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L741-L748
|
[
"def",
"get_ids_by_expression",
"(",
"self",
",",
"expression",
",",
"threshold",
"=",
"0.001",
",",
"func",
"=",
"np",
".",
"sum",
")",
":",
"lexer",
"=",
"lp",
".",
"Lexer",
"(",
")",
"lexer",
".",
"build",
"(",
")",
"parser",
"=",
"lp",
".",
"Parser",
"(",
"lexer",
",",
"self",
".",
"dataset",
",",
"threshold",
"=",
"threshold",
",",
"func",
"=",
"func",
")",
"parser",
".",
"build",
"(",
")",
"return",
"parser",
".",
"parse",
"(",
"expression",
")",
".",
"keys",
"(",
")",
".",
"values"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
FeatureTable.get_features_by_ids
|
Returns features for which the mean loading across all specified
studies (in ids) is >= threshold.
|
neurosynth/base/dataset.py
|
def get_features_by_ids(self, ids=None, threshold=0.0001, func=np.mean,
get_weights=False):
''' Returns features for which the mean loading across all specified
studies (in ids) is >= threshold. '''
    # .loc replaces the legacy .ix indexer (removed in pandas >= 1.0)
    weights = self.data.loc[ids].apply(func, axis=0)
    above_thresh = weights[weights >= threshold]
    return above_thresh if get_weights else list(above_thresh.index)
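    # Usage sketch (editor's addition): given study `ids` from a previous
    # selection, return features with mean loading >= 0.01.
    #
    #   feats = feature_table.get_features_by_ids(ids, threshold=0.01)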
|
[
"Returns",
"features",
"for",
"which",
"the",
"mean",
"loading",
"across",
"all",
"specified",
"studies",
"(",
"in",
"ids",
")",
"is",
">",
"=",
"threshold",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L750-L756
|
[
"def",
"get_features_by_ids",
"(",
"self",
",",
"ids",
"=",
"None",
",",
"threshold",
"=",
"0.0001",
",",
"func",
"=",
"np",
".",
"mean",
",",
"get_weights",
"=",
"False",
")",
":",
"weights",
"=",
"self",
".",
"data",
".",
"ix",
"[",
"ids",
"]",
".",
"apply",
"(",
"func",
",",
"0",
")",
"above_thresh",
"=",
"weights",
"[",
"weights",
">=",
"threshold",
"]",
"return",
"above_thresh",
"if",
"get_weights",
"else",
"list",
"(",
"above_thresh",
".",
"index",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
FeatureTable._sdf_to_csr
|
Convert FeatureTable to SciPy CSR matrix.
|
neurosynth/base/dataset.py
|
def _sdf_to_csr(self):
""" Convert FeatureTable to SciPy CSR matrix. """
    # Note: to_dense()/to_sparse() are legacy SparseDataFrame APIs from the
    # pandas versions current at this commit (removed in pandas >= 1.0)
    data = self.data.to_dense()
self.data = {
'columns': list(data.columns),
'index': list(data.index),
'values': sparse.csr_matrix(data.values)
}
|
[
"Convert",
"FeatureTable",
"to",
"SciPy",
"CSR",
"matrix",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L758-L765
|
[
"def",
"_sdf_to_csr",
"(",
"self",
")",
":",
"data",
"=",
"self",
".",
"data",
".",
"to_dense",
"(",
")",
"self",
".",
"data",
"=",
"{",
"'columns'",
":",
"list",
"(",
"data",
".",
"columns",
")",
",",
"'index'",
":",
"list",
"(",
"data",
".",
"index",
")",
",",
"'values'",
":",
"sparse",
".",
"csr_matrix",
"(",
"data",
".",
"values",
")",
"}"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
FeatureTable._csr_to_sdf
|
Inverse of _sdf_to_csr().
|
neurosynth/base/dataset.py
|
def _csr_to_sdf(self):
""" Inverse of _sdf_to_csr(). """
self.data = pd.DataFrame(self.data['values'].todense(),
index=self.data['index'],
columns=self.data['columns']).to_sparse()
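    # Round-trip sketch (editor's addition): _sdf_to_csr() and _csr_to_sdf()
    # are inverses, used internally to shrink serialized feature tables.
    #
    #   ft._sdf_to_csr()   # data -> {'columns', 'index', 'values': CSR}
    #   ft._csr_to_sdf()   # back to a sparse DataFrame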
|
[
"Inverse",
"of",
"_sdf_to_csr",
"()",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/dataset.py#L767-L771
|
[
"def",
"_csr_to_sdf",
"(",
"self",
")",
":",
"self",
".",
"data",
"=",
"pd",
".",
"DataFrame",
"(",
"self",
".",
"data",
"[",
"'values'",
"]",
".",
"todense",
"(",
")",
",",
"index",
"=",
"self",
".",
"data",
"[",
"'index'",
"]",
",",
"columns",
"=",
"self",
".",
"data",
"[",
"'columns'",
"]",
")",
".",
"to_sparse",
"(",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
deprecated
|
Deprecation warning decorator. Takes optional deprecation message,
otherwise will use a generic warning.
|
neurosynth/utils.py
|
def deprecated(*args):
""" Deprecation warning decorator. Takes optional deprecation message,
otherwise will use a generic warning. """
    def wrap(func):
        # Without functools.wraps, the wrapper hides the decorated
        # function's __name__ and docstring
        def wrapped_func(*args, **kwargs):
            warnings.warn(msg, category=DeprecationWarning)
            return func(*args, **kwargs)
        return wrapped_func
if len(args) == 1 and callable(args[0]):
msg = "Function '%s' will be deprecated in future versions of " \
"Neurosynth." % args[0].__name__
return wrap(args[0])
else:
msg = args[0]
return wrap
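# Usage sketch (editor's addition): the decorator works bare or with a
# custom message.
#
#   @deprecated
#   def old_func(): ...
#
#   @deprecated("Use new_func() instead.")
#   def really_old_func(): ...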
|
[
"Deprecation",
"warning",
"decorator",
".",
"Takes",
"optional",
"deprecation",
"message",
"otherwise",
"will",
"use",
"a",
"generic",
"warning",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/utils.py#L3-L18
|
[
"def",
"deprecated",
"(",
"*",
"args",
")",
":",
"def",
"wrap",
"(",
"func",
")",
":",
"def",
"wrapped_func",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
":",
"warnings",
".",
"warn",
"(",
"msg",
",",
"category",
"=",
"DeprecationWarning",
")",
"return",
"func",
"(",
"*",
"args",
",",
"*",
"*",
"kwargs",
")",
"return",
"wrapped_func",
"if",
"len",
"(",
"args",
")",
"==",
"1",
"and",
"callable",
"(",
"args",
"[",
"0",
"]",
")",
":",
"msg",
"=",
"\"Function '%s' will be deprecated in future versions of \"",
"\"Neurosynth.\"",
"%",
"args",
"[",
"0",
"]",
".",
"__name__",
"return",
"wrap",
"(",
"args",
"[",
"0",
"]",
")",
"else",
":",
"msg",
"=",
"args",
"[",
"0",
"]",
"return",
"wrap"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
magic
|
Execute a full clustering analysis pipeline.
Args:
dataset: a Dataset instance to extract all data from.
method (str): the overall clustering approach to use. Valid options:
    'coactivation' (default): Clusters voxels within the ROI mask based
        on their shared patterns of coactivation with the rest of the brain.
'studies': Treat each study as a feature in an n-dimensional space.
I.e., voxels will be assigned to the same cluster if they tend
to be co-reported in similar studies.
'features': Voxels will be assigned to the same cluster if they
tend to have similar feature vectors (i.e., the studies that
activate those voxels tend to use similar terms).
roi_mask: A string, nibabel image, or numpy array providing an
inclusion mask of voxels to cluster. If None, the default mask
in the Dataset instance is used (typically, all in-brain voxels).
coactivation_mask: If method='coactivation', this mask defines the
voxels to use when generating the pairwise distance matrix. For
example, if a PFC mask is passed, all voxels in the roi_mask will
be clustered based on how similar their patterns of coactivation
with PFC voxels are. Can be a str, nibabel image, or numpy array.
features (str or list): Optional string or list of strings specifying
any feature names to use for study selection. E.g., passing
['emotion', 'reward'] would retain for analysis only those studies
associated with the features emotion or reward at a frequency
greater than feature_threshold.
feature_threshold (float): The threshold to use when selecting studies
on the basis of features.
min_voxels_per_study (int): Minimum number of active voxels a study
must report in order to be retained in the dataset. By default,
all studies are used.
min_studies_per_voxel (int): Minimum number of studies a voxel must be
    active in, in order to be retained in the analysis. By default, all
    voxels are used.
reduce_reference (str, scikit-learn object or None): The dimensionality
reduction algorithm to apply to the feature space prior to the
computation of pairwise distances. If a string is passed (either
'pca' or 'ica'), n_components must be specified. If None, no
dimensionality reduction will be applied. Otherwise, must be a
scikit-learn-style object that exposes a transform() method.
n_components (int): Number of components to extract during the
dimensionality reduction step. Only used if reduce_reference is
a string.
distance_metric (str): The distance metric to use when computing
pairwise distances on the to-be-clustered voxels. Can be any of the
metrics supported by sklearn.metrics.pairwise_distances.
clustering_algorithm (str or scikit-learn object): the clustering
algorithm to use. If a string, must be one of 'kmeans' or 'minik'.
Otherwise, any sklearn class that exposes a fit_predict() method.
n_clusters (int): If clustering_algorithm is a string, the number of
clusters to extract.
clustering_kwargs (dict): Additional keywords to pass to the clustering
object.
output_dir (str): The directory to write results to. If None (default),
returns the cluster label image rather than saving to disk.
filename (str): Name of cluster label image file. Defaults to
cluster_labels_k{k}.nii.gz, where k is the number of clusters.
coactivation_images (bool): If True, saves a meta-analytic coactivation
map for every ROI in the resulting cluster map.
coactivation_threshold (float or int): If coactivation_images is True,
    this is the threshold used to define whether or not a study is
    considered to activate within a cluster ROI. Integer values are
interpreted as minimum number of voxels within the ROI; floats
are interpreted as the proportion of voxels. Defaults to 0.1 (i.e.,
10% of all voxels within ROI must be active).
|
neurosynth/analysis/cluster.py
|
def magic(dataset, method='coactivation', roi_mask=None,
coactivation_mask=None, features=None, feature_threshold=0.05,
min_voxels_per_study=None, min_studies_per_voxel=None,
reduce_reference='pca', n_components=100,
distance_metric='correlation', clustering_algorithm='kmeans',
n_clusters=5, clustering_kwargs={}, output_dir=None, filename=None,
coactivation_images=False, coactivation_threshold=0.1):
''' Execute a full clustering analysis pipeline.
Args:
dataset: a Dataset instance to extract all data from.
method (str): the overall clustering approach to use. Valid options:
        'coactivation' (default): Clusters voxels within the ROI mask based
            on their shared patterns of coactivation with the rest of the
            brain.
'studies': Treat each study as a feature in an n-dimensional space.
I.e., voxels will be assigned to the same cluster if they tend
to be co-reported in similar studies.
'features': Voxels will be assigned to the same cluster if they
tend to have similar feature vectors (i.e., the studies that
activate those voxels tend to use similar terms).
roi_mask: A string, nibabel image, or numpy array providing an
inclusion mask of voxels to cluster. If None, the default mask
in the Dataset instance is used (typically, all in-brain voxels).
coactivation_mask: If method='coactivation', this mask defines the
voxels to use when generating the pairwise distance matrix. For
example, if a PFC mask is passed, all voxels in the roi_mask will
be clustered based on how similar their patterns of coactivation
with PFC voxels are. Can be a str, nibabel image, or numpy array.
features (str or list): Optional string or list of strings specifying
any feature names to use for study selection. E.g., passing
['emotion', 'reward'] would retain for analysis only those studies
associated with the features emotion or reward at a frequency
greater than feature_threshold.
feature_threshold (float): The threshold to use when selecting studies
on the basis of features.
min_voxels_per_study (int): Minimum number of active voxels a study
must report in order to be retained in the dataset. By default,
all studies are used.
        min_studies_per_voxel (int): Minimum number of studies a voxel must
            be active in, in order to be retained in the analysis. By
            default, all voxels are used.
reduce_reference (str, scikit-learn object or None): The dimensionality
reduction algorithm to apply to the feature space prior to the
computation of pairwise distances. If a string is passed (either
'pca' or 'ica'), n_components must be specified. If None, no
dimensionality reduction will be applied. Otherwise, must be a
scikit-learn-style object that exposes a transform() method.
n_components (int): Number of components to extract during the
dimensionality reduction step. Only used if reduce_reference is
a string.
distance_metric (str): The distance metric to use when computing
pairwise distances on the to-be-clustered voxels. Can be any of the
metrics supported by sklearn.metrics.pairwise_distances.
clustering_algorithm (str or scikit-learn object): the clustering
algorithm to use. If a string, must be one of 'kmeans' or 'minik'.
Otherwise, any sklearn class that exposes a fit_predict() method.
n_clusters (int): If clustering_algorithm is a string, the number of
clusters to extract.
clustering_kwargs (dict): Additional keywords to pass to the clustering
object.
output_dir (str): The directory to write results to. If None (default),
returns the cluster label image rather than saving to disk.
filename (str): Name of cluster label image file. Defaults to
cluster_labels_k{k}.nii.gz, where k is the number of clusters.
coactivation_images (bool): If True, saves a meta-analytic coactivation
map for every ROI in the resulting cluster map.
coactivation_threshold (float or int): If coactivation_images is True,
            this is the threshold used to define whether or not a study is
            considered to activate within a cluster ROI. Integer values are
interpreted as minimum number of voxels within the ROI; floats
are interpreted as the proportion of voxels. Defaults to 0.1 (i.e.,
10% of all voxels within ROI must be active).
'''
roi = Clusterable(dataset, roi_mask, min_voxels=min_voxels_per_study,
min_studies=min_studies_per_voxel, features=features,
feature_threshold=feature_threshold)
if method == 'coactivation':
reference = Clusterable(dataset, coactivation_mask,
min_voxels=min_voxels_per_study,
min_studies=min_studies_per_voxel,
features=features,
feature_threshold=feature_threshold)
elif method == 'features':
reference = deepcopy(roi)
feature_data = dataset.feature_table.data
n_studies = len(feature_data)
reference.data = reference.data.dot(feature_data.values) / n_studies
elif method == 'studies':
reference = roi
if reduce_reference is not None:
if isinstance(reduce_reference, string_types):
            # Number of components can't exceed the feature count
            n_feat = reference.data.shape[1]
            n_components = min(n_components, n_feat)
reduce_reference = {
'pca': sk_decomp.PCA,
'ica': sk_decomp.FastICA
}[reduce_reference](n_components)
        # Transpose the data matrix only for the coactivation-based approach
        transpose = (method == 'coactivation')
reference = reference.transform(reduce_reference, transpose=transpose)
if method == 'coactivation':
distances = pairwise_distances(roi.data, reference.data,
metric=distance_metric)
else:
distances = reference.data
# TODO: add additional clustering methods
if isinstance(clustering_algorithm, string_types):
clustering_algorithm = {
'kmeans': sk_cluster.KMeans,
'minik': sk_cluster.MiniBatchKMeans
}[clustering_algorithm](n_clusters, **clustering_kwargs)
labels = clustering_algorithm.fit_predict(distances) + 1.
header = roi.masker.get_header()
header['cal_max'] = labels.max()
header['cal_min'] = labels.min()
voxel_labels = roi.masker.unmask(labels)
img = nifti1.Nifti1Image(voxel_labels, None, header)
if output_dir is not None:
if not exists(output_dir):
makedirs(output_dir)
if filename is None:
filename = 'cluster_labels_k%d.nii.gz' % n_clusters
outfile = join(output_dir, filename)
img.to_filename(outfile)
# Write coactivation images
if coactivation_images:
for l in np.unique(voxel_labels):
roi_mask = np.copy(voxel_labels)
roi_mask[roi_mask != l] = 0
ids = dataset.get_studies(
mask=roi_mask, activation_threshold=coactivation_threshold)
ma = meta.MetaAnalysis(dataset, ids)
ma.save_results(output_dir=join(output_dir, 'coactivation'),
prefix='cluster_%d_coactivation' % l)
else:
return img
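# Usage sketch (editor's addition): parcellate an ROI into 5 coactivation-
# based clusters; 'amygdala_mask.nii.gz' is a placeholder filename.
#
#   img = magic(dataset, method='coactivation',
#               roi_mask='amygdala_mask.nii.gz', n_clusters=5)
#   img.to_filename('amygdala_clusters_k5.nii.gz')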
|
[
"Execute",
"a",
"full",
"clustering",
"analysis",
"pipeline",
".",
"Args",
":",
"dataset",
":",
"a",
"Dataset",
"instance",
"to",
"extract",
"all",
"data",
"from",
".",
"method",
"(",
"str",
")",
":",
"the",
"overall",
"clustering",
"approach",
"to",
"use",
".",
"Valid",
"options",
":",
"coactivation",
"(",
"default",
")",
":",
"Clusters",
"voxel",
"within",
"the",
"ROI",
"mask",
"based",
"on",
"shared",
"pattern",
"of",
"coactivation",
"with",
"the",
"rest",
"of",
"the",
"brain",
".",
"studies",
":",
"Treat",
"each",
"study",
"as",
"a",
"feature",
"in",
"an",
"n",
"-",
"dimensional",
"space",
".",
"I",
".",
"e",
".",
"voxels",
"will",
"be",
"assigned",
"to",
"the",
"same",
"cluster",
"if",
"they",
"tend",
"to",
"be",
"co",
"-",
"reported",
"in",
"similar",
"studies",
".",
"features",
":",
"Voxels",
"will",
"be",
"assigned",
"to",
"the",
"same",
"cluster",
"if",
"they",
"tend",
"to",
"have",
"similar",
"feature",
"vectors",
"(",
"i",
".",
"e",
".",
"the",
"studies",
"that",
"activate",
"those",
"voxels",
"tend",
"to",
"use",
"similar",
"terms",
")",
".",
"roi_mask",
":",
"A",
"string",
"nibabel",
"image",
"or",
"numpy",
"array",
"providing",
"an",
"inclusion",
"mask",
"of",
"voxels",
"to",
"cluster",
".",
"If",
"None",
"the",
"default",
"mask",
"in",
"the",
"Dataset",
"instance",
"is",
"used",
"(",
"typically",
"all",
"in",
"-",
"brain",
"voxels",
")",
".",
"coactivation_mask",
":",
"If",
"method",
"=",
"coactivation",
"this",
"mask",
"defines",
"the",
"voxels",
"to",
"use",
"when",
"generating",
"the",
"pairwise",
"distance",
"matrix",
".",
"For",
"example",
"if",
"a",
"PFC",
"mask",
"is",
"passed",
"all",
"voxels",
"in",
"the",
"roi_mask",
"will",
"be",
"clustered",
"based",
"on",
"how",
"similar",
"their",
"patterns",
"of",
"coactivation",
"with",
"PFC",
"voxels",
"are",
".",
"Can",
"be",
"a",
"str",
"nibabel",
"image",
"or",
"numpy",
"array",
".",
"features",
"(",
"str",
"or",
"list",
")",
":",
"Optional",
"string",
"or",
"list",
"of",
"strings",
"specifying",
"any",
"feature",
"names",
"to",
"use",
"for",
"study",
"selection",
".",
"E",
".",
"g",
".",
"passing",
"[",
"emotion",
"reward",
"]",
"would",
"retain",
"for",
"analysis",
"only",
"those",
"studies",
"associated",
"with",
"the",
"features",
"emotion",
"or",
"reward",
"at",
"a",
"frequency",
"greater",
"than",
"feature_threshold",
".",
"feature_threshold",
"(",
"float",
")",
":",
"The",
"threshold",
"to",
"use",
"when",
"selecting",
"studies",
"on",
"the",
"basis",
"of",
"features",
".",
"min_voxels_per_study",
"(",
"int",
")",
":",
"Minimum",
"number",
"of",
"active",
"voxels",
"a",
"study",
"must",
"report",
"in",
"order",
"to",
"be",
"retained",
"in",
"the",
"dataset",
".",
"By",
"default",
"all",
"studies",
"are",
"used",
".",
"min_studies_per_voxel",
"(",
"int",
")",
":",
"Minimum",
"number",
"of",
"studies",
"a",
"voxel",
"must",
"be",
"active",
"in",
"in",
"order",
"to",
"be",
"retained",
"in",
"analysis",
".",
"By",
"default",
"all",
"voxels",
"are",
"used",
".",
"reduce_reference",
"(",
"str",
"scikit",
"-",
"learn",
"object",
"or",
"None",
")",
":",
"The",
"dimensionality",
"reduction",
"algorithm",
"to",
"apply",
"to",
"the",
"feature",
"space",
"prior",
"to",
"the",
"computation",
"of",
"pairwise",
"distances",
".",
"If",
"a",
"string",
"is",
"passed",
"(",
"either",
"pca",
"or",
"ica",
")",
"n_components",
"must",
"be",
"specified",
".",
"If",
"None",
"no",
"dimensionality",
"reduction",
"will",
"be",
"applied",
".",
"Otherwise",
"must",
"be",
"a",
"scikit",
"-",
"learn",
"-",
"style",
"object",
"that",
"exposes",
"a",
"transform",
"()",
"method",
".",
"n_components",
"(",
"int",
")",
":",
"Number",
"of",
"components",
"to",
"extract",
"during",
"the",
"dimensionality",
"reduction",
"step",
".",
"Only",
"used",
"if",
"reduce_reference",
"is",
"a",
"string",
".",
"distance_metric",
"(",
"str",
")",
":",
"The",
"distance",
"metric",
"to",
"use",
"when",
"computing",
"pairwise",
"distances",
"on",
"the",
"to",
"-",
"be",
"-",
"clustered",
"voxels",
".",
"Can",
"be",
"any",
"of",
"the",
"metrics",
"supported",
"by",
"sklearn",
".",
"metrics",
".",
"pairwise_distances",
".",
"clustering_algorithm",
"(",
"str",
"or",
"scikit",
"-",
"learn",
"object",
")",
":",
"the",
"clustering",
"algorithm",
"to",
"use",
".",
"If",
"a",
"string",
"must",
"be",
"one",
"of",
"kmeans",
"or",
"minik",
".",
"Otherwise",
"any",
"sklearn",
"class",
"that",
"exposes",
"a",
"fit_predict",
"()",
"method",
".",
"n_clusters",
"(",
"int",
")",
":",
"If",
"clustering_algorithm",
"is",
"a",
"string",
"the",
"number",
"of",
"clusters",
"to",
"extract",
".",
"clustering_kwargs",
"(",
"dict",
")",
":",
"Additional",
"keywords",
"to",
"pass",
"to",
"the",
"clustering",
"object",
".",
"output_dir",
"(",
"str",
")",
":",
"The",
"directory",
"to",
"write",
"results",
"to",
".",
"If",
"None",
"(",
"default",
")",
"returns",
"the",
"cluster",
"label",
"image",
"rather",
"than",
"saving",
"to",
"disk",
".",
"filename",
"(",
"str",
")",
":",
"Name",
"of",
"cluster",
"label",
"image",
"file",
".",
"Defaults",
"to",
"cluster_labels_k",
"{",
"k",
"}",
".",
"nii",
".",
"gz",
"where",
"k",
"is",
"the",
"number",
"of",
"clusters",
".",
"coactivation_images",
"(",
"bool",
")",
":",
"If",
"True",
"saves",
"a",
"meta",
"-",
"analytic",
"coactivation",
"map",
"for",
"every",
"ROI",
"in",
"the",
"resulting",
"cluster",
"map",
".",
"coactivation_threshold",
"(",
"float",
"or",
"int",
")",
":",
"If",
"coactivation_images",
"is",
"True",
"this",
"is",
"the",
"threshold",
"used",
"to",
"define",
"whether",
"or",
"not",
"a",
"study",
"is",
"considered",
"to",
"activation",
"within",
"a",
"cluster",
"ROI",
".",
"Integer",
"values",
"are",
"interpreted",
"as",
"minimum",
"number",
"of",
"voxels",
"within",
"the",
"ROI",
";",
"floats",
"are",
"interpreted",
"as",
"the",
"proportion",
"of",
"voxels",
".",
"Defaults",
"to",
"0",
".",
"1",
"(",
"i",
".",
"e",
".",
"10%",
"of",
"all",
"voxels",
"within",
"ROI",
"must",
"be",
"active",
")",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/analysis/cluster.py#L72-L219
|
[
"def",
"magic",
"(",
"dataset",
",",
"method",
"=",
"'coactivation'",
",",
"roi_mask",
"=",
"None",
",",
"coactivation_mask",
"=",
"None",
",",
"features",
"=",
"None",
",",
"feature_threshold",
"=",
"0.05",
",",
"min_voxels_per_study",
"=",
"None",
",",
"min_studies_per_voxel",
"=",
"None",
",",
"reduce_reference",
"=",
"'pca'",
",",
"n_components",
"=",
"100",
",",
"distance_metric",
"=",
"'correlation'",
",",
"clustering_algorithm",
"=",
"'kmeans'",
",",
"n_clusters",
"=",
"5",
",",
"clustering_kwargs",
"=",
"{",
"}",
",",
"output_dir",
"=",
"None",
",",
"filename",
"=",
"None",
",",
"coactivation_images",
"=",
"False",
",",
"coactivation_threshold",
"=",
"0.1",
")",
":",
"roi",
"=",
"Clusterable",
"(",
"dataset",
",",
"roi_mask",
",",
"min_voxels",
"=",
"min_voxels_per_study",
",",
"min_studies",
"=",
"min_studies_per_voxel",
",",
"features",
"=",
"features",
",",
"feature_threshold",
"=",
"feature_threshold",
")",
"if",
"method",
"==",
"'coactivation'",
":",
"reference",
"=",
"Clusterable",
"(",
"dataset",
",",
"coactivation_mask",
",",
"min_voxels",
"=",
"min_voxels_per_study",
",",
"min_studies",
"=",
"min_studies_per_voxel",
",",
"features",
"=",
"features",
",",
"feature_threshold",
"=",
"feature_threshold",
")",
"elif",
"method",
"==",
"'features'",
":",
"reference",
"=",
"deepcopy",
"(",
"roi",
")",
"feature_data",
"=",
"dataset",
".",
"feature_table",
".",
"data",
"n_studies",
"=",
"len",
"(",
"feature_data",
")",
"reference",
".",
"data",
"=",
"reference",
".",
"data",
".",
"dot",
"(",
"feature_data",
".",
"values",
")",
"/",
"n_studies",
"elif",
"method",
"==",
"'studies'",
":",
"reference",
"=",
"roi",
"if",
"reduce_reference",
"is",
"not",
"None",
":",
"if",
"isinstance",
"(",
"reduce_reference",
",",
"string_types",
")",
":",
"# Number of components can't exceed feature count or cluster count",
"n_feat",
"=",
"reference",
".",
"data",
".",
"shape",
"[",
"1",
"]",
"n_components",
"=",
"min",
"(",
"n_components",
",",
"n_feat",
")",
"reduce_reference",
"=",
"{",
"'pca'",
":",
"sk_decomp",
".",
"PCA",
",",
"'ica'",
":",
"sk_decomp",
".",
"FastICA",
"}",
"[",
"reduce_reference",
"]",
"(",
"n_components",
")",
"# For non-coactivation-based approaches, transpose the data matrix",
"transpose",
"=",
"(",
"method",
"==",
"'coactivation'",
")",
"reference",
"=",
"reference",
".",
"transform",
"(",
"reduce_reference",
",",
"transpose",
"=",
"transpose",
")",
"if",
"method",
"==",
"'coactivation'",
":",
"distances",
"=",
"pairwise_distances",
"(",
"roi",
".",
"data",
",",
"reference",
".",
"data",
",",
"metric",
"=",
"distance_metric",
")",
"else",
":",
"distances",
"=",
"reference",
".",
"data",
"# TODO: add additional clustering methods",
"if",
"isinstance",
"(",
"clustering_algorithm",
",",
"string_types",
")",
":",
"clustering_algorithm",
"=",
"{",
"'kmeans'",
":",
"sk_cluster",
".",
"KMeans",
",",
"'minik'",
":",
"sk_cluster",
".",
"MiniBatchKMeans",
"}",
"[",
"clustering_algorithm",
"]",
"(",
"n_clusters",
",",
"*",
"*",
"clustering_kwargs",
")",
"labels",
"=",
"clustering_algorithm",
".",
"fit_predict",
"(",
"distances",
")",
"+",
"1.",
"header",
"=",
"roi",
".",
"masker",
".",
"get_header",
"(",
")",
"header",
"[",
"'cal_max'",
"]",
"=",
"labels",
".",
"max",
"(",
")",
"header",
"[",
"'cal_min'",
"]",
"=",
"labels",
".",
"min",
"(",
")",
"voxel_labels",
"=",
"roi",
".",
"masker",
".",
"unmask",
"(",
"labels",
")",
"img",
"=",
"nifti1",
".",
"Nifti1Image",
"(",
"voxel_labels",
",",
"None",
",",
"header",
")",
"if",
"output_dir",
"is",
"not",
"None",
":",
"if",
"not",
"exists",
"(",
"output_dir",
")",
":",
"makedirs",
"(",
"output_dir",
")",
"if",
"filename",
"is",
"None",
":",
"filename",
"=",
"'cluster_labels_k%d.nii.gz'",
"%",
"n_clusters",
"outfile",
"=",
"join",
"(",
"output_dir",
",",
"filename",
")",
"img",
".",
"to_filename",
"(",
"outfile",
")",
"# Write coactivation images",
"if",
"coactivation_images",
":",
"for",
"l",
"in",
"np",
".",
"unique",
"(",
"voxel_labels",
")",
":",
"roi_mask",
"=",
"np",
".",
"copy",
"(",
"voxel_labels",
")",
"roi_mask",
"[",
"roi_mask",
"!=",
"l",
"]",
"=",
"0",
"ids",
"=",
"dataset",
".",
"get_studies",
"(",
"mask",
"=",
"roi_mask",
",",
"activation_threshold",
"=",
"coactivation_threshold",
")",
"ma",
"=",
"meta",
".",
"MetaAnalysis",
"(",
"dataset",
",",
"ids",
")",
"ma",
".",
"save_results",
"(",
"output_dir",
"=",
"join",
"(",
"output_dir",
",",
"'coactivation'",
")",
",",
"prefix",
"=",
"'cluster_%d_coactivation'",
"%",
"l",
")",
"else",
":",
"return",
"img"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
Clusterable.transform
|
Apply a transformation to the Clusterable instance. Accepts any
scikit-learn-style class that implements a fit_transform() method.
|
neurosynth/analysis/cluster.py
|
def transform(self, transformer, transpose=False):
''' Apply a transformation to the Clusterable instance. Accepts any
scikit-learn-style class that implements a fit_transform() method. '''
data = self.data.T if transpose else self.data
data = transformer.fit_transform(data)
self.data = data.T if transpose else data
return self
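    # Usage sketch (editor's addition): reduce a Clusterable's data matrix
    # with scikit-learn PCA before clustering.
    #
    #   from sklearn.decomposition import PCA
    #   clu = clu.transform(PCA(n_components=50), transpose=True)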
|
[
"Apply",
"a",
"transformation",
"to",
"the",
"Clusterable",
"instance",
".",
"Accepts",
"any",
"scikit",
"-",
"learn",
"-",
"style",
"class",
"that",
"implements",
"a",
"fit_transform",
"()",
"method",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/analysis/cluster.py#L63-L69
|
[
"def",
"transform",
"(",
"self",
",",
"transformer",
",",
"transpose",
"=",
"False",
")",
":",
"data",
"=",
"self",
".",
"data",
".",
"T",
"if",
"transpose",
"else",
"self",
".",
"data",
"data",
"=",
"transformer",
".",
"fit_transform",
"(",
"data",
")",
"self",
".",
"data",
"=",
"data",
".",
"T",
"if",
"transpose",
"else",
"data",
"return",
"self"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
analyze_features
|
Generate meta-analysis images for a set of features.
Args:
dataset: A Dataset instance containing feature and activation data.
features: A list of named features to generate meta-analysis maps for.
If None, analyzes all features in the current dataset.
image_type: The type of image to return. Specify one of the extensions
generated by the MetaAnalysis procedure--e.g., association-test_z,
uniformity-test_z, etc. By default, will use
association-test_z (i.e., z-scores reflecting the association
between presence of activation and presence of feature).
threshold: The threshold for determining whether or not a Mappable has
a feature. By default, this is 0.001, which is only sensible in the
case of term-based features (so be sure to specify it for other
kinds).
q: The FDR rate to use for multiple comparisons correction (default =
    0.01).
output_dir: Directory to save all meta-analysis images to. If None,
    returns all the data as a matrix.
prefix: All output images will be prepended with this string (if None,
defaults to the name of the feature).
Returns:
If output_dir is None, an n_voxels x n_features 2D numpy array.
|
neurosynth/analysis/meta.py
|
def analyze_features(dataset, features=None, image_type='association-test_z',
threshold=0.001, q=0.01, output_dir=None, prefix=None):
""" Generate meta-analysis images for a set of features.
Args:
dataset: A Dataset instance containing feature and activation data.
features: A list of named features to generate meta-analysis maps for.
If None, analyzes all features in the current dataset.
image_type: The type of image to return. Specify one of the extensions
generated by the MetaAnalysis procedure--e.g., association-test_z,
uniformity-test_z, etc. By default, will use
association-test_z (i.e., z-scores reflecting the association
between presence of activation and presence of feature).
threshold: The threshold for determining whether or not a Mappable has
a feature. By default, this is 0.001, which is only sensible in the
case of term-based features (so be sure to specify it for other
kinds).
        q: The FDR rate to use for multiple comparisons correction (default =
            0.01).
        output_dir: Directory to save all meta-analysis images to. If None,
            returns all the data as a matrix.
prefix: All output images will be prepended with this string (if None,
defaults to the name of the feature).
Returns:
If output_dir is None, an n_voxels x n_features 2D numpy array.
"""
if features is None:
features = dataset.get_feature_names()
if output_dir is None:
result = np.zeros((dataset.masker.n_vox_in_mask, len(features)))
for i, f in enumerate(features):
ids = dataset.get_studies(features=f, frequency_threshold=threshold)
ma = MetaAnalysis(dataset, ids, q=q)
if output_dir is None:
result[:, i] = ma.images[image_type]
else:
pfx = f if prefix is None else prefix + '_' + f
ma.save_results(output_dir=output_dir, prefix=pfx)
if output_dir is None:
return result
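# Usage sketch (editor's addition): write association-test z maps for two
# term features; the feature names and output path are illustrative.
#
#   analyze_features(dataset, features=['emotion', 'pain'],
#                    output_dir='meta_maps', prefix='term')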
|
[
"Generate",
"meta",
"-",
"analysis",
"images",
"for",
"a",
"set",
"of",
"features",
".",
"Args",
":",
"dataset",
":",
"A",
"Dataset",
"instance",
"containing",
"feature",
"and",
"activation",
"data",
".",
"features",
":",
"A",
"list",
"of",
"named",
"features",
"to",
"generate",
"meta",
"-",
"analysis",
"maps",
"for",
".",
"If",
"None",
"analyzes",
"all",
"features",
"in",
"the",
"current",
"dataset",
".",
"image_type",
":",
"The",
"type",
"of",
"image",
"to",
"return",
".",
"Specify",
"one",
"of",
"the",
"extensions",
"generated",
"by",
"the",
"MetaAnalysis",
"procedure",
"--",
"e",
".",
"g",
".",
"association",
"-",
"test_z",
"uniformity",
"-",
"test_z",
"etc",
".",
"By",
"default",
"will",
"use",
"association",
"-",
"test_z",
"(",
"i",
".",
"e",
".",
"z",
"-",
"scores",
"reflecting",
"the",
"association",
"between",
"presence",
"of",
"activation",
"and",
"presence",
"of",
"feature",
")",
".",
"threshold",
":",
"The",
"threshold",
"for",
"determining",
"whether",
"or",
"not",
"a",
"Mappable",
"has",
"a",
"feature",
".",
"By",
"default",
"this",
"is",
"0",
".",
"001",
"which",
"is",
"only",
"sensible",
"in",
"the",
"case",
"of",
"term",
"-",
"based",
"features",
"(",
"so",
"be",
"sure",
"to",
"specify",
"it",
"for",
"other",
"kinds",
")",
".",
"q",
":",
"The",
"FDR",
"rate",
"to",
"use",
"for",
"multiple",
"comparisons",
"correction",
"(",
"default",
"=",
"0",
".",
"05",
")",
".",
"output_dir",
":",
"Directory",
"to",
"save",
"all",
"meta",
"-",
"analysis",
"images",
"to",
".",
"If",
"none",
"returns",
"all",
"the",
"data",
"as",
"a",
"matrix",
".",
"prefix",
":",
"All",
"output",
"images",
"will",
"be",
"prepended",
"with",
"this",
"string",
"(",
"if",
"None",
"defaults",
"to",
"the",
"name",
"of",
"the",
"feature",
")",
".",
"Returns",
":",
"If",
"output_dir",
"is",
"None",
"an",
"n_voxels",
"x",
"n_features",
"2D",
"numpy",
"array",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/analysis/meta.py#L14-L54
|
[
"def",
"analyze_features",
"(",
"dataset",
",",
"features",
"=",
"None",
",",
"image_type",
"=",
"'association-test_z'",
",",
"threshold",
"=",
"0.001",
",",
"q",
"=",
"0.01",
",",
"output_dir",
"=",
"None",
",",
"prefix",
"=",
"None",
")",
":",
"if",
"features",
"is",
"None",
":",
"features",
"=",
"dataset",
".",
"get_feature_names",
"(",
")",
"if",
"output_dir",
"is",
"None",
":",
"result",
"=",
"np",
".",
"zeros",
"(",
"(",
"dataset",
".",
"masker",
".",
"n_vox_in_mask",
",",
"len",
"(",
"features",
")",
")",
")",
"for",
"i",
",",
"f",
"in",
"enumerate",
"(",
"features",
")",
":",
"ids",
"=",
"dataset",
".",
"get_studies",
"(",
"features",
"=",
"f",
",",
"frequency_threshold",
"=",
"threshold",
")",
"ma",
"=",
"MetaAnalysis",
"(",
"dataset",
",",
"ids",
",",
"q",
"=",
"q",
")",
"if",
"output_dir",
"is",
"None",
":",
"result",
"[",
":",
",",
"i",
"]",
"=",
"ma",
".",
"images",
"[",
"image_type",
"]",
"else",
":",
"pfx",
"=",
"f",
"if",
"prefix",
"is",
"None",
"else",
"prefix",
"+",
"'_'",
"+",
"f",
"ma",
".",
"save_results",
"(",
"output_dir",
"=",
"output_dir",
",",
"prefix",
"=",
"pfx",
")",
"if",
"output_dir",
"is",
"None",
":",
"return",
"result"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
MetaAnalysis.save_results
|
Write out any images generated by the meta-analysis.
Args:
output_dir (str): folder to write images to
prefix (str): all image files will be prepended with this string
prefix_sep (str): glue between the prefix and rest of filename
image_list (list): optional list of images to save--e.g.,
['pFgA_z', 'pAgF']. If image_list is None (default), will save
all images.
|
neurosynth/analysis/meta.py
|
def save_results(self, output_dir='.', prefix='', prefix_sep='_',
image_list=None):
""" Write out any images generated by the meta-analysis.
Args:
output_dir (str): folder to write images to
prefix (str): all image files will be prepended with this string
prefix_sep (str): glue between the prefix and rest of filename
image_list (list): optional list of images to save--e.g.,
['pFgA_z', 'pAgF']. If image_list is None (default), will save
all images.
"""
if prefix == '':
prefix_sep = ''
if not exists(output_dir):
makedirs(output_dir)
logger.debug("Saving results...")
if image_list is None:
image_list = self.images.keys()
for suffix, img in self.images.items():
if suffix in image_list:
filename = prefix + prefix_sep + suffix + '.nii.gz'
outpath = join(output_dir, filename)
imageutils.save_img(img, outpath, self.dataset.masker)
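    # Usage sketch (editor's addition): save only the two maps named in the
    # docstring example for a fitted MetaAnalysis `ma`.
    #
    #   ma.save_results(output_dir='results', prefix='emotion',
    #                   image_list=['pFgA_z', 'pAgF'])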
|
[
"Write",
"out",
"any",
"images",
"generated",
"by",
"the",
"meta",
"-",
"analysis",
".",
"Args",
":",
"output_dir",
"(",
"str",
")",
":",
"folder",
"to",
"write",
"images",
"to",
"prefix",
"(",
"str",
")",
":",
"all",
"image",
"files",
"will",
"be",
"prepended",
"with",
"this",
"string",
"prefix_sep",
"(",
"str",
")",
":",
"glue",
"between",
"the",
"prefix",
"and",
"rest",
"of",
"filename",
"image_list",
"(",
"list",
")",
":",
"optional",
"list",
"of",
"images",
"to",
"save",
"--",
"e",
".",
"g",
".",
"[",
"pFgA_z",
"pAgF",
"]",
".",
"If",
"image_list",
"is",
"None",
"(",
"default",
")",
"will",
"save",
"all",
"images",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/analysis/meta.py#L201-L226
|
[
"def",
"save_results",
"(",
"self",
",",
"output_dir",
"=",
"'.'",
",",
"prefix",
"=",
"''",
",",
"prefix_sep",
"=",
"'_'",
",",
"image_list",
"=",
"None",
")",
":",
"if",
"prefix",
"==",
"''",
":",
"prefix_sep",
"=",
"''",
"if",
"not",
"exists",
"(",
"output_dir",
")",
":",
"makedirs",
"(",
"output_dir",
")",
"logger",
".",
"debug",
"(",
"\"Saving results...\"",
")",
"if",
"image_list",
"is",
"None",
":",
"image_list",
"=",
"self",
".",
"images",
".",
"keys",
"(",
")",
"for",
"suffix",
",",
"img",
"in",
"self",
".",
"images",
".",
"items",
"(",
")",
":",
"if",
"suffix",
"in",
"image_list",
":",
"filename",
"=",
"prefix",
"+",
"prefix_sep",
"+",
"suffix",
"+",
"'.nii.gz'",
"outpath",
"=",
"join",
"(",
"output_dir",
",",
"filename",
")",
"imageutils",
".",
"save_img",
"(",
"img",
",",
"outpath",
",",
"self",
".",
"dataset",
".",
"masker",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
transform
|
Convert coordinates from one space to another using provided
transformation matrix.
|
neurosynth/base/transformations.py
|
def transform(foci, mat):
""" Convert coordinates from one space to another using provided
transformation matrix. """
t = linalg.pinv(mat)
foci = np.hstack((foci, np.ones((foci.shape[0], 1))))
return np.dot(foci, t)[:, 0:3]
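    # Sanity-check sketch (editor's addition): with an identity affine the
    # foci come back unchanged, since pinv(I) = I.
    #
    #   foci = np.array([[12, -20, 30], [-26, 22, 22]])
    #   assert np.allclose(transform(foci, np.eye(4)), foci)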
|
[
"Convert",
"coordinates",
"from",
"one",
"space",
"to",
"another",
"using",
"provided",
"transformation",
"matrix",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/transformations.py#L10-L15
|
[
"def",
"transform",
"(",
"foci",
",",
"mat",
")",
":",
"t",
"=",
"linalg",
".",
"pinv",
"(",
"mat",
")",
"foci",
"=",
"np",
".",
"hstack",
"(",
"(",
"foci",
",",
"np",
".",
"ones",
"(",
"(",
"foci",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
")",
")",
"return",
"np",
".",
"dot",
"(",
"foci",
",",
"t",
")",
"[",
":",
",",
"0",
":",
"3",
"]"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
xyz_to_mat
|
Convert an N x 3 array of XYZ coordinates to matrix indices.
|
neurosynth/base/transformations.py
|
def xyz_to_mat(foci, xyz_dims=None, mat_dims=None):
""" Convert an N x 3 array of XYZ coordinates to matrix indices. """
foci = np.hstack((foci, np.ones((foci.shape[0], 1))))
mat = np.array([[-0.5, 0, 0, 45], [0, 0.5, 0, 63], [0, 0, 0.5, 36]]).T
result = np.dot(foci, mat)[:, ::-1] # multiply and reverse column order
return np.round_(result).astype(int)
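    # Usage sketch (editor's addition): the mm-space origin maps to matrix
    # indices (36, 63, 45) on the hard-coded grid (note the reversed order).
    #
    #   xyz_to_mat(np.array([[0, 0, 0]]))  # -> array([[36, 63, 45]])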
|
[
"Convert",
"an",
"N",
"x",
"3",
"array",
"of",
"XYZ",
"coordinates",
"to",
"matrix",
"indices",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/transformations.py#L18-L23
|
[
"def",
"xyz_to_mat",
"(",
"foci",
",",
"xyz_dims",
"=",
"None",
",",
"mat_dims",
"=",
"None",
")",
":",
"foci",
"=",
"np",
".",
"hstack",
"(",
"(",
"foci",
",",
"np",
".",
"ones",
"(",
"(",
"foci",
".",
"shape",
"[",
"0",
"]",
",",
"1",
")",
")",
")",
")",
"mat",
"=",
"np",
".",
"array",
"(",
"[",
"[",
"-",
"0.5",
",",
"0",
",",
"0",
",",
"45",
"]",
",",
"[",
"0",
",",
"0.5",
",",
"0",
",",
"63",
"]",
",",
"[",
"0",
",",
"0",
",",
"0.5",
",",
"36",
"]",
"]",
")",
".",
"T",
"result",
"=",
"np",
".",
"dot",
"(",
"foci",
",",
"mat",
")",
"[",
":",
",",
":",
":",
"-",
"1",
"]",
"# multiply and reverse column order",
"return",
"np",
".",
"round_",
"(",
"result",
")",
".",
"astype",
"(",
"int",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
Transformer.apply
|
Apply a named transformation to a set of foci.
If the named transformation doesn't exist, return foci untransformed.
|
neurosynth/base/transformations.py
|
def apply(self, name, foci):
""" Apply a named transformation to a set of foci.
If the named transformation doesn't exist, return foci untransformed.
"""
if name in self.transformations:
return transform(foci, self.transformations[name])
else:
logger.info(
"No transformation named '%s' found; coordinates left "
"untransformed." % name)
return foci
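    # Usage sketch (editor's addition): 'T88_to_MNI' is an assumed key name,
    # purely illustrative; valid names are whatever transformations the
    # Transformer was constructed with.
    #
    #   mni_foci = transformer.apply('T88_to_MNI', foci)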
|
[
"Apply",
"a",
"named",
"transformation",
"to",
"a",
"set",
"of",
"foci",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/transformations.py#L59-L70
|
[
"def",
"apply",
"(",
"self",
",",
"name",
",",
"foci",
")",
":",
"if",
"name",
"in",
"self",
".",
"transformations",
":",
"return",
"transform",
"(",
"foci",
",",
"self",
".",
"transformations",
"[",
"name",
"]",
")",
"else",
":",
"logger",
".",
"info",
"(",
"\"No transformation named '%s' found; coordinates left \"",
"\"untransformed.\"",
"%",
"name",
")",
"return",
"foci"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
Masker.reset
|
Reset/remove all layers, keeping only the initial volume.
|
neurosynth/base/mask.py
|
def reset(self):
""" Reset/remove all layers, keeping only the initial volume. """
self.layers = {}
self.stack = []
self.set_mask()
self.n_vox_in_vol = len(np.where(self.current_mask)[0])
|
[
"Reset",
"/",
"remove",
"all",
"layers",
"keeping",
"only",
"the",
"initial",
"volume",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L32-L37
|
[
"def",
"reset",
"(",
"self",
")",
":",
"self",
".",
"layers",
"=",
"{",
"}",
"self",
".",
"stack",
"=",
"[",
"]",
"self",
".",
"set_mask",
"(",
")",
"self",
".",
"n_vox_in_vol",
"=",
"len",
"(",
"np",
".",
"where",
"(",
"self",
".",
"current_mask",
")",
"[",
"0",
"]",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
test
|
Masker.add
|
Add one or more layers to the stack of masking layers.
Args:
layers: A string, NiBabel image, list, or dict. If anything other
than a dict is passed, assigns sequential layer names based on
the current position in stack; if a dict, uses key as the name
and value as the mask image.
|
neurosynth/base/mask.py
|
def add(self, layers, above=None, below=None):
""" Add one or more layers to the stack of masking layers.
Args:
layers: A string, NiBabel image, list, or dict. If anything other
than a dict is passed, assigns sequential layer names based on
the current position in stack; if a dict, uses key as the name
and value as the mask image.
"""
def add_named_layer(name, image):
image = self.get_image(image, output='vector')
if above is not None:
image[image < above] = 0.
if below is not None:
image[image > below] = 0.
self.layers[name] = image
self.stack.append(name)
if isinstance(layers, dict):
for (name, image) in layers.items():
add_named_layer(name, image)
else:
if not isinstance(layers, list):
layers = [layers]
for image in layers:
name = 'layer_%d' % len(self.stack)
add_named_layer(name, image)
self.set_mask()
|
def add(self, layers, above=None, below=None):
""" Add one or more layers to the stack of masking layers.
Args:
layers: A string, NiBabel image, list, or dict. If anything other
than a dict is passed, assigns sequential layer names based on
the current position in stack; if a dict, uses key as the name
and value as the mask image.
"""
def add_named_layer(name, image):
image = self.get_image(image, output='vector')
if above is not None:
image[image < above] = 0.
if below is not None:
image[image > below] = 0.
self.layers[name] = image
self.stack.append(name)
if isinstance(layers, dict):
for (name, image) in layers.items():
add_named_layer(name, image)
else:
if not isinstance(layers, list):
layers = [layers]
for image in layers:
name = 'layer_%d' % len(self.stack)
add_named_layer(name, image)
self.set_mask()
|
[
"Add",
"one",
"or",
"more",
"layers",
"to",
"the",
"stack",
"of",
"masking",
"layers",
".",
"Args",
":",
"layers",
":",
"A",
"string",
"NiBabel",
"image",
"list",
"or",
"dict",
".",
"If",
"anything",
"other",
"than",
"a",
"dict",
"is",
"passed",
"assigns",
"sequential",
"layer",
"names",
"based",
"on",
"the",
"current",
"position",
"in",
"stack",
";",
"if",
"a",
"dict",
"uses",
"key",
"as",
"the",
"name",
"and",
"value",
"as",
"the",
"mask",
"image",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L39-L68
|
[
"def",
"add",
"(",
"self",
",",
"layers",
",",
"above",
"=",
"None",
",",
"below",
"=",
"None",
")",
":",
"def",
"add_named_layer",
"(",
"name",
",",
"image",
")",
":",
"image",
"=",
"self",
".",
"get_image",
"(",
"image",
",",
"output",
"=",
"'vector'",
")",
"if",
"above",
"is",
"not",
"None",
":",
"image",
"[",
"image",
"<",
"above",
"]",
"=",
"0.",
"if",
"below",
"is",
"not",
"None",
":",
"image",
"[",
"image",
">",
"below",
"]",
"=",
"0.",
"self",
".",
"layers",
"[",
"name",
"]",
"=",
"image",
"self",
".",
"stack",
".",
"append",
"(",
"name",
")",
"if",
"isinstance",
"(",
"layers",
",",
"dict",
")",
":",
"for",
"(",
"name",
",",
"image",
")",
"in",
"layers",
".",
"items",
"(",
")",
":",
"add_named_layer",
"(",
"name",
",",
"image",
")",
"else",
":",
"if",
"not",
"isinstance",
"(",
"layers",
",",
"list",
")",
":",
"layers",
"=",
"[",
"layers",
"]",
"for",
"image",
"in",
"layers",
":",
"name",
"=",
"'layer_%d'",
"%",
"len",
"(",
"self",
".",
"stack",
")",
"add_named_layer",
"(",
"name",
",",
"image",
")",
"self",
".",
"set_mask",
"(",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
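A sketch of the two naming modes described in the docstring, assuming an initialized Masker `masker` (filenames are hypothetical):

>>> masker.add('mask1.nii.gz')                              # auto-named 'layer_0' from stack position
>>> masker.add({'amygdala': 'amygdala.nii.gz'}, above=0.5)  # named layer; values < 0.5 zeroed out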
test
|
Masker.remove
|
Remove one or more layers from the stack of masking layers.
Args:
layers: An int, string or list of strings and/or ints. Ints are
interpreted as indices in the stack to remove; strings are
interpreted as names of layers to remove. Negative ints will
also work--i.e., remove(-1) will drop the last layer added.
|
neurosynth/base/mask.py
|
def remove(self, layers):
""" Remove one or more layers from the stack of masking layers.
Args:
layers: An int, string or list of strings and/or ints. Ints are
interpreted as indices in the stack to remove; strings are
interpreted as names of layers to remove. Negative ints will
also work--i.e., remove(-1) will drop the last layer added.
"""
if not isinstance(layers, list):
layers = [layers]
for l in layers:
if isinstance(l, string_types):
if l not in self.layers:
raise ValueError("There's no image/layer named '%s' in "
"the masking stack!" % l)
self.stack.remove(l)
else:
l = self.stack.pop(l)
del self.layers[l]
self.set_mask()
|
def remove(self, layers):
""" Remove one or more layers from the stack of masking layers.
Args:
layers: An int, string or list of strings and/or ints. Ints are
interpreted as indices in the stack to remove; strings are
interpreted as names of layers to remove. Negative ints will
also work--i.e., remove(-1) will drop the last layer added.
"""
if not isinstance(layers, list):
layers = [layers]
for l in layers:
if isinstance(l, string_types):
if l not in self.layers:
raise ValueError("There's no image/layer named '%s' in "
"the masking stack!" % l)
self.stack.remove(l)
else:
l = self.stack.pop(l)
del self.layers[l]
self.set_mask()
|
[
"Remove",
"one",
"or",
"more",
"layers",
"from",
"the",
"stack",
"of",
"masking",
"layers",
".",
"Args",
":",
"layers",
":",
"An",
"int",
"string",
"or",
"list",
"of",
"strings",
"and",
"/",
"or",
"ints",
".",
"Ints",
"are",
"interpreted",
"as",
"indices",
"in",
"the",
"stack",
"to",
"remove",
";",
"strings",
"are",
"interpreted",
"as",
"names",
"of",
"layers",
"to",
"remove",
".",
"Negative",
"ints",
"will",
"also",
"work",
"--",
"i",
".",
"e",
".",
"remove",
"(",
"-",
"1",
")",
"will",
"drop",
"the",
"last",
"layer",
"added",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L70-L90
|
[
"def",
"remove",
"(",
"self",
",",
"layers",
")",
":",
"if",
"not",
"isinstance",
"(",
"layers",
",",
"list",
")",
":",
"layers",
"=",
"[",
"layers",
"]",
"for",
"l",
"in",
"layers",
":",
"if",
"isinstance",
"(",
"l",
",",
"string_types",
")",
":",
"if",
"l",
"not",
"in",
"self",
".",
"layers",
":",
"raise",
"ValueError",
"(",
"\"There's no image/layer named '%s' in \"",
"\"the masking stack!\"",
"%",
"l",
")",
"self",
".",
"stack",
".",
"remove",
"(",
"l",
")",
"else",
":",
"l",
"=",
"self",
".",
"stack",
".",
"pop",
"(",
"l",
")",
"del",
"self",
".",
"layers",
"[",
"l",
"]",
"self",
".",
"set_mask",
"(",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
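Continuing the hypothetical stack above, removal works by name or by (possibly negative) stack index:

>>> masker.remove('amygdala')   # by layer name
>>> masker.remove(-1)           # by index; drops the most recently added layer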
test
|
Masker.get_image
|
A flexible method for transforming between different
representations of image data.
Args:
image: The input image. Can be a string (filename of image),
NiBabel image, N-dimensional array (must have same shape as
self.volume), or vectorized image data (must have same length
as current conjunction mask).
output: The format of the returned image representation. Must be
one of:
'vector': A 1D vectorized array
'array': An N-dimensional array, with
shape = self.volume.shape
'image': A NiBabel image
Returns: An object containing image data; see output options above.
|
neurosynth/base/mask.py
|
def get_image(self, image, output='vector'):
""" A flexible method for transforming between different
representations of image data.
Args:
image: The input image. Can be a string (filename of image),
NiBabel image, N-dimensional array (must have same shape as
self.volume), or vectorized image data (must have same length
as current conjunction mask).
output: The format of the returned image representation. Must be
one of:
'vector': A 1D vectorized array
'array': An N-dimensional array, with
shape = self.volume.shape
'image': A NiBabel image
Returns: An object containing image data; see output options above.
"""
if isinstance(image, string_types):
image = nb.load(image)
if type(image).__module__.startswith('nibabel'):
if output == 'image':
return image
image = image.get_data()
if not type(image).__module__.startswith('numpy'):
raise ValueError("Input image must be a string, a NiBabel image, "
"or a numpy array.")
if image.shape[:3] == self.volume.shape:
if output == 'image':
return nb.nifti1.Nifti1Image(image, None, self.get_header())
elif output == 'array':
return image
else:
image = image.ravel()
if output == 'vector':
return image.ravel()
image = np.reshape(image, self.volume.shape)
if output == 'array':
return image
return nb.nifti1.Nifti1Image(image, None, self.get_header())
|
def get_image(self, image, output='vector'):
""" A flexible method for transforming between different
representations of image data.
Args:
image: The input image. Can be a string (filename of image),
NiBabel image, N-dimensional array (must have same shape as
self.volume), or vectorized image data (must have same length
as current conjunction mask).
output: The format of the returned image representation. Must be
one of:
'vector': A 1D vectorized array
'array': An N-dimensional array, with
shape = self.volume.shape
'image': A NiBabel image
Returns: An object containing image data; see output options above.
"""
if isinstance(image, string_types):
image = nb.load(image)
if type(image).__module__.startswith('nibabel'):
if output == 'image':
return image
image = image.get_data()
if not type(image).__module__.startswith('numpy'):
raise ValueError("Input image must be a string, a NiBabel image, "
"or a numpy array.")
if image.shape[:3] == self.volume.shape:
if output == 'image':
return nb.nifti1.Nifti1Image(image, None, self.get_header())
elif output == 'array':
return image
else:
image = image.ravel()
if output == 'vector':
return image.ravel()
image = np.reshape(image, self.volume.shape)
if output == 'array':
return image
return nb.nifti1.Nifti1Image(image, None, self.get_header())
|
[
"A",
"flexible",
"method",
"for",
"transforming",
"between",
"different",
"representations",
"of",
"image",
"data",
".",
"Args",
":",
"image",
":",
"The",
"input",
"image",
".",
"Can",
"be",
"a",
"string",
"(",
"filename",
"of",
"image",
")",
"NiBabel",
"image",
"N",
"-",
"dimensional",
"array",
"(",
"must",
"have",
"same",
"shape",
"as",
"self",
".",
"volume",
")",
"or",
"vectorized",
"image",
"data",
"(",
"must",
"have",
"same",
"length",
"as",
"current",
"conjunction",
"mask",
")",
".",
"output",
":",
"The",
"format",
"of",
"the",
"returned",
"image",
"representation",
".",
"Must",
"be",
"one",
"of",
":",
"vector",
":",
"A",
"1D",
"vectorized",
"array",
"array",
":",
"An",
"N",
"-",
"dimensional",
"array",
"with",
"shape",
"=",
"self",
".",
"volume",
".",
"shape",
"image",
":",
"A",
"NiBabel",
"image",
"Returns",
":",
"An",
"object",
"containing",
"image",
"data",
";",
"see",
"output",
"options",
"above",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L92-L136
|
[
"def",
"get_image",
"(",
"self",
",",
"image",
",",
"output",
"=",
"'vector'",
")",
":",
"if",
"isinstance",
"(",
"image",
",",
"string_types",
")",
":",
"image",
"=",
"nb",
".",
"load",
"(",
"image",
")",
"if",
"type",
"(",
"image",
")",
".",
"__module__",
".",
"startswith",
"(",
"'nibabel'",
")",
":",
"if",
"output",
"==",
"'image'",
":",
"return",
"image",
"image",
"=",
"image",
".",
"get_data",
"(",
")",
"if",
"not",
"type",
"(",
"image",
")",
".",
"__module__",
".",
"startswith",
"(",
"'numpy'",
")",
":",
"raise",
"ValueError",
"(",
"\"Input image must be a string, a NiBabel image, \"",
"\"or a numpy array.\"",
")",
"if",
"image",
".",
"shape",
"[",
":",
"3",
"]",
"==",
"self",
".",
"volume",
".",
"shape",
":",
"if",
"output",
"==",
"'image'",
":",
"return",
"nb",
".",
"nifti1",
".",
"Nifti1Image",
"(",
"image",
",",
"None",
",",
"self",
".",
"get_header",
"(",
")",
")",
"elif",
"output",
"==",
"'array'",
":",
"return",
"image",
"else",
":",
"image",
"=",
"image",
".",
"ravel",
"(",
")",
"if",
"output",
"==",
"'vector'",
":",
"return",
"image",
".",
"ravel",
"(",
")",
"image",
"=",
"np",
".",
"reshape",
"(",
"image",
",",
"self",
".",
"volume",
".",
"shape",
")",
"if",
"output",
"==",
"'array'",
":",
"return",
"image",
"return",
"nb",
".",
"nifti1",
".",
"Nifti1Image",
"(",
"image",
",",
"None",
",",
"self",
".",
"get_header",
"(",
")",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
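A sketch of round-tripping between the three representations (the filename is hypothetical; `masker` is an initialized Masker):

>>> vec = masker.get_image('zstat.nii.gz', output='vector')   # 1D flattened array
>>> arr = masker.get_image(vec, output='array')               # reshaped to self.volume.shape
>>> img = masker.get_image(arr, output='image')               # NiBabel image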
test
|
Masker.mask
|
Vectorize an image and mask out all invalid voxels.
Args:
image: The image to vectorize and mask. Input can be any object
handled by get_image().
layers: Which mask layers to use (specified as int, string, or
list of ints and strings). When None, applies the conjunction
of all layers.
nan_to_num: boolean indicating whether to convert NaNs to 0.
in_global_mask: Whether to return the resulting masked vector in
the globally masked space (i.e., n_voxels =
len(self.global_mask)). If False (default), returns in the full
image space (i.e., n_voxels = len(self.volume)).
Returns:
A 1D NumPy array of in-mask voxels.
|
neurosynth/base/mask.py
|
def mask(self, image, nan_to_num=True, layers=None, in_global_mask=False):
""" Vectorize an image and mask out all invalid voxels.
Args:
image: The image to vectorize and mask. Input can be any object
handled by get_image().
layers: Which mask layers to use (specified as int, string, or
list of ints and strings). When None, applies the conjunction
of all layers.
nan_to_num: boolean indicating whether to convert NaNs to 0.
in_global_mask: Whether to return the resulting masked vector in
the globally masked space (i.e., n_voxels =
len(self.global_mask)). If False (default), returns in the full
image space (i.e., n_voxels = len(self.volume)).
Returns:
A 1D NumPy array of in-mask voxels.
"""
self.set_mask(layers)
image = self.get_image(image, output='vector')
if in_global_mask:
masked_data = image[self.global_mask]
masked_data[~self.get_mask(in_global_mask=True)] = 0
else:
masked_data = image[self.current_mask]
if nan_to_num:
masked_data = np.nan_to_num(masked_data)
return masked_data
|
def mask(self, image, nan_to_num=True, layers=None, in_global_mask=False):
""" Vectorize an image and mask out all invalid voxels.
Args:
image: The image to vectorize and mask. Input can be any object
handled by get_image().
layers: Which mask layers to use (specified as int, string, or
list of ints and strings). When None, applies the conjunction
of all layers.
nan_to_num: boolean indicating whether to convert NaNs to 0.
in_global_mask: Whether to return the resulting masked vector in
the globally masked space (i.e., n_voxels =
len(self.global_mask)). If False (default), returns in the full
image space (i.e., n_voxels = len(self.volume)).
Returns:
A 1D NumPy array of in-mask voxels.
"""
self.set_mask(layers)
image = self.get_image(image, output='vector')
if in_global_mask:
masked_data = image[self.global_mask]
masked_data[~self.get_mask(in_global_mask=True)] = 0
else:
masked_data = image[self.current_mask]
if nan_to_num:
masked_data = np.nan_to_num(masked_data)
return masked_data
|
[
"Vectorize",
"an",
"image",
"and",
"mask",
"out",
"all",
"invalid",
"voxels",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L138-L167
|
[
"def",
"mask",
"(",
"self",
",",
"image",
",",
"nan_to_num",
"=",
"True",
",",
"layers",
"=",
"None",
",",
"in_global_mask",
"=",
"False",
")",
":",
"self",
".",
"set_mask",
"(",
"layers",
")",
"image",
"=",
"self",
".",
"get_image",
"(",
"image",
",",
"output",
"=",
"'vector'",
")",
"if",
"in_global_mask",
":",
"masked_data",
"=",
"image",
"[",
"self",
".",
"global_mask",
"]",
"masked_data",
"[",
"~",
"self",
".",
"get_mask",
"(",
"in_global_mask",
"=",
"True",
")",
"]",
"=",
"0",
"else",
":",
"masked_data",
"=",
"image",
"[",
"self",
".",
"current_mask",
"]",
"if",
"nan_to_num",
":",
"masked_data",
"=",
"np",
".",
"nan_to_num",
"(",
"masked_data",
")",
"return",
"masked_data"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
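A minimal sketch, assuming `masker` carries the hypothetical 'amygdala' layer added earlier (input filename also hypothetical):

>>> data = masker.mask('zstat.nii.gz')   # 1D vector of in-mask voxels, NaNs converted to 0
>>> data = masker.mask('zstat.nii.gz', layers='amygdala', in_global_mask=True)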
test
|
Masker.unmask
|
Reconstruct a masked vector into the original 3D volume.
Args:
data: The 1D vector to reconstruct. (Can also be a 2D vector where
the second dimension is time, but then output will always
be set to 'array'--i.e., a 4D image will be returned.)
layers: Which mask layers to use (specified as int, string, or list
of ints and strings). When None, applies the conjunction of all
layers. Note that the layers specified here must exactly match
the layers used in the mask() operation, otherwise the shape of
the mask will be incorrect and bad things will happen.
output: What kind of object to return. See options in get_image().
By default, returns an N-dimensional array of reshaped data.
|
neurosynth/base/mask.py
|
def unmask(self, data, layers=None, output='array'):
""" Reconstruct a masked vector into the original 3D volume.
Args:
data: The 1D vector to reconstruct. (Can also be a 2D vector where
the second dimension is time, but then output will always
be set to 'array'--i.e., a 4D image will be returned.)
layers: Which mask layers to use (specified as int, string, or list
of ints and strings). When None, applies the conjunction of all
layers. Note that the layers specified here must exactly match
the layers used in the mask() operation, otherwise the shape of
the mask will be incorrect and bad things will happen.
output: What kind of object to return. See options in get_image().
By default, returns an N-dimensional array of reshaped data.
"""
self.set_mask(layers)
if data.ndim == 2:
n_volumes = data.shape[1]
# Assume 1st dimension is voxels, 2nd is time
# but we generate x,y,z,t volume
image = np.zeros(self.full.shape + (n_volumes,))
image[self.current_mask, :] = data
image = np.reshape(image, self.volume.shape + (n_volumes,))
else:
# img = self.full.copy()
image = np.zeros(self.full.shape)
image[self.current_mask] = data
return self.get_image(image, output)
|
def unmask(self, data, layers=None, output='array'):
""" Reconstruct a masked vector into the original 3D volume.
Args:
data: The 1D vector to reconstruct. (Can also be a 2D vector where
the second dimension is time, but then output will always
be set to 'array'--i.e., a 4D image will be returned.)
layers: Which mask layers to use (specified as int, string, or list
of ints and strings). When None, applies the conjunction of all
layers. Note that the layers specified here must exactly match
the layers used in the mask() operation, otherwise the shape of
the mask will be incorrect and bad things will happen.
output: What kind of object to return. See options in get_image().
By default, returns an N-dimensional array of reshaped data.
"""
self.set_mask(layers)
if data.ndim == 2:
n_volumes = data.shape[1]
# Assume 1st dimension is voxels, 2nd is time
# but we generate x,y,z,t volume
image = np.zeros(self.full.shape + (n_volumes,))
image[self.current_mask, :] = data
image = np.reshape(image, self.volume.shape + (n_volumes,))
else:
# img = self.full.copy()
image = np.zeros(self.full.shape)
image[self.current_mask] = data
return self.get_image(image, output)
|
[
"Reconstruct",
"a",
"masked",
"vector",
"into",
"the",
"original",
"3D",
"volume",
".",
"Args",
":",
"data",
":",
"The",
"1D",
"vector",
"to",
"reconstruct",
".",
"(",
"Can",
"also",
"be",
"a",
"2D",
"vector",
"where",
"the",
"second",
"dimension",
"is",
"time",
"but",
"then",
"output",
"will",
"always",
"be",
"set",
"to",
"array",
"--",
"i",
".",
"e",
".",
"a",
"4D",
"image",
"will",
"be",
"returned",
".",
")",
"layers",
":",
"Which",
"mask",
"layers",
"to",
"use",
"(",
"specified",
"as",
"int",
"string",
"or",
"list",
"of",
"ints",
"and",
"strings",
")",
".",
"When",
"None",
"applies",
"the",
"conjunction",
"of",
"all",
"layers",
".",
"Note",
"that",
"the",
"layers",
"specified",
"here",
"must",
"exactly",
"match",
"the",
"layers",
"used",
"in",
"the",
"mask",
"()",
"operation",
"otherwise",
"the",
"shape",
"of",
"the",
"mask",
"will",
"be",
"incorrect",
"and",
"bad",
"things",
"will",
"happen",
".",
"output",
":",
"What",
"kind",
"of",
"object",
"to",
"return",
".",
"See",
"options",
"in",
"get_image",
"()",
".",
"By",
"default",
"returns",
"an",
"N",
"-",
"dimensional",
"array",
"of",
"reshaped",
"data",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L169-L195
|
[
"def",
"unmask",
"(",
"self",
",",
"data",
",",
"layers",
"=",
"None",
",",
"output",
"=",
"'array'",
")",
":",
"self",
".",
"set_mask",
"(",
"layers",
")",
"if",
"data",
".",
"ndim",
"==",
"2",
":",
"n_volumes",
"=",
"data",
".",
"shape",
"[",
"1",
"]",
"# Assume 1st dimension is voxels, 2nd is time",
"# but we generate x,y,z,t volume",
"image",
"=",
"np",
".",
"zeros",
"(",
"self",
".",
"full",
".",
"shape",
"+",
"(",
"n_volumes",
",",
")",
")",
"image",
"[",
"self",
".",
"current_mask",
",",
":",
"]",
"=",
"data",
"image",
"=",
"np",
".",
"reshape",
"(",
"image",
",",
"self",
".",
"volume",
".",
"shape",
"+",
"(",
"n_volumes",
",",
")",
")",
"else",
":",
"# img = self.full.copy()",
"image",
"=",
"np",
".",
"zeros",
"(",
"self",
".",
"full",
".",
"shape",
")",
"image",
"[",
"self",
".",
"current_mask",
"]",
"=",
"data",
"return",
"self",
".",
"get_image",
"(",
"image",
",",
"output",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
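mask() and unmask() are approximately inverse when called with the same layers, as the docstring warns; a hedged sketch:

>>> vec = masker.mask('zstat.nii.gz')          # hypothetical input file
>>> vol = masker.unmask(vec)                   # 3D array, zeros outside the mask
>>> img = masker.unmask(vec, output='image')   # NiBabel image instead of an array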
test
|
Masker.get_mask
|
Return the mask formed by taking the conjunction of all specified
layers.
Args:
layers: Which layers to include. See documentation for add() for
format.
in_global_mask: Whether to return the mask indexed within the
global mask (i.e., self.volume) rather than in the full image space.
|
neurosynth/base/mask.py
|
def get_mask(self, layers=None, output='vector', in_global_mask=True):
""" Set the current mask by taking the conjunction of all specified
layers.
Args:
layers: Which layers to include. See documentation for add() for
format.
in_global_mask: Whether to return the mask indexed within the
global mask (i.e., self.volume) rather than in the full image space.
"""
if in_global_mask:
output = 'vector'
if layers is None:
layers = self.layers.keys()
elif not isinstance(layers, list):
layers = [layers]
layers = map(lambda x: x if isinstance(x, string_types)
else self.stack[x], layers)
layers = [self.layers[l] for l in layers if l in self.layers]
# Always include the original volume
layers.append(self.full)
layers = np.vstack(layers).T.astype(bool)
mask = layers.all(axis=1)
mask = self.get_image(mask, output)
return mask[self.global_mask] if in_global_mask else mask
|
def get_mask(self, layers=None, output='vector', in_global_mask=True):
""" Set the current mask by taking the conjunction of all specified
layers.
Args:
layers: Which layers to include. See documentation for add() for
format.
in_global_mask: Whether to return the mask indexed within the
global mask (i.e., self.volume) rather than in the full image space.
"""
if in_global_mask:
output = 'vector'
if layers is None:
layers = self.layers.keys()
elif not isinstance(layers, list):
layers = [layers]
layers = map(lambda x: x if isinstance(x, string_types)
else self.stack[x], layers)
layers = [self.layers[l] for l in layers if l in self.layers]
# Always include the original volume
layers.append(self.full)
layers = np.vstack(layers).T.astype(bool)
mask = layers.all(axis=1)
mask = self.get_image(mask, output)
return mask[self.global_mask] if in_global_mask else mask
|
[
"Set",
"the",
"current",
"mask",
"by",
"taking",
"the",
"conjunction",
"of",
"all",
"specified",
"layers",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/mask.py#L197-L224
|
[
"def",
"get_mask",
"(",
"self",
",",
"layers",
"=",
"None",
",",
"output",
"=",
"'vector'",
",",
"in_global_mask",
"=",
"True",
")",
":",
"if",
"in_global_mask",
":",
"output",
"=",
"'vector'",
"if",
"layers",
"is",
"None",
":",
"layers",
"=",
"self",
".",
"layers",
".",
"keys",
"(",
")",
"elif",
"not",
"isinstance",
"(",
"layers",
",",
"list",
")",
":",
"layers",
"=",
"[",
"layers",
"]",
"layers",
"=",
"map",
"(",
"lambda",
"x",
":",
"x",
"if",
"isinstance",
"(",
"x",
",",
"string_types",
")",
"else",
"self",
".",
"stack",
"[",
"x",
"]",
",",
"layers",
")",
"layers",
"=",
"[",
"self",
".",
"layers",
"[",
"l",
"]",
"for",
"l",
"in",
"layers",
"if",
"l",
"in",
"self",
".",
"layers",
"]",
"# Always include the original volume",
"layers",
".",
"append",
"(",
"self",
".",
"full",
")",
"layers",
"=",
"np",
".",
"vstack",
"(",
"layers",
")",
".",
"T",
".",
"astype",
"(",
"bool",
")",
"mask",
"=",
"layers",
".",
"all",
"(",
"axis",
"=",
"1",
")",
"mask",
"=",
"self",
".",
"get_image",
"(",
"mask",
",",
"output",
")",
"return",
"mask",
"[",
"self",
".",
"global_mask",
"]",
"if",
"in_global_mask",
"else",
"mask"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
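A sketch of retrieving a conjunction mask (the layer name is hypothetical):

>>> m = masker.get_mask(['amygdala'], in_global_mask=False)   # conjunction of the layer and the original volume
>>> m = masker.get_mask()   # no argument: conjunction of all layers, returned in global-mask space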
test
|
get_sphere
|
Return all points within r mm of coordinates. Generates a cube
and then discards all points outside sphere. Only returns values that
fall within the dimensions of the image.
|
neurosynth/base/imageutils.py
|
def get_sphere(coords, r=4, vox_dims=(2, 2, 2), dims=(91, 109, 91)):
""" # Return all points within r mm of coordinates. Generates a cube
and then discards all points outside sphere. Only returns values that
fall within the dimensions of the image."""
r = float(r)
xx, yy, zz = [slice(-r / vox_dims[i], r / vox_dims[
i] + 0.01, 1) for i in range(len(coords))]
cube = np.vstack([row.ravel() for row in np.mgrid[xx, yy, zz]])
sphere = cube[:, np.sum(np.dot(np.diag(
vox_dims), cube) ** 2, 0) ** .5 <= r]
sphere = np.round(sphere.T + coords)
return sphere[(np.min(sphere, 1) >= 0) &
(np.max(np.subtract(sphere, dims), 1) <= -1), :].astype(int)
|
def get_sphere(coords, r=4, vox_dims=(2, 2, 2), dims=(91, 109, 91)):
""" # Return all points within r mm of coordinates. Generates a cube
and then discards all points outside sphere. Only returns values that
fall within the dimensions of the image."""
r = float(r)
xx, yy, zz = [slice(-r / vox_dims[i], r / vox_dims[
i] + 0.01, 1) for i in range(len(coords))]
cube = np.vstack([row.ravel() for row in np.mgrid[xx, yy, zz]])
sphere = cube[:, np.sum(np.dot(np.diag(
vox_dims), cube) ** 2, 0) ** .5 <= r]
sphere = np.round(sphere.T + coords)
return sphere[(np.min(sphere, 1) >= 0) &
(np.max(np.subtract(sphere, dims), 1) <= -1), :].astype(int)
|
[
"#",
"Return",
"all",
"points",
"within",
"r",
"mm",
"of",
"coordinates",
".",
"Generates",
"a",
"cube",
"and",
"then",
"discards",
"all",
"points",
"outside",
"sphere",
".",
"Only",
"returns",
"values",
"that",
"fall",
"within",
"the",
"dimensions",
"of",
"the",
"image",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L12-L24
|
[
"def",
"get_sphere",
"(",
"coords",
",",
"r",
"=",
"4",
",",
"vox_dims",
"=",
"(",
"2",
",",
"2",
",",
"2",
")",
",",
"dims",
"=",
"(",
"91",
",",
"109",
",",
"91",
")",
")",
":",
"r",
"=",
"float",
"(",
"r",
")",
"xx",
",",
"yy",
",",
"zz",
"=",
"[",
"slice",
"(",
"-",
"r",
"/",
"vox_dims",
"[",
"i",
"]",
",",
"r",
"/",
"vox_dims",
"[",
"i",
"]",
"+",
"0.01",
",",
"1",
")",
"for",
"i",
"in",
"range",
"(",
"len",
"(",
"coords",
")",
")",
"]",
"cube",
"=",
"np",
".",
"vstack",
"(",
"[",
"row",
".",
"ravel",
"(",
")",
"for",
"row",
"in",
"np",
".",
"mgrid",
"[",
"xx",
",",
"yy",
",",
"zz",
"]",
"]",
")",
"sphere",
"=",
"cube",
"[",
":",
",",
"np",
".",
"sum",
"(",
"np",
".",
"dot",
"(",
"np",
".",
"diag",
"(",
"vox_dims",
")",
",",
"cube",
")",
"**",
"2",
",",
"0",
")",
"**",
".5",
"<=",
"r",
"]",
"sphere",
"=",
"np",
".",
"round",
"(",
"sphere",
".",
"T",
"+",
"coords",
")",
"return",
"sphere",
"[",
"(",
"np",
".",
"min",
"(",
"sphere",
",",
"1",
")",
">=",
"0",
")",
"&",
"(",
"np",
".",
"max",
"(",
"np",
".",
"subtract",
"(",
"sphere",
",",
"dims",
")",
",",
"1",
")",
"<=",
"-",
"1",
")",
",",
":",
"]",
".",
"astype",
"(",
"int",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
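Since get_sphere() is a pure function, it is easy to sanity-check directly; a sketch assuming the default 2 mm, 91 x 109 x 91 grid (coordinates are in voxel/matrix space, not world space):

>>> vox = get_sphere((45, 54, 45), r=6)   # all voxels within 6 mm, clipped to (91, 109, 91)
>>> vox.shape[1]
3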
test
|
map_peaks_to_image
|
Take a set of discrete foci (i.e., 2-D array of xyz coordinates)
and generate a corresponding image, convolving each focus with a
hard sphere of radius r.
|
neurosynth/base/imageutils.py
|
def map_peaks_to_image(peaks, r=4, vox_dims=(2, 2, 2), dims=(91, 109, 91),
header=None):
""" Take a set of discrete foci (i.e., 2-D array of xyz coordinates)
and generate a corresponding image, convolving each focus with a
hard sphere of radius r."""
data = np.zeros(dims)
for p in peaks:
valid = get_sphere(p, r, vox_dims, dims)
valid = valid[:, ::-1]
data[tuple(valid.T)] = 1
return nifti1.Nifti1Image(data, None, header=header)
|
def map_peaks_to_image(peaks, r=4, vox_dims=(2, 2, 2), dims=(91, 109, 91),
header=None):
""" Take a set of discrete foci (i.e., 2-D array of xyz coordinates)
and generate a corresponding image, convolving each focus with a
hard sphere of radius r."""
data = np.zeros(dims)
for p in peaks:
valid = get_sphere(p, r, vox_dims, dims)
valid = valid[:, ::-1]
data[tuple(valid.T)] = 1
return nifti1.Nifti1Image(data, None, header=header)
|
[
"Take",
"a",
"set",
"of",
"discrete",
"foci",
"(",
"i",
".",
"e",
".",
"2",
"-",
"D",
"array",
"of",
"xyz",
"coordinates",
")",
"and",
"generate",
"a",
"corresponding",
"image",
"convolving",
"each",
"focus",
"with",
"a",
"hard",
"sphere",
"of",
"radius",
"r",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L27-L37
|
[
"def",
"map_peaks_to_image",
"(",
"peaks",
",",
"r",
"=",
"4",
",",
"vox_dims",
"=",
"(",
"2",
",",
"2",
",",
"2",
")",
",",
"dims",
"=",
"(",
"91",
",",
"109",
",",
"91",
")",
",",
"header",
"=",
"None",
")",
":",
"data",
"=",
"np",
".",
"zeros",
"(",
"dims",
")",
"for",
"p",
"in",
"peaks",
":",
"valid",
"=",
"get_sphere",
"(",
"p",
",",
"r",
",",
"vox_dims",
",",
"dims",
")",
"valid",
"=",
"valid",
"[",
":",
",",
":",
":",
"-",
"1",
"]",
"data",
"[",
"tuple",
"(",
"valid",
".",
"T",
")",
"]",
"=",
"1",
"return",
"nifti1",
".",
"Nifti1Image",
"(",
"data",
",",
"None",
",",
"header",
"=",
"header",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
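A hedged sketch, assuming peaks are supplied in the matrix space expected by get_sphere() (note the column reversal applied internally; the header is omitted here, so the affine is left unset):

>>> img = map_peaks_to_image([(45, 54, 45)], r=6)   # one hard 6 mm sphere
>>> (img.get_data() == 1).sum() > 0
True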
test
|
load_imgs
|
Load multiple images from file into an ndarray.
Args:
filenames: A single filename or list of filenames pointing to valid
images.
masker: A Masker instance.
nan_to_num: Optional boolean indicating whether to convert NaNs to zero.
Returns:
An m x n 2D numpy array, where m = number of voxels in mask and
n = number of images passed.
|
neurosynth/base/imageutils.py
|
def load_imgs(filenames, masker, nan_to_num=True):
""" Load multiple images from file into an ndarray.
Args:
filenames: A single filename or list of filenames pointing to valid
images.
masker: A Masker instance.
nan_to_num: Optional boolean indicating whether to convert NaNs to zero.
Returns:
An m x n 2D numpy array, where m = number of voxels in mask and
n = number of images passed.
"""
if isinstance(filenames, string_types):
filenames = [filenames]
data = np.zeros((masker.n_vox_in_mask, len(filenames)))
for i, f in enumerate(filenames):
data[:, i] = masker.mask(f, nan_to_num)
return data
|
def load_imgs(filenames, masker, nan_to_num=True):
""" Load multiple images from file into an ndarray.
Args:
filenames: A single filename or list of filenames pointing to valid
images.
masker: A Masker instance.
nan_to_num: Optional boolean indicating whether to convert NaNs to zero.
Returns:
An m x n 2D numpy array, where m = number of voxels in mask and
n = number of images passed.
"""
if isinstance(filenames, string_types):
filenames = [filenames]
data = np.zeros((masker.n_vox_in_mask, len(filenames)))
for i, f in enumerate(filenames):
data[:, i] = masker.mask(f, nan_to_num)
return data
|
[
"Load",
"multiple",
"images",
"from",
"file",
"into",
"an",
"ndarray",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L40-L58
|
[
"def",
"load_imgs",
"(",
"filenames",
",",
"masker",
",",
"nan_to_num",
"=",
"True",
")",
":",
"if",
"isinstance",
"(",
"filenames",
",",
"string_types",
")",
":",
"filenames",
"=",
"[",
"filenames",
"]",
"data",
"=",
"np",
".",
"zeros",
"(",
"(",
"masker",
".",
"n_vox_in_mask",
",",
"len",
"(",
"filenames",
")",
")",
")",
"for",
"i",
",",
"f",
"in",
"enumerate",
"(",
"filenames",
")",
":",
"data",
"[",
":",
",",
"i",
"]",
"=",
"masker",
".",
"mask",
"(",
"f",
",",
"nan_to_num",
")",
"return",
"data"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
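A minimal sketch (filenames hypothetical; `masker` an initialized Masker as above):

>>> data = load_imgs(['z1.nii.gz', 'z2.nii.gz'], masker)
>>> data.shape   # (n_vox_in_mask, 2): one column per image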
test
|
save_img
|
Save a vectorized image to file.
|
neurosynth/base/imageutils.py
|
def save_img(data, filename, masker, header=None):
""" Save a vectorized image to file. """
if not header:
header = masker.get_header()
header.set_data_dtype(data.dtype) # Avoids loss of precision
# Update min/max -- this should happen on save, but doesn't seem to
header['cal_max'] = data.max()
header['cal_min'] = data.min()
img = nifti1.Nifti1Image(masker.unmask(data), None, header)
img.to_filename(filename)
|
def save_img(data, filename, masker, header=None):
""" Save a vectorized image to file. """
if not header:
header = masker.get_header()
header.set_data_dtype(data.dtype) # Avoids loss of precision
# Update min/max -- this should happen on save, but doesn't seem to
header['cal_max'] = data.max()
header['cal_min'] = data.min()
img = nifti1.Nifti1Image(masker.unmask(data), None, header)
img.to_filename(filename)
|
[
"Save",
"a",
"vectorized",
"image",
"to",
"file",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L61-L70
|
[
"def",
"save_img",
"(",
"data",
",",
"filename",
",",
"masker",
",",
"header",
"=",
"None",
")",
":",
"if",
"not",
"header",
":",
"header",
"=",
"masker",
".",
"get_header",
"(",
")",
"header",
".",
"set_data_dtype",
"(",
"data",
".",
"dtype",
")",
"# Avoids loss of precision",
"# Update min/max -- this should happen on save, but doesn't seem to",
"header",
"[",
"'cal_max'",
"]",
"=",
"data",
".",
"max",
"(",
")",
"header",
"[",
"'cal_min'",
"]",
"=",
"data",
".",
"min",
"(",
")",
"img",
"=",
"nifti1",
".",
"Nifti1Image",
"(",
"masker",
".",
"unmask",
"(",
"data",
")",
",",
"None",
",",
"header",
")",
"img",
".",
"to_filename",
"(",
"filename",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
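save_img() composes naturally with mask(): vectorize, operate, then write back out. A hedged sketch with hypothetical filenames:

>>> vec = masker.mask('zstat.nii.gz')
>>> save_img(vec * 2, 'zstat_doubled.nii.gz', masker)   # unmasked to 3D and written as NIfTI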
test
|
threshold_img
|
Threshold data, setting all values in the array above/below threshold
to zero.
Args:
data (ndarray): The image data to threshold.
threshold (float): Numeric threshold to apply to image.
mask (ndarray): Optional 1D-array with the same length as the data. If
passed, the threshold is first applied to the mask, and the
resulting indices are used to threshold the data. This is primarily
useful when, e.g., applying a statistical threshold to a z-value
image based on a p-value threshold.
mask_out (str): Thresholding direction. Can be 'below' the threshold
(default) or 'above' the threshold. Note: use 'above' when masking
based on p values.
|
neurosynth/base/imageutils.py
|
def threshold_img(data, threshold, mask=None, mask_out='below'):
""" Threshold data, setting all values in the array above/below threshold
to zero.
Args:
data (ndarray): The image data to threshold.
threshold (float): Numeric threshold to apply to image.
mask (ndarray): Optional 1D-array with the same length as the data. If
passed, the threshold is first applied to the mask, and the
resulting indices are used to threshold the data. This is primarily
useful when, e.g., applying a statistical threshold to a z-value
image based on a p-value threshold.
mask_out (str): Thresholding direction. Can be 'below' the threshold
(default) or 'above' the threshold. Note: use 'above' when masking
based on p values.
"""
if mask is not None:
mask = threshold_img(mask, threshold, mask_out=mask_out)
return data * mask.astype(bool)
if mask_out.startswith('b'):
data[data < threshold] = 0
elif mask_out.startswith('a'):
data[data > threshold] = 0
return data
|
def threshold_img(data, threshold, mask=None, mask_out='below'):
""" Threshold data, setting all values in the array above/below threshold
to zero.
Args:
data (ndarray): The image data to threshold.
threshold (float): Numeric threshold to apply to image.
mask (ndarray): Optional 1D-array with the same length as the data. If
passed, the threshold is first applied to the mask, and the
resulting indices are used to threshold the data. This is primarily
useful when, e.g., applying a statistical threshold to a z-value
image based on a p-value threshold.
mask_out (str): Thresholding direction. Can be 'below' the threshold
(default) or 'above' the threshold. Note: use 'above' when masking
based on p values.
"""
if mask is not None:
mask = threshold_img(mask, threshold, mask_out=mask_out)
return data * mask.astype(bool)
if mask_out.startswith('b'):
data[data < threshold] = 0
elif mask_out.startswith('a'):
data[data > threshold] = 0
return data
|
[
"Threshold",
"data",
"setting",
"all",
"values",
"in",
"the",
"array",
"above",
"/",
"below",
"threshold",
"to",
"zero",
".",
"Args",
":",
"data",
"(",
"ndarray",
")",
":",
"The",
"image",
"data",
"to",
"threshold",
".",
"threshold",
"(",
"float",
")",
":",
"Numeric",
"threshold",
"to",
"apply",
"to",
"image",
".",
"mask",
"(",
"ndarray",
")",
":",
"Optional",
"1D",
"-",
"array",
"with",
"the",
"same",
"length",
"as",
"the",
"data",
".",
"If",
"passed",
"the",
"threshold",
"is",
"first",
"applied",
"to",
"the",
"mask",
"and",
"the",
"resulting",
"indices",
"are",
"used",
"to",
"threshold",
"the",
"data",
".",
"This",
"is",
"primarily",
"useful",
"when",
"e",
".",
"g",
".",
"applying",
"a",
"statistical",
"threshold",
"to",
"a",
"z",
"-",
"value",
"image",
"based",
"on",
"a",
"p",
"-",
"value",
"threshold",
".",
"mask_out",
"(",
"str",
")",
":",
"Thresholding",
"direction",
".",
"Can",
"be",
"below",
"the",
"threshold",
"(",
"default",
")",
"or",
"above",
"the",
"threshold",
".",
"Note",
":",
"use",
"above",
"when",
"masking",
"based",
"on",
"p",
"values",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L73-L95
|
[
"def",
"threshold_img",
"(",
"data",
",",
"threshold",
",",
"mask",
"=",
"None",
",",
"mask_out",
"=",
"'below'",
")",
":",
"if",
"mask",
"is",
"not",
"None",
":",
"mask",
"=",
"threshold_img",
"(",
"mask",
",",
"threshold",
",",
"mask_out",
"=",
"mask_out",
")",
"return",
"data",
"*",
"mask",
".",
"astype",
"(",
"bool",
")",
"if",
"mask_out",
".",
"startswith",
"(",
"'b'",
")",
":",
"data",
"[",
"data",
"<",
"threshold",
"]",
"=",
"0",
"elif",
"mask_out",
".",
"startswith",
"(",
"'a'",
")",
":",
"data",
"[",
"data",
">",
"threshold",
"]",
"=",
"0",
"return",
"data"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
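Two typical calls, following the docstring's note on direction (`z` and `p` are hypothetical, equal-length 1D arrays of z- and p-values):

>>> z = threshold_img(z, 3.1)                              # zero out z-values below 3.1
>>> z = threshold_img(z, 0.05, mask=p, mask_out='above')   # keep voxels whose p-value survives the 0.05 cut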
test
|
create_grid
|
Creates an image containing labeled cells in a 3D grid.
Args:
image: String or nibabel image. The image used to define the grid
dimensions. Also used to define the mask to apply to the grid.
Only voxels with non-zero values in the mask will be retained; all
other voxels will be zeroed out in the returned image.
scale: The scaling factor which controls the grid size. Value reflects
diameter of cube in voxels.
apply_mask: Boolean indicating whether or not to zero out voxels not in
image.
save_file: Optional string giving the path to save image to. Image
written out is a standard Nifti image. If save_file is None, no
image is written.
Returns:
A nibabel image with the same dimensions as the input image. All voxels
in each cell in the 3D grid are assigned the same non-zero label.
|
neurosynth/base/imageutils.py
|
def create_grid(image, scale=4, apply_mask=True, save_file=None):
""" Creates an image containing labeled cells in a 3D grid.
Args:
image: String or nibabel image. The image used to define the grid
dimensions. Also used to define the mask to apply to the grid.
Only voxels with non-zero values in the mask will be retained; all
other voxels will be zeroed out in the returned image.
scale: The scaling factor which controls the grid size. Value reflects
diameter of cube in voxels.
apply_mask: Boolean indicating whether or not to zero out voxels not in
image.
save_file: Optional string giving the path to save image to. Image
written out is a standard Nifti image. If save_file is None, no
image is written.
Returns:
A nibabel image with the same dimensions as the input image. All voxels
in each cell in the 3D grid are assigned the same non-zero label.
"""
if isinstance(image, string_types):
image = nb.load(image)
# create a list of cluster centers
centers = []
x_length, y_length, z_length = image.shape
for x in range(0, x_length, scale):
for y in range(0, y_length, scale):
for z in range(0, z_length, scale):
centers.append((x, y, z))
# create a box around each center with the diameter equal to the scaling
# factor
grid = np.zeros(image.shape)
for (i, (x, y, z)) in enumerate(centers):
for mov_x in range((-scale + 1) // 2, (scale + 1) // 2):
for mov_y in range((-scale + 1) // 2, (scale + 1) // 2):
for mov_z in range((-scale + 1) // 2, (scale + 1) // 2):
try: # Ignore voxels outside bounds of image
grid[x + mov_x, y + mov_y, z + mov_z] = i + 1
except:
pass
if apply_mask:
mask = image
if isinstance(mask, string_types):
mask = nb.load(mask)
if type(mask).__module__ != np.__name__:
mask = mask.get_data()
grid[~mask.astype(bool)] = 0.0
grid = nb.Nifti1Image(grid, image.get_affine(), image.get_header())
if save_file is not None:
nb.save(grid, save_file)
return grid
|
def create_grid(image, scale=4, apply_mask=True, save_file=None):
""" Creates an image containing labeled cells in a 3D grid.
Args:
image: String or nibabel image. The image used to define the grid
dimensions. Also used to define the mask to apply to the grid.
Only voxels with non-zero values in the mask will be retained; all
other voxels will be zeroed out in the returned image.
scale: The scaling factor which controls the grid size. Value reflects
diameter of cube in voxels.
apply_mask: Boolean indicating whether or not to zero out voxels not in
image.
save_file: Optional string giving the path to save image to. Image
written out is a standard Nifti image. If save_file is None, no
image is written.
Returns:
A nibabel image with the same dimensions as the input image. All voxels
in each cell in the 3D grid are assigned the same non-zero label.
"""
if isinstance(image, string_types):
image = nb.load(image)
# create a list of cluster centers
centers = []
x_length, y_length, z_length = image.shape
for x in range(0, x_length, scale):
for y in range(0, y_length, scale):
for z in range(0, z_length, scale):
centers.append((x, y, z))
# create a box around each center with the diameter equal to the scaling
# factor
grid = np.zeros(image.shape)
for (i, (x, y, z)) in enumerate(centers):
for mov_x in range((-scale + 1) // 2, (scale + 1) // 2):
for mov_y in range((-scale + 1) // 2, (scale + 1) // 2):
for mov_z in range((-scale + 1) // 2, (scale + 1) // 2):
try: # Ignore voxels outside bounds of image
grid[x + mov_x, y + mov_y, z + mov_z] = i + 1
except:
pass
if apply_mask:
mask = image
if isinstance(mask, string_types):
mask = nb.load(mask)
if type(mask).__module__ != np.__name__:
mask = mask.get_data()
grid[~mask.astype(bool)] = 0.0
grid = nb.Nifti1Image(grid, image.get_affine(), image.get_header())
if save_file is not None:
nb.save(grid, save_file)
return grid
|
[
"Creates",
"an",
"image",
"containing",
"labeled",
"cells",
"in",
"a",
"3D",
"grid",
".",
"Args",
":",
"image",
":",
"String",
"or",
"nibabel",
"image",
".",
"The",
"image",
"used",
"to",
"define",
"the",
"grid",
"dimensions",
".",
"Also",
"used",
"to",
"define",
"the",
"mask",
"to",
"apply",
"to",
"the",
"grid",
".",
"Only",
"voxels",
"with",
"non",
"-",
"zero",
"values",
"in",
"the",
"mask",
"will",
"be",
"retained",
";",
"all",
"other",
"voxels",
"will",
"be",
"zeroed",
"out",
"in",
"the",
"returned",
"image",
".",
"scale",
":",
"The",
"scaling",
"factor",
"which",
"controls",
"the",
"grid",
"size",
".",
"Value",
"reflects",
"diameter",
"of",
"cube",
"in",
"voxels",
".",
"apply_mask",
":",
"Boolean",
"indicating",
"whether",
"or",
"not",
"to",
"zero",
"out",
"voxels",
"not",
"in",
"image",
".",
"save_file",
":",
"Optional",
"string",
"giving",
"the",
"path",
"to",
"save",
"image",
"to",
".",
"Image",
"written",
"out",
"is",
"a",
"standard",
"Nifti",
"image",
".",
"If",
"save_file",
"is",
"None",
"no",
"image",
"is",
"written",
".",
"Returns",
":",
"A",
"nibabel",
"image",
"with",
"the",
"same",
"dimensions",
"as",
"the",
"input",
"image",
".",
"All",
"voxels",
"in",
"each",
"cell",
"in",
"the",
"3D",
"grid",
"are",
"assigned",
"the",
"same",
"non",
"-",
"zero",
"label",
"."
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/base/imageutils.py#L98-L152
|
[
"def",
"create_grid",
"(",
"image",
",",
"scale",
"=",
"4",
",",
"apply_mask",
"=",
"True",
",",
"save_file",
"=",
"None",
")",
":",
"if",
"isinstance",
"(",
"image",
",",
"string_types",
")",
":",
"image",
"=",
"nb",
".",
"load",
"(",
"image",
")",
"# create a list of cluster centers",
"centers",
"=",
"[",
"]",
"x_length",
",",
"y_length",
",",
"z_length",
"=",
"image",
".",
"shape",
"for",
"x",
"in",
"range",
"(",
"0",
",",
"x_length",
",",
"scale",
")",
":",
"for",
"y",
"in",
"range",
"(",
"0",
",",
"y_length",
",",
"scale",
")",
":",
"for",
"z",
"in",
"range",
"(",
"0",
",",
"z_length",
",",
"scale",
")",
":",
"centers",
".",
"append",
"(",
"(",
"x",
",",
"y",
",",
"z",
")",
")",
"# create a box around each center with the diameter equal to the scaling",
"# factor",
"grid",
"=",
"np",
".",
"zeros",
"(",
"image",
".",
"shape",
")",
"for",
"(",
"i",
",",
"(",
"x",
",",
"y",
",",
"z",
")",
")",
"in",
"enumerate",
"(",
"centers",
")",
":",
"for",
"mov_x",
"in",
"range",
"(",
"(",
"-",
"scale",
"+",
"1",
")",
"//",
"2",
",",
"(",
"scale",
"+",
"1",
")",
"//",
"2",
")",
":",
"for",
"mov_y",
"in",
"range",
"(",
"(",
"-",
"scale",
"+",
"1",
")",
"//",
"2",
",",
"(",
"scale",
"+",
"1",
")",
"//",
"2",
")",
":",
"for",
"mov_z",
"in",
"range",
"(",
"(",
"-",
"scale",
"+",
"1",
")",
"//",
"2",
",",
"(",
"scale",
"+",
"1",
")",
"//",
"2",
")",
":",
"try",
":",
"# Ignore voxels outside bounds of image",
"grid",
"[",
"x",
"+",
"mov_x",
",",
"y",
"+",
"mov_y",
",",
"z",
"+",
"mov_z",
"]",
"=",
"i",
"+",
"1",
"except",
":",
"pass",
"if",
"apply_mask",
":",
"mask",
"=",
"image",
"if",
"isinstance",
"(",
"mask",
",",
"string_types",
")",
":",
"mask",
"=",
"nb",
".",
"load",
"(",
"mask",
")",
"if",
"type",
"(",
"mask",
")",
".",
"__module__",
"!=",
"np",
".",
"__name__",
":",
"mask",
"=",
"mask",
".",
"get_data",
"(",
")",
"grid",
"[",
"~",
"mask",
".",
"astype",
"(",
"bool",
")",
"]",
"=",
"0.0",
"grid",
"=",
"nb",
".",
"Nifti1Image",
"(",
"grid",
",",
"image",
".",
"get_affine",
"(",
")",
",",
"image",
".",
"get_header",
"(",
")",
")",
"if",
"save_file",
"is",
"not",
"None",
":",
"nb",
".",
"save",
"(",
"grid",
",",
"save_file",
")",
"return",
"grid"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
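A hedged sketch, assuming a 2 mm brain-mask image (the filename is hypothetical):

>>> grid = create_grid('MNI152_T1_2mm_brain_mask.nii.gz', scale=6)   # 6-voxel (12 mm) cells
>>> grid.shape   # same dimensions as the input image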
test
|
set_logging_level
|
Set neurosynth's logging level
Args
level : str
Name of the logging level (warning, error, info, etc.) known
to the logging module. If no level is provided, it is read
from the environment variable NEUROSYNTH_LOGLEVEL
|
neurosynth/__init__.py
|
def set_logging_level(level=None):
"""Set neurosynth's logging level
Args
level : str
Name of the logging level (warning, error, info, etc.) known
to the logging module. If no level is provided, it is read
from the environment variable NEUROSYNTH_LOGLEVEL
"""
if level is None:
level = os.environ.get('NEUROSYNTH_LOGLEVEL', 'warn')
if level is not None:
logger.setLevel(getattr(logging, level.upper()))
return logger.getEffectiveLevel()
|
def set_logging_level(level=None):
"""Set neurosynth's logging level
Args
level : str
Name of the logging level (warning, error, info, etc.) known
to the logging module. If no level is provided, it is read
from the environment variable NEUROSYNTH_LOGLEVEL
"""
if level is None:
level = os.environ.get('NEUROSYNTH_LOGLEVEL', 'warn')
if level is not None:
logger.setLevel(getattr(logging, level.upper()))
return logger.getEffectiveLevel()
|
[
"Set",
"neurosynth",
"s",
"logging",
"level"
] |
neurosynth/neurosynth
|
python
|
https://github.com/neurosynth/neurosynth/blob/948ce7edce15d7df693446e76834e0c23bfe8f11/neurosynth/__init__.py#L25-L38
|
[
"def",
"set_logging_level",
"(",
"level",
"=",
"None",
")",
":",
"if",
"level",
"is",
"None",
":",
"level",
"=",
"os",
".",
"environ",
".",
"get",
"(",
"'NEUROSYNTH_LOGLEVEL'",
",",
"'warn'",
")",
"if",
"level",
"is",
"not",
"None",
":",
"logger",
".",
"setLevel",
"(",
"getattr",
"(",
"logging",
",",
"level",
".",
"upper",
"(",
")",
")",
")",
"return",
"logger",
".",
"getEffectiveLevel",
"(",
")"
] |
948ce7edce15d7df693446e76834e0c23bfe8f11
|
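For example (the numeric return value is the stdlib's logging.INFO):

>>> set_logging_level('info')
20

The same effect can be achieved per-process via the environment, e.g. NEUROSYNTH_LOGLEVEL=debug.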
test
|
expand_address
|
Expand the given address into one or more normalized strings.
Required
--------
@param address: the address as either Unicode or a UTF-8 encoded string
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param address_components: an integer (bit-set) of address component expansions
to use e.g. ADDRESS_NAME | ADDRESS_STREET would use
only expansions which apply to venue names or streets.
@param latin_ascii: use the Latin to ASCII transliterator, which normalizes e.g. æ => ae
@param transliterate: use any available transliterators for non-Latin scripts, e.g.
for the Greek phrase διαφορετικούς becomes diaphoretikoús̱
@param strip_accents: strip accented characters e.g. é => e, ç => c. This loses some
information in various languages, but in general we want
@param decompose: perform Unicode normalization (NFD form)
@param lowercase: UTF-8 lowercase the string
@param trim_string: trim spaces on either side of the string
@param replace_word_hyphens: add version of the string replacing hyphens with space
@param delete_word_hyphens: add version of the string with hyphens deleted
@param replace_numeric_hyphens: add version of the string with numeric hyphens replaced
e.g. 12345-6789 => 12345 6789
@param delete_numeric_hyphens: add version of the string with numeric hyphens removed
e.g. 12345-6789 => 123456789
@param split_alpha_from_numeric: split tokens like CR17 into CR 17, helps with expansion
of certain types of highway abbreviations
@param delete_final_periods: remove final periods on abbreviations e.g. St. => St
@param delete_acronym_periods: remove periods in acronyms e.g. U.S.A. => USA
@param drop_english_possessives: normalize possessives e.g. Mark's => Marks
@param delete_apostrophes: delete other types of apostrophes e.g. O'Malley => OMalley
@param expand_numex: converts numeric expressions e.g. Twenty sixth => 26th,
using either the supplied languages or the result of
automated language classification.
@param roman_numerals: normalize Roman numerals e.g. IX => 9. Since these can be
ambiguous (especially I and V), turning this on simply
adds another version of the string if any potential
Roman numerals are found.
|
postal/expand.py
|
def expand_address(address, languages=None, **kw):
"""
Expand the given address into one or more normalized strings.
Required
--------
@param address: the address as either Unicode or a UTF-8 encoded string
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param address_components: an integer (bit-set) of address component expansions
to use e.g. ADDRESS_NAME | ADDRESS_STREET would use
only expansions which apply to venue names or streets.
@param latin_ascii: use the Latin to ASCII transliterator, which normalizes e.g. æ => ae
@param transliterate: use any available transliterators for non-Latin scripts, e.g.
for the Greek phrase διαφορετικούς becomes diaphoretikoús̱
@param strip_accents: strip accented characters e.g. é => e, ç => c. This loses some
information in various languages, but in general we want
@param decompose: perform Unicode normalization (NFD form)
@param lowercase: UTF-8 lowercase the string
@param trim_string: trim spaces on either side of the string
@param replace_word_hyphens: add version of the string replacing hyphens with space
@param delete_word_hyphens: add version of the string with hyphens deleted
@param replace_numeric_hyphens: add version of the string with numeric hyphens replaced
e.g. 12345-6789 => 12345 6789
@param delete_numeric_hyphens: add version of the string with numeric hyphens removed
e.g. 12345-6789 => 123456789
@param split_alpha_from_numeric: split tokens like CR17 into CR 17, helps with expansion
of certain types of highway abbreviations
@param delete_final_periods: remove final periods on abbreviations e.g. St. => St
@param delete_acronym_periods: remove periods in acronyms e.g. U.S.A. => USA
@param drop_english_possessives: normalize possessives e.g. Mark's => Marks
@param delete_apostrophes: delete other types of apostrophes e.g. O'Malley => OMalley
@param expand_numex: converts numeric expressions e.g. Twenty sixth => 26th,
using either the supplied languages or the result of
automated language classification.
@param roman_numerals: normalize Roman numerals e.g. IX => 9. Since these can be
ambiguous (especially I and V), turning this on simply
adds another version of the string if any potential
Roman numerals are found.
"""
address = safe_decode(address, 'utf-8')
return _expand.expand_address(address, languages=languages, **kw)
|
def expand_address(address, languages=None, **kw):
"""
Expand the given address into one or more normalized strings.
Required
--------
@param address: the address as either Unicode or a UTF-8 encoded string
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param address_components: an integer (bit-set) of address component expansions
to use e.g. ADDRESS_NAME | ADDRESS_STREET would use
only expansions which apply to venue names or streets.
@param latin_ascii: use the Latin to ASCII transliterator, which normalizes e.g. æ => ae
@param transliterate: use any available transliterators for non-Latin scripts, e.g.
for the Greek phrase διαφορετικούς becomes diaphoretikoús̱
@param strip_accents: strip accented characters e.g. é => e, ç => c. This loses some
information in various languages, but in general we want
@param decompose: perform Unicode normalization (NFD form)
@param lowercase: UTF-8 lowercase the string
@param trim_string: trim spaces on either side of the string
@param replace_word_hyphens: add version of the string replacing hyphens with space
@param delete_word_hyphens: add version of the string with hyphens deleted
@param replace_numeric_hyphens: add version of the string with numeric hyphens replaced
e.g. 12345-6789 => 12345 6789
@param delete_numeric_hyphens: add version of the string with numeric hyphens removed
e.g. 12345-6789 => 123456789
@param split_alpha_from_numeric: split tokens like CR17 into CR 17, helps with expansion
of certain types of highway abbreviations
@param delete_final_periods: remove final periods on abbreviations e.g. St. => St
@param delete_acronym_periods: remove periods in acronyms e.g. U.S.A. => USA
@param drop_english_possessives: normalize possessives e.g. Mark's => Marks
@param delete_apostrophes: delete other types of apostrophes e.g. O'Malley => OMalley
@param expand_numex: converts numeric expressions e.g. Twenty sixth => 26th,
using either the supplied languages or the result of
automated language classification.
@param roman_numerals: normalize Roman numerals e.g. IX => 9. Since these can be
ambiguous (especially I and V), turning this on simply
adds another version of the string if any potential
Roman numerals are found.
"""
address = safe_decode(address, 'utf-8')
return _expand.expand_address(address, languages=languages, **kw)
|
[
"Expand",
"the",
"given",
"address",
"into",
"one",
"or",
"more",
"normalized",
"strings",
"."
] |
openvenues/pypostal
|
python
|
https://github.com/openvenues/pypostal/blob/1c0fd96b5e2463b7015cd3625ac276db520c69fe/postal/expand.py#L9-L54
|
[
"def",
"expand_address",
"(",
"address",
",",
"languages",
"=",
"None",
",",
"*",
"*",
"kw",
")",
":",
"address",
"=",
"safe_decode",
"(",
"address",
",",
"'utf-8'",
")",
"return",
"_expand",
".",
"expand_address",
"(",
"address",
",",
"languages",
"=",
"languages",
",",
"*",
"*",
"kw",
")"
] |
1c0fd96b5e2463b7015cd3625ac276db520c69fe
|
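A minimal sketch; the exact expansions returned depend on the installed libpostal data files, so treat the results as indicative only:

>>> from postal.expand import expand_address
>>> expand_address('781 Franklin Ave Crown Hts Brooklyn NY')   # list of normalized variant strings
>>> expand_address('Quatre-vingt-douze Ave des Champs-Élysées', languages=['fr'])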
test
|
normalized_tokens
|
Normalizes a string, tokenizes, and normalizes each token
with string and token-level options.
This version only uses libpostal's deterministic normalizations
i.e. methods with a single output. The string tree version will
return multiple normalized strings, each with tokens.
Usage:
normalized_tokens(u'St.-Barthélemy')
|
postal/normalize.py
|
def normalized_tokens(s, string_options=DEFAULT_STRING_OPTIONS,
token_options=DEFAULT_TOKEN_OPTIONS,
strip_parentheticals=True, whitespace=False,
languages=None):
'''
Normalizes a string, tokenizes, and normalizes each token
with string and token-level options.
This version only uses libpostal's deterministic normalizations
i.e. methods with a single output. The string tree version will
return multiple normalized strings, each with tokens.
Usage:
normalized_tokens(u'St.-Barthélemy')
'''
s = safe_decode(s)
normalized_tokens = _normalize.normalized_tokens(s, string_options, token_options, whitespace, languages=languages)
if strip_parentheticals:
normalized_tokens = remove_parens(normalized_tokens)
return [(s, token_types.from_id(token_type)) for s, token_type in normalized_tokens]
|
def normalized_tokens(s, string_options=DEFAULT_STRING_OPTIONS,
token_options=DEFAULT_TOKEN_OPTIONS,
strip_parentheticals=True, whitespace=False,
languages=None):
'''
Normalizes a string, tokenizes, and normalizes each token
with string and token-level options.
This version only uses libpostal's deterministic normalizations
i.e. methods with a single output. The string tree version will
return multiple normalized strings, each with tokens.
Usage:
normalized_tokens(u'St.-Barthélemy')
'''
s = safe_decode(s)
normalized_tokens = _normalize.normalized_tokens(s, string_options, token_options, whitespace, languages=languages)
if strip_parentheticals:
normalized_tokens = remove_parens(normalized_tokens)
return [(s, token_types.from_id(token_type)) for s, token_type in normalized_tokens]
|
[
"Normalizes",
"a",
"string",
"tokenizes",
"and",
"normalizes",
"each",
"token",
"with",
"string",
"and",
"token",
"-",
"level",
"options",
"."
] |
openvenues/pypostal
|
python
|
https://github.com/openvenues/pypostal/blob/1c0fd96b5e2463b7015cd3625ac276db520c69fe/postal/normalize.py#L57-L78
|
[
"def",
"normalized_tokens",
"(",
"s",
",",
"string_options",
"=",
"DEFAULT_STRING_OPTIONS",
",",
"token_options",
"=",
"DEFAULT_TOKEN_OPTIONS",
",",
"strip_parentheticals",
"=",
"True",
",",
"whitespace",
"=",
"False",
",",
"languages",
"=",
"None",
")",
":",
"s",
"=",
"safe_decode",
"(",
"s",
")",
"normalized_tokens",
"=",
"_normalize",
".",
"normalized_tokens",
"(",
"s",
",",
"string_options",
",",
"token_options",
",",
"whitespace",
",",
"languages",
"=",
"languages",
")",
"if",
"strip_parentheticals",
":",
"normalized_tokens",
"=",
"remove_parens",
"(",
"normalized_tokens",
")",
"return",
"[",
"(",
"s",
",",
"token_types",
".",
"from_id",
"(",
"token_type",
")",
")",
"for",
"s",
",",
"token_type",
"in",
"normalized_tokens",
"]"
] |
1c0fd96b5e2463b7015cd3625ac276db520c69fe
|
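A minimal usage sketch of normalized_tokens; it returns (normalized_string, token_type) pairs:

    from postal.normalize import normalized_tokens

    # The exact output depends on the installed libpostal data.
    for text, token_type in normalized_tokens(u'St.-Barthélemy'):
        print(text, token_type)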
test
|
parse_address
|
Parse address into components.
@param address: the address as either Unicode or a UTF-8 encoded string
@param language (optional): language code
@param country (optional): country code
|
postal/parser.py
|
def parse_address(address, language=None, country=None):
"""
Parse address into components.
@param address: the address as either Unicode or a UTF-8 encoded string
@param language (optional): language code
@param country (optional): country code
"""
address = safe_decode(address, 'utf-8')
return _parser.parse_address(address, language=language, country=country)
|
def parse_address(address, language=None, country=None):
"""
Parse address into components.
@param address: the address as either Unicode or a UTF-8 encoded string
@param language (optional): language code
@param country (optional): country code
"""
address = safe_decode(address, 'utf-8')
return _parser.parse_address(address, language=language, country=country)
|
[
"Parse",
"address",
"into",
"components",
"."
] |
openvenues/pypostal
|
python
|
https://github.com/openvenues/pypostal/blob/1c0fd96b5e2463b7015cd3625ac276db520c69fe/postal/parser.py#L6-L15
|
[
"def",
"parse_address",
"(",
"address",
",",
"language",
"=",
"None",
",",
"country",
"=",
"None",
")",
":",
"address",
"=",
"safe_decode",
"(",
"address",
",",
"'utf-8'",
")",
"return",
"_parser",
".",
"parse_address",
"(",
"address",
",",
"language",
"=",
"language",
",",
"country",
"=",
"country",
")"
] |
1c0fd96b5e2463b7015cd3625ac276db520c69fe
|
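A minimal usage sketch of parse_address; it returns (value, label) pairs, one per recognized component (the sample address is illustrative):

    from postal.parser import parse_address

    # e.g. [('781', 'house_number'), ('franklin ave', 'road'), ...]
    for value, label in parse_address('781 Franklin Ave Brooklyn NY 11216',
                                      language='en', country='us'):
        print(label, '=>', value)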
test
|
near_dupe_hashes
|
Hash the given address into normalized strings that can be used to group similar
addresses together for more detailed pairwise comparison. This can be thought of
as the blocking function in record linkage or locality-sensitive hashing in
document near-duplicate detection.
Required
--------
@param labels: array of component labels as either Unicode or UTF-8 encoded strings
e.g. ["house_number", "road", "postcode"]
@param values: array of component values as either Unicode or UTF-8 encoded strings
e.g. ["123", "Broadway", "11216"]. Note len(values) must be equal to
len(labels).
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param with_name: use name in the hashes
@param with_address: use house_number & street in the hashes
@param with_unit: use secondary unit as part of the hashes
@param with_city_or_equivalent: use the city, city_district, suburb, or island name as one of
the geo qualifiers
@param with_small_containing_boundaries: use small containing boundaries (currently state_district)
as one of the geo qualifiers
@param with_postal_code: use postal code as one of the geo qualifiers
@param with_latlon: use geohash + neighbors as one of the geo qualifiers
@param latitude: latitude (Y coordinate)
@param longitude: longitude (X coordinate)
@param geohash_precision: geohash tile size (default = 6)
@param name_and_address_keys: include keys with name + address + geo
@param name_only_keys: include keys with name + geo
@param address_only_keys: include keys with address + geo
|
postal/near_dupe.py
|
def near_dupe_hashes(labels, values, languages=None, **kw):
"""
Hash the given address into normalized strings that can be used to group similar
addresses together for more detailed pairwise comparison. This can be thought of
    as the blocking function in record linkage or locality-sensitive hashing in
document near-duplicate detection.
Required
--------
@param labels: array of component labels as either Unicode or UTF-8 encoded strings
e.g. ["house_number", "road", "postcode"]
@param values: array of component values as either Unicode or UTF-8 encoded strings
e.g. ["123", "Broadway", "11216"]. Note len(values) must be equal to
len(labels).
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param with_name: use name in the hashes
@param with_address: use house_number & street in the hashes
@param with_unit: use secondary unit as part of the hashes
@param with_city_or_equivalent: use the city, city_district, suburb, or island name as one of
the geo qualifiers
@param with_small_containing_boundaries: use small containing boundaries (currently state_district)
as one of the geo qualifiers
@param with_postal_code: use postal code as one of the geo qualifiers
@param with_latlon: use geohash + neighbors as one of the geo qualifiers
@param latitude: latitude (Y coordinate)
@param longitude: longitude (X coordinate)
@param geohash_precision: geohash tile size (default = 6)
@param name_and_address_keys: include keys with name + address + geo
@param name_only_keys: include keys with name + geo
@param address_only_keys: include keys with address + geo
"""
return _near_dupe.near_dupe_hashes(labels, values, languages=languages, **kw)
|
def near_dupe_hashes(labels, values, languages=None, **kw):
"""
Hash the given address into normalized strings that can be used to group similar
addresses together for more detailed pairwise comparison. This can be thought of
    as the blocking function in record linkage or locality-sensitive hashing in
document near-duplicate detection.
Required
--------
@param labels: array of component labels as either Unicode or UTF-8 encoded strings
e.g. ["house_number", "road", "postcode"]
@param values: array of component values as either Unicode or UTF-8 encoded strings
e.g. ["123", "Broadway", "11216"]. Note len(values) must be equal to
len(labels).
Options
-------
@param languages: a tuple or list of ISO language code strings (e.g. "en", "fr", "de", etc.)
to use in expansion. If None is passed, use language classifier
to detect language automatically.
@param with_name: use name in the hashes
@param with_address: use house_number & street in the hashes
@param with_unit: use secondary unit as part of the hashes
@param with_city_or_equivalent: use the city, city_district, suburb, or island name as one of
the geo qualifiers
@param with_small_containing_boundaries: use small containing boundaries (currently state_district)
as one of the geo qualifiers
@param with_postal_code: use postal code as one of the geo qualifiers
@param with_latlon: use geohash + neighbors as one of the geo qualifiers
@param latitude: latitude (Y coordinate)
@param longitude: longitude (X coordinate)
@param geohash_precision: geohash tile size (default = 6)
@param name_and_address_keys: include keys with name + address + geo
@param name_only_keys: include keys with name + geo
@param address_only_keys: include keys with address + geo
"""
return _near_dupe.near_dupe_hashes(labels, values, languages=languages, **kw)
|
[
"Hash",
"the",
"given",
"address",
"into",
"normalized",
"strings",
"that",
"can",
"be",
"used",
"to",
"group",
"similar",
"addresses",
"together",
"for",
"more",
"detailed",
"pairwise",
"comparison",
".",
"This",
"can",
"be",
"thought",
"of",
"as",
"the",
"blocking",
"function",
"in",
"record",
"linkage",
"or",
"locally",
"-",
"sensitive",
"hashing",
"in",
"the",
"document",
"near",
"-",
"duplicate",
"detection",
"."
] |
openvenues/pypostal
|
python
|
https://github.com/openvenues/pypostal/blob/1c0fd96b5e2463b7015cd3625ac276db520c69fe/postal/near_dupe.py#L6-L42
|
[
"def",
"near_dupe_hashes",
"(",
"labels",
",",
"values",
",",
"languages",
"=",
"None",
",",
"*",
"*",
"kw",
")",
":",
"return",
"_near_dupe",
".",
"near_dupe_hashes",
"(",
"labels",
",",
"values",
",",
"languages",
"=",
"languages",
",",
"*",
"*",
"kw",
")"
] |
1c0fd96b5e2463b7015cd3625ac276db520c69fe
|
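A minimal usage sketch of near_dupe_hashes, using only options named in the docstring above; the exact hash strings returned depend on the installed libpostal data:

    from postal.near_dupe import near_dupe_hashes

    labels = ['house_number', 'road', 'postcode']
    values = ['123', 'Broadway', '11216']   # len(values) == len(labels)

    # Returns a list of normalized keys usable for blocking.
    hashes = near_dupe_hashes(labels, values, languages=('en',),
                              with_address=True, with_postal_code=True)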
test
|
has_api_key
|
Detect whether the file contains an api key in the Token object that is not 40*'0'.
See issue #86.
:param file_name: path to the file to check
:return: boolean
|
tools/api_key_tool.py
|
def has_api_key(file_name):
"""
Detect whether the file contains an api key in the Token object that is not 40*'0'.
See issue #86.
    :param file_name: path to the file to check
:return: boolean
"""
    with open(file_name, 'r') as f:
        text = f.read()
if re.search(real_api_regex, text) is not None and \
re.search(zero_api_regex, text) is None:
return True
return False
|
def has_api_key(file_name):
"""
Detect whether the file contains an api key in the Token object that is not 40*'0'.
See issue #86.
    :param file_name: path to the file to check
:return: boolean
"""
    with open(file_name, 'r') as f:
        text = f.read()
if re.search(real_api_regex, text) is not None and \
re.search(zero_api_regex, text) is None:
return True
return False
|
[
"Detect",
"whether",
"the",
"file",
"contains",
"an",
"api",
"key",
"in",
"the",
"Token",
"object",
"that",
"is",
"not",
"40",
"*",
"0",
".",
"See",
"issue",
"#86",
".",
":",
"param",
"file",
":",
"path",
"-",
"to",
"-",
"file",
"to",
"check",
":",
"return",
":",
"boolean"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tools/api_key_tool.py#L16-L28
|
[
"def",
"has_api_key",
"(",
"file_name",
")",
":",
"f",
"=",
"open",
"(",
"file_name",
",",
"'r'",
")",
"text",
"=",
"f",
".",
"read",
"(",
")",
"if",
"re",
".",
"search",
"(",
"real_api_regex",
",",
"text",
")",
"is",
"not",
"None",
"and",
"re",
".",
"search",
"(",
"zero_api_regex",
",",
"text",
")",
"is",
"None",
":",
"return",
"True",
"return",
"False"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
test
|
remove_api_key
|
Change the api key in the Token object to 40*'0'. See issue #86.
:param file_name: path to the file to change
|
tools/api_key_tool.py
|
def remove_api_key(file_name):
"""
Change the api key in the Token object to 40*'0'. See issue #86.
    :param file_name: path to the file to change
"""
with open(file_name, 'r') as fp:
text = fp.read()
text = re.sub(real_api_regex, zero_token_string, text)
with open(file_name, 'w') as fp:
fp.write(text)
return
|
def remove_api_key(file_name):
"""
Change the api key in the Token object to 40*'0'. See issue #86.
    :param file_name: path to the file to change
"""
with open(file_name, 'r') as fp:
text = fp.read()
text = re.sub(real_api_regex, zero_token_string, text)
with open(file_name, 'w') as fp:
fp.write(text)
return
|
[
"Change",
"the",
"api",
"key",
"in",
"the",
"Token",
"object",
"to",
"40",
"*",
"0",
".",
"See",
"issue",
"#86",
".",
":",
"param",
"file",
":",
"path",
"-",
"to",
"-",
"file",
"to",
"change"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tools/api_key_tool.py#L31-L41
|
[
"def",
"remove_api_key",
"(",
"file_name",
")",
":",
"with",
"open",
"(",
"file_name",
",",
"'r'",
")",
"as",
"fp",
":",
"text",
"=",
"fp",
".",
"read",
"(",
")",
"text",
"=",
"re",
".",
"sub",
"(",
"real_api_regex",
",",
"zero_token_string",
",",
"text",
")",
"with",
"open",
"(",
"file_name",
",",
"'w'",
")",
"as",
"fp",
":",
"fp",
".",
"write",
"(",
"text",
")",
"return"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
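Both api_key_tool functions above reference module-level names (real_api_regex, zero_api_regex, zero_token_string) defined outside these excerpts. A hypothetical reconstruction, only to make the excerpts self-contained -- the actual patterns in tools/api_key_tool.py may differ:

    import re

    # Assumption: a real Tiingo key is 40 hex characters inside a Token(...) literal.
    real_api_regex = re.compile(r"Token\('[0-9a-f]{40}'\)")
    # Assumption: the scrubbed placeholder key is forty zeros.
    zero_api_regex = re.compile(r"Token\('0{40}'\)")
    zero_token_string = "Token('" + '0' * 40 + "')"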
test
|
dict_to_object
|
Converts a python dict to a namedtuple, saving memory.
|
tiingo/api.py
|
def dict_to_object(item, object_name):
"""Converts a python dict to a namedtuple, saving memory."""
fields = item.keys()
values = item.values()
return json.loads(json.dumps(item),
object_hook=lambda d:
namedtuple(object_name, fields)(*values))
|
def dict_to_object(item, object_name):
"""Converts a python dict to a namedtuple, saving memory."""
fields = item.keys()
values = item.values()
return json.loads(json.dumps(item),
object_hook=lambda d:
namedtuple(object_name, fields)(*values))
|
[
"Converts",
"a",
"python",
"dict",
"to",
"a",
"namedtuple",
"saving",
"memory",
"."
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L44-L50
|
[
"def",
"dict_to_object",
"(",
"item",
",",
"object_name",
")",
":",
"fields",
"=",
"item",
".",
"keys",
"(",
")",
"values",
"=",
"item",
".",
"values",
"(",
")",
"return",
"json",
".",
"loads",
"(",
"json",
".",
"dumps",
"(",
"item",
")",
",",
"object_hook",
"=",
"lambda",
"d",
":",
"namedtuple",
"(",
"object_name",
",",
"fields",
")",
"(",
"*",
"values",
")",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
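Note that the object_hook lambda ignores its argument d and always rebuilds the namedtuple from the top-level fields and values, so this helper is only safe for flat dicts whose keys are valid Python identifiers. A usage sketch:

    # Attribute access replaces dict lookups on the result.
    price = dict_to_object({'close': 173.5, 'volume': 1000}, 'TickerPrice')
    assert price.close == 173.5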
test
|
TiingoClient.list_tickers
|
Return a list of dicts of metadata for all supported tickers
of the specified asset type.
This includes the supported date range, the exchange the ticker is traded
on, and the currency the stock is traded in.
Tickers for unrelated products are omitted.
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
|
tiingo/api.py
|
def list_tickers(self, assetType):
"""Return a list of dicts of metadata tickers for all supported tickers
of the specified asset type, as well as metadata about each ticker.
This includes supported date range, the exchange the ticker is traded
on, and the currency the stock is traded on.
Tickers for unrelated products are omitted.
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
"""
listing_file_url = "https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip"
response = requests.get(listing_file_url)
zipdata = get_zipfile_from_response(response)
raw_csv = get_buffer_from_zipfile(zipdata, 'supported_tickers.csv')
reader = csv.DictReader(raw_csv)
return [row for row in reader
if row.get('assetType') == assetType]
|
def list_tickers(self, assetType):
"""Return a list of dicts of metadata tickers for all supported tickers
of the specified asset type, as well as metadata about each ticker.
This includes supported date range, the exchange the ticker is traded
on, and the currency the stock is traded on.
Tickers for unrelated products are omitted.
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
"""
listing_file_url = "https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip"
response = requests.get(listing_file_url)
zipdata = get_zipfile_from_response(response)
raw_csv = get_buffer_from_zipfile(zipdata, 'supported_tickers.csv')
reader = csv.DictReader(raw_csv)
return [row for row in reader
if row.get('assetType') == assetType]
|
[
"Return",
"a",
"list",
"of",
"dicts",
"of",
"metadata",
"tickers",
"for",
"all",
"supported",
"tickers",
"of",
"the",
"specified",
"asset",
"type",
"as",
"well",
"as",
"metadata",
"about",
"each",
"ticker",
".",
"This",
"includes",
"supported",
"date",
"range",
"the",
"exchange",
"the",
"ticker",
"is",
"traded",
"on",
"and",
"the",
"currency",
"the",
"stock",
"is",
"traded",
"on",
".",
"Tickers",
"for",
"unrelated",
"products",
"are",
"omitted",
".",
"https",
":",
"//",
"apimedia",
".",
"tiingo",
".",
"com",
"/",
"docs",
"/",
"tiingo",
"/",
"daily",
"/",
"supported_tickers",
".",
"zip"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L101-L116
|
[
"def",
"list_tickers",
"(",
"self",
",",
"assetType",
")",
":",
"listing_file_url",
"=",
"\"https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip\"",
"response",
"=",
"requests",
".",
"get",
"(",
"listing_file_url",
")",
"zipdata",
"=",
"get_zipfile_from_response",
"(",
"response",
")",
"raw_csv",
"=",
"get_buffer_from_zipfile",
"(",
"zipdata",
",",
"'supported_tickers.csv'",
")",
"reader",
"=",
"csv",
".",
"DictReader",
"(",
"raw_csv",
")",
"return",
"[",
"row",
"for",
"row",
"in",
"reader",
"if",
"row",
".",
"get",
"(",
"'assetType'",
")",
"==",
"assetType",
"]"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
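A usage sketch for list_tickers, assuming a configured client; 'Stock' is an illustrative assetType value from the supported_tickers.csv columns:

    from tiingo import TiingoClient

    client = TiingoClient({'api_key': 'YOUR-TIINGO-KEY'})  # hypothetical key
    stocks = client.list_tickers('Stock')
    # Each row is a dict, e.g. with ticker, exchange, startDate, endDate keys.
    print(stocks[0])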
test
|
TiingoClient.get_ticker_metadata
|
Return metadata for 1 ticker
Use TiingoClient.list_tickers() to get available options
Args:
ticker (str) : Unique identifier for stock
|
tiingo/api.py
|
def get_ticker_metadata(self, ticker, fmt='json'):
"""Return metadata for 1 ticker
Use TiingoClient.list_tickers() to get available options
Args:
ticker (str) : Unique identifier for stock
"""
url = "tiingo/daily/{}".format(ticker)
response = self._request('GET', url)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return dict_to_object(data, "Ticker")
|
def get_ticker_metadata(self, ticker, fmt='json'):
"""Return metadata for 1 ticker
Use TiingoClient.list_tickers() to get available options
Args:
ticker (str) : Unique identifier for stock
"""
url = "tiingo/daily/{}".format(ticker)
response = self._request('GET', url)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return dict_to_object(data, "Ticker")
|
[
"Return",
"metadata",
"for",
"1",
"ticker",
"Use",
"TiingoClient",
".",
"list_tickers",
"()",
"to",
"get",
"available",
"options"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L127-L140
|
[
"def",
"get_ticker_metadata",
"(",
"self",
",",
"ticker",
",",
"fmt",
"=",
"'json'",
")",
":",
"url",
"=",
"\"tiingo/daily/{}\"",
".",
"format",
"(",
"ticker",
")",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
")",
"data",
"=",
"response",
".",
"json",
"(",
")",
"if",
"fmt",
"==",
"'json'",
":",
"return",
"data",
"elif",
"fmt",
"==",
"'object'",
":",
"return",
"dict_to_object",
"(",
"data",
",",
"\"Ticker\"",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
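A usage sketch for get_ticker_metadata (client configured as above). Note that the implementation silently returns None for any fmt other than 'json' or 'object':

    meta = client.get_ticker_metadata('GOOGL')                  # plain dict
    ticker = client.get_ticker_metadata('GOOGL', fmt='object')  # namedtuple
    print(ticker.ticker)  # attribute access; field names mirror the JSON keys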
test
|
TiingoClient._invalid_frequency
|
Check to see that frequency was specified correctly
:param frequency (string): frequency string
:return (boolean):
|
tiingo/api.py
|
def _invalid_frequency(self, frequency):
"""
Check to see that frequency was specified correctly
:param frequency (string): frequency string
:return (boolean):
"""
is_valid = self._is_eod_frequency(frequency) or re.match(self._frequency_pattern, frequency)
return not is_valid
|
def _invalid_frequency(self, frequency):
"""
Check to see that frequency was specified correctly
:param frequency (string): frequency string
:return (boolean):
"""
is_valid = self._is_eod_frequency(frequency) or re.match(self._frequency_pattern, frequency)
return not is_valid
|
[
"Check",
"to",
"see",
"that",
"frequency",
"was",
"specified",
"correctly",
":",
"param",
"frequency",
"(",
"string",
")",
":",
"frequency",
"string",
":",
"return",
"(",
"boolean",
")",
":"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L142-L149
|
[
"def",
"_invalid_frequency",
"(",
"self",
",",
"frequency",
")",
":",
"is_valid",
"=",
"self",
".",
"_is_eod_frequency",
"(",
"frequency",
")",
"or",
"re",
".",
"match",
"(",
"self",
".",
"_frequency_pattern",
",",
"frequency",
")",
"return",
"not",
"is_valid"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
test
|
TiingoClient._get_url
|
Return url based on frequency. Daily, weekly, or yearly use Tiingo
EOD api; anything less than daily uses the iex intraday api.
:param ticker (string): ticker to be embedded in the url
:param frequency (string): valid frequency per Tiingo api
:return (string): url
|
tiingo/api.py
|
def _get_url(self, ticker, frequency):
"""
Return url based on frequency. Daily, weekly, or yearly use Tiingo
EOD api; anything less than daily uses the iex intraday api.
:param ticker (string): ticker to be embedded in the url
:param frequency (string): valid frequency per Tiingo api
:return (string): url
"""
if self._invalid_frequency(frequency):
etext = ("Error: {} is an invalid frequency. Check Tiingo API documentation "
"for valid EOD or intraday frequency format.")
raise InvalidFrequencyError(etext.format(frequency))
else:
if self._is_eod_frequency(frequency):
return "tiingo/daily/{}/prices".format(ticker)
else:
return "iex/{}/prices".format(ticker)
|
def _get_url(self, ticker, frequency):
"""
Return url based on frequency. Daily, weekly, or yearly use Tiingo
EOD api; anything less than daily uses the iex intraday api.
:param ticker (string): ticker to be embedded in the url
:param frequency (string): valid frequency per Tiingo api
:return (string): url
"""
if self._invalid_frequency(frequency):
etext = ("Error: {} is an invalid frequency. Check Tiingo API documentation "
"for valid EOD or intraday frequency format.")
raise InvalidFrequencyError(etext.format(frequency))
else:
if self._is_eod_frequency(frequency):
return "tiingo/daily/{}/prices".format(ticker)
else:
return "iex/{}/prices".format(ticker)
|
[
"Return",
"url",
"based",
"on",
"frequency",
".",
"Daily",
"weekly",
"or",
"yearly",
"use",
"Tiingo",
"EOD",
"api",
";",
"anything",
"less",
"than",
"daily",
"uses",
"the",
"iex",
"intraday",
"api",
".",
":",
"param",
"ticker",
"(",
"string",
")",
":",
"ticker",
"to",
"be",
"embedded",
"in",
"the",
"url",
":",
"param",
"frequency",
"(",
"string",
")",
":",
"valid",
"frequency",
"per",
"Tiingo",
"api",
":",
"return",
"(",
"string",
")",
":",
"url"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L151-L167
|
[
"def",
"_get_url",
"(",
"self",
",",
"ticker",
",",
"frequency",
")",
":",
"if",
"self",
".",
"_invalid_frequency",
"(",
"frequency",
")",
":",
"etext",
"=",
"(",
"\"Error: {} is an invalid frequency. Check Tiingo API documentation \"",
"\"for valid EOD or intraday frequency format.\"",
")",
"raise",
"InvalidFrequencyError",
"(",
"etext",
".",
"format",
"(",
"frequency",
")",
")",
"else",
":",
"if",
"self",
".",
"_is_eod_frequency",
"(",
"frequency",
")",
":",
"return",
"\"tiingo/daily/{}/prices\"",
".",
"format",
"(",
"ticker",
")",
"else",
":",
"return",
"\"iex/{}/prices\"",
".",
"format",
"(",
"ticker",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
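An illustration of the routing in _get_url, hedged because _is_eod_frequency and _frequency_pattern are defined outside this excerpt:

    # Assuming 'daily' passes _is_eod_frequency and '5min' matches
    # _frequency_pattern:
    client._get_url('GOOGL', 'daily')  # -> 'tiingo/daily/GOOGL/prices'
    client._get_url('GOOGL', '5min')   # -> 'iex/GOOGL/prices'
    # A string failing both checks raises InvalidFrequencyError.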
test
|
TiingoClient.get_ticker_price
|
By default, return latest EOD Composite Price for a stock ticker.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
Args:
ticker (string): Unique identifier for stock ticker
startDate (string): Start of ticker range in YYYY-MM-DD format
endDate (string): End of ticker range in YYYY-MM-DD format
fmt (string): 'csv' or 'json'
frequency (string): Resample frequency
|
tiingo/api.py
|
def get_ticker_price(self, ticker,
startDate=None, endDate=None,
fmt='json', frequency='daily'):
"""By default, return latest EOD Composite Price for a stock ticker.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
Args:
ticker (string): Unique identifier for stock ticker
startDate (string): Start of ticker range in YYYY-MM-DD format
endDate (string): End of ticker range in YYYY-MM-DD format
fmt (string): 'csv' or 'json'
frequency (string): Resample frequency
"""
url = self._get_url(ticker, frequency)
params = {
            'format': fmt if fmt != "object" else 'json',  # 'object' is converted locally, so request JSON
'resampleFreq': frequency
}
if startDate:
params['startDate'] = startDate
if endDate:
params['endDate'] = endDate
# TODO: evaluate whether to stream CSV to cache on disk, or
# load as array in memory, or just pass plain text
response = self._request('GET', url, params=params)
if fmt == "json":
return response.json()
elif fmt == "object":
data = response.json()
return [dict_to_object(item, "TickerPrice") for item in data]
else:
return response.content.decode("utf-8")
|
def get_ticker_price(self, ticker,
startDate=None, endDate=None,
fmt='json', frequency='daily'):
"""By default, return latest EOD Composite Price for a stock ticker.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
Args:
ticker (string): Unique identifier for stock ticker
startDate (string): Start of ticker range in YYYY-MM-DD format
endDate (string): End of ticker range in YYYY-MM-DD format
fmt (string): 'csv' or 'json'
frequency (string): Resample frequency
"""
url = self._get_url(ticker, frequency)
params = {
            'format': fmt if fmt != "object" else 'json',  # 'object' is converted locally, so request JSON
'resampleFreq': frequency
}
if startDate:
params['startDate'] = startDate
if endDate:
params['endDate'] = endDate
# TODO: evaluate whether to stream CSV to cache on disk, or
# load as array in memory, or just pass plain text
response = self._request('GET', url, params=params)
if fmt == "json":
return response.json()
elif fmt == "object":
data = response.json()
return [dict_to_object(item, "TickerPrice") for item in data]
else:
return response.content.decode("utf-8")
|
[
"By",
"default",
"return",
"latest",
"EOD",
"Composite",
"Price",
"for",
"a",
"stock",
"ticker",
".",
"On",
"average",
"each",
"feed",
"contains",
"3",
"data",
"sources",
"."
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L169-L205
|
[
"def",
"get_ticker_price",
"(",
"self",
",",
"ticker",
",",
"startDate",
"=",
"None",
",",
"endDate",
"=",
"None",
",",
"fmt",
"=",
"'json'",
",",
"frequency",
"=",
"'daily'",
")",
":",
"url",
"=",
"self",
".",
"_get_url",
"(",
"ticker",
",",
"frequency",
")",
"params",
"=",
"{",
"'format'",
":",
"fmt",
"if",
"fmt",
"!=",
"\"object\"",
"else",
"'json'",
",",
"# conversion local",
"'resampleFreq'",
":",
"frequency",
"}",
"if",
"startDate",
":",
"params",
"[",
"'startDate'",
"]",
"=",
"startDate",
"if",
"endDate",
":",
"params",
"[",
"'endDate'",
"]",
"=",
"endDate",
"# TODO: evaluate whether to stream CSV to cache on disk, or",
"# load as array in memory, or just pass plain text",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
",",
"params",
"=",
"params",
")",
"if",
"fmt",
"==",
"\"json\"",
":",
"return",
"response",
".",
"json",
"(",
")",
"elif",
"fmt",
"==",
"\"object\"",
":",
"data",
"=",
"response",
".",
"json",
"(",
")",
"return",
"[",
"dict_to_object",
"(",
"item",
",",
"\"TickerPrice\"",
")",
"for",
"item",
"in",
"data",
"]",
"else",
":",
"return",
"response",
".",
"content",
".",
"decode",
"(",
"\"utf-8\"",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
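Usage sketches for the three fmt modes of get_ticker_price:

    # 'json' (default): list of dicts, one per bar.
    bars = client.get_ticker_price('GOOGL', startDate='2018-01-02',
                                   endDate='2018-01-31', frequency='daily')
    # 'csv': raw text decoded from the response body.
    csv_text = client.get_ticker_price('GOOGL', fmt='csv')
    # 'object': list of TickerPrice namedtuples.
    objs = client.get_ticker_price('GOOGL', fmt='object')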
test
|
TiingoClient.get_dataframe
|
Return a pandas.DataFrame of historical prices for one or more ticker symbols.
By default, return latest EOD Composite Price for a list of stock tickers.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
or from the TiingoClient.list_tickers() method.
Args:
tickers (string/list): One or more unique identifiers for a stock ticker.
startDate (string): Start of ticker range in YYYY-MM-DD format.
endDate (string): End of ticker range in YYYY-MM-DD format.
metric_name (string): Optional parameter specifying metric to be returned for each
ticker. In the event of a single ticker, this is optional and if not specified
all of the available data will be returned. In the event of a list of tickers,
this parameter is required.
frequency (string): Resample frequency (defaults to daily).
|
tiingo/api.py
|
def get_dataframe(self, tickers,
startDate=None, endDate=None, metric_name=None, frequency='daily'):
""" Return a pandas.DataFrame of historical prices for one or more ticker symbols.
By default, return latest EOD Composite Price for a list of stock tickers.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
or from the TiingoClient.list_tickers() method.
Args:
tickers (string/list): One or more unique identifiers for a stock ticker.
startDate (string): Start of ticker range in YYYY-MM-DD format.
endDate (string): End of ticker range in YYYY-MM-DD format.
metric_name (string): Optional parameter specifying metric to be returned for each
ticker. In the event of a single ticker, this is optional and if not specified
all of the available data will be returned. In the event of a list of tickers,
this parameter is required.
frequency (string): Resample frequency (defaults to daily).
"""
valid_columns = ['open', 'high', 'low', 'close', 'volume', 'adjOpen', 'adjHigh', 'adjLow',
'adjClose', 'adjVolume', 'divCash', 'splitFactor']
if metric_name is not None and metric_name not in valid_columns:
raise APIColumnNameError('Valid data items are: ' + str(valid_columns))
params = {
'format': 'json',
'resampleFreq': frequency
}
if startDate:
params['startDate'] = startDate
if endDate:
params['endDate'] = endDate
if pandas_is_installed:
if type(tickers) is str:
stock = tickers
url = self._get_url(stock, frequency)
response = self._request('GET', url, params=params)
df = pd.DataFrame(response.json())
if metric_name is not None:
prices = df[metric_name]
prices.index = df['date']
else:
prices = df
prices.index = df['date']
del (prices['date'])
else:
prices = pd.DataFrame()
for stock in tickers:
url = self._get_url(stock, frequency)
response = self._request('GET', url, params=params)
df = pd.DataFrame(response.json())
df.index = df['date']
df.rename(index=str, columns={metric_name: stock}, inplace=True)
prices = pd.concat([prices, df[stock]], axis=1)
prices.index = pd.to_datetime(prices.index)
return prices
else:
error_message = ("Pandas is not installed, but .get_ticker_price() was "
"called with fmt=pandas. In order to install tiingo with "
"pandas, reinstall with pandas as an optional dependency. \n"
"Install tiingo with pandas dependency: \'pip install tiingo[pandas]\'\n"
"Alternatively, just install pandas: pip install pandas.")
raise InstallPandasException(error_message)
|
def get_dataframe(self, tickers,
startDate=None, endDate=None, metric_name=None, frequency='daily'):
""" Return a pandas.DataFrame of historical prices for one or more ticker symbols.
By default, return latest EOD Composite Price for a list of stock tickers.
On average, each feed contains 3 data sources.
Supported tickers + Available Day Ranges are here:
https://apimedia.tiingo.com/docs/tiingo/daily/supported_tickers.zip
or from the TiingoClient.list_tickers() method.
Args:
tickers (string/list): One or more unique identifiers for a stock ticker.
startDate (string): Start of ticker range in YYYY-MM-DD format.
endDate (string): End of ticker range in YYYY-MM-DD format.
metric_name (string): Optional parameter specifying metric to be returned for each
ticker. In the event of a single ticker, this is optional and if not specified
all of the available data will be returned. In the event of a list of tickers,
this parameter is required.
frequency (string): Resample frequency (defaults to daily).
"""
valid_columns = ['open', 'high', 'low', 'close', 'volume', 'adjOpen', 'adjHigh', 'adjLow',
'adjClose', 'adjVolume', 'divCash', 'splitFactor']
if metric_name is not None and metric_name not in valid_columns:
raise APIColumnNameError('Valid data items are: ' + str(valid_columns))
params = {
'format': 'json',
'resampleFreq': frequency
}
if startDate:
params['startDate'] = startDate
if endDate:
params['endDate'] = endDate
if pandas_is_installed:
if type(tickers) is str:
stock = tickers
url = self._get_url(stock, frequency)
response = self._request('GET', url, params=params)
df = pd.DataFrame(response.json())
if metric_name is not None:
prices = df[metric_name]
prices.index = df['date']
else:
prices = df
prices.index = df['date']
del (prices['date'])
else:
prices = pd.DataFrame()
for stock in tickers:
url = self._get_url(stock, frequency)
response = self._request('GET', url, params=params)
df = pd.DataFrame(response.json())
df.index = df['date']
df.rename(index=str, columns={metric_name: stock}, inplace=True)
prices = pd.concat([prices, df[stock]], axis=1)
prices.index = pd.to_datetime(prices.index)
return prices
else:
error_message = ("Pandas is not installed, but .get_ticker_price() was "
"called with fmt=pandas. In order to install tiingo with "
"pandas, reinstall with pandas as an optional dependency. \n"
"Install tiingo with pandas dependency: \'pip install tiingo[pandas]\'\n"
"Alternatively, just install pandas: pip install pandas.")
raise InstallPandasException(error_message)
|
[
"Return",
"a",
"pandas",
".",
"DataFrame",
"of",
"historical",
"prices",
"for",
"one",
"or",
"more",
"ticker",
"symbols",
"."
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L207-L275
|
[
"def",
"get_dataframe",
"(",
"self",
",",
"tickers",
",",
"startDate",
"=",
"None",
",",
"endDate",
"=",
"None",
",",
"metric_name",
"=",
"None",
",",
"frequency",
"=",
"'daily'",
")",
":",
"valid_columns",
"=",
"[",
"'open'",
",",
"'high'",
",",
"'low'",
",",
"'close'",
",",
"'volume'",
",",
"'adjOpen'",
",",
"'adjHigh'",
",",
"'adjLow'",
",",
"'adjClose'",
",",
"'adjVolume'",
",",
"'divCash'",
",",
"'splitFactor'",
"]",
"if",
"metric_name",
"is",
"not",
"None",
"and",
"metric_name",
"not",
"in",
"valid_columns",
":",
"raise",
"APIColumnNameError",
"(",
"'Valid data items are: '",
"+",
"str",
"(",
"valid_columns",
")",
")",
"params",
"=",
"{",
"'format'",
":",
"'json'",
",",
"'resampleFreq'",
":",
"frequency",
"}",
"if",
"startDate",
":",
"params",
"[",
"'startDate'",
"]",
"=",
"startDate",
"if",
"endDate",
":",
"params",
"[",
"'endDate'",
"]",
"=",
"endDate",
"if",
"pandas_is_installed",
":",
"if",
"type",
"(",
"tickers",
")",
"is",
"str",
":",
"stock",
"=",
"tickers",
"url",
"=",
"self",
".",
"_get_url",
"(",
"stock",
",",
"frequency",
")",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
",",
"params",
"=",
"params",
")",
"df",
"=",
"pd",
".",
"DataFrame",
"(",
"response",
".",
"json",
"(",
")",
")",
"if",
"metric_name",
"is",
"not",
"None",
":",
"prices",
"=",
"df",
"[",
"metric_name",
"]",
"prices",
".",
"index",
"=",
"df",
"[",
"'date'",
"]",
"else",
":",
"prices",
"=",
"df",
"prices",
".",
"index",
"=",
"df",
"[",
"'date'",
"]",
"del",
"(",
"prices",
"[",
"'date'",
"]",
")",
"else",
":",
"prices",
"=",
"pd",
".",
"DataFrame",
"(",
")",
"for",
"stock",
"in",
"tickers",
":",
"url",
"=",
"self",
".",
"_get_url",
"(",
"stock",
",",
"frequency",
")",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
",",
"params",
"=",
"params",
")",
"df",
"=",
"pd",
".",
"DataFrame",
"(",
"response",
".",
"json",
"(",
")",
")",
"df",
".",
"index",
"=",
"df",
"[",
"'date'",
"]",
"df",
".",
"rename",
"(",
"index",
"=",
"str",
",",
"columns",
"=",
"{",
"metric_name",
":",
"stock",
"}",
",",
"inplace",
"=",
"True",
")",
"prices",
"=",
"pd",
".",
"concat",
"(",
"[",
"prices",
",",
"df",
"[",
"stock",
"]",
"]",
",",
"axis",
"=",
"1",
")",
"prices",
".",
"index",
"=",
"pd",
".",
"to_datetime",
"(",
"prices",
".",
"index",
")",
"return",
"prices",
"else",
":",
"error_message",
"=",
"(",
"\"Pandas is not installed, but .get_ticker_price() was \"",
"\"called with fmt=pandas. In order to install tiingo with \"",
"\"pandas, reinstall with pandas as an optional dependency. \\n\"",
"\"Install tiingo with pandas dependency: \\'pip install tiingo[pandas]\\'\\n\"",
"\"Alternatively, just install pandas: pip install pandas.\"",
")",
"raise",
"InstallPandasException",
"(",
"error_message",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
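Usage sketches for get_dataframe; per the docstring, metric_name is required when passing a list of tickers:

    # Single ticker: all price columns, indexed by date.
    df = client.get_dataframe('GOOGL', startDate='2018-01-02',
                              endDate='2018-03-01')
    # Multiple tickers: one column per ticker for the chosen metric.
    panel = client.get_dataframe(['GOOGL', 'AAPL'], metric_name='adjClose',
                                 startDate='2018-01-02', endDate='2018-03-01')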
test
|
TiingoClient.get_news
|
Return list of news articles matching given search terms
https://api.tiingo.com/docs/tiingo/news
# Dates are in YYYY-MM-DD Format.
Args:
tickers [string] : List of unique Stock Tickers to search
tags [string] : List of topics tagged by Tiingo Algorithms
sources [string]: List of base urls to include as news sources
startDate, endDate [date]: Boundaries of news search window
limit (int): Max results returned. Default 100, max 1000
offset (int): Search results offset, used for paginating
sortBy (string): "publishedDate" OR (#TODO: UPDATE THIS)
|
tiingo/api.py
|
def get_news(self, tickers=[], tags=[], sources=[], startDate=None,
endDate=None, limit=100, offset=0, sortBy="publishedDate",
fmt='json'):
"""Return list of news articles matching given search terms
https://api.tiingo.com/docs/tiingo/news
# Dates are in YYYY-MM-DD Format.
Args:
tickers [string] : List of unique Stock Tickers to search
tags [string] : List of topics tagged by Tiingo Algorithms
sources [string]: List of base urls to include as news sources
startDate, endDate [date]: Boundaries of news search window
limit (int): Max results returned. Default 100, max 1000
offset (int): Search results offset, used for paginating
sortBy (string): "publishedDate" OR (#TODO: UPDATE THIS)
"""
url = "tiingo/news"
params = {
'limit': limit,
'offset': offset,
'sortBy': sortBy,
'tickers': tickers,
'sources': sources,
'tags': tags,
'startDate': startDate,
'endDate': endDate
}
response = self._request('GET', url, params=params)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return [dict_to_object(item, "NewsArticle") for item in data]
|
def get_news(self, tickers=[], tags=[], sources=[], startDate=None,
endDate=None, limit=100, offset=0, sortBy="publishedDate",
fmt='json'):
"""Return list of news articles matching given search terms
https://api.tiingo.com/docs/tiingo/news
# Dates are in YYYY-MM-DD Format.
Args:
tickers [string] : List of unique Stock Tickers to search
tags [string] : List of topics tagged by Tiingo Algorithms
sources [string]: List of base urls to include as news sources
startDate, endDate [date]: Boundaries of news search window
limit (int): Max results returned. Default 100, max 1000
offset (int): Search results offset, used for paginating
sortBy (string): "publishedDate" OR (#TODO: UPDATE THIS)
"""
url = "tiingo/news"
params = {
'limit': limit,
'offset': offset,
'sortBy': sortBy,
'tickers': tickers,
'sources': sources,
'tags': tags,
'startDate': startDate,
'endDate': endDate
}
response = self._request('GET', url, params=params)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return [dict_to_object(item, "NewsArticle") for item in data]
|
[
"Return",
"list",
"of",
"news",
"articles",
"matching",
"given",
"search",
"terms",
"https",
":",
"//",
"api",
".",
"tiingo",
".",
"com",
"/",
"docs",
"/",
"tiingo",
"/",
"news"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L279-L312
|
[
"def",
"get_news",
"(",
"self",
",",
"tickers",
"=",
"[",
"]",
",",
"tags",
"=",
"[",
"]",
",",
"sources",
"=",
"[",
"]",
",",
"startDate",
"=",
"None",
",",
"endDate",
"=",
"None",
",",
"limit",
"=",
"100",
",",
"offset",
"=",
"0",
",",
"sortBy",
"=",
"\"publishedDate\"",
",",
"fmt",
"=",
"'json'",
")",
":",
"url",
"=",
"\"tiingo/news\"",
"params",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
",",
"'sortBy'",
":",
"sortBy",
",",
"'tickers'",
":",
"tickers",
",",
"'sources'",
":",
"sources",
",",
"'tags'",
":",
"tags",
",",
"'startDate'",
":",
"startDate",
",",
"'endDate'",
":",
"endDate",
"}",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
",",
"params",
"=",
"params",
")",
"data",
"=",
"response",
".",
"json",
"(",
")",
"if",
"fmt",
"==",
"'json'",
":",
"return",
"data",
"elif",
"fmt",
"==",
"'object'",
":",
"return",
"[",
"dict_to_object",
"(",
"item",
",",
"\"NewsArticle\"",
")",
"for",
"item",
"in",
"data",
"]"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
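A usage sketch for get_news; the tag value is illustrative:

    articles = client.get_news(tickers=['GOOGL'], tags=['Technology'],
                               startDate='2018-01-02', endDate='2018-01-31',
                               limit=10)
    print(articles[0]['title'])  # field names follow the Tiingo news schema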
test
|
TiingoClient.get_bulk_news
|
Only available to institutional clients.
If ID is NOT provided, return array of available file_ids.
If ID is provided, returns a URL which you can use to download your
file, as well as some metadata about that file.
|
tiingo/api.py
|
def get_bulk_news(self, file_id=None, fmt='json'):
"""Only available to institutional clients.
If ID is NOT provided, return array of available file_ids.
        If ID is provided, returns a URL which you can use to download your
file, as well as some metadata about that file.
"""
if file_id:
url = "tiingo/news/bulk_download/{}".format(file_id)
else:
url = "tiingo/news/bulk_download"
response = self._request('GET', url)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return dict_to_object(data, "BulkNews")
|
def get_bulk_news(self, file_id=None, fmt='json'):
"""Only available to institutional clients.
If ID is NOT provided, return array of available file_ids.
        If ID is provided, returns a URL which you can use to download your
file, as well as some metadata about that file.
"""
if file_id:
url = "tiingo/news/bulk_download/{}".format(file_id)
else:
url = "tiingo/news/bulk_download"
response = self._request('GET', url)
data = response.json()
if fmt == 'json':
return data
elif fmt == 'object':
return dict_to_object(data, "BulkNews")
|
[
"Only",
"available",
"to",
"institutional",
"clients",
".",
"If",
"ID",
"is",
"NOT",
"provided",
"return",
"array",
"of",
"available",
"file_ids",
".",
"If",
"ID",
"is",
"provided",
"provides",
"URL",
"which",
"you",
"can",
"use",
"to",
"download",
"your",
"file",
"as",
"well",
"as",
"some",
"metadata",
"about",
"that",
"file",
"."
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/api.py#L314-L330
|
[
"def",
"get_bulk_news",
"(",
"self",
",",
"file_id",
"=",
"None",
",",
"fmt",
"=",
"'json'",
")",
":",
"if",
"file_id",
":",
"url",
"=",
"\"tiingo/news/bulk_download/{}\"",
".",
"format",
"(",
"file_id",
")",
"else",
":",
"url",
"=",
"\"tiingo/news/bulk_download\"",
"response",
"=",
"self",
".",
"_request",
"(",
"'GET'",
",",
"url",
")",
"data",
"=",
"response",
".",
"json",
"(",
")",
"if",
"fmt",
"==",
"'json'",
":",
"return",
"data",
"elif",
"fmt",
"==",
"'object'",
":",
"return",
"dict_to_object",
"(",
"data",
",",
"\"BulkNews\"",
")"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
test
|
RestClient._request
|
Make HTTP request and return response object
Args:
method (str): GET, POST, PUT, DELETE
url (str): path appended to the base_url to create request
**kwargs: passed directly to a requests.request object
|
tiingo/restclient.py
|
def _request(self, method, url, **kwargs):
"""Make HTTP request and return response object
Args:
method (str): GET, POST, PUT, DELETE
url (str): path appended to the base_url to create request
**kwargs: passed directly to a requests.request object
"""
resp = self._session.request(method,
'{}/{}'.format(self._base_url, url),
headers=self._headers,
**kwargs)
try:
resp.raise_for_status()
except HTTPError as e:
logging.error(resp.content)
raise RestClientError(e)
return resp
|
def _request(self, method, url, **kwargs):
"""Make HTTP request and return response object
Args:
method (str): GET, POST, PUT, DELETE
url (str): path appended to the base_url to create request
**kwargs: passed directly to a requests.request object
"""
resp = self._session.request(method,
'{}/{}'.format(self._base_url, url),
headers=self._headers,
**kwargs)
try:
resp.raise_for_status()
except HTTPError as e:
logging.error(resp.content)
raise RestClientError(e)
return resp
|
[
"Make",
"HTTP",
"request",
"and",
"return",
"response",
"object"
] |
hydrosquall/tiingo-python
|
python
|
https://github.com/hydrosquall/tiingo-python/blob/9bb98ca9d24f2e4db651cf0590e4b47184546482/tiingo/restclient.py#L39-L58
|
[
"def",
"_request",
"(",
"self",
",",
"method",
",",
"url",
",",
"*",
"*",
"kwargs",
")",
":",
"resp",
"=",
"self",
".",
"_session",
".",
"request",
"(",
"method",
",",
"'{}/{}'",
".",
"format",
"(",
"self",
".",
"_base_url",
",",
"url",
")",
",",
"headers",
"=",
"self",
".",
"_headers",
",",
"*",
"*",
"kwargs",
")",
"try",
":",
"resp",
".",
"raise_for_status",
"(",
")",
"except",
"HTTPError",
"as",
"e",
":",
"logging",
".",
"error",
"(",
"resp",
".",
"content",
")",
"raise",
"RestClientError",
"(",
"e",
")",
"return",
"resp"
] |
9bb98ca9d24f2e4db651cf0590e4b47184546482
|
test
|
HTTPClient.get_bearer_info
|
Get the application bearer token from client_id and client_secret.
|
spotify/http.py
|
async def get_bearer_info(self):
"""Get the application bearer token from client_id and client_secret."""
if self.client_id is None:
raise SpotifyException(_GET_BEARER_ERR % 'client_id')
elif self.client_secret is None:
raise SpotifyException(_GET_BEARER_ERR % 'client_secret')
token = b64encode(':'.join((self.client_id, self.client_secret)).encode())
kwargs = {
'url': 'https://accounts.spotify.com/api/token',
'data': {'grant_type': 'client_credentials'},
'headers': {'Authorization': 'Basic ' + token.decode()}
}
async with self._session.post(**kwargs) as resp:
return json.loads(await resp.text(encoding='utf-8'))
|
async def get_bearer_info(self):
"""Get the application bearer token from client_id and client_secret."""
if self.client_id is None:
raise SpotifyException(_GET_BEARER_ERR % 'client_id')
elif self.client_secret is None:
raise SpotifyException(_GET_BEARER_ERR % 'client_secret')
token = b64encode(':'.join((self.client_id, self.client_secret)).encode())
kwargs = {
'url': 'https://accounts.spotify.com/api/token',
'data': {'grant_type': 'client_credentials'},
'headers': {'Authorization': 'Basic ' + token.decode()}
}
async with self._session.post(**kwargs) as resp:
return json.loads(await resp.text(encoding='utf-8'))
|
[
"Get",
"the",
"application",
"bearer",
"token",
"from",
"client_id",
"and",
"client_secret",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L83-L100
|
[
"async",
"def",
"get_bearer_info",
"(",
"self",
")",
":",
"if",
"self",
".",
"client_id",
"is",
"None",
":",
"raise",
"SpotifyException",
"(",
"_GET_BEARER_ERR",
"%",
"'client_id'",
")",
"elif",
"self",
".",
"client_secret",
"is",
"None",
":",
"raise",
"SpotifyException",
"(",
"_GET_BEARER_ERR",
"%",
"'client_secret'",
")",
"token",
"=",
"b64encode",
"(",
"':'",
".",
"join",
"(",
"(",
"self",
".",
"client_id",
",",
"self",
".",
"client_secret",
")",
")",
".",
"encode",
"(",
")",
")",
"kwargs",
"=",
"{",
"'url'",
":",
"'https://accounts.spotify.com/api/token'",
",",
"'data'",
":",
"{",
"'grant_type'",
":",
"'client_credentials'",
"}",
",",
"'headers'",
":",
"{",
"'Authorization'",
":",
"'Basic '",
"+",
"token",
".",
"decode",
"(",
")",
"}",
"}",
"async",
"with",
"self",
".",
"_session",
".",
"post",
"(",
"*",
"*",
"kwargs",
")",
"as",
"resp",
":",
"return",
"json",
".",
"loads",
"(",
"await",
"resp",
".",
"text",
"(",
"encoding",
"=",
"'utf-8'",
")",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
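The Basic credential built by get_bearer_info follows the standard OAuth2 client_credentials flow; an equivalent synchronous sketch with requests (hypothetical credentials):

    import requests
    from base64 import b64encode

    token = b64encode(b'my_client_id:my_client_secret').decode()
    resp = requests.post('https://accounts.spotify.com/api/token',
                         data={'grant_type': 'client_credentials'},
                         headers={'Authorization': 'Basic ' + token})
    bearer_info = resp.json()  # includes 'access_token' among other fields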
test
|
HTTPClient.request
|
Make a request to the spotify API with the current bearer credentials.
Parameters
----------
route : Union[tuple[str, str], Route]
A tuple of the method and url or a :class:`Route` object.
kwargs : Any
keyword arguments to pass into :class:`aiohttp.ClientSession.request`
|
spotify/http.py
|
async def request(self, route, **kwargs):
"""Make a request to the spotify API with the current bearer credentials.
Parameters
----------
route : Union[tuple[str, str], Route]
A tuple of the method and url or a :class:`Route` object.
kwargs : Any
keyword arguments to pass into :class:`aiohttp.ClientSession.request`
"""
if isinstance(route, tuple):
method, url = route
else:
method = route.method
url = route.url
if self.bearer_info is None:
self.bearer_info = bearer_info = await self.get_bearer_info()
access_token = bearer_info['access_token']
else:
access_token = self.bearer_info['access_token']
headers = {
'Authorization': 'Bearer ' + access_token,
'Content-Type': kwargs.get('content_type', 'application/json'),
**kwargs.pop('headers', {})
}
for _ in range(self.RETRY_AMOUNT):
r = await self._session.request(method, url, headers=headers, **kwargs)
try:
status = r.status
try:
data = json.loads(await r.text(encoding='utf-8'))
except json.decoder.JSONDecodeError:
data = {}
if 300 > status >= 200:
return data
if status == 401:
self.bearer_info = bearer_info = await self.get_bearer_info()
headers['Authorization'] = 'Bearer ' + bearer_info['access_token']
continue
if status == 429:
# we're being rate limited.
amount = r.headers.get('Retry-After')
await asyncio.sleep(int(amount), loop=self.loop)
continue
if status in (502, 503):
# unconditional retry
continue
if status == 403:
raise Forbidden(r, data)
elif status == 404:
raise NotFound(r, data)
finally:
await r.release()
else:
raise HTTPException(r, data)
|
async def request(self, route, **kwargs):
"""Make a request to the spotify API with the current bearer credentials.
Parameters
----------
route : Union[tuple[str, str], Route]
A tuple of the method and url or a :class:`Route` object.
kwargs : Any
keyword arguments to pass into :class:`aiohttp.ClientSession.request`
"""
if isinstance(route, tuple):
method, url = route
else:
method = route.method
url = route.url
if self.bearer_info is None:
self.bearer_info = bearer_info = await self.get_bearer_info()
access_token = bearer_info['access_token']
else:
access_token = self.bearer_info['access_token']
headers = {
'Authorization': 'Bearer ' + access_token,
'Content-Type': kwargs.get('content_type', 'application/json'),
**kwargs.pop('headers', {})
}
for _ in range(self.RETRY_AMOUNT):
r = await self._session.request(method, url, headers=headers, **kwargs)
try:
status = r.status
try:
data = json.loads(await r.text(encoding='utf-8'))
except json.decoder.JSONDecodeError:
data = {}
if 300 > status >= 200:
return data
if status == 401:
self.bearer_info = bearer_info = await self.get_bearer_info()
headers['Authorization'] = 'Bearer ' + bearer_info['access_token']
continue
if status == 429:
# we're being rate limited.
amount = r.headers.get('Retry-After')
await asyncio.sleep(int(amount), loop=self.loop)
continue
if status in (502, 503):
# unconditional retry
continue
if status == 403:
raise Forbidden(r, data)
elif status == 404:
raise NotFound(r, data)
finally:
await r.release()
else:
raise HTTPException(r, data)
|
[
"Make",
"a",
"request",
"to",
"the",
"spotify",
"API",
"with",
"the",
"current",
"bearer",
"credentials",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L102-L165
|
[
"async",
"def",
"request",
"(",
"self",
",",
"route",
",",
"*",
"*",
"kwargs",
")",
":",
"if",
"isinstance",
"(",
"route",
",",
"tuple",
")",
":",
"method",
",",
"url",
"=",
"route",
"else",
":",
"method",
"=",
"route",
".",
"method",
"url",
"=",
"route",
".",
"url",
"if",
"self",
".",
"bearer_info",
"is",
"None",
":",
"self",
".",
"bearer_info",
"=",
"bearer_info",
"=",
"await",
"self",
".",
"get_bearer_info",
"(",
")",
"access_token",
"=",
"bearer_info",
"[",
"'access_token'",
"]",
"else",
":",
"access_token",
"=",
"self",
".",
"bearer_info",
"[",
"'access_token'",
"]",
"headers",
"=",
"{",
"'Authorization'",
":",
"'Bearer '",
"+",
"access_token",
",",
"'Content-Type'",
":",
"kwargs",
".",
"get",
"(",
"'content_type'",
",",
"'application/json'",
")",
",",
"*",
"*",
"kwargs",
".",
"pop",
"(",
"'headers'",
",",
"{",
"}",
")",
"}",
"for",
"_",
"in",
"range",
"(",
"self",
".",
"RETRY_AMOUNT",
")",
":",
"r",
"=",
"await",
"self",
".",
"_session",
".",
"request",
"(",
"method",
",",
"url",
",",
"headers",
"=",
"headers",
",",
"*",
"*",
"kwargs",
")",
"try",
":",
"status",
"=",
"r",
".",
"status",
"try",
":",
"data",
"=",
"json",
".",
"loads",
"(",
"await",
"r",
".",
"text",
"(",
"encoding",
"=",
"'utf-8'",
")",
")",
"except",
"json",
".",
"decoder",
".",
"JSONDecodeError",
":",
"data",
"=",
"{",
"}",
"if",
"300",
">",
"status",
">=",
"200",
":",
"return",
"data",
"if",
"status",
"==",
"401",
":",
"self",
".",
"bearer_info",
"=",
"bearer_info",
"=",
"await",
"self",
".",
"get_bearer_info",
"(",
")",
"headers",
"[",
"'Authorization'",
"]",
"=",
"'Bearer '",
"+",
"bearer_info",
"[",
"'access_token'",
"]",
"continue",
"if",
"status",
"==",
"429",
":",
"# we're being rate limited.",
"amount",
"=",
"r",
".",
"headers",
".",
"get",
"(",
"'Retry-After'",
")",
"await",
"asyncio",
".",
"sleep",
"(",
"int",
"(",
"amount",
")",
",",
"loop",
"=",
"self",
".",
"loop",
")",
"continue",
"if",
"status",
"in",
"(",
"502",
",",
"503",
")",
":",
"# unconditional retry",
"continue",
"if",
"status",
"==",
"403",
":",
"raise",
"Forbidden",
"(",
"r",
",",
"data",
")",
"elif",
"status",
"==",
"404",
":",
"raise",
"NotFound",
"(",
"r",
",",
"data",
")",
"finally",
":",
"await",
"r",
".",
"release",
"(",
")",
"else",
":",
"raise",
"HTTPException",
"(",
"r",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
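request retries up to RETRY_AMOUNT times, refreshing the bearer token on 401, sleeping per Retry-After on 429, and retrying unconditionally on 502/503. A usage sketch of the two accepted route forms, inside a coroutine with an already-constructed HTTPClient named http (the album ID is illustrative):

    # Plain (method, url) tuple:
    data = await http.request(('GET', 'https://api.spotify.com/v1/browse/new-releases'))

    # Or a Route object, as the wrapper methods below do:
    album = await http.request(Route('GET', '/albums/{spotify_id}',
                                     spotify_id='4aawyAB9vmqN3uQ7FjRGTy'))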
test
|
HTTPClient.album
|
Get a spotify album by its ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
album : Dict
The album object.
|
spotify/http.py
|
def album(self, spotify_id, market='US'):
"""Get a spotify album by its ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
album : Dict
The album object.
"""
route = Route('GET', '/albums/{spotify_id}', spotify_id=spotify_id)
payload = {}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
def album(self, spotify_id, market='US'):
"""Get a spotify album by its ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
album : Dict
The album object.
"""
route = Route('GET', '/albums/{spotify_id}', spotify_id=spotify_id)
payload = {}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
[
"Get",
"a",
"spotify",
"album",
"by",
"its",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L171-L192
|
[
"def",
"album",
"(",
"self",
",",
"spotify_id",
",",
"market",
"=",
"'US'",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/albums/{spotify_id}'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"payload",
"=",
"{",
"}",
"if",
"market",
":",
"payload",
"[",
"'market'",
"]",
"=",
"market",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.album_tracks
|
Get an album's tracks by its ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
    The offset from which Spotify should start yielding results.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
|
spotify/http.py
|
def album_tracks(self, spotify_id, limit=20, offset=0, market='US'):
"""Get an albums tracks by an ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
        offset : Optional[int]
            The offset from which Spotify should start yielding results.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/albums/{spotify_id}/tracks', spotify_id=spotify_id)
payload = {'limit': limit, 'offset': offset}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
def album_tracks(self, spotify_id, limit=20, offset=0, market='US'):
"""Get an albums tracks by an ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/albums/{spotify_id}/tracks', spotify_id=spotify_id)
payload = {'limit': limit, 'offset': offset}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
[
"Get",
"an",
"albums",
"tracks",
"by",
"an",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L194-L214
|
[
"def",
"album_tracks",
"(",
"self",
",",
"spotify_id",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
",",
"market",
"=",
"'US'",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/albums/{spotify_id}/tracks'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"market",
":",
"payload",
"[",
"'market'",
"]",
"=",
"market",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
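Since the endpoint pages through limit/offset, a typical consumer loops until the reported total is reached; a sketch assuming an HTTPClient instance `http` (mirroring the pagination pattern used by Artist.get_all_albums later in this file set):

async def all_album_tracks(http, spotify_id):
    tracks, offset = [], 0
    while True:
        page = await http.album_tracks(spotify_id, limit=50, offset=offset)
        tracks.extend(page['items'])
        if len(tracks) >= page['total']:
            return tracks
        offset += 50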
test
|
HTTPClient.albums
|
Get multiple spotify albums by their IDs.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
|
spotify/http.py
|
def albums(self, spotify_ids, market='US'):
"""Get a spotify album by its ID.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/albums/')
payload = {'ids': spotify_ids}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
def albums(self, spotify_ids, market='US'):
"""Get a spotify album by its ID.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search by.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/albums/')
payload = {'ids': spotify_ids}
if market:
payload['market'] = market
return self.request(route, params=payload)
|
[
"Get",
"a",
"spotify",
"album",
"by",
"its",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L216-L232
|
[
"def",
"albums",
"(",
"self",
",",
"spotify_ids",
",",
"market",
"=",
"'US'",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/albums/'",
")",
"payload",
"=",
"{",
"'ids'",
":",
"spotify_ids",
"}",
"if",
"market",
":",
"payload",
"[",
"'market'",
"]",
"=",
"market",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.artist
|
Get a spotify artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
|
spotify/http.py
|
def artist(self, spotify_id):
"""Get a spotify artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
"""
route = Route('GET', '/artists/{spotify_id}', spotify_id=spotify_id)
return self.request(route)
|
def artist(self, spotify_id):
"""Get a spotify artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
"""
route = Route('GET', '/artists/{spotify_id}', spotify_id=spotify_id)
return self.request(route)
|
[
"Get",
"a",
"spotify",
"artist",
"by",
"their",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L234-L243
|
[
"def",
"artist",
"(",
"self",
",",
"spotify_id",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/artists/{spotify_id}'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"return",
"self",
".",
"request",
"(",
"route",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.artist_albums
|
Get an artist's albums by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
|
spotify/http.py
|
def artist_albums(self, spotify_id, include_groups=None, limit=20, offset=0, market='US'):
"""Get an artists tracks by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/artists/{spotify_id}/albums', spotify_id=spotify_id)
payload = {'limit': limit, 'offset': offset}
if include_groups:
payload['include_groups'] = include_groups
if market:
payload['market'] = market
return self.request(route, params=payload)
|
def artist_albums(self, spotify_id, include_groups=None, limit=20, offset=0, market='US'):
"""Get an artists tracks by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
"""
route = Route('GET', '/artists/{spotify_id}/albums', spotify_id=spotify_id)
payload = {'limit': limit, 'offset': offset}
if include_groups:
payload['include_groups'] = include_groups
if market:
payload['market'] = market
return self.request(route, params=payload)
|
[
"Get",
"an",
"artists",
"tracks",
"by",
"their",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L245-L270
|
[
"def",
"artist_albums",
"(",
"self",
",",
"spotify_id",
",",
"include_groups",
"=",
"None",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
",",
"market",
"=",
"'US'",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/artists/{spotify_id}/albums'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"include_groups",
":",
"payload",
"[",
"'include_groups'",
"]",
"=",
"include_groups",
"if",
"market",
":",
"payload",
"[",
"'market'",
"]",
"=",
"market",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
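A hypothetical call showing the optional filters above; the comma-separated 'album,single' value follows the Web API's convention for include_groups and is an assumption, not something this excerpt defines:

async def recent_artist_albums(http, artist_id):
    data = await http.artist_albums(artist_id, include_groups='album,single',
                                    limit=10, market='US')
    return [item['name'] for item in data['items']]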
test
|
HTTPClient.artist_top_tracks
|
Get an artist's top tracks per country with their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
country : COUNTRY_TP
COUNTRY
|
spotify/http.py
|
def artist_top_tracks(self, spotify_id, country):
"""Get an artists top tracks per country with their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/artists/{spotify_id}/top-tracks', spotify_id=spotify_id)
payload = {'country': country}
return self.request(route, params=payload)
|
def artist_top_tracks(self, spotify_id, country):
"""Get an artists top tracks per country with their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/artists/{spotify_id}/top-tracks', spotify_id=spotify_id)
payload = {'country': country}
return self.request(route, params=payload)
|
[
"Get",
"an",
"artists",
"top",
"tracks",
"per",
"country",
"with",
"their",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L272-L284
|
[
"def",
"artist_top_tracks",
"(",
"self",
",",
"spotify_id",
",",
"country",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/artists/{spotify_id}/top-tracks'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"payload",
"=",
"{",
"'country'",
":",
"country",
"}",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.artist_related_artists
|
Get related artists for an artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
|
spotify/http.py
|
def artist_related_artists(self, spotify_id):
"""Get related artists for an artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
"""
route = Route('GET', '/artists/{spotify_id}/related-artists', spotify_id=spotify_id)
return self.request(route)
|
def artist_related_artists(self, spotify_id):
"""Get related artists for an artist by their ID.
Parameters
----------
spotify_id : str
The spotify_id to search by.
"""
route = Route('GET', '/artists/{spotify_id}/related-artists', spotify_id=spotify_id)
return self.request(route)
|
[
"Get",
"related",
"artists",
"for",
"an",
"artist",
"by",
"their",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L286-L295
|
[
"def",
"artist_related_artists",
"(",
"self",
",",
"spotify_id",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/artists/{spotify_id}/related-artists'",
",",
"spotify_id",
"=",
"spotify_id",
")",
"return",
"self",
".",
"request",
"(",
"route",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.artists
|
Get spotify artists by their IDs.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search with.
|
spotify/http.py
|
def artists(self, spotify_ids):
"""Get a spotify artists by their IDs.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search with.
"""
route = Route('GET', '/artists')
payload = {'ids': spotify_ids}
return self.request(route, params=payload)
|
def artists(self, spotify_ids):
"""Get a spotify artists by their IDs.
Parameters
----------
spotify_ids : List[str]
The spotify_ids to search with.
"""
route = Route('GET', '/artists')
payload = {'ids': spotify_ids}
return self.request(route, params=payload)
|
[
"Get",
"a",
"spotify",
"artists",
"by",
"their",
"IDs",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L297-L307
|
[
"def",
"artists",
"(",
"self",
",",
"spotify_ids",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/artists'",
")",
"payload",
"=",
"{",
"'ids'",
":",
"spotify_ids",
"}",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.category
|
Get a single category used to tag items in Spotify.
Parameters
----------
category_id : str
The Spotify category ID for the category.
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
|
spotify/http.py
|
def category(self, category_id, country=None, locale=None):
"""Get a single category used to tag items in Spotify.
Parameters
----------
category_id : str
The Spotify category ID for the category.
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
"""
route = Route('GET', '/browse/categories/{category_id}', category_id=category_id)
payload = {}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
return self.request(route, params=payload)
|
def category(self, category_id, country=None, locale=None):
"""Get a single category used to tag items in Spotify.
Parameters
----------
category_id : str
The Spotify category ID for the category.
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
"""
route = Route('GET', '/browse/categories/{category_id}', category_id=category_id)
payload = {}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
return self.request(route, params=payload)
|
[
"Get",
"a",
"single",
"category",
"used",
"to",
"tag",
"items",
"in",
"Spotify",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L309-L330
|
[
"def",
"category",
"(",
"self",
",",
"category_id",
",",
"country",
"=",
"None",
",",
"locale",
"=",
"None",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/browse/categories/{category_id}'",
",",
"category_id",
"=",
"category_id",
")",
"payload",
"=",
"{",
"}",
"if",
"country",
":",
"payload",
"[",
"'country'",
"]",
"=",
"country",
"if",
"locale",
":",
"payload",
"[",
"'locale'",
"]",
"=",
"locale",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.category_playlists
|
Get a list of Spotify playlists tagged with a particular category.
Parameters
----------
category_id : str
The Spotify category ID for the category.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
|
spotify/http.py
|
def category_playlists(self, category_id, limit=20, offset=0, country=None):
"""Get a list of Spotify playlists tagged with a particular category.
Parameters
----------
category_id : str
The Spotify category ID for the category.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/browse/categories/{category_id}/playlists', category_id=category_id)
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
return self.request(route, params=payload)
|
def category_playlists(self, category_id, limit=20, offset=0, country=None):
"""Get a list of Spotify playlists tagged with a particular category.
Parameters
----------
category_id : str
The Spotify category ID for the category.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/browse/categories/{category_id}/playlists', category_id=category_id)
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
return self.request(route, params=payload)
|
[
"Get",
"a",
"list",
"of",
"Spotify",
"playlists",
"tagged",
"with",
"a",
"particular",
"category",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L332-L352
|
[
"def",
"category_playlists",
"(",
"self",
",",
"category_id",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
",",
"country",
"=",
"None",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/browse/categories/{category_id}/playlists'",
",",
"category_id",
"=",
"category_id",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"country",
":",
"payload",
"[",
"'country'",
"]",
"=",
"country",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.categories
|
Get a list of categories used to tag items in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
|
spotify/http.py
|
def categories(self, limit=20, offset=0, country=None, locale=None):
"""Get a list of categories used to tag items in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
"""
route = Route('GET', '/browse/categories')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
return self.request(route, params=payload)
|
def categories(self, limit=20, offset=0, country=None, locale=None):
"""Get a list of categories used to tag items in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
locale : LOCALE_TP
LOCALE
"""
route = Route('GET', '/browse/categories')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
return self.request(route, params=payload)
|
[
"Get",
"a",
"list",
"of",
"categories",
"used",
"to",
"tag",
"items",
"in",
"Spotify",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L354-L377
|
[
"def",
"categories",
"(",
"self",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
",",
"country",
"=",
"None",
",",
"locale",
"=",
"None",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/browse/categories'",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"country",
":",
"payload",
"[",
"'country'",
"]",
"=",
"country",
"if",
"locale",
":",
"payload",
"[",
"'locale'",
"]",
"=",
"locale",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.featured_playlists
|
Get a list of Spotify featured playlists.
Parameters
----------
locale : LOCALE_TP
LOCALE
country : COUNTRY_TP
COUNTRY
timestamp : TIMESTAMP_TP
TIMESTAMP
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
|
spotify/http.py
|
def featured_playlists(self, locale=None, country=None, timestamp=None, limit=20, offset=0):
"""Get a list of Spotify featured playlists.
Parameters
----------
locale : LOCALE_TP
LOCALE
country : COUNTRY_TP
COUNTRY
timestamp : TIMESTAMP_TP
TIMESTAMP
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
"""
route = Route('GET', '/browse/featured-playlists')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
if timestamp:
payload['timestamp'] = timestamp
return self.request(route, params=payload)
|
def featured_playlists(self, locale=None, country=None, timestamp=None, limit=20, offset=0):
"""Get a list of Spotify featured playlists.
Parameters
----------
locale : LOCALE_TP
LOCALE
country : COUNTRY_TP
COUNTRY
timestamp : TIMESTAMP_TP
TIMESTAMP
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
"""
route = Route('GET', '/browse/featured-playlists')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
if locale:
payload['locale'] = locale
if timestamp:
payload['timestamp'] = timestamp
return self.request(route, params=payload)
|
[
"Get",
"a",
"list",
"of",
"Spotify",
"featured",
"playlists",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L379-L407
|
[
"def",
"featured_playlists",
"(",
"self",
",",
"locale",
"=",
"None",
",",
"country",
"=",
"None",
",",
"timestamp",
"=",
"None",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/browse/featured-playlists'",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"country",
":",
"payload",
"[",
"'country'",
"]",
"=",
"country",
"if",
"locale",
":",
"payload",
"[",
"'locale'",
"]",
"=",
"locale",
"if",
"timestamp",
":",
"payload",
"[",
"'timestamp'",
"]",
"=",
"timestamp",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.new_releases
|
Get a list of new album releases featured in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
|
spotify/http.py
|
def new_releases(self, *, country=None, limit=20, offset=0):
"""Get a list of new album releases featured in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/browse/new-releases')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
return self.request(route, params=payload)
|
def new_releases(self, *, country=None, limit=20, offset=0):
"""Get a list of new album releases featured in Spotify.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The index of the first item to return. Default: 0
country : COUNTRY_TP
COUNTRY
"""
route = Route('GET', '/browse/new-releases')
payload = {'limit': limit, 'offset': offset}
if country:
payload['country'] = country
return self.request(route, params=payload)
|
[
"Get",
"a",
"list",
"of",
"new",
"album",
"releases",
"featured",
"in",
"Spotify",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L409-L427
|
[
"def",
"new_releases",
"(",
"self",
",",
"*",
",",
"country",
"=",
"None",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/browse/new-releases'",
")",
"payload",
"=",
"{",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"if",
"country",
":",
"payload",
"[",
"'country'",
"]",
"=",
"country",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
HTTPClient.recommendations
|
Get Recommendations Based on Seeds.
Parameters
----------
seed_artists : str
A comma separated list of Spotify IDs for seed artists. Up to 5 seed values may be provided.
seed_genres : str
A comma separated list of any genres in the set of available genre seeds. Up to 5 seed values may be provided.
seed_tracks : str
A comma separated list of Spotify IDs for a seed track. Up to 5 seed values may be provided.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
max_* : Optional[Keyword arguments]
For each tunable track attribute, a hard ceiling on the selected track attribute’s value can be provided.
min_* : Optional[Keyword arguments]
For each tunable track attribute, a hard floor on the selected track attribute’s value can be provided.
target_* : Optional[Keyword arguments]
For each of the tunable track attributes (below) a target value may be provided.
|
spotify/http.py
|
def recommendations(self, seed_artists, seed_genres, seed_tracks, *, limit=20, market=None, **filters):
"""Get Recommendations Based on Seeds.
Parameters
----------
seed_artists : str
A comma separated list of Spotify IDs for seed artists. Up to 5 seed values may be provided.
seed_genres : str
A comma separated list of any genres in the set of available genre seeds. Up to 5 seed values may be provided.
seed_tracks : str
A comma separated list of Spotify IDs for a seed track. Up to 5 seed values may be provided.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
max_* : Optional[Keyword arguments]
For each tunable track attribute, a hard ceiling on the selected track attribute’s value can be provided.
min_* : Optional[Keyword arguments]
For each tunable track attribute, a hard floor on the selected track attribute’s value can be provided.
target_* : Optional[Keyword arguments]
For each of the tunable track attributes (below) a target value may be provided.
"""
route = Route('GET', '/recommendations')
payload = {'seed_artists': seed_artists, 'seed_genres': seed_genres, 'seed_tracks': seed_tracks, 'limit': limit}
if market:
payload['market'] = market
if filters:
payload.update(filters)
return self.request(route, params=payload)
|
def recommendations(self, seed_artists, seed_genres, seed_tracks, *, limit=20, market=None, **filters):
"""Get Recommendations Based on Seeds.
Parameters
----------
seed_artists : str
A comma separated list of Spotify IDs for seed artists. Up to 5 seed values may be provided.
seed_genres : str
A comma separated list of any genres in the set of available genre seeds. Up to 5 seed values may be provided.
seed_tracks : str
A comma separated list of Spotify IDs for a seed track. Up to 5 seed values may be provided.
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
max_* : Optional[Keyword arguments]
For each tunable track attribute, a hard ceiling on the selected track attribute’s value can be provided.
min_* : Optional[Keyword arguments]
For each tunable track attribute, a hard floor on the selected track attribute’s value can be provided.
target_* : Optional[Keyword arguments]
For each of the tunable track attributes (below) a target value may be provided.
"""
route = Route('GET', '/recommendations')
payload = {'seed_artists': seed_artists, 'seed_genres': seed_genres, 'seed_tracks': seed_tracks, 'limit': limit}
if market:
payload['market'] = market
if filters:
payload.update(filters)
return self.request(route, params=payload)
|
[
"Get",
"Recommendations",
"Based",
"on",
"Seeds",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L429-L460
|
[
"def",
"recommendations",
"(",
"self",
",",
"seed_artists",
",",
"seed_genres",
",",
"seed_tracks",
",",
"*",
",",
"limit",
"=",
"20",
",",
"market",
"=",
"None",
",",
"*",
"*",
"filters",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/recommendations'",
")",
"payload",
"=",
"{",
"'seed_artists'",
":",
"seed_artists",
",",
"'seed_genres'",
":",
"seed_genres",
",",
"'seed_tracks'",
":",
"seed_tracks",
",",
"'limit'",
":",
"limit",
"}",
"if",
"market",
":",
"payload",
"[",
"'market'",
"]",
"=",
"market",
"if",
"filters",
":",
"payload",
".",
"update",
"(",
"filters",
")",
"return",
"self",
".",
"request",
"(",
"route",
",",
"param",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
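A hypothetical call (with the params= fix above applied); the min_*/max_*/target_* keywords land in **filters and are forwarded verbatim as query parameters, and the seed IDs are only examples:

async def upbeat_recommendations(http):
    return await http.recommendations(
        seed_artists='4NHQUGzhtTLFvgF5SZesLK',   # example seed IDs
        seed_genres='dance,pop',
        seed_tracks='0c6xIDDpzE81m2q797ordA',
        limit=10,
        market='US',
        target_energy=0.8,                        # tunable attribute filters
        min_tempo=110,
    )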
test
|
HTTPClient.following_artists_or_users
|
Check to see if the current user is following one or more artists or other Spotify users.
Parameters
----------
ids : List[str]
A comma-separated list of the artist or the user Spotify IDs to check.
A maximum of 50 IDs can be sent in one request.
type : Optional[str]
The ID type: either "artist" or "user".
Default: "artist"
|
spotify/http.py
|
def following_artists_or_users(self, ids, *, type='artist'):
"""Check to see if the current user is following one or more artists or other Spotify users.
Parameters
----------
ids : List[str]
A comma-separated list of the artist or the user Spotify IDs to check.
A maximum of 50 IDs can be sent in one request.
type : Optional[str]
The ID type: either "artist" or "user".
Default: "artist"
"""
route = Route('GET', '/me/following/contains')
payload = {'ids': ids, 'type': type}
return self.request(route, params=payload)
|
def following_artists_or_users(self, ids, *, type='artist'):
"""Check to see if the current user is following one or more artists or other Spotify users.
Parameters
----------
ids : List[str]
A comma-separated list of the artist or the user Spotify IDs to check.
A maximum of 50 IDs can be sent in one request.
type : Optional[str]
The ID type: either "artist" or "user".
Default: "artist"
"""
route = Route('GET', '/me/following/contains')
payload = {'ids': ids, 'type': type}
return self.request(route, params=payload)
|
[
"Check",
"to",
"see",
"if",
"the",
"current",
"user",
"is",
"following",
"one",
"or",
"more",
"artists",
"or",
"other",
"Spotify",
"users",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/http.py#L462-L477
|
[
"def",
"following_artists_or_users",
"(",
"self",
",",
"ids",
",",
"*",
",",
"type",
"=",
"'artist'",
")",
":",
"route",
"=",
"Route",
"(",
"'GET'",
",",
"'/me/following/contains'",
")",
"payload",
"=",
"{",
"'ids'",
":",
"ids",
",",
"'type'",
":",
"type",
"}",
"return",
"self",
".",
"request",
"(",
"route",
",",
"params",
"=",
"payload",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Artist.get_albums
|
Get the albums of a Spotify artist.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
|
spotify/models/artist.py
|
async def get_albums(self, *, limit: Optional[int] = 20, offset: Optional[int] = 0, include_groups=None, market: Optional[str] = None) -> List[Album]:
"""Get the albums of a Spotify artist.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
"""
from .album import Album
data = await self.__client.http.artist_albums(self.id, limit=limit, offset=offset, include_groups=include_groups, market=market)
return list(Album(self.__client, item) for item in data['items'])
|
async def get_albums(self, *, limit: Optional[int] = 20, offset: Optional[int] = 0, include_groups=None, market: Optional[str] = None) -> List[Album]:
"""Get the albums of a Spotify artist.
Parameters
----------
limit : Optional[int]
The maximum number of items to return. Default: 20. Minimum: 1. Maximum: 50.
offset : Optional[int]
The offset of which Spotify should start yielding from.
include_groups : INCLUDE_GROUPS_TP
INCLUDE_GROUPS
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
"""
from .album import Album
data = await self.__client.http.artist_albums(self.id, limit=limit, offset=offset, include_groups=include_groups, market=market)
return list(Album(self.__client, item) for item in data['items'])
|
[
"Get",
"the",
"albums",
"of",
"a",
"Spotify",
"artist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/artist.py#L52-L74
|
[
"async",
"def",
"get_albums",
"(",
"self",
",",
"*",
",",
"limit",
":",
"Optional",
"[",
"int",
"]",
"=",
"20",
",",
"offset",
":",
"Optional",
"[",
"int",
"]",
"=",
"0",
",",
"include_groups",
"=",
"None",
",",
"market",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
")",
"->",
"List",
"[",
"Album",
"]",
":",
"from",
".",
"album",
"import",
"Album",
"data",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"artist_albums",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"limit",
",",
"offset",
"=",
"offset",
",",
"include_groups",
"=",
"include_groups",
",",
"market",
"=",
"market",
")",
"return",
"list",
"(",
"Album",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"data",
"[",
"'items'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Artist.get_all_albums
|
Loads all of the artist's albums; depending on how many the artist has, this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
|
spotify/models/artist.py
|
async def get_all_albums(self, *, market='US') -> List[Album]:
"""loads all of the artists albums, depending on how many the artist has this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
"""
from .album import Album
albums = []
offset = 0
total = await self.total_albums(market=market)
while len(albums) < total:
data = await self.__client.http.artist_albums(self.id, limit=50, offset=offset, market=market)
offset += 50
albums += list(Album(self.__client, item) for item in data['items'])
return albums
|
async def get_all_albums(self, *, market='US') -> List[Album]:
"""loads all of the artists albums, depending on how many the artist has this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
albums : List[Album]
The albums of the artist.
"""
from .album import Album
albums = []
offset = 0
total = await self.total_albums(market=market)
while len(albums) < total:
data = await self.__client.http.artist_albums(self.id, limit=50, offset=offset, market=market)
offset += 50
albums += list(Album(self.__client, item) for item in data['items'])
return albums
|
[
"loads",
"all",
"of",
"the",
"artists",
"albums",
"depending",
"on",
"how",
"many",
"the",
"artist",
"has",
"this",
"may",
"be",
"a",
"long",
"operation",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/artist.py#L76-L101
|
[
"async",
"def",
"get_all_albums",
"(",
"self",
",",
"*",
",",
"market",
"=",
"'US'",
")",
"->",
"List",
"[",
"Album",
"]",
":",
"from",
".",
"album",
"import",
"Album",
"albums",
"=",
"[",
"]",
"offset",
"=",
"0",
"total",
"=",
"await",
"self",
".",
"total_albums",
"(",
"market",
"=",
"market",
")",
"while",
"len",
"(",
"albums",
")",
"<",
"total",
":",
"data",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"artist_albums",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"50",
",",
"offset",
"=",
"offset",
",",
"market",
"=",
"market",
")",
"offset",
"+=",
"50",
"albums",
"+=",
"list",
"(",
"Album",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"data",
"[",
"'items'",
"]",
")",
"return",
"albums"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
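The method above primes the loop with total_albums() and then pages in chunks of 50. The same pattern, written as a generic helper (an assumption, not part of the library):

async def paginate(fetch_page, total, page_size=50):
    # fetch_page is any coroutine accepting limit=/offset= and returning
    # a payload with an 'items' list, e.g. a bound artist_albums call.
    items, offset = [], 0
    while len(items) < total:
        page = await fetch_page(limit=page_size, offset=offset)
        items += page['items']
        offset += page_size
    return items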
test
|
Artist.total_albums
|
Get the total amount of albums the artist has.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
total : int
The total amount of albums.
|
spotify/models/artist.py
|
async def total_albums(self, *, market: str = None) -> int:
"""get the total amout of tracks in the album.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
total : int
The total amount of albums.
"""
data = await self.__client.http.artist_albums(self.id, limit=1, offset=0, market=market)
return data['total']
|
async def total_albums(self, *, market: str = None) -> int:
"""get the total amout of tracks in the album.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code.
Returns
-------
total : int
The total amount of albums.
"""
data = await self.__client.http.artist_albums(self.id, limit=1, offset=0, market=market)
return data['total']
|
[
"get",
"the",
"total",
"amout",
"of",
"tracks",
"in",
"the",
"album",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/artist.py#L103-L117
|
[
"async",
"def",
"total_albums",
"(",
"self",
",",
"*",
",",
"market",
":",
"str",
"=",
"None",
")",
"->",
"int",
":",
"data",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"artist_albums",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"1",
",",
"offset",
"=",
"0",
",",
"market",
"=",
"market",
")",
"return",
"data",
"[",
"'total'",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Artist.top_tracks
|
Get Spotify catalog information about an artist’s top tracks by country.
Parameters
----------
country : str
The country to search for, it defaults to 'US'.
Returns
-------
tracks : List[Track]
The artist's top tracks.
|
spotify/models/artist.py
|
async def top_tracks(self, country: str = 'US') -> List[Track]:
"""Get Spotify catalog information about an artist’s top tracks by country.
Parameters
----------
country : str
The country to search for, it defaults to 'US'.
Returns
-------
tracks : List[Track]
The artist's top tracks.
"""
from .track import Track
top = await self.__client.http.artist_top_tracks(self.id, country=country)
return list(Track(self.__client, item) for item in top['tracks'])
|
async def top_tracks(self, country: str = 'US') -> List[Track]:
"""Get Spotify catalog information about an artist’s top tracks by country.
Parameters
----------
country : str
The country to search for, it defaults to 'US'.
Returns
-------
tracks : List[Track]
The artist's top tracks.
"""
from .track import Track
top = await self.__client.http.artist_top_tracks(self.id, country=country)
return list(Track(self.__client, item) for item in top['tracks'])
|
[
"Get",
"Spotify",
"catalog",
"information",
"about",
"an",
"artist’s",
"top",
"tracks",
"by",
"country",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/artist.py#L119-L135
|
[
"async",
"def",
"top_tracks",
"(",
"self",
",",
"country",
":",
"str",
"=",
"'US'",
")",
"->",
"List",
"[",
"Track",
"]",
":",
"from",
".",
"track",
"import",
"Track",
"top",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"artist_top_tracks",
"(",
"self",
".",
"id",
",",
"country",
"=",
"country",
")",
"return",
"list",
"(",
"Track",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"top",
"[",
"'tracks'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Artist.related_artists
|
Get Spotify catalog information about artists similar to a given artist.
Similarity is based on analysis of the Spotify community’s listening history.
Returns
-------
artists : List[Artist]
The artists deemed similar.
|
spotify/models/artist.py
|
async def related_artists(self) -> List[Artist]:
"""Get Spotify catalog information about artists similar to a given artist.
Similarity is based on analysis of the Spotify community’s listening history.
Returns
-------
artists : List[Artist]
The artists deemed similar.
"""
related = await self.__client.http.artist_related_artists(self.id)
return list(Artist(self.__client, item) for item in related['artists'])
|
async def related_artists(self) -> List[Artist]:
"""Get Spotify catalog information about artists similar to a given artist.
Similarity is based on analysis of the Spotify community’s listening history.
Returns
-------
artists : List[Artist]
The artists deemed similar.
"""
related = await self.__client.http.artist_related_artists(self.id)
return list(Artist(self.__client, item) for item in related['artists'])
|
[
"Get",
"Spotify",
"catalog",
"information",
"about",
"artists",
"similar",
"to",
"a",
"given",
"artist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/artist.py#L137-L148
|
[
"async",
"def",
"related_artists",
"(",
"self",
")",
"->",
"List",
"[",
"Artist",
"]",
":",
"related",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"artist_related_artists",
"(",
"self",
".",
"id",
")",
"return",
"list",
"(",
"Artist",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"related",
"[",
"'artists'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.currently_playing
|
Get the user's currently playing track.
Returns
-------
context, track : Tuple[Context, Track]
A tuple of the context and track.
|
spotify/models/user.py
|
async def currently_playing(self) -> Tuple[Context, Track]:
"""Get the users currently playing track.
Returns
-------
context, track : Tuple[Context, Track]
A tuple of the context and track.
"""
data = await self.http.currently_playing()
if data.get('item'):
data['Context'] = Context(data.get('context'))
data['item'] = Track(self.__client, data.get('item'))
return data
|
async def currently_playing(self) -> Tuple[Context, Track]:
"""Get the users currently playing track.
Returns
-------
context, track : Tuple[Context, Track]
A tuple of the context and track.
"""
data = await self.http.currently_playing()
if data.get('item'):
data['Context'] = Context(data.get('context'))
data['item'] = Track(self.__client, data.get('item'))
return data
|
[
"Get",
"the",
"users",
"currently",
"playing",
"track",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L157-L171
|
[
"async",
"def",
"currently_playing",
"(",
"self",
")",
"->",
"Tuple",
"[",
"Context",
",",
"Track",
"]",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"currently_playing",
"(",
")",
"if",
"data",
".",
"get",
"(",
"'item'",
")",
":",
"data",
"[",
"'Context'",
"]",
"=",
"Context",
"(",
"data",
".",
"get",
"(",
"'context'",
")",
")",
"data",
"[",
"'item'",
"]",
"=",
"Track",
"(",
"self",
".",
"__client",
",",
"data",
".",
"get",
"(",
"'item'",
")",
")",
"return",
"data"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
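Note the method returns the payload dict itself, with data['item'] wrapped in a Track and a 'Context' key (capital C) added, rather than the (Context, Track) tuple the annotation suggests. A hypothetical consumer, assuming a spotify.User instance:

async def show_now_playing(user):
    data = await user.currently_playing()
    if data.get('item'):
        print('playing:', data['item'], 'in', data['Context'])
    else:
        print('nothing playing')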
test
|
User.get_player
|
Get information about the user's current playback.
Returns
-------
player : Player
A player object representing the current playback.
|
spotify/models/user.py
|
async def get_player(self) -> Player:
"""Get information about the users current playback.
Returns
-------
player : Player
A player object representing the current playback.
"""
self._player = player = Player(self.__client, self, await self.http.current_player())
return player
|
async def get_player(self) -> Player:
"""Get information about the users current playback.
Returns
-------
player : Player
A player object representing the current playback.
"""
self._player = player = Player(self.__client, self, await self.http.current_player())
return player
|
[
"Get",
"information",
"about",
"the",
"users",
"current",
"playback",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L174-L183
|
[
"async",
"def",
"get_player",
"(",
"self",
")",
"->",
"Player",
":",
"self",
".",
"_player",
"=",
"player",
"=",
"Player",
"(",
"self",
".",
"__client",
",",
"self",
",",
"await",
"self",
".",
"http",
".",
"current_player",
"(",
")",
")",
"return",
"player"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.get_devices
|
Get information about the user's available devices.
Returns
-------
devices : List[Device]
The devices the user has available.
|
spotify/models/user.py
|
async def get_devices(self) -> List[Device]:
"""Get information about the users avaliable devices.
Returns
-------
devices : List[Device]
The devices the user has available.
"""
data = await self.http.available_devices()
return [Device(item) for item in data['devices']]
|
async def get_devices(self) -> List[Device]:
"""Get information about the users avaliable devices.
Returns
-------
devices : List[Device]
The devices the user has available.
"""
data = await self.http.available_devices()
return [Device(item) for item in data['devices']]
|
[
"Get",
"information",
"about",
"the",
"users",
"avaliable",
"devices",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L186-L195
|
[
"async",
"def",
"get_devices",
"(",
"self",
")",
"->",
"List",
"[",
"Device",
"]",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"available_devices",
"(",
")",
"return",
"[",
"Device",
"(",
"item",
")",
"for",
"item",
"in",
"data",
"[",
"'devices'",
"]",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.recently_played
|
Get tracks from the current user's recently played tracks.
Returns
-------
playlist_history : List[Dict[str, Union[Track, Context, str]]]
A list of playlist history objects.
Each object is a dict with 'timestamp', 'track' and 'context' fields.
|
spotify/models/user.py
|
async def recently_played(self) -> List[Dict[str, Union[Track, Context, str]]]:
"""Get tracks from the current users recently played tracks.
Returns
-------
playlist_history : List[Dict[str, Union[Track, Context, str]]]
A list of playlist history objects.
Each object is a dict with 'timestamp', 'track' and 'context' fields.
"""
data = await self.http.recently_played()
f = lambda data: {'context': Context(data.get('context')), 'track': Track(self.__client, data.get('track'))}
# List[T] where T: {'track': Track, 'context': Context, 'timestamp': ISO8601}
return [{'timestamp': track['timestamp'], **f(track)} for track in data['items']]
|
async def recently_played(self) -> List[Dict[str, Union[Track, Context, str]]]:
"""Get tracks from the current users recently played tracks.
Returns
-------
playlist_history : List[Dict[str, Union[Track, Context, str]]]
A list of playlist history objects.
Each object is a dict with 'timestamp', 'track' and 'context' fields.
"""
data = await self.http.recently_played()
f = lambda data: {'context': Context(data.get('context')), 'track': Track(self.__client, data.get('track'))}
# List[T] where T: {'track': Track, 'context': Context, 'timestamp': ISO8601}
return [{'timestamp': track['timestamp'], **f(track)} for track in data['items']]
|
[
"Get",
"tracks",
"from",
"the",
"current",
"users",
"recently",
"played",
"tracks",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L198-L210
|
[
"async",
"def",
"recently_played",
"(",
"self",
")",
"->",
"List",
"[",
"Dict",
"[",
"str",
",",
"Union",
"[",
"Track",
",",
"Context",
",",
"str",
"]",
"]",
"]",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"recently_played",
"(",
")",
"f",
"=",
"lambda",
"data",
":",
"{",
"'context'",
":",
"Context",
"(",
"data",
".",
"get",
"(",
"'context'",
")",
")",
",",
"'track'",
":",
"Track",
"(",
"self",
".",
"__client",
",",
"data",
".",
"get",
"(",
"'track'",
")",
")",
"}",
"# List[T] where T: {'track': Track, 'content': Context: 'timestamp': ISO8601}",
"return",
"[",
"{",
"'timestamp'",
":",
"track",
"[",
"'timestamp'",
"]",
",",
"*",
"*",
"f",
"(",
"track",
")",
"}",
"for",
"track",
"in",
"data",
"[",
"'items'",
"]",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
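Each history entry produced above is a plain dict keyed by 'timestamp', 'track' and 'context'; a hypothetical consumer:

async def print_history(user):
    for entry in await user.recently_played():
        # entry['timestamp'] is the ISO8601 string from the API;
        # 'track' and 'context' are the wrapped model objects.
        print(entry['timestamp'], entry['track'])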
test
|
User.add_tracks
|
Add one or more tracks to a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to add to the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
|
spotify/models/user.py
|
async def add_tracks(self, playlist: Union[str, Playlist], *tracks) -> str:
"""Add one or more tracks to a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to add to the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
tracks = [str(track) for track in tracks]
data = await self.http.add_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
return data['snapshot_id']
|
async def add_tracks(self, playlist: Union[str, Playlist], *tracks) -> str:
"""Add one or more tracks to a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to add to the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
tracks = [str(track) for track in tracks]
data = await self.http.add_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
return data['snapshot_id']
|
[
"Add",
"one",
"or",
"more",
"tracks",
"to",
"a",
"user’s",
"playlist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L214-L231
|
[
"async",
"def",
"add_tracks",
"(",
"self",
",",
"playlist",
":",
"Union",
"[",
"str",
",",
"Playlist",
"]",
",",
"*",
"tracks",
")",
"->",
"str",
":",
"tracks",
"=",
"[",
"str",
"(",
"track",
")",
"for",
"track",
"in",
"tracks",
"]",
"data",
"=",
"await",
"self",
".",
"http",
".",
"add_playlist_tracks",
"(",
"self",
".",
"id",
",",
"str",
"(",
"playlist",
")",
",",
"tracks",
"=",
"','",
".",
"join",
"(",
"tracks",
")",
")",
"return",
"data",
"[",
"'snapshot_id'",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
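Because every track argument goes through str() before being comma-joined, callers may pass Track objects or bare IDs interchangeably. A hypothetical usage that keeps the returned snapshot id for later, snapshot-pinned edits:

async def append_tracks(user, playlist, *tracks):
    snapshot_id = await user.add_tracks(playlist, *tracks)
    return snapshot_id  # usable to pin later remove/reorder calls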
test
|
User.replace_tracks
|
Replace all the tracks in a playlist, overwriting its existing tracks.
This powerful request can be useful for replacing tracks, re-ordering existing tracks, or clearing the playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to place in the playlist
|
spotify/models/user.py
|
async def replace_tracks(self, playlist, *tracks) -> None:
"""Replace all the tracks in a playlist, overwriting its existing tracks.
This powerful request can be useful for replacing tracks, re-ordering existing tracks, or clearing the playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to place in the playlist
"""
tracks = [str(track) for track in tracks]
await self.http.replace_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
|
async def replace_tracks(self, playlist, *tracks) -> None:
"""Replace all the tracks in a playlist, overwriting its existing tracks.
This powerful request can be useful for replacing tracks, re-ordering existing tracks, or clearing the playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to place in the playlist
"""
tracks = [str(track) for track in tracks]
await self.http.replace_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
|
[
"Replace",
"all",
"the",
"tracks",
"in",
"a",
"playlist",
"overwriting",
"its",
"existing",
"tracks",
".",
"This",
"powerful",
"request",
"can",
"be",
"useful",
"for",
"replacing",
"tracks",
"re",
"-",
"ordering",
"existing",
"tracks",
"or",
"clearing",
"the",
"playlist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L233-L245
|
[
"async",
"def",
"replace_tracks",
"(",
"self",
",",
"playlist",
",",
"*",
"tracks",
")",
"->",
"str",
":",
"tracks",
"=",
"[",
"str",
"(",
"track",
")",
"for",
"track",
"in",
"tracks",
"]",
"await",
"self",
".",
"http",
".",
"replace_playlist_tracks",
"(",
"self",
".",
"id",
",",
"str",
"(",
"playlist",
")",
",",
"tracks",
"=",
"','",
".",
"join",
"(",
"tracks",
")",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.remove_tracks
|
Remove one or more tracks from a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to remove from the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
|
spotify/models/user.py
|
async def remove_tracks(self, playlist, *tracks):
"""Remove one or more tracks from a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to remove from the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
tracks = [str(track) for track in tracks]
data = await self.http.remove_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
return data['snapshot_id']
|
async def remove_tracks(self, playlist, *tracks):
"""Remove one or more tracks from a user’s playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
tracks : Sequence[Union[str, Track]]
Tracks to remove from the playlist
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
tracks = [str(track) for track in tracks]
data = await self.http.remove_playlist_tracks(self.id, str(playlist), tracks=','.join(tracks))
return data['snapshot_id']
|
[
"Remove",
"one",
"or",
"more",
"tracks",
"from",
"a",
"user’s",
"playlist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L247-L264
|
[
"async",
"def",
"remove_tracks",
"(",
"self",
",",
"playlist",
",",
"*",
"tracks",
")",
":",
"tracks",
"=",
"[",
"str",
"(",
"track",
")",
"for",
"track",
"in",
"tracks",
"]",
"data",
"=",
"await",
"self",
".",
"http",
".",
"remove_playlist_tracks",
"(",
"self",
".",
"id",
",",
"str",
"(",
"playlist",
")",
",",
"tracks",
"=",
"','",
".",
"join",
"(",
"tracks",
")",
")",
"return",
"data",
"[",
"'snapshot_id'",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.reorder_tracks
|
Reorder a track or a group of tracks in a playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
start : int
The position of the first track to be reordered.
insert_before : int
The position where the tracks should be inserted.
length : Optional[int]
The amount of tracks to be reordered. Defaults to 1 if not set.
snapshot_id : str
The playlist’s snapshot ID against which you want to make the changes.
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
|
spotify/models/user.py
|
async def reorder_tracks(self, playlist, start, insert_before, length=1, *, snapshot_id=None):
"""Reorder a track or a group of tracks in a playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
start : int
The position of the first track to be reordered.
insert_before : int
The position where the tracks should be inserted.
length : Optional[int]
The amount of tracks to be reordered. Defaults to 1 if not set.
snapshot_id : str
The playlist’s snapshot ID against which you want to make the changes.
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
data = await self.http.reorder_playlists_tracks(self.id, str(playlist), start, length, insert_before, snapshot_id=snapshot_id)
return data['snapshot_id']
|
async def reorder_tracks(self, playlist, start, insert_before, length=1, *, snapshot_id=None):
"""Reorder a track or a group of tracks in a playlist.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
start : int
The position of the first track to be reordered.
insert_before : int
The position where the tracks should be inserted.
length : Optional[int]
The amount of tracks to be reordered. Defaults to 1 if not set.
snapshot_id : str
The playlist’s snapshot ID against which you want to make the changes.
Returns
-------
snapshot_id : str
The snapshot id of the playlist.
"""
data = await self.http.reorder_playlists_tracks(self.id, str(playlist), start, length, insert_before, snapshot_id=snapshot_id)
return data['snapshot_id']
|
[
"Reorder",
"a",
"track",
"or",
"a",
"group",
"of",
"tracks",
"in",
"a",
"playlist",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L266-L288
|
[
"async",
"def",
"reorder_tracks",
"(",
"self",
",",
"playlist",
",",
"start",
",",
"insert_before",
",",
"length",
"=",
"1",
",",
"*",
",",
"snapshot_id",
"=",
"None",
")",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"reorder_playlists_tracks",
"(",
"self",
".",
"id",
",",
"str",
"(",
"playlist",
")",
",",
"start",
",",
"length",
",",
"insert_before",
",",
"snapshot_id",
"=",
"snapshot_id",
")",
"return",
"data",
"[",
"'snapshot_id'",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
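A hypothetical call illustrating the reorder semantics: the block of `length` tracks starting at `start` is moved so it sits before position `insert_before`:

async def move_block_to_top(user, playlist):
    # move tracks 5 and 6 to the top of the playlist
    return await user.reorder_tracks(playlist, start=5, insert_before=0, length=2)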
test
|
User.edit_playlist
|
Change a playlist’s name, public/private state, collaborative state, and description.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
name : Optional[str]
The new name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The new playlist description
|
spotify/models/user.py
|
async def edit_playlist(self, playlist, *, name=None, public=None, collaborative=None, description=None):
"""Change a playlist’s name and public/private, collaborative state and description.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
name : Optional[str]
The new name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The new playlist description
"""
data = {}
if name:
data['name'] = name
if public:
data['public'] = public
if collaborative:
data['collaborative'] = collaborative
if description:
data['description'] = description
await self.http.change_playlist_details(self.id, str(playlist), data)
|
async def edit_playlist(self, playlist, *, name=None, public=None, collaborative=None, description=None):
"""Change a playlist’s name and public/private, collaborative state and description.
Parameters
----------
playlist : Union[str, Playlist]
The playlist to modify
name : Optional[str]
The new name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The new playlist description
"""
data = {}
if name:
data['name'] = name
if public:
data['public'] = public
if collaborative:
data['collaborative'] = collaborative
if description:
data['description'] = description
await self.http.change_playlist_details(self.id, str(playlist), data)
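Editor's note: the truthiness checks above (`if public:`) silently drop explicit `False` values, so this method cannot switch a playlist back to private or non-collaborative. A corrected sketch (my variant, not the library's code) tests against None instead:

async def edit_playlist_fixed(self, playlist, *, name=None, public=None, collaborative=None, description=None):
    data = {}
    # `is not None` keeps explicit False values, unlike a bare truthiness check.
    if name is not None:
        data['name'] = name
    if public is not None:
        data['public'] = public
    if collaborative is not None:
        data['collaborative'] = collaborative
    if description is not None:
        data['description'] = description
    await self.http.change_playlist_details(self.id, str(playlist), data)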
|
[
"Change",
"a",
"playlist’s",
"name",
"and",
"public",
"/",
"private",
"collaborative",
"state",
"and",
"description",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L293-L324
|
[
"async",
"def",
"edit_playlist",
"(",
"self",
",",
"playlist",
",",
"*",
",",
"name",
"=",
"None",
",",
"public",
"=",
"None",
",",
"collaborative",
"=",
"None",
",",
"description",
"=",
"None",
")",
":",
"data",
"=",
"{",
"}",
"if",
"name",
":",
"data",
"[",
"'name'",
"]",
"=",
"name",
"if",
"public",
":",
"data",
"[",
"'public'",
"]",
"=",
"public",
"if",
"collaborative",
":",
"data",
"[",
"'collaborative'",
"]",
"=",
"collaborative",
"if",
"description",
":",
"data",
"[",
"'description'",
"]",
"=",
"description",
"await",
"self",
".",
"http",
".",
"change_playlist_details",
"(",
"self",
".",
"id",
",",
"str",
"(",
"playlist",
")",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.create_playlist
|
Create a playlist for a Spotify user.
Parameters
----------
name : str
The name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The playlist description
Returns
-------
playlist : Playlist
The playlist that was created.
|
spotify/models/user.py
|
async def create_playlist(self, name, *, public=True, collaborative=False, description=None):
"""Create a playlist for a Spotify user.
Parameters
----------
name : str
The name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The playlist description
Returns
-------
playlist : Playlist
The playlist that was created.
"""
data = {
'name': name,
'public': public,
'collaborative': collaborative
}
if description:
data['description'] = description
playlist_data = await self.http.create_playlist(self.id, data)
return Playlist(self.__client, playlist_data)
|
async def create_playlist(self, name, *, public=True, collaborative=False, description=None):
"""Create a playlist for a Spotify user.
Parameters
----------
name : str
The name of the playlist.
public : Optional[bool]
The public/private status of the playlist.
`True` for public, `False` for private.
collaborative : Optional[bool]
If `True`, the playlist will become collaborative and other users will be able to modify the playlist.
description : Optional[str]
The playlist description
Returns
-------
playlist : Playlist
The playlist that was created.
"""
data = {
'name': name,
'public': public,
'collaborative': collaborative
}
if description:
data['description'] = description
playlist_data = await self.http.create_playlist(self.id, data)
return Playlist(self.__client, playlist_data)
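Editor's sketch: one hedged usage example, with `user` assumed to be an authenticated spotify.User.

async def demo(user):
    # Spotify requires collaborative playlists to be non-public.
    playlist = await user.create_playlist('road trip', public=False, collaborative=True,
                                          description='shared queue for the drive')
    print(playlist.id)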
|
[
"Create",
"a",
"playlist",
"for",
"a",
"Spotify",
"user",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L327-L357
|
[
"async",
"def",
"create_playlist",
"(",
"self",
",",
"name",
",",
"*",
",",
"public",
"=",
"True",
",",
"collaborative",
"=",
"False",
",",
"description",
"=",
"None",
")",
":",
"data",
"=",
"{",
"'name'",
":",
"name",
",",
"'public'",
":",
"public",
",",
"'collaborative'",
":",
"collaborative",
"}",
"if",
"description",
":",
"data",
"[",
"'description'",
"]",
"=",
"description",
"playlist_data",
"=",
"await",
"self",
".",
"http",
".",
"create_playlist",
"(",
"self",
".",
"id",
",",
"data",
")",
"return",
"Playlist",
"(",
"self",
".",
"__client",
",",
"playlist_data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
User.get_playlists
|
Get the user's playlists from Spotify.
Parameters
----------
limit : Optional[int]
The limit on how many playlists to retrieve for this user (default is 20).
offset : Optional[int]
The offset from which the API should start in the playlists.
Returns
-------
playlists : List[Playlist]
A list of the user's playlists.
|
spotify/models/user.py
|
async def get_playlists(self, *, limit=20, offset=0):
"""get the users playlists from spotify.
Parameters
----------
limit : Optional[int]
The limit on how many playlists to retrieve for this user (default is 20).
offset : Optional[int]
The offset from where the api should start from in the playlists.
Returns
-------
playlists : List[Playlist]
A list of the users playlists.
"""
if hasattr(self, 'http'):
http = self.http
else:
http = self.__client.http
data = await http.get_playlists(self.id, limit=limit, offset=offset)
return [Playlist(self.__client, playlist_data) for playlist_data in data['items']]
|
async def get_playlists(self, *, limit=20, offset=0):
"""get the users playlists from spotify.
Parameters
----------
limit : Optional[int]
The limit on how many playlists to retrieve for this user (default is 20).
offset : Optional[int]
The offset from where the api should start from in the playlists.
Returns
-------
playlists : List[Playlist]
A list of the users playlists.
"""
if hasattr(self, 'http'):
http = self.http
else:
http = self.__client.http
data = await http.get_playlists(self.id, limit=limit, offset=offset)
return [Playlist(self.__client, playlist_data) for playlist_data in data['items']]
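Editor's sketch: the limit/offset pair above supports simple paging. `user` is an assumed placeholder.

async def all_playlists(user, page_size=50):
    # Walk the playlists page by page; a short page signals the end.
    playlists, offset = [], 0
    while True:
        page = await user.get_playlists(limit=page_size, offset=offset)
        playlists.extend(page)
        if len(page) < page_size:
            return playlists
        offset += page_size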
|
[
"get",
"the",
"users",
"playlists",
"from",
"spotify",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/user.py#L359-L380
|
[
"async",
"def",
"get_playlists",
"(",
"self",
",",
"*",
",",
"limit",
"=",
"20",
",",
"offset",
"=",
"0",
")",
":",
"if",
"hasattr",
"(",
"self",
",",
"'http'",
")",
":",
"http",
"=",
"self",
".",
"http",
"else",
":",
"http",
"=",
"self",
".",
"__client",
".",
"http",
"data",
"=",
"await",
"http",
".",
"get_playlists",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"limit",
",",
"offset",
"=",
"offset",
")",
"return",
"[",
"Playlist",
"(",
"self",
".",
"__client",
",",
"playlist_data",
")",
"for",
"playlist_data",
"in",
"data",
"[",
"'items'",
"]",
"]"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Album.get_tracks
|
Get the album's tracks from Spotify.
Parameters
----------
limit : Optional[int]
The limit on how many tracks to retrieve for this album (default is 20).
offset : Optional[int]
The offset from which the API should start in the tracks.
Returns
-------
tracks : List[Track]
The tracks of the album.
|
spotify/models/album.py
|
async def get_tracks(self, *, limit: Optional[int] = 20, offset: Optional[int] = 0) -> List[Track]:
"""get the albums tracks from spotify.
Parameters
----------
limit : Optional[int]
The limit on how many tracks to retrieve for this album (default is 20).
offset : Optional[int]
The offset from where the api should start from in the tracks.
Returns
-------
tracks : List[Track]
The tracks of the artist.
"""
data = await self.__client.http.album_tracks(self.id, limit=limit, offset=offset)
return list(Track(self.__client, item) for item in data['items'])
|
async def get_tracks(self, *, limit: Optional[int] = 20, offset: Optional[int] = 0) -> List[Track]:
"""get the albums tracks from spotify.
Parameters
----------
limit : Optional[int]
The limit on how many tracks to retrieve for this album (default is 20).
offset : Optional[int]
The offset from where the api should start from in the tracks.
Returns
-------
tracks : List[Track]
The tracks of the artist.
"""
data = await self.__client.http.album_tracks(self.id, limit=limit, offset=offset)
return list(Track(self.__client, item) for item in data['items'])
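Editor's sketch: usage example, with `album` assumed to be a spotify.Album obtained elsewhere (for instance via Client.get_album).

async def demo(album):
    # Fetch the second page of ten tracks.
    tracks = await album.get_tracks(limit=10, offset=10)
    for track in tracks:
        print(track.name)  # assuming Track exposes a `name` attribute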
|
[
"get",
"the",
"albums",
"tracks",
"from",
"spotify",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/album.py#L73-L89
|
[
"async",
"def",
"get_tracks",
"(",
"self",
",",
"*",
",",
"limit",
":",
"Optional",
"[",
"int",
"]",
"=",
"20",
",",
"offset",
":",
"Optional",
"[",
"int",
"]",
"=",
"0",
")",
"->",
"List",
"[",
"Track",
"]",
":",
"data",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"album_tracks",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"limit",
",",
"offset",
"=",
"offset",
")",
"return",
"list",
"(",
"Track",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"data",
"[",
"'items'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Album.get_all_tracks
|
Load all of the album's tracks; depending on how many the album has, this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
tracks : List[Track]
The tracks of the album.
|
spotify/models/album.py
|
async def get_all_tracks(self, *, market: Optional[str] = 'US') -> List[Track]:
"""loads all of the albums tracks, depending on how many the album has this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
tracks : List[Track]
The tracks of the artist.
"""
tracks = []
offset = 0
total = self.total_tracks or None
while True:
data = await self.__client.http.album_tracks(self.id, limit=50, offset=offset, market=market)
if total is None:
total = data['total']
offset += 50
tracks += list(Track(self.__client, item) for item in data['items'])
if len(tracks) >= total:
break
return tracks
|
async def get_all_tracks(self, *, market: Optional[str] = 'US') -> List[Track]:
"""loads all of the albums tracks, depending on how many the album has this may be a long operation.
Parameters
----------
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
tracks : List[Track]
The tracks of the artist.
"""
tracks = []
offset = 0
total = self.total_tracks or None
while True:
data = await self.__client.http.album_tracks(self.id, limit=50, offset=offset, market=market)
if total is None:
total = data['total']
offset += 50
tracks += list(Track(self.__client, item) for item in data['items'])
if len(tracks) >= total:
break
return tracks
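Editor's note: the loop above pages in fixed chunks of 50 (the maximum page size for album tracks) and stops once it has collected `total` items, taking the total from the first response when the album object did not carry one. Usage sketch, `album` assumed:

async def demo(album):
    # One call, potentially many underlying HTTP requests for large albums.
    tracks = await album.get_all_tracks(market='GB')
    print(len(tracks), 'tracks fetched')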
|
[
"loads",
"all",
"of",
"the",
"albums",
"tracks",
"depending",
"on",
"how",
"many",
"the",
"album",
"has",
"this",
"may",
"be",
"a",
"long",
"operation",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/album.py#L91-L120
|
[
"async",
"def",
"get_all_tracks",
"(",
"self",
",",
"*",
",",
"market",
":",
"Optional",
"[",
"str",
"]",
"=",
"'US'",
")",
"->",
"List",
"[",
"Track",
"]",
":",
"tracks",
"=",
"[",
"]",
"offset",
"=",
"0",
"total",
"=",
"self",
".",
"total_tracks",
"or",
"None",
"while",
"True",
":",
"data",
"=",
"await",
"self",
".",
"__client",
".",
"http",
".",
"album_tracks",
"(",
"self",
".",
"id",
",",
"limit",
"=",
"50",
",",
"offset",
"=",
"offset",
",",
"market",
"=",
"market",
")",
"if",
"total",
"is",
"None",
":",
"total",
"=",
"data",
"[",
"'total'",
"]",
"offset",
"+=",
"50",
"tracks",
"+=",
"list",
"(",
"Track",
"(",
"self",
".",
"__client",
",",
"item",
")",
"for",
"item",
"in",
"data",
"[",
"'items'",
"]",
")",
"if",
"len",
"(",
"tracks",
")",
">=",
"total",
":",
"break",
"return",
"tracks"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.oauth2_url
|
Generate an OAuth2 URL for user authentication.
Parameters
----------
redirect_uri : str
Where Spotify should redirect the user to after authentication.
scope : Optional[str]
Space-separated Spotify scopes for different levels of access.
state : Optional[str]
Using a state value can increase your assurance that an incoming connection is the result of an authentication request.
Returns
-------
url : str
The OAuth2 URL.
|
spotify/client.py
|
def oauth2_url(self, redirect_uri: str, scope: Optional[str] = None, state: Optional[str] = None) -> str:
"""Generate an outh2 url for user authentication.
Parameters
----------
redirect_uri : str
Where spotify should redirect the user to after authentication.
scope : Optional[str]
Space seperated spotify scopes for different levels of access.
state : Optional[str]
Using a state value can increase your assurance that an incoming connection is the result of an authentication request.
Returns
-------
url : str
The OAuth2 url.
"""
return OAuth2.url_(self.http.client_id, redirect_uri, scope=scope, state=state)
|
def oauth2_url(self, redirect_uri: str, scope: Optional[str] = None, state: Optional[str] = None) -> str:
"""Generate an outh2 url for user authentication.
Parameters
----------
redirect_uri : str
Where spotify should redirect the user to after authentication.
scope : Optional[str]
Space seperated spotify scopes for different levels of access.
state : Optional[str]
Using a state value can increase your assurance that an incoming connection is the result of an authentication request.
Returns
-------
url : str
The OAuth2 url.
"""
return OAuth2.url_(self.http.client_id, redirect_uri, scope=scope, state=state)
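Editor's sketch: `client` is assumed to be a spotify.Client built with real credentials; the redirect URI, scope and state values are placeholders.

# oauth2_url is synchronous; no await needed.
url = client.oauth2_url('http://localhost:8080/callback',
                        scope='playlist-modify-public user-library-read',
                        state='random-csrf-token')  # placeholder CSRF token
print(url)  # send the user here to authorize the app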
|
[
"Generate",
"an",
"outh2",
"url",
"for",
"user",
"authentication",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L68-L85
|
[
"def",
"oauth2_url",
"(",
"self",
",",
"redirect_uri",
":",
"str",
",",
"scope",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
",",
"state",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
")",
"->",
"str",
":",
"return",
"OAuth2",
".",
"url_",
"(",
"self",
".",
"http",
".",
"client_id",
",",
"redirect_uri",
",",
"scope",
"=",
"scope",
",",
"state",
"=",
"state",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_album
|
Retrieve an album with a Spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
album : Album
The album from the ID
|
spotify/client.py
|
async def get_album(self, spotify_id: str, *, market: str = 'US') -> Album:
"""Retrive an album with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
album : Album
The album from the ID
"""
data = await self.http.album(to_id(spotify_id), market=market)
return Album(self, data)
|
async def get_album(self, spotify_id: str, *, market: str = 'US') -> Album:
"""Retrive an album with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
album : Album
The album from the ID
"""
data = await self.http.album(to_id(spotify_id), market=market)
return Album(self, data)
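Editor's sketch: an end-to-end example for this getter; get_artist, get_track and get_user below follow the identical fetch-and-wrap pattern. The import path, Client constructor arguments and close() call are assumptions based on the snippets in this dump, and the credentials and ID are placeholders.

import asyncio
import spotify  # assumed import path

async def main():
    client = spotify.Client('client_id', 'client_secret')  # placeholder credentials
    album = await client.get_album('spotify:album:4aawyAB9vmqN3uQ7FjRGTy')  # placeholder URI; to_id() strips the prefix
    print(album.name)  # assuming Album exposes a `name` attribute
    await client.close()  # assuming the client exposes a close() coroutine

asyncio.run(main())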
|
[
"Retrive",
"an",
"album",
"with",
"a",
"spotify",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L108-L124
|
[
"async",
"def",
"get_album",
"(",
"self",
",",
"spotify_id",
":",
"str",
",",
"*",
",",
"market",
":",
"str",
"=",
"'US'",
")",
"->",
"Album",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"album",
"(",
"to_id",
"(",
"spotify_id",
")",
",",
"market",
"=",
"market",
")",
"return",
"Album",
"(",
"self",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_artist
|
Retrieve an artist with a Spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
artist : Artist
The artist from the ID
|
spotify/client.py
|
async def get_artist(self, spotify_id: str) -> Artist:
"""Retrive an artist with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
artist : Artist
The artist from the ID
"""
data = await self.http.artist(to_id(spotify_id))
return Artist(self, data)
|
async def get_artist(self, spotify_id: str) -> Artist:
"""Retrive an artist with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
artist : Artist
The artist from the ID
"""
data = await self.http.artist(to_id(spotify_id))
return Artist(self, data)
|
[
"Retrive",
"an",
"artist",
"with",
"a",
"spotify",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L126-L140
|
[
"async",
"def",
"get_artist",
"(",
"self",
",",
"spotify_id",
":",
"str",
")",
"->",
"Artist",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"artist",
"(",
"to_id",
"(",
"spotify_id",
")",
")",
"return",
"Artist",
"(",
"self",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_track
|
Retrieve a track with a Spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
track : Track
The track from the ID
|
spotify/client.py
|
async def get_track(self, spotify_id: str) -> Track:
"""Retrive an track with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
track : Track
The track from the ID
"""
data = await self.http.track(to_id(spotify_id))
return Track(self, data)
|
async def get_track(self, spotify_id: str) -> Track:
"""Retrive an track with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
track : Track
The track from the ID
"""
data = await self.http.track(to_id(spotify_id))
return Track(self, data)
|
[
"Retrive",
"an",
"track",
"with",
"a",
"spotify",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L142-L156
|
[
"async",
"def",
"get_track",
"(",
"self",
",",
"spotify_id",
":",
"str",
")",
"->",
"Track",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"track",
"(",
"to_id",
"(",
"spotify_id",
")",
")",
"return",
"Track",
"(",
"self",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_user
|
Retrieve a user with a Spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
user : User
The user from the ID
|
spotify/client.py
|
async def get_user(self, spotify_id: str) -> User:
"""Retrive an user with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
user : User
The user from the ID
"""
data = await self.http.user(to_id(spotify_id))
return User(self, data)
|
async def get_user(self, spotify_id: str) -> User:
"""Retrive an user with a spotify ID.
Parameters
----------
spotify_id : str
The ID to search for.
Returns
-------
user : User
The user from the ID
"""
data = await self.http.user(to_id(spotify_id))
return User(self, data)
|
[
"Retrive",
"an",
"user",
"with",
"a",
"spotify",
"ID",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L158-L172
|
[
"async",
"def",
"get_user",
"(",
"self",
",",
"spotify_id",
":",
"str",
")",
"->",
"User",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"user",
"(",
"to_id",
"(",
"spotify_id",
")",
")",
"return",
"User",
"(",
"self",
",",
"data",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_albums
|
Retrieve multiple albums with a list of Spotify IDs.
Parameters
----------
ids : List[str]
The IDs to look for
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
albums : List[Album]
The albums from the IDs
|
spotify/client.py
|
async def get_albums(self, *ids: List[str], market: str = 'US') -> List[Album]:
"""Retrive multiple albums with a list of spotify IDs.
Parameters
----------
ids : List[str]
the ID to look for
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
albums : List[Album]
The albums from the IDs
"""
data = await self.http.albums(','.join(to_id(_id) for _id in ids), market=market)
return list(Album(self, album) for album in data['albums'])
|
async def get_albums(self, *ids: List[str], market: str = 'US') -> List[Album]:
"""Retrive multiple albums with a list of spotify IDs.
Parameters
----------
ids : List[str]
the ID to look for
market : Optional[str]
An ISO 3166-1 alpha-2 country code
Returns
-------
albums : List[Album]
The albums from the IDs
"""
data = await self.http.albums(','.join(to_id(_id) for _id in ids), market=market)
return list(Album(self, album) for album in data['albums'])
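Editor's sketch: the plural getters take variadic IDs and join them into one comma-separated request; get_artists below is analogous. `client` is assumed.

async def demo(client):
    # Bare IDs and full URIs can be mixed: to_id() normalises each one.
    albums = await client.get_albums('4aawyAB9vmqN3uQ7FjRGTy',                  # placeholder ID
                                     'spotify:album:6N9PS4QXF1D0OWPk0Sxtb4',    # placeholder URI
                                     market='US')
    for album in albums:
        print(album.name)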
|
[
"Retrive",
"multiple",
"albums",
"with",
"a",
"list",
"of",
"spotify",
"IDs",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L176-L192
|
[
"async",
"def",
"get_albums",
"(",
"self",
",",
"*",
"ids",
":",
"List",
"[",
"str",
"]",
",",
"market",
":",
"str",
"=",
"'US'",
")",
"->",
"List",
"[",
"Album",
"]",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"albums",
"(",
"','",
".",
"join",
"(",
"to_id",
"(",
"_id",
")",
"for",
"_id",
"in",
"ids",
")",
",",
"market",
"=",
"market",
")",
"return",
"list",
"(",
"Album",
"(",
"self",
",",
"album",
")",
"for",
"album",
"in",
"data",
"[",
"'albums'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.get_artists
|
Retrieve multiple artists with a list of Spotify IDs.
Parameters
----------
ids : List[str]
The IDs to look for
Returns
-------
artists : List[Artist]
The artists from the IDs
|
spotify/client.py
|
async def get_artists(self, *ids: List[str]) -> List[Artist]:
"""Retrive multiple artists with a list of spotify IDs.
Parameters
----------
ids : List[str]
the IDs to look for
Returns
-------
artists : List[Artist]
The artists from the IDs
"""
data = await self.http.artists(','.join(to_id(_id) for _id in ids))
return list(Artist(self, artist) for artist in data['artists'])
|
async def get_artists(self, *ids: List[str]) -> List[Artist]:
"""Retrive multiple artists with a list of spotify IDs.
Parameters
----------
ids : List[str]
the IDs to look for
Returns
-------
artists : List[Artist]
The artists from the IDs
"""
data = await self.http.artists(','.join(to_id(_id) for _id in ids))
return list(Artist(self, artist) for artist in data['artists'])
|
[
"Retrive",
"multiple",
"artists",
"with",
"a",
"list",
"of",
"spotify",
"IDs",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L194-L208
|
[
"async",
"def",
"get_artists",
"(",
"self",
",",
"*",
"ids",
":",
"List",
"[",
"str",
"]",
")",
"->",
"List",
"[",
"Artist",
"]",
":",
"data",
"=",
"await",
"self",
".",
"http",
".",
"artists",
"(",
"','",
".",
"join",
"(",
"to_id",
"(",
"_id",
")",
"for",
"_id",
"in",
"ids",
")",
")",
"return",
"list",
"(",
"Artist",
"(",
"self",
",",
"artist",
")",
"for",
"artist",
"in",
"data",
"[",
"'artists'",
"]",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Client.search
|
Access the spotify search functionality.
Parameters
----------
q : str
The search query
types : Optional[Iterable[str]]
A sequence of search types (can be any of `track`, `playlist`, `artist` or `album`) to refine the search request.
A `ValueError` may be raised if a search type is found that is not valid.
limit : Optional[int]
The limit of search results to return when searching.
Maximum limit is 50; any larger may raise an :class:`HTTPException`
offset : Optional[int]
The offset from which the API should start in the search results.
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
results : Dict[str, List[Union[Track, Playlist, Artist, Album]]]
The results of the search.
|
spotify/client.py
|
async def search(self, q: str, *, types: Optional[Iterable[str]] = ['track', 'playlist', 'artist', 'album'], limit: Optional[int] = 20, offset: Optional[int] = 0, market: Optional[str] = None) -> Dict[str, List[Union[Track, Playlist, Artist, Album]]]:
"""Access the spotify search functionality.
Parameters
----------
q : str
The search query
types : Optional[Iterable[str]]
A sequence of search types (can be any of `track`, `playlist`, `artist` or `album`) to refine the search request.
A `ValueError` may be raised if a search type is found that is not valid.
limit : Optional[int]
The limit of search results to return when searching.
Maximum limit is 50; any larger may raise an :class:`HTTPException`
offset : Optional[int]
The offset from which the API should start in the search results.
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
results : Dict[str, List[Union[Track, Playlist, Artist, Album]]]
The results of the search.
"""
if not hasattr(types, '__iter__'):
raise TypeError('types must be an iterable.')
elif not isinstance(types, list):
types = list(item for item in types)
types_ = set(types)
if not types_.issubset(_SEARCH_TYPES):
raise ValueError(_SEARCH_TYPE_ERR % types_.difference(_SEARCH_TYPES).pop())
kwargs = {
'q': q.replace(' ', '+'),
'queary_type': ','.join(tp.strip() for tp in types),
'market': market,
'limit': limit,
'offset': offset
}
data = await self.http.search(**kwargs)
return {key: [_TYPES[obj['type']](self, obj) for obj in value['items']] for key, value in data.items()}
|
async def search(self, q: str, *, types: Optional[Iterable[str]] = ['track', 'playlist', 'artist', 'album'], limit: Optional[int] = 20, offset: Optional[int] = 0, market: Optional[str] = None) -> Dict[str, List[Union[Track, Playlist, Artist, Album]]]:
"""Access the spotify search functionality.
Parameters
----------
q : str
The search query
types : Optional[Iterable[str]]
A sequence of search types (can be any of `track`, `playlist`, `artist` or `album`) to refine the search request.
A `ValueError` may be raised if a search type is found that is not valid.
limit : Optional[int]
The limit of search results to return when searching.
Maximum limit is 50; any larger may raise an :class:`HTTPException`
offset : Optional[int]
The offset from which the API should start in the search results.
market : Optional[str]
An ISO 3166-1 alpha-2 country code. Provide this parameter if you want to apply Track Relinking.
Returns
-------
results : Dict[str, List[Union[Track, Playlist, Artist, Album]]]
The results of the search.
"""
if not hasattr(types, '__iter__'):
raise TypeError('types must be an iterable.')
elif not isinstance(types, list):
types = list(item for item in types)
types_ = set(types)
if not types_.issubset(_SEARCH_TYPES):
raise ValueError(_SEARCH_TYPE_ERR % types_.difference(_SEARCH_TYPES).pop())
kwargs = {
'q': q.replace(' ', '+'),
'queary_type': ','.join(tp.strip() for tp in types),
'market': market,
'limit': limit,
'offset': offset
}
data = await self.http.search(**kwargs)
return {key: [_TYPES[obj['type']](self, obj) for obj in value['items']] for key, value in data.items()}
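Editor's note: two observations on the body above. The mutable list default for `types` is a classic Python pitfall, though harmless here because the list is only read, never mutated in place; and the 'queary_type' key mirrors the (misspelled) parameter name of the underlying HTTP method, so it cannot be 'corrected' in isolation. Usage sketch, `client` assumed:

async def demo(client):
    results = await client.search('queens of the stone age', types=['artist', 'album'], limit=5)
    # The returned dict maps plural keys (such as 'artists' and 'albums') to model lists.
    for key, items in results.items():
        print(key, [item.name for item in items])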
|
[
"Access",
"the",
"spotify",
"search",
"functionality",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/client.py#L210-L254
|
[
"async",
"def",
"search",
"(",
"self",
",",
"q",
":",
"str",
",",
"*",
",",
"types",
":",
"Optional",
"[",
"Iterable",
"[",
"str",
"]",
"]",
"=",
"[",
"'track'",
",",
"'playlist'",
",",
"'artist'",
",",
"'album'",
"]",
",",
"limit",
":",
"Optional",
"[",
"int",
"]",
"=",
"20",
",",
"offset",
":",
"Optional",
"[",
"int",
"]",
"=",
"0",
",",
"market",
":",
"Optional",
"[",
"str",
"]",
"=",
"None",
")",
"->",
"Dict",
"[",
"str",
",",
"List",
"[",
"Union",
"[",
"Track",
",",
"Playlist",
",",
"Artist",
",",
"Album",
"]",
"]",
"]",
":",
"if",
"not",
"hasattr",
"(",
"types",
",",
"'__iter__'",
")",
":",
"raise",
"TypeError",
"(",
"'types must be an iterable.'",
")",
"elif",
"not",
"isinstance",
"(",
"types",
",",
"list",
")",
":",
"types",
"=",
"list",
"(",
"item",
"for",
"item",
"in",
"types",
")",
"types_",
"=",
"set",
"(",
"types",
")",
"if",
"not",
"types_",
".",
"issubset",
"(",
"_SEARCH_TYPES",
")",
":",
"raise",
"ValueError",
"(",
"_SEARCH_TYPE_ERR",
"%",
"types_",
".",
"difference",
"(",
"_SEARCH_TYPES",
")",
".",
"pop",
"(",
")",
")",
"kwargs",
"=",
"{",
"'q'",
":",
"q",
".",
"replace",
"(",
"' '",
",",
"'+'",
")",
",",
"'queary_type'",
":",
"','",
".",
"join",
"(",
"tp",
".",
"strip",
"(",
")",
"for",
"tp",
"in",
"types",
")",
",",
"'market'",
":",
"market",
",",
"'limit'",
":",
"limit",
",",
"'offset'",
":",
"offset",
"}",
"data",
"=",
"await",
"self",
".",
"http",
".",
"search",
"(",
"*",
"*",
"kwargs",
")",
"return",
"{",
"key",
":",
"[",
"_TYPES",
"[",
"obj",
"[",
"'type'",
"]",
"]",
"(",
"self",
",",
"obj",
")",
"for",
"obj",
"in",
"value",
"[",
"'items'",
"]",
"]",
"for",
"key",
",",
"value",
"in",
"data",
".",
"items",
"(",
")",
"}"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|
test
|
Library.contains_albums
|
Check if one or more albums are already saved in the current Spotify user’s ‘Your Music’ library.
Parameters
----------
albums : Union[Album, str]
A sequence of album objects or Spotify IDs
|
spotify/models/library.py
|
async def contains_albums(self, *albums: Sequence[Union[str, Album]]) -> List[bool]:
"""Check if one or more albums is already saved in the current Spotify user’s ‘Your Music’ library.
Parameters
----------
albums : Union[Album, str]
A sequence of artist objects or spotify IDs
"""
_albums = [(obj if isinstance(obj, str) else obj.id) for obj in albums]
return await self.user.http.is_saved_album(_albums)
|
async def contains_albums(self, *albums: Sequence[Union[str, Album]]) -> List[bool]:
"""Check if one or more albums is already saved in the current Spotify user’s ‘Your Music’ library.
Parameters
----------
albums : Union[Album, str]
A sequence of artist objects or spotify IDs
"""
_albums = [(obj if isinstance(obj, str) else obj.id) for obj in albums]
return await self.user.http.is_saved_album(_albums)
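Editor's sketch: usage example, with `library` assumed to be the current user's Library and `some_album` a spotify.Album.

async def demo(library, some_album):
    # Album objects and bare IDs can be mixed; the result is a list of
    # booleans in the same order as the arguments.
    saved = await library.contains_albums(some_album, '4aawyAB9vmqN3uQ7FjRGTy')  # placeholder ID
    print(saved)  # e.g. [True, False]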
|
[
"Check",
"if",
"one",
"or",
"more",
"albums",
"is",
"already",
"saved",
"in",
"the",
"current",
"Spotify",
"user’s",
"‘Your",
"Music’",
"library",
"."
] |
mental32/spotify.py
|
python
|
https://github.com/mental32/spotify.py/blob/bb296cac7c3dd289908906b7069bd80f43950515/spotify/models/library.py#L29-L38
|
[
"async",
"def",
"contains_albums",
"(",
"self",
",",
"*",
"albums",
":",
"Sequence",
"[",
"Union",
"[",
"str",
",",
"Album",
"]",
"]",
")",
"->",
"List",
"[",
"bool",
"]",
":",
"_albums",
"=",
"[",
"(",
"obj",
"if",
"isinstance",
"(",
"obj",
",",
"str",
")",
"else",
"obj",
".",
"id",
")",
"for",
"obj",
"in",
"albums",
"]",
"return",
"await",
"self",
".",
"user",
".",
"http",
".",
"is_saved_album",
"(",
"_albums",
")"
] |
bb296cac7c3dd289908906b7069bd80f43950515
|