| repo_name (string, 6 to 77 chars) | path (string, 8 to 215 chars) | license (15 classes) | content (string, 335 to 154k chars) |
|---|---|---|---|
| statsmodels/statsmodels.github.io | v0.13.1/examples/notebooks/generated/stationarity_detrending_adf_kpss.ipynb | bsd-3-clause |
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import statsmodels.api as sm
"""
Explanation: Stationarity and detrending (ADF/KPSS)
Stationarity means that the statistical properties of a time series (its mean, variance, and autocovariance) do not change over time. Many statistical models require the series to be stationary to make effective and precise predictions.
Two statistical tests are used to check the stationarity of a time series: the Augmented Dickey-Fuller ("ADF") test and the Kwiatkowski-Phillips-Schmidt-Shin ("KPSS") test. A method to convert a non-stationary time series into a stationary one is also demonstrated.
This first cell imports standard packages and sets plots to appear inline.
End of explanation
"""
sunspots = sm.datasets.sunspots.load_pandas().data
"""
Explanation: The sunspots dataset is used. It contains yearly (1700-2008) data on sunspot activity from the National Geophysical Data Center.
End of explanation
"""
sunspots.index = pd.Index(sm.tsa.datetools.dates_from_range("1700", "2008"))
del sunspots["YEAR"]
"""
Explanation: Some preprocessing is carried out on the data. The "YEAR" column is used to create the index and is then dropped.
End of explanation
"""
sunspots.plot(figsize=(12, 8))
"""
Explanation: The data is plotted now.
End of explanation
"""
from statsmodels.tsa.stattools import adfuller
def adf_test(timeseries):
print("Results of Dickey-Fuller Test:")
dftest = adfuller(timeseries, autolag="AIC")
dfoutput = pd.Series(
dftest[0:4],
index=[
"Test Statistic",
"p-value",
"#Lags Used",
"Number of Observations Used",
],
)
for key, value in dftest[4].items():
dfoutput["Critical Value (%s)" % key] = value
print(dfoutput)
"""
Explanation: ADF test
The ADF test is used to determine the presence of a unit root in the series, and hence helps in understanding whether the series is stationary. The null and alternate hypotheses of this test are:
Null Hypothesis: The series has a unit root.
Alternate Hypothesis: The series has no unit root.
If the null hypothesis fails to be rejected, the test provides evidence that the series is non-stationary.
A function is created to carry out the ADF test on a time series.
End of explanation
"""
from statsmodels.tsa.stattools import kpss
def kpss_test(timeseries):
print("Results of KPSS Test:")
kpsstest = kpss(timeseries, regression="c", nlags="auto")
kpss_output = pd.Series(
kpsstest[0:3], index=["Test Statistic", "p-value", "Lags Used"]
)
for key, value in kpsstest[3].items():
kpss_output["Critical Value (%s)" % key] = value
print(kpss_output)
"""
Explanation: KPSS test
KPSS is another test for checking the stationarity of a time series. The null and alternate hypotheses for the KPSS test are the opposite of those for the ADF test.
Null Hypothesis: The process is trend stationary.
Alternate Hypothesis: The series has a unit root (series is not stationary).
A function is created to carry out the KPSS test on a time series.
End of explanation
"""
adf_test(sunspots["SUNACTIVITY"])
"""
Explanation: The ADF test gives the following results: the test statistic, the p-value, the number of lags used, the number of observations, and the critical values at the 1%, 5%, and 10% significance levels.
The ADF test is now applied to the data.
End of explanation
"""
kpss_test(sunspots["SUNACTIVITY"])
"""
Explanation: Based on a significance level of 0.05 and the p-value of the ADF test, the null hypothesis cannot be rejected. Hence, the series is non-stationary.
The KPSS test gives the following results: the test statistic, the p-value, the number of lags used, and the critical values at the 1%, 5%, and 10% significance levels.
The KPSS test is now applied to the data.
End of explanation
"""
sunspots["SUNACTIVITY_diff"] = sunspots["SUNACTIVITY"] - sunspots["SUNACTIVITY"].shift(
1
)
sunspots["SUNACTIVITY_diff"].dropna().plot(figsize=(12, 8))
"""
Explanation: Based on a significance level of 0.05 and the p-value of the KPSS test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the series is non-stationary according to the KPSS test.
It is always better to apply both tests, so that it can be ensured that the series is truly stationary. Possible outcomes of applying these stationarity tests are as follows:
Case 1: Both tests conclude that the series is not stationary - the series is not stationary.
Case 2: Both tests conclude that the series is stationary - the series is stationary.
Case 3: KPSS indicates stationarity and ADF indicates non-stationarity - the series is trend stationary. The trend needs to be removed to make the series strictly stationary. The detrended series is then checked for stationarity.
Case 4: KPSS indicates non-stationarity and ADF indicates stationarity - the series is difference stationary. Differencing is used to make the series stationary. The differenced series is then checked for stationarity.
Here, given the difference in the results from the ADF and KPSS tests, it can be inferred that the series is trend stationary rather than strictly stationary. The series can be detrended by differencing or by model fitting (a model-fitting sketch follows this explanation).
Detrending by Differencing
This is one of the simplest methods for detrending a time series. A new series is constructed in which the value at the current time step is the difference between the original observation and the observation at the previous time step.
Differencing is applied to the data and the result is plotted.
End of explanation
"""
adf_test(sunspots["SUNACTIVITY_diff"].dropna())
"""
Explanation: The ADF test is now applied to these detrended (differenced) values and stationarity is checked.
End of explanation
"""
kpss_test(sunspots["SUNACTIVITY_diff"].dropna())
"""
Explanation: Based on the p-value of the ADF test, there is evidence for rejecting the null hypothesis in favor of the alternative. Hence, the differenced series is now strictly stationary.
The KPSS test is now applied to these detrended (differenced) values and stationarity is checked.
End of explanation
"""
| AtmaMani/pyChakras | udemy_ml_bootcamp/Python-for-Data-Analysis/Pandas/Pandas Exercises/SF Salaries Exercise.ipynb | mit |
import pandas as pd
"""
Explanation: <a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>
SF Salaries Exercise
Welcome to a quick exercise for you to practice your pandas skills! We will be using the SF Salaries Dataset from Kaggle! Just follow along and complete the tasks outlined in bold below. The tasks will get harder and harder as you go along.
Import pandas as pd.
End of explanation
"""
sal = pd.read_csv('Salaries.csv')
"""
Explanation: Read Salaries.csv as a dataframe called sal.
End of explanation
"""
sal.head()
sal.describe()
"""
Explanation: Check the head of the DataFrame.
End of explanation
"""
sal.info()
"""
Explanation: Use the .info() method to find out how many entries there are.
End of explanation
"""
sal.BasePay.mean()
"""
Explanation: What is the average BasePay ?
End of explanation
"""
sal.OvertimePay.max()
"""
Explanation: What is the highest amount of OvertimePay in the dataset ?
End of explanation
"""
sal[sal['EmployeeName']=='JOSEPH DRISCOLL'].JobTitle
"""
Explanation: What is the job title of JOSEPH DRISCOLL ? Note: Use all caps, otherwise you may get an answer that doesn't match up (there is also a lowercase Joseph Driscoll).
End of explanation
"""
sal[sal['EmployeeName']=='JOSEPH DRISCOLL'].TotalPayBenefits
"""
Explanation: How much does JOSEPH DRISCOLL make (including benefits)?
End of explanation
"""
sal[sal['TotalPayBenefits'] == sal.TotalPayBenefits.max()]
"""
Explanation: What is the name of highest paid person (including benefits)?
End of explanation
"""
sal[sal['TotalPayBenefits'] == sal.TotalPayBenefits.min()]
"""
Explanation: What is the name of lowest paid person (including benefits)? Do you notice something strange about how much he or she is paid?
End of explanation
"""
sal.groupby('Year').BasePay.mean()
"""
Explanation: What was the average (mean) BasePay of all employees per year (2011-2014)?
End of explanation
"""
sal['JobTitle'].nunique()
"""
Explanation: How many unique job titles are there?
End of explanation
"""
sal['JobTitle'].value_counts()[:5]
"""
Explanation: What are the top 5 most common jobs?
End of explanation
"""
one_person_jobs = sum(sal[sal['Year']==2013]['JobTitle'].value_counts()==1)
one_person_jobs
"""
Explanation: How many Job Titles were represented by only one person in 2013? (e.g. Job Titles with only one occurrence in 2013?)
End of explanation
"""
def chief_in_title(title):
if 'chief' in title.lower():
return True
else:
return False
sum(sal['JobTitle'].apply(chief_in_title))
sum(sal['JobTitle'].apply(lambda x : chief_in_title(x)))
"""
Explanation: How many people have the word Chief in their job title? (This is pretty tricky)
End of explanation
"""
sal['title_len'] = sal['JobTitle'].apply(len)
sal[['title_len', 'TotalPayBenefits']].corr()
"""
Explanation: Bonus: Is there a correlation between length of the Job Title string and Salary?
End of explanation
"""
| cathalmccabe/PYNQ | boards/Pynq-Z2/logictools/notebooks/pattern_generator_and_trace_analyzer.ipynb | bsd-3-clause |
from pynq.overlays.logictools import LogicToolsOverlay
logictools_olay = LogicToolsOverlay('logictools.bit')
"""
Explanation: Pattern Generator and Trace Analyzer
This notebook will show how to use the Pattern Generator to generate patterns on I/O pins. The pattern that will be generated is a 3-bit up-count performed 4 times.
Step 1: Download the logictools overlay
End of explanation
"""
from pynq.lib.logictools import Waveform
up_counter = {'signal': [
['stimulus',
{'name': 'bit0', 'pin': 'D0', 'wave': 'lh' * 8},
{'name': 'bit1', 'pin': 'D1', 'wave': 'l.h.' * 4},
{'name': 'bit2', 'pin': 'D2', 'wave': 'l...h...' * 2}],
['analysis',
{'name': 'bit2_loopback', 'pin': 'D17'},
{'name': 'bit1_loopback', 'pin': 'D18'},
{'name': 'bit0_loopback', 'pin': 'D19'}]],
'foot': {'tock': 1},
'head': {'text': 'up_counter'}}
waveform = Waveform(up_counter)
waveform.display()
"""
Explanation: Step 2: Create WaveJSON waveform
The pattern to be generated is specified in the WaveJSON format.
The pattern is applied to the Arduino interface; pins D0, D1 and D2 are set to generate a 3-bit count.
To check the generated pattern, we loop these pins back to pins D19, D18 and D17 respectively and use the trace analyzer to view the loopback signals.
The Waveform class is used to display the specified waveform.
End of explanation
"""
pattern_generator = logictools_olay.pattern_generator
pattern_generator.trace(num_analyzer_samples=16)
"""
Explanation: Note: Since there are no captured samples at this moment, the analysis group will be empty.
Step 3: Instantiate the pattern generator and trace analyzer objects
Users can choose whether to use the trace analyzer by calling the trace() method.
The analyzer can be set to trace a specific number of samples using the num_analyzer_samples argument.
End of explanation
"""
pattern_generator.setup(up_counter,
stimulus_group_name='stimulus',
analysis_group_name='analysis')
"""
Explanation: Step 4: Setup the pattern generator
The pattern generator will run at the default frequency of 10 MHz. This can be modified using a frequency argument in the setup() method (an example is sketched below).
End of explanation
"""
pattern_generator.run()
pattern_generator.show_waveform()
"""
Explanation: Set the loopback connections using jumper wires on the Arduino interface:
Output pins D0, D1 and D2 are connected to pins D19, D18 and D17 respectively.
Loopback/input pins D19, D18 and D17 are observed using the trace analyzer as shown below.
After setup, the pattern generator should be ready to run.
Note: Make sure all other pins are disconnected.
Step 5: Run and display waveform
The run() method will execute all the samples; the show_waveform() method is used to display the waveforms.
Alternatively, we can also use the step() method to single-step the pattern, as sketched after this explanation.
End of explanation
"""
pattern_generator.stop()
"""
Explanation: Step 6: Stop the pattern generator
Calling stop() will clear the logic values on output pins; however, the waveform will be recorded locally in the pattern generator instance.
End of explanation
"""
| dereneaton/ipyrad | newdocs/API-analysis/cookbook-treemix-ipcoal.ipynb | gpl-3.0 |
# conda install treemix ipyrad ipcoal -c conda-forge -c bioconda
import ipyrad.analysis as ipa
import toytree
import toyplot
import ipcoal
print('ipyrad', ipa.__version__)
print('toytree', toytree.__version__)
! treemix --version | grep 'TreeMix v. '
"""
Explanation: <h1><span style="color:gray">ipyrad-analysis toolkit:</span> treemix</h1>
The program TreeMix by Pickrell & Pritchard (2012) is used to infer population splits and admixture from allele frequency data. From the TreeMix documentation: "In the underlying model, the modern-day populations in a species are related to a common ancestor via a graph of ancestral populations. We use the allele frequencies in the modern populations to infer the structure of this graph."
Required software
End of explanation
"""
# network model
tree = toytree.rtree.unittree(7, treeheight=4e6, seed=123)
tree.draw(ts='o', admixture_edges=(3, 2));
# simulation model
model = ipcoal.Model(tree, Ne=1e4, nsamples=4, admixture_edges=(3, 2, 0.5, 0.2))
model.sim_snps(1000)
model.write_snps_to_hdf5(name="test-treemix", outdir="/tmp", diploid=True)
"""
Explanation: Simulate example data
End of explanation
"""
# the path to your HDF5 formatted snps file
SNPS = "/tmp/test-treemix.snps.hdf5"
"""
Explanation: Input data file
End of explanation
"""
IMAP = {
"r0": ["r0-0", "r0-1"],
"r1": ["r1-0", "r1-1"],
"r2": ["r2-0", "r2-1"],
"r3": ["r3-0", "r3-1"],
"r4": ["r4-0", "r4-1"],
"r5": ["r5-0", "r5-1"],
"r6": ["r6-0", "r6-1"],
}
"""
Explanation: Population assignments
End of explanation
"""
tmx = ipa.treemix(SNPS, imap=IMAP, workdir="/tmp")
"""
Explanation: Load tool and filter missing data
End of explanation
"""
tmx.params.root = "r4,r5,r6"
tmx.params.m = 1
tmx.params.global_ = 1
tmx.params
"""
Explanation: Set parameters
End of explanation
"""
# the command that will be run
tmx.command
# execute command
tmx.run()
"""
Explanation: Run analysis
End of explanation
"""
tmx.results
canvas1, axes1 = tmx.draw_tree();
canvas2, axes2 = tmx.draw_cov();
# save your plots
import toyplot.svg
toyplot.svg.render(canvas1, "/tmp/treemix-m1.svg")
"""
Explanation: Parse results
The result here is not accurate. Perhaps it would improve with more samples per lineage or more SNPs.
End of explanation
"""
tests = {}
nadmix = [0, 1, 2, 3, 4, 5]
# iterate over n admixture edges and store results in a dictionary
for adm in nadmix:
tmx.params.m = adm
tmx.run()
tests[adm] = tmx.results.llik
# plot the likelihood for different values of m
toyplot.plot(
nadmix,
[tests[i] for i in nadmix],
width=350,
height=275,
stroke_width=3,
xlabel="n admixture edges",
ylabel="ln(likelihood)",
);
"""
Explanation: Finding the best value for m
As with structure plots there is no true best value, but you can use model selection methods to decide whether one is a statistically better fit to your data than another. Adding additional admixture edges will always improve the likelihood score, but with diminishing returns as you add edges that explain little variation in the data. You can look at the log-likelihood score of each model fit by running a for-loop like the one below. You may want to run this within another for-loop that iterates over different subsampled SNPs; a sketch of such a nested loop follows this explanation.
End of explanation
"""
| metpy/MetPy | v1.0/_downloads/bb9caa5586d62e19ca46e30c02d29b43/Station_Plot.ipynb | bsd-3-clause |
import cartopy.crs as ccrs
import cartopy.feature as cfeature
import matplotlib.pyplot as plt
from metpy.calc import reduce_point_density
from metpy.cbook import get_test_data
from metpy.io import metar
from metpy.plots import add_metpy_logo, current_weather, sky_cover, StationPlot
"""
Explanation: Station Plot
Make a station plot, complete with sky cover and weather symbols.
The station plot itself is pretty straightforward, but there is a bit of code to perform the
data-wrangling (hopefully that situation will improve in the future). Certainly, if you have
existing point data in a format you can work with trivially, the station plot will be simple.
End of explanation
"""
data = metar.parse_metar_file(get_test_data('metar_20190701_1200.txt', as_file_obj=False))
# Drop rows with missing winds
data = data.dropna(how='any', subset=['wind_direction', 'wind_speed'])
"""
Explanation: The setup
First read in the data. We use the metar reader because it simplifies a lot of tasks,
like dealing with separating text and assembling a pandas dataframe:
https://thredds-test.unidata.ucar.edu/thredds/catalog/noaaport/text/metar/catalog.html
End of explanation
"""
# Set up the map projection
proj = ccrs.LambertConformal(central_longitude=-95, central_latitude=35,
standard_parallels=[35])
# Use the Cartopy map projection to transform station locations to the map and
# then refine the number of stations plotted by setting a 300km radius
point_locs = proj.transform_points(ccrs.PlateCarree(), data['longitude'].values,
data['latitude'].values)
data = data[reduce_point_density(point_locs, 300000.)]
"""
Explanation: This sample data has way too many stations to plot all of them. The number
of stations plotted will be reduced using reduce_point_density (a small illustration of its output follows this explanation).
End of explanation
"""
# Change the DPI of the resulting figure. Higher DPI drastically improves the
# look of the text rendering.
plt.rcParams['savefig.dpi'] = 255
# Create the figure and an axes set to the projection.
fig = plt.figure(figsize=(20, 10))
add_metpy_logo(fig, 1100, 300, size='large')
ax = fig.add_subplot(1, 1, 1, projection=proj)
# Add some various map elements to the plot to make it recognizable.
ax.add_feature(cfeature.LAND)
ax.add_feature(cfeature.OCEAN)
ax.add_feature(cfeature.LAKES)
ax.add_feature(cfeature.COASTLINE)
ax.add_feature(cfeature.STATES)
ax.add_feature(cfeature.BORDERS)
# Set plot bounds
ax.set_extent((-118, -73, 23, 50))
#
# Here's the actual station plot
#
# Start the station plot by specifying the axes to draw on, as well as the
# lon/lat of the stations (with transform). We also set the fontsize to 12 pt.
stationplot = StationPlot(ax, data['longitude'].values, data['latitude'].values,
clip_on=True, transform=ccrs.PlateCarree(), fontsize=12)
# Plot the temperature and dew point to the upper and lower left, respectively, of
# the center point. Each one uses a different color.
stationplot.plot_parameter('NW', data['air_temperature'].values, color='red')
stationplot.plot_parameter('SW', data['dew_point_temperature'].values,
color='darkgreen')
# A more complex example uses a custom formatter to control how the sea-level pressure
# values are plotted. This uses the standard trailing 3-digits of the pressure value
# in tenths of millibars.
stationplot.plot_parameter('NE', data['air_pressure_at_sea_level'].values,
formatter=lambda v: format(10 * v, '.0f')[-3:])
# Plot the cloud cover symbols in the center location. This uses the codes made above and
# uses the `sky_cover` mapper to convert these values to font codes for the
# weather symbol font.
stationplot.plot_symbol('C', data['cloud_coverage'].values, sky_cover)
# Same this time, but plot current weather to the left of center, using the
# `current_weather` mapper to convert symbols to the right glyphs.
stationplot.plot_symbol('W', data['present_weather'].values, current_weather)
# Add wind barbs
stationplot.plot_barb(data['eastward_wind'].values, data['northward_wind'].values)
# Also plot the actual text of the station id. Instead of cardinal directions,
# plot further out by specifying a location of 2 increments in x and 0 in y.
stationplot.plot_text((2, 0), data['station_id'].values)
plt.show()
"""
Explanation: The payoff
End of explanation
"""
| mne-tools/mne-tools.github.io | 0.14/_downloads/plot_maxwell_filter.ipynb | bsd-3-clause |
# Authors: Eric Larson <larson.eric.d@gmail.com>
# Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
# Mark Wronkiewicz <wronk.mark@gmail.com>
#
# License: BSD (3-clause)
import mne
from mne.preprocessing import maxwell_filter
print(__doc__)
data_path = mne.datasets.sample.data_path()
"""
Explanation: Maxwell filter raw data
This example shows how to process M/EEG data with Maxwell filtering
in mne-python.
End of explanation
"""
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
ctc_fname = data_path + '/SSS/ct_sparse_mgh.fif'
fine_cal_fname = data_path + '/SSS/sss_cal_mgh.dat'
# Preprocess with Maxwell filtering
raw = mne.io.Raw(raw_fname)
raw.info['bads'] = ['MEG 2443', 'EEG 053', 'MEG 1032', 'MEG 2313'] # set bads
# Here we don't use tSSS (which would be enabled by setting st_duration) because the MGH data is very clean
raw_sss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname)
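# Hedged aside, not part of the original example: for noisier recordings, tSSS could be
# enabled by also passing st_duration (a buffer length in seconds), e.g.:
# raw_tsss = maxwell_filter(raw, cross_talk=ctc_fname, calibration=fine_cal_fname,
#                           st_duration=10.)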
# Select events to extract epochs from, pick M/EEG channels, and plot evoked
tmin, tmax = -0.2, 0.5
event_id = {'Auditory/Left': 1}
events = mne.find_events(raw, 'STI 014')
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,
include=[], exclude='bads')
for r, kind in zip((raw, raw_sss), ('Raw data', 'Maxwell filtered data')):
epochs = mne.Epochs(r, events, event_id, tmin, tmax, picks=picks,
baseline=(None, 0), reject=dict(eog=150e-6),
preload=False)
evoked = epochs.average()
evoked.plot(window_title=kind)
"""
Explanation: Set parameters
End of explanation
"""
| tkurfurst/deep-learning | reinforcement/Q-learning-cart.ipynb | mit |
import gym
import tensorflow as tf
import numpy as np
"""
Explanation: Deep Q-learning
In this notebook, we'll build a neural network that can learn to play games through reinforcement learning. More specifically, we'll use Q-learning to train an agent to play a game called Cart-Pole. In this game, a freely swinging pole is attached to a cart. The cart can move to the left and right, and the goal is to keep the pole upright as long as possible.
We can simulate this game using OpenAI Gym. First, let's check out how OpenAI Gym works. Then, we'll get into training an agent to play the Cart-Pole game.
End of explanation
"""
# Create the Cart-Pole game environment
env = gym.make('CartPole-v0')
"""
Explanation: Note: Make sure you have OpenAI Gym cloned into the same directory as this notebook. I've included gym as a submodule, so you can run git submodule update --init --recursive to pull the contents into the gym repo.
End of explanation
"""
env.reset()
rewards = []
for _ in range(100):
env.render()
state, reward, done, info = env.step(env.action_space.sample()) # take a random action
rewards.append(reward)
if done:
rewards = []
env.reset()
env.render(close=True)
env.reset()
"""
Explanation: We interact with the simulation through env. To show the simulation running, you can use env.render() to render one frame. Passing in an action as an integer to env.step will generate the next step in the simulation. You can see how many actions are possible from env.action_space and to get a random action you can use env.action_space.sample(). This is general to all Gym games. In the Cart-Pole game, there are two possible actions, moving the cart left or right. So there are two actions we can take, encoded as 0 and 1.
Run the code below to watch the simulation run.
End of explanation
"""
print(rewards[-20:])
print(sum(rewards))
print(len(rewards))
"""
Explanation: To shut the window showing the simulation, use env.close().
If you ran the simulation above, we can look at the rewards:
End of explanation
"""
class QNetwork:
def __init__(self, learning_rate=0.01, state_size=4,
action_size=2, hidden_size=10,
name='QNetwork'):
# state inputs to the Q-network
with tf.variable_scope(name):
self.inputs_ = tf.placeholder(tf.float32, [None, state_size], name='inputs')
# One hot encode the actions to later choose the Q-value for the action
self.actions_ = tf.placeholder(tf.int32, [None], name='actions')
one_hot_actions = tf.one_hot(self.actions_, action_size)
# Target Q values for training
self.targetQs_ = tf.placeholder(tf.float32, [None], name='target')
# ReLU hidden layers
self.fc1 = tf.contrib.layers.fully_connected(self.inputs_, hidden_size)
self.fc2 = tf.contrib.layers.fully_connected(self.fc1, hidden_size)
# Linear output layer
self.output = tf.contrib.layers.fully_connected(self.fc2, action_size,
activation_fn=None)
### Train with loss (targetQ - Q)^2
# output has length 2, for two actions. This next line chooses
# one value from output (per row) according to the one-hot encoded actions.
self.Q = tf.reduce_sum(tf.multiply(self.output, one_hot_actions), axis=1)
self.loss = tf.reduce_mean(tf.square(self.targetQs_ - self.Q))
self.opt = tf.train.AdamOptimizer(learning_rate).minimize(self.loss)
"""
Explanation: The game resets after the pole has fallen past a certain angle. For each frame while the simulation is running, it returns a reward of 1.0. The longer the game runs, the more reward we get. Then, our network's goal is to maximize the reward by keeping the pole vertical. It will do this by moving the cart to the left and the right.
Q-Network
We train our Q-learning agent using the Bellman Equation:
$$
Q(s, a) = r + \gamma \max_{a'}{Q(s', a')}
$$
where $s$ is a state, $a$ is an action, and $s'$ is the next state from state $s$ and action $a$.
Before we used this equation to learn values for a Q-table. However, for this game there are a huge number of states available. The state has four values: the position and velocity of the cart, and the position and velocity of the pole. These are all real-valued numbers, so ignoring floating point precisions, you practically have infinite states. Instead of using a table then, we'll replace it with a neural network that will approximate the Q-table lookup function.
<img src="assets/deep-q-learning.png" width=450px>
Now, our Q value, $Q(s, a)$ is calculated by passing in a state to the network. The output will be Q-values for each available action, with fully connected hidden layers.
<img src="assets/q-network.png" width=550px>
As I showed before, we can define our targets for training as $\hat{Q}(s,a) = r + \gamma \max_{a'}{Q(s', a')}$. Then we update the weights by minimizing $(\hat{Q}(s,a) - Q(s,a))^2$.
For this Cart-Pole game, we have four inputs, one for each value in the state, and two outputs, one for each action. To get $\hat{Q}$, we'll first choose an action, then simulate the game using that action. This will get us the next state, $s'$, and the reward. With that, we can calculate $\hat{Q}$ then pass it back into the $Q$ network to run the optimizer and update the weights.
Below is my implementation of the Q-network. I used two fully connected layers with ReLU activations. Two seems to be good enough, three might be better. Feel free to try it out.
End of explanation
"""
from collections import deque
class Memory():
def __init__(self, max_size = 1000):
self.buffer = deque(maxlen=max_size)
def add(self, experience):
self.buffer.append(experience)
def sample(self, batch_size):
idx = np.random.choice(np.arange(len(self.buffer)),
size=batch_size,
replace=False)
return [self.buffer[ii] for ii in idx]
"""
Explanation: Experience replay
Reinforcement learning algorithms can have stability issues due to correlations between states. To reduce correlations when training, we can store the agent's experiences and later draw a random mini-batch of those experiences to train on.
Here, we'll create a Memory object that will store our experiences, our transitions $<s, a, r, s'>$. This memory will have a maximum capacity, so we can keep newer experiences in memory while getting rid of older experiences. Then, we'll sample a random mini-batch of transitions $<s, a, r, s'>$ and train on those.
Below, I've implemented a Memory object. If you're unfamiliar with deque, this is a double-ended queue. You can think of it like a tube open on both sides. You can put objects in either side of the tube. But if it's full, adding anything more will push an object out the other side. This is a great data structure to use for the memory buffer.
End of explanation
"""
train_episodes = 1000 # max number of episodes to learn from
max_steps = 200 # max steps in an episode
gamma = 0.99 # future reward discount
# Exploration parameters
explore_start = 1.0 # exploration probability at start
explore_stop = 0.01 # minimum exploration probability
decay_rate = 0.0001 # exponential decay rate for exploration prob
# Network parameters
hidden_size = 64 # number of units in each Q-network hidden layer
learning_rate = 0.0001 # Q-network learning rate
# Memory parameters
memory_size = 10000 # memory capacity
batch_size = 20 # experience mini-batch size
pretrain_length = batch_size # number experiences to pretrain the memory
tf.reset_default_graph()
mainQN = QNetwork(name='main', hidden_size=hidden_size, learning_rate=learning_rate)
"""
Explanation: Exploration - Exploitation
To learn about the environment and rules of the game, the agent needs to explore by taking random actions. We'll do this by choosing a random action with some probability $\epsilon$ (epsilon). That is, with some probability $\epsilon$ the agent will make a random action and with probability $1 - \epsilon$, the agent will choose an action from $Q(s,a)$. This is called an $\epsilon$-greedy policy.
At first, the agent needs to do a lot of exploring. Later when it has learned more, the agent can favor choosing actions based on what it has learned. This is called exploitation. We'll set it up so the agent is more likely to explore early in training, then more likely to exploit later in training.
Q-Learning training algorithm
Putting all this together, we can list out the algorithm we'll use to train the network. We'll train the network in episodes. One episode is one simulation of the game. For this game, the goal is to keep the pole upright for 195 frames, so we can start a new episode once that goal is met. The game also ends if the pole tilts over too far, or if the cart moves too far to the left or right. When a game ends, we'll start a new episode. Now, to train the agent:
Initialize the memory $D$
Initialize the action-value network $Q$ with random weights
For episode = 1, $M$ do
For $t$, $T$ do
With probability $\epsilon$ select a random action $a_t$, otherwise select $a_t = \mathrm{argmax}_a Q(s,a)$
Execute action $a_t$ in simulator and observe reward $r_{t+1}$ and new state $s_{t+1}$
Store transition $<s_t, a_t, r_{t+1}, s_{t+1}>$ in memory $D$
Sample random mini-batch from $D$: $<s_j, a_j, r_j, s'_j>$
Set $\hat{Q}_j = r_j$ if the episode ends at $j+1$, otherwise set $\hat{Q}_j = r_j + \gamma \max_{a'}{Q(s'_j, a')}$
Make a gradient descent step with loss $(\hat{Q}_j - Q(s_j, a_j))^2$
endfor
endfor
Hyperparameters
One of the more difficult aspects of reinforcement learning is the large number of hyperparameters. Not only are we tuning the network, but we're tuning the simulation.
End of explanation
"""
# Initialize the simulation
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
memory = Memory(max_size=memory_size)
# Make a bunch of random actions and store the experiences
for ii in range(pretrain_length):
# Uncomment the line below to watch the simulation
# env.render()
# Make a random action
action = env.action_space.sample()
next_state, reward, done, _ = env.step(action)
if done:
# The simulation fails so no next state
next_state = np.zeros(state.shape)
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
"""
Explanation: Populate the experience memory
Here I'm re-initializing the simulation and pre-populating the memory. The agent is taking random actions and storing the transitions in memory. This will help the agent with exploring the game.
End of explanation
"""
# Now train with experiences
saver = tf.train.Saver()
rewards_list = []
with tf.Session() as sess:
# Initialize variables
sess.run(tf.global_variables_initializer())
step = 0
for ep in range(1, train_episodes):
total_reward = 0
t = 0
while t < max_steps:
step += 1
# Uncomment this next line to watch the training
# env.render()
# Explore or Exploit
explore_p = explore_stop + (explore_start - explore_stop)*np.exp(-decay_rate*step)
if explore_p > np.random.rand():
# Make a random action
action = env.action_space.sample()
else:
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
total_reward += reward
if done:
# the episode ends so no next state
next_state = np.zeros(state.shape)
t = max_steps
print('Episode: {}'.format(ep),
'Total reward: {}'.format(total_reward),
'Training loss: {:.4f}'.format(loss),
'Explore P: {:.4f}'.format(explore_p))
rewards_list.append((ep, total_reward))
# Add experience to memory
memory.add((state, action, reward, next_state))
# Start new episode
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
# Add experience to memory
memory.add((state, action, reward, next_state))
state = next_state
t += 1
# Sample mini-batch from memory
batch = memory.sample(batch_size)
states = np.array([each[0] for each in batch])
actions = np.array([each[1] for each in batch])
rewards = np.array([each[2] for each in batch])
next_states = np.array([each[3] for each in batch])
# Train network
target_Qs = sess.run(mainQN.output, feed_dict={mainQN.inputs_: next_states})
# Set target_Qs to 0 for states where episode ends
episode_ends = (next_states == np.zeros(states[0].shape)).all(axis=1)
target_Qs[episode_ends] = (0, 0)
targets = rewards + gamma * np.max(target_Qs, axis=1)
loss, _ = sess.run([mainQN.loss, mainQN.opt],
feed_dict={mainQN.inputs_: states,
mainQN.targetQs_: targets,
mainQN.actions_: actions})
saver.save(sess, "checkpoints/cartpole.ckpt")
"""
Explanation: Training
Below we'll train our agent. If you want to watch it train, uncomment the env.render() line. This is slow because it's rendering the frames slower than the network can train. But, it's cool to watch the agent get better at the game.
End of explanation
"""
%matplotlib inline
import matplotlib.pyplot as plt
def running_mean(x, N):
cumsum = np.cumsum(np.insert(x, 0, 0))
return (cumsum[N:] - cumsum[:-N]) / N
eps, rews = np.array(rewards_list).T
smoothed_rews = running_mean(rews, 10)
plt.plot(eps[-len(smoothed_rews):], smoothed_rews)
plt.plot(eps, rews, color='grey', alpha=0.3)
plt.xlabel('Episode')
plt.ylabel('Total Reward')
"""
Explanation: Visualizing training
Below I'll plot the total rewards for each episode. I'm plotting the rolling average too, in blue.
End of explanation
"""
test_episodes = 10
test_max_steps = 400
env.reset()
with tf.Session() as sess:
saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))
for ep in range(1, test_episodes):
t = 0
while t < test_max_steps:
env.render()
# Get action from Q-network
feed = {mainQN.inputs_: state.reshape((1, *state.shape))}
Qs = sess.run(mainQN.output, feed_dict=feed)
action = np.argmax(Qs)
# Take action, get new state and reward
next_state, reward, done, _ = env.step(action)
if done:
t = test_max_steps
env.reset()
# Take one random step to get the pole and cart moving
state, reward, done, _ = env.step(env.action_space.sample())
else:
state = next_state
t += 1
env.close()
"""
Explanation: Testing
Let's checkout how our trained agent plays the game.
End of explanation
"""
| OceanPARCELS/parcels | parcels/examples/documentation_unstuck_Agrid.ipynb | mit |
import numpy as np
import numpy.ma as ma
from netCDF4 import Dataset
import xarray as xr
from scipy import interpolate
from parcels import FieldSet, ParticleSet, JITParticle, ScipyParticle, AdvectionRK4, Variable, Field,GeographicPolar,Geographic
from datetime import timedelta as delta
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
from matplotlib.colors import ListedColormap
from matplotlib.lines import Line2D
from copy import copy
import cmocean
"""
Explanation: Tutorial on implementing boundary conditions in an A grid
In another notebook, we have shown how particles may end up getting stuck on land, especially in A gridded velocity fields. Here we show how you can work around this problem and how large the effects of the solutions on the trajectories are.
Common solutions are:
1. Delete the particles
2. Displace the particles when they are within a certain distance of the coast.
3. Implement free-slip or partial-slip boundary conditions
In the first two of these solutions, kernels are used to modify the trajectories near the coast. The kernels all consist of two parts:
1. Flag particles whose trajectory should be modified
2. Modify the trajectory accordingly
In the third solution, the interpolation method is changed; this has to be done when creating the FieldSet.
This notebook is mainly focused on comparing the different modifications to the trajectory. The flagging of particles is also very relevant, however, and further discussion of it is encouraged. Some options shown here are:
1. Flag particles within a specific distance to the shore
2. Flag particles in any gridcell that has a shore edge
As argued in the previous notebook, it is important to accurately plot the grid discretization, in order to understand the motion of particles near the boundary. The velocity fields can best be depicted using points or arrows that define the velocity at a single position. Four of these nodes then form gridcells that can be shown using tiles, for example with matplotlib.pyplot.pcolormesh.
End of explanation
"""
file_path = "GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_20190704_R20190705.nc"
model = xr.open_dataset(file_path)
# --------- Define meshgrid coordinates to plot velocity field with matplotlib pcolormesh ---------
latmin = 1595
latmax = 1612
lonmin = 2235
lonmax = 2260
# Velocity nodes
lon_vals, lat_vals = np.meshgrid(model['longitude'], model['latitude'])
lons_plot = lon_vals[latmin:latmax,lonmin:lonmax]
lats_plot = lat_vals[latmin:latmax,lonmin:lonmax]
dlon = 1/12
dlat = 1/12
# Centers of the gridcells formed by 4 nodes = velocity nodes + 0.5 dx
x = model['longitude'][:-1]+np.diff(model['longitude'])/2
y = model['latitude'][:-1]+np.diff(model['latitude'])/2
lon_centers, lat_centers = np.meshgrid(x, y)
color_land = copy(plt.get_cmap('Reds'))(0)
color_ocean = copy(plt.get_cmap('Reds'))(128)
"""
Explanation: 1. Particle deletion
The simplest way to avoid trajectories that interact with the coastline is to remove them entirely. To do this, all Particle objects have a delete function that can be invoked in a kernel using particle.delete(); a minimal kernel sketch follows this explanation.
2. Displacement
A simple concept to avoid particles moving onto shore is displacing them towards the ocean as they get close to shore. This is for example done in Kaandorp et al. (2020) and Delandmeter and van Sebille (2018). To do so, a particle must be 'aware' of where the shore is and displaced accordingly. In Parcels, we can do this by adding a 'displacement' Field to the Fieldset, which contains vectors pointing away from shore.
Import a velocity field - the A gridded SMOC product
End of explanation
"""
def make_landmask(fielddata):
"""Returns landmask where land = 1 and ocean = 0
fielddata is a netcdf file.
"""
datafile = Dataset(fielddata)
landmask = datafile.variables['uo'][0, 0]
landmask = np.ma.masked_invalid(landmask)
landmask = landmask.mask.astype('int')
return landmask
landmask = make_landmask(file_path)
# Interpolate the landmask to the cell centers - only cells with 4 neighbouring land points will be land
fl = interpolate.interp2d(model['longitude'],model['latitude'],landmask)
l_centers = fl(lon_centers[0,:],lat_centers[:,0])
lmask = np.ma.masked_values(l_centers,1) # land when interpolated value == 1
fig = plt.figure(figsize=(12,5))
fig.suptitle('Figure 1. Landmask', fontsize=18, y=1.01)
gs = gridspec.GridSpec(ncols=2, nrows=1, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
ax0.set_title('A) lazy use of pcolormesh', fontsize=11)
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
land0 = ax0.pcolormesh(lons_plot, lats_plot, landmask[latmin:latmax,lonmin:lonmax],cmap='Reds_r', shading='auto')
ax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05,edgecolors='k')
custom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]
ax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)
ax1 = fig.add_subplot(gs[0, 1])
ax1.set_title('B) correct A grid representation in Parcels', fontsize=11)
ax1.set_ylabel('Latitude [degrees]')
ax1.set_xlabel('Longitude [degrees]')
land1 = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax1.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05,edgecolors='k')
ax1.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)
"""
Explanation: Make a landmask where land = 1 and ocean = 0.
End of explanation
"""
def get_coastal_nodes(landmask):
"""Function that detects the coastal nodes, i.e. the ocean nodes directly
next to land. Computes the Laplacian of landmask.
- landmask: the land mask built using `make_landmask`, where land cell = 1
and ocean cell = 0.
    Output: 2D array containing the coastal nodes; the coastal nodes are
equal to one, and the rest is zero.
"""
mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)
mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)
mask_lap -= 4*landmask
coastal = np.ma.masked_array(landmask, mask_lap > 0)
coastal = coastal.mask.astype('int')
return coastal
def get_shore_nodes(landmask):
"""Function that detects the shore nodes, i.e. the land nodes directly
next to the ocean. Computes the Laplacian of landmask.
- landmask: the land mask built using `make_landmask`, where land cell = 1
and ocean cell = 0.
    Output: 2D array containing the shore nodes; the shore nodes are
equal to one, and the rest is zero.
"""
mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)
mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)
mask_lap -= 4*landmask
shore = np.ma.masked_array(landmask, mask_lap < 0)
shore = shore.mask.astype('int')
return shore
def get_coastal_nodes_diagonal(landmask):
"""Function that detects the coastal nodes, i.e. the ocean nodes where
one of the 8 nearest nodes is land. Computes the Laplacian of landmask
and the Laplacian of the 45 degree rotated landmask.
- landmask: the land mask built using `make_landmask`, where land cell = 1
and ocean cell = 0.
    Output: 2D array containing the coastal nodes; the coastal nodes are
equal to one, and the rest is zero.
"""
mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)
mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)
mask_lap += np.roll(landmask, (-1,1), axis=(0,1)) + np.roll(landmask, (1, 1), axis=(0,1))
mask_lap += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (1, -1), axis=(0,1))
mask_lap -= 8*landmask
coastal = np.ma.masked_array(landmask, mask_lap > 0)
coastal = coastal.mask.astype('int')
return coastal
def get_shore_nodes_diagonal(landmask):
"""Function that detects the shore nodes, i.e. the land nodes where
one of the 8 nearest nodes is ocean. Computes the Laplacian of landmask
and the Laplacian of the 45 degree rotated landmask.
- landmask: the land mask built using `make_landmask`, where land cell = 1
and ocean cell = 0.
    Output: 2D array containing the shore nodes; the shore nodes are
equal to one, and the rest is zero.
"""
mask_lap = np.roll(landmask, -1, axis=0) + np.roll(landmask, 1, axis=0)
mask_lap += np.roll(landmask, -1, axis=1) + np.roll(landmask, 1, axis=1)
mask_lap += np.roll(landmask, (-1,1), axis=(0,1)) + np.roll(landmask, (1, 1), axis=(0,1))
mask_lap += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (1, -1), axis=(0,1))
mask_lap -= 8*landmask
shore = np.ma.masked_array(landmask, mask_lap < 0)
shore = shore.mask.astype('int')
return shore
coastal = get_coastal_nodes_diagonal(landmask)
shore = get_shore_nodes_diagonal(landmask)
fig = plt.figure(figsize=(10,4), constrained_layout=True)
fig.suptitle('Figure 2. Coast and Shore', fontsize=18, y=1.04)
gs = gridspec.GridSpec(ncols=2, nrows=1, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
land0 = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
coa = ax0.scatter(lons_plot,lats_plot, c=coastal[latmin:latmax,lonmin:lonmax], cmap='Reds_r', s=50)
ax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05)
ax0.set_title('Coast')
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
custom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=5, lw=0),
Line2D([0], [0], c = color_ocean, marker='o', markersize=7, markeredgecolor='w', markeredgewidth=2, lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='firebrick', lw=0)]
ax0.legend(custom_lines, ['ocean node', 'coast node', 'land node'], bbox_to_anchor=(.01,.9), loc='center left', borderaxespad=0.,framealpha=1, facecolor='silver')
ax1 = fig.add_subplot(gs[0, 1])
land1 = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
sho = ax1.scatter(lons_plot,lats_plot, c=shore[latmin:latmax,lonmin:lonmax], cmap='Reds_r', s=50)
ax1.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=20,cmap='Reds_r',vmin=-0.05,vmax=0.05)
ax1.set_title('Shore')
ax1.set_ylabel('Latitude [degrees]')
ax1.set_xlabel('Longitude [degrees]')
custom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=5, lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='w', markeredgewidth=2, lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=7, markeredgecolor='firebrick', lw=0)]
ax1.legend(custom_lines, ['ocean node', 'shore node', 'land node'], bbox_to_anchor=(.01,.9), loc='center left', borderaxespad=0.,framealpha=1, facecolor='silver')
"""
Explanation: Figure 1 shows why it is important to be precise when visualizing the model land and ocean. Parcels trajectories should not cross the land boundary between two land nodes as seen in 1B.
Detect the coast
We can detect the edges between land and ocean nodes by computing the Laplacian with the 4 nearest neighbors [i+1,j], [i-1,j], [i,j+1] and [i,j-1]:
$$\nabla^2 \text{landmask} = \partial_{xx} \text{landmask} + \partial_{yy} \text{landmask},$$
and filtering the positive and negative values. This gives us the location of coast nodes (ocean nodes next to land) and shore nodes (land nodes next to the ocean).
Additionally, we can find the nodes that border the coast/shore diagonally by considering the 8 nearest neighbors, including [i+1,j+1], [i+1,j-1], [i-1,j+1] and [i-1,j-1].
End of explanation
"""
def create_displacement_field(landmask, double_cell=False):
"""Function that creates a displacement field 1 m/s away from the shore.
    - landmask: the land mask built using `make_landmask`.
- double_cell: Boolean for determining if you want a double cell.
Default set to False.
    Output: two 2D arrays, one for each component of the velocity.
"""
shore = get_shore_nodes(landmask)
shore_d = get_shore_nodes_diagonal(landmask) # bordering ocean directly and diagonally
shore_c = shore_d - shore # corner nodes that only border ocean diagonally
Ly = np.roll(landmask, -1, axis=0) - np.roll(landmask, 1, axis=0) # Simple derivative
Lx = np.roll(landmask, -1, axis=1) - np.roll(landmask, 1, axis=1)
Ly_c = np.roll(landmask, -1, axis=0) - np.roll(landmask, 1, axis=0)
Ly_c += np.roll(landmask, (-1,-1), axis=(0,1)) + np.roll(landmask, (-1,1), axis=(0,1)) # Include y-component of diagonal neighbours
Ly_c += - np.roll(landmask, (1,-1), axis=(0,1)) - np.roll(landmask, (1,1), axis=(0,1))
Lx_c = np.roll(landmask, -1, axis=1) - np.roll(landmask, 1, axis=1)
Lx_c += np.roll(landmask, (-1,-1), axis=(1,0)) + np.roll(landmask, (-1,1), axis=(1,0)) # Include x-component of diagonal neighbours
Lx_c += - np.roll(landmask, (1,-1), axis=(1,0)) - np.roll(landmask, (1,1), axis=(1,0))
v_x = -Lx*(shore)
v_y = -Ly*(shore)
v_x_c = -Lx_c*(shore_c)
v_y_c = -Ly_c*(shore_c)
v_x = v_x + v_x_c
v_y = v_y + v_y_c
magnitude = np.sqrt(v_y**2 + v_x**2)
# the coastal nodes between land create a problem. Magnitude there is zero
# I force it to be 1 to avoid problems when normalizing.
ny, nx = np.where(magnitude == 0)
magnitude[ny, nx] = 1
v_x = v_x/magnitude
v_y = v_y/magnitude
return v_x, v_y
v_x, v_y = create_displacement_field(landmask)
fig = plt.figure(figsize=(7,6), constrained_layout=True)
fig.suptitle('Figure 3. Displacement field', fontsize=18, y=1.04)
gs = gridspec.GridSpec(ncols=1, nrows=1, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
land = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=30,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')
quiv = ax0.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange',angles='xy', scale_units='xy', scale=19, width=0.005)
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
custom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]
ax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.93), loc='center left', borderaxespad=0.,framealpha=1)
"""
Explanation: Assigning coastal velocities
For the displacement kernel we define a velocity field that pushes the particles back to the ocean. This velocity is a vector normal to the shore.
For the shore nodes directly next to the ocean, we can take the simple derivative of landmask and project the result onto the shore array; this captures the orientation of the velocity vectors.
For the shore nodes that only have a diagonal component, we need to take into account the diagonal nodes also and project the vectors only onto the inside corners that border the ocean diagonally.
Then to make the vectors unitary, we normalize them by their magnitude.
End of explanation
"""
def distance_to_shore(landmask, dx=1):
"""Function that computes the distance to the shore. It is based in the
the `get_coastal_nodes` algorithm.
- landmask: the land mask dUilt using `make_landmask` function.
- dx: the grid cell dimension. This is a crude approxsimation of the real
distance (be careful).
Output: 2D array containing the distances from shore.
"""
ci = get_coastal_nodes(landmask) # direct neighbours
dist = ci*dx # 1 dx away
ci_d = get_coastal_nodes_diagonal(landmask) # diagonal neighbours
dist_d = (ci_d - ci)*np.sqrt(2*dx**2) # sqrt(2) dx away
return dist+dist_d
d_2_s = distance_to_shore(landmask)
fig = plt.figure(figsize=(6,5), constrained_layout=True)
ax0 = fig.add_subplot()
ax0.set_title('Figure 4. Distance to shore', fontsize=18)
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
land = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
d2s = ax0.scatter(lons_plot,lats_plot, c=d_2_s[latmin:latmax,lonmin:lonmax])
plt.colorbar(d2s,ax=ax0, label='Distance [gridcells]')
"""
Explanation: Calculate the distance to the shore
In this tutorial, we will only displace particles that are within some distance (smaller than the grid size) of the shore.
For this, we map the distance of the coastal nodes to the shore: coastal nodes directly neighboring the shore are $1dx$ away, and diagonal neighbors are $\sqrt{2}dx$ away. The particles can then sample this field and will only be displaced when they are closer than a threshold value. This gives a crude estimate of the distance.
End of explanation
"""
class DisplacementParticle(JITParticle):
dU = Variable('dU')
dV = Variable('dV')
d2s = Variable('d2s', initial=1e3)
def set_displacement(particle, fieldset, time):
particle.d2s = fieldset.distance2shore[time, particle.depth,
particle.lat, particle.lon]
if particle.d2s < 0.5:
dispUab = fieldset.dispU[time, particle.depth, particle.lat,
particle.lon]
dispVab = fieldset.dispV[time, particle.depth, particle.lat,
particle.lon]
particle.dU = dispUab
particle.dV = dispVab
else:
particle.dU = 0.
particle.dV = 0.
def displace(particle, fieldset, time):
if particle.d2s < 0.5:
particle.lon += particle.dU*particle.dt
particle.lat += particle.dV*particle.dt
"""
Explanation: Particle and Kernels
The distance to shore, used to flag whether a particle must be displaced, is stored in a particle Variable d2s. To visualize the displacement, the zonal and meridional displacements are stored in the variables dU and dV.
To write the displacement vector to the output before displacing the particle, the set_displacement kernel is invoked after the advection kernel. Then only in the next timestep are particles displaced by displace, before resuming the advection.
End of explanation
"""
SMOCfile = 'GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_201907*.nc'
filenames = {'U': SMOCfile,
'V': SMOCfile}
variables = {'U': 'uo',
'V': 'vo'}
dimensions = {'U': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'},
'V': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}}
indices = {'lon': range(lonmin, lonmax), 'lat': range(latmin, latmax)} # to load only a small part of the domain
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices)
"""
Explanation: Simulation
Let us first do a simulation with the default AdvectionRK4 kernel for comparison later
End of explanation
"""
npart = 9 # number of particles to be released
lon = np.linspace(7, 7.2, int(np.sqrt(npart)), dtype=np.float32)
lat = np.linspace(53.45, 53.65, int(np.sqrt(npart)), dtype=np.float32)
lons, lats = np.meshgrid(lon,lat)
time = np.zeros(lons.size)
runtime = delta(hours=100)
dt = delta(minutes=10)
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)
kernels = AdvectionRK4
output_file = pset.ParticleFile(name="SMOC.nc", outputdt=delta(hours=1))
pset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)
output_file.close()
"""
Explanation: And we use the following set of 9 particles
End of explanation
"""
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices)
u_displacement = v_x
v_displacement = v_y
fieldset.add_field(Field('dispU', data=u_displacement[latmin:latmax,lonmin:lonmax],
lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,
mesh='spherical'))
fieldset.add_field(Field('dispV', data=v_displacement[latmin:latmax,lonmin:lonmax],
lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,
mesh='spherical'))
fieldset.dispU.units = GeographicPolar()
fieldset.dispV.units = Geographic()
fieldset.add_field(Field('landmask', landmask[latmin:latmax,lonmin:lonmax],
lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,
mesh='spherical'))
fieldset.add_field(Field('distance2shore', d_2_s[latmin:latmax,lonmin:lonmax],
lon=fieldset.U.grid.lon, lat=fieldset.U.grid.lat,
mesh='spherical'))
pset = ParticleSet(fieldset=fieldset, pclass=DisplacementParticle, lon=lons, lat=lats, time=time)
kernels = pset.Kernel(displace)+pset.Kernel(AdvectionRK4)+pset.Kernel(set_displacement)
output_file = pset.ParticleFile(name="SMOC-disp.nc", outputdt=delta(hours=1))
pset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)
output_file.close()
"""
Explanation: Now let's add the Fields we created above to the FieldSet and do a simulation to test the displacement of the particles as they approach the shore.
End of explanation
"""
ds_SMOC = xr.open_dataset('SMOC.nc')
ds_SMOC_disp = xr.open_dataset('SMOC-disp.nc')
fig = plt.figure(figsize=(16,4), facecolor='silver', constrained_layout=True)
fig.suptitle('Figure 5. Trajectory difference', fontsize=18, y=1.06)
gs = gridspec.GridSpec(ncols=4, nrows=1, width_ratios=[1,1,1,0.3], figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
ax0.set_title('A) No displacement', fontsize=14, fontweight = 'bold')
ax0.set_xlim(6.9, 7.6)
ax0.set_ylim(53.4, 53.8)
land = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')
ax0.plot(ds_SMOC['lon'].T, ds_SMOC['lat'].T,linewidth=3, zorder=1)
ax0.scatter(ds_SMOC['lon'], ds_SMOC['lat'], color='limegreen', zorder=2)
n_p0 = 0
ax1 = fig.add_subplot(gs[0, 1])
ax1.set_ylabel('Latitude [degrees]')
ax1.set_xlabel('Longitude [degrees]')
ax1.set_title('B) Displacement trajectory '+str(n_p0), fontsize=14, fontweight = 'bold')
ax1.set_xlim(6.9, 7.3)
ax1.set_ylim(53.4, 53.55)
land = ax1.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax1.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')
quiv = ax1.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange', scale=19, width=0.005)
ax1.plot(ds_SMOC_disp['lon'][n_p0].T, ds_SMOC_disp['lat'][n_p0].T,linewidth=3, zorder=1)
ax1.scatter(ds_SMOC['lon'][n_p0], ds_SMOC['lat'][n_p0], color='limegreen', zorder=2)
ax1.scatter(ds_SMOC_disp['lon'][n_p0], ds_SMOC_disp['lat'][n_p0], cmap='viridis_r', zorder=2)
ax1.quiver(ds_SMOC_disp['lon'][n_p0], ds_SMOC_disp['lat'][n_p0],ds_SMOC_disp['dU'][n_p0], ds_SMOC_disp['dV'][n_p0], color='w',angles='xy', scale_units='xy', scale=2e-4, zorder=3)
n_p1 = 4
ax2 = fig.add_subplot(gs[0, 2])
ax2.set_ylabel('Latitude [degrees]')
ax2.set_xlabel('Longitude [degrees]')
ax2.set_title('C) Displacement trajectory '+str(n_p1), fontsize=14, fontweight = 'bold')
ax2.set_xlim(7., 7.6)
ax2.set_ylim(53.4, 53.8)
land = ax2.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax2.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')
q1 = ax2.quiver(lons_plot,lats_plot,v_x[latmin:latmax,lonmin:lonmax],v_y[latmin:latmax,lonmin:lonmax],color='orange', scale=19, width=0.005)
ax2.plot(ds_SMOC_disp['lon'][n_p1].T, ds_SMOC_disp['lat'][n_p1].T,linewidth=3, zorder=1)
ax2.scatter(ds_SMOC['lon'][n_p1], ds_SMOC['lat'][n_p1], color='limegreen', zorder=2)
ax2.scatter(ds_SMOC_disp['lon'][n_p1], ds_SMOC_disp['lat'][n_p1], cmap='viridis_r', zorder=2)
q2 = ax2.quiver(ds_SMOC_disp['lon'][n_p1], ds_SMOC_disp['lat'][n_p1],ds_SMOC_disp['dU'][n_p1], ds_SMOC_disp['dV'][n_p1], color='w',angles='xy', scale_units='xy', scale=2e-4, zorder=3)
ax3 = fig.add_subplot(gs[0, 3])
ax3.axis('off')
custom_lines = [Line2D([0], [0], c = 'tab:blue', marker='o', markersize=10),
Line2D([0], [0], c = 'limegreen', marker='o', markersize=10),
Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]
ax3.legend(custom_lines, ['with displacement', 'without displacement', 'ocean point', 'land point'], bbox_to_anchor=(0.,0.6), loc='center left', borderaxespad=0.,framealpha=1)
ax2.quiverkey(q1, 1.3, 0.9, 2, 'displacement field', coordinates='axes')
ax2.quiverkey(q2, 1.3, 0.8, 1e-5, 'particle displacement', coordinates='axes')
plt.show()
"""
Explanation: Output
To visualize the effect of the displacement, the particle trajectory output can be compared to the simulation without the displacement kernel.
End of explanation
"""
d2s_cmap = copy(plt.get_cmap('cmo.deep_r'))
d2s_cmap.set_over('gold')
fig = plt.figure(figsize=(11,6), constrained_layout=True)
ax0 = fig.add_subplot()
ax0.set_title('Figure 6. Distance to shore', fontsize=18)
land = ax0.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax0.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r', edgecolor='k',vmin=-0.05,vmax=0.05)
ax0.plot(ds_SMOC_disp['lon'].T, ds_SMOC_disp['lat'].T,linewidth=3, zorder=1)
d2s = ax0.scatter(ds_SMOC_disp['lon'], ds_SMOC_disp['lat'], c=ds_SMOC_disp['d2s'],cmap=d2s_cmap, s=20,vmax=0.5, zorder=2)
q2 = ax0.quiver(ds_SMOC_disp['lon'], ds_SMOC_disp['lat'],ds_SMOC_disp['dU'], ds_SMOC_disp['dV'], color='k',angles='xy', scale_units='xy', scale=2.3e-4, width=0.003, zorder=3)
ax0.set_xlim(6.9, 8)
ax0.set_ylim(53.4, 53.8)
ax0.set_ylabel('Latitude [degrees]')
ax0.set_xlabel('Longitude [degrees]')
plt.colorbar(d2s,ax=ax0, label='Distance [gridcells]',extend='max')
color_land = copy(plt.get_cmap('Reds'))(0)
color_ocean = copy(plt.get_cmap('Reds'))(128)
custom_lines = [Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]
ax0.legend(custom_lines, ['ocean point', 'land point'], bbox_to_anchor=(.01,.95), loc='center left', borderaxespad=0.,framealpha=1)
"""
Explanation: Conclusion
Figure 5 shows how particles are prevented from approaching the coast in a 5-day simulation. Note that, to show each computation, the integration timestep (dt) is equal to the output timestep (outputdt): 1 hour. This is relatively large, and causes the displacement to be on the order of 4 km and relatively infrequent. It is advised to use a smaller dt in real simulations.
End of explanation
"""
cells_x = np.array([[0,0],[1,1],[2,2]])
cells_y = np.array([[0,1],[0,1],[0,1]])
U0 = 1
V0 = 1
U = np.array([U0,U0,0,0,0,0])
V = np.array([V0,V0,0,0,0,0])
xsi = np.linspace(0.001,0.999)
u_interp = U0*(1-xsi)
v_interp = V0*(1-xsi)
u_freeslip = u_interp
v_freeslip = v_interp/(1-xsi)
u_partslip = u_interp
v_partslip = v_interp*(1-.5*xsi)/(1-xsi)
fig = plt.figure(figsize=(15,4), constrained_layout=True)
fig.suptitle('Figure 7. Boundary conditions', fontsize=18, y=1.06)
gs = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)
ax0 = fig.add_subplot(gs[0, 0])
ax0.pcolormesh(cells_x, cells_y, np.array([[0],[1]]), cmap='Greys',edgecolor='k')
ax0.scatter(cells_x,cells_y, c='w', edgecolor='k')
ax0.quiver(cells_x,cells_y,U,V, scale=15)
ax0.plot(xsi, u_interp,linewidth=5, label='u_interpolation')
ax0.plot(xsi, v_interp, linestyle='dashed',linewidth=5, label='v_interpolation')
ax0.set_xlim(-0.3,2.3)
ax0.set_ylim(-0.5,1.5)
ax0.set_ylabel('u - v [-]', fontsize=14)
ax0.set_xlabel(r'$\xi$', fontsize = 14)
ax0.set_title('A) Bilinear interpolation')
ax0.legend(loc='lower right')
ax1 = fig.add_subplot(gs[0, 1])
ax1.pcolormesh(cells_x, cells_y,np.array([[0],[1]]), cmap='Greys',edgecolor='k')
ax1.scatter(cells_x,cells_y, c='w', edgecolor='k')
ax1.quiver(cells_x,cells_y,U,V, scale=15)
ax1.plot(xsi, u_freeslip,linewidth=5, label='u_freeslip')
ax1.plot(xsi, v_freeslip, linestyle='dashed',linewidth=5, label='v_freeslip')
ax1.set_xlim(-0.3,2.3)
ax1.set_ylim(-0.5,1.5)
ax1.set_xlabel(r'$\xi$', fontsize = 14)
ax1.text(0., 1.3, r'$v_{freeslip} = v_{interpolation}*\frac{1}{1-\xi}$', fontsize = 18)
ax1.set_title('B) Free slip condition')
ax1.legend(loc='lower right')
ax2 = fig.add_subplot(gs[0, 2])
ax2.pcolormesh(cells_x, cells_y,np.array([[0],[1]]), cmap='Greys',edgecolor='k')
ax2.scatter(cells_x,cells_y, c='w', edgecolor='k')
ax2.quiver(cells_x,cells_y,U,V, scale=15)
ax2.plot(xsi, u_partslip,linewidth=5, label='u_partialslip')
ax2.plot(xsi, v_partslip, linestyle='dashed',linewidth=5, label='v_partialslip')
ax2.set_xlim(-0.3,2.3)
ax2.set_ylim(-0.5,1.5)
ax2.set_xlabel(r'$\xi$', fontsize = 14)
ax2.text(0., 1.3, r'$v_{partialslip} = v_{interpolation}*\frac{1-1/2\xi}{1-\xi}$', fontsize = 18)
ax2.set_title('C) Partial slip condition')
ax2.legend(loc='lower right');
"""
Explanation: 3. Slip boundary conditions
The reason trajectories do not neatly follow the coast in A-grid velocity fields is that the lack of staggering causes both velocity components to go to zero in the same way towards the cell edge. This no-slip condition can be turned into a free-slip or partial-slip condition by separately considering the cross-shore and along-shore velocity components, as in a staggered C-grid. Each interpolation of the velocity field must then be corrected with a factor that depends on the direction of the boundary.
These boundary conditions have been implemented in Parcels as interp_method=partialslip and interp_method=freeslip, which we will show in the plot below.
End of explanation
"""
SMOCfile = 'GLOBAL_ANALYSIS_FORECAST_PHY_001_024_SMOC/SMOC_201907*.nc'
filenames = {'U': SMOCfile,
'V': SMOCfile}
variables = {'U': 'uo',
'V': 'vo'}
dimensions = {'U': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'},
'V': {'lon': 'longitude', 'lat': 'latitude', 'depth': 'depth', 'time': 'time'}}
indices = {'lon': range(lonmin, lonmax), 'lat': range(latmin, latmax)}
"""
Explanation: Consider a grid cell with a solid boundary to the right and vectors $(U0, V0)$ = $(1, 1)$ on the lefthand nodes, as in figure 7. Parcels' bilinear interpolation will interpolate in the $x$ and $y$ directions. Since this cell is invariant in the $y$-direction, we will only consider the effect in the direction normal to the boundary. In the $x$-direction, both $u$ and $v$ will be interpolated along $\xi$, the normalized $x$-coordinate within the cell. This is plotted with the blue and dashed orange lines in subfigure 7A.
A free slip boundary condition is defined by $\frac{\partial v}{\partial \xi}=0$. This means that the tangential velocity is constant in the direction normal to the boundary. This can be achieved in a kernel after interpolation by dividing by $(1-\xi)$. The resulting velocity profiles are shown in subfigure 7B.
A partial slip boundary condition is defined with a tangential velocity profile that decreases toward the boundary, but not to zero. This can be achieved by multiplying the interpolated velocity by $\frac{1-1/2\xi}{1-\xi}$. This is shown in subfigure 7C.
For each direction and boundary condition a different factor must be used (where $\xi$ and $\eta$ are the normalized x- and y-coordinates within the cell, respectively):
- Free slip
1: $f_u = \frac{1}{\eta}$
2: $f_u = \frac{1}{(1-\eta)}$
4: $f_v = \frac{1}{\xi}$
8: $f_v = \frac{1}{(1-\xi)}$
- Partial slip
1: $f_u = \frac{1/2+1/2\eta}{\eta}$
2: $f_u = \frac{1-1/2\eta}{1-\eta}$
4: $f_v = \frac{1/2+1/2\xi}{\xi}$
8: $f_v = \frac{1-1/2\xi}{1-\xi}$
We now simulate the three different boundary conditions by advecting the 9 particles from above in a time-evolving SMOC dataset from CMEMS.
End of explanation
"""
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices,
interp_method={'U': 'partialslip', 'V': 'partialslip'}) # Setting the interpolation for U and V
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)
kernels = pset.Kernel(AdvectionRK4)
output_file = pset.ParticleFile(name="SMOC_partialslip.nc", outputdt=delta(hours=1))
pset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)
output_file.close() # export the trajectory data to a netcdf file
"""
Explanation: First up is the partialslip interpolation (note that we have to redefine the FieldSet because the interp_method=partialslip is set there)
End of explanation
"""
fieldset = FieldSet.from_netcdf(filenames, variables, dimensions, indices=indices,
interp_method={'U': 'freeslip', 'V': 'freeslip'}) # Setting the interpolation for U and V
pset = ParticleSet(fieldset=fieldset, pclass=JITParticle, lon=lons, lat=lats, time=time)
kernels = pset.Kernel(AdvectionRK4)
output_file = pset.ParticleFile(name="SMOC_freeslip.nc", outputdt=delta(hours=1))
pset.execute(kernels, runtime=runtime, dt=dt, output_file=output_file)
output_file.close() # export the trajectory data to a netcdf file
"""
Explanation: And then we also use the freeslip interpolation
End of explanation
"""
ds_SMOC = xr.open_dataset('SMOC.nc')
ds_SMOC_part = xr.open_dataset('SMOC_partialslip.nc')
ds_SMOC_free = xr.open_dataset('SMOC_freeslip.nc')
fig = plt.figure(figsize=(18,5), constrained_layout=True)
fig.suptitle('Figure 8. Solution comparison', fontsize=18, y=1.06)
gs = gridspec.GridSpec(ncols=3, nrows=1, figure=fig)
n_p=[[0, 1, 3, 4, 6, 7, 8], 0, 6]
for i in range(3):
ax = fig.add_subplot(gs[0, i])
ax.set_title(chr(i+65)+') Trajectory '+str(n_p[i]), fontsize = 18)
land = ax.pcolormesh(lon_vals[latmin:latmax+1,lonmin:lonmax+1], lat_vals[latmin:latmax+1,lonmin:lonmax+1], lmask.mask[latmin:latmax,lonmin:lonmax],cmap='Reds_r')
ax.scatter(lons_plot, lats_plot, c=landmask[latmin:latmax,lonmin:lonmax],s=50,cmap='Reds_r',vmin=-0.05,vmax=0.05, edgecolors='k')
ax.scatter(ds_SMOC['lon'][n_p[i]], ds_SMOC['lat'][n_p[i]], s=30, color='limegreen', zorder=2)
ax.scatter(ds_SMOC_disp['lon'][n_p[i]], ds_SMOC_disp['lat'][n_p[i]], s=25, color='tab:blue', zorder=2)
ax.scatter(ds_SMOC_part['lon'][n_p[i]], ds_SMOC_part['lat'][n_p[i]], s=20, color='magenta', zorder=2)
ax.scatter(ds_SMOC_free['lon'][n_p[i]], ds_SMOC_free['lat'][n_p[i]], s=15, color='gold', zorder=2)
ax.set_xlim(6.9, 7.6)
ax.set_ylim(53.4, 53.9)
ax.set_ylabel('Latitude [degrees]')
ax.set_xlabel('Longitude [degrees]')
color_land = copy(plt.get_cmap('Reds'))(0)
color_ocean = copy(plt.get_cmap('Reds'))(128)
custom_lines = [Line2D([0], [0], c = 'limegreen', marker='o', markersize=10, lw=0),
Line2D([0], [0], c = 'tab:blue', marker='o', markersize=10, lw=0),
Line2D([0], [0], c = 'magenta', marker='o', markersize=10, lw=0),
Line2D([0], [0], c = 'gold', marker='o', markersize=10, lw=0),
Line2D([0], [0], c = color_ocean, marker='o', markersize=10, markeredgecolor='k', lw=0),
Line2D([0], [0], c = color_land, marker='o', markersize=10, markeredgecolor='k', lw=0)]
ax.legend(custom_lines, ['basic RK4','displacement','partial slip', 'free slip','ocean point', 'land point'], bbox_to_anchor=(.01,.8), loc='center left', borderaxespad=0.,framealpha=1)
"""
Explanation: Now we can load and plot the three different interpolation_methods
End of explanation
"""
|
mari-linhares/tensorflow-workshop
|
code_samples/StructuredDataExample/automobile.ipynb
|
apache-2.0
|
from __future__ import print_function
from __future__ import division
from __future__ import absolute_import
# We're using pandas to read the CSV file. This is easy for small datasets, but for large and complex datasets,
# tensorflow parsing and processing functions are more powerful
import pandas as pd
import numpy as np
# TensorFlow
import tensorflow as tf
print('please make sure that version >= 1.2:')
print(tf.__version__)
print('@monteirom: I made changes so it also works with 1.1.0 that is the current pip install version')
print('@monteirom: The lines that were changed have @1.2 as comment')
# Layers that will define the features
#
# real_valued_column: real values, float32
# sparse_column_with_hash_bucket: Use this when your sparse features are in string or integer format,
# but you don't have a vocab file that maps each value to an integer ID.
# output_id = Hash(input_feature_string) % bucket_size
# sparse_column_with_keys: Look up logic is as follows:
# lookup_id = index_of_feature_in_keys if feature in keys else default_value.
# You should use this when you know the vocab file for the feature
# one_hot_column: Creates an _OneHotColumn for a one-hot or multi-hot repr in a DNN.
# The input can be a _SparseColumn which is created by `sparse_column_with_*`
# or crossed_column functions
from tensorflow.contrib.layers import real_valued_column, sparse_column_with_keys, sparse_column_with_hash_bucket
from tensorflow.contrib.layers import one_hot_column
"""
Explanation: Structure Data Example: Automobile dataset
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
End of explanation
"""
# The CSV file does not have a header, so we have to fill in column names.
names = [
'symboling',
'normalized-losses',
'make',
'fuel-type',
'aspiration',
'num-of-doors',
'body-style',
'drive-wheels',
'engine-location',
'wheel-base',
'length',
'width',
'height',
'curb-weight',
'engine-type',
'num-of-cylinders',
'engine-size',
'fuel-system',
'bore',
'stroke',
'compression-ratio',
'horsepower',
'peak-rpm',
'city-mpg',
'highway-mpg',
'price',
]
# We also have to specify dtypes.
dtypes = {
'symboling': np.int32,
'normalized-losses': np.float32,
'make': str,
'fuel-type': str,
'aspiration': str,
'num-of-doors': str,
'body-style': str,
'drive-wheels': str,
'engine-location': str,
'wheel-base': np.float32,
'length': np.float32,
'width': np.float32,
'height': np.float32,
'curb-weight': np.float32,
'engine-type': str,
'num-of-cylinders': str,
'engine-size': np.float32,
'fuel-system': str,
'bore': np.float32,
'stroke': np.float32,
'compression-ratio': np.float32,
'horsepower': np.float32,
'peak-rpm': np.float32,
'city-mpg': np.float32,
'highway-mpg': np.float32,
'price': np.float32,
}
# Read the file.
df = pd.read_csv('data/imports-85.data', names=names, dtype=dtypes, na_values='?')
# Some rows don't have price data, we can't use those.
df = df.dropna(axis='rows', how='any', subset=['price'])
"""
Explanation: Please Download
https://archive.ics.uci.edu/ml/machine-learning-databases/autos/imports-85.data
And move it to data/
So: data/imports-85.data is expected to exist!
Preparing the data
End of explanation
"""
# Fill missing values in continuous columns with zeros instead of NaN.
float_columns = [k for k,v in dtypes.items() if v == np.float32]
df[float_columns] = df[float_columns].fillna(value=0., axis='columns')
# Fill missing values in string columns with '' instead of NaN (NaN mixed with strings is very bad for us).
string_columns = [k for k,v in dtypes.items() if v == str]
df[string_columns] = df[string_columns].fillna(value='', axis='columns')
"""
Explanation: Dealing with NaN
There are many possible approaches to NaN values in the data; here we just change them to '' or 0 depending on the data type. This is the simplest way, but it is usually not the best, so in practice you should try other ways of handling the NaN data. Some approaches are:
use the mean of the row
use the mean of the column
if/else substitution (e.g. if a lot of NaN do this, else do this other thing)
...
google others
End of explanation
"""
# We have too many variables let's just use some of them
df = df[['num-of-doors','num-of-cylinders', 'horsepower', 'make', 'price', 'length', 'height', 'width']]
# Since we're possibly dealing with parameters of different units and scales, we'll need to rescale our data.
# There are two main ways to do it:
# * Normalization, which scales all numeric variables into the range [0, 1].
# Example: (x - x.min()) / (x.max() - x.min())
# * Standardization, which transforms the data to have zero mean and unit variance.
# Example: (x - x.mean()) / x.std()
# Which is better? It depends on your data and your features.
# One disadvantage of normalization over standardization is that it loses
# some information in the data, which can make it harder for gradient descent to converge,
# so we'll use standardization.
# In practice: please analyse your data and see what gives you better results.
def std(x):
return (x - x.mean()) / x.std()
before = df.length[0]
df.length = std(df.length)
df.width = std(df.width)
df.height = std(df.height)
df.horsepower = std(df.horsepower)
after = df.length[0]
print('before:', before, 'after:', after)
"""
Explanation: Standardize features
End of explanation
"""
TRAINING_DATA_SIZE = 160
TEST_DATA_SIZE = 10
LABEL = 'price'
# Split the data into a training set, eval set and test set
training_data = df[:TRAINING_DATA_SIZE]
eval_data = df[TRAINING_DATA_SIZE: TRAINING_DATA_SIZE + TEST_DATA_SIZE]
test_data = df[TRAINING_DATA_SIZE + TEST_DATA_SIZE:]
# Separate input features from labels
training_label = training_data.pop(LABEL)
eval_label = eval_data.pop(LABEL)
test_label = test_data.pop(LABEL)
"""
Explanation: Separating training data from testing data
End of explanation
"""
BATCH_SIZE = 64
# Make input function for training:
# num_epochs=None -> will cycle through input data forever
# shuffle=True -> randomize order of input data
training_input_fn = tf.estimator.inputs.pandas_input_fn(x=training_data,
y=training_label,
batch_size=BATCH_SIZE,
shuffle=True,
num_epochs=None)
# Make input function for evaluation:
# shuffle=False -> do not randomize input data
eval_input_fn = tf.estimator.inputs.pandas_input_fn(x=eval_data,
y=eval_label,
batch_size=BATCH_SIZE,
shuffle=False)
# Make input function for testing:
# shuffle=False -> do not randomize input data
test_input_fn = tf.estimator.inputs.pandas_input_fn(x=test_data,
y=test_label,
batch_size=1,
shuffle=False)
"""
Explanation: Using Tensorflow
Defining input function
End of explanation
"""
# Describe how the model should interpret the inputs. The names of the feature columns have to match the names
# of the series in the dataframe.
# @1.2.0 tf.feature_column.numeric_column -> tf.contrib.layers.real_valued_column
horsepower = real_valued_column('horsepower')
width = real_valued_column('width')
height = real_valued_column('height')
length = real_valued_column('length')
# @1.2.0 tf.feature_column.categorical_column_with_hash_bucket -> tf.contrib.layers.sparse_column_with_hash_bucket
make = sparse_column_with_hash_bucket('make', 50)
# @1.2.0 tf.feature_column.categorical_column_with_vocabulary_list -> tf.contrib.layers.sparse_column_with_keys
fuel_type = sparse_column_with_keys('fuel-type', keys=['diesel', 'gas'])
num_of_doors = sparse_column_with_keys('num-of-doors', keys=['two', 'four'])
num_of_cylinders = sparse_column_with_keys('num-of-cylinders', ['eight', 'five', 'four', 'six', 'three', 'twelve', 'two'])
linear_features = [horsepower, make, num_of_doors, num_of_cylinders, length, width, height]
regressor = tf.contrib.learn.LinearRegressor(feature_columns=linear_features, model_dir='tensorboard/linear_regressor/')
"""
Explanation: Defining a Linear Estimator
End of explanation
"""
regressor.fit(input_fn=training_input_fn, steps=10000)
"""
Explanation: Training
End of explanation
"""
regressor.evaluate(input_fn=eval_input_fn)
"""
Explanation: Evaluating
End of explanation
"""
preds = list(regressor.predict(input_fn=test_input_fn))
for i in range(TEST_DATA_SIZE):
print('prediction:', preds[i], 'real value:', test_label.iloc[i])
"""
Explanation: Predicting
End of explanation
"""
# @1.2.0 tf.feature_column.indicator_column -> tf.contrib.layers.one_hot_column(tf.contrib.layers.sparse_column_with_keys(...))
dnn_features = [
#numerical features
length, width, height, horsepower,
# densify categorical features:
one_hot_column(make),
one_hot_column(num_of_doors)
]
dnnregressor = tf.contrib.learn.DNNRegressor(feature_columns=dnn_features,
hidden_units=[50, 30, 10], model_dir='tensorboard/DNN_regressor/')
"""
Explanation: Defining a DNN Estimator
End of explanation
"""
dnnregressor.fit(input_fn=training_input_fn, steps=10000)
"""
Explanation: Training
End of explanation
"""
dnnregressor.evaluate(input_fn=eval_input_fn)
"""
Explanation: Evaluating
End of explanation
"""
preds = list(dnnregressor.predict(input_fn=test_input_fn))
for i in range(TEST_DATA_SIZE):
print('prediction:', preds[i], 'real value:', test_label.iloc[i])
"""
Explanation: Predicting
End of explanation
"""
# @1.2.0 experiment_fn(run_config, params) - > experiment_fn(output_dir)
def experiment_fn(output_dir):
# This function makes an Experiment, containing an Estimator and inputs for training and evaluation.
# You can use params and config here to customize the Estimator depending on the cluster or to use
# hyperparameter tuning.
# Collect information for training
# @1.2.0 config=run_config -> ''
return tf.contrib.learn.Experiment(estimator=tf.contrib.learn.LinearRegressor(
feature_columns=linear_features, model_dir=output_dir),
train_input_fn=training_input_fn,
train_steps=10000,
eval_input_fn=eval_input_fn)
import shutil
# @1.2.0 tf.contrib.learn.learn_runner(exp, run_config=tf.contrib.learn.RunConfig(model_dir="/tmp/output_dir")
# -> tf.contrib.learn.python.learn.learn_runner.run(exp, output_dir='/tmp/output_dir')
shutil.rmtree("/tmp/output_dir", ignore_errors=True)
from tensorflow.contrib.learn.python.learn import learn_runner
learn_runner.run(experiment_fn, output_dir='/tmp/output_dir')
"""
Explanation: Creating an Experiment
End of explanation
"""
|
rishuatgithub/MLPy
|
torch/PYTORCH_NOTEBOOKS/00-Crash-Course-Topics/00-Crash-Course-NumPy/02-NumPy-Operations.ipynb
|
apache-2.0
|
import numpy as np
arr = np.arange(0,10)
arr
arr + arr
arr * arr
arr - arr
# This will raise a Warning on division by zero, but not an error!
# It just fills the spot with nan
arr/arr
# Also a warning (but not an error) relating to infinity
1/arr
arr**3
"""
Explanation: <a href='http://www.pieriandata.com'><img src='../Pierian_Data_Logo.png'/></a>
<center><em>Copyright Pierian Data</em></center>
<center><em>For more information, visit us at <a href='http://www.pieriandata.com'>www.pieriandata.com</a></em></center>
NumPy Operations
Arithmetic
You can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
End of explanation
"""
# Taking Square Roots
np.sqrt(arr)
# Calculating exponential (e^)
np.exp(arr)
# Trigonometric Functions like sine
np.sin(arr)
# Taking the Natural Logarithm
np.log(arr)
"""
Explanation: Universal Array Functions
NumPy comes with many universal array functions, or <em>ufuncs</em>, which are essentially just mathematical operations that can be applied across the array.<br>Let's show some common ones:
End of explanation
"""
arr = np.arange(0,10)
arr
arr.sum()
arr.mean()
arr.max()
"""
Explanation: Summary Statistics on Arrays
NumPy also offers common summary statistics like <em>sum</em>, <em>mean</em> and <em>max</em>. You would call these as methods on an array.
End of explanation
"""
arr_2d = np.array([[1,2,3,4],[5,6,7,8],[9,10,11,12]])
arr_2d
arr_2d.sum(axis=0)
"""
Explanation: <strong>Other summary statistics include:</strong>
<pre>
arr.min() returns 0 minimum
arr.var() returns 8.25 variance
arr.std() returns 2.8722813232690143 standard deviation
</pre>
Axis Logic
When working with 2-dimensional arrays (matrices) we have to consider rows and columns. This becomes very important when we get to the section on pandas. In array terms, axis 0 (zero) is the vertical axis (rows), and axis 1 is the horizontal axis (columns). These values (0,1) correspond to the order in which <tt>arr.shape</tt> values are returned.
Let's see how this affects our summary statistic calculations from above.
End of explanation
"""
arr_2d.shape
"""
Explanation: By passing in <tt>axis=0</tt>, we're returning an array of sums along the vertical axis, essentially <tt>[(1+5+9), (2+6+10), (3+7+11), (4+8+12)]</tt>
<img src='axis_logic.png' width=400/>
End of explanation
"""
# THINK ABOUT WHAT THIS WILL RETURN BEFORE RUNNING THE CELL!
arr_2d.sum(axis=1)
"""
Explanation: This tells us that <tt>arr_2d</tt> has 3 rows and 4 columns.
In <tt>arr_2d.sum(axis=0)</tt> above, the first element in each row was summed, then the second element, and so forth.
So what should <tt>arr_2d.sum(axis=1)</tt> return?
End of explanation
"""
|
awsteiner/o2sclpy
|
doc/static/examples/interp.ipynb
|
gpl-3.0
|
import o2sclpy
import matplotlib.pyplot as plot
import sys
import math
import numpy
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel
plots=True
if 'pytest' in sys.modules:
plots=False
"""
Explanation: O$_2$scl interpolation example for O$_2$sclpy
See the O$_2$sclpy documentation at
https://neutronstars.utk.edu/code/o2sclpy for more information.
End of explanation
"""
link=o2sclpy.linker()
link.link_o2scl()
"""
Explanation: Link the o2scl library:
End of explanation
"""
def f(x):
return math.sin(1.0/(0.3+x))
"""
Explanation: Create a sample function to interpolate:
End of explanation
"""
xa=[0 for i in range(0,20)]
ya=[0 for i in range(0,20)]
for i in range(0,20):
if i>0:
xa[i]=xa[i-1]+((i)/40)**2
ya[i]=math.sin(1.0/(0.3+xa[i]))
"""
Explanation: Create sample data from our function:
End of explanation
"""
m=numpy.mean(ya)
s=numpy.std(ya,ddof=1)
print('mean: %7.6e, std: %7.6e' % (m,s))
ya2=[(ya[i]-m)/s for i in range(0,20)]
"""
Explanation: Compute the mean and standard deviation so that we can normalize the data:
End of explanation
"""
xp=o2sclpy.std_vector(link)
yp=o2sclpy.std_vector(link)
xp.resize(20)
yp.resize(20)
for i in range(0,20):
xp[i]=xa[i]
yp[i]=ya2[i]
"""
Explanation: Copy the data into std_vector objects:
End of explanation
"""
iv_lin=o2sclpy.interp_vec(link)
iv_lin.set(20,xp,yp,o2sclpy.itp_linear)
iv_csp=o2sclpy.interp_vec(link)
iv_csp.set(20,xp,yp,o2sclpy.itp_cspline)
iv_aki=o2sclpy.interp_vec(link)
iv_aki.set(20,xp,yp,o2sclpy.itp_akima)
iv_mon=o2sclpy.interp_vec(link)
iv_mon.set(20,xp,yp,o2sclpy.itp_monotonic)
iv_stef=o2sclpy.interp_vec(link)
iv_stef.set(20,xp,yp,o2sclpy.itp_steffen)
iv_ko=o2sclpy.interp_krige_optim(link)
iv_ko.set(20,xp,yp,True)
plot.plot(xa,ya,lw=0,marker='+')
plot.plot(xa,[iv_lin.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_csp.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_aki.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_mon.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_stef.eval(xa[i])*s+m for i in range(0,20)])
plot.plot(xa,[iv_ko.eval(xa[i])*s+m for i in range(0,20)])
xmax=xa[19]
xb=[i/2000.0*xmax for i in range(0,2001)]
xa2=numpy.array(xa).reshape(-1,1)
"""
Explanation: Create the interpolators:
End of explanation
"""
kernel=RBF(1.0,(1.0e-2,1.0e2))
gpr=GaussianProcessRegressor(kernel=kernel).fit(xa2,ya)
for hyperparameter in gpr.kernel_.hyperparameters:
print('hp',hyperparameter)
params = gpr.kernel_.get_params()
for key in sorted(params):
print("kp: %s : %s" % (key, params[key]))
params2=gpr.get_params()
for key in sorted(params2):
print("gpp: %s : %s" % (key, params2[key]))
plot.rcParams['figure.figsize'] = [11, 9]
plot.semilogy(xb,[abs(f(xb[i])-(iv_lin.eval(xb[i])*s+m))
for i in range(0,2001)],color='black',lw=0.5,label='linear')
plot.semilogy(xb,[abs(f(xb[i])-(iv_csp.eval(xb[i])*s+m))
for i in range(0,2001)],color='red',lw=0.5,label='cubic spline')
plot.semilogy(xb,[abs(f(xb[i])-(iv_stef.eval(xb[i])*s+m))
for i in range(0,2001)],color='blue',lw=0.5,label='steffen')
plot.semilogy(xb,[abs(f(xb[i])-(iv_ko.eval(xb[i])*s+m))
for i in range(0,2001)],color='purple',lw=0.5,label='GP o2scl')
plot.semilogy(xb,[abs(f(xb[i])-(gpr.predict(numpy.array(xb[i]).reshape(-1,1))))
for i in range(0,2001)],color='green',lw=0.5,label='GP sklearn')
plot.legend()
"""
Explanation: Create a Gaussian process from sklearn to perform the interpolation. Like the O$_2$scl class interp_krige_optim, this is a simple one-parameter version which only varies the length scale.
End of explanation
"""
|
AllenDowney/ThinkBayes2
|
examples/game_of_ur_soln.ipynb
|
mit
|
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
from thinkbayes2 import Pmf, Cdf, Suite
import thinkplot
"""
Explanation: Think Bayes
This notebook presents code and exercises from Think Bayes, second edition.
Copyright 2018 Allen B. Downey
MIT License: https://opensource.org/licenses/MIT
End of explanation
"""
die = Pmf([0, 1])
"""
Explanation: The Game of Ur problem
In the Royal Game of Ur, players advance tokens along a track with 14 spaces. To determine how many spaces to advance, a player rolls 4 dice with 4 sides. Two corners on each die are marked; the other two are not. The total number of marked corners -- which is 0, 1, 2, 3, or 4 -- is the number of spaces to advance.
For example, if the total on your first roll is 2, you could advance a token to space 2. If you roll a 3 on the next roll, you could advance the same token to space 5.
Suppose you have a token on space 13. How many rolls did it take to get there?
Hint: you might want to start by computing the distribution of k given n, where k is the number of the space and n is the number of rolls.
Then think about the prior distribution of n.
Here's a Pmf that represents one of the 4-sided dice.
End of explanation
"""
roll = sum([die]*4)
"""
Explanation: And here's the outcome of a single roll.
End of explanation
"""
def roll_until(iters):
"""Generates observations of the game.
iters: number of observations
yields: number of rolls, total
"""
for i in range(iters):
total = 0
for n in range(1, 1000):
total += roll.Random()
if total > 14:
break
yield(n, total)
"""
Explanation: I'll start with a simulation, which helps in two ways: it makes modeling assumptions explicit and it provides an estimate of the answer.
The following function simulates playing the game over and over; after every roll, it yields the number of rolls and the total so far. When it gets past the 14th space, it starts over.
End of explanation
"""
pmf_sim = Pmf()
for n, k in roll_until(1000000):
if k == 13:
pmf_sim[n] += 1
"""
Explanation: Now I'll run the simulation many times and, every time the token is observed on space 13, record the number of rolls it took to get there.
End of explanation
"""
pmf_sim.Normalize()
pmf_sim.Print()
thinkplot.Hist(pmf_sim, label='Simulation')
thinkplot.decorate(xlabel='Number of rolls to get to space 13',
ylabel='PMF')
"""
Explanation: Here's the distribution of the number of rolls:
End of explanation
"""
pmf_13 = Pmf()
for n in range(4, 15):
pmf_n = sum([roll]*n)
pmf_13[n] = pmf_n[13]
pmf_13.Print()
pmf_13.Total()
"""
Explanation: Bayes
Now let's think about a Bayesian solution. It is straightforward to compute the likelihood function, which is the probability of being on space 13 after a hypothetical n rolls.
pmf_n is the distribution of spaces after n rolls.
pmf_13 is the probability of being on space 13 after n rolls.
End of explanation
"""
posterior = pmf_13.Copy()
posterior.Normalize()
posterior.Print()
"""
Explanation: The total probability of the data is very close to 1/2, but it's not obvious (to me) why.
Nevertheless, pmf_13 is the probability of the data for each hypothetical values of n, so it is the likelihood function.
The prior
Now we need to think about a prior distribution on the number of rolls. This is not easy to reason about, so let's start by assuming that it is uniform, and see where that gets us.
If the prior is uniform, the posterior equals the likelihood function, normalized.
End of explanation
"""
thinkplot.Hist(pmf_sim, label='Simulation')
thinkplot.Pmf(posterior, color='orange', label='Normalized likelihoods')
thinkplot.decorate(xlabel='Number of rolls (n)',
ylabel='PMF')
"""
Explanation: That sure looks similar to what we got by simulation. Let's compare them.
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
2.2/tutorials/irrad_method_horvat.ipynb
|
gpl-3.0
|
!pip install -I "phoebe>=2.2,<2.3"
"""
Explanation: Lambert Scattering (irrad_method='horvat')
Setup
Let's first make sure we have the latest version of PHOEBE 2.2 installed. (You can comment out this line if you don't use pip for your installation or don't want to update to the latest release).
End of explanation
"""
%matplotlib inline
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
logger = phoebe.logger('error')
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a logger and a new bundle. See Building a System for more details.
End of explanation
"""
print(b['irrad_method'])
"""
Explanation: Relevant Parameters
For parameters that affect reflection and heating (irrad_frac_*) see the tutorial on reflection and heating.
The 'irrad_method' compute option dictates whether irradiation is handled according to the new Horvat scheme which includes Lambert Scattering, Wilson's original reflection scheme, or ignored entirely.
End of explanation
"""
b['teff@primary'] = 11000
b['requiv@primary'] = 2.5
b['gravb_bol@primary'] = 1.0
b['teff@secondary'] = 5000
b['requiv@secondary'] = 0.85
b['q@binary'] = 0.8/3.0
b.flip_constraint('mass@primary', solve_for='sma@binary')
b['mass@primary'] = 3.0
print(b.filter(qualifier=['mass', 'requiv', 'teff'], context='component'))
b['irrad_frac_refl_bol@primary'] = 1.0
b['irrad_frac_refl_bol@secondary'] = 0.6
"""
Explanation: Influence on Light Curves (fluxes)
Let's (roughly) reproduce Figure 8 from Prsa et al. 2016 which shows the difference between Wilson and Horvat schemes for various inclinations.
<img src="prsa+2016_fig8.png" alt="Figure 8" width="600px"/>
First we'll roughly create an A0-K0 binary and set reasonable albedos.
End of explanation
"""
b['eclipse_method'] = 'only_horizon'
"""
Explanation: We'll also disable any eclipsing effects.
End of explanation
"""
phases = phoebe.linspace(0,1,101)
b.add_dataset('lc', times=b.to_time(phases))
for incl in [0,30,60,90]:
b.set_value('incl@binary', incl)
b.run_compute(irrad_method='wilson')
fluxes_wilson = b.get_value('fluxes', context='model')
b.run_compute(irrad_method='horvat')
fluxes_horvat = b.get_value('fluxes', context='model')
plt.plot(phases, (fluxes_wilson-fluxes_horvat)/fluxes_wilson, label='i={}'.format(incl))
plt.xlabel('phase')
plt.ylabel('[F(wilson) - F(horvat)] / F(wilson)')
plt.legend(loc='upper center')
plt.show()
"""
Explanation: Now we'll compute the light curves with wilson and horvat irradiation, and plot the relative differences between the two as a function of phase, for several different values of the inclination.
End of explanation
"""
|
kylepjohnson/notebooks
|
public_talks/2016_10_26_harvard/3.1b Classification, extract features, fewer epithets.ipynb
|
mit
|
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_index
import pandas
epithet_frequencies = []
for epithet, _ids in get_epithet_index().items():
epithet_frequencies.append((epithet, len(_ids)))
df = pandas.DataFrame(epithet_frequencies)
df.sort_values(1, ascending=False)
"""
Explanation: Problem of distribution of epithet docs
Because most epithets do not have many representative documents, I will create another feature table, this time with most of the docs cut out.
Looking at the following, there is a long tail of epithets with few surviving representatives.
End of explanation
"""
from scipy import stats
distribution = sorted(list(df[1]), reverse=True)
zscores = stats.zscore(distribution)
list(zip(distribution, zscores))
# Make list of epithets to drop
to_drop = df[0].where(df[1] < 26)
to_drop = [epi for epi in to_drop if not type(epi) is float]
to_drop = set(to_drop)
to_drop
"""
Explanation: Wikipedia on the long tail:
The specific cutoff of what part of a distribution is the "long tail" is often arbitrary, but in some cases may be specified objectively; see segmentation of rank-size distributions.
So I'll do this semi-objectively. I'm going to cut out any epithets with a negative standard score (that is, below the mean). Thus, I will drop epithets with fewer than 26 representative documents (z-score -0.064414235569960288).
See following printout for z-score distribution
End of explanation
"""
import datetime as dt
import os
import time
from cltk.corpus.greek.tlg.parse_tlg_indices import get_epithet_of_author
from cltk.corpus.greek.tlg.parse_tlg_indices import get_id_author
import pandas
from sklearn.externals import joblib
from sklearn.feature_extraction.text import CountVectorizer
def stream_lemmatized_files(corpus_dir):
# return all docs in a dir
user_dir = os.path.expanduser('~/cltk_data/user_data/' + corpus_dir)
files = os.listdir(user_dir)
for file in files:
filepath = os.path.join(user_dir, file)
with open(filepath) as fo:
#TODO rm words less the 3 chars long
yield file[3:-4], fo.read()
t0 = dt.datetime.utcnow()
map_id_author = get_id_author()
df = pandas.DataFrame(columns=['id', 'author', 'text', 'epithet'])
for _id, text in stream_lemmatized_files('tlg_lemmatized_no_accents_no_stops'):
author = map_id_author[_id]
epithet = get_epithet_of_author(_id)
if epithet in to_drop:
continue
df = df.append({'id': _id, 'author': author, 'text': text, 'epithet': epithet}, ignore_index=True)
print(df.shape)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
print('Number of texts:', len(df))
text_list = df['text'].tolist()
# make a list of short texts to drop
# For pres, get distributions of words per doc
short_text_drop_index = [index if len(text) > 500 else None for index, text in enumerate(text_list) ] # ~100 words
t0 = dt.datetime.utcnow()
# TODO: Consider using generator to CV http://stackoverflow.com/a/21600406
# time & size counts, w/ 50 texts:
# 0:01:15 & 202M @ ngram_range=(1, 3), min_df=2, max_features=500
# 0:00:26 & 80M @ ngram_range=(1, 2), analyzer='word', min_df=2, max_features=5000
# 0:00:24 & 81M @ ngram_range=(1, 2), analyzer='word', min_df=2, max_features=50000
# time & size counts, w/ 1823 texts:
# 0:02:18 & 46MB @ ngram_range=(1, 1), analyzer='word', min_df=2, max_features=500000
# 0:2:01 & 47 @ ngram_range=(1, 1), analyzer='word', min_df=2, max_features=1000000
# max features in the lemmatized data set: 551428
max_features = 100000
ngrams = 1
vectorizer = CountVectorizer(ngram_range=(1, ngrams), analyzer='word',
min_df=2, max_features=max_features)
term_document_matrix = vectorizer.fit_transform(text_list) # input is a list of strings, 1 per document
# save matrix
vector_fp = os.path.expanduser('~/cltk_data/user_data/vectorizer_test_features{0}_ngrams{1}.pickle'.format(max_features, ngrams))
joblib.dump(term_document_matrix, vector_fp)
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
"""
Explanation: Make vectorizer
Now when loading documents, drop those belonging to an epithet in the to_drop list
End of explanation
"""
# Put BoW vectors into a new df
term_document_matrix = joblib.load(vector_fp) # scipy.sparse.csr.csr_matrix
term_document_matrix.shape
term_document_matrix_array = term_document_matrix.toarray()
dataframe_bow = pandas.DataFrame(term_document_matrix_array, columns=vectorizer.get_feature_names())
ids_list = df['id'].tolist()
len(ids_list)
dataframe_bow.shape
dataframe_bow['id'] = ids_list
authors_list = df['author'].tolist()
dataframe_bow['author'] = authors_list
epithets_list = df['epithet'].tolist()
dataframe_bow['epithet'] = epithets_list
# For pres, give distribution of epithets, including None
dataframe_bow['epithet']
t0 = dt.datetime.utcnow()
# removes 334
#! remove rows whose epithet = None
# note on selecting none in pandas: http://stackoverflow.com/a/24489602
dataframe_bow = dataframe_bow[dataframe_bow.epithet.notnull()]
dataframe_bow.shape
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
t0 = dt.datetime.utcnow()
dataframe_bow.to_csv(os.path.expanduser('~/cltk_data/user_data/tlg_bow.csv'))
print('... finished in {}'.format(dt.datetime.utcnow() - t0))
dataframe_bow.shape
dataframe_bow.head(10)
# write dataframe_bow to disk, for fast reuse while classifying
# 2.3G
fp_df = os.path.expanduser('~/cltk_data/user_data/tlg_bow_df.pickle')
joblib.dump(dataframe_bow, fp_df)
"""
Explanation: Transform term matrix into feature table
End of explanation
"""
|
adityaka/misc_scripts
|
python-scripts/data_analytics_learn/link_pandas/Ex_Files_Pandas_Data/Exercise Files/02_04/Final/.ipynb_checkpoints/Missing Data-checkpoint.ipynb
|
bsd-3-clause
|
import pandas as pd
browser_index = ['Firefox', 'Chrome', 'Safari', 'IE10', 'Konqueror']
browser_df = pd.DataFrame({
'http_status': [200,200,404,404,301],
'response_time': [0.04, 0.02, 0.07, 0.08, 1.0]},
index=browser_index)
browser_df
"""
Explanation: Missing Data
pandas uses np.nan to represent missing data. By default, it is not included in computations.
documentation: http://pandas.pydata.org/pandas-docs/stable/missing_data.html#missing-data
reindex()
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.reindex.html
End of explanation
"""
new_index= ['Safari', 'Iceweasel', 'Comodo Dragon', 'IE10', 'Chrome']
browser_df_2 = browser_df.reindex(new_index)
browser_df_2
"""
Explanation: reindex() creates a copy (not a view)
End of explanation
"""
browser_df_3 = browser_df_2.dropna(how='any')
browser_df_3
"""
Explanation: drop rows that have missing data
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.dropna.html
End of explanation
"""
browser_df_2.fillna(value=-0.05555)
"""
Explanation: fill-in missing data
documentation: http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.fillna.html
End of explanation
"""
pd.isnull(browser_df_2)
"""
Explanation: get boolean mask where values are nan
End of explanation
"""
browser_df_2 * 17
"""
Explanation: NaN propagates during arithmetic operations
End of explanation
"""
|
sarathid/Learning
|
Deep_learning_ND/tv-script-generation/dlnd_tv_script_generation.ipynb
|
gpl-3.0
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
unique_word = set(text)
vocab_to_int = {word:i for i,word in enumerate(unique_word)}
int_to_vocab = {vocab_to_int[word]:word for word in vocab_to_int}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
token_dict = {
'.': '||Period||',
',': '||Comma||',
'"': '||Quotation||',
'!': '||Exclamation||',
'?': '||Question||',
'(': '||Left_par||',
')': '||Right_par||',
'--': '||Dash||',
'\n': '||Return||',
';': '||Semicolon||'
}
return token_dict
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation like periods and exclamation marks makes it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token "dash", try using something like "||dash||".
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
# TODO: Implement Function
inputs = tf.placeholder(tf.int32, [None, None], name='input')
targets = tf.placeholder(tf.int32, [None, None], name='labels')
learning_rate = tf.placeholder(tf.float32, name='learning_rate')
return (inputs, targets, learning_rate)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
# TODO: Implement Function
lstm = tf.contrib.rnn.BasicLSTMCell
# Stack up multiple LSTM layers, for deep learning
cell = tf.contrib.rnn.MultiRNNCell([lstm(rnn_size) for _ in range(2)])
# Getting an initial state of all zeros
initial_state = tf.identity(cell.zero_state(batch_size, tf.float32), name="initial_state")
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim), -1, 1))
embed = tf.nn.embedding_lookup(embedding, input_data)
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype = tf.float32)
final_state = tf.identity(final_state, name="final_state")
return (outputs, final_state)
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using the tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final_state state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
embed = get_embed(input_data, vocab_size, embed_dim)
outputs, final_state = build_rnn(cell, embed)
Logits = tf.contrib.layers.fully_connected(outputs, vocab_size, activation_fn = None)
return Logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
n_batches = int(len(int_text) / (batch_size * seq_length))
xdata = np.array(int_text[: n_batches * batch_size * seq_length])
ydata = np.zeros_like(xdata)
ydata[: n_batches * batch_size * seq_length-1] = np.array(int_text[1: (n_batches * batch_size * seq_length)])
ydata[-1] = int_text[0]
x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, 1)
y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, 1)
return np.asarray(list(zip(x_batches, y_batches)))
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.
End of explanation
"""
# Number of Epochs
num_epochs = 50
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Embedding Dimension Size
embed_dim = 300
# Sequence Length
seq_length = 32
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 50
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding.
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
# TODO: Implement Function
input0 = loaded_graph.get_tensor_by_name('input:0')
init_state = loaded_graph.get_tensor_by_name('initial_state:0')
final_state = loaded_graph.get_tensor_by_name('final_state:0')
probs = loaded_graph.get_tensor_by_name('probs:0')
return input0, init_state, final_state, probs
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
# next_word = np.random.choice(list(int_to_vocab.values()), p=probabilities)
next_word_index = np.random.choice(len(int_to_vocab), p=probabilities)
return int_to_vocab[next_word_index]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
sastels/Onboarding
|
4 - Sorting.ipynb
|
mit
|
a = [5, 1, 4, 3]
print sorted(a)
print a
"""
Explanation: Sorting
The easiest way to sort is with the sorted(list) function, which takes a list and returns a new list with those elements in sorted order. The original list is not changed.
End of explanation
"""
strs = ['aa', 'BB', 'zz', 'CC']
print sorted(strs)
print sorted(strs, reverse=True)
"""
Explanation: It's most common to pass a list into the sorted() function, but in fact it can take as input any sort of iterable collection. The older list.sort() method is an alternative detailed below. The sorted() function seems easier to use compared to sort(), so I recommend using sorted().
The sorted() function can be customized through optional arguments. The sorted() optional argument reverse=True, e.g. sorted(list, reverse=True), makes it sort backwards.
End of explanation
"""
strs = ['ccc', 'aaaa', 'd', 'bb']
print sorted(strs, key=len)
"""
Explanation: Custom Sorting With key
For more complex custom sorting, sorted() takes an optional "key=" specifying a "key" function that transforms each element before comparison. The key function takes in 1 value and returns 1 value, and the returned "proxy" value is used for the comparisons within the sort.
For example with a list of strings, specifying key=len (the built-in len() function) sorts the strings by length, from shortest to longest. The sort calls len() for each string to get the list of proxy length values, and then sorts with those proxy values.
End of explanation
"""
strs = ['aa', 'BB', 'zz', 'CC']
print sorted(strs, key=str.lower)
"""
Explanation: As another example, specifying "str.lower" as the key function is a way to force the sorting to treat uppercase and lowercase the same:
End of explanation
"""
strs = ['xc', 'zb', 'yd' ,'wa']
"""
Explanation: You can also pass in your own MyFn as the key function. Say we have a list of strings we want to sort by the last letter of the string.
End of explanation
"""
def MyFn(s):
return s[-1]
"""
Explanation: A little function that takes a string, and returns its last letter.
This will be the key function (takes in 1 value, returns 1 value).
End of explanation
"""
print sorted(strs, key=MyFn)
"""
Explanation: Now pass key=MyFn to sorted() to sort by the last letter.
End of explanation
"""
alist = [1,5,9,2,5]
alist.sort()
alist
"""
Explanation: To use key= custom sorting, remember that you provide a function that takes one value and returns the proxy value to guide the sorting. There is also an optional argument "cmp=cmpFn" to sorted() that specifies a traditional two-argument comparison function that takes two values from the list and returns negative/0/positive to indicate their ordering. The built in comparison function for strings, ints, ... is cmp(a, b), so often you want to call cmp() in your custom comparator. The newer one argument key= sorting is generally preferable.
sort() method
As an alternative to sorted(), the sort() method on a list sorts that list into ascending order, e.g. list.sort(). The sort() method changes the underlying list and returns None, so use it like this:
End of explanation
"""
blist = alist.sort()
blist
"""
Explanation: Incorrect (returns None):
End of explanation
"""
tuple = (1, 2, 'hi')
print len(tuple)
print tuple[2]
"""
Explanation: The above is a very common misunderstanding with sort() -- it does not return the sorted list. The sort() method must be called on a list; it does not work on any enumerable collection (but the sorted() function above works on anything). The sort() method predates the sorted() function, so you will likely see it in older code. The sort() method does not need to create a new list, so it can be a little faster in the case that the elements to sort are already in a list.
Tuples
A tuple is a fixed size grouping of elements, such as an (x, y) co-ordinate. Tuples are like lists, except they are immutable and do not change size (tuples are not strictly immutable since one of the contained elements could be mutable). Tuples play a sort of "struct" role in Python -- a convenient way to pass around a little logical, fixed size bundle of values. A function that needs to return multiple values can just return a tuple of the values. For example, if I wanted to have a list of 3-d coordinates, the natural Python representation would be a list of tuples, where each tuple is size 3 holding one (x, y, z) group.
To create a tuple, just list the values within parentheses separated by commas. The "empty" tuple is just an empty pair of parentheses. Accessing the elements in a tuple is just like a list -- len(), [ ], for, in, etc. all work the same.
End of explanation
"""
tuple[2] = 'bye'
"""
Explanation: Tuples are immutable, i.e. they cannot be changed.
End of explanation
"""
tuple = (1, 2, 'bye')
tuple
"""
Explanation: If you want to change a tuple variable, you must reassign it to a new tuple:
End of explanation
"""
tuple = ('hi',)
tuple
"""
Explanation: To create a size-1 tuple, the lone element must be followed by a comma.
End of explanation
"""
(err_string, err_code) = ('uh oh', 666)
print err_code, ':', err_string
"""
Explanation: It's a funny case in the syntax, but the comma is necessary to distinguish the tuple from the ordinary case of putting an expression in parentheses. In some cases you can omit the parenthesis and Python will see from the commas that you intend a tuple.
Assigning a tuple to an identically sized tuple of variable names assigns all the corresponding values. If the tuples are not the same size, it throws an error. This feature works for lists too.
End of explanation
"""
nums = [1, 2, 3, 4]
squares = [ n * n for n in nums ]
squares
"""
Explanation: List Comprehensions
A list comprehension is a compact way to write an expression that expands to a whole list. Suppose we have a list nums [1, 2, 3, 4]; here is the list comprehension to compute a list of their squares [1, 4, 9, 16]:
End of explanation
"""
strs = ['hello', 'and', 'goodbye']
shouting = [ s.upper() + '!!!' for s in strs ]
shouting
"""
Explanation: The syntax is [ expr for var in list ] -- the for var in list looks like a regular for-loop, but without the colon (:). The expr to its left is evaluated once for each element to give the values for the new list. Here is an example with strings, where each string is changed to upper case with '!!!' appended:
End of explanation
"""
## Select values <= 2
nums = [2, 8, 1, 6]
small = [ n for n in nums if n <= 2 ]
small
## Select fruits containing 'a', change to upper case
fruits = ['apple', 'cherry', 'bannana', 'lemon']
afruits = [ s.upper() for s in fruits if 'a' in s ]
afruits
"""
Explanation: You can add an if test to the right of the for-loop to narrow the result. The if test is evaluated for each element, including only the elements where the test is true.
End of explanation
"""
|
rjdkmr/do_x3dna
|
docs/notebooks/helical_steps_tutorial.ipynb
|
gpl-3.0
|
import numpy as np
import matplotlib.pyplot as plt
import dnaMD
%matplotlib inline
"""
Explanation: Analysis of local helical parameters
This tutorial discusses the analyses that can be performed using the dnaMD Python module included in the do_x3dna package. The tutorial is prepared using Jupyter Notebook, and this notebook tutorial file can be downloaded from this link.
Download the input files that are used in the tutorial from this link.
The following two input files are required in this tutorial:
L-BPH_cdna.dat (do_x3dna output from the trajectory, which contains the DNA bound with the protein)
L-BPH_odna.dat (do_x3dna output from the trajectory, which only contains the free DNA)
These two files should be present inside the tutorial_data directory of the current working directory.
The Python APIs should only be used when do_x3dna is executed with the -ref option.
Detailed documentation is provided here.
Importing Python Modules
numpy: Required for the calculations involving large arrays
matplotlib: Required to plot the results
dnaMD: Python module to analyze DNA/RNA structures from the do_x3dna output files.
End of explanation
"""
## Initialization
pdna = dnaMD.DNA(60) #Initialization for 60 base-pairs DNA bound with the protein
fdna = dnaMD.DNA(60) #Initialization for 60 base-pairs free DNA
## If HDF5 file is used to store/save data use these:
# pdna = dnaMD.DNA(60, filename='cdna.h5') #Initialization for 60 base-pairs DNA bound with the protein
# fdna = dnaMD.DNA(60, filename='odna.h5') #Initialization for 60 base-pairs free DNA
## Loading data from input files in respective DNA object
# Number of helical steps = number of base-pairs - 1
# Number of helical steps in a 60 base-pair DNA = 59
# "bp_step=[1, 59]" will load local helical parameters of the 1st to 59th base-steps
# "parameters='all'" will load all six parameters (X-disp, Y-disp, h-Rise, Inclination, Tip and h-Twist)
pdna.set_base_step_parameters('tutorial_data/L-BPH_cdna.dat', bp_step=[1, 59], parameters='all', step_range=True, helical=True)
fdna.set_base_step_parameters('tutorial_data/L-BPH_odna.dat', bp_step=[1, 59], parameters='all', step_range=True, helical=True)
"""
Explanation: Initializing DNA object and storing data to it
DNA object is initialized by using the total number of base-pairs
One helical step is formed by two adjacent base-pairs. Therefore, the total number of helical steps is one less than the total number of base-pairs.
Six helical parameters (X-displacement, Y-displacement, helical-rise, Inclination, Tip and Helical-twist) can be read and stored in DNA object from the input file using function set_base_step_parameters(..., helical=True).
To speed up processing and analysis, data can be stored in an HDF5 file by including the HDF5 file name as an argument during initialization. The same file can be used to store and retrieve all other parameters.
End of explanation
"""
# Extracting "h-Twist" of 22nd bp
twist_22bp = pdna.data['bps']['22']['h-twist']
#h-Twist vs Time for 22nd bp
plt.title('22nd bp')
plt.plot(pdna.time, twist_22bp)
plt.xlabel('Time (ps)')
plt.ylabel('Twist ( $^o$)')
plt.show()
"""
Explanation: Local base-step parameter of a base-pair directly from dictionary
The DNA.data attribute is a Python dictionary that contains all the stored data. For a given base-step, a parameter can be extracted directly from it as a function of time.
End of explanation
"""
# Extracting "h-Twist" of 20 to 30 base-steps
twist, bp_idx = pdna.get_parameters('h-twist',[20,30], bp_range=True)
# h-Twist vs Time for 22nd base-step
plt.title('22nd bp')
plt.plot(pdna.time, twist[2]) # index is 2 for 22nd base-step: (20 + 2)
plt.xlabel('Time (ps)')
plt.ylabel('Helical Twist ( $^o$)')
plt.show()
# Average h-Twist vs Time for segment 20-30 base-step
avg_twist = np.mean(twist, axis=0) # Calculation of mean using mean function of numpy
plt.title('20-30 bp segment')
plt.plot(pdna.time, avg_twist)
plt.xlabel('Time (ps)')
plt.ylabel('Helical Twist ( $^o$)')
plt.show()
# Average h-Twist vs Time for segment 24-28 base-step
# index of 24th base-step is 4 (20 + 4); index of 28th base-step is 8 (20 + 8), so the slice 4:9 covers steps 24-28
avg_twist = np.mean(twist[4:9], axis=0)
plt.title('24-28 bp segment')
plt.plot(pdna.time, avg_twist)
plt.xlabel('Time (ps)')
plt.ylabel('Helical Twist ( $^o$)')
plt.show()
"""
Explanation: Local helical parameters as a function of time (manually)
A specific local helical parameter for a given base-pair range can be extracted from the DNA object using the function dnaMD.DNA.get_parameters(...).
The extracted parameters of the given helical step can be plotted as a function of time
The extracted parameters (average) for the DNA segment can be plotted as a function of time
The following examples show h-Twist vs time plots. They also show how to extract the parameter values from the DNA object. Other properties can be extracted and plotted using similar steps.
End of explanation
"""
# X-disp vs Time for 22nd bp
plt.title('X-displacement for 22nd bp')
time, value = pdna.time_vs_parameter('x-disp', [22])
plt.plot(time, value)
plt.xlabel('Time (ps)')
plt.ylabel('X-displacement ($\AA$)')
plt.show()
# Helical Rise vs Time for 25-40 bp segment
plt.title('Helical Rise for 25-40 bp segment')
# Bound DNA
# Helical Rise is the length of helix formed between two base-pairs, so for a given segment it is the sum over the base-steps
time, value = pdna.time_vs_parameter('h-rise', [25, 40], merge=True, merge_method='sum')
plt.plot(time, value, label='bound DNA', c='k') # black color => bound DNA
# Free DNA
time, value = fdna.time_vs_parameter('h-rise', [25, 40], merge=True, merge_method='sum')
plt.plot(time, value, label='free DNA', c='r') # red color => free DNA
plt.xlabel('Time (ps)')
plt.ylabel('Helical Rise ( $\AA$)')
plt.legend()
plt.show()
"""
Explanation: Local helical parameters as a function of time (using provided functions)
The above examples show how to extract the values from the DNA object manually. However, the dnaMD.DNA.time_vs_parameter(...) function can be used to get parameter values as a function of time for a given base-pair/step or segment.
End of explanation
"""
#### Helical Rise distribution for 20-45 bp segment
plt.title('Helical Rise distribution for 20-45 bp segment')
### Bound DNA ###
## calculation of parameter distribution for the segment
values, density = pdna.parameter_distribution('h-rise', [20, 45], bins=20, merge=True, merge_method='sum')
## plot distribution
plt.plot(values, density, label='bound DNA', c='k') # black color => bound DNA
### Free DNA ###
## calculation of parameter distribution for the segment
values, density = fdna.parameter_distribution('h-rise', [20, 45], bins=20, merge=True, merge_method='sum')
## plot distribution
plt.plot(values, density, label='free DNA', c='r') # red color => free DNA
plt.xlabel('Helical Rise ( $\AA$)')
plt.ylabel('Density')
plt.legend()
plt.show()
#### Helical Twist distribution for 25-40 bp segment
plt.title('Helical Twist distribution for 25-40 bp segment')
### Bound DNA ###
## calculation of parameter distribution for the segment
# Helical Twist is a measure of twisting in the helix formed between two base-pairs, so the helical twist
# of a given segment is considered here as the sum over its base-steps
values, density = pdna.parameter_distribution('h-twist', [25, 40], bins=20, merge=True, merge_method='sum')
## plot distribution
plt.plot(values, density, label='bound DNA', c='k') # black color => bound DNA
### Free DNA ###
## calculation of parameter distribution for the segment
values, density = fdna.parameter_distribution('h-twist', [25, 40], bins=20, merge=True, merge_method='sum')
## plot distribution
plt.plot(values, density, label='free DNA', c='r') # red color => free DNA
plt.xlabel('Helical Twist ( $^o$)')
plt.ylabel('Density')
plt.legend()
plt.show()
"""
Explanation: Distribution of local helical parameters during MD simulations
As shown in the above plot of time vs helical rise, a direct comparison between bound and free DNA is very difficult. Therefore, to compare the parameters of different DNAs, of the same DNA in different environments, or of different segments of the same DNA, the distribution of the parameters over the MD trajectory is sometimes useful.
The distribution could be calculated using the function dnaMD.DNA.parameter_distribution(...) as shown in the following examples.
The normalized distribution is calculated using numpy.histogram(...).
End of explanation
"""
######## Average Helical Rise as a function of base-steps ########
plt.title('Average Helical Rise for each base-pairs')
### Calculating Average Helical Rise values for 5 to 56 base-steps DNA bound with protein
bp, rise, error = pdna.get_mean_error([5, 56], 'h-rise', err_type='block', bp_range=True)
# plot these values
plt.errorbar(bp, rise, yerr=error, ecolor='k', elinewidth=1, color='k', lw=0, marker='o', mfc='k', mew=1, ms=4, label='bound DNA' )
### Calculating Average Helical Rise values for 5 to 56 base-steps DNA
bp, rise, error = fdna.get_mean_error([5, 56], 'h-rise', err_type='block', bp_range=True)
# plot these values
plt.errorbar(bp, rise, yerr=error, ecolor='r', elinewidth=1, color='r', lw=0, marker='x', mfc='r', mew=1, ms=4, label='free DNA' )
plt.ylabel('Helical Rise ($\AA$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.ylim(1.5, 4.0)
plt.legend()
plt.show()
######## Average Helical Rise as a function of DNA segments ########
plt.title('Average Helical Rise for DNA segments')
### Calculating Average Helical Rise for 5 to 56 base-steps DNA bound with protein
### DNA segments are assumed to be made up of 4 base-steps (merge_bp=4)
bp, rise, error = pdna.get_mean_error([5,56], 'h-rise', err_type='block', bp_range=True, merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(bp, rise,yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4, label='bound DNA' )
### Calculating Average Helical Rise values for 5 to 56 base-steps DNA
### DNA segments are assumed to be made up of 4 base-steps (merge_bp=4)
bp, rise, error = fdna.get_mean_error([5,56], 'h-rise', err_type='block', bp_range=True, merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(bp, rise, yerr=error, ecolor='r', elinewidth=1, color='r', lw=1, marker='x', mfc='r', mew=1, ms=4, label='free DNA' )
plt.ylabel('Helical Rise ( $\AA$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.ylim(9.5, 15.0)
plt.legend()
plt.show()
"""
Explanation: Local helical parameters as a function of base-steps
What are the average values of a given parameter for each helical step or for a DNA segment?
To address this question, the average value of a given parameter with its error can be calculated for either each base-step or a DNA segment using the function dnaMD.DNA.get_mean_error(...).
These average values can also be used to compare two DNAs.
The standard error can be calculated using the block-averaging method derived in this publication. To use this method, the g_analyze tool of the GROMACS package should be present in the $PATH environment variable.
End of explanation
"""
#### Deviation in X-disp, Y-disp, h-Rise, Inclination, Tip and h-Twist
#### Deviation = Bound DNA(parameter) - Free DNA(parameter)
### Deviation in X-displacement
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'x-disp', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in X-displacement ($\AA$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
### Deviation in Y-displacement
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'y-disp', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in Y-displacement ($\AA$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
### Deviation in Helical Rise
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'h-rise', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in Helical Rise ($\AA$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
### Deviation in Inclination
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'inclination', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in Inclination ( $^o$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
### Deviation in Tip
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'tip', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in Tip ( $^o$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
### Deviation in Helical Twist
fdna_bp, pdna_bp, deviation, error = dnaMD.localDeformationVsBPS(fdna, [5,56], pdna, [5,56],
'h-twist', err_type='block', bp_range=True,
merge_bp=4, merge_method='sum')
# plot these values
plt.errorbar(pdna_bp, deviation, yerr=error, ecolor='k', elinewidth=1, color='k', lw=1, marker='o', mfc='k', mew=1, ms=4)
# plot line at zero
plt.plot([0,61], [0.0, 0.0], '--k')
plt.ylabel('Deviation in Helical Twist ( $^o$)')
plt.xlabel('base-step number')
plt.xlim(0,61)
plt.show()
"""
Explanation: Deviation in parameters of bound DNA with respect to free DNA
As discussed in the above section, average parameters with standard error can be calculated for both bound and free DNA. Additionally, the deviation of the bound DNA with respect to the free DNA can be calculated using the function dnaMD.localDeformationVsBPS(...), as shown in the following example.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_lcmv_beamformer.ipynb
|
bsd-3-clause
|
# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>
#
# License: BSD (3-clause)
# sphinx_gallery_thumbnail_number = 3
import matplotlib.pyplot as plt
import numpy as np
import mne
from mne.datasets import sample
from mne.beamformer import make_lcmv, apply_lcmv
print(__doc__)
data_path = sample.data_path()
raw_fname = data_path + '/MEG/sample/sample_audvis_raw.fif'
event_fname = data_path + '/MEG/sample/sample_audvis_raw-eve.fif'
fname_fwd = data_path + '/MEG/sample/sample_audvis-meg-eeg-oct-6-fwd.fif'
label_name = 'Aud-lh'
fname_label = data_path + '/MEG/sample/labels/%s.label' % label_name
subjects_dir = data_path + '/subjects'
"""
Explanation: Compute LCMV beamformer on evoked data
Compute LCMV beamformer solutions on an evoked dataset for three different
choices of source orientation and store the solutions in stc files for
visualisation.
End of explanation
"""
event_id, tmin, tmax = 1, -0.2, 0.5
# Setup for reading the raw data
raw = mne.io.read_raw_fif(raw_fname, preload=True)
raw.info['bads'] = ['MEG 2443', 'EEG 053']  # 2 bad channels
events = mne.read_events(event_fname)
# Set up pick list: EEG + MEG - bad channels (modify to your needs)
picks = mne.pick_types(raw.info, meg=True, eeg=False, stim=True, eog=True,
exclude='bads')
# Pick the channels of interest
raw.pick_channels([raw.ch_names[pick] for pick in picks])
# Re-normalize our empty-room projectors, so they are fine after subselection
raw.info.normalize_proj()
# Read epochs
epochs = mne.Epochs(raw, events, event_id, tmin, tmax,
baseline=(None, 0), preload=True, proj=True,
reject=dict(grad=4000e-13, mag=4e-12, eog=150e-6))
evoked = epochs.average()
forward = mne.read_forward_solution(fname_fwd)
forward = mne.convert_forward_solution(forward, surf_ori=True)
# Compute regularized noise and data covariances
noise_cov = mne.compute_covariance(epochs, tmin=tmin, tmax=0, method='shrunk')
data_cov = mne.compute_covariance(epochs, tmin=0.04, tmax=0.15,
method='shrunk')
evoked.plot(time_unit='s')
"""
Explanation: Get epochs
End of explanation
"""
pick_oris = [None, 'normal', 'max-power']
names = ['free', 'normal', 'max-power']
descriptions = ['Free orientation, voxel: %i', 'Normal orientation, voxel: %i',
'Max-power orientation, voxel: %i']
colors = ['b', 'k', 'r']
fig, ax = plt.subplots(1)
max_voxs = list()
for pick_ori, name, desc, color in zip(pick_oris, names, descriptions, colors):
# compute unit-noise-gain beamformer with whitening of the leadfield and
# data (enabled by passing a noise covariance matrix)
filters = make_lcmv(evoked.info, forward, data_cov, reg=0.05,
noise_cov=noise_cov, pick_ori=pick_ori,
weight_norm='unit-noise-gain')
# apply this spatial filter to source-reconstruct the evoked data
stc = apply_lcmv(evoked, filters, max_ori_out='signed')
# View activation time-series in maximum voxel at 100 ms:
time_idx = stc.time_as_index(0.1)
max_idx = np.argmax(stc.data[:, time_idx])
# we know these are all left hemi, so we can just use vertices[0]
max_voxs.append(stc.vertices[0][max_idx])
ax.plot(stc.times, stc.data[max_idx, :], color, label=desc % max_idx)
ax.set(xlabel='Time (ms)', ylabel='LCMV value', ylim=(-0.8, 2.2),
title='LCMV in maximum voxel')
ax.legend()
mne.viz.utils.plt_show()
"""
Explanation: Run beamformers and look at maximum outputs
End of explanation
"""
# take absolute value for plotting
np.abs(stc.data, out=stc.data)
# Plot last stc in the brain in 3D with PySurfer if available
brain = stc.plot(hemi='lh', subjects_dir=subjects_dir,
initial_time=0.1, time_unit='s')
brain.show_view('lateral')
for color, vertex in zip(colors, max_voxs):
brain.add_foci([vertex], coords_as_verts=True, scale_factor=0.5,
hemi='lh', color=color)
"""
Explanation: We can also look at the spatial distribution
End of explanation
"""
|
awagner-mainz/notebooks
|
gallery/textreuse_mainz_2020/12000-segment-paragraphs.ipynb
|
mit
|
import os
import lxml
from lxml import etree
resolved_dir = "./data/processing/10000_resolved"
# we create a dictionary with our editions:
resolved = { os.path.basename(file).split(os.extsep)[0] :
(etree.parse(resolved_dir + "/" + file))
for file in sorted(os.listdir(resolved_dir))
}
# and a list of available editions for quick lookup:
editions = list(resolved.keys())
# For now, hard-code languages
language = {}
language['azp1549'] = "pt"
language['azp1552'] = "pt"
language['azp1556'] = "es"
language['azp1573'] = "la"
print ("Parsed {} resolved files: {}".format(len(resolved), editions))
"""
Explanation: Segmentation based on paragraphs (more than just pilcrow signs)
Table of Contents
<p><div class="lev1 toc-item"><a href="#Segmentation-based-on-paragraphs-(more-than-just-pilcrow-signs)" data-toc-modified-id="Segmentation-based-on-paragraphs-(more-than-just-pilcrow-signs)-1"><span class="toc-item-num">1 </span>Segmentation based on paragraphs (more than just pilcrow signs)</a></div><div class="lev2 toc-item"><a href="#Parse-input-files,-count-some-key-values-etc." data-toc-modified-id="Parse-input-files,-count-some-key-values-etc.-11"><span class="toc-item-num">1.1 </span>Parse input files, count some key values etc.</a></div><div class="lev2 toc-item"><a href="#Segment-editions" data-toc-modified-id="Segment-editions-12"><span class="toc-item-num">1.2 </span>Segment editions</a></div><div class="lev2 toc-item"><a href="#Discussion" data-toc-modified-id="Discussion-13"><span class="toc-item-num">1.3 </span>Discussion</a></div>
Taking a closer look at the ways in which the texts are structured, we found (a) "paragraphs" to be a promising candidate for segmentation, provided that we do not understand "paragraph" in the typographical sense but as a section of the text that is introduced with a pilcrow sign ("¶").
As second and third criteria for segmentation, we also use (b) daggers and (c) two subsequent capital letters when no pilcrow sign is around.
So this is what we try here...
After a revision of the results, we also
- added headings as being segments of their own
- added lists (of type "summaries") as being segments of their own
## Parse input files, count some key values etc.
We parse what resolved files we find:
End of explanation
"""
import ipywidgets as widgets
from ipywidgets import interact # for interactively en-/disabling overwrite
def ow_seg(overwrite_segmented):
global overwrite_seg
overwrite_seg = overwrite_segmented
if os.listdir('./data/processing/12000_segmented_paragraphs/'):
overwrite_seg = False
interact(ow_seg, overwrite_segmented=True)
else:
overwrite_seg = True
print('Overwrite segmented files?: {}'.format(overwrite_seg))
"""
Explanation: Next, we add a switch that lets us specify whether we want to overwrite result files that might already exist:
End of explanation
"""
import re
nsmap = {"tei": "http://www.tei-c.org/ns/1.0"}
string_doc = {}
string_reverse_doc = {}
find_divs = etree.XPath("//tei:body/tei:div[@type = 'chapter'][not(@n = '0')]", namespaces=nsmap)
find_ps = etree.XPath("//tei:body/tei:div[not(@n = '0')]//tei:p", namespaces=nsmap)
find_ms = etree.XPath("//tei:body//tei:milestone", namespaces=nsmap)
find_body = etree.XPath("//tei:body", namespaces=nsmap)
# since python negative look*behind* assertions have to be fixed-length,
# we reverse the document and do negative look*ahead* assertions...
find_lone_daggers = re.compile(r'reggad#(?!.{0,100}¶)') # daggers not preceded by pilcrow within 100 characters
find_lone_ms = re.compile(r'derohcnanu#(?!.{0,100}¶)') # unanchored milestones not preceded by pilcrow within 100 characters
find_lone_caps = re.compile(r'[A-Z]{2}\b(?!.{0,100}¶)') # two capital letters not preceded ...
ct_divs = {}
ct_ps = {}
ct_pilcrows = {}
ct_ms = {}
ct_total_daggers = {}
ct_lone_daggers = {}
ct_lone_ms = {}
ct_lone_caps = {}
for ed in resolved:
ct_divs[ed] = len(find_divs(resolved[ed]))
ct_ps[ed] = len(find_ps(resolved[ed]))
ct_ms[ed] = len(find_ms(resolved[ed]))
string_doc[ed] = etree.tostring(find_body(resolved[ed])[0], encoding='utf-8', method='xml').decode('utf-8')
string_reverse_doc[ed] = string_doc[ed][::-1]
ct_pilcrows[ed] = string_doc[ed].count('¶')
ct_total_daggers[ed] = string_doc[ed].count('#dagger')
ct_lone_daggers[ed] = len(find_lone_daggers.findall(string_reverse_doc[ed]))
ct_lone_ms[ed] = len(find_lone_ms.findall(string_reverse_doc[ed]))
ct_lone_caps[ed] = len(find_lone_caps.findall(string_reverse_doc[ed]))
print ("number of top-level divs[not(@n = '0')]: {}".format(ct_divs))
print ("number of typographical paragraphs (<tei:p>): {}".format(ct_ps))
print ("number of pilcrow signs: {}".format(ct_pilcrows))
print ("number of milestones: {}".format(ct_ms))
print ("number of total daggers: {}".format(ct_total_daggers))
print ("number of standalone daggers: {}".format(ct_lone_daggers))
print ("number of standalone unanchored milestones: {}".format(ct_lone_ms))
print ("number of standalone capital bigrams: {}".format(ct_lone_caps))
"""
Explanation: Next, to have some diagnostic information, we count milestone and div elements for all editions:
End of explanation
"""
beginnings = { "pt": ["Se", "Mas\s+se", "E\s+se", "O\s+q[uv]e", "Os\s+q[uv]e", "Diz", "E\s+a\s+reza", "Dissemos",
"Acrecenamos", "Acreceto[uv]se"],
"es": ["S[uv]mm?ario", "Preg[uv]ntas", "De\s+los\s+pecc?ados", "Diximos", "Anadiose", "Anadimos",
"Sig[uv]ese\s+tambien", "Acrecentose", "Allegase", "(Donde|De\s+lo\s+q[uv]al)\s+inferimos",
"De\s+donde\s+inferimos", "Desto\s+inferimos", "Desta\s+resol[uv]cion\s+inferimos",
"Pares?cenos", "Si", "Ante\s+de\s+los\s+q[uv]ales\s+a[uv]isamos",
"En\s+otro\s+gercero"],
"la": ["Dixi", "Seq[uv]it[uv]r", "Pro\s+f[uv]ndam[eẽ]n?to", "Ex\s+(his|q[uv]ib[uv]s|q[uv]o)\s+infert[uv]r",
"Ex\s+pr(ae|æ)dictis", "Ex\s+his\s+pr(ae|æ)missis", "Et\s+conseq[uv]enter", "Adijcimus", "Admoneo",
"Accedit", "(Ex\s+q[uv]o|[UV]nde)\s+infer(im[uv]s|t[uv]r)", "[UV]nde\s+seq[uv]it[uv]r", "Addo", "Ante\s+quor[uv]m",
"Videtur", "Prior\s+cas[uv]s\s+est", "Posterior\s+cas[uv]s\s+est",
"S[uv]per\s+alio\s+vero\s+tertio"]
}
numbers = ["primum", "secundum", "tertium", "quartum", "quintum", "sextum", "septimum", "octa[uv]um", "nonum", "decimum", "[uv]ndecimum",
"prima", "prime[iy]?r[ao]", "se[cg]und[ao]", "terti[ao]", "terce[iy]?r[ao]", "quart[ao]",
"quint[ao]", "sext[ao]", "septim[ao]", "octa[uv][ao]", "non[ao]", "decim[ao]",
"[uv]ndecim[ao]", "duodecim[ao]",
"[cijlvxCIJLVX]+"
]
numbers_caps = ["Primum", "Secundum", "Tertium", "Quartum", "Quintum", "Sextum", "Septimum", "Octa[uv]um", "Nonum", "Decimum", "[UV]ndecimum", "D[uv]odecimum",
"Prima", "Prime[iy]?r[ao]", "Se[cg]und[ao]", "Terti[ao]", "Terce[iy]?r[ao]", "Quart[ao]",
"Quint[ao]", "Sext[ao]", "Septim[ao]", "Octa[uv][ao]", "Non[ao]", "Decim[ao]",
"[UV]ndecim[ao]", "D[uv]odecim[ao]",
"[CIJLVX]+"
]
prefixes = ["Ho\.?\s+", "O\.?\s+", "El\.?\s+", "Lo\.?\s+", "A\.?\s+", "Ad\s+", "La\.?\s+", "Dela\s+",
"Decim[ao]", "Vigesim[ao]", "Trigesim[ao]"]
suffixes = ["mente", "decimo", " infertur"]
rex_all_num = [ [ num for num in numbers_caps ], # all numbers
[ num + suf for num in numbers_caps for suf in suffixes ], # all numbers plus all suffixes
[ pref + num for num in numbers for pref in prefixes ], # all prefixes plus all numbers
[ pref + num + suf for num in numbers for pref in prefixes for suf in suffixes ] # all prefixes plus all numbers plus all suffixes
]
num_rex = sum(rex_all_num, [])
def flatten(element: lxml.etree._Element):
t = ""
# Dagger milestones
if element.get("rendition")=="#dagger":
t += "†"
if element.tail:
t += str.replace(element.tail, "\n", " ")
# asterisk milestones (additions in the 1556 ed.) - create temporary marker
elif element.get("rendition")=="#asterisk":
t += "*"
if element.tail:
t += str.replace(element.tail, "\n", " ")
# Unanchored milestones - create temporary marker
elif element.get("rendition")=="#unanchored":
t += "‡"
if element.tail:
t += str.replace(element.tail, "\n", " ")
# Summaries lists
elif element.get("type")=="summaries":
t += "++break--"
if element.text:
t += str.replace(element.text, "\n", " ")
if element.getchildren():
t += " ".join((flatten(child)) for child in element.getchildren())
if element.tail:
t += str.replace(element.tail, "\n", " ")
# Headings (except for summaries headings)
elif etree.QName(element).localname=="head" and element.getparent().get("type")!="summaries":
if element.text:
t += str.replace(element.text, "\n", " ")
if element.getchildren():
t += " ".join((flatten(child)) for child in element.getchildren())
t += "++break--"
if element.tail:
t += str.replace(element.tail, "\n", " ")
# horizontal space followed by "Circa"
elif etree.QName(element).localname=="space" and element.tail and str.strip(element.tail)[:5] == "Circa":
t += "++break--"
t += str.replace(element.tail, "\n", " ")
# paragraphs
elif etree.QName(element).localname=="p":
t += "<p>"
if element.text:
t += str.replace(element.text, "\n", " ")
if element.getchildren():
t += " ".join((flatten(child)) for child in element.getchildren())
if element.tail:
t += str.replace(element.tail, "\n", " ")
t += "</p>"
else:
if element.text:
t += str.replace(element.text, "\n", " ")
if element.getchildren():
t += " ".join((flatten(child)) for child in element.getchildren())
if element.tail:
t += str.replace(element.tail, "\n", " ")
return t
xp_divs = etree.XPath("(//tei:body/tei:div[@type = 'chapter'][not(@n = '0')])", namespaces = nsmap)
divs = {}
flattened = {}
lera = {}
for ed in resolved:
t, ttemp1, ttemp2, ttemp3, ttemp4, ttemp5, ttemp6, ttemp7, ttemp8, ttemp9, ttemp10, ttemp11, ttemp12, ttemp13 = ("", "", "", "", "", "", "", "", "", "", "", "", "", "")
divs[ed] = xp_divs(resolved[ed])
t = "".join("++div_" + str(div.get("n")) + "--" + flatten(div) for div in divs[ed])
# Add breaks
ttemp1 = re.sub(r'<p>', r'\n++break--<p>', t) # paragraphs begins
ttemp2 = re.sub(r'¶', '++break--¶', ttemp1) # where pilcrow signs are
ttemp3 = re.sub(r'([:\.\?\]])\s+([A-Z])(?!([CIJLVX]+|.)?\.)(?![^†‡*]{0,80}[:\.\?\]][^a-z]*[A-Z])(?=.{0,80}[†‡*])',
r'\1 ++break-- \2', ttemp2) # sentences beginning
# with punctuation, whitespace, and a
# capital letter (not immediately followed by
# an abbreviation period)
# and a milestone follows within 80 characters
# (that do not contain a punctuation character)
for rex in beginnings[language[ed]]:
ttemp4 = re.sub('([:\.\?\]])\s+(' + rex + '\s+)', r'\1 ++break-- \2', ttemp3)
for rex in num_rex:
ttemp5 = re.sub('([:\.\?\]])\s+(' + rex + '\.?\s+)', r'\1 ++break-- \2', ttemp4)
ttemp6 = re.sub(r'\b([A-Z]{2}\s*[a-z])', r'++break-- \1', ttemp5) # two capital letters
ttemp7 = ttemp6[::-1] # reverse the string
ttemp8 = re.sub(r'([†‡*])(?!.{0,100}--kaerb)', r'\1--kaerb++', ttemp7) # daggers without sentence boundaries, i.e. not covered above
# Eliminate breaks
ttemp9 = re.sub(r'--kaerb\+\+\s*(?=\.\s*(bil|pac|[a-z])\sni\s)', '', ttemp8) # preceded by " in (lib|cap|[a-z])."
ttemp10 = re.sub(r'--kaerb\+\+\s*(?=\.\s*[SP]\s+)', '', ttemp9) # preceded by " S." or " P."
ttemp11 = re.sub(r'--kaerb\+\+\s*(?=[.¶†‡&* ]+--kaerb\+\+)', '', ttemp10) # redundant ones
ttemp12 = re.sub(r'--kaerb\+\+\s*(?=--\d+_vid\+\+)', '', ttemp11) # preceded by a "div-break"
ttemp13 = re.sub(r'--kaerb\+\+\s*(?=(\.[cijlvx]+|\.(o[LH]|A)|[^\.?\]]){1,100}(¶|>p<))',
'', ttemp12) # preceded within 100 chars by ¶ or <p>
ttemp14 = re.sub(r'--kaerb\+\+\s*(?=.{0,40}(acriC\s*--kaerb\+\+))',
'', ttemp13) # preceded within 30 chars by ++break--Circa
ttemp15 = re.sub(r'--kaerb\+\+\s*(?=[†‡*]?\s*\.?[CIJLVXcijlvx]+\s*[†‡*]?\s*--kaerb\+\+)',
'', ttemp14) # preceded only by a roman numeral.
ttemp16 = ttemp15[::-1] # re-reverse i.e. restore original reading direction
ttemp17 = re.sub(r'\+\+break--\s*(?=([A-Za-z0-9]+\.\s+)+\+\+(break|div_))',
'', ttemp16) # followed only by words with period
ttemp18 = re.sub(r'\+\+break--\s*(?=\+\+div_)', '', ttemp17) # followed by a "div-break"
# Eliminate temporary markers
ttemp19 = re.sub(r'‡', '', ttemp18) # unanchored milestones
ttemp20 = re.sub(r'</?p>', '', ttemp19) # paragraphs
# Concat everything and do a final removal of redundant breaks.
flattened[ed] = re.sub(r'\+\+break--\s*\+\+break--', '++break--', " ".join(ttemp20.strip().split()))
lera[ed] = re.sub(r'\+\+break--', r'<milestone type="lera-segment"/>', flattened[ed])
lera[ed] = re.sub(r'\+\+div_([0-9]+)--', r'</div><div type="chapter" n="\1">', lera[ed])
lera[ed] = '<root>' + re.sub(r'&', '&', lera[ed])[6:] + '</div></root>'
"""
Explanation: Segment editions
After some experiments with XPath and lxml's iter() method (see appendices in milestones segmentation approach), we take a third approach to segment the texts: (a) We flatten the whole text and replace the breakpoints we have identified by a key string; (b) we split the text by using the key strings. ([c] We save our results.)
Here are the rules we use for segmentation:
Add breaks
after "summaries"-type lists
after headings (except for those of "summaries"-type lists)
after a horizontal space that is followed by the word "Circa" (case-sensitive)
before paragraphs
before the beginning of sentences in which a dagger or marginal number occurs
we identify these by: punctuation, followed by whitespace, followed by a capital letter (that is itself not immediately followed by an abbreviation period), followed by a dagger or marginal number within 80 characters (that do not contain a punctuation character)
before a cue phrase or a numeral expression at the beginning of a sentence
before a word beginning with two capital letters followed by a space or lower-case letters
before other daggers, i.e. where that is not preceded by a break within 100 characters
before pilcrow signs ('¶')
before xml body/div elements
Then, from these, we remove breaks where they would be redundant
where two are present, separated only by whitespace and/or a dagger
where they are preceded by a single lowercase letter (with a period) which is in turn preceded by "in"
where up to the next segment break only words/numbers with subsequent period would occur
where they are preceded within 100 chars (other than a period, question mark or closing square bracket, making exceptions for 'Lo. ij.' and the like) by ¶ or an xml p element boundary
where they are preceded within 30 chars by another break followed by the word "Circa"
flatten
Recursively extract text, children and tail text properties. Insert ++div_xy-- and ++break-- keystrings where div breaks and breakpoints occur.
End of explanation
"""
for ed in editions:
print("number of divs/milestones in {}: {}/{}".format(ed,
str(lera[ed].count('<div')),
str(lera[ed].count('<milestone type="lera-segment"'))
))
"""
Explanation: Check if results make sense:
End of explanation
"""
import glob
if overwrite_seg:
for ed in editions:
with open('./data/processing/12000_segmented_paragraphs/' + ed + '.xml', 'w', encoding='utf-8') as txt_file:
txt_file.write(lera[ed])
else:
print("Files present no overwriting requested.")
flattened_files = glob.glob('./data/processing/12000_segmented_paragraphs/*.xml')
print ("Flattened files: {}".format(flattened_files))
"""
Explanation: Let's save this so that we can easier check if the break marks are in the right places...
End of explanation
"""
import glob
# First load the files again (so they may be manually tweaked in-between)
fEd = []
flattened = {}
for filename in glob.glob("./data/processing/12000_segmented_paragraphs/*.xml"):
e = os.path.basename(filename)[:-4]
fEd.append(e)
if e in set(editions):
with open(filename, encoding='utf-8') as file:
flattened[e] = file.read()
print("File {} read.".format(filename))
for i in set(editions) ^ set(fEd):
print("Check for problems with these editions: ".format(i))
import re
segmented = {}
key_prb = {}
for ed in editions:
segmented[ed] = {}
key_prb[ed] = []
body = flattened[ed][5:-6]
for div in re.split('<div', body):
i = 0
dlabel = div[div.find('n="')+3:div.find('">')]
content = div[div.find('">')+2:div.find('</div>')]
for seg in re.split(r'<milestone type="lera-segment"/>', content):
if seg[0:31] == '<milestone type="lera-segment"/>':
mscontent = " ".join(seg[seg.find('--')+2:].strip().split())
else:
mscontent = " ".join(seg.strip().split())
if (len(mscontent) > 0):
segmented[ed].update({dlabel.zfill(2) + '_' + str(i).zfill(3): mscontent})
i += 1
"""
Explanation: split
Now we split our long string into actual segments (and we do this for all our editions).
End of explanation
"""
for ed in editions:
print("number of segments in {}: {}".format(ed, str(len(segmented[ed]))))
"""
Explanation: Report how many segments we have found:
End of explanation
"""
import csv
if overwrite_seg:
for ed in segmented:
with open('./data/processing/12000_segmented_paragraphs/' + ed + '_seg.csv', 'w', encoding='utf-8') as csv_file:
writer = csv.writer(csv_file, lineterminator="\n")
for key, value in segmented[ed].items():
writer.writerow([key, value])
else:
print("Files present no overwriting requested.")
segmented_files = glob.glob('./data/processing/12000_segmented_paragraphs/*.csv')
print ("Segmented files: {}".format(segmented_files))
segmented['azp1552'].keys()
segmented['azp1556'].keys()
"""
Explanation: save
Now we save our first intermediate results, the segmented editions:
End of explanation
"""
|
zczapran/datascienceintensive
|
data_wrangling_json/sliderule_dsi_json_exercise.ipynb
|
mit
|
import pandas as pd
"""
Explanation: JSON examples and exercise
get familiar with packages for dealing with JSON
study examples with JSON strings and files
work on exercise to be completed and submitted
reference: http://pandas.pydata.org/pandas-docs/stable/io.html#io-json-reader
data source: http://jsonstudio.com/resources/
End of explanation
"""
import json
from pandas.io.json import json_normalize
"""
Explanation: imports for Python, Pandas
End of explanation
"""
# define json string
data = [{'state': 'Florida',
'shortname': 'FL',
'info': {'governor': 'Rick Scott'},
'counties': [{'name': 'Dade', 'population': 12345},
{'name': 'Broward', 'population': 40000},
{'name': 'Palm Beach', 'population': 60000}]},
{'state': 'Ohio',
'shortname': 'OH',
'info': {'governor': 'John Kasich'},
'counties': [{'name': 'Summit', 'population': 1234},
{'name': 'Cuyahoga', 'population': 1337}]}]
# use normalization to create tables from nested element
json_normalize(data, 'counties')
# further populate tables created from nested element
json_normalize(data, 'counties', ['state', 'shortname', ['info', 'governor']])
"""
Explanation: JSON example, with string
demonstrates creation of normalized dataframes (tables) from nested json string
source: http://pandas.pydata.org/pandas-docs/stable/io.html#normalization
End of explanation
"""
# load json as string
json.load((open('data/world_bank_projects_less.json')))
# load as Pandas dataframe
sample_json_df = pd.read_json('data/world_bank_projects_less.json')
sample_json_df
"""
Explanation: JSON example, with file
demonstrates reading in a json file as a string and as a table
uses small sample file containing data about projects funded by the World Bank
data source: http://jsonstudio.com/resources/
End of explanation
"""
df = pd.read_json('data/world_bank_projects.json')
df.groupby('countryshortname').size().sort_values(ascending=False).head(10)
data = json.load((open('data/world_bank_projects.json')))
project_themes = json_normalize(data, 'mjtheme_namecode')
project_themes.groupby(['code', 'name']).size().sort_values(ascending=False).head(10)
p = project_themes.copy()
c = p[p.name != ''].groupby('code').first().squeeze()
p['name'] = [c[x] for x in p.code]
p
"""
Explanation: JSON exercise
Using data in file 'data/world_bank_projects.json' and the techniques demonstrated above,
1. Find the 10 countries with most projects
2. Find the top 10 major project themes (using column 'mjtheme_namecode')
3. In 2. above you will notice that some entries have only the code and the name is missing. Create a dataframe with the missing names filled in.
End of explanation
"""
|
feststelltaste/software-analytics
|
prototypes/Reading Git logs with Pandas 2.0-checkpoint.ipynb
|
gpl-3.0
|
import git
GIT_LOG_FILE = r'${REPO}/spring-petclinic'
repo = git.Repo(GIT_LOG_FILE)
git_bin = repo.git
git_bin
"""
Explanation: Context
In https://www.feststelltaste.de/reading-a-git-log-file-output-with-pandas/ I show you a way to read in Git log data with Pandas's DataFrame and GitPython.
Looking back, this was really difficult and tedious to do. So with a few tricks, we can do it much better.
The idea
There are three new ideas that I introduce here:
We use GitPython's ability to directly access the underlying Git installation. This is much faster than using GitPython's object representation of the repository, and it makes it possible to have everything we need in one notebook.
We use in-memory reading via StringIO to avoid unnecessary file access. This avoids storing the Git output on disk only to read it back again, which is also much faster.
We also hack Pandas's <tt>read_csv</tt> method even more. This makes the transformation of the Git log as easy as pie.
Side Note
This method also scales to analyzing huge GitHub repositories like the one for Linux. The Linux repo has, as of today, almost 700,000 commits. But be warned: cloning the repo as well as retrieving the Git log information takes a while. I recommend cloning my fork because I fixed some encoding errors in authors' names via an extended <tt>.mailmap</tt> file that maps these weird names to real names. Otherwise Python cannot read in the Git log because of a UnicodeError exception. On Windows machines, you'll also get some weird errors while cloning, but this doesn't influence the analysis.
Reading the Git log
The first step is to connect GitPython with the Git repo. If we have an instance of the repo, we can gain access to the underlying Git installation of the operation system via <tt>repo.git</tt>.
In this case, again, we tap the Spring Pet Clinic project, a small sample application for the Spring framework.
End of explanation
"""
git_log = git_bin.execute('git log --numstat --pretty=format:"\t\t\t%h\t%at\t%aN"')
git_log[:100]
"""
Explanation: With the <tt>git_bin</tt>, we can execute almost any Git command we like directly. In our hypothetical use case, we want to retrieve some information about the change frequency of files. For this, we need the complete history of the Git repo including statistics for the changed files (via <tt>--numstat</tt>).
We use a little trick to make sure that the format for the file's statistics fits nicely with the commit's metadata (SHA <tt>%h</tt>, UNIX timestamp <tt>%at</tt> and author's name <tt>%aN</tt>). The <tt>--numstat</tt> option provides data for additions, deletions and the affected file name in one line, separated by the tabulator character <tt>\t</tt>:
<p>
<tt>1<b>\t</b>1<b>\t</b>some/file/name.ext</tt>
</p>
We use the same tabular separator <tt>\t</tt> for the format string:
<p>
<tt>%h<b>\t</b>%at<b>\t</b>%aN</tt>
</p>
And here is the trick: we add as many tabulators as the file statistics contain, plus one more, in front of the format string, to pretend that there is empty file-statistics information in front of the commit metadata.
The results looks like this:
<p>
<tt>\t\t\t%h\t%at\t%aN</tt>
</p>
Note: If you want to export the Git log on the command line into a file and read that file later, you need to use the tabulator character <tt>%x09</tt> as separator instead of <tt>\t</tt> in the format string. Otherwise, the trick doesn't work.
OK, let's first execute the Git log export:
End of explanation
"""
import pandas as pd
from io import StringIO
commits_raw = pd.read_csv(StringIO(git_log),
sep="\t",
header=None,
names=['additions', 'deletions', 'filename', 'sha', 'timestamp', 'author']
)
commits_raw.head()
"""
Explanation: We now read in the complete files' history in the <tt>git_log</tt> variable. Don't let confuse you by all the <tt>\t</tt> characters.
Let's read the result into a Pandas <tt>DataFrame</tt> by using the <tt>read_csv</tt> method. Because we can't provide a file path to a CSV data, we have to use StringIO to read in our in-memory buffered content.
Pandas will read the first line of the tabular-separated "file", sees the many tabular-separated columns and parses all other lines in the same format / column layout. Additionaly, we set the <tt>header</tt> to <tt>None</tt> because we don't have one and provide nice names for all the columns that we read in.
End of explanation
"""
commits = commits_raw.fillna(method='ffill')
commits.head()
"""
Explanation: The last steps are easy. We fill all the empty file statistics rows with the commit's metadata.
End of explanation
"""
commits = commits.dropna()
commits.head()
"""
Explanation: And drop all the commit metadata rows that don't contain file statistics.
End of explanation
"""
pd.read_csv("../../spring-petclinic/git.log",
sep="\t",
header=None,
names=[
'additions',
'deletions',
'filename',
'sha',
'timestamp',
'author']).fillna(method='ffill').dropna().head()
"""
Explanation: We are finished! This is it.
In summary, you'll only need a "one-liner" for converting a Git log file output that was exported with
git log --numstat --pretty=format:"%x09%x09%x09%h%x09%at%x09%aN" > git.log
into a <tt>DataFrame</tt>:
End of explanation
"""
commits['additions'] = pd.to_numeric(commits['additions'], errors='coerce')
commits['deletions'] = pd.to_numeric(commits['deletions'], errors='coerce')
commits = commits.dropna()
commits['timestamp'] = pd.to_datetime(commits['timestamp'], unit="s")
commits.head()
commits.groupby('filename')[['timestamp']].count().sort_values(by='timestamp', ascending=False).head(10)
# keep only the Java source files (the Spring Pet Clinic is a Java project)
java_commits = commits[commits['filename'].str.endswith(".java")]
java_commits.head()
java_commits.groupby('author').sum()[['additions']].sort_values(by='additions', ascending=False).head()
commits[commits['timestamp'].max() == commits['timestamp']]
java_commits[java_commits['timestamp'].min() == java_commits['timestamp']]
commits = commits[commits['timestamp'] <= 'today']
latest = commits.sort_values(by='timestamp', ascending=False)
latest.head()
commits['today'] = pd.Timestamp('today')
commits.head()
initial_commit_date = commits[-1:]['timestamp'].values[0]
initial_commit_date
commits = commits[commits['timestamp'] >= initial_commit_date]
commits.head()
commits['age'] = commits['timestamp'] - commits['today']
commits.head()
commits.groupby('filename')[['age']].min().sort_values(by='age').head(10)
java_commits.groupby('filename')\
.count()[['additions']]\
.sort_values(by='additions', ascending=False).head()
ages = commits.sort_values(by='age', ascending=False).drop_duplicates(subset=['filename'])['age'] * -1
ages.head()
ages.dt.days.hist()
commits.groupby('filename')
import glob
import os  # needed for the path handling below

file_list = [
    os.path.abspath(path).replace(os.sep, "/")
    for path in glob.iglob("../../linux/**/*.*", recursive=True)]
file_list[:5]
[os.path.normpath(path) for path in file_list[:5]]
%matplotlib inline
commits.groupby('filename')\
.count()[['additions']]\
.sort_values(by='additions', ascending=False)\
.plot(kind='bar')
commits.sort_values(by='age', ascending=False).groupby('filename').first().sort_values(by='age', ascending=False)
%matplotlib inline
commits.groupby('filename')\
.count()['additions']\
.hist(bins=20)
commits.groupby('filename').count().sort_values(by='additions', ascending=False)
"""
Explanation: Bonus section
As a bonus, we can now convert some columns to their proper data types. The <tt>additions</tt> and <tt>deletions</tt> columns represent the added and deleted lines of code respectively. But there are a few exceptions for binary files like images. We skip these lines with the <tt>errors='coerce'</tt> option, which produces <tt>NaN</tt> values in those rows; they are dropped after the conversion.
The <tt>timestamp</tt> column is a UNIX timestamp, i.e. the number of seconds elapsed since January 1st, 1970.
End of explanation
"""
commits.groupby('author').sum()[['additions']].sort_values(by='additions', ascending=False)
"""
Explanation: After this, we have to tell Git which information we want. We can do this via the <tt>pretty-format</tt> option.
For each commit, we choose to create a header line with the following commit info (by using <tt>--pretty=format:'--%h--%ad--%aN'</tt>), which gives us the following output:
<pre>
--fa1ca6f--Thu Dec 22 08:04:18 2016 +0100--feststelltaste
</pre>
It contains the SHA key, the timestamp as well as the author's name of the commit, separated by a character sequence that certainly isn't contained in this information: <tt>--</tt>. My favorite separator for this job, described below, is <tt>\u0012</tt>.
We also want to have some details about the modifications of each file per commit. This is why we use the <tt>--numstat</tt> flag.
Together with the <tt>--all</tt> flag to get all commits and the <tt>--no-renames</tt> flag to avoid commits that only rename files, we retrieve all the needed information directly via Git.
For each other row, we got some statistics about the modified files:
<pre>
2 0 src/main/asciidoc/appendices/bibliography.adoc
</pre>
It contains the number of lines inserted, the number of lines deleted and the relative path of the file. With a little trick and a little bit of data wrangling, we can read that information into a nicely structured DataFrame.
The first entries of that file look like the commit header and file statistics lines shown above.
Let's get started!
Import the data
First, I'll show you my approach on how to read nearly everything into a <tt>DataFrame</tt>. The key is to use Pandas' <tt>read_csv</tt> for reading "non-character separated values". How to do that? We simply choose a separator that doesn't occur in the file that we want to read. My favorite character for this is the "DEVICE CONTROL TWO" character U+0012. I haven't encountered a situation yet where this character was included in a data set.
We just read our <tt>git.log</tt> file without any headers (because there are none) and give the only column a nice name.
Data Wrangling
OK, but now we have a <strike>problem</strike> data wrangling challenge. We have the commit info as well as the statistic for the modified file in one column, but they don't belong together. What we want is to have the commit info along with the file statistics in separate columns to get some serious analysis started.
End of explanation
"""
%matplotlib inline
timed_commits = java_commits.set_index(pd.DatetimeIndex(java_commits['timestamp']))[['additions', 'deletions']].resample('1D').sum()
timed_commits
(timed_commits['additions'] - timed_commits['deletions']).cumsum().fillna(method='ffill').plot()
c = commits[commits['timestamp'] <= 'today']
c.sort_values(by='timestamp', ascending=False).head()
c = c\
.groupby('sha')\
.first()\
.reset_index()
c.head()
%matplotlib inline
c.set_index(
pd.DatetimeIndex(c['timestamp'])
)['additions']\
.resample('W-SUN', convention='start')\
.count()\
.tail(500)\
.plot(kind='area', figsize=(100,7))
c.set_index(
pd.DatetimeIndex(c['timestamp'])
)['additions']\
.resample('W-SUN', convention='start')\
.count()\
    .tail(500)
df = c.set_index(
pd.DatetimeIndex(c['timestamp']))
df2 = df.resample('W').count().dropna()
df2.tail()
df2['month'] = df2.index.month
df2.head()
df3 = df2.groupby([df2.index.year, df2.index.month]).aggregate({'month': 'first', 'sha' : 'min'})
df3.head()
df3.groupby(df3.index).count()
"""
Explanation: OK, this part is ready, let's have a look at the file statistics!
We're done!
Complete code block
Too much code to look through? Here is everything from above in a condensed format.
Just some milliseconds to run through, not bad!
Summary
In this notebook, I showed you how to read imperfectly structured data via the non-character separator trick. I also showed you how to transform rows that contain multiple kinds of data into one nicely structured <tt>DataFrame</tt>.
Now that we have the Git repository <tt>DataFrame</tt>, we can do some nice things with it, e.g. visualizing the code churn of a project; but that's a story for another notebook! To give you a short preview:
End of explanation
"""
%matplotlib inline
commits['author'].value_counts().plot(kind='pie', figsize=(10,10))
"""
Explanation: Discussion
I hope you see how easy it is to retrieve some insights from your version control system by using Python and Pandas for some data wrangling. Because it almost sounds too good to be true, here are the drawbacks of this simple approach:
Files, not code
We observe the file change frequency, not the code change frequency:
All files
We also didn't check whether the files we are analyzing still exist or have been deleted in the past. For our change analysis, though, only files that still exist are really of interest.
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/nasa-giss/cmip6/models/giss-e2-1h/atmoschem.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'nasa-giss', 'giss-e2-1h', 'atmoschem')
"""
Explanation: ES-DOC CMIP6 Model Properties - Atmoschem
MIP Era: CMIP6
Institute: NASA-GISS
Source ID: GISS-E2-1H
Topic: Atmoschem
Sub-Topics: Transport, Emissions Concentrations, Gas Phase Chemistry, Stratospheric Heterogeneous Chemistry, Tropospheric Heterogeneous Chemistry, Photo Chemistry.
Properties: 84 (39 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:20
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Software Properties
3. Key Properties --> Timestep Framework
4. Key Properties --> Timestep Framework --> Split Operator Order
5. Key Properties --> Tuning Applied
6. Grid
7. Grid --> Resolution
8. Transport
9. Emissions Concentrations
10. Emissions Concentrations --> Surface Emissions
11. Emissions Concentrations --> Atmospheric Emissions
12. Emissions Concentrations --> Concentrations
13. Gas Phase Chemistry
14. Stratospheric Heterogeneous Chemistry
15. Tropospheric Heterogeneous Chemistry
16. Photo Chemistry
17. Photo Chemistry --> Photolysis
1. Key Properties
Key properties of the atmospheric chemistry
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of atmospheric chemistry model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of atmospheric chemistry model code.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.chemistry_scheme_scope')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "troposhere"
# "stratosphere"
# "mesosphere"
# "mesosphere"
# "whole atmosphere"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.3. Chemistry Scheme Scope
Is Required: TRUE Type: ENUM Cardinality: 1.N
Atmospheric domains covered by the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.basic_approximations')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: STRING Cardinality: 1.1
Basic approximations made in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.prognostic_variables_form')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "3D mass/mixing ratio for gas"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables Form
Is Required: TRUE Type: ENUM Cardinality: 1.N
Form of prognostic variables in the atmospheric chemistry component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.number_of_tracers')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 1.6. Number Of Tracers
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of advected tracers in the atmospheric chemistry model
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.family_approach')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.7. Family Approach
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Are atmospheric chemistry calculations (not advection) generalized into families of species?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.coupling_with_chemical_reactivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 1.8. Coupling With Chemical Reactivity
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is atmospheric chemistry transport scheme turbulence coupled with chemical reactivity?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Software Properties
Software properties of atmospheric chemistry code
2.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 2.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Operator splitting"
# "Integrated"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Timestep Framework
Timestepping in the atmospheric chemistry model
3.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Mathematical method deployed to solve the evolution of a given variable
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_advection_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.2. Split Operator Advection Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemical species advection (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_physical_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.3. Split Operator Physical Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for physics (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_chemistry_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.4. Split Operator Chemistry Timestep
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Timestep for chemistry (in seconds).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_alternate_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 3.5. Split Operator Alternate Order
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_timestep')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 3.6. Integrated Timestep
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Timestep for the atmospheric chemistry model (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.integrated_scheme_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Explicit"
# "Implicit"
# "Semi-implicit"
# "Semi-analytic"
# "Impact solver"
# "Back Euler"
# "Newton Raphson"
# "Rosenbrock"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3.7. Integrated Scheme Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Specify the type of timestep scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.turbulence')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Timestep Framework --> Split Operator Order
**
4.1. Turbulence
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for turbulence scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.convection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.2. Convection
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for convection scheme This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.precipitation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.3. Precipitation
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for precipitation scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.emissions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.4. Emissions
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for emissions scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.5. Deposition
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for deposition scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.gas_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.6. Gas Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for gas phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.tropospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.7. Tropospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for tropospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.stratospheric_heterogeneous_phase_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.8. Stratospheric Heterogeneous Phase Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for stratospheric heterogeneous phase chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.photo_chemistry')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.9. Photo Chemistry
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for photo chemistry scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.timestep_framework.split_operator_order.aerosols')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 4.10. Aerosols
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Call order for aerosols scheme. This should be an integer greater than zero, and may be the same value as for another process if they are calculated at the same time.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Tuning Applied
Tuning methodology for atmospheric chemistry component
5.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Grid
Atmospheric chemistry grid
6.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe the general structure of the atmospheric chemistry grid
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.matches_atmosphere_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.2. Matches Atmosphere Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
*Does the atmospheric chemistry grid match the atmosphere grid?*
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Grid --> Resolution
Resolution in the atmospheric chemistry grid
7.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Canonical Horizontal Resolution
Is Required: FALSE Type: STRING Cardinality: 0.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.3. Number Of Horizontal Gridpoints
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 7.4. Number Of Vertical Levels
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.grid.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 7.5. Is Adaptive Grid
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Transport
Atmospheric chemistry transport
8.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview of transport implementation
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.use_atmospheric_transport')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.2. Use Atmospheric Transport
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is transport handled by the atmosphere, rather than within atmospheric chemistry?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.transport.transport_details')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Transport Details
Is Required: FALSE Type: STRING Cardinality: 0.1
If transport is handled within the atmospheric chemistry scheme, describe it.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Emissions Concentrations
Atmospheric chemistry emissions
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric chemistry emissions
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Vegetation"
# "Soil"
# "Sea surface"
# "Anthropogenic"
# "Biomass burning"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Emissions Concentrations --> Surface Emissions
**
10.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of the chemical species emitted at the surface that are taken into account in the emissions scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define chemical species emitted directly into model layers above the surface (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed via a climatology, and the nature of the climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.surface_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 10.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted at the surface and specified via any other method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.sources')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Aircraft"
# "Biomass burning"
# "Lightning"
# "Volcanos"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Emissions Concentrations --> Atmospheric Emissions
TO DO
11.1. Sources
Is Required: FALSE Type: ENUM Cardinality: 0.N
Sources of chemical species emitted in the atmosphere that are taken into account in the emissions scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.method')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Climatology"
# "Spatially uniform mixing ratio"
# "Spatially uniform concentration"
# "Interactive"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Method
Is Required: FALSE Type: ENUM Cardinality: 0.N
Methods used to define the chemical species emitted in the atmosphere (several methods allowed because the different species may not use the same method).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_climatology_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.3. Prescribed Climatology Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed via a climatology (E.g. CO (monthly), C2H6 (constant))
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.prescribed_spatially_uniform_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.4. Prescribed Spatially Uniform Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and prescribed as spatially uniform
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.interactive_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.5. Interactive Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an interactive method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.atmospheric_emissions.other_emitted_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 11.6. Other Emitted Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of chemical species emitted in the atmosphere and specified via an "other method"
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_lower_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Emissions Concentrations --> Concentrations
TO DO
12.1. Prescribed Lower Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the lower boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.emissions_concentrations.concentrations.prescribed_upper_boundary')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12.2. Prescribed Upper Boundary
Is Required: FALSE Type: STRING Cardinality: 0.1
List of species prescribed at the upper boundary.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 13. Gas Phase Chemistry
Atmospheric chemistry gas phase chemistry
13.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview gas phase atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "HOx"
# "NOy"
# "Ox"
# "Cly"
# "HSOx"
# "Bry"
# "VOCs"
# "isoprene"
# "H2O"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13.2. Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Species included in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_bimolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.3. Number Of Bimolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of bi-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_termolecular_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.4. Number Of Termolecular Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of ter-molecular reactions in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_tropospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.5. Number Of Tropospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_stratospheric_heterogenous_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.6. Number Of Stratospheric Heterogenous Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_advected_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.7. Number Of Advected Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of advected species in the gas phase chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 13.8. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of gas phase species for which the concentration is updated in the chemical solver assuming photochemical steady state
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.9. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.10. Wet Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet deposition included? Wet deposition describes the moist processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.gas_phase_chemistry.wet_oxidation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 13.11. Wet Oxidation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is wet oxidation included? Oxidation describes the loss of electrons or an increase in oxidation state by a molecule
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 14. Stratospheric Heterogeneous Chemistry
Atmospheric chemistry stratospheric heterogeneous chemistry
14.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview stratospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Cly"
# "Bry"
# "NOy"
# TODO - please enter value(s)
"""
Explanation: 14.2. Gas Phase Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Gas phase species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Polar stratospheric ice"
# "NAT (Nitric acid trihydrate)"
# "NAD (Nitric acid dihydrate)"
# "STS (supercooled ternary solution aerosol particule))"
# TODO - please enter value(s)
"""
Explanation: 14.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the stratospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.sedimentation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.5. Sedimentation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is sedimentation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.stratospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 14.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the stratospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15. Tropospheric Heterogeneous Chemistry
Atmospheric chemistry tropospheric heterogeneous chemistry
15.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview tropospheric heterogeneous atmospheric chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.gas_phase_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 15.2. Gas Phase Species
Is Required: FALSE Type: STRING Cardinality: 0.1
List of gas phase species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.aerosol_species')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Sulphate"
# "Nitrate"
# "Sea salt"
# "Dust"
# "Ice"
# "Organic"
# "Black carbon/soot"
# "Polar stratospheric ice"
# "Secondary organic aerosols"
# "Particulate organic matter"
# TODO - please enter value(s)
"""
Explanation: 15.3. Aerosol Species
Is Required: FALSE Type: ENUM Cardinality: 0.N
Aerosol species included in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.number_of_steady_state_species')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.4. Number Of Steady State Species
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of steady state species in the tropospheric heterogeneous chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.interactive_dry_deposition')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.5. Interactive Dry Deposition
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is dry deposition interactive (as opposed to prescribed)? Dry deposition describes the dry processes by which gaseous species deposit themselves on solid surfaces thus decreasing their concentration in the air.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.tropospheric_heterogeneous_chemistry.coagulation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 15.6. Coagulation
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is coagulation included in the tropospheric heterogeneous chemistry scheme or not?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Photo Chemistry
Atmospheric chemistry photo chemistry
16.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview atmospheric photo chemistry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.number_of_reactions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 16.2. Number Of Reactions
Is Required: TRUE Type: INTEGER Cardinality: 1.1
The number of reactions in the photo-chemistry scheme.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Offline (clear sky)"
# "Offline (with clouds)"
# "Online"
# TODO - please enter value(s)
"""
Explanation: 17. Photo Chemistry --> Photolysis
Photolysis scheme
17.1. Method
Is Required: TRUE Type: ENUM Cardinality: 1.1
Photolysis scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.atmoschem.photo_chemistry.photolysis.environmental_conditions')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17.2. Environmental Conditions
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any environmental conditions taken into account by the photolysis scheme (e.g. whether pressure- and temperature-sensitive cross-sections and quantum yields in the photolysis calculations are modified to reflect the modelled conditions.)
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
stable/_downloads/aec45e1f20057e833cee12bb6bd292dc/10_evoked_overview.ipynb
|
bsd-3-clause
|
import os
import mne
"""
Explanation: The Evoked data structure: evoked/averaged data
This tutorial covers the basics of creating and working with :term:evoked
data. It introduces the :class:~mne.Evoked data structure in detail,
including how to load, query, subselect, export, and plot data from an
:class:~mne.Evoked object. For info on creating an :class:~mne.Evoked
object from (possibly simulated) data in a :class:NumPy array
<numpy.ndarray>, see tut-creating-data-structures.
As usual we'll start by importing the modules we need:
End of explanation
"""
sample_data_folder = mne.datasets.sample.data_path()
sample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis_raw.fif')
raw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)
events = mne.find_events(raw, stim_channel='STI 014')
# we'll skip the "face" and "buttonpress" conditions, to save memory:
event_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,
'visual/right': 4}
epochs = mne.Epochs(raw, events, tmin=-0.3, tmax=0.7, event_id=event_dict,
preload=True)
evoked = epochs['auditory/left'].average()
del raw # reduce memory usage
"""
Explanation: Creating Evoked objects from Epochs
:class:~mne.Evoked objects typically store an EEG or MEG signal that has
been averaged over multiple :term:epochs, which is a common technique for
estimating stimulus-evoked activity. The data in an :class:~mne.Evoked
object are stored in an :class:array <numpy.ndarray> of shape
(n_channels, n_times) (in contrast to an :class:~mne.Epochs object,
which stores data of shape (n_epochs, n_channels, n_times)). Thus to
create an :class:~mne.Evoked object, we'll start by epoching some raw data,
and then averaging together all the epochs from one condition:
End of explanation
"""
print(f'Epochs baseline: {epochs.baseline}')
print(f'Evoked baseline: {evoked.baseline}')
"""
Explanation: You may have noticed that MNE informed us that "baseline correction" has been
applied. This happened automatically during creation of the
~mne.Epochs object, but it may also be initiated (or disabled!) manually.
We will discuss this in more detail later.
The information about the baseline period of ~mne.Epochs is transferred to
derived ~mne.Evoked objects to maintain provenance as you process your
data:
End of explanation
"""
evoked.plot()
"""
Explanation: Basic visualization of Evoked objects
We can visualize the average evoked response for left-auditory stimuli using
the :meth:~mne.Evoked.plot method, which yields a butterfly plot of each
channel type:
End of explanation
"""
print(evoked.data[:2, :3]) # first 2 channels, first 3 timepoints
"""
Explanation: Like the plot() methods for :meth:Raw <mne.io.Raw.plot> and
:meth:Epochs <mne.Epochs.plot> objects,
:meth:evoked.plot() <mne.Evoked.plot> has many parameters for customizing
the plot output, such as color-coding channel traces by scalp location, or
plotting the :term:global field power alongside the channel traces.
See tut-visualize-evoked for more information about visualizing
:class:~mne.Evoked objects.
Subselecting Evoked data
.. sidebar:: Evokeds are not memory-mapped
:class:~mne.Evoked objects use a :attr:~mne.Evoked.data attribute
rather than a :meth:~mne.Epochs.get_data method; this reflects the fact
that the data in :class:~mne.Evoked objects are always loaded into
memory, never memory-mapped_ from their location on disk (because they
are typically much smaller than :class:~mne.io.Raw or
:class:~mne.Epochs objects).
Unlike :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects do not support selection by square-bracket
indexing. Instead, data can be subselected by indexing the
:attr:~mne.Evoked.data attribute:
End of explanation
"""
evoked_eeg = evoked.copy().pick_types(meg=False, eeg=True)
print(evoked_eeg.ch_names)
new_order = ['EEG 002', 'MEG 2521', 'EEG 003']
evoked_subset = evoked.copy().reorder_channels(new_order)
print(evoked_subset.ch_names)
"""
Explanation: To select based on time in seconds, the :meth:~mne.Evoked.time_as_index
method can be useful, although beware that depending on the sampling
frequency, the number of samples in a span of given duration may not always
be the same (see the time-as-index section of the
tutorial about Raw data <tut-raw-class> for details).
Selecting, dropping, and reordering channels
By default, when creating :class:~mne.Evoked data from an
:class:~mne.Epochs object, only the "data" channels will be retained:
eog, ecg, stim, and misc channel types will be dropped. You
can control which channel types are retained via the picks parameter of
:meth:epochs.average() <mne.Epochs.average>, by passing 'all' to
retain all channels, or by passing a list of integers, channel names, or
channel types. See the documentation of :meth:~mne.Epochs.average for
details.
If you've already created the :class:~mne.Evoked object, you can use the
:meth:~mne.Evoked.pick, :meth:~mne.Evoked.pick_channels,
:meth:~mne.Evoked.pick_types, and :meth:~mne.Evoked.drop_channels methods
to modify which channels are included in an :class:~mne.Evoked object.
You can also use :meth:~mne.Evoked.reorder_channels for this purpose; any
channel names not provided to :meth:~mne.Evoked.reorder_channels will be
dropped. Note that channel selection methods modify the object in-place, so
in interactive/exploratory sessions you may want to create a
:meth:~mne.Evoked.copy first.
End of explanation
"""
sample_data_evk_file = os.path.join(sample_data_folder, 'MEG', 'sample',
'sample_audvis-ave.fif')
evokeds_list = mne.read_evokeds(sample_data_evk_file, verbose=False)
print(evokeds_list)
print(type(evokeds_list))
"""
Explanation: Similarities among the core data structures
:class:~mne.Evoked objects have many similarities with :class:~mne.io.Raw
and :class:~mne.Epochs objects, including:
They can be loaded from and saved to disk in .fif format, and their
data can be exported to a :class:NumPy array <numpy.ndarray> (but through
the :attr:~mne.Evoked.data attribute, not through a get_data()
method). :class:Pandas DataFrame <pandas.DataFrame> export is also
available through the :meth:~mne.Evoked.to_data_frame method.
You can change the name or type of a channel using
:meth:evoked.rename_channels() <mne.Evoked.rename_channels> or
:meth:evoked.set_channel_types() <mne.Evoked.set_channel_types>.
Both methods take :class:dictionaries <dict> where the keys are existing
channel names, and the values are the new name (or type) for that channel.
Existing channels that are not in the dictionary will be unchanged.
:term:SSP projector <projector> manipulation is possible through
:meth:~mne.Evoked.add_proj, :meth:~mne.Evoked.del_proj, and
:meth:~mne.Evoked.plot_projs_topomap methods, and the
:attr:~mne.Evoked.proj attribute. See tut-artifact-ssp for more
information on SSP.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have :meth:~mne.Evoked.copy,
:meth:~mne.Evoked.crop, :meth:~mne.Evoked.time_as_index,
:meth:~mne.Evoked.filter, and :meth:~mne.Evoked.resample methods.
Like :class:~mne.io.Raw and :class:~mne.Epochs objects,
:class:~mne.Evoked objects have evoked.times,
:attr:evoked.ch_names <mne.Evoked.ch_names>, and :class:info <mne.Info>
attributes.
Loading and saving Evoked data
Single :class:~mne.Evoked objects can be saved to disk with the
:meth:evoked.save() <mne.Evoked.save> method. One difference between
:class:~mne.Evoked objects and the other data structures is that multiple
:class:~mne.Evoked objects can be saved into a single .fif file, using
:func:mne.write_evokeds. The example data <sample-dataset>
includes just such a .fif file: the data have already been epoched and
averaged, and the file contains separate :class:~mne.Evoked objects for
each experimental condition:
End of explanation
"""
for evok in evokeds_list:
print(evok.comment)
"""
Explanation: Notice that :func:mne.read_evokeds returned a :class:list of
:class:~mne.Evoked objects, and each one has an evoked.comment
attribute describing the experimental condition that was averaged to
generate the estimate:
End of explanation
"""
right_vis = mne.read_evokeds(sample_data_evk_file, condition='Right visual')
print(right_vis)
print(type(right_vis))
"""
Explanation: If you want to load only some of the conditions present in a .fif file,
:func:~mne.read_evokeds has a condition parameter, which takes either a
string (matched against the comment attribute of the evoked objects on disk),
or an integer selecting the :class:~mne.Evoked object based on the order
it's stored in the file. Passing lists of integers or strings is also
possible. If only one object is selected, the :class:~mne.Evoked object
will be returned directly (rather than a length-one list containing it):
End of explanation
"""
evokeds_list[0].plot(picks='eeg')
"""
Explanation: Above, when we created an :class:~mne.Evoked object by averaging epochs,
baseline correction was applied by default when we extracted epochs from the
~mne.io.Raw object (the default baseline period is (None, 0),
which assured zero mean for times before the stimulus event). In contrast, if
we plot the first :class:~mne.Evoked object in the list that was loaded
from disk, we'll see that the data have not been baseline-corrected:
End of explanation
"""
# Original baseline (none set).
print(f'Baseline after loading: {evokeds_list[0].baseline}')
# Apply a custom baseline correction.
evokeds_list[0].apply_baseline((None, 0))
print(f'Baseline after calling apply_baseline(): {evokeds_list[0].baseline}')
# Visualize the evoked response.
evokeds_list[0].plot(picks='eeg')
"""
Explanation: This can be remedied by either passing a baseline parameter to
:func:mne.read_evokeds, or by applying baseline correction after loading,
as shown here:
End of explanation
"""
left_right_aud = epochs['auditory'].average()
print(left_right_aud)
"""
Explanation: Notice that :meth:~mne.Evoked.apply_baseline operated in-place. Similarly,
:class:~mne.Evoked objects may have been saved to disk with or without
:term:projectors <projector> applied; you can pass proj=True to the
:func:~mne.read_evokeds function, or use the :meth:~mne.Evoked.apply_proj
method after loading.
Combining Evoked objects
One way to pool data across multiple conditions when estimating evoked
responses is to do so prior to averaging (recall that MNE-Python can select
based on partial matching of /-separated epoch labels; see
tut-section-subselect-epochs for more info):
End of explanation
"""
left_aud = epochs['auditory/left'].average()
right_aud = epochs['auditory/right'].average()
print([evok.nave for evok in (left_aud, right_aud)])
"""
Explanation: This approach will weight each epoch equally and create a single
:class:~mne.Evoked object. Notice that the printed representation includes
(average, N=145), indicating that the :class:~mne.Evoked object was
created by averaging across 145 epochs. In this case, the event types were
fairly close in number:
End of explanation
"""
left_right_aud = mne.combine_evoked([left_aud, right_aud], weights='nave')
assert left_right_aud.nave == left_aud.nave + right_aud.nave
"""
Explanation: However, this may not always be the case; if for statistical reasons it is
important to average the same number of epochs from different conditions,
you can use :meth:~mne.Epochs.equalize_event_counts prior to averaging.
Another approach to pooling across conditions is to create separate
:class:~mne.Evoked objects for each condition, and combine them afterward.
This can be accomplished by the function :func:mne.combine_evoked, which
computes a weighted sum of the :class:~mne.Evoked objects given to it. The
weights can be manually specified as a list or array of float values, or can
be specified using the keyword 'equal' (weight each ~mne.Evoked object
by $\frac{1}{N}$, where $N$ is the number of ~mne.Evoked
objects given) or the keyword 'nave' (weight each ~mne.Evoked object
proportional to the number of epochs averaged together to create it):
End of explanation
"""
for ix, trial in enumerate(epochs[:3].iter_evoked()):
channel, latency, value = trial.get_peak(ch_type='eeg',
return_amplitude=True)
latency = int(round(latency * 1e3)) # convert to milliseconds
value = int(round(value * 1e6)) # convert to µV
print('Trial {}: peak of {} µV at {} ms in channel {}'
.format(ix, value, latency, channel))
"""
Explanation: Note that the nave attribute of the resulting ~mne.Evoked object will
reflect the effective number of averages, and depends on both the nave
attributes of the contributing ~mne.Evoked objects and the weights at
which they are combined. Keeping track of effective nave is important for
inverse imaging, because nave is used to scale the noise covariance
estimate (which in turn affects the magnitude of estimated source activity).
See minimum_norm_estimates for more information (especially the
whitening_and_scaling section). Note that mne.grand_average does
not adjust nave to reflect effective number of averaged epochs; rather
it simply sets nave to the number of evokeds that were averaged
together. For this reason, it is best to use mne.combine_evoked rather than
mne.grand_average if you intend to perform inverse imaging on the resulting
:class:~mne.Evoked object.
Other uses of Evoked objects
Although the most common use of :class:~mne.Evoked objects is to store
averages of epoched data, there are a couple other uses worth noting here.
First, the method :meth:epochs.standard_error() <mne.Epochs.standard_error>
will create an :class:~mne.Evoked object (just like
:meth:epochs.average() <mne.Epochs.average> does), but the data in the
:class:~mne.Evoked object will be the standard error across epochs instead
of the average. To indicate this difference, :class:~mne.Evoked objects
have a :attr:~mne.Evoked.kind attribute that takes values 'average' or
'standard error' as appropriate.
Another use of :class:~mne.Evoked objects is to represent a single trial
or epoch of data, usually when looping through epochs. This can be easily
accomplished with the :meth:epochs.iter_evoked() <mne.Epochs.iter_evoked>
method, and can be useful for applications where you want to do something
that is only possible for :class:~mne.Evoked objects. For example, here
we use the :meth:~mne.Evoked.get_peak method (which isn't available for
:class:~mne.Epochs objects) to get the peak response in each trial:
End of explanation
"""
|
GoogleCloudPlatform/asl-ml-immersion
|
notebooks/ml_fairness_explainability/explainable_ai/solutions/xai_structured_caip.ipynb
|
apache-2.0
|
import os
PROJECT_ID = "" # TODO: your PROJECT_ID here.
os.environ["PROJECT_ID"] = PROJECT_ID
BUCKET_NAME = PROJECT_ID # TODO: replace your BUCKET_NAME, if needed
REGION = "us-central1"
os.environ["BUCKET_NAME"] = BUCKET_NAME
os.environ["REGION"] = REGION
"""
Explanation: AI Explanations: Explaining a tabular data model
Overview
In this tutorial we will perform the following steps:
Build and train a Keras model.
Export the Keras model as a SavedModel and deploy the model on Cloud AI Platform.
Compute explanations for our model's predictions using Explainable AI on Cloud AI Platform.
Dataset
The dataset used for this tutorial was created from a BigQuery Public Dataset: London Bike Dataset.
Objective
The goal is to train a model using the Keras Sequential API that predicts the duration of a bike ride given the weekday, weather conditions, and start and stop station of the bike.
This tutorial focuses more on deploying the model to AI Explanations than on the design of the model itself. We will be using preprocessed data for this lab.
Setup
End of explanation
"""
%%bash
exists=$(gsutil ls -d | grep -w gs://${BUCKET_NAME}/)
if [ -n "$exists" ]; then
echo -e "Bucket gs://${BUCKET_NAME} already exists."
else
echo "Creating a new GCS bucket."
gsutil mb -l ${REGION} gs://${BUCKET_NAME}
echo -e "\nHere are your current buckets:"
gsutil ls
fi
"""
Explanation: Run the following cell to create your Cloud Storage bucket if it does not already exist.
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on the resources created, we create a timestamp for each session and append it to the names of the resources created in this tutorial.
End of explanation
"""
import explainable_ai_sdk
import pandas as pd
import tensorflow as tf
print(tf.__version__)
"""
Explanation: Import libraries
Import the libraries for this tutorial. This tutorial has been tested with TensorFlow version 2.3.
End of explanation
"""
# Copy the data to your notebook instance
! gsutil cp 'gs://explanations_sample_data/bike-data.csv' ./
"""
Explanation: Download and preprocess the data
In this section you'll download the data to train your model from a public GCS bucket. The original data is from the BigQuery datasets linked above. For your convenience, we've joined the London bike and NOAA weather tables, done some preprocessing, and provided a subset of that dataset here.
End of explanation
"""
data = pd.read_csv("bike-data.csv")
# Shuffle the data
data = data.sample(frac=1, random_state=2)
# Drop rows with null values
data = data[data["wdsp"] != 999.9]
data = data[data["dewp"] != 9999.9]
# Rename some columns for readability
data = data.rename(columns={"day_of_week": "weekday"})
data = data.rename(columns={"max": "max_temp"})
data = data.rename(columns={"dewp": "dew_point"})
# Drop columns you won't use to train this model
data = data.drop(
columns=[
"start_station_name",
"end_station_name",
"bike_id",
"snow_ice_pellets",
]
)
# Convert trip duration from seconds to minutes so it's easier to understand
data["duration"] = data["duration"].apply(lambda x: float(x / 60))
# Preview the first 5 rows of training data
data.head()
"""
Explanation: Read the data with Pandas
You'll use Pandas to read the data into a DataFrame and then do some additional pre-processing.
End of explanation
"""
# Save duration to its own DataFrame and remove it from the original DataFrame
labels = data["duration"]
data = data.drop(columns=["duration"])
"""
Explanation: Next, you will separate the data into features ('data') and labels ('labels').
End of explanation
"""
# Use 80/20 train/test split
train_size = int(len(data) * 0.8)
print("Train size: %d" % train_size)
print("Test size: %d" % (len(data) - train_size))
# Split your data into train and test sets
train_data = data[:train_size]
train_labels = labels[:train_size]
test_data = data[train_size:]
test_labels = labels[train_size:]
"""
Explanation: Split data into train and test sets
You'll split your data into train and test sets using an 80 / 20 train / test split.
End of explanation
"""
# Build your model
model = tf.keras.Sequential(name="bike_predict")
model.add(
tf.keras.layers.Dense(
64, input_dim=len(train_data.iloc[0]), activation="relu"
)
)
model.add(tf.keras.layers.Dense(32, activation="relu"))
model.add(tf.keras.layers.Dense(1))
# Compile the model and see a summary
model.compile(loss="mean_squared_logarithmic_error", optimizer="adam")
model.summary()
"""
Explanation: Build, train, and evaluate our model with Keras
This section shows how to build, train, evaluate, and get local predictions from a model by using the Keras Sequential API. The model takes your 10 features as input and predicts the trip duration in minutes.
End of explanation
"""
batch_size = 256
epochs = 3
input_train = tf.data.Dataset.from_tensor_slices(train_data)
output_train = tf.data.Dataset.from_tensor_slices(train_labels)
input_train = input_train.batch(batch_size).repeat()
output_train = output_train.batch(batch_size).repeat()
train_dataset = tf.data.Dataset.zip((input_train, output_train))
"""
Explanation: Create an input data pipeline with tf.data
Per best practices, we will use tf.data to create our input data pipeline. Our data all fits in an in-memory DataFrame, so we will use tf.data.Dataset.from_tensor_slices to create our pipeline.
End of explanation
"""
# This will take about a minute to run
# To keep training time short, you're not using the full dataset
model.fit(
train_dataset, steps_per_epoch=train_size // batch_size, epochs=epochs
)
"""
Explanation: Train the model
Now we train the model. We specify the number of epochs to train for and tell the model how many steps to expect per epoch.
End of explanation
"""
# Run evaluation
results = model.evaluate(test_data, test_labels)
print(results)
# Send test instances to model for prediction
predict = model.predict(test_data[:5])
# Preview predictions on the first 5 examples from your test dataset
for i, val in enumerate(predict):
print(f"Predicted duration: {round(val[0])}")
print(f"Actual duration: {test_labels.iloc[i]} \n")
"""
Explanation: Evaluate the trained model locally
End of explanation
"""
export_path = "gs://" + BUCKET_NAME + "/explanations/mymodel"
model.save(export_path)
print(export_path)
"""
Explanation: Export the model as a TF 2.x SavedModel
When using TensorFlow 2.x, you export the model as a SavedModel and upload it to Cloud Storage.
End of explanation
"""
! saved_model_cli show --dir $export_path --all
"""
Explanation: Use TensorFlow's saved_model_cli to inspect the model's SignatureDef. We'll use this information when we deploy our model to AI Explanations in the next section.
End of explanation
"""
# Print the names of your tensors
print("Model input tensor: ", model.input.name)
print("Model output tensor: ", model.output.name)
from explainable_ai_sdk.metadata.tf.v2 import SavedModelMetadataBuilder
builder = SavedModelMetadataBuilder(export_path)
builder.set_numeric_metadata(
model.input.name.split(":")[0],
input_baselines=[train_data.median().values.tolist()],
index_feature_mapping=train_data.columns.tolist(),
)
builder.save_metadata(export_path)
"""
Explanation: Deploy the model to AI Explanations
In order to deploy the model to Explanations, you need to generate an explanation_metadata.json file and upload it to the Cloud Storage bucket with your SavedModel. Then you'll deploy the model using gcloud.
Prepare explanation metadata
In order to deploy this model to AI Explanations, you need to create an explanation_metadata.json file with information about your model inputs, outputs, and baseline. You can use the Explainable AI SDK to generate most of the fields.
The value for input_baselines tells the explanations service what the baseline input should be for your model. Here you're using the median for all of your input features. That means the baseline prediction for this model will be the trip duration your model predicts for the median of each feature in your dataset.
Since this model accepts a single numpy array with all numerical features, you can optionally pass an index_feature_mapping list to AI Explanations to make the API response easier to parse. When you provide a list of feature names via this parameter, the service will return a key / value mapping of each feature with its corresponding attribution value.
End of explanation
"""
import datetime
MODEL = "bike" + datetime.datetime.now().strftime("%d%m%Y%H%M%S")
# Create the model if it doesn't exist yet (you only need to run this once)
! gcloud ai-platform models create $MODEL --enable-logging --region $REGION
"""
Explanation: Since this is a regression model (predicting a numerical value), the baseline prediction will be the same for every example we send to the model. If this were instead a classification model, each class would have a different baseline prediction.
Create the model
End of explanation
"""
# Each time you create a version the name should be unique
VERSION = "v1"
# Create the version with gcloud
explain_method = 'integrated-gradients'
! gcloud beta ai-platform versions create $VERSION \
--model $MODEL \
--origin $export_path \
--runtime-version 2.3 \
--framework TENSORFLOW \
--python-version 3.7 \
--machine-type n1-standard-4 \
--explanation-method $explain_method \
--num-integral-steps 25 \
--region $REGION
# Make sure the model deployed correctly. State should be `READY` in the following log
! gcloud ai-platform versions describe $VERSION --model $MODEL --region $REGION
"""
Explanation: Create the model version
Creating the version will take ~5-10 minutes. Note that your first deploy could take longer.
End of explanation
"""
# Format data for prediction to your model
prediction_json = {
model.input.name.split(":")[0]: test_data.iloc[0].values.tolist()
}
"""
Explanation: Get predictions and explanations
Now that your model is deployed, you can use the AI Platform Prediction API to get feature attributions. You'll pass it a single test example here and see which features were most important in the model's prediction. Here you'll use the Explainable AI SDK to get your prediction and explanation. You can also use gcloud.
Format your explanation request
To make your AI Explanations request, you need to create a JSON object with your test data for prediction.
End of explanation
"""
remote_ig_model = explainable_ai_sdk.load_model_from_ai_platform(
project=PROJECT_ID, model=MODEL, version=VERSION, region=REGION
)
ig_response = remote_ig_model.explain([prediction_json])
"""
Explanation: Send the explain request
You can use the Explainable AI SDK to send explanation requests to your deployed model.
End of explanation
"""
attr = ig_response[0].get_attribution()
predicted = round(attr.example_score, 2)
print("Predicted duration: " + str(predicted) + " minutes")
print("Actual duration: " + str(test_labels.iloc[0]) + " minutes")
"""
Explanation: Understanding the explanations response
First, let's look at the trip duration your model predicted and compare it to the actual value.
End of explanation
"""
ig_response[0].visualize_attributions()
# The above graph is missing because ig_response[0].get_attribution()
# does not fill `_values_dict` when the model is coming from AI Platform.
# below is a workaround, which redefines the Attribution with values_dict:
import IPython
import numpy as np
from explainable_ai_sdk.common import attribution
from xai_tabular_widget import TabularWidget
test_data_dict = dict(test_data.iloc[0])
for key, item in test_data_dict.items():
test_data_dict[key] = np.array([item], dtype=np.float32)
raw_attribution = ig_response[0].get_attribution()
attribution = attribution.Attribution(
output_name=raw_attribution.output_name,
baseline_score=raw_attribution.baseline_score,
example_score=raw_attribution.example_score,
values_dict=test_data_dict,
attrs_dict=raw_attribution.attrs_dict,
label_index=raw_attribution.label_index,
processed_attrs_dict=raw_attribution._get_attributions_dict(),
approx_error=raw_attribution.approx_error,
label_name=raw_attribution.label_name,
)
target_label_attr = attribution.to_json(include_input_values=True)
widget = TabularWidget()
def input_to_widget():
widget.load_data_from_json(target_label_attr)
widget.on_trait_change(input_to_widget, "ready")
IPython.display.display(widget)
"""
Explanation: Next let's look at the feature attributions for this particular example. Positive attribution values mean a particular feature pushed your model prediction up by that amount, and vice versa for negative attribution values.
End of explanation
"""
# Prepare 10 test examples to your model for prediction
pred_batch = []
for i in range(10):
pred_batch.append(
{model.input.name.split(":")[0]: test_data.iloc[i].values.tolist()}
)
test_response = remote_ig_model.explain(pred_batch)
"""
Explanation: Check your explanations and baselines
To better make sense of the feature attributions you're getting, you should compare them with your model's baseline. In most cases, the sum of your attribution values + the baseline should be very close to your model's predicted value for each input. Also note that for regression models, the baseline_score returned from AI Explanations will be the same for each example sent to your model. For classification models, each class will have its own baseline.
In this section you'll send 10 test examples to your model for prediction in order to compare the feature attributions with the baseline. Then you'll run each test example's attributions through two sanity checks in the sanity_check_explanations method.
End of explanation
"""
def sanity_check_explanations(
example, mean_tgt_value=None, variance_tgt_value=None
):
passed_test = 0
total_test = 1
# `attributions` is a dict where keys are the feature names
# and values are the feature attributions for each feature
attr = example.get_attribution()
baseline_score = attr.baseline_score
# sum_with_baseline = np.sum(attribution_vals) + baseline_score
predicted_val = attr.example_score
# Sanity check 1
# The prediction at the input is equal to that at the baseline.
# Please use a different baseline. Some suggestions are: random input, training
# set mean.
if abs(predicted_val - baseline_score) <= 0.05:
print("Warning: example score and baseline score are too close.")
print("You might not get attributions.")
else:
passed_test += 1
# Sanity check 2 (only for models using Integrated Gradient explanations)
# Ideally, the sum of the integrated gradients must be equal to the difference
    # in the prediction probability at the input and baseline. Any discrepancy in
# these two values is due to the errors in approximating the integral.
if explain_method == "integrated-gradients":
total_test += 1
want_integral = predicted_val - baseline_score
got_integral = sum(attr.post_processed_attributions.values())
if abs(want_integral - got_integral) / abs(want_integral) > 0.05:
print("Warning: Integral approximation error exceeds 5%.")
print(
"Please try increasing the number of integrated gradient steps."
)
else:
passed_test += 1
print(passed_test, " out of ", total_test, " sanity checks passed.")
for response in test_response:
sanity_check_explanations(response)
"""
Explanation: In the function below you perform two sanity checks for models using Integrated Gradient (IG) explanations and one sanity check for models using Sampled Shapley.
End of explanation
"""
# This is the number of data points you'll send to the What-if Tool
WHAT_IF_TOOL_SIZE = 500
from witwidget.notebook.visualization import WitConfigBuilder, WitWidget

# The model's input feature names, in training order (the conversion below needs them)
feature_names = test_data.columns.tolist()

def create_list(ex_dict):
new_list = []
for i in feature_names:
new_list.append(ex_dict[i])
return new_list
def example_dict_to_input(example_dict):
return {"dense_input": create_list(example_dict)}
from collections import OrderedDict
wit_data = test_data.iloc[:WHAT_IF_TOOL_SIZE].copy()
wit_data["duration"] = test_labels[:WHAT_IF_TOOL_SIZE]
wit_data_dict = wit_data.to_dict(orient="records", into=OrderedDict)
config_builder = (
WitConfigBuilder(wit_data_dict)
.set_ai_platform_model(
PROJECT_ID, MODEL, VERSION, adjust_example=example_dict_to_input
)
.set_target_feature("duration")
.set_model_type("regression")
)
WitWidget(config_builder)
"""
Explanation: Understanding AI Explanations with the What-If Tool
In this section you'll use the What-If Tool to better understand how your model is making predictions. See the cell below the What-if Tool for visualization ideas.
The What-If-Tool expects data with keys for each feature name, but your model expects a flat list. The functions below convert data to the format required by the What-If Tool.
End of explanation
"""
# # Delete model version resource
# ! gcloud ai-platform versions delete $VERSION --quiet --model $MODEL --region $REGION
# # Delete model resource
# ! gcloud ai-platform models delete $MODEL --quiet --region $REGION
"""
Explanation: What-If Tool visualization ideas
On the x-axis, you'll see the predicted trip duration for the test inputs you passed to the What-If Tool. Each circle represents one of your test examples. If you click on a circle, you'll be able to see the feature values for that example along with the attribution values for each feature.
You can edit individual feature values and re-run prediction directly within the What-If Tool. Try changing distance, click Run inference and see how that affects the model's prediction
You can sort features for an individual example by their attribution value, try changing the sort from the attributions dropdown
The What-If Tool also lets you create custom visualizations. You can do this by changing the values in the dropdown menus above the scatter plot visualization. For example, you can sort data points by inference error, or by their similarity to a single datapoint.
Cleaning up
End of explanation
"""
|
planetlabs/notebooks
|
jupyter-notebooks/data-api-tutorials/search_and_download_quickstart.ipynb
|
apache-2.0
|
# Stockton, CA bounding box (created via geojson.io)
geojson_geometry = {
"type": "Polygon",
"coordinates": [
[
[-121.59290313720705, 37.93444993515032],
[-121.27017974853516, 37.93444993515032],
[-121.27017974853516, 38.065932950547484],
[-121.59290313720705, 38.065932950547484],
[-121.59290313720705, 37.93444993515032]
]
]
}
"""
Explanation: Getting started with the Data API
Let's search & download some imagery of farmland near Stockton, CA. Here are the steps we'll follow:
Define an Area of Interest (AOI)
Save our AOI's coordinates to GeoJSON format
Create a few search filters
Search for imagery using those filters
Activate an image for downloading
Download an image
Requirements
Python 2.7 or 3+
requests
A Planet API Key
Define an Area of Interest
An Area of Interest (or AOI) is how we define the geographic "window" out of which we want to get data.
For the Data API, this could be a simple bounding box with four corners, or a more complex shape, as long as the definition is in GeoJSON format.
For this example, let's just use a simple box. To make it easy, I'll use geojson.io to quickly draw a shape & generate GeoJSON output for our box:
We only need the "geometry" object for our Data API request:
End of explanation
"""
# get images that overlap with our AOI
geometry_filter = {
"type": "GeometryFilter",
"field_name": "geometry",
"config": geojson_geometry
}
# get images acquired within a date range
date_range_filter = {
"type": "DateRangeFilter",
"field_name": "acquired",
"config": {
"gte": "2016-08-31T00:00:00.000Z",
"lte": "2016-09-01T00:00:00.000Z"
}
}
# only get images which have <50% cloud coverage
cloud_cover_filter = {
"type": "RangeFilter",
"field_name": "cloud_cover",
"config": {
"lte": 0.5
}
}
# combine our geo, date, cloud filters
combined_filter = {
"type": "AndFilter",
"config": [geometry_filter, date_range_filter, cloud_cover_filter]
}
"""
Explanation: Create Filters
Now let's set up some filters to further constrain our Data API search:
End of explanation
"""
import os
import json
import requests
from requests.auth import HTTPBasicAuth
# API Key stored as an env variable
PLANET_API_KEY = os.getenv('PL_API_KEY')
item_type = "PSScene"
# API request object
search_request = {
"item_types": [item_type],
"filter": combined_filter
}
# fire off the POST request
search_result = \
requests.post(
'https://api.planet.com/data/v1/quick-search',
auth=HTTPBasicAuth(PLANET_API_KEY, ''),
json=search_request)
print(json.dumps(search_result.json(), indent=1))
"""
Explanation: Searching: Items and Assets
Planet's products are categorized as items and assets: an item is a single picture taken by a satellite at a certain time. Items have multiple asset types including the image in different formats, along with supporting metadata files.
For this demonstration, let's get a satellite image that is best suited for analytic applications; i.e., a 4-band image with spectral data for Red, Green, Blue and Near-infrared values. To get the image we want, we will specify an item type of PSScene, and asset type ps4b_analytic (to get a PSScene4Band Analytic asset).
You can learn more about item & asset types in Planet's Data API here.
Now let's search for all the items that match our filters:
End of explanation
"""
# extract image IDs only
image_ids = [feature['id'] for feature in search_result.json()['features']]
print(image_ids)
"""
Explanation: Our search returns metadata for all of the images within our AOI that match our date range and cloud coverage filters. It looks like there are multiple images here; let's extract a list of just those image IDs:
End of explanation
"""
# For demo purposes, just grab the first image ID
id0 = image_ids[0]
id0_url = 'https://api.planet.com/data/v1/item-types/{}/items/{}/assets'.format(item_type, id0)
# Returns JSON metadata for assets in this ID. Learn more: planet.com/docs/reference/data-api/items-assets/#asset
result = \
requests.get(
id0_url,
auth=HTTPBasicAuth(PLANET_API_KEY, '')
)
# List of asset types available for this particular satellite image
print(result.json().keys())
"""
Explanation: Since we just want a single image, and this is only a demonstration, for our purposes here we can arbitrarily select the first image in that list. Let's do that, and get the asset list available for that image:
End of explanation
"""
# This is "inactive" if the "analytic" asset has not yet been activated; otherwise 'active'
print(result.json()['ps4b_analytic']['status'])
"""
Explanation: Activation and Downloading
The Data API does not pre-generate assets, so they are not always immediately available to download. In order to download an asset, we first have to activate it.
Remember, earlier we decided we wanted a color-corrected image best suited for analytic applications. We can check the status of the PSScene 4-Band analytic asset we want to download like so:
End of explanation
"""
# Parse out useful links
links = result.json()[u"ps4b_analytic"]["_links"]
self_link = links["_self"]
activation_link = links["activate"]
# Request activation of the 'analytic' asset:
activate_result = \
requests.get(
activation_link,
auth=HTTPBasicAuth(PLANET_API_KEY, '')
)
"""
Explanation: Let's now go ahead and activate that asset for download:
End of explanation
"""
activation_status_result = \
requests.get(
self_link,
auth=HTTPBasicAuth(PLANET_API_KEY, '')
)
print(activation_status_result.json()["status"])
"""
Explanation: At this point, we wait for the activation status for the asset we are requesting to change from inactive to active. We can monitor this by polling the "status" of the asset:
End of explanation
"""
# Image can be downloaded by making a GET with your Planet API key, from here:
download_link = activation_status_result.json()["location"]
print(download_link)
"""
Explanation: Once the asset has finished activating (status is "active"), we can download it.
Note: the download link on an active asset is temporary
End of explanation
"""
|
bjshaw/phys202-2015-work
|
project/NeuralNetworks.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
from IPython.html.widgets import interact
from sklearn.datasets import load_digits
digits = load_digits()
print(digits.data.shape)
def show_digit(i):
plt.matshow(digits.images[i]);
interact(show_digit, i=(0,100));
"""
Explanation: Neural Networks
This project was created by Brian Granger. All content is licensed under the MIT License.
Introduction
Neural networks are a class of algorithms that can learn how to compute the value of a function given previous examples of the functions output. Because neural networks are capable of learning how to compute the output of a function based on existing data, they generally fall under the field of Machine Learning.
Let's say that we don't know how to compute some function $f$:
$$ f(x) \rightarrow y $$
But we do have some data about the output that $f$ produces for particular input $x$:
$$ f(x_1) \rightarrow y_1 $$
$$ f(x_2) \rightarrow y_2 $$
$$ \ldots $$
$$ f(x_n) \rightarrow y_n $$
A neural network learns how to use that existing data to compute the value of the function $f$ on yet unseen data. Neural networks get their name from the similarity of their design to how neurons in the brain work.
Work on neural networks began in the 1940s, but significant advancements were made in the 1970s (backpropagation) and more recently, since the late 2000s, with the advent of deep neural networks. These days neural networks are starting to be used extensively in products that you use. A great example of the application of neural networks is the recently released Flickr automated image tagging. With these algorithms, Flickr is able to determine what tags ("kitten", "puppy") should be applied to each photo, without human involvement.
In this case the function takes an image as input and outputs a set of tags for that image:
$$ f(image) \rightarrow {tag_1, \ldots} $$
For the purpose of this project, good introductions to neural networks can be found at:
The Nature of Code, Daniel Shiffman.
Neural Networks and Deep Learning, Michael Nielsen.
Data Science from Scratch, Joel Grus
The Project
Your general goal is to write Python code to predict the number associated with handwritten digits. The dataset for these digits can be found in sklearn:
End of explanation
"""
digits.target
"""
Explanation: The actual, known values (0,1,2,3,4,5,6,7,8,9) associated with each image can be found in the target array:
End of explanation
"""
|
elect000/Journal
|
value-tracker/report/report.ipynb
|
bsd-3-clause
|
import quandl
data = quandl.get('NIKKEI/INDEX')
data[:5]
data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
data_normal[-10:-1]  # show the latest 10 data points
"""
Explanation: How the data is obtained
Here we retrieve data from Quandl.com. The Nikkei Stock Average data obtained this time contains the
date, open, high, low, and close prices, but older records only have a closing price, so we use the closing price.
*** TODO Work out how far back the data should go to be most effective (this affects processing time and accuracy).
End of explanation
"""
data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
data_normal[:5]
type(data_normal.index[0])
data_normal.index
"""
Explanation: Because gaps in the data were noticeable, we resample to weekly data
End of explanation
"""
import numpy as np
import pandas as pd
from scipy import stats
from pandas.core import datetools
# grapgh plotting
from matplotlib import pylab as plt
import seaborn as sns
%matplotlib inline
# settings graph size
from matplotlib.pylab import rcParams
rcParams['figure.figsize'] = 15,6
# model
import statsmodels.api as sm
"""
Explanation: How the data is used
We import the required Python packages here.
*** TODO Since the implementation will be done in Clojure, find or create equivalent packages
End of explanation
"""
plt.plot(data_normal)
"""
Explanation: The graph below suggests that even a forecast based only on data from around 2000 onward may be good enough.
End of explanation
"""
# ARIMA model prediction ... (This is self thought (not automatically))
diff = data_normal - data_normal.shift()
diff = diff.dropna()
diff.head()
# difference plot
plt.plot(diff)
"""
Explanation: As preparation for fitting an ARIMA model, we compute the change (first difference) of the stock price.
End of explanation
"""
# automatically ARIMA prediction function (using AIC)
resDiff = sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc')
# few Times ...(orz...)
"""
Explanation: We compute the AIC to judge how good each model is, but this takes a while (about three minutes).
(Doing the same with a SARIMA model takes even longer.)
*** TODO Measure and optimize the run time; investigate how machine specs relate to performance
End of explanation
"""
resDiff
# search min
resDiff['aic_min_order']
"""
Explanation: From the result above, we find that AR=2, MA=2 gives the best model.
End of explanation
"""
# we found the AR and MA orders (x, y) automatically above
from statsmodels.tsa.arima_model import ARIMA
ARIMAx_1_y = ARIMA(data_normal,
order=(resDiff['aic_min_order'][0], 1,
resDiff['aic_min_order'][1])).fit(disp=False)  # disp=False suppresses optimizer output
# AR = resDiff[...][0] / I = 1 / MA = resDiff[...][1]
ARIMAx_1_y.params
"""
Explanation: For comparison, we first fit an ARIMA model rather than a SARIMA model.
This run does not take very long.
End of explanation
"""
# check Residual error (... I think this is "White noise")
# this is not Arima ... (Periodicity remained)
resid = ARIMAx_1_y.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(resid.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(resid, lags=40, ax=ax2)
# ok?
# We test SARIMA_model
"""
Explanation: We can see that the prediction does not fluctuate much
End of explanation
"""
# predict SARIMA model by myself (not automatically)
import statsmodels.api as sm
SARIMAx_1_y_111 = sm.tsa.SARIMAX(data_normal,
order=(2,1,2),seasonal_order=(1,1,1,12))
SARIMAx_1_y_111 = SARIMAx_1_y_111.fit()
# order ... from ARIMA model // seasonal_order ... 1 1 1 ... ?
print(SARIMAx_1_y_111.summary())
# maybe use "Box-Jenkins method" ...
# https://github.com/statsmodels/statsmodels/issues/3620 for error
"""
Explanation: Now we try fitting a SARIMA model.
While the ARIMA model did not take long to run, the SARIMA model is somewhat slower and produces warnings, which is a drawback.
End of explanation
"""
# check Residual error
residSARIMA = SARIMAx_1_y_111.resid
fig = plt.figure(figsize=(12,8))
ax1 = fig.add_subplot(211)
fig = sm.graphics.tsa.plot_acf(residSARIMA.values.squeeze(), lags=40, ax=ax1)
ax2 = fig.add_subplot(212)
fig = sm.graphics.tsa.plot_pacf(residSARIMA, lags=40, ax=ax2)
# prediction
pred = SARIMAx_1_y_111.predict(start = 1, end = '2018-01-15')
# (print(SARIMAx_1_y_111.__doc__))
# In principle we should be able to forecast beyond the index (into the future),
# but for some reason that raises an error, so we only predict over the existing data range
# TODO identify the cause of the error
# plot real data and predict data
plt.plot(data_normal[:-150:-1])
plt.plot(pred[:-150:-1], "r")
"""
Explanation: At a glance the result does not look very different from the ARIMA model, but according to other papers this approach tends to give better forecasts.
*** TODO Record the results when less data is used
End of explanation
"""
data_extra = pd.concat( [data_normal, pred[data_normal.index[-1] + 1:]] )
plt.plot(data_extra[:-150:-1])
"""
Explanation: Below, the prediction is concatenated with the observed data
End of explanation
"""
# require
import quandl
import numpy as np
import pandas as pd
from scipy import stats
from pandas.core import datetools
import statsmodels.api as sm
def get_data(quandl_name):
data = quandl.get(quandl_name)
return data
def set_data(data):
data_normal = (((data['Close Price']).to_frame())[-10000:-1])['Close Price']
data_normal = data_normal.fillna(method='pad').resample('W-MON').fillna(method='pad')
return data_normal
def sarima(quandl_name):
data_normal = set_data(get_data(quandl_name))
diff = (data_normal - (data_normal.shift())).dropna()
resDiff = aic(diff)['aic_min_order']
ar = resDiff[0]
ma = resDiff[1]
SARIMAx_1_y_111 = \
sm.tsa.SARIMAX(data_normal, order=(int(ar),1, int(ma)),seasonal_order=(1,1,1,12))
return SARIMAx_1_y_111
def pred_data(SARIMAx_1_y_111, predict_date):
SARIMAx_1_y_111 = SARIMAx_1_y_111.fit()
print(SARIMAx_1_y_111.summary())
pred = SARIMAx_1_y_111.predict(start = 1, end = predict_date)
return pd.concat( [data_normal, pred[data_normal.index[-1] + 1:]] )
def aic (diff):
return (sm.tsa.arma_order_select_ic(diff, ic='aic', trend='nc'))
# Putting all of the above together
def predict_data(quandl_name, predict_date):
    sarima_model = sarima(quandl_name)
    return pred_data(sarima_model, predict_date)
predict_res = predict_data('NIKKEI/INDEX','2018-01-15')
plt.plot(predict_res[:-150:-1])
"""
Explanation: Blue is the observed series and red is the prediction. The predicted values come reasonably close to the actual ones.
For later use, we wrap the steps above into functions
End of explanation
"""
|
staeiou/github-analytics
|
github-organizations-intro.ipynb
|
mit
|
!pip install pygithub
!pip install geopy
!pip install ipywidgets
from github import Github
#this is my private login credentials, stored in ghlogin.py
import ghlogin
g = Github(login_or_token=ghlogin.gh_user, password=ghlogin.gh_passwd)
"""
Explanation: Querying the GitHub API for repositories and organizations
By Stuart Geiger and Jamie Whitacre, made at a SciPy 2016 sprint. See the rendered, interactive, embeddable map here.
A Warning: When logged in, you can push, delete, comment, etc. using the API
The Github API is powerful. Almost anything you can do on Github can be done through the API. While this notebook is only taking you through the more passive functions that read data from Github, there are also many functions that let you make changes to Github. Be careful if you are trying out a new function!
Getting started with the Github API
We are using the PyGithub library, and you are going to want to log in for much higher rate limits. You can put your username and password directly into a notebook (not recommended!) or put it in a file named "ghlogin.py" and then import it. Make sure that your ghlogin.py file is ignored by git in your .gitignore file.
Packages
We are using pygithub, geopy, and ipywidgets in this notebook. We are also using datetime, but that comes with python.
End of explanation
"""
def vdir(obj):
return [x for x in dir(obj) if not x.startswith('_')]
vdir(g)
"""
Explanation: With this Github object, you can retreive all kinds of Github objects, which you can then futher explore.
Exploring methods and properties of objects.
A quick lightning tutorial inside this tutorial: there are many ways to explore the properties and methods of various objects. This is very useful when exploring a new method.
One way is to use tab completion, which is supported in Jupyter notebooks. Once you have executed code storing an object to a variable, type the variable name, then a dot, then hit tab to explore. If you don't have this, you can also use an extended version of the dir function. This vdir() function shows the methods and properties of an object, excluding those that begin with underscores (which are ones you will likely not use in this tutorial).
End of explanation
"""
user = g.get_user("staeiou")
vdir(user)
print(user.name)
print(user.created_at)
print(user.location)
"""
Explanation: Users
To get a user object, call the get_user() function of the main Github object.
End of explanation
"""
repo = g.get_repo("jupyter/notebook")
vdir(repo)
"""
Explanation: Repositories
Repositories work similarly to users. You pass the name of the user or organization that owns the repository, then a slash, then the name of the repository. Some of these objects are easily printed (like name, description), while others are fully fledged Github objects in themselves, with many methods and properties (like organization or commit)
End of explanation
"""
print(repo.name)
print(repo.description)
print(repo.organization)
print(repo.organization.name)
print(repo.organization.location)
print(repo.language)
print(repo.get_contributors())
print(repo.get_commits())
"""
Explanation: There are lots of properties or methods of objects that return other objects (like repos, users, organizations), and you can quickly access properties or methods of these objects with a dot.
There there are also methods that return lists of objects, like repo.get_commits() or repo.get_contributors(). You need to iterate through these lists, or access them with indexes. What you usually get from these lists are also objects that have their own properties and methods.
End of explanation
"""
commits = repo.get_commits()
commit = commits[0]
print("Author name: ", commit.author.name)
print("Committer name: ", commit.committer.name)
print("Lines added: ", commit.stats.additions)
print("Lines deleted: ", commit.stats.deletions)
print("Commit message:\n---------\n", commit.commit.message)
"""
Explanation: Commits
End of explanation
"""
import datetime
one_month_ago = datetime.datetime.now() - datetime.timedelta(days=30)
net_lines_added = 0
num_commits = 0
for commit in repo.get_commits(since = one_month_ago):
net_lines_added += commit.stats.additions
net_lines_added -= commit.stats.deletions
num_commits += 1
print(net_lines_added, num_commits)
"""
Explanation: Working with timedeltas
This code iterates through all the commits made to the repository in the last month, and then counts the number of commits and the net lines added/removed.
End of explanation
"""
issues = repo.get_issues()
for issue in issues:
    last_updated_delta = datetime.datetime.now() - issue.updated_at
    if last_updated_delta > datetime.timedelta(days=365):
        print(issue.title, last_updated_delta.days)

# Inspect the attributes of an issue object (moved below the loop so `issue` is defined)
dir(issue)
"""
Explanation: Issues
Issues are objects similar to commits.
End of explanation
"""
org = g.get_organization("jupyter")
print(org.name)
print(org.created_at)
print(org.html_url)
"""
Explanation: Organizations
Organizations are objects too, which have similar properties:
End of explanation
"""
repos = {}
for repo in org.get_repos():
repos[repo.name] = repo.forks_count
repos
"""
Explanation: We can go through all the repositories in the organization with the get_repos() function. It returns a list of repository objects, which have their own properties and methods.
In this example, we iterate through all the repositories in an organization and, starting from an empty dictionary, set each key to the repository's name and each value to the number of times the repository has been forked.
End of explanation
"""
from geopy.geocoders import Nominatim
geolocator = Nominatim()
uk_loc = geolocator.geocode("UK")
print(uk_loc.longitude,uk_loc.latitude)
us_loc = geolocator.geocode("USA")
print(us_loc.longitude,us_loc.latitude)
bids_loc = geolocator.geocode("Doe Library, Berkeley CA, 94720 USA")
print(bids_loc.longitude,bids_loc.latitude)
"""
Explanation: Getting location data for an organization's contributors
Mapping and geolocation
Before we get into how to query GitHub, we know we will have to get location coordinates for each contributor, and then plot it on a map. So we are going to do that first.
For geolocation, we are using geopy's geolocator object, which is based on Open Street Map's Nominatim API. Nominatim takes in any arbitrary location data and then returns a location object, which includes the best latitude and longitude coordinates it can find.
This does mean that we will have more error than if we did this manually, and there might be vastly different levels of accuracy. For example, if someone just has "UK" as their location, it will show up in the geographic center of the UK, which is somewhere on the edge of the Lake District. "USA" resolves to somewhere in Kansas. However, you can get very specific location data if you put in more detail.
End of explanation
"""
import ipywidgets
from ipyleaflet import (
Map,
Marker,
TileLayer, ImageOverlay,
Polyline, Polygon, Rectangle, Circle, CircleMarker,
GeoJSON,
DrawControl
)
center = [30.0, 5.0]
zoom = 2
m = Map(default_tiles=TileLayer(opacity=1.0), center=center, zoom=zoom, layout=ipywidgets.Layout(height="600px"))
uk_mark = Marker(location=[uk_loc.latitude,uk_loc.longitude])
uk_mark.visible
m += uk_mark
us_mark = Marker(location=[us_loc.latitude,us_loc.longitude])
us_mark.visible
m += us_mark
bids_mark = Marker(location=[bids_loc.latitude,bids_loc.longitude])
bids_mark.visible
m += bids_mark
"""
Explanation: We can plot points on a map using ipyleaflets and ipywidgets. We first set up a map object, which is created with various parameters. Then we create Marker objects, which are then appended to the map. We then display the map inline in this notebook.
End of explanation
"""
g.rate_limiting
reset_time = g.rate_limiting_resettime
reset_time
"""
Explanation: Rate limiting
Now that we have made a few requests, we can see what our rate limit is. If you are logged in, you get 5,000 requests per hour. If you are not, you only get 60 per hour. You can use methods in the GitHub object to see your remaining queries, hourly limit, and reset time. We have used less than 100 of our 5,000 requests with these calls.
End of explanation
"""
import datetime
def minutes_to_reset(github):
reset_time = github.rate_limiting_resettime
timedelta_to_reset = datetime.datetime.fromtimestamp(reset_time) - datetime.datetime.now()
return timedelta_to_reset.seconds / 60
minutes_to_reset(g)
"""
Explanation: This value is in seconds since the UTC epoch (Jan 1st, 1970), so we have to convert it. Here is a quick function that takes a GitHub object, queries the API to find our next reset time, and converts it to minutes.
End of explanation
"""
def get_org_contributor_locations(github, org_name):
"""
For a GitHub organization, get location for contributors to any repo in the org.
Returns a dictionary of {username URLS : geopy Locations}, then a dictionary of various metadata.
"""
# Set up empty dictionaries and metadata variables
contributor_locs = {}
locations = []
none_count = 0
error_count = 0
user_loc_count = 0
duplicate_count = 0
geolocator = Nominatim()
# For each repo in the organization
for repo in github.get_organization(org_name).get_repos():
#print(repo.name)
# For each contributor in the repo
for contributor in repo.get_contributors():
print('.', end="")
# If the contributor_locs dictionary doesn't have an entry for this user
if contributor_locs.get(contributor.url) is None:
# Try-Except block to handle API errors
try:
# If the contributor has no location in profile
if(contributor.location is None):
#print("No Location")
none_count += 1
else:
# Get coordinates for location string from Nominatim API
location=geolocator.geocode(contributor.location)
#print(contributor.location, " | ", location)
# Add a new entry to the dictionary. Value is user's URL, key is geocoded location object
contributor_locs[contributor.url] = location
user_loc_count += 1
except Exception:
print('!', end="")
error_count += 1
else:
duplicate_count += 1
return contributor_locs,{'no_loc_count':none_count, 'user_loc_count':user_loc_count,
'duplicate_count':duplicate_count, 'error_count':error_count}
"""
Explanation: Querying GitHub for location data
For our mapping script, we want to get profiles for everyone who has made a commit to any of the repositories in the Jupyter organization, find their location (if any), then add it to a list. The API has a get_contributors function for repo objects, which returns a list of contributors ordered by number of commits, but not one that works across all repos in an org. So we have to iterate through all the repos in the org and run the get_contributors method for each one. We also want to make sure we don't add any duplicates to our list and over-represent any areas, so we keep track of people in a dictionary.
I've written a few functions to make it easy to retrieve and map an organization's contributors.
End of explanation
"""
usds_locs, usds_metadata = get_org_contributor_locations(g,'usds')
usds_metadata
"""
Explanation: With this, we can easily query an organization. The U.S. Digital Service (org name: usds) is a small organization that works well for testing these kinds of queries. It takes about a second per contributor to get this data, so we want to test on small orgs first. To show the status, it prints a period for each successful query and an exclamation point for each error.
The get_org_contributor_locations function takes a Github object and an organization name, and returns two dictionaries: one of user and location data, and one of metadata about the geolocation query (including the number of users without a location in their profile).
End of explanation
"""
usds_locs_nousernames = []
for contributor, location in usds_locs.items():
usds_locs_nousernames.append(location)
usds_locs_nousernames
"""
Explanation: We are going to explore this dataset, but not plot names or usernames. I'm a bit hesitant to publish location data with unique identifiers, even if people put that information in their profiles. This code iterates through the dictionary and puts location data into a list.
End of explanation
"""
def map_location_dict(map_obj,org_location_dict):
"""
Maps the locations in a dictionary of {ids : geoPy Locations}.
Must be passed a map object, then the dictionary. Returns the map object.
"""
for username, location in org_location_dict.items():
if(location is not None):
mark = Marker(location=[location.latitude,location.longitude])
mark.visible
map_obj += mark
return map_obj
center = [30.0,5.0]
zoom = 2
usds_map = Map(default_tiles=TileLayer(opacity=1.0), center=center, zoom=zoom, layout=ipywidgets.Layout(height="600px"))
usds_map = map_location_dict(usds_map, usds_locs)
"""
Explanation: Now we can map this data using another function I have written.
End of explanation
"""
usds_map
"""
Explanation: Now show the map inline! With the leaflet widget, you can zoom in and out directly in the notebook. And we can also export it to an html widget by going to the Widgets menu in Jupyter notebooks, clicking "Embed widgets," and copy/pasting this to an html file. It will not show up in rendered Jupyter notebooks on Github, but may show up in nbviewer.
End of explanation
"""
|
SKA-ScienceDataProcessor/crocodile
|
examples/notebooks/grid-predict.ipynb
|
apache-2.0
|
theta = 0.1
lam = 18000
grid_size = int(theta * lam)
def kernel_oversample(ff, Qpx, s=None, P = 1):
"""
Takes a farfield pattern and creates an oversampled convolution
function.
If the far field size is smaller than N*Qpx, we will pad it. This
essentially means we apply a sinc anti-aliasing kernel by default.
:param ff: Far field pattern
    :param Qpx: Factor to oversample by -- there will be Qpx x Qpx convolution functions
:param s: Size of convolution function to extract
:returns: Numpy array of shape [ov, ou, v, u], e.g. with sub-pixel
offsets as the outer coordinates.
"""
# Pad the far field to the required pixel size
N = ff.shape[0]
if s is None: s = N
padff = pad_mid(ff, N*Qpx*P)
# Obtain oversampled uv-grid
af = fft(padff)
# Extract kernels
return extract_oversampled(extract_mid(af, N*Qpx), Qpx, s)
"""
Explanation: First, some grid characteristics. Only theta is actually important here, the rest just decides the range of the example $u/v$ values.
End of explanation
"""
grid_size = 2047
aa_over = 256
aa_support = 10
aa_x0 = 0.375
aa_mode = 0
aa_szetan = False
aa_nifty = True
aa_parameter = numpy.pi*aa_support/2
if aa_support == 1:
print("Using trivial gridder")
aa_gcf = numpy.ones((aa_over, aa_support))
def aa(x): return numpy.ones_like(x)
elif aa_nifty:
print("Using exponential of semi-circle with beta=%d" % (aa_support))
aa = numpy.exp(aa_parameter*(numpy.sqrt(1-(2*coordinates(grid_size))**2)-1))
aa_gcf = kernel_oversample(aa, aa_over, aa_support) / grid_size
def aa(x):
return numpy.exp(aa_parameter*(numpy.sqrt(1-(2*x)**2)-1))
elif aa_szetan:
print("Using Sze-Tan's gridder with R=%d, x_0=%g" % (aa_support//2, aa_x0))
aa_gcf = sze_tan_gridder(aa_support//2, aa_x0, aa_over)
def aa(x):
return sze_tan_grid_correction_gen(aa_support//2, aa_x0, x)
print("Mean error:", sze_tan_mean_error(aa_support//2, aa_x0))
else:
print("Using PSWF with mode %d and parameter %g" % (aa_mode, aa_parameter))
aa = scipy.special.pro_ang1(aa_mode, aa_mode, aa_parameter, 2*coordinates(grid_size))[0]
aa_gcf = kernel_oversample(aa, aa_over, aa_support) / grid_size
def aa(x):
return scipy.special.pro_ang1(aa_mode, aa_mode, aa_parameter, 2*x)[0]
# Calculate appropriate step length to give us full accuracy for a field of view of size theta
du = du_opt = aa_x0/(theta/2)
print("Optimal du =", du)
# Plot gridding function
plt.rcParams['figure.figsize'] = 10, 5
r = numpy.arange(-aa_over*(aa_support//2), aa_over*((aa_support+1)//2)) / aa_over
plt.semilogy(du_opt*r, numpy.abs(numpy.transpose(aa_gcf).flatten()));
#plt.semilogy(du_opt*r, numpy.transpose(aa2_gcf).flatten().real);
plt.xticks(du_opt*numpy.arange(-(aa_support//2), ((aa_support+1)//2)+1))
plt.grid(True);plt.xlabel('u/v [$\lambda$]');plt.title('$u/v$ Gridder');plt.show()
# Plot grid correction function
theta_x0 = theta/aa_x0/2
x = coordinates(101)
plt.semilogy(theta*x/aa_x0/2, aa(x));
plt.title('$u/v$ Grid correction');plt.grid(True);plt.xlabel('l [1]')
plt.axvspan(theta/2, theta_x0/2, color='lightgray', hatch='x', alpha=0.5)
plt.axvspan(-theta/2, -theta_x0/2, color='lightgray', hatch='x', alpha=0.5)
plt.annotate('(unused)', xy=((theta+theta_x0)/4,0.9), ha='center', color='gray')
plt.annotate('(unused)', xy=(-(theta+theta_x0)/4,0.9), ha='center', color='gray');
#plt.semilogy(theta*coordinates(grid_size)/aa_x0/2, anti_aliasing_function(grid_size, aa_mode, aa_parameter));
aa_support_w = 8
aa_x0_w = 0.125
aa_szetan_w = False
aa_nifty_w = False
aa_parameter_w = numpy.pi*aa_support_w/2
if aa_support_w == 1:
print("Using trivial gridder")
aa_gcf_w = numpy.ones((aa_over, aa_support_w))
def aa_w(x): return numpy.ones_like(x)
elif aa_nifty_w:
print("Using exponential of semi-circle with beta=%d" % (aa_support))
aa_gcf_w = kernel_oversample(
numpy.exp(aa_support*(numpy.sqrt(1-(2*coordinates(grid_size))**2)-1)),
aa_over, aa_support) / grid_size
def aa_w(x):
return numpy.exp(aa_support*(numpy.sqrt(1-(2*x)**2)-1))
elif aa_szetan_w:
print("Using Sze-Tan's gridder with R=%d, x_0=%g" % (aa_support_w//2, aa_x0_w))
aa_gcf_w = sze_tan_gridder(aa_support_w//2, aa_x0_w, aa_over)
def aa_w(x):
return sze_tan_grid_correction_gen(aa_support_w//2, aa_x0_w, x)
print("Mean error:", sze_tan_mean_error(aa_support_w//2, aa_x0_w))
else:
aa_w = anti_aliasing_function(grid_size, 0, aa_parameter_w)
aa_gcf_w = kernel_oversample(aa_w, aa_over, aa_support_w) / grid_size
def aa_w(x):
return scipy.special.pro_ang1(aa_mode, aa_mode, aa_parameter_w, 2*x)[0]
# Calculate appropriate step length to give us full accuracy for a field of view of size theta
max_n = 1.0 - numpy.sqrt(1.0 - 2*(theta/2)**2)
print("max_n =", max_n)
dw = dw_opt = aa_x0_w / max_n
print("Optimal dw =", dw)
# Plot gridding function
plt.rcParams['figure.figsize'] = 10, 5
r = numpy.arange(-aa_over*(aa_support_w//2), aa_over*((aa_support_w+1)//2)) / aa_over
plt.semilogy(dw_opt*r, numpy.transpose(aa_gcf_w).flatten().real);
plt.xticks(dw_opt*numpy.arange(-(aa_support_w//2), ((aa_support_w+1)//2)+1))
plt.grid(True); plt.xlabel('w [$\lambda$]'); plt.title('$w$ Gridder'); plt.show()
x = coordinates(101)
plt.semilogy(max_n*x/aa_x0_w, aa_w(x));
plt.title('$w$ Grid correction'); plt.grid(True); plt.xlabel('$n$ [1]');
max_n_x0 = max_n/aa_x0_w/2
plt.axvspan(max_n, max_n_x0, color='lightgray', hatch='x', alpha=0.5)
plt.axvspan(-max_n, -max_n_x0, color='lightgray', hatch='x', alpha=0.5)
plt.annotate('(unused)', xy=((max_n+max_n_x0)/2,0.9), ha='center', color='gray')
plt.annotate('(unused)', xy=(-(max_n+max_n_x0)/2,0.9), ha='center', color='gray');
"""
Explanation: Determine the $u/v$ gridding function to use. There are a few choices here - trivial, exponential of semi-circle (nifty), Sze-Tan's version and PSWF. x0 decides how much of the image coordinate space we can actually use without errors rising.
We use that to calculate the appropriate grid step length du for good accuracy in our target field of view theta:
End of explanation
"""
Npt = 500
points = theta * (numpy.random.rand(Npt,2)-0.5)
#points = list(theta/10 * numpy.array(list(itertools.product(range(-5, 6), range(-5, 6)))))
#points.append((theta/3,0))
#points = numpy.array(points)
plt.rcParams['figure.figsize'] = 8, 8
plt.scatter(points[:,0], points[:,1]);
"""
Explanation: Now generate some sources on the sky. We use a random pattern to make reasonably sure that we are not hand-picking a good sky pattern.
End of explanation
"""
def predict(dist_uvw, du=du_opt, dw=dw_opt, apply_aa = False, apply_aa_w = False):
# Get image coordinates
ls, ms = numpy.transpose(points)
ns = numpy.sqrt(1.0 - ls**2 - ms**2) - 1
# Evaluate grid correction functions in uv & w
aas = numpy.ones(len(ls))
if apply_aa:
aas *= aa(du*ls) * aa(du*ms)
if apply_aa_w:
aas *= aa_w(dw*ns)
# Now simulate points, dividing out grid correction
vis = 0
for l,m, a in zip(ls, ms, aas):
vis += simulate_point(dist_uvw, l, m) / a
return vis
def predict_grid(u,v,w,ov_u,ov_v,ov_w,du=du_opt, dw=dw_opt, visualise=False):
# Generate offsets that we are going to sample at
ius, ivs, iws = numpy.meshgrid(numpy.arange(aa_support), numpy.arange(aa_support), numpy.arange(aa_support_w))
dus = du*(ius.flatten()-(aa_support//2)+ov_u/aa_over)
dvs = du*(ivs.flatten()-(aa_support//2)+ov_v/aa_over)
dws = dw*(iws.flatten()-(aa_support_w//2)+ov_w/aa_over)
# Get grid convolution function for offsets
aas = aa_gcf[ov_u,ius.flatten()] * aa_gcf[ov_v,ivs.flatten()] * aa_gcf_w[ov_w,iws.flatten()]
# Add offsets to all uvw coordinates
us = numpy.array(u)[:,numpy.newaxis] + dus[numpy.newaxis,:]
vs = numpy.array(v)[:,numpy.newaxis] + dvs[numpy.newaxis,:]
ws = numpy.array(w)[:,numpy.newaxis] + dws[numpy.newaxis,:]
# Visualise sampling pattern?
if visualise:
ax = plt.subplot(111, projection='3d')
ax.scatter(us,vs,ws, color='red');
ax.set_xlabel('u'); ax.set_ylabel('v'); ax.set_zlabel('w')
# Predict visibilities
vis = predict(numpy.transpose([us.flatten(),vs.flatten(),ws.flatten()]),
du=du, dw=dw, apply_aa=True, apply_aa_w=True).reshape(us.shape)
# Convolve with gridder, sum up
return numpy.sum(vis * aas[numpy.newaxis,:], axis=1)
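# Added sketch: a quick non-interactive check of predict() against predict_grid() for a single
# baseline. The uvw coordinates are arbitrary illustration values (in wavelengths); both routes
# should agree to roughly the gridder's design accuracy.
u0, v0, w0 = 10.0, -5.0, 2.0
vis_direct = predict(numpy.transpose([[u0], [v0], [w0]]))
vis_gridded = predict_grid([u0], [v0], [w0], 0, 0, 0)
print("direct:", vis_direct[0], " gridded:", vis_gridded[0], " error:", numpy.abs(vis_direct[0] - vis_gridded[0]) / numpy.sqrt(Npt))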
"""
Explanation: Set up code to predict visibilities - either directly, or from visibilities that are weighted by the grid correction, offset in a grid-like fashion and then summed up with the gridding kernel.
End of explanation
"""
@interact(u=(-lam/2,lam/2,0.1),v=(-lam/2,lam/2,0.1),w=(-lam/2,lam/2,0.1),
ov_u=(0,aa_over-1), ov_v=(0,aa_over-1), ov_w=(0,aa_over-1),
du=(du_opt/10,du_opt*2,du_opt/10), dw=(dw_opt/10,dw_opt*2,dw_opt/10))
def test(u=0,v=0,w=0, ov_u=0,ov_v=0,ov_w=0, du=du_opt, dw=dw_opt):
vis = predict(numpy.transpose([[u],[v],[w]]))
print("Direct: ", vis[0])
vis_sum = predict_grid([u],[v],[w],ov_u,ov_v,ov_w,du,dw)
print("Grid: ", vis_sum[0])
print("Error: ", numpy.abs(vis[0]-vis_sum[0]) / numpy.sqrt(Npt))
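# Added sketch: sweep the w step to illustrate that accuracy holds up to the optimal dw and then
# degrades quickly beyond it. The sweep factors and the 50 random baselines are illustration values.
us_s = lam * (numpy.random.rand(50) - 0.5)
vs_s = lam * (numpy.random.rand(50) - 0.5)
ws_s = lam * (numpy.random.rand(50) - 0.5)
ref = predict(numpy.transpose([us_s, vs_s, ws_s]))
for fac in [0.5, 1.0, 1.5, 2.0]:
    grid_v = predict_grid(us_s, vs_s, ws_s, 0, 0, 0, du=du_opt, dw=fac*dw_opt)
    err = numpy.sqrt(numpy.mean(numpy.abs(ref - grid_v)**2)) / numpy.mean(numpy.abs(ref))
    print("dw = %.1f x dw_opt: mean relative error %.2e" % (fac, err))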
"""
Explanation: Now we can test the performance of the sampling over a wide variety of parameters. Note that u,v and w do not actually matter too much, but we get into trouble quickly by increasing du or dw -- that is when we start using our gridder for inaccurate image coordinates!
End of explanation
"""
N = 500
us = lam * (numpy.random.rand(N)-0.5)
vs = lam * (numpy.random.rand(N)-0.5)
ws = lam * (numpy.random.rand(N)-0.5)
ov_u = random.randint(0, aa_over-1)
ov_v = random.randint(0, aa_over-1)
ov_w = random.randint(0, aa_over-1)
vis = predict(numpy.transpose([us,vs,ws]))
grid_vis = predict_grid(us,vs,ws,ov_u,ov_v,ov_w)
diff = numpy.abs(vis-grid_vis)
mean_err = numpy.sqrt(numpy.mean(diff**2)) / numpy.mean(numpy.abs(vis))
print("Mean error:", mean_err)
"""
Explanation: We can gather a quick statistic by feeding in a good number of random points:
End of explanation
"""
|
xesscorp/skidl
|
examples/skidl_spice_test/skidl_2_pyspice_check.ipynb
|
mit
|
from skidl.pyspice import *
from PySpice.Spice.Netlist import Circuit
"""
Explanation: <h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Checking-tool" data-toc-modified-id="Checking-tool-1"><span class="toc-item-num">1 </span>Checking tool</a></span></li><li><span><a href="#Basic-Elements" data-toc-modified-id="Basic-Elements-2"><span class="toc-item-num">2 </span>Basic Elements</a></span><ul class="toc-item"><li><span><a href="#A------------|-XSPICE-code-model-(not-checked)" data-toc-modified-id="A------------|-XSPICE-code-model-(not-checked)-2.1"><span class="toc-item-num">2.1 </span>A | XSPICE code model (not checked)</a></span></li><li><span><a href="#B------------|-Behavioral-(arbitrary)-source-(not-checked)" data-toc-modified-id="B------------|-Behavioral-(arbitrary)-source-(not-checked)-2.2"><span class="toc-item-num">2.2 </span>B | Behavioral (arbitrary) source (not checked)</a></span></li><li><span><a href="#C------------|-Capacitor" data-toc-modified-id="C------------|-Capacitor-2.3"><span class="toc-item-num">2.3 </span>C | Capacitor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.3.1"><span class="toc-item-num">2.3.1 </span>Notes</a></span></li></ul></li><li><span><a href="#D------------|-Diode" data-toc-modified-id="D------------|-Diode-2.4"><span class="toc-item-num">2.4 </span>D | Diode</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.4.1"><span class="toc-item-num">2.4.1 </span>Notes</a></span></li></ul></li><li><span><a href="#E------------|-Voltage-controlled-voltage-source-(VCVS)" data-toc-modified-id="E------------|-Voltage-controlled-voltage-source-(VCVS)-2.5"><span class="toc-item-num">2.5 </span>E | Voltage-controlled voltage source (VCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.5.1"><span class="toc-item-num">2.5.1 </span>Notes</a></span></li></ul></li><li><span><a href="#F------------|-Current-controlled-current-source-(CCCs)" data-toc-modified-id="F------------|-Current-controlled-current-source-(CCCs)-2.6"><span class="toc-item-num">2.6 </span>F | Current-controlled current source (CCCs)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.6.1"><span class="toc-item-num">2.6.1 </span>Notes</a></span></li></ul></li><li><span><a href="#G------------|-Voltage-controlled-current-source-(VCCS)" data-toc-modified-id="G------------|-Voltage-controlled-current-source-(VCCS)-2.7"><span class="toc-item-num">2.7 </span>G | Voltage-controlled current source (VCCS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.7.1"><span class="toc-item-num">2.7.1 </span>Notes</a></span></li></ul></li><li><span><a href="#H------------|-Current-controlled-voltage-source-(CCVS)" data-toc-modified-id="H------------|-Current-controlled-voltage-source-(CCVS)-2.8"><span class="toc-item-num">2.8 </span>H | Current-controlled voltage source (CCVS)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.8.1"><span class="toc-item-num">2.8.1 </span>Notes</a></span></li></ul></li><li><span><a href="#I------------|-Current-source" data-toc-modified-id="I------------|-Current-source-2.9"><span class="toc-item-num">2.9 </span>I | Current source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.9.1"><span class="toc-item-num">2.9.1 </span>Notes</a></span></li></ul></li><li><span><a href="#J------------|-Junction-field-effect-transistor-(JFET)" 
data-toc-modified-id="J------------|-Junction-field-effect-transistor-(JFET)-2.10"><span class="toc-item-num">2.10 </span>J | Junction field effect transistor (JFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.10.1"><span class="toc-item-num">2.10.1 </span>Notes</a></span></li></ul></li><li><span><a href="#K------------|-Coupled-(Mutual)-Inductors" data-toc-modified-id="K------------|-Coupled-(Mutual)-Inductors-2.11"><span class="toc-item-num">2.11 </span>K | Coupled (Mutual) Inductors</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.11.1"><span class="toc-item-num">2.11.1 </span>Notes</a></span></li></ul></li><li><span><a href="#L------------|-Inductor" data-toc-modified-id="L------------|-Inductor-2.12"><span class="toc-item-num">2.12 </span>L | Inductor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.12.1"><span class="toc-item-num">2.12.1 </span>Notes</a></span></li></ul></li><li><span><a href="#M------------|-Metal-oxide-field-effect-transistor-(MOSFET)" data-toc-modified-id="M------------|-Metal-oxide-field-effect-transistor-(MOSFET)-2.13"><span class="toc-item-num">2.13 </span>M | Metal oxide field effect transistor (MOSFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.13.1"><span class="toc-item-num">2.13.1 </span>Notes</a></span></li></ul></li><li><span><a href="#Q------------|-Bipolar-junction-transistor-(BJT)" data-toc-modified-id="Q------------|-Bipolar-junction-transistor-(BJT)-2.14"><span class="toc-item-num">2.14 </span>Q | Bipolar junction transistor (BJT)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.14.1"><span class="toc-item-num">2.14.1 </span>Notes</a></span></li></ul></li><li><span><a href="#R------------|-Resistor" data-toc-modified-id="R------------|-Resistor-2.15"><span class="toc-item-num">2.15 </span>R | Resistor</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.15.1"><span class="toc-item-num">2.15.1 </span>Notes</a></span></li></ul></li><li><span><a href="#V-|-Voltage-source" data-toc-modified-id="V-|-Voltage-source-2.16"><span class="toc-item-num">2.16 </span>V | Voltage source</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.16.1"><span class="toc-item-num">2.16.1 </span>Notes</a></span></li></ul></li><li><span><a href="#Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)" data-toc-modified-id="Z------------|-Metal-semiconductor-field-effect-transistor-(MESFET)-2.17"><span class="toc-item-num">2.17 </span>Z | Metal semiconductor field effect transistor (MESFET)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-2.17.1"><span class="toc-item-num">2.17.1 </span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-SinusoidalMixin-Based" data-toc-modified-id="Highlevel-Elements-SinusoidalMixin-Based-3"><span class="toc-item-num">3 </span>Highlevel Elements <code>SinusoidalMixin</code> Based</a></span><ul class="toc-item"><li><span><a href="#Note-in-Armour's-fort-added-as_phase" data-toc-modified-id="Note-in-Armour's-fort-added-as_phase-3.1"><span class="toc-item-num">3.1 </span>Note in Armour's fort added as_phase</a></span></li><li><span><a href="#SinusoidalMixin-args:" data-toc-modified-id="SinusoidalMixin-args:-3.2"><span class="toc-item-num">3.2 </span><code>SinusoidalMixin</code> 
args:</a></span></li><li><span><a href="#SinusoidalVoltageSource-(AC)" data-toc-modified-id="SinusoidalVoltageSource-(AC)-3.3"><span class="toc-item-num">3.3 </span>SinusoidalVoltageSource (AC)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.3.1"><span class="toc-item-num">3.3.1 </span>Notes</a></span></li></ul></li><li><span><a href="#SinusoidalCurrentSource-(AC)" data-toc-modified-id="SinusoidalCurrentSource-(AC)-3.4"><span class="toc-item-num">3.4 </span>SinusoidalCurrentSource (AC)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.4.1"><span class="toc-item-num">3.4.1 </span>Notes</a></span></li></ul></li><li><span><a href="#AcLine(SinusoidalVoltageSource)" data-toc-modified-id="AcLine(SinusoidalVoltageSource)-3.5"><span class="toc-item-num">3.5 </span>AcLine(SinusoidalVoltageSource)</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-3.5.1"><span class="toc-item-num">3.5.1 </span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-PulseMixin-Based" data-toc-modified-id="Highlevel-Elements-PulseMixin-Based-4"><span class="toc-item-num">4 </span>Highlevel Elements <code>PulseMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-ExponentialMixin-Based" data-toc-modified-id="Highlevel-Elements-ExponentialMixin-Based-5"><span class="toc-item-num">5 </span>Highlevel Elements <code>ExponentialMixin</code> Based</a></span><ul class="toc-item"><li><span><a href="#ExponentialMixin-args:" data-toc-modified-id="ExponentialMixin-args:-5.1"><span class="toc-item-num">5.1 </span><code>ExponentialMixin</code> args:</a></span></li><li><span><a href="#ExponentialVoltageSource" data-toc-modified-id="ExponentialVoltageSource-5.2"><span class="toc-item-num">5.2 </span>ExponentialVoltageSource</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-5.2.1"><span class="toc-item-num">5.2.1 </span>Notes</a></span></li></ul></li><li><span><a href="#ExponentialCurrentSource" data-toc-modified-id="ExponentialCurrentSource-5.3"><span class="toc-item-num">5.3 </span>ExponentialCurrentSource</a></span><ul class="toc-item"><li><span><a href="#Notes" data-toc-modified-id="Notes-5.3.1"><span class="toc-item-num">5.3.1 </span>Notes</a></span></li></ul></li></ul></li><li><span><a href="#Highlevel-Elements-PieceWiseLinearMixin-Based" data-toc-modified-id="Highlevel-Elements-PieceWiseLinearMixin-Based-6"><span class="toc-item-num">6 </span>Highlevel Elements <code>PieceWiseLinearMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-SingleFrequencyFMMixin-Based" data-toc-modified-id="Highlevel-Elements-SingleFrequencyFMMixin-Based-7"><span class="toc-item-num">7 </span>Highlevel Elements <code>SingleFrequencyFMMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-AmplitudeModulatedMixin-Based" data-toc-modified-id="Highlevel-Elements-AmplitudeModulatedMixin-Based-8"><span class="toc-item-num">8 </span>Highlevel Elements <code>AmplitudeModulatedMixin</code> Based</a></span></li><li><span><a href="#Highlevel-Elements-RandomMixin-Based" data-toc-modified-id="Highlevel-Elements-RandomMixin-Based-9"><span class="toc-item-num">9 </span>Highlevel Elements <code>RandomMixin</code> Based</a></span></li></ul></div>
End of explanation
"""
def netlist_comp_check(skidl_netlist, pyspice_netlist):
"""
    Simple dumb check tool to compare the netlists from skidl and pyspice
    Args:
        skidl_netlist (PySpice.Spice.Netlist.Circuit): resulting netlist obj from
            skidl using skidl's `generate_netlist` utility, to compare to the pyspice
            direct creation
        pyspice_netlist (PySpice.Spice.Netlist.Circuit): circuit obj created directly in pyspice via
            `PySpice.Spice.Netlist.Circuit`, whose netlist is compared to the skidl-produced one
    Returns:
        if skidl_netlist is longer than pyspice_netlist, returns the string 'skidl_netlist is longer than pyspice_netlist'
        if skidl_netlist is shorter than pyspice_netlist, returns the string 'skidl_netlist is shorter than pyspice_netlist'
        if skidl_netlist and pyspice_netlist are equal in length but there are differences, prints
            a message with those differences (1-indexed) and returns a list of the indexes where the skidl netlist differs from the pyspice one
        if skidl_netlist == pyspice_netlist, returns the word: 'Match'
    TODO: Where should I start
    """
    # only care about the final netlist string
    skidl_netlist = skidl_netlist.str()
    pyspice_netlist = pyspice_netlist.str()
    # check the lengths
    if len(skidl_netlist) > len(pyspice_netlist):
        return 'skidl_netlist is longer than pyspice_netlist'
    elif len(skidl_netlist) < len(pyspice_netlist):
        return 'skidl_netlist is shorter than pyspice_netlist'
    # compare the strings char by char
    else:
        string_check = [i for i in range(len(skidl_netlist)) if skidl_netlist[i] != pyspice_netlist[i]]
        if string_check == []:
            return 'Match'
        else:
            print('Match failed; skidl_netlist differs at (1-indexed) character positions:')
            print(f'{[i+1 for i in string_check]}')
            return string_check
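# Added sketch: a minimal self-test of netlist_comp_check using two circuits built directly in
# PySpice -- identical netlists should report 'Match'. The component values are arbitrary.
sanity_a = Circuit('')
sanity_a.R('1', 'N1', 'N2', 5)
sanity_b = Circuit('')
sanity_b.R('1', 'N1', 'N2', 5)
print(netlist_comp_check(sanity_a, sanity_b))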
"""
Explanation: Checking tool
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_C=C(ref='1', value=5, scale=5, temp=5, dtemp=5, ic=5, m=5)
skidl_C['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.C('1', 'N1', 'N2', 5, scale=5, temp=5, dtemp=5, ic=5, m=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: Basic Elements
A | XSPICE code model (not checked)
PySpice/PySpice/Spice/BasicElement.py; (need to find):
skidl/skidl/libs/pyspice_sklib.py; name="A"
B | Behavioral (arbitrary) source (not checked)
PySpice/PySpice/Spice/BasicElement.py; class BehavioralSource:
skidl/skidl/libs/pyspice_sklib.py; name="B"
ngspice 5.1: Bxxxx: Nonlinear dependent source (ASRC): BXXXXXXX n+ n- <i=expr > <v=expr > <tc1=value > <tc2=value > <temp=value > <dtemp=value >
C | Capacitor
PySpice/PySpice/Spice/BasicElement.py; class Capacitor(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="C"
ngspice 3.2.5 Capacitors:
CXXXXXXX n+ n- <value > <mname > <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <ic=init_condition >
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_D=D(ref='1',model=5, area=5, m=5, pj=5, off=5, temp=5, dtemp=5)
skidl_D['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.D('1', 'N1', 'N2', model=5, area=5, m=5, pj=5, off=5, temp=5, dtemp=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: D | Diode
PySpice/PySpice/Spice/BasicElement.py; class Diode(FixedPinElement)
skidl/skidl/libs/pyspice_sklib.py; name="D"
ngspice 7.1 Junction Diodes:
DXXXXXXX n+ n- mname <area=val> <m=val> <pj=val> <off> <ic=vd> <temp=val> <dtemp=val>
Notes
ic: did not work in either skidl or pyspice
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4')
skidl_E=E(ref='1', voltage_gain=5)
skidl_E['ip', 'in']+=net_1, net_2; skidl_E['op', 'on']+=net_3, net_4
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.VoltageControlledVoltageSource('1', 'N3', 'N4', 'N1', 'N2', voltage_gain=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: E | Voltage-controlled voltage source (VCVS)
PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledVoltageSource(TwoPortElement)
skidl/skidl/libs/pyspice_sklib.py; name="E"
ngspice 4.2.2 Exxxx: Linear Voltage-Controlled Voltage Sources (VCVS):
EXXXXXXX N+ N- NC+ NC- VALUE
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_F=F(ref='1', control='V1', current_gain=5, m=5)
skidl_F['p', 'n']+=net_1, net_2;
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.CurrentControlledCurrentSource('1', 'N1', 'N2', 'V1', current_gain=5, m=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: F | Current-controlled current source (CCCs)
PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledCurrentSource(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="F"
ngspice 4.2.3 Fxxxx: Linear Current-Controlled Current Sources (CCCS):
FXXXXXXX N+ N- VNAM VALUE <m=val>
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4')
skidl_G=G(ref='1', current_gain=5, m=5)
skidl_G['ip', 'in']+=net_1, net_2; skidl_G['op', 'on']+=net_3, net_4
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.VoltageControlledCurrentSource('1', 'N3', 'N4', 'N1', 'N2', transconductance=5, m=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: G | Voltage-controlled current source (VCCS)
PySpice/PySpice/Spice/BasicElement.py; class VoltageControlledCurrentSource(TwoPortElement)
skidl/skidl/libs/pyspice_sklib.py; name="G"
ngspice 4.2.1 Gxxxx: Linear Voltage-Controlled Current Sources (VCCS):
GXXXXXXX N+ N- NC+ NC- VALUE <m=val>
Notes
'transconductance' did not work in skidl, but 'gain' did, as did 'current_gain'
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_H=H(ref='1', control='V1', transresistance=5)
skidl_H['p', 'n']+=net_1, net_2;
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.CurrentControlledVoltageSource('1', 'N1', 'N2', 'V1', transresistance=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: H | Current-controlled voltage source (CCVS)
PySpice/PySpice/Spice/BasicElement.py; class CurrentControlledVoltageSource(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="H"
ngspice 4.2.4 Hxxxx: Linear Current-Controlled Voltage Sources (CCVS):
HXXXXXXX n+ n- vnam val
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_I=I(ref='1', dc_value=5)
skidl_I['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.I('1', 'N1', 'N2', dc_value=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: I | Current source
PySpice/PySpice/Spice/BasicElement.py; class CurrentSource(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="I"
ngspice 4.1 Independent Sources for Voltage or Current:
IYYYYYYY N+ N- <<DC> DC/TRAN VALUE >
Notes
a reduced version of ngspice's IYYYYYYY, only generating the argument for <<DC> DC/TRAN VALUE >
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3')
skidl_J=J(ref='1',model=5, area=5, m=5, off=5, temp=5)
skidl_J['d', 'g', 's']+=net_1, net_2, net_3
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.J('1', 'N1', 'N2', 'N3', model=5, area=5, m=5, off=5, temp=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: J | Junction field effect transistor (JFET)
PySpice/PySpice/Spice/BasicElement.py; class JunctionFieldEffectTransistor(JfetElement)
skidl/skidl/libs/pyspice_sklib.py; name="J"
ngspice 9.1 Junction Field-Effect Transistors (JFETs):
JXXXXXXX nd ng ns mname <area > <off> <ic=vds,vgs> <temp=t>
Notes
ic: did not work in either skidl or pyspice
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_L1=L(ref='1', value=5, m=5, temp=5, dtemp=5, ic=5)
skidl_L1['p', 'n']+=net_1, net_2
skidl_L2=L(ref='2', value=5, m=5, temp=5, dtemp=5, ic=5)
skidl_L2['p', 'n']+=net_1, net_2
#need to find out how to use this
#skidl_K=K()
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
#inductors need to exist to then be coupled
pyspice_circ.L('1', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5)
pyspice_circ.L('2', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5)
pyspice_circ.K('1', 'L1', 'L2', coupling_factor=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: K | Coupled (Mutual) Inductors
PySpice/PySpice/Spice/BasicElement.py; class CoupledInductor(AnyPinElement)
skidl/skidl/libs/pyspice_sklib.py; name="K"
ngspice 3.2.11 Coupled (Mutual) Inductors:
KXXXXXXX LYYYYYYY LZZZZZZZ value
Notes
need to get Dave's help on using K inside skidl
the inductors must already exist for pyspice to work
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_L=L(ref='1', value=5, m=5, temp=5, dtemp=5, ic=5)
skidl_L['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.L('1', 'N1', 'N2', 5, m=5, temp=5, dtemp=5, ic=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: L | Inductor
PySpice/PySpice/Spice/BasicElement.py; class Inductor(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="L"
ngspice 3.2.9 Inductors:
LYYYYYYY n+ n- <value > <mname > <nt=val> <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <ic=init_condition >
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4')
skidl_M=M(ref='1', model=5, m=5, l=5, w=5,
drain_area=5, source_area=5, drain_perimeter=5, source_perimeter=5,
drain_number_square=5, source_number_square=5,
off=5, temp=5)
skidl_M['d', 'g', 's', 'b']+=net_1, net_2, net_3, net_4
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.M('1', 'N1', 'N2', 'N3', 'N4', model=5, m=5, l=5, w=5,
drain_area=5, source_area=5, drain_perimeter=5, source_perimeter=5,
drain_number_square=5, source_number_square=5,
off=5, temp=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: M | Metal oxide field effect transistor (MOSFET)
PySpice/PySpice/Spice/BasicElement.py; class Mosfet(FixedPinElement)
skidl/skidl/libs/pyspice_sklib.py; name="M"
ngspice 11.1 MOSFET devices:
MXXXXXXX nd ng ns nb mname <m=val> <l=val> <w=val> <ad=val> <as=val> <pd=val> <ps=val> <nrd=val> <nrs=val> <off> <ic=vds, vgs, vbs> <temp=t>
Notes
ic: did not work in either skidl or pyspice
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3'); net_4=Net('N4')
skidl_Q=Q(ref='1',model=5,
area=5, areab=5, areac=5,
m=5, off=5, temp=5, dtemp=5)
skidl_Q['c', 'b', 'e']+=net_1, net_2, net_3
#skidl will make the substrate connection fine but could not get pyspice to do so
#therefore skipping for the time being
#skidl_Q['s']+=net_4
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.Q('1', 'N1', 'N2', 'N3', model=5, area=5, areab=5, areac=5,
m=5, off=5, temp=5, dtemp=5,
#could not get the substrate connection working in pyspice
#ns='N4'
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: | N | Numerical device for GSS |
| O | Lossy transmission line |
| P | Coupled multiconductor line (CPL) |
Q | Bipolar junction transistor (BJT)
PySpice/PySpice/Spice/BasicElement.py; class BipolarJunctionTransistor(FixedPinElement)
skidl/skidl/libs/pyspice_sklib.py; name="Q"
ngspice 8.1 Bipolar Junction Transistors (BJTs):
QXXXXXXX nc nb ne <ns> mname <area=val> <areac=val> <areab=val> <m=val> <off> <ic=vbe,vce> <temp=val> <dtemp=val>
Notes
could not get the substrate connection working in pyspice but it worked fine with skidl
ic: did not work in either skidl or pyspice
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_R=R(ref='1', value=5, ac=5, m=5, scale=5, temp=5, dtemp=5, noisy=1)
skidl_R['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.R('1', 'N1', 'N2', 5, ac=5, m=5, scale=5, temp=5, dtemp=5, noisy=1)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: R | Resistor
PySpice/PySpice/Spice/BasicElement.py; class Resistor(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="R"
ngspice 3.2.1 Resistors:
RXXXXXXX n+ n- <resistance|r=>value <ac=val> <m=val> <scale=val> <temp=val> <dtemp=val> <tc1=val> <tc2=val> <noisy=0|1>
Notes
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_V=V(ref='1', dc_value=5)
skidl_V['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.V('1', 'N1', 'N2', dc_value=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: | S | Switch (voltage-controlled) |
| T | Lossless transmission line |
| U | Uniformly distributed RC line |
V | Voltage source
PySpice/PySpice/Spice/BasicElement.py; class VoltageSource(DipoleElement)
skidl/skidl/libs/pyspice_sklib.py; name="V"
ngspice 4.1 Independent Sources for Voltage or Current:
VXXXXXXX N+ N- <<DC> DC/TRAN VALUE >
Notes
a reduced version of ngspice's VXXXXXXX, only generating the argument for <<DC> DC/TRAN VALUE >
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2'); net_3=Net('N3')
skidl_Z=Z(ref='1',model=5, area=5, m=5, off=5)
skidl_Z['d', 'g', 's']+=net_1, net_2, net_3
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.Z('1', 'N1', 'N2', 'N3', model=5, area=5, m=5, off=5)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: | W | Switch (current-controlled) |
| X | Subcircuit |
| Y | Single lossy transmission line (TXL) |
| Z | Metal semiconductor field effect transistor (MESFET) |
Z | Metal semiconductor field effect transistor (MESFET)
PySpice/PySpice/Spice/BasicElement.py; class Mesfet(JfetElement)
skidl/skidl/libs/pyspice_sklib.py; name="Z"
ngspice 10.1 MESFETs:
ZXXXXXXX ND NG NS MNAME <AREA > <OFF> <IC=VDS, VGS>
Notes
ic: did not work in either skidl or pyspice
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_SINV=SINEV(ref='1',
#transient sim statements
offset=5, amplitude=5, frequency=5, delay=5, damping_factor=5,
#ac sim statements
ac_magnitude=5, dc_offset=5)
skidl_SINV['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.SinusoidalVoltageSource('1', 'N1', 'N2',
#transient sim statements
offset=5, amplitude=5, frequency=5, delay=5, damping_factor=5,
#ac sim statements
ac_magnitude=5, dc_offset=5
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
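# Added sketch: numerically evaluate the piecewise SIN waveform described by the sinusoidal
# sources above (offset Vo, amplitude Va, frequency f, delay Td, damping factor Df); see the
# formula in the explanation below. The parameter values are arbitrary illustration values.
import numpy as np
def sin_source(t, Vo=0.0, Va=1.0, f=5.0, Td=0.1, Df=1.0):
    t = np.asarray(t, dtype=float)
    v = np.full_like(t, Vo)          # V(t) = Vo before the delay Td
    on = t >= Td
    v[on] = Vo + Va * np.exp(-Df*(t[on]-Td)) * np.sin(2*np.pi*f*(t[on]-Td))
    return v
print(sin_source([0.0, 0.05, 0.1, 0.15, 0.35]))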
"""
Explanation: Highlevel Elements SinusoidalMixin Based
Note in Armour's fort added as_phase
SinusoidalMixin is the base translation class for sinusoidal waveform sources. In other words, even though ngspice combines most sinusoid sources as just argument extensions to the existing DC sources, to create AC sources through pyspice to ngspice these elements must be used.
SinusoidalMixin args:
| Name | Parameter      | Default Value | Units |
|------|----------------|---------------|-------|
| Vo   | offset         |               | V, A  |
| Va   | amplitude      |               | V, A  |
| f    | frequency      | 1 / TStop     | Hz    |
| Td   | delay          | 0.0           | sec   |
| Df   | damping factor | 0.01          | 1/sec |
So for an AC SIN voltage source, its output should be equivalent to the following:
$$V(t) = \begin{cases}
V_o & \text{if}\ 0 \leq t < T_d, \\
V_o + V_a e^{-D_f(t-T_d)} \sin\left(2\pi f (t-T_d)\right) & \text{if}\ T_d \leq t < T_{stop}.
\end{cases}$$
SinusoidalVoltageSource (AC)
PySpice/PySpice/Spice/HighLevelElement.py; class SinusoidalVoltageSource(VoltageSource, VoltageSourceMixinAbc, SinusoidalMixin)
skidl/skidl/libs/pyspice_sklib.py; name="SINEV"
ngspice 4.1 Independent Sources for Voltage or Current & 4.1.2 Sinusoidal:
VXXXXXXX N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>>
SIN(VO VA FREQ TD THETA PHASE)
Notes
an amalgamation of ngspice's Independent Sources for Voltage and Sinusoidal statements for transient simulations
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_SINI=SINEI(ref='1',
#transient sim statements
offset=5, amplitude=5, frequency=5, delay=5, damping_factor=5,
#ac sim statements
ac_magnitude=5, dc_offset=5)
skidl_SINI['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.SinusoidalCurrentSource('1', 'N1', 'N2',
#transient sim statements
offset=5, amplitude=5, frequency=5, delay=5, damping_factor=5,
#ac sim statements
ac_magnitude=5, dc_offset=5
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: SinusoidalCurrentSource (AC)
PySpice/PySpice/Spice/HighLevelElement.py; class SinusoidalCurrentSource(CurrentSource, CurrentSourceMixinAbc, SinusoidalMixin):
skidl/skidl/libs/pyspice_sklib.py; name="SINEI"
ngspice 4.1 Independent Sources for Voltage or Current & 4.1.2 Sinusoidal:
IYYYYYYY N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>>
SIN(VO VA FREQ TD THETA PHASE)
Notes
an amalgamation of ngspice's Independent Sources for Voltage and Sinusoidal statements for transient simulations
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
# Skidl does not implement an AcLine equivalent at this time
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.AcLine('1', 'N1', 'N2',
#transient sim statements
rms_voltage=8, frequency=5
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
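# Added sketch: the sine-wave rms-to-amplitude conversion that pyspice applies internally for
# AcLine, shown for the rms_voltage=8 used above (amplitude = sqrt(2) * rms).
import math
print("amplitude =", math.sqrt(2) * 8)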
"""
Explanation: AcLine(SinusoidalVoltageSource)
PySpice/PySpice/Spice/HighLevelElement.py; class AcLine(SinusoidalVoltageSource)
skidl/skidl/libs/pyspice_sklib.py; NOT IMPLEMENTED
ngspice 4.1 Independent Sources for Voltage or Current:
VXXXXXXX N+ N- <<DC> DC/TRAN VALUE > <AC \<ACMAG \<ACPHASE >>> <DISTOF1 \<F1MAG \<F1PHASE >>> <DISTOF2 \<F2MAG \<F2PHASE >>>
Notes
it's a pyspice-only wrapper around pyspice's SinusoidalVoltageSource that makes a transient-simulation-only SIN voltage source, with the only arguments being rms_voltage and frequency
pyspice does the rms to amplitude conversion internally
pyspice does not have an offset arg
pyspice does not have a delay arg
pyspice does not have a damping_factor arg
pyspice does not have an ac_magnitude arg
pyspice does not have a dc_offset arg
pyspice still gives an AC output of the default 1V; this needs to be changed to be equal to the internal amplitude value, or else it will aid in producing incorrect results with ac simulations
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_EXPV=EXPV(ref='1',
#transient sim statements
initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5,
)
skidl_EXPV['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.ExponentialVoltageSource('1', 'N1', 'N2',
#transient sim statements
initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5,
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
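# Added sketch: numerical version of the piecewise exponential waveform used by the exponential
# sources above, with V21 = V2 - V1 and V12 = V1 - V2 (see the formula in the explanation below).
# The parameter values are arbitrary illustration values.
import numpy as np
def exp_source(t, V1=0.0, V2=1.0, Td1=0.1, tau1=0.05, Td2=0.4, tau2=0.1):
    t = np.asarray(t, dtype=float)
    v = np.full_like(t, V1)                                    # V1 before Td1
    rise = t >= Td1
    v[rise] += (V2 - V1) * (1 - np.exp(-(t[rise]-Td1)/tau1))   # rising exponential
    fall = t >= Td2
    v[fall] += (V1 - V2) * (1 - np.exp(-(t[fall]-Td2)/tau2))   # falling exponential
    return v
print(exp_source([0.0, 0.1, 0.2, 0.4, 1.0]))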
"""
Explanation: Highlevel Elements PulseMixin Based
Highlevel Elements ExponentialMixin Based
ExponentialMixin is the base translation class for exponentially shaped sources used for transient simulations. It is typically used for simulating the response to charging and discharging events in capacitor/inductor networks. Pyspice does not include the ac arguments that are technically allowed by ngspice.
ExponentialMixin args:
| Name | Parameter          | Default Value | Units |
|------|--------------------|---------------|-------|
| V1   | Initial value      |               | V, A  |
| V2   | pulsed value       |               | V, A  |
| Td1  | rise delay time    | 0.0           | sec   |
| tau1 | rise time constant | Tstep         | sec   |
| Td2  | fall delay time    | Td1+Tstep     | sec   |
| tau2 | fall time constant | Tstep         | sec   |
So for an exponential-based voltage source, its output should be equivalent to the following:
$$V(t) = \begin{cases}
V_1 & \text{if}\ 0 \leq t < T_{d1}, \\
V_1 + V_{21} ( 1 - e^{-\frac{t-T_{d1}}{\tau_1}} )
& \text{if}\ T_{d1} \leq t < T_{d2}, \\
V_1 + V_{21} ( 1 - e^{-\frac{t-T_{d1}}{\tau_1}} ) + V_{12} ( 1 - e^{-\frac{t-T_{d2}}{\tau_2}} )
& \text{if}\ T_{d2} \leq t < T_{stop}
\end{cases}$$
where $V_{21} = V_2 - V_1$ and $V_{12} = V_1 - V_2$
ExponentialVoltageSource
PySpice/PySpice/Spice/HighLevelElement.py; class ExponentialVoltageSource(VoltageSource, VoltageSourceMixinAbc, ExponentialMixin)
skidl/skidl/libs/pyspice_sklib.py; name="EXPV"
ngspice 4.1 Independent Sources for Voltage or Current & 4.1.3 Exponential:
VXXXXXXX N+ N-
EXP(V1 V2 TD1 TAU1 TD2 TAU2)
Notes
should technically also allow dc and ac values from ngspice's independent voltage source statement
End of explanation
"""
reset()
net_1=Net('N1'); net_2=Net('N2')
skidl_EXPI=EXPI(ref='1',
#transient sim statements
initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5,
)
skidl_EXPI['p', 'n']+=net_1, net_2
skidl_circ=generate_netlist()
print(skidl_circ)
pyspice_circ=Circuit('')
pyspice_circ.ExponentialCurrentSource('1', 'N1', 'N2',
#transient sim statements
initial_value=5,pulsed_value=5, rise_delay_time=5 , rise_time_constant=5, fall_delay_time=5, fall_time_constant=5,
)
print(pyspice_circ)
netlist_comp_check(skidl_circ, pyspice_circ)
"""
Explanation: ExponentialCurrentSource
PySpice/PySpice/Spice/HighLevelElement.py; class ExponentialCurrentSource(CurrentSource, CurrentSourceMixinAbc, ExponentialMixin)
skidl/skidl/libs/pyspice_sklib.py; name="EXPI"
ngspice 4.1 Independent Sources for Voltage or Current & 4.1.3 Exponential:
IXXXXXXX N+ N-
EXP(I1 I2 TD1 TAU1 TD2 TAU2)
Notes
should technically also allow dc and ac values from ngspice's independent source statement
End of explanation
"""
|
DOV-Vlaanderen/pydov
|
docs/notebooks/search_lithologische_beschrijvingen.ipynb
|
mit
|
%matplotlib inline
import os, sys
import inspect
import pydov
"""
Explanation: Example of DOV search methods for lithologische beschrijvingen
Use cases:
Select records in a bbox
Select records in a bbox with selected properties
Select records in a municipality
Get records using info from wfs fields, not available in the standard output dataframe
End of explanation
"""
from pydov.search.interpretaties import LithologischeBeschrijvingenSearch
ip_litho = LithologischeBeschrijvingenSearch()
# information about the LithologischeBeschrijvingen type (in Dutch):
ip_litho.get_description()
# information about the available fields for a LithologischeBeschrijvingen object
fields = ip_litho.get_fields()
# print available fields
for f in fields.values():
print(f['name'])
# print information for a certain field
fields['beschrijving']
"""
Explanation: Get information about code base
End of explanation
"""
# if an attribute can have several values, these are listed under 'values', e.g. for 'Type_proef':
fields['Type_proef']
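# Added sketch: group the available fields by their 'cost' attribute, assuming each field
# dictionary exposes a 'cost' key (1 = retrieved via the WFS, 10 = via the XML, as described
# in the explanation below).
wfs_fields = [f['name'] for f in fields.values() if f.get('cost') == 1]
xml_fields = [f['name'] for f in fields.values() if f.get('cost') == 10]
print('WFS fields (cost=1):', wfs_fields)
print('XML fields (cost=10):', xml_fields)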
"""
Explanation: The cost is an arbitrary attribute to indicate if the information is retrieved from a wfs query (cost = 1),
or from an xml (cost = 10)
End of explanation
"""
from pydov.util.location import Within, Box
# Get all lithological descriptions in a bounding box (llx, lly, ulx, uly)
# the pkey_boring link is not available below, but is in the df
df = ip_litho.search(location=Within(Box(152145, 204930, 153150, 206935)))
df = df[df.beschrijving.notnull()]
df.head()
"""
Explanation: Try-out of use cases
Select interpretations in a bbox
End of explanation
"""
# list available query methods
methods = [i for i,j in inspect.getmembers(sys.modules['owslib.fes'],
inspect.isclass)
if 'Property' in i]
methods
from owslib.fes import PropertyIsGreaterThanOrEqualTo
"""
Explanation: Select interpretations in a bbox with selected properties
End of explanation
"""
# Get lithological descriptions with a good reliability ('betrouwbaarheid') in a bounding box
from owslib.fes import PropertyIsEqualTo
# the propertyname can be any of the fields of the lithological descriptions object that belong to the wfs source
# the literal is always a string, no matter what its definition is in the boring object (string, float...)
query = PropertyIsGreaterThanOrEqualTo(
propertyname='betrouwbaarheid_interpretatie', literal='goed')
df = ip_litho.search(location=Within(Box(153145, 206930, 153150, 206935)),
query=query
)
df.head()
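# Added sketch: owslib property filters can also be combined, e.g. with And(); this assumes
# pydov accepts composed owslib.fes operators as its query argument.
from owslib.fes import And
combined_query = And([PropertyIsEqualTo(propertyname='gemeente', literal='Aartselaar'),
                      PropertyIsGreaterThanOrEqualTo(propertyname='betrouwbaarheid_interpretatie',
                                                     literal='goed')])
# df = ip_litho.search(query=combined_query)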
"""
Explanation: The property filter methods listed above are available from the owslib module. These were not adapted for use in pydov.
End of explanation
"""
query = PropertyIsEqualTo(propertyname='gemeente',
literal='Aartselaar')
df = ip_litho.search(query=query)
df.head()
"""
Explanation: Select interpretations in a municipality
End of explanation
"""
# import the necessary modules (not included in the requirements of pydov!)
import folium
from folium.plugins import MarkerCluster
from pyproj import Transformer
# convert the coordinates to lat/lon for folium
def convert_latlon(x1, y1):
transformer = Transformer.from_crs("epsg:31370", "epsg:4326", always_xy=True)
x2,y2 = transformer.transform(x1, y1)
return x2, y2
df['lon'], df['lat'] = zip(*map(convert_latlon, df['x'], df['y']))
# convert to list
loclist = df[['lat', 'lon']].values.tolist()
# initialize the Folium map on the centre of the selected locations, play with the zoom until ok
fmap = folium.Map(location=[df['lat'].mean(), df['lon'].mean()], zoom_start=12)
marker_cluster = MarkerCluster().add_to(fmap)
for loc in range(0, len(loclist)):
# limit marker size for folium (:10)
folium.Marker(loclist[loc], popup=df['beschrijving'][loc][:10]).add_to(marker_cluster)
fmap
"""
Explanation: Visualize results
Using Folium, we can display the results of our search on a map.
End of explanation
"""
|
cmshobe/landlab
|
notebooks/tutorials/overland_flow/overland_flow_driver.ipynb
|
mit
|
from landlab.components.overland_flow import OverlandFlow
from landlab.plot.imshow import imshow_grid
from landlab.plot.colors import water_colormap
from landlab import RasterModelGrid
from landlab.io.esri_ascii import read_esri_ascii
from matplotlib.pyplot import figure
import numpy as np
from time import time
%matplotlib inline
"""
Explanation: <a href="http://landlab.github.io"><img style="float: left" src="../../landlab_header.png"></a>
The deAlmeida Overland Flow Component
<hr>
<small>For more Landlab tutorials, click here: <a href="https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html">https://landlab.readthedocs.io/en/latest/user_guide/tutorials.html</a></small>
<hr>
This notebook illustrates running the deAlmeida overland flow component in an extremely simple-minded way on a real topography, then shows it creating a flood sequence along an inclined surface with an oscillating water surface at one end.
First, import what we'll need:
End of explanation
"""
run_time = 100 # duration of run, (s)
h_init = 0.1 # initial thin layer of water (m)
n = 0.01 # roughness coefficient, (s/m^(1/3))
g = 9.8 # gravity (m/s^2)
alpha = 0.7 # time-step factor (nondimensional; from Bates et al., 2010)
u = 0.4 # constant velocity (m/s, de Almeida et al., 2012)
run_time_slices = (10, 50, 100)
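# Added sketch: the component computes its own adaptive time step internally, based on the
# Bates et al. (2010) criterion, roughly dt = alpha * dx / sqrt(g * h_max). The grid spacing
# below is a hypothetical illustration value.
dx_example = 30.0  # m, hypothetical grid spacing
dt_example = alpha * dx_example / np.sqrt(g * h_init)
print("example stable time step: %.2f s" % dt_example)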
"""
Explanation: Pick the initial and run conditions
End of explanation
"""
elapsed_time = 1.0
"""
Explanation: Elapsed time starts at 1 second. This prevents errors when setting our boundary conditions.
End of explanation
"""
rmg, z = read_esri_ascii('Square_TestBasin.asc')
rmg.add_field('topographic__elevation', z, at='node')
rmg.set_closed_boundaries_at_grid_edges(True, True, True, True)
"""
Explanation: Use Landlab methods to import an ARC ascii grid, and load the data into the field that the component needs to look at to get the data. This loads the elevation data, z, into a "field" in the grid itself, defined on the nodes.
End of explanation
"""
np.all(rmg.at_node['topographic__elevation'] == z)
"""
Explanation: We can get at this data with this syntax:
End of explanation
"""
my_outlet_node = 100 # This DEM was generated using Landlab and the outlet node ID was known
rmg.status_at_node[my_outlet_node] = 1 # 1 is the code for fixed value
"""
Explanation: Note that the boundary conditions for this grid mainly got handled with the final line of those three, but for the sake of completeness, we should probably manually "open" the outlet. We can find and set the outlet like this:
End of explanation
"""
rmg.add_zeros('surface_water__depth', at='node') # water depth (m)
rmg.at_node['surface_water__depth'] += h_init
"""
Explanation: Now initialize a couple more grid fields that the component is going to need:
End of explanation
"""
imshow_grid(rmg, 'topographic__elevation')
"""
Explanation: Let's look at our watershed topography
End of explanation
"""
of = OverlandFlow(
rmg, steep_slopes=True
) #for stability in steeper environments, we set the steep_slopes flag to True
"""
Explanation: Now instantiate the component itself
End of explanation
"""
while elapsed_time < run_time:
# First, we calculate our time step.
dt = of.calc_time_step()
# Now, we can generate overland flow.
of.overland_flow()
# Increase elapsed time
print('Elapsed time: ', elapsed_time)
elapsed_time += dt
imshow_grid(rmg, 'surface_water__depth', cmap='Blues')
"""
Explanation: Now we're going to run the loop that drives the component:
End of explanation
"""
elapsed_time = 1.
for t in run_time_slices:
while elapsed_time < t:
# First, we calculate our time step.
dt = of.calc_time_step()
# Now, we can generate overland flow.
of.overland_flow()
# Increase elapsed time
elapsed_time += dt
figure(t)
imshow_grid(rmg, 'surface_water__depth', cmap='Blues')
"""
Explanation: Now let's get clever, and run a set of time slices:
End of explanation
"""
|
lneuhaus/pyrpl
|
docs/example-notebooks/tutorial.ipynb
|
mit
|
import pyrpl
print(pyrpl.__file__)
"""
Explanation: Introduction to pyrpl
1) Introduction
The RedPitaya is an affordable FPGA board with fast analog inputs and outputs. This makes it interesting also for quantum optics experiments. The software package PyRPL (Python RedPitaya Lockbox) is an implementation of many devices that are needed for optics experiments every day. The user interface and all high-level functionality is written in python, but an essential part of the software is hidden in a custom FPGA design (based on the official RedPitaya software version 0.95). While most users probably never want to touch the FPGA design, the Verilog source code is provided together with this package and may be modified to customize the software to your needs.
2) Table of contents
In this document, you will find the following sections:
1. Introduction
2. ToC
3. Installation
4. First steps
5. RedPitaya Modules
6. The Pyrpl class
7. The Graphical User Interface
If you are using Pyrpl for the first time, you should read sections 1-4. This will take about 15 minutes and should leave you able to communicate with your RedPitaya via python.
If you plan to use Pyrpl for a project that is not related to quantum optics, you probably want to go to section 5 and omit section 6 altogether. Conversely, if you are only interested in a powerful tool for quantum optics and don't care about the details of the implementation, go to section 6. If you plan to contribute to the repository, you should definitely read section 5 to get an idea of what this software package really does, and where help is needed. Finally, Pyrpl also comes with a Graphical User Interface (GUI) to interactively control the modules described in section 5. Please read section 7 for a quick description of the GUI.
3) Installation
Option 3: Simple clone from GitHub (developers)
If instead you plan to synchronize with github on a regular basis, you can also leave the downloaded code where it is and add the parent directory of the pyrpl folder to the PYTHONPATH environment variable as described in this thread: http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath. For all beta-testers and developers, this is the preferred option. So the typical PYTHONPATH environment variable should look somewhat like this:
$\texttt{PYTHONPATH=C:\OTHER_MODULE;C:\GITHUB\PYRPL}$
If you are experiencing problems with the dependencies on other python packages, executing the following command in the pyrpl directory might help:
$\texttt{python setup.py install develop}$
If at a later point, you have the impression that updates from github are not reflected in the program's behavior, try this:
End of explanation
"""
#no-test
!pip install pyrpl #if you look at this file in ipython notebook, just execute this cell to install pyrplockbox
"""
Explanation: Should the directory not be the one of your local github installation, you might have an older version of pyrpl installed. Just delete any such directories other than your principal github clone and everything should work.
Option 2: from GitHub using setuptools (beta version)
Download the code manually from https://github.com/lneuhaus/pyrpl/archive/master.zip and unzip it or get it directly from git by typing
$\texttt{git clone https://github.com/lneuhaus/pyrpl.git YOUR_DESTINATIONFOLDER}$
In a command line shell, navigate into your new local pyrplockbox directory and execute
$\texttt{python setup.py install}$
This copies the files into the site-packages directory of python. The setup should make sure that you have the python libraries paramiko (http://www.paramiko.org/installing.html) and scp (https://pypi.python.org/pypi/scp) installed. If this is not the case you will get a corresponding error message in a later step of this tutorial.
Option 1: with pip (coming soon)
If you have pip correctly installed, executing the following line in a command line should install pyrplockbox and all dependencies:
$\texttt{pip install pyrpl}$
End of explanation
"""
from pyrpl import Pyrpl
"""
Explanation: Compiling the server application (optional)
The software comes with a precompiled version of the server application (written in C) that runs on the RedPitaya. This application is uploaded automatically when you start the connection. If you made changes to this file, you can recompile it by typing
$\texttt{python setup.py compile_server}$
For this to work, you must have gcc and the cross-compiling libraries installed. Basically, if you can compile any of the official RedPitaya software written in C, then this should work, too.
If you do not have a working cross-compiler installed on your UserPC, you can also compile directly on the RedPitaya (tested with ecosystem v0.95). To do so, you must upload the directory pyrpl/monitor_server on the redpitaya, and launch the compilation with the command
$\texttt{make CROSS_COMPILE=}$
Compiling the FPGA bitfile (optional)
If you would like to modify the FPGA code or just make sure that it can be compiled, you should have a working installation of Vivado 2015.4. For windows users it is recommended to set up a virtual machine with Ubuntu on which the compiler can be run in order to avoid any compatibility problems. For the FPGA part, you only need the /fpga subdirectory of this software. Make sure it is somewhere in the file system of the machine with the vivado installation. Then type the following commands. You should adapt the path in the first and second commands to the locations of the Vivado installation / the fpga directory in your filesystem:
$\texttt{source /opt/Xilinx/Vivado/2015.4/settings64.sh}$
$\texttt{cd /home/myusername/fpga}$
$\texttt{make}$
The compilation should take between 15 and 30 minutes. The result will be the file $\texttt{fpga/red_pitaya.bin}$. To test the new FPGA design, make sure that this file in the fpga subdirectory of your pyrpl code directory. That is, if you used a virtual machine for the compilation, you must copy the file back to the original machine on which you run pyrpl.
Unitary tests (optional)
In order to make sure that any recent changes do not affect prior functionality, a large number of automated tests have been implemented. Every push to the github repository is automatically installed and tested on an empty virtual linux system. However, the testing server currently has no RedPitaya available to run tests directly on the FPGA. Therefore it is also useful to run these tests on your local machine in case you modified the code.
Currently, the tests confirm that
- all pyrpl modules can be loaded in python
- all designated registers can be read and written
- future: functionality of all major submodules against reference benchmarks
To run the test, navigate in command line into the pyrpl directory and type
$\texttt{set REDPITAYA=192.168.1.100}$ (in windows) or
$\texttt{export REDPITAYA=192.168.1.100}$ (in linux)
$\texttt{python setup.py nosetests}$
The first command tells the test at which IP address it can find a RedPitaya. The last command runs the actual test. After a few seconds, there should be some output saying that the software has passed more than 140 tests.
After you have implemented additional features, you are encouraged to add unitary tests to consolidate the changes. If you immediately validate your changes with unitary tests, this will result in a huge productivity improvement for you. You can find all test files in the folder $\texttt{pyrpl/pyrpl/test}$, and the existing examples (notably $\texttt{test_example.py}$) should give you a good point to start. As long as you add a function starting with 'test_' in one of these files, your test should automatically run along with the others. As you add more tests, you will see the number of total tests increase when you run the test launcher.
Workflow to submit code changes (for developers)
As soon as the code will have reached version 0.9.0.3 (high-level unitary tests implemented and passing, approx. end of May 2016), we will consider the master branch of the github repository as the stable pre-release version. The goal is that the master branch will guarantee functionality at all times.
Any changes to the code, if they do not pass the unitary tests or have not been tested, are to be submitted as pull-requests in order not to endanger the stability of the master branch. We will briefly describe how to properly submit your changes in that scenario.
Let's say you already changed the code of your local clone of pyrpl. Instead of directly committing the change to the master branch, you should create your own branch. In the windows application of github, when you are looking at the pyrpl repository, there is a small symbol looking like a street bifurcation in the upper left corner, that says "Create new branch" when you hold the cursor over it. Click it and enter the name of your branch "leos development branch" or similar. The program will automatically switch to that branch. Now you can commit your changes, and then hit the "publish" or "sync" button in the upper right. That will upload your changes so everyone can see and test them.
You can continue working on your branch, add more commits and sync them with the online repository until your change is working. If the master branch has changed in the meantime, just click 'sync' to download them, and then the button "update from master" (upper left corner of the window) that will insert the most recent changes of the master branch into your branch. If the button doesn't work, that means that there are no changes available. This way you can benefit from the updates of the stable pre-release version, as long as they don't conflict with the changes you have been working on. If there are conflicts, github will wait for you to resolve them. In case you have been recompiling the fpga, there will always be a conflict w.r.t. the file 'red_pitaya.bin' (since it is a binary file, github cannot simply merge the differences you implemented). The best way to deal with this problem is to recompile the fpga bitfile after the 'update from master'. This way the binary file in your repository will correspond to the fpga code of the merged verilog files, and github will understand from the most recent modification date of the file that your local version of red_pitaya.bin is the one to keep.
At some point, you might want to insert your changes into the master branch, because they have been well-tested and are going to be useful for everyone else, too. To do so, after having committed and synced all recent changes to your branch, click on "Pull request" in the upper right corner, enter a title and description concerning the changes you have made, and click "Send pull request". Now your job is done. I will review and test the modifications of your code once again, possibly fix incompatibility issues, and merge it into the master branch once all is well. After the merge, you can delete your development branch. If you plan to continue working on related changes, you can also keep the branch and send pull requests later on. If you plan to work on a different feature, I recommend you create a new branch with a name related to the new feature, since this will make the evolution history of the feature more understandable for others. Or, if you would like to go back to following the master branch, click on the little downward arrow besides the name of your branch close to the street bifurcation symbol in the upper left of the github window. You will be able to choose which branch to work on, and to select master.
Let's all try to stick to this protocol. It might seem a little complicated at first, but you will quickly appreciate the fact that other people's mistakes won't be able to endanger your working code, and that by following the commits of the master branch alone, you will realize if an update is incompatible with your work.
4) First steps
If the installation went well, you should now be able to load the package in python. If that works you can pass directly to the next section 'Connecting to the RedPitaya'.
End of explanation
"""
#no-test
cd c:\lneuhaus\github\pyrpl
"""
Explanation: Sometimes, python has problems finding the path to pyrplockbox. In that case you should add the pyrplockbox directory to your pythonpath environment variable (http://stackoverflow.com/questions/3402168/permanently-add-a-directory-to-pythonpath). If you do not know how to do that, just manually navigate the ipython console to the directory, for example:
End of explanation
"""
from pyrpl import Pyrpl
"""
Explanation: Now retry to load the module. It should really work now.
End of explanation
"""
#define hostname
HOSTNAME = ""
from pyrpl import Pyrpl
p = Pyrpl(config='', # do not use a config file
#config='tutorial', # this would continuously save the current redpitaya state to a file "tutorial.yml"
hostname=HOSTNAME)
"""
Explanation: Connecting to the RedPitaya
You should have a working SD card (any version of the SD card content is okay) in your RedPitaya (for instructions see http://redpitaya.com/quick-start/). The RedPitaya should be connected via ethernet to your computer. To set this up, there is plenty of instructions on the RedPitaya website (http://redpitaya.com/quick-start/). If you type the ip address of your module in a browser, you should be able to start the different apps from the manufacturer. The default address is http://192.168.1.100.
If this works, we can load the python interface of pyrplockbox by specifying the RedPitaya's ip address. If you leave the HOSTNAME blank, a popup window will open up to let you choose among the various connected RedPitayas on your local network.
End of explanation
"""
#check the value of input1
print(p.rp.scope.voltage_in1)
"""
Explanation: If you see at least one '>' symbol, your computer has successfully connected to your RedPitaya via SSH. This means that your connection works. The message 'Server application started on port 2222' means that your computer has successfully installed and started a server application on your RedPitaya. Once you get 'Client started with success', your python session has successfully connected to that server and all things are in place to get started.
Basic communication with your RedPitaya
End of explanation
"""
#see how the adc reading fluctuates over time
import time
from matplotlib import pyplot as plt
times,data = [],[]
t0 = time.time()
n = 3000
for i in range(n):
times.append(time.time()-t0)
data.append(p.rp.scope.voltage_in1)
print("Rough time to read one FPGA register: ", (time.time()-t0)/n*1e6, "µs")
%matplotlib inline
f, axarr = plt.subplots(1,2, sharey=True)
axarr[0].plot(times, data, "+");
axarr[0].set_title("ADC voltage vs time");
axarr[1].hist(data, bins=10,normed=True, orientation="horizontal");
axarr[1].set_title("ADC voltage histogram");
"""
Explanation: With the last command, you have successfully retrieved a value from an FPGA register. This operation takes about 300 µs on my computer. So there is enough time to repeat the reading n times.
End of explanation
"""
#blink some leds for 5 seconds
from time import sleep
for i in range(1025):
p.rp.hk.led=i
sleep(0.005)
# now feel free to play around a little to get familiar with binary representation by looking at the leds.
from time import sleep
p.rp.hk.led = 0b00000001
for i in range(10):
p.rp.hk.led = ~p.rp.hk.led>>1
sleep(0.2)
import random
for i in range(100):
p.rp.hk.led = random.randint(0,255)
sleep(0.02)
"""
Explanation: You see that the input values are not exactly zero. This is normal with all RedPitayas, as some offsets are hard to keep at zero when the environment changes (temperature etc.). So we will have to compensate for the offsets in our software. Another thing you see is quite a bit of scatter between the points - almost so much that you cannot see that the datapoints are quantized. The conclusion here is that the input noise is typically not totally negligible. Therefore we will need to use every trick at hand to get optimal noise performance.
After reading from the RedPitaya, let's now try to write to the register controlling the first 8 yellow LEDs on the board. The number written to the LED register is displayed on the LED array in binary representation. You should see some fast flashing of the yellow LEDs for a few seconds when you execute the next block.
End of explanation
"""
r = p.rp #redpitaya object
r.hk #"housekeeping" = LEDs and digital inputs/outputs
r.ams #"analog mixed signals" = auxiliary ADCs and DACs.
r.scope #oscilloscope interface
r.asg0 #"arbitrary signal generator" channel 1
r.asg1 #"arbitrary signal generator" channel 2
r.pid0 #first of four PID modules
r.pid1
r.pid2
r.iq0 #first of three I+Q quadrature demodulation/modulation modules
r.iq1
r.iq2
r.iir #"infinite impules response" filter module that can realize complex transfer functions
"""
Explanation: 5) RedPitaya modules
Let's now look a bit closer at the class RedPitaya. Besides managing the communication with your board, it contains different modules that represent the different sections of the FPGA. You already encountered two of them in the example above: "hk" and "scope". Here is the full list of modules:
End of explanation
"""
asg = r.asg0 # make a shortcut
print("Trigger sources:", asg.trigger_sources)
print("Output options: ", asg.output_directs)
"""
Explanation: ASG and Scope module
Arbitrary Signal Generator
There are two Arbitrary Signal Generator modules: asg0 and asg1. For these modules, any waveform composed of $2^{14}$ programmable points is sent to the output with arbitrary frequency and start phase upon a trigger event.
End of explanation
"""
asg.output_direct = 'out2'
asg.setup(waveform='halframp', frequency=20e4, amplitude=0.8, offset=0, trigger_source='immediately')
"""
Explanation: Let's set up the ASG to output a sawtooth signal of amplitude 0.8 V (peak-to-peak 1.6 V) at 200 kHz on output 2:
End of explanation
"""
s = p.rp.scope # shortcut
print("Available decimation factors:", s.decimations)
print("Trigger sources:", s.trigger_sources)
print("Available inputs: ", s.inputs)
s.inputs
"""
Explanation: Oscilloscope
The scope works similarly to the ASG, but in reverse: two channels are available. A table of $2^{14}$ datapoints for each channel is filled with the time series of incoming data. Downloading a full trace takes about 10 ms over standard ethernet. The rate at which the memory is filled is the sampling rate (125 MHz) divided by the value of 'decimation'. The property 'average' decides whether each datapoint is a single sample or the average of all samples over the decimation interval.
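As a quick sanity check of these numbers, a small sketch (pure arithmetic, no hardware needed) computing the sampling period and full trace duration for a given decimation:
```
decimation = 64
sampling_period = decimation / 125e6       # 125 MHz ADC clock divided by the decimation factor
trace_duration = sampling_period * 2**14   # 2^14 points per trace
print(sampling_period, trace_duration)     # 512 ns per point, about 8.4 ms per trace
```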
End of explanation
"""
from pyrpl.async_utils import sleep
from pyrpl import RedPitaya
#reload everything
r = p.rp #redpitaya object
asg = r.asg1
s = r.scope
# turn off asg so the scope has a chance to measure its "off-state" as well
asg.output_direct = "off"
# setup scope
s.input1 = 'asg1'
# pass asg signal through pid0 with a simple integrator - just for fun (detailed explanations for pid will follow)
r.pid0.input = 'asg1'
r.pid0.ival = 0 # reset the integrator to zero
r.pid0.i = 1000 # unity gain frequency of 1000 hz
r.pid0.p = 1.0 # proportional gain of 1.0
r.pid0.inputfilter = [0,0,0,0] # leave input filter disabled for now
# show pid output on channel2
s.input2 = 'pid0'
# trig at zero volt crossing
s.threshold_ch1 = 0
# positive/negative slope is detected by waiting for input to
# sweep through a hysteresis interval around the trigger threshold in
# the right direction
s.hysteresis_ch1 = 0.01
# trigger on the input signal positive slope
s.trigger_source = 'ch1_positive_edge'
# take data symmetrically around the trigger event
s.trigger_delay = 0
# set decimation factor to 64 -> full scope trace is 8ns * 2^14 * decimation = 8.3 ms long
s.decimation = 64
# only 1 trace average
s.trace_average = 1
# setup the scope for an acquisition
curve = s.single_async()
sleep(0.001)
print("\nBefore turning on asg:")
print("Curve ready:", s.curve_ready()) # trigger should still be armed
# turn on asg and leave enough time for the scope to record the data
asg.setup(frequency=1e3, amplitude=0.3, start_phase=90, waveform='halframp', trigger_source='immediately')
sleep(0.010)
# check that the trigger has been disarmed
print("\nAfter turning on asg:")
print("Curve ready:", s.curve_ready())
print("Trigger event age [ms]:",8e-9*((s.current_timestamp&0xFFFFFFFFFFFFFFFF) - s.trigger_timestamp)*1000)
# plot the data
%matplotlib inline
curve = curve.result()
plt.plot(s.times*1e3, curve[0], s.times*1e3, curve[1]);
plt.xlabel("Time [ms]");
plt.ylabel("Voltage");
"""
Explanation: Let's have a look at a signal generated by asg1. Later we will use convenience functions to reduce the amount of code necessary to set up the scope:
End of explanation
"""
# useful functions for scope diagnostics
print("Curve ready:", s.curve_ready())
print("Trigger source:",s.trigger_source)
print("Trigger threshold [V]:",s.threshold_ch1)
print("Averaging:",s.average)
print("Trigger delay [s]:",s.trigger_delay)
print("Trace duration [s]: ",s.duration)
print("Trigger hysteresis [V]", s.hysteresis_ch1)
print("Current scope time [cycles]:",hex(s.current_timestamp))
print("Trigger time [cycles]:",hex(s.trigger_timestamp))
print("Current voltage on channel 1 [V]:", r.scope.voltage_in1)
print("First point in data buffer 1 [V]:", s.ch1_firstpoint)
"""
Explanation: What do we see? The blue trace for channel 1 shows just the output signal of the asg. The time=0 corresponds to the trigger event. One can see that the trigger was not activated by the constant signal of 0 at the beginning, since it did not cross the hysteresis interval. One can also see a 'bug': After setting up the asg, it outputs the first value of its data table until its waveform output is triggered. For the halframp signal, as it is implemented in pyrpl, this is the maximally negative value. However, we passed the argument start_phase=90 to the asg.setup function, which shifts the first point by a quarter period. Can you guess what happens when we set start_phase=180? You should try it out!
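If you want to try that right away, a minimal sketch that simply repeats the setup call from the code cell with only the start phase changed:
```
asg.setup(frequency=1e3, amplitude=0.3, start_phase=180,
          waveform='halframp', trigger_source='immediately')
```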
In green, we see the same signal, filtered through the pid module. The nonzero proportional gain leads to instant jumps along with the asg signal. The integrator is responsible for the constant decrease rate at the beginning, and for the low-pass filtering that smooths the asg waveform a little. One can also foresee that, if we are not paying attention, too large an integrator gain will quickly saturate the outputs.
End of explanation
"""
print(r.pid0.help())
"""
Explanation: PID module
We have already seen some use of the pid module above. There are four PID modules available: pid0 to pid3.
End of explanation
"""
#make shortcut
pid = r.pid0
#turn off by setting gains to zero
pid.p,pid.i = 0,0
print("P/I gain when turned off:", pid.i,pid.p)
# small nonzero numbers set gain to minimum value - avoids rounding off to zero gain
pid.p = 1e-100
pid.i = 1e-100
print("Minimum proportional gain: ",pid.p)
print("Minimum integral unity-gain frequency [Hz]: ",pid.i)
# saturation at maximum values
pid.p = 1e100
pid.i = 1e100
print("Maximum proportional gain: ",pid.p)
print("Maximum integral unity-gain frequency [Hz]: ",pid.i)
"""
Explanation: Proportional and integral gain
End of explanation
"""
import numpy as np
#make shortcut
pid = r.pid0
# set input to asg1
pid.input = "asg1"
# set asg to constant 0.1 Volts
r.asg1.setup(waveform="dc", offset = 0.1)
# set scope ch1 to pid0
r.scope.input1 = 'pid0'
#turn off the gains for now
pid.p,pid.i = 0, 0
#set integral value to zero
pid.ival = 0
#prepare data recording
from time import time
times, ivals, outputs = [], [], []
# turn on integrator to whatever negative gain
pid.i = -10
# set integral value above the maximum positive voltage
pid.ival = 1.5
#take 1000 points - jitter of the ethernet delay will add a noise here but we dont care
for n in range(1000):
times.append(time())
ivals.append(pid.ival)
outputs.append(r.scope.voltage_in1)
#plot
import matplotlib.pyplot as plt
%matplotlib inline
times = np.array(times)-min(times)
plt.plot(times,ivals,times,outputs);
plt.xlabel("Time [s]");
plt.ylabel("Voltage");
"""
Explanation: Control with the integral value register
End of explanation
"""
# off by default
r.pid0.inputfilter
# minimum cutoff frequency is 2 Hz, maximum 77 kHz (for now)
r.pid0.inputfilter = [1,1e10,-1,-1e10]
print(r.pid0.inputfilter)
# not setting a coefficient turns that filter off
r.pid0.inputfilter = [0,4,8]
print(r.pid0.inputfilter)
# setting without list also works
r.pid0.inputfilter = -2000
print(r.pid0.inputfilter)
# turn off again
r.pid0.inputfilter = []
print(r.pid0.inputfilter)
"""
Explanation: Again, what do we see? We set up the pid module with a constant (positive) input from the ASG. We then turned on the integrator (with negative gain), which will inevitably lead to a slow drift of the output towards negative voltages (blue trace). We had set the integral value above the positive saturation voltage, such that it takes longer until it reaches the negative saturation voltage. The output of the pid module is bound to saturate at +- 1 Volts, which is clearly visible in the green trace. The value of the integral is internally represented by a 32 bit number, so it can practically take arbitrarily large values compared to the 14 bit output. You can set it within the range from +4 to -4 V, for example if you want to exploit the delay, or even if you want to compensate it with proportional gain.
Input filters
The pid module has one more feature: A bank of 4 input filters in series. These filters can be either off (bandwidth=0), lowpass (bandwidth positive) or highpass (bandwidth negative). The way these filters were implemented demands that the filter bandwidths can only take values that scale as the powers of 2.
End of explanation
"""
#reload to make sure settings are default ones
#from pyrpl import Pyrpl
#r = Pyrpl(hostname=HOSTNAME, config='tutorial').rp
#shortcut
iq = r.iq0
# modulation/demodulation frequency 25 MHz
# two lowpass filters with 10 and 20 kHz bandwidth
# input signal is analog input 1
# input AC-coupled with cutoff frequency near 50 kHz
# modulation amplitude 0.1 V
# modulation goes to out1
# output_signal is the demodulated quadrature 1
# quadrature_1 is amplified by 10
iq.setup(frequency=25e6, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.5,
input='in1', output_direct='out1',
output_signal='quadrature', quadrature_factor=10)
"""
Explanation: You should now go back to the Scope and ASG example above and play around with the setting of these filters to convince yourself that they do what they are supposed to.
IQ module
Demodulation of a signal means convolving it with a sine and a cosine at the 'carrier frequency'. The two resulting signals are usually low-pass filtered and called 'quadrature I' and 'quadrature Q'. Based on this simple idea, the IQ module of pyrpl can implement several functionalities, depending on the particular setting of the various registers. In most cases, the configuration can be completely carried out through the setup function of the module.
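To make the idea concrete, here is a minimal numpy sketch of demodulation (all numbers are just example values, no hardware involved): the input is multiplied by a cosine and a sine reference at the carrier frequency, and simple averaging plays the role of the low-pass filter.
```
import numpy as np
fs, fc = 125e6, 25e6                                  # example sampling and carrier frequencies
t = np.arange(10000) / fs
signal = 0.1 * np.sin(2 * np.pi * fc * t + 0.3)       # toy input: amplitude 0.1, phase 0.3 rad
I = 2 * np.mean(signal * np.cos(2 * np.pi * fc * t))  # quadrature I (the mean acts as a low-pass)
Q = 2 * np.mean(signal * np.sin(2 * np.pi * fc * t))  # quadrature Q
print(np.hypot(I, Q), np.arctan2(I, Q))               # recovers amplitude ~0.1 and phase ~0.3
```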
<img src="IQmodule.png">
Lock-in detection / PDH / synchronous detection
End of explanation
"""
# shortcut for na
na = p.networkanalyzer
na.iq_name = 'iq1'
#take transfer functions. first: iq1 -> iq1, second iq1->out1->(your cable)->adc1
na.setup(start=1e3,stop=62.5e6,points=1001,rbw=1000,amplitude=0.2,input='iq1',output_direct='off', acbandwidth=0, trace_average=1)
iq1 = na.single()
na.setup(start=1e3,stop=62.5e6,points=1001,rbw=1000,amplitude=0.2,input='in1',output_direct='out1', acbandwidth=0, trace_average=1)
adc1 = na.single()
f = na.data_x
#plot
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
%matplotlib inline
bodeplot([(f, iq1, "iq1->iq1"), (f, adc1, "iq1->out1->in1->iq1")], xlog=True)
"""
Explanation: After this setup, the demodulated quadrature is available as the output_signal of iq0, and can serve for example as the input of a PID module to stabilize the frequency of a laser to a reference cavity. The module was tested and is in daily use in our lab. Frequencies as low as 20 Hz and as high as 50 MHz have been used for this technique. At the present time, the functionality of a PDH-like detection as the one set up above cannot be conveniently tested internally. We plan to upgrade the IQ-module to VCO functionality in the near future, which will also enable testing the PDH functionality.
Network analyzer
When implementing complex functionality in the RedPitaya, the network analyzer module is by far the most useful tool for diagnostics. The network analyzer is able to probe the transfer function of any other module or external device by exciting the device with a sine of variable frequency and analyzing the resulting output from that device. This is done by demodulating the device output (=network analyzer input) with the same sine that was used for the excitation and a corresponding cosine, lowpass-filtering, and averaging the two quadratures for a well-defined number of cycles. From the two quadratures, one can extract the magnitude and phase shift of the device's transfer function at the probed frequencies. Let's illustrate the behaviour. For this example, you should connect output 1 to input 1 of your RedPitaya, such that we can compare the analog transfer function to a reference. Make sure you put a 50 Ohm terminator in parallel with input 1.
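The data returned by single() appears to be complex-valued, as suggested by the use of bodeplot below; if you prefer to inspect magnitude and phase by hand, a sketch along these lines should work, assuming iq1 and f as obtained in the accompanying code cell:
```
import numpy as np
import matplotlib.pyplot as plt
mag_db = 20 * np.log10(np.abs(iq1))     # magnitude of the transfer function in dB
phase_deg = np.angle(iq1, deg=True)     # phase in degrees
plt.subplot(2, 1, 1); plt.semilogx(f, mag_db); plt.ylabel("Magnitude [dB]")
plt.subplot(2, 1, 2); plt.semilogx(f, phase_deg); plt.ylabel("Phase [deg]"); plt.xlabel("Frequency [Hz]")
```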
End of explanation
"""
# shortcut for na and bpf (bandpass filter)
na = p.networkanalyzer
na.iq_name = 'iq1'
bpf = r.iq2
# setup bandpass
bpf.setup(frequency = 2.5e6, #center frequency
Q=10.0, # the filter quality factor
acbandwidth = 10e5, # ac filter to remove pot. input offsets
phase=0, # nominal phase at center frequency (propagation phase lags not accounted for)
gain=2.0, # peak gain = +6 dB
output_direct='off',
output_signal='output_direct',
input='iq1')
# take transfer function
na.setup(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off', trace_average=1)
tf1 = na.single()
# add a phase advance of 82.3 degrees and measure transfer function
bpf.phase = 82.3
na.setup(start=1e5, stop=4e6, points=201, rbw=100, avg=3,
amplitude=0.2, input='iq2',output_direct='off', trace_average=1)
tf2 = na.single()
f = na.data_x
#plot
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
%matplotlib inline
bodeplot([(f, tf1, "phase = 0.0"), (f, tf2, "phase = %.1f"%bpf.phase)])
"""
Explanation: If your cable is properly connected, you will see that both magnitudes are near 0 dB over most of the frequency range. Near the Nyquist frequency (62.5 MHz), one can see that the internal signal remains flat while the analog signal is strongly attenuated, as it should be to avoid aliasing. One can also see that the delay (phase lag) of the internal signal is much less than the one through the analog signal path.
If you have executed the last example (PDH detection) in this python session, iq0 should still send a modulation to out1, which is added to the signal of the network analyzer, and sampled by input1. In this case, you should see a little peak near the PDH modulation frequency, which was 25 MHz in the example above.
Lorentzian bandpass filter
The iq module can also be used as a bandpass filter with continuously tunable phase. Let's measure the transfer function of such a bandpass with the network analyzer:
End of explanation
"""
iq = r.iq0
# turn off pfd module for settings
iq.pfd_on = False
# local oscillator frequency
iq.frequency = 33.7e6
# local oscillator phase
iq.phase = 0
iq.input = 'in1'
iq.output_direct = 'off'
iq.output_signal = 'pfd'
print("Before turning on:")
print("Frequency difference error integral", iq.pfd_integral)
print("After turning on:")
iq.pfd_on = True
for i in range(10):
print("Frequency difference error integral", iq.pfd_integral)
"""
Explanation: Frequency comparator module
To lock the frequency of a VCO (Voltage controlled oscillator) to a frequency reference defined by the RedPitaya, the IQ module contains the frequency comparator block. This is how you set it up. You have to feed the output of this module through a PID block to send it to the analog output. As you will see, if your feedback is not already enabled when you turn on the module, its integrator will rapidly saturate (-585 is the maximum value here, while a value of the order of 1e-3 indicates a reasonable frequency lock).
End of explanation
"""
#shortcut
iir = r.iir
#print docstring of the setup function
print(iir.setup.__doc__)
#prepare plot parameters
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
#setup a complicated transfer function
zeros = [ +4e4j-300,-2e5j-1000]
#[ -4e4j-300, +4e4j-300,-2e5j-1000, +2e5j-1000, -2e6j-3000, +2e6j-3000]
poles = [ -1e6, +5e4j-300]
#[ -1e6, -5e4j-300, +5e4j-300, -1e5j-3000, +1e5j-3000, -1e6j-30000, +1e6j-30000]
designdata = iir.setup(zeros=zeros, poles=poles, loops=None, plot=True);
print("Filter sampling frequency: ", 125./iir.loops,"MHz")
"""
Explanation: IIR module
Sometimes it is interesting to realize even more complicated filters. This is the case, for example, when a piezo resonance limits the maximum gain of a feedback loop. For these situations, the IIR module can implement filters with 'Infinite Impulse Response' (https://en.wikipedia.org/wiki/Infinite_impulse_response). It is your task to choose the filter to be implemented by specifying the complex values of its poles and zeros. In the current version of pyrpl, the IIR module can implement IIR filters with the following properties:
- strictly proper transfer function (number of poles > number of zeros)
- poles (zeros) either real or complex-conjugate pairs
- no three or more identical real poles (zeros)
- no two or more identical pairs of complex conjugate poles (zeros)
- pole and zero frequencies should be larger than $\frac{f_{\rm nyquist}}{1000}$ (but you can optimize the nyquist frequency of your filter by tuning the 'loops' parameter)
- the DC-gain of the filter must be 1.0. Despite the FPGA implementation being more flexible, we found this constraint rather practical. If you need different behavior, pass the IIR signal through a PID module and use its input filter and proportional gain. If you still need different behaviour, the file iir.py is a good starting point.
- total filter order <= 16 (realizable with 8 parallel biquads)
- a remaining bug limits the dynamic range to about 30 dB before internal saturation interferes with filter performance
Filters whose poles have a positive real part are unstable by design. Zeros with positive real part lead to non-minimum phase lag. Nevertheless, the IIR module will let you implement these filters.
In general, the IIR module is still fragile in the sense that you should verify the correct implementation of each filter you design. Usually you can trust the simulated transfer function. It is nevertheless a good idea to use the internal network analyzer module to actually measure the IIR transfer function with an amplitude comparable to the signal you expect to go through the filter, so as to verify that no saturation of internal filter signals limits its performance.
End of explanation
"""
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
p.rp.iir.input = 'iq1'
p.networkanalyzer.setup(iq_name='iq1', start=1e4, stop=3e6, points = 301, rbw=100, trace_average=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
tf = p.networkanalyzer.single()
f = p.networkanalyzer.data_x
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import matplotlib
matplotlib.rcParams['figure.figsize'] = (10, 6)
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
bodeplot([(f, iir.transfer_function(f),"designed system")] + [(f,tf,"measured system")],xlog=True)
"""
Explanation: If you try changing a few coefficients, you will see that your designed filter is not always properly realized. The bottleneck here is the conversion from the analytical expression (poles and zeros) to the filter coefficients, not the FPGA performance. This conversion is (among other things) limited by floating point precision. We hope to provide a more robust algorithm in future versions. If you can obtain filter coefficients by another, preferably analytical method, this might lead to better results than our generic algorithm.
Let's check if the filter is really working as it is supposed to:
End of explanation
"""
#rescale the filter by 20fold reduction of DC gain
iir.setup(zeros=zeros, poles=poles, g=0.1,loops=None,plot=False);
# first thing to check if the filter is not ok
print("IIR overflows before:", bool(iir.overflow))
# measure tf of iir filter
p.rp.iir.input = 'networkanalyzer'
p.networkanalyzer.setup(start=1e4, stop=3e6, points= 301, rbw=100, trace_average=1,
amplitude=0.1, input='iir', output_direct='off', logscale=True)
tf = p.networkanalyzer.single()
f = p.networkanalyzer.data_x
# first thing to check if the filter is not ok
print("IIR overflows after:", bool(iir.overflow))
#plot with design data
%matplotlib inline
import pylab
pylab.rcParams['figure.figsize'] = (10, 6)
from pyrpl.hardware_modules.iir.iir_theory import bodeplot
bodeplot([(f, p.rp.iir.transfer_function(f), "design")]+[(f,tf,"measured system")],xlog=True)
"""
Explanation: As you can see, the filter has trouble realizing large dynamic ranges. With the current standard design software, it takes some 'practice' to design transfer functions which are properly implemented by the code. While most zeros are properly realized by the filter, you see that the first two poles suffer from some kind of saturation. We are working on an automatic rescaling of the coefficients to allow for optimum dynamic range. From the overflow register printed above the plot, you can also see that the network analyzer scan caused an internal overflow in the filter. All these are signs that different parameters should be tried.
A straightforward way to improve filter performance is to adjust the DC-gain and compensate for it later with the gain of a subsequent PID module. See for yourself what the parameter g=0.1 (instead of the default value g=1.0) does here:
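If you want the overall chain to keep its original DC gain, a sketch (not part of the original example; it assumes pid1 is free and accepts 'iir' as an input, as suggested by the IIR constraints above) of compensating the reduced g with a subsequent PID module:
```
pid = p.rp.pid1          # assumption: pid1 is not used for anything else
pid.input = 'iir'        # feed the IIR output into the PID
pid.i = 0                # no integrator - pure gain stage
pid.p = 1.0 / 0.1        # proportional gain of 10 undoes the g=0.1 DC-gain scaling
```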
End of explanation
"""
iir = p.rp.iir
# useful diagnostic functions
print("IIR on:", iir.on)
#print("IIR bypassed:", iir.shortcut)
#print("IIR copydata:", iir.copydata)
print("IIR loops:", iir.loops)
print("IIR overflows:", iir.overflow)
print("\nCoefficients (6 per biquad):")
print(iir.coefficients)
# set the unity transfer function to the filter
iir._setup_unity()
"""
Explanation: You see that we have improved the second peak (and avoided internal overflows) at the cost of increased noise in other regions. Of course this noise can be reduced by increasing the NA averaging time. But maybe it will be detrimental to your application? After all, IIR filter design is far from trivial, but this tutorial should have given you enough information to get started and maybe to improve the way we have implemented the filter in pyrpl (e.g. by implementing automated filter coefficient scaling).
If you plan to play more with the filter, these are the remaining internal iir registers:
End of explanation
"""
pid = p.rp.pid0
print(pid.help())
pid.ival #bug: help forgets about pid.ival: current integrator value [volts]
"""
Explanation: 6) The Pyrpl class
The RedPitayas in our lab are mostly used to stabilize one item or another in quantum optics experiments. To do so, the experimenter usually does not want to bother with the detailed implementation on the RedPitaya while trying to understand the physics going on in her/his experiment. For this situation, we have developed the Pyrpl class, which provides an API with high-level functions such as:
# optimal pdh-lock with setpoint 0.1 cavity bandwidth away from resonance
cavity.lock(method='pdh',detuning=0.1)
# unlock the cavity
cavity.unlock()
# calibrate the fringe height of an interferometer, and lock it at local oscillator phase 45 degrees
interferometer.lock(phase=45.0)
First attempts at locking
SECTION NOT READY YET, BECAUSE CODE NOT CLEANED YET
Now let's make a first attempt to lock something. Say you connect the error signal (transmission or reflection) of your setup to input 1. Make sure that the peak-to-peak of the error signal coincides with the maximum voltages the RedPitaya can handle (-1 to +1 V if the jumpers are set to LV). This is important for getting optimal noise performance. If your signal is too low, amplify it. If it is too high, you should build a voltage divider with 2 resistors of the order of a few kOhm (that way, the input impedance of the RedPitaya of 1 MOhm does not interfere).
Next, connect output 1 to the standard actuator at hand, e.g. a piezo. Again, you should try to exploit the full -1 to +1 V output range. If the voltage at the actuator must be kept below 0.5 V, for example, you should make another voltage divider for this. Make sure that you take the input impedance of your actuator into consideration here. If your output needs to be amplified, it is best practice to put the voltage divider after the amplifier so as to also attenuate the noise added by the amplifier. However, when this poses a problem (limited bandwidth because of the capacitance of the actuator), you have to put the voltage divider before the amplifier. Also, this is the moment when you should think about low-pass filtering the actuator voltage. Because of DAC noise, analog low-pass filters are usually more effective than digital ones. A 3dB bandwidth of the order of 100 Hz is a good starting point for most piezos.
You often need two actuators to control your cavity. This is because the output resolution of 14 bits can only realize 16384 different values. This would mean that with a finesse of 15000, you would only be able to set the cavity to resonance or a linewidth away from it, but nothing in between. To solve this, use a coarse actuator that covers at least one free spectral range and brings you near the resonance, and a fine one whose range is 1000 or 10000 times smaller and which gives you plenty of graduation around the resonance. The coarse actuator should be strongly low-pass filtered (typical bandwidth of 1 Hz or even less); the fine actuator can have 100 Hz or even higher bandwidth. Do not get confused here: the unity-gain frequency of your final lock can be 10- or even 100-fold above the 3dB bandwidth of the analog filter at the output - it suffices to increase the proportional gain of the RedPitaya Lockbox.
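To put numbers on the resolution argument, a quick back-of-the-envelope sketch:
```
full_range = 2.0            # -1 V to +1 V output range
steps = 2 ** 14             # 14-bit DAC resolution
print(full_range / steps)   # ~122 µV smallest output step
# if one free spectral range is mapped onto the full range, a cavity with finesse 15000
# has a linewidth of roughly one output step:
print(steps / 15000)        # ~1.1 steps per linewidth
```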
Once everything is connected, let's grab a PID module, make a shortcut to it and print its help string. All modules have a method help() which prints all available registers and their descriptions:
End of explanation
"""
pid.input = 'in1'
pid.output_direct = 'out1'
#see other available options just for curiosity:
print(pid.inputs)
print(pid.output_directs)
"""
Explanation: We need to inform our RedPitaya about which connections we want to make. The cabling discussed above translates into:
End of explanation
"""
# turn on the laser
offresonant = p.rp.scope.voltage_in1 #volts at analog input 1 with the unlocked cavity
# make a guess of what voltage you will measure at an optical resonance
resonant = 0.5 #Volts at analog input 1
# set the setpoint at relative reflection of 0.75 / rel. transmission of 0.25
pid.setpoint = 0.75*offresonant + 0.25*resonant
"""
Explanation: Finally, we need to define a setpoint. Let's first measure the offset when the laser is away from the resonance, and then measure or estimate how much light gets through on resonance.
End of explanation
"""
pid.i = 0 # make sure gain is off
pid.p = 0
#errorsignal = adc1 - setpoint
if resonant > offresonant: # when we are away from resonance, error is negative.
slopesign = 1.0 # therefore, near resonance, the slope is positive as the error crosses zero.
else:
slopesign = -1.0
gainsign = -slopesign #the gain must be the opposite to stabilize
# the effective gain will in any case be slopesign*gainsign = -1.
#Therefore we must start at the maximum positive voltage, so the negative effective gain leads to a decreasing output
pid.ival = 1.0 #sets the integrator value = output voltage to maximum
from time import sleep
sleep(1.0) #wait for the voltage to stabilize (adjust for a few times the lowpass filter bandwidth)
#finally, turn on the integrator
pid.i = gainsign * 0.1
#no-test
#with a bit of luck, this should work
from time import time
t0 = time()
while True:
relative_error = abs((p.rp.scope.voltage_in1-pid.setpoint)/(offresonant-resonant))
if time()-t0 > 2: #diagnostics every 2 seconds
print("relative error:",relative_error)
t0 = time()
if relative_error < 0.1:
break
sleep(0.01)
if pid.ival <= -1:
print("Resonance missed. Trying again slower..")
pid.ival = 1.2 #overshoot a little
pid.i /= 2
print("Resonance approch successful")
"""
Explanation: Now let's start to approach the resonance. We need to figure out from which side we are coming. The choice is made such that a simple integrator will naturally drift into the resonance and stay there:
End of explanation
"""
#shortcut
iq = p.rp.iq0
iq.setup(frequency=1000e3, bandwidth=[10e3,20e3], gain=0.0,
phase=0, acbandwidth=50000, amplitude=0.4,
input='in1', output_direct='out1',
output_signal='output_direct', quadrature_factor=0)
iq.frequency=10
p.rp.scope.input1='in1'
# shortcut for na
na = p.networkanalyzer
na.iq_name = "iq1"
# pid1 will be our device under test
pid = p.rp.pid0
pid.input = 'iq1'
pid.i = 0
pid.ival = 0
pid.p = 1.0
pid.setpoint = 0
pid.inputfilter = []#[-1e3, 5e3, 20e3, 80e3]
# take the transfer function through pid1, this will take a few seconds...
na.setup(start=0,stop=200e3,points=101,rbw=100,avg=1,amplitude=0.5,input='iq1',output_direct='off', acbandwidth=0)
y = na.single()
x = na.data_x
#plot
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
plt.plot(x*1e-3,np.abs(y)**2);
plt.xlabel("Frequency [kHz]");
plt.ylabel("|S21|");
"""
Explanation: Questions to users: what parameters do you know?
finesse of the cavity? 1000
length? 1.57m
what error signals are available? direct transmission, AC reflection -> analog PDH directly
are modulators available? n/a
what cavity length / laser frequency actuators are available? Mephisto PZT DC - 10 kHz, 48 MHz opt./V, V_rp amplified x20
laser temperature <1 Hz, 2.5 GHz/V, after the AOM
what is known about them (displacement, bandwidth, amplifiers)?
what analog filters are present? YAG PZT at 10 kHz
impose the design of the outputs
More to come
End of explanation
"""
#no-test
from pyrpl import Pyrpl
p = Pyrpl(hostname=HOSTNAME, config='tutorial')
"""
Explanation: 7) The Graphical User Interface
Most of the modules described in section 5 can be controlled via a graphical user interface. The graphical window can be displayed with the following:
WARNING: For the GUI to work fine within an ipython session, the option --gui=qt has to be given to the command launching ipython. This makes sure that an event loop is running.
End of explanation
"""
|
moble/PostNewtonian
|
C++/TestBackwardsEvolution.ipynb
|
mit
|
v_i = 0.15
m1 = 0.4
m2 = 0.6
chi1_i = [0.1,0.2,0.3]
chi2_i = [0.2,0.3,0.4]
R_frame_i = Quaternions.Quaternion(1,0,0,0)
ForwardInTime = True
v_0 = 0.9*v_i
tA,vA,chi1A,chi2A,R_frameA,PhiA = \
PNEvolution.EvolvePN("TaylorT1", 4.0, v_0, v_i, m1, m2, chi1_i, chi2_i, R_frame_i, ForwardInTime)
plot(tA, vA, label='v_0 = {0}'.format(v_0))
v_0 = v_i
tB,vB,chi1B,chi2B,R_frameB,PhiB = \
PNEvolution.EvolvePN("TaylorT1", 4.0, v_0, v_i, m1, m2, chi1_i, chi2_i, R_frame_i, ForwardInTime)
plot(tB, vB, label='v_0 = {0}'.format(v_0))
legend()
"""
Explanation: The following calculates a PN evolution with initial velocity $v=0.15$, and a starting velocity $v$ of either $0.15$ or $90\%$ of that. The two evolutions have the same "initial" conditions, and so should be identical after that point. But the evolution starting at $v=0.9 \times 0.15$ calculates the portion of the evolution that came before the "initial" velocity. This distinction is important with precessing systems because we want to give the spins and binary orientation at the "initial" time, but we also want to know data from before that time.
End of explanation
"""
max(tA[tA.size-tB.size:]-tB)
"""
Explanation: Now, to show that the times calculated by both evolutions are identical:
End of explanation
"""
max(vA[tA.size-tB.size:]-vB)
"""
Explanation: And the velocities:
End of explanation
"""
norm(chi1A[tA.size-tB.size:]-chi1B), norm(chi2A[tA.size-tB.size:]-chi2B)
"""
Explanation: And the spins:
End of explanation
"""
v_i = 0.15
m1 = 0.4
m2 = 0.6
chi1_i = [0.1,0.2,0.3]
chi2_i = [0.2,0.3,0.4]
R_frame_i = Quaternions.Quaternion(1,0,0,0)
ForwardInTime = True
v_0 = 0.9*v_i
tC,vC,chi1C,chi2C,R_frameC,PhiC = \
PNEvolution.EvolvePN_Q("TaylorT1", 4.0, v_0, v_i, m1, m2, chi1_i, chi2_i, R_frame_i, ForwardInTime)
plot(tC, vC, label='v_0 = {0}'.format(v_0))
v_0 = v_i
tD,vD,chi1D,chi2D,R_frameD,PhiD = \
PNEvolution.EvolvePN_Q("TaylorT1", 4.0, v_0, v_i, m1, m2, chi1_i, chi2_i, R_frame_i, ForwardInTime)
plot(tD, vD, label='v_0 = {0}'.format(v_0))
legend()
max(tC[tC.size-tD.size:]-tD)
max(vC[tC.size-tD.size:]-vD)
norm(chi1C[tC.size-tD.size:]-chi1D), norm(chi2C[tC.size-tD.size:]-chi2D)
"""
Explanation: Quaternion evolution system
End of explanation
"""
|
abulbasar/machine-learning
|
Scikit - 03 Linear Regression.ipynb
|
apache-2.0
|
df_null_idx = df[df.isnull().sum(axis = 1) > 0].index
df.iloc[df_null_idx]
median_values = df.groupby("State")[["R&D Spend", "Marketing Spend"]].median()
median_values
df["R&D Spend"] = df.apply(lambda row: median_values.loc[row["State"], "R&D Spend"] if np.isnan(row["R&D Spend"]) else row["R&D Spend"], axis = 1 )
df["Marketing Spend"] = df.apply(lambda row: median_values.loc[row["State"], "Marketing Spend"] if np.isnan(row["Marketing Spend"]) else row["Marketing Spend"], axis = 1 )
df.iloc[df_null_idx]
# Check if there are any more null values.
df.isnull().sum()
"""
Explanation: There are 50 observations and 5 columns. Four columns - R&D Spend, Administration, Marketing Spend, and Profit - are numeric and one - State - is categorical. There are 2 null values in the R&D Spend column and 3 in Marketing Spend.
Replace the null values with the median for the respective state.
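An equivalent and somewhat more concise way to do the same imputation, shown here only as an alternative sketch, uses groupby/transform instead of apply:
```
for col in ["R&D Spend", "Marketing Spend"]:
    df[col] = df[col].fillna(df.groupby("State")[col].transform("median"))
```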
End of explanation
"""
plt.figure(figsize = (8, 6))
plt.subplot(2, 1, 1)
df.Profit.plot.hist(bins = 10, normed = True)
df.Profit.plot.kde(title = "Histogram of Profit")
plt.subplot(2, 1, 2)
df.Profit.plot.box(vert = False, title = "Boxplot of Profit")
plt.tight_layout()
"""
Explanation: Let's see the distribution of Profit using a histogram and check whether there are any outliers in the data using a boxplot.
End of explanation
"""
sns.pairplot(df)
"""
Explanation: Profit has one outlier. We could try a log scale to remove the outlier before doing any prediction, but for now, let's ignore it.
Let's plot the association between each pair of columns.
End of explanation
"""
df.groupby("State").Profit.mean().sort_values().plot.bar(title = "Avg Profit by State")
plt.xlabel("State")
plt.ylabel("Profit")
"""
Explanation: The pair plot displays only the numeric columns. Let's see how the average Profit varies by State.
End of explanation
"""
y = df.Profit.values
y
"""
Explanation: Average Profit is highest in the state of Florida and lowest in California.
Let's create the y vector containing the outcome column.
End of explanation
"""
df_features = df.iloc[:, 0:4]
df_dummied = pd.get_dummies(df_features, columns=["State"], drop_first=True)
df_dummied.sample(10)
"""
Explanation: Create dummy variables for the categorical feature.
End of explanation
"""
X = df_dummied.values
X[0, :]
"""
Explanation: The State column has been replaced by two additional columns - one for Florida and one for New York. The first category, California, has been dropped to avoid the collinearity issue.
Now, let's create the X feature matrix and the y outcome vector.
End of explanation
"""
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
pd.DataFrame(X_std).head()
"""
Explanation: Let's normalize the feature values to bring them to a similar scale.
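StandardScaler applies $z = (x - \mu)/\sigma$ column by column; a quick sketch verifying this by hand for the first feature (numpy's default ddof=0 matches the scaler's population standard deviation):
```
manual = (X[:, 0] - X[:, 0].mean()) / X[:, 0].std()
print(np.allclose(manual, X_std[:, 0]))   # should print True
```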
End of explanation
"""
X_train, X_test, y_train, y_test = train_test_split(X_std, y,
test_size = 0.3, random_state = 100)
print("Training set: ", X_train.shape, y_train.shape)
print("Test set: ", X_test.shape, y_test.shape)
"""
Explanation: Split the X and y into training and test sets.
End of explanation
"""
X_train.shape[0] / df.shape[0]
"""
Explanation: Ratio of the training set size to the full dataset.
End of explanation
"""
lr = LinearRegression()
lr.fit(X_train, y_train)
lr.intercept_, lr.coef_
"""
Explanation: Fit linear regression model
End of explanation
"""
y_test_pred = lr.predict(X_test)
output = pd.DataFrame({"actual": y_test, "prediction": y_test_pred})
output["error"] = output.actual - output.prediction
output
"""
Explanation: By looking at the coefficients, we can conclude that R&D Spend has the highest influence on the outcome variable.
Predict the outcome based on the model
End of explanation
"""
X_test_inv = scaler.inverse_transform(X_test)
plt.scatter(X_test_inv[:, 0], y_test, alpha = 0.3, c = "blue", label = "Actual")
plt.scatter(X_test_inv[:, 0], y_test_pred, c = "red", label = "Predicted")
plt.xlabel("R&D Spend")
plt.ylabel("Profit")
plt.title("Profit Actual vs Estimate")
plt.legend()
np.mean((y_test_pred - y_test) ** 2)
y_train_pred = lr.predict(X_train)
"""
Explanation: The simplest prediction model would have been the average. Let's see how the model did overall against one feature.
End of explanation
"""
print("Test rmse: ", sqrt(mean_squared_error(y_test, y_test_pred)),
"\nTraining rmse:", sqrt(mean_squared_error(y_train, y_train_pred)))
"""
Explanation: Compare the root mean squared error (RMSE) of the test dataset against the training dataset.
End of explanation
"""
r2_score(y_test, y_test_pred), r2_score(y_train, y_train_pred)
"""
Explanation: The R2 score has a maximum value of 1; negative values of R2 indicate a suboptimal model.
End of explanation
"""
SSR = np.sum((y_train - y_train_pred) ** 2) # Sum of squared residuals
SST = np.sum((y_train - np.mean(y_train_pred)) ** 2) # Total sum of squares
R2 = 1 - SSR/SST
R2
"""
Explanation: On the training set, both the RMSE and R2 scores are naturally better than those on the test dataset.
Let's calculate the R2 score manually.
End of explanation
"""
from sklearn.feature_selection import f_regression
_, p_vals = f_regression(X_train, y_train)
p_vals
pd.DataFrame({"feature": df_dummied.columns, "p_value": p_vals})
"""
Explanation: R2 can be viewed as (1 - mse/variance(y))
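A quick sketch checking that identity on the training data, reusing the helper functions already used above (it should agree with r2_score up to floating point error):
```
print(1 - mean_squared_error(y_train, y_train_pred) / np.var(y_train))
print(r2_score(y_train, y_train_pred))
```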
Significance Scores for feature selection
End of explanation
"""
df = pd.read_csv("/data/Combined_Cycle_Power_Plant.csv")
df.head()
X = df.iloc[:, 0:4].values
y = df.PE.values
sns.pairplot(df)
scaler = StandardScaler()
X_std = scaler.fit_transform(X)
X_train, X_test, y_train, y_test = train_test_split(X_std, y, test_size = 0.3, random_state = 1)
def rmse(y_true, y_pred):
return sqrt(mean_squared_error(y_true, y_pred))
lr = LinearRegression(normalize=False)
lr.fit(X_train, y_train)
y_train_pred = lr.predict(X_train)
y_test_pred = lr.predict(X_test)
rmse(y_test, y_test_pred)
from scipy import stats
residuals = y_test - y_test_pred
plt.figure(figsize=(15, 6))
plt.subplot(1, 2, 1)
plt.scatter(y_test, residuals)
plt.xlabel("y_test")
plt.ylabel("Residuals")
plt.hlines([0], xmin = 420, xmax = 500, linestyles = "dashed")
plt.subplot(1, 2, 2)
stats.probplot(residuals, plot=plt)
"""
Explanation: The p-value indicates the significance of each feature. A p-value < 0.05 indicates that the corresponding feature is statistically significant. We can rebuild the model excluding the non-significant features one by one until all remaining features are significant.
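The recommendation above is to remove non-significant features one at a time; purely as an indexing sketch (using the p_vals array from the code cell), dropping all of them at once and refitting would look like this:
```
significant = p_vals < 0.05
lr_sig = LinearRegression().fit(X_train[:, significant], y_train)
print("Test R2 with significant features only:",
      r2_score(y_test, lr_sig.predict(X_test[:, significant])))
```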
Power Plant Dataset
Let's look at another dataset
End of explanation
"""
poly = PolynomialFeatures(degree=2)
X = df.iloc[:, 0:4].values
X_poly = poly.fit_transform(X)
X_poly_train, X_poly_test, y_train, y_test = train_test_split(X_poly, y, test_size = 0.3, random_state = 100)
X_poly_train_std = scaler.fit_transform(X_poly_train)
X_poly_test_std = scaler.transform(X_poly_test)
pd.DataFrame(X_poly_train_std).head()
lr.fit(X_poly_train_std, y_train)
print("Train rmse: ", rmse(y_train, lr.predict(X_poly_train_std)))
print("Test rmse: ", rmse(y_test, lr.predict(X_poly_test_std)))
print(lr.intercept_, lr.coef_)
"""
Explanation: The residual plot shows there are outliers at the lower end of the y_test values. The Q-Q plot shows that the residuals are not normally distributed, indicating non-linearity in the model.
End of explanation
"""
lasso = Lasso(alpha=0.03, max_iter=10000, normalize=False, random_state=100)
lasso.fit(X_poly_train_std, y_train)
print("Train rmse: ", rmse(y_train, lasso.predict(X_poly_train_std)))
print("Test rmse: ", rmse(y_test, lasso.predict(X_poly_test_std)))
print(lasso.intercept_, lasso.coef_)
"""
Explanation: Polynomial regression generally suffers from overfitting. Let's regularize the model using Lasso.
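For reference, scikit-learn's Lasso minimizes the least-squares cost plus an $\ell_1$ penalty on the coefficients, with $\alpha$ controlling the strength of the penalty:
$$\min_w \; \frac{1}{2\,n_{\mathrm{samples}}} \lVert y - Xw \rVert_2^2 + \alpha \lVert w \rVert_1$$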
End of explanation
"""
X_poly_std = scaler.fit_transform(X_poly)
lasso = Lasso(alpha=0.03, max_iter=10000, random_state=100)
scores = cross_val_score(lasso, X_poly_std, y, cv = 10, scoring="neg_mean_squared_error")
scores = np.sqrt(-scores)
print("RMSE scores", scores)
print("Mean rmse: ", np.mean(scores))
"""
Explanation: Let's find the cross-validation score, which is more reliable in the sense that every observation is used for both training and testing across the folds.
End of explanation
"""
from sklearn.pipeline import Pipeline
pipeline = Pipeline(steps = [
("poly", PolynomialFeatures(degree=2, include_bias=False)),
("scaler", StandardScaler()),
("lasso", Lasso(alpha=0.03, max_iter=10000, normalize=False, random_state=1))
])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size = 0.3, random_state = 1)
pipeline.fit(X_train, y_train)
rmse(y_test, pipeline.predict(X_test))
"""
Explanation: Encapsulate the steps in a pipeline
End of explanation
"""
# Find best alpha
lassocv = LassoCV(cv = 10, max_iter=10000, tol=1e-5)
lassocv.fit(X_poly_std, y)
print("Lassocv alpha: ", lassocv.alpha_)
# Apply the best alpha to find cross validation score
lasso = Lasso(alpha = lassocv.alpha_, max_iter=10000, random_state=100)
scores = cross_val_score(lasso, X_poly_std, y, cv = 10, scoring="neg_mean_squared_error")
print("Mean rmse: ", np.mean(np.sqrt(-scores)))
"""
Explanation: LassoCV helps find the best alpha. We could also use model tuning techniques to find the best alpha.
End of explanation
"""
coefs = []
alphas = 10 ** np.linspace(-5, 5, 20)
for alpha in alphas:
lasso = Lasso(alpha=alpha, max_iter=10000, tol=1e-5,random_state=100)
lasso.fit(X_poly_std, y)
coefs.append(lasso.coef_)
plt.plot(alphas, coefs)
plt.xscale("log")
plt.xlabel("Alpha (penalty term on the coefficients)")
plt.ylabel("Coefficients of the features")
"""
Explanation: Look at the coefficient values. Many of the features are now zero, making the model parsimonious and hence more robust - that is, less prone to overfitting.
Let's plot how the coefficients reach 0 as the alpha value is varied.
End of explanation
"""
poly = PolynomialFeatures(degree=2)
X = df.iloc[:, 0:4].values
X_poly = poly.fit_transform(X)
X_poly_train, X_poly_test, y_train, y_test = train_test_split(X_poly, y, test_size = 0.3, random_state = 100)
X_poly_train_std = scaler.fit_transform(X_poly_train)
X_poly_test_std = scaler.transform(X_poly_test)
gbm = xgb.XGBRegressor(max_depth=10, learning_rate=0.1, n_estimators=100,
objective='reg:linear', booster='gbtree',
reg_alpha=0.01, reg_lambda=1, random_state=0)
gbm.fit(X_poly_train_std, y_train)
print("rmse:", rmse(y_test, gbm.predict(X_poly_test_std)))
param = {'silent':1,
'objective':'reg:linear',
'booster':'gbtree',
'alpha': 0.01,
'lambda': 1
}
dtrain = xgb.DMatrix(X_poly_train_std, label=y_train)
dtest = xgb.DMatrix(X_poly_test_std, label=y_test)
watchlist = [(dtrain,'eval'), (dtest, 'train')]
num_round = 100
bst = xgb.train(param, dtrain, num_round, watchlist, verbose_eval=False)
print("rmse:", rmse(y_test, bst.predict(dtest)))
plt.figure(figsize=(8, 10))
xgb.plot_importance(bst)
"""
Explanation: From this graph, which alpha value should we select? That question can be answered by checking which alpha value gives the best performance (RMSE, for example). LassoCV does that for us, or we can use model tuning techniques such as grid search - that will be explained later.
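A sketch of the grid-search alternative mentioned above (the alpha grid is only an example):
```
from sklearn.model_selection import GridSearchCV
grid = GridSearchCV(Lasso(max_iter=10000, random_state=100),
                    param_grid={"alpha": 10 ** np.linspace(-5, 2, 15)},
                    cv=10, scoring="neg_mean_squared_error")
grid.fit(X_poly_std, y)
print("Best alpha:", grid.best_params_["alpha"])
print("Best cv rmse:", np.sqrt(-grid.best_score_))
```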
Xgboost
End of explanation
"""
|
tanmay987/deepLearning
|
tv-script-generation/.ipynb_checkpoints/dlnd_tv_script_generation-checkpoint.ipynb
|
mit
|
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
data_dir = './data/simpsons/moes_tavern_lines.txt'
text = helper.load_data(data_dir)
# Ignore notice, since we don't use it for analysing the data
text = text[81:]
"""
Explanation: TV Script Generation
In this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.
Get the Data
The data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like "Moe's Cavern", "Flaming Moe's", "Uncle Moe's Family Feed-Bag", etc..
End of explanation
"""
view_sentence_range = (0, 10)
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import numpy as np
print('Dataset Stats')
print('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))
scenes = text.split('\n\n')
print('Number of scenes: {}'.format(len(scenes)))
sentence_count_scene = [scene.count('\n') for scene in scenes]
print('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))
sentences = [sentence for scene in scenes for sentence in scene.split('\n')]
print('Number of lines: {}'.format(len(sentences)))
word_count_sentence = [len(sentence.split()) for sentence in sentences]
print('Average number of words in each line: {}'.format(np.average(word_count_sentence)))
print()
print('The sentences {} to {}:'.format(*view_sentence_range))
print('\n'.join(text.split('\n')[view_sentence_range[0]:view_sentence_range[1]]))
"""
Explanation: Explore the Data
Play around with view_sentence_range to view different parts of the data.
End of explanation
"""
import numpy as np
from collections import Counter
import problem_unittests as tests
def create_lookup_tables(text):
"""
Create lookup tables for vocabulary
:param text: The text of tv scripts split into words
:return: A tuple of dicts (vocab_to_int, int_to_vocab)
"""
# TODO: Implement Function
counts= Counter(text)
vocab=sorted(counts,key=counts.get, reverse=True)
vocab_to_int={word:ii for ii ,word in enumerate(vocab)}
int_to_vocab={ii:word for ii ,word in enumerate(vocab)}
return vocab_to_int, int_to_vocab
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_create_lookup_tables(create_lookup_tables)
"""
Explanation: Implement Preprocessing Functions
The first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:
- Lookup Table
- Tokenize Punctuation
Lookup Table
To create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:
- Dictionary to go from the words to an id, we'll call vocab_to_int
- Dictionary to go from the id to word, we'll call int_to_vocab
Return these dictionaries in the following tuple (vocab_to_int, int_to_vocab)
End of explanation
"""
def token_lookup():
"""
Generate a dict to turn punctuation into a token.
:return: Tokenize dictionary where the key is the punctuation and the value is the token
"""
table = {'.': '|period|',
',': '|comma|',
'"': '|quotation_mark|',
';': '|semicolon|',
'!': '|exclamation_mark|',
'?': '|question_mark|',
'(': '|left_parentheses|',
')': '|right_parentheses|',
'--': '|dash|',
'\n': '|return|'}
return table
# TODO: Implement Function
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_tokenize(token_lookup)
"""
Explanation: Tokenize Punctuation
We'll be splitting the script into a word array using spaces as delimiters. However, punctuation marks like periods and exclamation marks make it hard for the neural network to distinguish between the word "bye" and "bye!".
Implement the function token_lookup to return a dict that will be used to tokenize symbols like "!" into "||Exclamation_Mark||". Create a dictionary for the following symbols where the symbol is the key and value is the token:
- Period ( . )
- Comma ( , )
- Quotation Mark ( " )
- Semicolon ( ; )
- Exclamation mark ( ! )
- Question mark ( ? )
- Left Parentheses ( ( )
- Right Parentheses ( ) )
- Dash ( -- )
- Return ( \n )
This dictionary will be used to tokenize the symbols and add the delimiter (space) around them. This separates each symbol as its own word, making it easier for the neural network to predict the next word. Make sure you don't use a token that could be confused with a word. Instead of using the token "dash", try using something like "||dash||".
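As a tiny illustration (with a made-up line of dialogue) of how such a lookup might be applied before splitting on spaces:
```
token_dict = token_lookup()
sample = "Moe_Szyslak: Hey, what can I get you?"
for symbol, token in token_dict.items():
    sample = sample.replace(symbol, " {} ".format(token))
print(sample.split())
```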
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Preprocess Training, Validation, and Testing Data
helper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)
"""
Explanation: Preprocess all the data and save it
Running the code cell below will preprocess all the data and save it to file.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import helper
import numpy as np
import problem_unittests as tests
int_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
"""
Explanation: Check Point
This is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
from distutils.version import LooseVersion
import warnings
import tensorflow as tf
# Check TensorFlow Version
assert LooseVersion(tf.__version__) >= LooseVersion('1.0'), 'Please use TensorFlow version 1.0 or newer'
print('TensorFlow Version: {}'.format(tf.__version__))
# Check for a GPU
if not tf.test.gpu_device_name():
warnings.warn('No GPU found. Please use a GPU to train your neural network.')
else:
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
"""
Explanation: Build the Neural Network
You'll build the components necessary to build a RNN by implementing the following functions below:
- get_inputs
- get_init_cell
- get_embed
- build_rnn
- build_nn
- get_batches
Check the Version of TensorFlow and Access to GPU
End of explanation
"""
def get_inputs():
"""
Create TF Placeholders for input, targets, and learning rate.
:return: Tuple (input, targets, learning rate)
"""
input = tf.placeholder(tf.int32, shape=(None,None), name='input')
targets= tf.placeholder(tf.int32, shape=(None,None), name='targets')
learning_rate = tf.placeholder(tf.float32)
# TODO: Implement Function
return input, targets, learning_rate
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_inputs(get_inputs)
"""
Explanation: Input
Implement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:
- Input text placeholder named "input" using the TF Placeholder name parameter.
- Targets placeholder
- Learning Rate placeholder
Return the placeholders in the following tuple (Input, Targets, LearningRate)
End of explanation
"""
def get_init_cell(batch_size, rnn_size):
"""
Create an RNN Cell and initialize it.
:param batch_size: Size of batches
:param rnn_size: Size of RNNs
:return: Tuple (cell, initialize state)
"""
lstm_layers = 2
cell = tf.contrib.rnn.BasicLSTMCell(num_units=rnn_size)
# TODO: Implement Function
cell = tf.contrib.rnn.MultiRNNCell([cell]*lstm_layers)
initial_state=cell.zero_state(batch_size,tf.float32)
initial_state = tf.identity(initial_state, name='initial_state')
return cell, initial_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_init_cell(get_init_cell)
"""
Explanation: Build RNN Cell and Initialize
Stack one or more BasicLSTMCells in a MultiRNNCell.
- The RNN size should be set using rnn_size
- Initialize Cell State using the MultiRNNCell's zero_state() function
- Apply the name "initial_state" to the initial state using tf.identity()
Return the cell and initial state in the following tuple (Cell, InitialState)
End of explanation
"""
def get_embed(input_data, vocab_size, embed_dim):
"""
Create embedding for <input_data>.
:param input_data: TF placeholder for text input.
:param vocab_size: Number of words in vocabulary.
:param embed_dim: Number of embedding dimensions
:return: Embedded input.
"""
embedding = tf.Variable(tf.random_uniform((vocab_size, embed_dim),-1,1))
embed = tf.nn.embedding_lookup(embedding, input_data)
# TODO: Implement Function
return embed
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_embed(get_embed)
"""
Explanation: Word Embedding
Apply embedding to input_data using TensorFlow. Return the embedded sequence.
End of explanation
"""
def build_rnn(cell, inputs):
"""
Create a RNN using a RNN Cell
:param cell: RNN Cell
:param inputs: Input text data
:return: Tuple (Outputs, Final State)
"""
    # unroll the cell over the time dimension of `inputs`
    outputs, final_state = tf.nn.dynamic_rnn(cell, inputs, dtype=tf.float32)
    final_state = tf.identity(final_state, name="final_state")
    return outputs, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_rnn(build_rnn)
"""
Explanation: Build RNN
You created an RNN Cell in the get_init_cell() function. Time to use the cell to create an RNN.
- Build the RNN using tf.nn.dynamic_rnn()
- Apply the name "final_state" to the final state using tf.identity()
Return the outputs and final state in the following tuple (Outputs, FinalState)
End of explanation
"""
def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim=300):
"""
Build part of the neural network
:param cell: RNN cell
:param rnn_size: Size of rnns
:param input_data: Input data
:param vocab_size: Vocabulary size
:param embed_dim: Number of embedding dimensions
:return: Tuple (Logits, FinalState)
"""
    embed = get_embed(input_data, vocab_size, embed_dim)
    output, final_state = build_rnn(cell, embed)
    # fully connected layer with linear activation: one logit per vocabulary word
    logits = tf.contrib.layers.fully_connected(output, vocab_size, activation_fn=None)
    return logits, final_state
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_build_nn(build_nn)
"""
Explanation: Build the Neural Network
Apply the functions you implemented above to:
- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.
- Build RNN using cell and your build_rnn(cell, inputs) function.
- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.
Return the logits and final state in the following tuple (Logits, FinalState)
End of explanation
"""
def get_batches(int_text, batch_size, seq_length):
"""
Return batches of input and target
:param int_text: Text with the words replaced by their ids
:param batch_size: The size of batch
:param seq_length: The length of sequence
:return: Batches as a Numpy array
"""
    num_batches = len(int_text) // (batch_size * seq_length)
    batches = np.zeros([num_batches, 2, batch_size, seq_length], dtype=np.int32)
    for idx in range(0, len(int_text), seq_length):
        batch_no = (idx // seq_length) % num_batches
        batch_idx = idx // (seq_length * num_batches)
        if batch_idx == batch_size:
            # drop any leftover data that cannot fill a complete batch
            break
        batches[batch_no, 0, batch_idx, ] = int_text[idx:idx + seq_length]
        batches[batch_no, 1, batch_idx, ] = int_text[idx + 1:idx + seq_length + 1]
    # the last target of the last batch wraps around to the first input value
    batches[num_batches - 1, 1, batch_size - 1, seq_length - 1] = batches[0, 0, 0, 0]
    return batches
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_batches(get_batches)
"""
Explanation: Batches
Implement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:
- The first element is a single batch of input with the shape [batch size, sequence length]
- The second element is a single batch of targets with the shape [batch size, sequence length]
If you can't fill the last batch with enough data, drop the last batch.
For example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:
```
[
# First Batch
[
# Batch of Input
[[ 1 2], [ 7 8], [13 14]]
# Batch of targets
[[ 2 3], [ 8 9], [14 15]]
]
# Second Batch
[
# Batch of Input
[[ 3 4], [ 9 10], [15 16]]
# Batch of targets
[[ 4 5], [10 11], [16 17]]
]
# Third Batch
[
# Batch of Input
[[ 5 6], [11 12], [17 18]]
# Batch of targets
[[ 6 7], [12 13], [18 1]]
]
]
```
Notice that the last target value in the last batch is the first input value of the first batch, in this case 1. Wrapping the final target around to the first input is a common technique when creating sequence batches, even though it looks unintuitive at first.
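For reference, the same batching can also be sketched with plain numpy reshapes. This is only an equivalent construction, not the implementation used in the get_batches cell; it assumes numpy is imported as np and the inputs match the signature above:
```python
import numpy as np

def get_batches_vectorized(int_text, batch_size, seq_length):
    # keep only enough words to fill complete batches
    n_batches = len(int_text) // (batch_size * seq_length)
    keep = n_batches * batch_size * seq_length
    xdata = np.array(int_text[:keep])
    # targets are the inputs shifted by one word; the final target wraps to the first input
    ydata = np.roll(xdata, -1)
    x_batches = np.split(xdata.reshape(batch_size, -1), n_batches, axis=1)
    y_batches = np.split(ydata.reshape(batch_size, -1), n_batches, axis=1)
    return np.array(list(zip(x_batches, y_batches)))
```
With the [1, 2, ..., 20] example above, this produces exactly the three batches shown.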
End of explanation
"""
# Number of Epochs
num_epochs = 100
# Batch Size
batch_size = 128
# RNN Size
rnn_size = 512
# Sequence Length
seq_length = 64
# Learning Rate
learning_rate = 0.01
# Show stats for every n number of batches
show_every_n_batches = 100
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
save_dir = './save'
"""
Explanation: Neural Network Training
Hyperparameters
Tune the following parameters:
Set num_epochs to the number of epochs.
Set batch_size to the batch size.
Set rnn_size to the size of the RNNs.
Set embed_dim to the size of the embedding (this notebook leaves it at the build_nn default of 300).
Set seq_length to the length of sequence.
Set learning_rate to the learning rate.
Set show_every_n_batches to the number of batches the neural network should print progress.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
from tensorflow.contrib import seq2seq
train_graph = tf.Graph()
with train_graph.as_default():
vocab_size = len(int_to_vocab)
input_text, targets, lr = get_inputs()
input_data_shape = tf.shape(input_text)
cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)
logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim=300)
# Probabilities for generating words
probs = tf.nn.softmax(logits, name='probs')
# Loss function
cost = seq2seq.sequence_loss(
logits,
targets,
tf.ones([input_data_shape[0], input_data_shape[1]]))
# Optimizer
optimizer = tf.train.AdamOptimizer(lr)
# Gradient Clipping
gradients = optimizer.compute_gradients(cost)
capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]
train_op = optimizer.apply_gradients(capped_gradients)
"""
Explanation: Build the Graph
Build the graph using the neural network you implemented.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
batches = get_batches(int_text, batch_size, seq_length)
with tf.Session(graph=train_graph) as sess:
sess.run(tf.global_variables_initializer())
for epoch_i in range(num_epochs):
state = sess.run(initial_state, {input_text: batches[0][0]})
for batch_i, (x, y) in enumerate(batches):
feed = {
input_text: x,
targets: y,
initial_state: state,
lr: learning_rate}
train_loss, state, _ = sess.run([cost, final_state, train_op], feed)
# Show every <show_every_n_batches> batches
if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:
print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(
epoch_i,
batch_i,
len(batches),
train_loss))
# Save Model
saver = tf.train.Saver()
saver.save(sess, save_dir)
print('Model Trained and Saved')
"""
Explanation: Train
Train the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
# Save parameters for checkpoint
helper.save_params((seq_length, save_dir))
"""
Explanation: Save Parameters
Save seq_length and save_dir for generating a new TV script.
End of explanation
"""
"""
DON'T MODIFY ANYTHING IN THIS CELL
"""
import tensorflow as tf
import numpy as np
import helper
import problem_unittests as tests
_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()
seq_length, load_dir = helper.load_params()
"""
Explanation: Checkpoint
End of explanation
"""
def get_tensors(loaded_graph):
"""
Get input, initial state, final state, and probabilities tensor from <loaded_graph>
:param loaded_graph: TensorFlow graph loaded from file
:return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
"""
input_tensor = loaded_graph.get_tensor_by_name('input:0')
init_state_tensor = loaded_graph.get_tensor_by_name('initial_state:0')
final_state_tensor = loaded_graph.get_tensor_by_name('final_state:0')
probs_tensor = loaded_graph.get_tensor_by_name('probs:0')
return input_tensor, init_state_tensor, final_state_tensor, probs_tensor
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_get_tensors(get_tensors)
"""
Explanation: Implement Generate Functions
Get Tensors
Get tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:
- "input:0"
- "initial_state:0"
- "final_state:0"
- "probs:0"
Return the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)
End of explanation
"""
def pick_word(probabilities, int_to_vocab):
"""
Pick the next word in the generated text
:param probabilities: Probabilites of the next word
:param int_to_vocab: Dictionary of word ids as the keys and words as the values
:return: String of the predicted word
"""
# TODO: Implement Function
return int_to_vocab[np.argmax(probabilities)]
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
tests.test_pick_word(pick_word)
"""
Explanation: Choose Word
Implement the pick_word() function to select the next word using probabilities.
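One simple choice is to always take the most likely word (argmax), as the cell above does. A sketch of a sampling-based alternative, assuming `probabilities` is a 1-D numpy array over the vocabulary, usually yields more varied text:
```python
import numpy as np

def pick_word_sampled(probabilities, int_to_vocab):
    # normalise defensively, then draw a word id according to the predicted distribution
    probabilities = np.asarray(probabilities, dtype=np.float64)
    probabilities = probabilities / probabilities.sum()
    word_id = np.random.choice(len(probabilities), p=probabilities)
    return int_to_vocab[int(word_id)]
```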
End of explanation
"""
gen_length = 200
# homer_simpson, moe_szyslak, or Barney_Gumble
prime_word = 'moe_szyslak'
"""
DON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE
"""
loaded_graph = tf.Graph()
with tf.Session(graph=loaded_graph) as sess:
# Load saved model
loader = tf.train.import_meta_graph(load_dir + '.meta')
loader.restore(sess, load_dir)
# Get Tensors from loaded model
input_text, initial_state, final_state, probs = get_tensors(loaded_graph)
# Sentences generation setup
gen_sentences = [prime_word + ':']
prev_state = sess.run(initial_state, {input_text: np.array([[1]])})
# Generate sentences
for n in range(gen_length):
# Dynamic Input
dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]
dyn_seq_length = len(dyn_input[0])
# Get Prediction
probabilities, prev_state = sess.run(
[probs, final_state],
{input_text: dyn_input, initial_state: prev_state})
pred_word = pick_word(probabilities[dyn_seq_length-1], int_to_vocab)
gen_sentences.append(pred_word)
# Remove tokens
tv_script = ' '.join(gen_sentences)
for key, token in token_dict.items():
ending = ' ' if key in ['\n', '(', '"'] else ''
tv_script = tv_script.replace(' ' + token.lower(), key)
tv_script = tv_script.replace('\n ', '\n')
tv_script = tv_script.replace('( ', '(')
print(tv_script)
"""
Explanation: Generate TV Script
This will generate the TV script for you. Set gen_length to the length of TV script you want to generate.
End of explanation
"""
|
IBMDecisionOptimization/docplex-examples
|
examples/mp/jupyter/sparktrans/SparkML_transformers_pipeline.ipynb
|
apache-2.0
|
try:
import numpy as np
except ImportError:
    raise RuntimeError('This notebook requires numpy')
"""
Explanation: Embedding CPLEX in a ML Spark Pipeline
Spark ML provides a uniform set of high-level APIs that help users create and tune practical machine learning pipelines.
In this notebook, we show how to embed CPLEX as a Spark transformer class.
DOcplex provides transformer classes that take a matrix X of constraints and a vector y of costs and solves a linear problem using CPLEX.
Transformer classes share a solve(X, Y, **params) method which expects:
- an X matrix containing the constraints of the linear problem
- a Y vector containing the cost coefficients.
The transformer classes require a Spark DataFrame for the 'X' matrix, and support various formats for the 'Y' vector:
Python lists,
numpy vector,
pandas Series,
Spark columns
The same formats are also supported to optionally specify upper bounds for decision variables.
DOcplex transformer classes
There are two DOcplex transformer classes:
$CplexTransformer$ expects to solve a linear problem in the classical form:
$$ \text{minimize} \; C^{t} x \;\; \text{s.t.} \;\; Ax \leq B$$
Where $A$ is a (M,N) matrix describing the constraints and $B$ is a scalar vector of size M, containing the right hand sides of the constraints, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+1) matrix, where the last column contains the right hand sides.
$CplexRangeTransformer$ expects to solve linear problem as a set of range constraints:
$$ \text{minimize} \; C^{t} x \;\; \text{s.t.} \;\; m \leq Ax \leq M$$
Where $A$ is a (M,N) matrix describing the constraints, $m$ and $M$ are two scalar vectors of size M, containing the minimum and maximum values for the row expressions, and $C$ is the cost vector of size N. In this case the transformer expects a (M,N+2) matrix, where the last two columns contain the minimum and maximum values (in this order).
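As a toy illustration of that layout (the numbers below are invented for this sketch, not taken from the diet data used later): two range constraints over three variables give a (2, 5) matrix whose last two columns hold the row bounds. In practice this matrix is wrapped in a Spark DataFrame before being passed to the transformer.
```python
import numpy as np

# 1 <= 1*x1 + 2*x2 + 0*x3 <= 10
# 0 <= 0*x1 + 1*x2 + 3*x3 <= 5
X_toy = np.array([[1.0, 2.0, 0.0, 1.0, 10.0],
                  [0.0, 1.0, 3.0, 0.0, 5.0]])
y_toy = [1.0, 1.0, 1.0]  # one cost coefficient per variable
```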
End of explanation
"""
# the baseline diet data as Python lists of tuples.
FOODS = [
("Roasted Chicken", 0.84, 0, 10),
("Spaghetti W/ Sauce", 0.78, 0, 10),
("Tomato,Red,Ripe,Raw", 0.27, 0, 10),
("Apple,Raw,W/Skin", .24, 0, 10),
("Grapes", 0.32, 0, 10),
("Chocolate Chip Cookies", 0.03, 0, 10),
("Lowfat Milk", 0.23, 0, 10),
("Raisin Brn", 0.34, 0, 10),
("Hotdog", 0.31, 0, 10)
]
NUTRIENTS = [
("Calories", 2000, 2500),
("Calcium", 800, 1600),
("Iron", 10, 30),
("Vit_A", 5000, 50000),
("Dietary_Fiber", 25, 100),
("Carbohydrates", 0, 300),
("Protein", 50, 100)
]
FOOD_NUTRIENTS = [
# ("Roasted Chicken", 277.4, 21.9, 1.8, 77.4, 0.0, 0.0, 42.2),
("Roasted Chicken", 277.4, 21.9, 1.8, np.nan, 0.0, 0.0, 42.2), # Set a value as missing (NaN)
("Spaghetti W/ Sauce", 358.2, 80.2, 2.3, 3055.2, 11.6, 58.3, 8.2),
("Tomato,Red,Ripe,Raw", 25.8, 6.2, 0.6, 766.3, 1.4, 5.7, 1.0),
("Apple,Raw,W/Skin", 81.4, 9.7, 0.2, 73.1, 3.7, 21.0, 0.3),
("Grapes", 15.1, 3.4, 0.1, 24.0, 0.2, 4.1, 0.2),
("Chocolate Chip Cookies", 78.1, 6.2, 0.4, 101.8, 0.0, 9.3, 0.9),
("Lowfat Milk", 121.2, 296.7, 0.1, 500.2, 0.0, 11.7, 8.1),
("Raisin Brn", 115.1, 12.9, 16.8, 1250.2, 4.0, 27.9, 4.0),
("Hotdog", 242.1, 23.5, 2.3, 0.0, 0.0, 18.0, 10.4)
]
nb_foods = len(FOODS)
nb_nutrients = len(NUTRIENTS)
print('#foods={0}'.format(nb_foods))
print('#nutrients={0}'.format(nb_nutrients))
assert nb_foods == len(FOOD_NUTRIENTS)
"""
Explanation: In the next section we illustrate the range transformer with the Diet Problem, from DOcplex distributed examples.
The Diet Problem
The diet problem is delivered in the DOcplex examples.
Given a breakdown matrix of various foods in elementary nutrients, plus limitations on quantities for foods and nutrients, and food costs, the goal is to find the optimal quantity of each food for a balanced diet.
The FOOD_NUTRIENTS data intentionally contains a missing value ($np.nan$) to illustrate the use of a pipeline involving a data cleansing stage.
End of explanation
"""
try:
import findspark
findspark.init()
except ImportError:
# Ignore exception: the 'findspark' module is required when executing Spark in a Windows environment
pass
import pyspark # Only run after findspark.init() (if running in a Windows environment)
from pyspark.sql import SparkSession
spark = SparkSession.builder.getOrCreate()
if spark.version < '2.2':
    raise RuntimeError("This notebook requires at least version '2.2' of PySpark")
"""
Explanation: Creating a Spark session
End of explanation
"""
mat_fn = np.matrix([FOOD_NUTRIENTS[f][1:] for f in range(nb_foods)])
print('The food-nutrient matrix has shape: {0}'.format(mat_fn.shape))
"""
Explanation: Using the transformer with a Spark dataframe
In this section we show how to use a transformer with data stored in a Spark dataframe.
Prepare the data as a numpy matrix
In this section we build a numpy matrix to be passed to the transformer.
First, we extract the food to nutrient matrix by stripping the names.
End of explanation
"""
nutrient_mins = [NUTRIENTS[n][1] for n in range(nb_nutrients)]
nutrient_maxs = [NUTRIENTS[n][2] for n in range(nb_nutrients)]
food_names ,food_costs, food_mins, food_maxs = map(list, zip(*FOODS))
"""
Explanation: Then we extract the two vectors of min/max for each nutrient. Each vector has nb_nutrients elements.
We also break the FOODS collection of tuples into columns
End of explanation
"""
# step 1. add two lines for nutrient mins, maxs
nf2 = np.append(mat_fn, np.matrix([nutrient_mins, nutrient_maxs]), axis=0)
mat_nf = nf2.transpose()
mat_nf.shape
"""
Explanation: We are now ready to prepare the transformer matrix. This matrix has shape (7, 11) as we
have 7 nutrients and 9 foods, plus the additional min and max columns
End of explanation
"""
from pyspark.sql import SQLContext
sc = spark.sparkContext
sqlContext = SQLContext(sc)
columns = food_names + ['min', 'max']
food_nutrients_df = sqlContext.createDataFrame(mat_nf.tolist(), columns)
"""
Explanation: Populate a Spark dataframe with the matrix data
In this section we build a Spark dataframe matrix to be passed to the transformer.
Using a Spark dataframe will also allow us to chain multiple transformers in a pipeline.
End of explanation
"""
food_nutrients_df.printSchema()
food_nutrients_df.show()
"""
Explanation: Let's display the dataframe schema and content
End of explanation
"""
from docplex.mp.sparktrans.transformers import CplexRangeTransformer
from pyspark.ml.feature import Imputer
from pyspark.ml import Pipeline
from pyspark.sql.functions import *
# Create a data cleansing stage to replace missing values with column mean value
data_cleansing = Imputer(inputCols=food_names, outputCols=food_names)
# Create an optimization stage to calculate the optimal quantity for each food for a balanced diet.
cplexSolve = CplexRangeTransformer(minCol='min', maxCol='max', ubs=food_maxs)
# Configure an ML pipeline, which chains these two stages
pipeline = Pipeline(stages=[data_cleansing, cplexSolve])
# Fit the pipeline: during this step, the data cleansing estimator is configured
model = pipeline.fit(food_nutrients_df)
# Make evaluation on input data. One can still specify stage-specific parameters when invoking 'transform' on the PipelineModel
diet_df = model.transform(food_nutrients_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'min'})
diet_df.orderBy(desc("value")).show()
"""
Explanation: Chaining a data cleansing stage with the $CplexRangeTransformer$ in a Pipeline
To use the transformer, create an instance and pass the following parameters to the transform method
- the X matrix of size(M, N+2) containing coefficients for N column variables plus two addition column for range mins and maxs.
- the Y cost vector (using "y" parameter id)
- whether one wants to solve a minimization (min) or maximization (max) problem (using "sense" parameter id)
In addition, some data elements that can't be encoded in the matrix itself should be passed as keyword arguments:
ubs denotes the upper bound for the column variables that are created. The expected size of this scalar vector is N (when matrix has size (M,N+2))
minCol and maxCol are the names of the columns corresponding to the constraints min and max range in the X matrix
Since the input data contains some missing values, we'll actually define a pipeline that will:
- first, perform a data cleansing stage: here missing values are replaced by the column mean value
- then, perform the optimization stage: the Cplex transformer will be invoked using the output dataframe from the cleansing stage as the constraint matrix.
End of explanation
"""
data_cleansing.fit(food_nutrients_df).transform(food_nutrients_df).show()
"""
Explanation: Just for checking purpose, let's have a look at the Spark dataframe at the output of the cleansing stage.<br>
This is the dataframe that is fed to the $CplexRangeTransformer$ in the pipeline.<br>
One can check that the first entry in the fourth row has been set to the average of the other values in the same column ($57.2167$).
End of explanation
"""
food_nutrients_LP_df = food_nutrients_df.select([item for item in food_nutrients_df.columns if item not in ['min']])
food_nutrients_LP_df.show()
from docplex.mp.sparktrans.transformers import CplexTransformer
# Create a data cleansing stage to replace missing values with column mean value
data_cleansing = Imputer(inputCols=food_names, outputCols=food_names)
# Create an optimization stage to calculate the optimal quantity for each food for a balanced diet.
# Here, let's use the CplexTransformer by specifying only a maximum amount for each nutrient.
cplexSolve = CplexTransformer(rhsCol='max', ubs=food_maxs)
# Configure an ML pipeline, which chains these two stages
pipeline = Pipeline(stages=[data_cleansing, cplexSolve])
# Fit the pipeline: during this step, the data cleansing estimator is configured
model = pipeline.fit(food_nutrients_LP_df)
# Make evaluation on input data. One can still specify stage-specific parameters when invoking 'transform' on the PipelineModel
# Since there is no lower range for decision variables, let's maximize cost instead! (otherwise, the result is all 0's)
diet_max_cost_df = model.transform(food_nutrients_LP_df, params={cplexSolve.y: food_costs, cplexSolve.sense: 'max'})
diet_max_cost_df.orderBy(desc("value")).show()
%matplotlib inline
import matplotlib.pyplot as plt
def plot_radar_chart(labels, stats, **kwargs):
angles=np.linspace(0, 2*np.pi, len(labels), endpoint=False)
# close the plot
stats = np.concatenate((stats, [stats[0]]))
angles = np.concatenate((angles, [angles[0]]))
fig = plt.figure()
ax = fig.add_subplot(111, polar=True)
ax.plot(angles, stats, 'o-', linewidth=2, **kwargs)
ax.fill(angles, stats, alpha=0.30, **kwargs)
ax.set_thetagrids(angles * 180/np.pi, labels)
#ax.set_title([df.loc[386,"Name"]])
ax.grid(True)
diet = diet_df.toPandas()
plot_radar_chart(labels=diet['name'], stats=diet['value'], color='r')
diet_max_cost = diet_max_cost_df.toPandas()
plot_radar_chart(labels=diet_max_cost['name'], stats=diet_max_cost['value'], color='r')
"""
Explanation: Example with CplexTransformer
To illustrate the usage of the $CplexTransformer$, let's remove the constraint on the minimum amount for nutrients, and reformulate the problem as a cost maximization.
First, let's define a new dataframe for the constraints matrix by removing the min column from the food_nutrients_df dataframe so that it is a well-formed input matrix for the $CplexTransformer$:
End of explanation
"""
|
GoogleCloudPlatform/training-data-analyst
|
self-paced-labs/vertex-ai/vertex-pipelines/tfx/lab_exercise.ipynb
|
apache-2.0
|
GOOGLE_CLOUD_PROJECT_ID = !(gcloud config get-value core/project)
GOOGLE_CLOUD_PROJECT_ID = GOOGLE_CLOUD_PROJECT_ID[0]
GOOGLE_CLOUD_REGION = 'us-central1'
BQ_DATASET_NAME = 'chicago_taxifare_tips'
BQ_TABLE_NAME = 'chicago_taxi_tips_ml'
BQ_LOCATION = 'US'
BQ_URI = f"bq://{GOOGLE_CLOUD_PROJECT_ID}.{BQ_DATASET_NAME}.{BQ_TABLE_NAME}"
DATASET_DISPLAY_NAME = 'chicago-taxifare-tips'
MODEL_DISPLAY_NAME = f'{DATASET_DISPLAY_NAME}-classifier'
PIPELINE_NAME = f'{MODEL_DISPLAY_NAME}-train-pipeline'
"""
Explanation: Lab: Chicago taxifare tip prediction on Google Cloud Vertex Pipelines using the TFX SDK
Learning objectives
Define a machine learning pipeline to predict taxi fare tips using the TFX SDK.
Compile and run a TFX pipeline on Google Cloud's Vertex Pipelines.
Dataset
The Chicago Taxi Trips dataset is one of the public datasets hosted with BigQuery, which includes taxi trips from 2013 to the present, reported to the City of Chicago in its role as a regulatory agency. The task is to predict whether a given trip will result in a tip > 20%.
Setup
Define constants
End of explanation
"""
GCS_LOCATION = f"gs://{GOOGLE_CLOUD_PROJECT_ID}-tfx"
!gsutil mb -l $GOOGLE_CLOUD_REGION $GCS_LOCATION
"""
Explanation: Create Google Cloud Storage bucket for storing Vertex Pipeline artifacts
End of explanation
"""
import os
import tensorflow as tf
import tfx
import kfp
from google.cloud import bigquery
from google.cloud import aiplatform as vertex_ai
print(f"tensorflow: {tf.__version__}")
print(f"tfx: {tfx.__version__}")
print(f"kfp: {kfp.__version__}")
print(f"Google Cloud Vertex AI Python SDK: {vertex_ai.__version__}")
"""
Explanation: Import libraries
End of explanation
"""
!bq --location=$BQ_LOCATION mk -d \
$GOOGLE_CLOUD_PROJECT_ID:$BQ_DATASET_NAME
"""
Explanation: Create BigQuery dataset
End of explanation
"""
SAMPLE_SIZE = 20000
YEAR = 2020
sql_script = '''
CREATE OR REPLACE TABLE `@PROJECT_ID.@DATASET.@TABLE`
AS (
WITH
taxitrips AS (
SELECT
trip_start_timestamp,
trip_seconds,
trip_miles,
payment_type,
pickup_longitude,
pickup_latitude,
dropoff_longitude,
dropoff_latitude,
tips,
fare
FROM
`bigquery-public-data.chicago_taxi_trips.taxi_trips`
WHERE 1=1
AND pickup_longitude IS NOT NULL
AND pickup_latitude IS NOT NULL
AND dropoff_longitude IS NOT NULL
AND dropoff_latitude IS NOT NULL
AND trip_miles > 0
AND trip_seconds > 0
AND fare > 0
AND EXTRACT(YEAR FROM trip_start_timestamp) = @YEAR
)
SELECT
trip_start_timestamp,
EXTRACT(MONTH from trip_start_timestamp) as trip_month,
EXTRACT(DAY from trip_start_timestamp) as trip_day,
EXTRACT(DAYOFWEEK from trip_start_timestamp) as trip_day_of_week,
EXTRACT(HOUR from trip_start_timestamp) as trip_hour,
trip_seconds,
trip_miles,
payment_type,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(pickup_longitude, pickup_latitude), 0.1)
) AS pickup_grid,
ST_AsText(
ST_SnapToGrid(ST_GeogPoint(dropoff_longitude, dropoff_latitude), 0.1)
) AS dropoff_grid,
ST_Distance(
ST_GeogPoint(pickup_longitude, pickup_latitude),
ST_GeogPoint(dropoff_longitude, dropoff_latitude)
) AS euclidean,
CONCAT(
ST_AsText(ST_SnapToGrid(ST_GeogPoint(pickup_longitude,
pickup_latitude), 0.1)),
ST_AsText(ST_SnapToGrid(ST_GeogPoint(dropoff_longitude,
dropoff_latitude), 0.1))
) AS loc_cross,
IF((tips/fare >= 0.2), 1, 0) AS tip_bin,
IF(ABS(MOD(FARM_FINGERPRINT(STRING(trip_start_timestamp)), 10)) < 9, 'UNASSIGNED', 'TEST') AS ml_use
FROM
taxitrips
LIMIT @LIMIT
)
'''
sql_script = sql_script.replace(
'@PROJECT_ID', GOOGLE_CLOUD_PROJECT_ID).replace(
'@DATASET', BQ_DATASET_NAME).replace(
'@TABLE', BQ_TABLE_NAME).replace(
'@YEAR', str(YEAR)).replace(
'@LIMIT', str(SAMPLE_SIZE))
bq_client = bigquery.Client(project=GOOGLE_CLOUD_PROJECT_ID, location=BQ_LOCATION)
job = bq_client.query(sql_script)
_ = job.result()
%%bigquery
SELECT ml_use, COUNT(*)
FROM chicago_taxifare_tips.chicago_taxi_tips_ml
GROUP BY ml_use
"""
Explanation: Create BigQuery dataset for ML classification task
End of explanation
"""
vertex_ai.init(project=GOOGLE_CLOUD_PROJECT_ID, location=GOOGLE_CLOUD_REGION)
"""
Explanation: Create a Vertex AI managed dataset resource for pipeline dataset lineage tracking
Initialize Vertex AI Python SDK
End of explanation
"""
tabular_dataset = vertex_ai.TabularDataset.create(display_name=f"{DATASET_DISPLAY_NAME}", bq_source=f"{BQ_URI}")
tabular_dataset.gca_resource
"""
Explanation: Create Vertex managed tabular dataset
End of explanation
"""
PIPELINE_DIR="tfx_taxifare_tips"
"""
Explanation: Create a TFX pipeline
End of explanation
"""
%%writefile {PIPELINE_DIR}/model_training/features.py
%%writefile {PIPELINE_DIR}/model_training/preprocessing.py
%%writefile {PIPELINE_DIR}/model_training/model.py
"""
Explanation: Write model code
End of explanation
"""
%%writefile {PIPELINE_DIR}/pipeline.py
%%writefile {PIPELINE_DIR}/runner.py
"""
Explanation: Write pipeline definition with the TFX SDK
End of explanation
"""
ARTIFACT_REGISTRY="tfx-taxifare-tips"
# TODO: create a Docker Artifact Registry using the gcloud CLI.
# Documentation link: https://cloud.google.com/sdk/gcloud/reference/artifacts/repositories/create
!gcloud artifacts repositories create {ARTIFACT_REGISTRY} \
--repository-format=docker \
--location={GOOGLE_CLOUD_REGION} \
--description="Artifact registry for TFX pipeline images for Chicago taxifare prediction."
IMAGE_NAME="tfx-taxifare-tips"
IMAGE_TAG="latest"
IMAGE_URI=f"{GOOGLE_CLOUD_REGION}-docker.pkg.dev/{GOOGLE_CLOUD_PROJECT_ID}/{ARTIFACT_REGISTRY}/{IMAGE_NAME}:{IMAGE_TAG}"
"""
Explanation: Run your TFX pipeline on Vertex Pipelines
Create a Artifact Registry on Google Cloud for your pipeline container image
End of explanation
"""
os.environ["DATASET_DISPLAY_NAME"] = DATASET_DISPLAY_NAME
os.environ["MODEL_DISPLAY_NAME"] = MODEL_DISPLAY_NAME
os.environ["PIPELINE_NAME"] = PIPELINE_NAME
os.environ["GOOGLE_CLOUD_PROJECT_ID"] = GOOGLE_CLOUD_PROJECT_ID
os.environ["GOOGLE_CLOUD_REGION"] = GOOGLE_CLOUD_REGION
os.environ["GCS_LOCATION"] = GCS_LOCATION
os.environ["TRAIN_LIMIT"] = "5000"
os.environ["TEST_LIMIT"] = "1000"
os.environ["BEAM_RUNNER"] = "DataflowRunner"
os.environ["TRAINING_RUNNER"] = "vertex"
os.environ["TFX_IMAGE_URI"] = IMAGE_URI
os.environ["ENABLE_CACHE"] = "1"
from tfx_taxifare_tips.tfx_pipeline import config
import importlib
importlib.reload(config)
for key, value in config.__dict__.items():
if key.isupper(): print(f'{key}: {value}')
"""
Explanation: Set the pipeline configurations for the Vertex AI run
End of explanation
"""
!echo $TFX_IMAGE_URI
# !docker build . -t test-image
!gcloud builds submit --tag $TFX_IMAGE_URI . --timeout=20m --machine-type=e2-highcpu-8
"""
Explanation: Build the TFX pipeline container image
End of explanation
"""
import tfx_taxifare_tips
# importlib.reload(tfx_taxifare_tips)
PIPELINE_DEFINITION_FILE = f'{config.PIPELINE_NAME}.json'
from tfx_taxifare_tips.tfx_pipeline import pipeline_runner
pipeline_definition = pipeline_runner.compile_training_pipeline(PIPELINE_DEFINITION_FILE)
pipeline_job = vertex_ai.pipeline_jobs.PipelineJob(
display_name=config.PIPELINE_NAME,
template_path=PIPELINE_DEFINITION_FILE,
pipeline_root=os.path.join(config.ARTIFACT_STORE_URI,config.PIPELINE_NAME)
)
pipeline_job.run(sync=False)
"""
Explanation: Compile the TFX pipeline
End of explanation
"""
pipeline_df = vertex_ai.get_pipeline_df(PIPELINE_NAME)
pipeline_df = pipeline_df[pipeline_df.pipeline_name == PIPELINE_NAME]
pipeline_df.T
"""
Explanation: Extracting pipeline run metadata
End of explanation
"""
"""Pipeline definition code."""
import os
import sys
import logging
from typing import Text
import tensorflow_model_analysis as tfma
from tfx.proto import example_gen_pb2, transform_pb2, pusher_pb2
from tfx.v1.types.standard_artifacts import Model, ModelBlessing, Schema
from tfx.v1.extensions.google_cloud_big_query import BigQueryExampleGen
from tfx.v1.extensions.google_cloud_ai_platform import Trainer as VertexTrainer
from tfx.v1.dsl import Pipeline, Importer, Resolver, Channel
from tfx.v1.dsl.experimental import LatestBlessedModelStrategy
from tfx.v1.components import (
StatisticsGen,
ExampleValidator,
Transform,
Evaluator,
Pusher,
)
from tfx_taxifare_tips.tfx_pipeline import config
from tfx_taxifare_tips.model_training import features, bq_datasource_utils
import os, time
from tfx.orchestration.experimental.interactive.interactive_context import (
InteractiveContext,
)
ARTIFACT_STORE = os.path.join(os.sep, "home", "jupyter", "artifact-store")
SERVING_MODEL_DIR = os.path.join(os.sep, "home", "jupyter", "serving_model")
DATA_ROOT = "../../../data"
PIPELINE_NAME = "tfx-covertype-classifier"
PIPELINE_ROOT = os.path.join(
ARTIFACT_STORE, PIPELINE_NAME, time.strftime("%Y%m%d_%H%M%S")
)
os.makedirs(PIPELINE_ROOT, exist_ok=True)
context = InteractiveContext(
pipeline_name=PIPELINE_NAME,
pipeline_root=PIPELINE_ROOT,
metadata_connection_config=None,
)
import_schema = Importer(
source_uri="tfx_taxifare_tips/raw_schema",
artifact_type=Schema,
).with_id("SchemaImporter")
context.run(import_schema)
import_schema.outputs["result"].get()[0].uri
examplevalidator = ExampleValidator(
statistics=statisticsgen.outputs["statistics"],
schema=import_schema.outputs["result"],
).with_id("ExampleValidator")
"""
Explanation: Upload trained model from Google Cloud Storage to Vertex AI
End of explanation
"""
|
kimkipyo/dss_git_kkp
|
통계, 머신러닝 복습/160621화_18일차_QDALDA QuandraticLinear Discriminant Analysis/1.QDA and LDA.ipynb
|
mit
|
N = 100
np.random.seed(0)
X1 = sp.stats.multivariate_normal([ 0, 0], [[0.7, 0],[0, 0.7]]).rvs(100)
X2 = sp.stats.multivariate_normal([ 1, 1], [[0.8, 0.2],[0.2, 0.8]]).rvs(100)
X3 = sp.stats.multivariate_normal([-1, 1], [[0.8, 0.2],[0.2, 0.8]]).rvs(100)
y1 = np.zeros(N)
y2 = np.ones(N)
y3 = 2*np.ones(N)
X = np.vstack([X1, X2, X3])
y = np.hstack([y1, y2, y3])
len(X1), X1.shape
plt.scatter(X1[:, 0], X1[:, 1], alpha=0.8, s=50, color='r', label='class1')
plt.scatter(X2[:, 0], X2[:, 1], alpha=0.8, s=50, color='g', label='class2')
plt.scatter(X3[:, 0], X3[:, 1], alpha=0.8, s=50, color='b', label='class3')
sns.kdeplot(X1[:, 0], X1[:, 1], alpha=0.3, cmap=mpl.cm.hot)
sns.kdeplot(X2[:, 0], X2[:, 1], alpha=0.3, cmap=mpl.cm.summer)
sns.kdeplot(X3[:, 0], X3[:, 1], alpha=0.3, cmap=mpl.cm.cool)
plt.xlim(-5, 5)
plt.ylim(-4, 5)
plt.legend()
plt.show()
"""
Explanation: QDA and LDA
QDA
QDA (quadratic discriminant analysis) assumes that, conditional on each class of Y, the independent variable X follows a multivariate Gaussian normal distribution:
$$
p(x \mid y = k) = \dfrac{1}{(2\pi)^{D/2} |\Sigma_k|^{1/2}} \exp \left( -\dfrac{1}{2} (x-\mu_k)^T \Sigma_k^{-1} (x-\mu_k) \right)
$$
Once these class-conditional distributions are known, the conditional probability of each class of Y given X follows from Bayes' rule:
$$
P(y=k \mid x) = \dfrac{p(x \mid y = k)P(y=k)}{p(x)} = \dfrac{p(x \mid y = k)P(y=k)}{\sum_l p(x \mid y = l)P(y=l) }
$$
For example, suppose Y has three classes 1, 2 and 3, and that within each class X has the following mean vectors and covariance matrices:
$$
\mu_1 = \begin{bmatrix} 0 \\ 0 \end{bmatrix}, \;\;
\mu_2 = \begin{bmatrix} 1 \\ 1 \end{bmatrix}, \;\;
\mu_3 = \begin{bmatrix}-1 \\ 1 \end{bmatrix}
$$
$$
\Sigma_1 = \begin{bmatrix} 0.7 & 0 \\ 0 & 0.7 \end{bmatrix}, \;\;
\Sigma_2 = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}, \;\;
\Sigma_3 = \begin{bmatrix} 0.8 & 0.2 \\ 0.2 & 0.8 \end{bmatrix}
$$
The prior probabilities of Y are all equal:
$$
P(Y=1) = P(Y=2) = P(Y=3) = \dfrac{1}{3}
$$
Note that this time the two features are correlated within classes 2 and 3 (nonzero off-diagonal covariance terms).
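Before fitting any estimator, this posterior can be evaluated by hand for a single point. The sketch below is not part of the original notebook; it re-creates the three class-conditional densities with scipy.stats, and the names rv1, rv2, rv3 and the test point x0 are chosen here purely for illustration.
```python
import numpy as np
import scipy.stats as st

# the three class-conditional densities given above
rv1 = st.multivariate_normal([0, 0], [[0.7, 0.0], [0.0, 0.7]])
rv2 = st.multivariate_normal([1, 1], [[0.8, 0.2], [0.2, 0.8]])
rv3 = st.multivariate_normal([-1, 1], [[0.8, 0.2], [0.2, 0.8]])

x0 = np.array([0.5, 0.5])                       # an arbitrary test point
likelihoods = np.array([rv.pdf(x0) for rv in (rv1, rv2, rv3)])
priors = np.array([1 / 3, 1 / 3, 1 / 3])
posterior = likelihoods * priors / np.sum(likelihoods * priors)
print(posterior)                                # P(y=k | x0) for k = 1, 2, 3
```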
End of explanation
"""
from sklearn.naive_bayes import GaussianNB
model = GaussianNB().fit(X,y)
from sklearn.metrics import confusion_matrix, classification_report
confusion_matrix(y, model.predict(X))
print(classification_report(y, model.predict(X)))
"""
Explanation: First, try a Gaussian naive Bayes model as a baseline
End of explanation
"""
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
qda = QuadraticDiscriminantAnalysis(store_covariances=True).fit(X, y)
# store_covariances=True keeps the per-class covariance matrices; without it the covariances_ values below are not available
qda.means_
qda.covariances_[0]
qda.covariances_[1]
qda.covariances_[2]
confusion_matrix(y, qda.predict(X))
print(classification_report(y, qda.predict(X)))
xmin, xmax = -5, 5
ymin, ymax = -4, 5
XX, YY = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000), np.arange(ymin, ymax, (ymax-ymin)/1000))
ZZ = np.reshape(qda.predict(np.array([XX.ravel(), YY.ravel()]).T), XX.shape)  # the grid must be flattened to 1-D before calling predict (ravel = flatten)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set3"))  # reshape restores the grid shape so contourf can draw the decision regions
plt.contourf(XX, YY, ZZ, cmap=cmap, alpha=0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap=cmap)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.show()
"""
Explanation: Now apply QDA
End of explanation
"""
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
lda = LinearDiscriminantAnalysis(n_components=3, solver="svd", store_covariance=True).fit(X, y)
lda.means_
lda.covariance_
confusion_matrix(y, lda.predict(X))
print(classification_report(y, lda.predict(X)))
xmin, xmax = -5, 5
ymin, ymax = -4, 5
XX, YY = np.meshgrid(np.arange(xmin, xmax, (xmax-xmin)/1000), np.arange(ymin, ymax, (ymax-ymin)/1000))
ZZ = np.reshape(lda.predict(np.array([XX.ravel(), YY.ravel()]).T), XX.shape)
cmap = mpl.colors.ListedColormap(sns.color_palette("Set3"))
plt.contourf(XX, YY, ZZ, cmap=cmap, alpha=0.5)
plt.scatter(X[:, 0], X[:, 1], c=y, s=50, cmap=cmap)
plt.xlim(xmin, xmax)
plt.ylim(ymin, ymax)
plt.show()
"""
Explanation: LDA
LDA (linear discriminant analysis) assumes that the conditional distribution of X given each class of Y is again multivariate Gaussian, but with a covariance matrix shared by all classes, i.e.
$$ \Sigma_k = \Sigma \;\;\; \text{ for all } k $$
Under this assumption the conditional distribution can be rearranged as follows (writing $C_0$ for the normalising constant, which does not depend on $k$):
$$
\begin{eqnarray}
\log p(x \mid y = k)
&=& \log \dfrac{1}{(2\pi)^{D/2} |\Sigma|^{1/2}} - \dfrac{1}{2} (x-\mu_k)^T \Sigma^{-1} (x-\mu_k) \\
&=& C_0 - \dfrac{1}{2} \left( x^T\Sigma^{-1}x - 2\mu_k^T \Sigma^{-1}x + \mu_k^T \Sigma^{-1}\mu_k \right) \\
\end{eqnarray}
$$
Collecting every term that does not depend on $k$ into a factor $C(x)$ gives
$$
\begin{eqnarray}
p(x \mid y = k)
&=& C(x)\exp(w_k^Tx + w_{k0}) \\
\end{eqnarray}
$$
$$
\begin{eqnarray}
P(y=k \mid x)
&=& \dfrac{p(x \mid y = k)P(y=k)}{\sum_l p(x \mid y = l)P(y=l) } \\
&=& \dfrac{C(x)\exp(w_k^Tx + w_{k0}) P(y=k)}{\sum_l C(x)\exp(w_l^Tx + w_{l0})P(y=l) } \\
&=& \dfrac{P_k \exp(w_k^Tx + w_{k0}) }{\sum_l P_l \exp(w_l^Tx + w_{l0})} \\
\end{eqnarray}
$$
$$
\log P(y=k \mid x) = \log P_k + w_k^Tx + w_{k0} - \log \sum_l P_l \exp(w_l^Tx + w_{l0})
$$
The part of this expression that depends on the class $k$ is linear in $x$, so the decision boundaries between classes are linear.
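For reference (this closed form is not spelled out in the notes above), matching the coefficients in the exponent gives
$$
w_k = \Sigma^{-1}\mu_k, \qquad w_{k0} = -\dfrac{1}{2}\mu_k^T \Sigma^{-1} \mu_k,
$$
so the boundary between two classes $k$ and $l$, where the two posteriors are equal, is the hyperplane $(w_k - w_l)^T x + (w_{k0} - w_{l0}) + \log \dfrac{P_k}{P_l} = 0$.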
End of explanation
"""
|
PyPSA/PyPSA
|
examples/notebooks/simple-electricity-market-examples.ipynb
|
mit
|
import pypsa, numpy as np
# marginal costs in EUR/MWh
marginal_costs = {"Wind": 0, "Hydro": 0, "Coal": 30, "Gas": 60, "Oil": 80}
# power plant capacities (nominal powers in MW) in each country (not necessarily realistic)
power_plant_p_nom = {
"South Africa": {"Coal": 35000, "Wind": 3000, "Gas": 8000, "Oil": 2000},
"Mozambique": {
"Hydro": 1200,
},
"Swaziland": {
"Hydro": 600,
},
}
# transmission capacities in MW (not necessarily realistic)
transmission = {
"South Africa": {"Mozambique": 500, "Swaziland": 250},
"Mozambique": {"Swaziland": 100},
}
# country electrical loads in MW (not necessarily realistic)
loads = {"South Africa": 42000, "Mozambique": 650, "Swaziland": 250}
"""
Explanation: Simple electricity market examples
This example gradually builds up more and more complicated energy-only electricity markets in PyPSA, starting from a single bidding zone, going up to multiple bidding zones connected with transmission (NTCs) along with variable renewables and storage.
Preliminaries
Here libraries are imported and data is defined.
End of explanation
"""
country = "South Africa"
network = pypsa.Network()
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# Run optimisation to determine market dispatch
network.lopf()
# print the load active power (P) consumption
network.loads_t.p
# print the generator active power (P) dispatch
network.generators_t.p
# print the clearing price (corresponding to gas)
network.buses_t.marginal_price
"""
Explanation: Single bidding zone with fixed load, one period
In this example we consider a single market bidding zone, South Africa.
The inelastic load has essentially infinite marginal utility (or higher than the marginal cost of any generator).
End of explanation
"""
network = pypsa.Network()
countries = ["Mozambique", "South Africa"]
for country in countries:
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add transmission as controllable Link
if country not in transmission:
continue
for other_country in countries:
if other_country not in transmission[country]:
continue
# NB: Link is by default unidirectional, so have to set p_min_pu = -1
# to allow bidirectional (i.e. also negative) flow
network.add(
"Link",
"{} - {} link".format(country, other_country),
bus0=country,
bus1=other_country,
p_nom=transmission[country][other_country],
p_min_pu=-1,
)
network.lopf()
network.loads_t.p
network.generators_t.p
network.links_t.p0
# print the clearing price (corresponding to water in Mozambique and gas in SA)
network.buses_t.marginal_price
# link shadow prices
network.links_t.mu_lower
"""
Explanation: Two bidding zones connected by transmission, one period
In this example we have bidirectional transmission capacity between two bidding zones. The power transfer is treated as controllable (like an A/NTC (Available/Net Transfer Capacity) or HVDC line). Note that in the physical grid, power flows passively according to the network impedances.
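For contrast with the controllable Link used here, a passive AC line with an impedance could be sketched with PyPSA's Line component. This is not used in the example; the reactance below is an arbitrary illustrative value, not taken from the data above.
```python
# passive transmission: power flow follows the network impedances instead of being dispatched
network.add(
    "Line",
    "South Africa - Mozambique line",
    bus0="South Africa",
    bus1="Mozambique",
    x=0.1,      # per-unit series reactance (illustrative value only)
    s_nom=500,  # thermal rating in MW
)
```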
End of explanation
"""
network = pypsa.Network()
countries = ["Swaziland", "Mozambique", "South Africa"]
for country in countries:
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add transmission as controllable Link
if country not in transmission:
continue
for other_country in countries:
if other_country not in transmission[country]:
continue
# NB: Link is by default unidirectional, so have to set p_min_pu = -1
# to allow bidirectional (i.e. also negative) flow
network.add(
"Link",
"{} - {} link".format(country, other_country),
bus0=country,
bus1=other_country,
p_nom=transmission[country][other_country],
p_min_pu=-1,
)
network.lopf()
network.loads_t.p
network.generators_t.p
network.links_t.p0
# print the clearing price (corresponding to hydro in S and M, and gas in SA)
network.buses_t.marginal_price
# link shadow prices
network.links_t.mu_lower
"""
Explanation: Three bidding zones connected by transmission, one period
In this example we have bidirectional transmission capacity between three bidding zones. The power transfer is treated as controllable (like an A/NTC (Available/Net Transfer Capacity) or HVDC line). Note that in the physical grid, power flows passively according to the network impedances.
End of explanation
"""
country = "South Africa"
network = pypsa.Network()
network.add("Bus", country)
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
)
# standard high marginal utility consumers
network.add("Load", "{} load".format(country), bus=country, p_set=loads[country])
# add an industrial load as a dummy negative-dispatch generator with marginal utility of 70 EUR/MWh for 8000 MW
network.add(
"Generator",
"{} industrial load".format(country),
bus=country,
p_max_pu=0,
p_min_pu=-1,
p_nom=8000,
marginal_cost=70,
)
network.lopf()
network.loads_t.p
# NB only half of industrial load is served, because this maxes out
# Gas. Oil is too expensive with a marginal cost of 80 EUR/MWh
network.generators_t.p
network.buses_t.marginal_price
"""
Explanation: Single bidding zone with price-sensitive industrial load, one period
In this example we consider a single market bidding zone, South Africa.
Now there is a large industrial load with a marginal utility which is low enough to interact with the generation marginal cost.
End of explanation
"""
country = "South Africa"
network = pypsa.Network()
# snapshots labelled by [0,1,2,3]
network.set_snapshots(range(4))
network.add("Bus", country)
# p_max_pu is variable for wind
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
p_max_pu=([0.3, 0.6, 0.4, 0.5] if tech == "Wind" else 1),
)
# load which varies over the snapshots
network.add(
"Load",
"{} load".format(country),
bus=country,
p_set=loads[country] + np.array([0, 1000, 3000, 4000]),
)
# specify that we consider all snapshots
network.lopf(network.snapshots)
network.loads_t.p
network.generators_t.p
network.buses_t.marginal_price
"""
Explanation: Single bidding zone with fixed load, several periods
In this example we consider a single market bidding zone, South Africa.
We consider multiple time periods (labelled [0,1,2,3]) to represent variable wind generation.
End of explanation
"""
country = "South Africa"
network = pypsa.Network()
# snapshots labelled by [0,1,2,3]
network.set_snapshots(range(4))
network.add("Bus", country)
# p_max_pu is variable for wind
for tech in power_plant_p_nom[country]:
network.add(
"Generator",
"{} {}".format(country, tech),
bus=country,
p_nom=power_plant_p_nom[country][tech],
marginal_cost=marginal_costs[tech],
p_max_pu=([0.3, 0.6, 0.4, 0.5] if tech == "Wind" else 1),
)
# load which varies over the snapshots
network.add(
"Load",
"{} load".format(country),
bus=country,
p_set=loads[country] + np.array([0, 1000, 3000, 4000]),
)
# storage unit to do price arbitrage
network.add(
"StorageUnit",
"{} pumped hydro".format(country),
bus=country,
p_nom=1000,
max_hours=6, # energy storage in terms of hours at full power
)
network.lopf(network.snapshots)
network.loads_t.p
network.generators_t.p
network.storage_units_t.p
network.storage_units_t.state_of_charge
network.buses_t.marginal_price
"""
Explanation: Single bidding zone with fixed load and storage, several periods
In this example we consider a single market bidding zone, South Africa.
We consider multiple time periods (labelled [0,1,2,3]) to represent variable wind generation. Storage is allowed to do price arbitrage to reduce oil consumption.
End of explanation
"""
|
w4zir/ml17s
|
assignments/.ipynb_checkpoints/assignment02-logistic-regression-and-neural-network-checkpoint.ipynb
|
mit
|
import cv2
# read the image as grayscale (flag 0) and resize it to the 28x28 format used by the dataset
img = cv2.imread('test.png', 0)
resized_image = cv2.resize(img, (28, 28), interpolation=cv2.INTER_AREA)
"""
Explanation: CSAL4243: Introduction to Machine Learning
Muhammad Mudassir Khan (mudasssir.khan@ucp.edu.pk)
Assignment 2:
Digits Recognition using Logistic Regression & Neural Networks
In this assignment you are going to use logistic regression and neural networks on the digits dataset from the digit recognition competition on kaggle. The first task is to train a logistic regression model from scikit-learn on the training dataset, predict the labels of the given test dataset, and submit the predictions to kaggle. Then you will vary the regularization parameter of logistic regression and see whether it has any effect on your results. Next you will train a neural network from scikit-learn on the same dataset, use the trained model to predict the labels of the test dataset, submit the results to kaggle, and report the neural network results as well. Lastly you will create some handwritten digits using a drawing program such as MS Paint (or write them on paper and take a picture) and see how well your trained model works on them.
Note:
The given images are grayscale with digits written in white; make sure your generated digits are in the same format.
Overview
Digit Recognizer Dataset
Tasks
Resources
Credits
<br>
<br>
Digit Recognizer Dataset
The dataset you are going to use in this assignment is called Digit Recognizer, available on kaggle. To download the dataset, go to the dataset's Data tab and download the 'train.csv', 'test.csv' and 'sample_submission.csv.gz' files. 'train.csv' is used for training the model, 'test.csv' is used to test the model (i.e. generalization), and 'sample_submission.csv.gz' contains a sample of the submission file that you need to generate and submit to kaggle.
Note:
There are some tutorials available in the dataset's tutorial section which you can use as a starting point, especially A beginner's approach to classification, which uses scikit-learn's SVM classifier; you can replace it with logistic regression and a neural network. You can download that notebook by clicking Fork Notebook first and then the Download button.
<br>
Tasks
Use scikit-learn logistic regression to train on the Digit Recognizer dataset from the kaggle competition. Submit your best result to the competition and report the result.
Use different values of the regularization parameter (the parameter C, which is the inverse of the regularization parameter, i.e. C = $\frac{1}{\lambda}$) in logistic regression and report the effect.
Use a scikit-learn neural network to train on the Digit Recognizer dataset and submit your best result.
Hand draw digits using any drawing software with a black background and white font, test them on the model trained above, and report the results.
A short starting sketch for tasks 1 to 3 is given after the notes below.
Note:
If your system takes too much time training, reduce the training data; around 5000 examples are enough to get a good classifier.
It is a good idea to convert the images to binary values, i.e. 0's and 1's.
Since the dataset contains images of $28\times 28$ pixels, you should use the OpenCV library to resize your images if needed in task 4. You can install it as an anaconda package.
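The sketch below is only one possible starting point for tasks 1 to 3, assuming a recent scikit-learn that provides MLPClassifier and the standard train.csv layout with a label column followed by pixel columns; file and column names are the kaggle defaults, not prescribed by the assignment.
```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

train = pd.read_csv('train.csv')
test = pd.read_csv('test.csv')

# keep a subset for speed and binarize the pixel values (first column of train.csv is the label)
X = (train.iloc[:5000, 1:].values > 0).astype(int)
y = train.iloc[:5000, 0].values
X_test = (test.values > 0).astype(int)

# tasks 1 and 2: logistic regression, varying the inverse regularization strength C
clf = LogisticRegression(C=1.0, max_iter=1000).fit(X, y)

# task 3: a small scikit-learn neural network
nn = MLPClassifier(hidden_layer_sizes=(100,), max_iter=200).fit(X, y)

# write a kaggle submission file from the logistic regression predictions
submission = pd.DataFrame({'ImageId': range(1, len(X_test) + 1),
                           'Label': clf.predict(X_test)})
submission.to_csv('submission.csv', index=False)
```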
Image resize using opencv
End of explanation
"""
test_images[test_images>0]=1
train_images[train_images>0]=1
"""
Explanation: convert grey scale image to binary
covert every non zero value to one.
End of explanation
"""
|
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb
|
apache-2.0
|
import os
# Google Cloud Notebook
if os.path.exists("/opt/deeplearning/metadata/env_version"):
USER_FLAG = "--user"
else:
USER_FLAG = ""
! pip3 install --upgrade google-cloud-aiplatform $USER_FLAG
"""
Explanation: Vertex AI SDK for Python: AutoML training video object tracking model for batch prediction
<table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
<td>
<a href="https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?download_url=https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/main/notebooks/official/automl/sdk_automl_video_object_tracking_batch.ipynb">
<img src="https://lh3.googleusercontent.com/UiNooY4LUgW_oTvpsNhPpQzsstV5W8F7rYgxgGBD85cWJoLmrOzhVs_ksK_vgx40SHs7jCqkTkCk=e14-rj-sc0xffffff-h130-w32" alt="Vertex AI logo">
Open in Vertex AI Workbench
</a>
</td>
</table>
<br/><br/><br/>
Overview
This tutorial demonstrates how to use the Vertex AI SDK for Python to create video object tracking models and do batch prediction using a Google Cloud AutoML model.
Dataset
The dataset used for this tutorial is the Traffic dataset. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket.
Objective
In this tutorial, you create an AutoML video object tracking model from a Python script, and then do a batch prediction using the Vertex SDK. You can alternatively create and deploy models using the gcloud command-line tool or online using the Cloud Console.
The steps performed include:
Create a Vertex Dataset resource.
Train the model.
View the model evaluation.
Make a batch prediction.
There is one key difference between using batch prediction and using online prediction:
Prediction Service: Does an on-demand prediction for the entire set of instances (i.e., one or more data items) and returns the results in real-time.
Batch Prediction Service: Does a queued (batch) prediction for the entire set of instances in the background and stores the results in a Cloud Storage bucket when ready.
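As a preview of that last step, a batch prediction call with the Vertex SDK might look roughly like the sketch below; the bucket paths are placeholders and the exact arguments used later in this tutorial may differ.
```python
# hedged sketch only -- paths are placeholders, not values used in this tutorial
batch_prediction_job = model.batch_predict(
    job_display_name="traffic_" + TIMESTAMP,
    gcs_source="gs://your-bucket/batch_input.jsonl",      # JSONL index of videos (placeholder)
    gcs_destination_prefix="gs://your-bucket/results",    # where prediction results are written
    sync=True,                                            # block until the batch job completes
)
```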
Costs
This tutorial uses billable components of Google Cloud:
Vertex AI
Cloud Storage
Learn about Vertex AI
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or Google Cloud Notebooks, your environment already meets all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements. You need the following:
The Cloud Storage SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Cloud Storage guide to Setting up a Python development environment and the Jupyter installation guide provide detailed instructions for meeting these requirements. The following steps provide a condensed set of instructions:
Install and initialize the SDK.
Install Python 3.
Install virtualenv and create a virtual environment that uses Python 3. Activate the virtual environment.
To install Jupyter, run pip3 install jupyter on the command-line in a terminal shell.
To launch Jupyter, run jupyter notebook on the command-line in a terminal shell.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install the latest version of Vertex SDK for Python.
End of explanation
"""
! pip3 install -U google-cloud-storage $USER_FLAG
"""
Explanation: Install the latest GA version of google-cloud-storage library as well.
End of explanation
"""
import os
if not os.getenv("IS_TESTING"):
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the kernel
Once you've installed the additional packages, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
PROJECT_ID = "" # @param {type:"string"}
if PROJECT_ID == "" or PROJECT_ID is None or PROJECT_ID == "[your-project-id]":
# Get your GCP project id from gcloud
shell_output = ! gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID = shell_output[0]
print("Project ID:", PROJECT_ID)
! gcloud config set project $PROJECT_ID
"""
Explanation: Before you begin
GPU runtime
This tutorial does not require a GPU runtime.
Set up your Google Cloud project
The following steps are required, regardless of your notebook environment.
Select or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the following APIs: Vertex AI APIs, Compute Engine APIs, and Cloud Storage.
If you are running this notebook locally, you will need to install the Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $.
End of explanation
"""
REGION = "[your-region]"
if REGION == "[your-region]":
REGION = "us-central1"
"""
Explanation: Region
You can also change the REGION variable, which is used for operations
throughout the rest of this notebook. Below are regions supported for Vertex AI. We recommend that you choose the region closest to you.
Americas: us-central1
Europe: europe-west4
Asia Pacific: asia-east1
You may not use a multi-regional bucket for training with Vertex AI. Not all regions provide support for all Vertex AI services.
Learn more about Vertex AI regions
End of explanation
"""
from datetime import datetime
TIMESTAMP = datetime.now().strftime("%Y%m%d%H%M%S")
"""
Explanation: Timestamp
If you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append the timestamp onto the name of resources you create in this tutorial.
End of explanation
"""
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
import os
import sys
# If on Google Cloud Notebook, then don't execute this code
if not os.path.exists("/opt/deeplearning/metadata/env_version"):
if "google.colab" in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
elif not os.getenv("IS_TESTING"):
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your Google Cloud account
If you are using Google Cloud Notebooks, your environment is already authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.
Otherwise, follow these steps:
In the Cloud Console, go to the Create service account key page.
Click Create service account.
In the Service account name field, enter a name, and click Create.
In the Grant this service account access to project section, click the Role drop-down list. Type "Vertex" into the filter box, and select Vertex Administrator. Type "Storage Object Admin" into the filter box, and select Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your local environment.
Enter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_URI = "" # @param {type:"string"}
if BUCKET_URI == "" or BUCKET_URI is None or BUCKET_URI == "gs://[your-bucket-name]":
BUCKET_URI = "gs://" + PROJECT_ID + "aip-" + TIMESTAMP
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you initialize the Vertex SDK for Python, you specify a Cloud Storage staging bucket. The staging bucket is where all the data associated with your dataset and model resources are retained across sessions.
Set the name of your Cloud Storage bucket below. Bucket names must be globally unique across all Google Cloud projects, including those outside of your organization.
End of explanation
"""
! gsutil mb -l $REGION -p $PROJECT_ID $BUCKET_URI
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al $BUCKET_URI
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
import json
import os
import google.cloud.aiplatform as aiplatform
from google.cloud import storage
"""
Explanation: Set up variables
Next, set up some variables used throughout the tutorial.
Import libraries and define constants
End of explanation
"""
aiplatform.init(project=PROJECT_ID, staging_bucket=BUCKET_URI)
"""
Explanation: Initialize Vertex SDK for Python
Initialize the Vertex SDK for Python for your project and corresponding bucket.
End of explanation
"""
IMPORT_FILE = "gs://cloud-samples-data/ai-platform-unified/video/traffic/traffic_videos_labels.csv"
"""
Explanation: Tutorial
Now you are ready to start creating your own AutoML video object tracking model.
Location of Cloud Storage training data.
Now set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.
End of explanation
"""
FILE = IMPORT_FILE
count = ! gsutil cat $FILE | wc -l
print("Number of Examples", int(count[0]))
! gsutil cat $FILE | head
"""
Explanation: Quick peek at your data
This tutorial uses a version of the Traffic dataset that is stored in a public Cloud Storage bucket, using a CSV index file.
Start by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.
End of explanation
"""
dataset = aiplatform.VideoDataset.create(
display_name="Traffic" + "_" + TIMESTAMP,
gcs_source=[IMPORT_FILE],
import_schema_uri=aiplatform.schema.dataset.ioformat.video.object_tracking,
)
print(dataset.resource_name)
"""
Explanation: Create the Dataset
Next, create the Dataset resource using the create method for the VideoDataset class, which takes the following parameters:
display_name: The human readable name for the Dataset resource.
gcs_source: A list of one or more dataset index files to import the data items into the Dataset resource.
import_schema_uri: The schema identifying the type of annotations to import; here, video object tracking.
This operation may take several minutes.
End of explanation
"""
job = aiplatform.AutoMLVideoTrainingJob(
display_name="traffic_" + TIMESTAMP,
prediction_type="object_tracking",
)
print(job)
"""
Explanation: Create and run training pipeline
To train an AutoML model, you perform two steps: 1) create a training pipeline, and 2) run the pipeline.
Create training pipeline
An AutoML training pipeline is created with the AutoMLVideoTrainingJob class, with the following parameters:
display_name: The human readable name for the TrainingJob resource.
prediction_type: The type of task to train the model for.
classification: A video classification model.
object_tracking: A video object tracking model.
action_recognition: A video action recognition model.
End of explanation
"""
model = job.run(
dataset=dataset,
model_display_name="traffic_" + TIMESTAMP,
training_fraction_split=0.8,
test_fraction_split=0.2,
)
"""
Explanation: Run the training pipeline
Next, you start the training job by invoking the method run, with the following parameters:
dataset: The Dataset resource to train the model.
model_display_name: The human readable name for the trained model.
training_fraction_split: The percentage of the dataset to use for training.
test_fraction_split: The percentage of the dataset to use for test (holdout data).
When it completes, the run method returns the Model resource.
The execution of the training pipeline can take up to 5 hours.
End of explanation
"""
# Get model resource ID
models = aiplatform.Model.list(filter="display_name=traffic_" + TIMESTAMP)
# Get a reference to the Model Service client
client_options = {"api_endpoint": f"{REGION}-aiplatform.googleapis.com"}
model_service_client = aiplatform.gapic.ModelServiceClient(
client_options=client_options
)
model_evaluations = model_service_client.list_model_evaluations(
parent=models[0].resource_name
)
model_evaluation = list(model_evaluations)[0]
print(model_evaluation)
"""
Explanation: Review model evaluation scores
After your model has finished training, you can review the evaluation scores for it.
First, you need to get a reference to the new model. As with datasets, you can either use the reference to the model variable returned when you ran the training job, or you can list all of the models in your project.
End of explanation
"""
test_items = ! gsutil cat $IMPORT_FILE | head -n2
cols_1 = test_items[0].split(",")
cols_2 = test_items[1].split(",")
if len(cols_1) > 12:
test_item_1 = str(cols_1[1])
test_item_2 = str(cols_2[1])
test_label_1 = str(cols_1[2])
test_label_2 = str(cols_2[2])
else:
test_item_1 = str(cols_1[0])
test_item_2 = str(cols_2[0])
test_label_1 = str(cols_1[1])
test_label_2 = str(cols_2[1])
"""
Explanation: Send a batch prediction request
Send a batch prediction request to your trained model.
Get test item(s)
Now do a batch prediction with your Vertex model. You will use arbitrary examples from the dataset as test items. Don't be concerned that the examples were likely used in training the model -- we just want to demonstrate how to make a prediction.
End of explanation
"""
test_filename = "test.jsonl"
gcs_input_uri = BUCKET_URI + "/test.jsonl"
# making data_1 and data_2 variables using the structure mentioned above
data_1 = {
"content": test_item_1,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
data_2 = {
"content": test_item_2,
"mimeType": "video/avi",
"timeSegmentStart": "0.0s",
"timeSegmentEnd": "5.0s",
}
# getting reference to bucket
bucket = storage.Client(project=PROJECT_ID).bucket(BUCKET_URI.replace("gs://", ""))
# creating a blob
blob = bucket.blob(blob_name=test_filename)
# creating data variable
data = json.dumps(data_1) + "\n" + json.dumps(data_2) + "\n"
# uploading data variable content to bucket
blob.upload_from_string(data)
# printing path of uploaded file
print(gcs_input_uri)
# printing content of uploaded file
! gsutil cat $gcs_input_uri
"""
Explanation: Make a batch input file
Now make a batch input file, which you store in your Cloud Storage bucket. The batch input file can be either CSV or JSONL. You will use JSONL in this tutorial. For a JSONL file, you make one dictionary entry per line for each video. The dictionary contains the key/value pairs:
content: The Cloud Storage path to the video.
mimeType: The content type. In our example, it is an AVI file.
timeSegmentStart: The start timestamp in the video to do prediction on. Note, the timestamp must be specified as a string and followed by s (second), m (minute) or h (hour).
timeSegmentEnd: The end timestamp in the video to do prediction on.
End of explanation
"""
batch_predict_job = model.batch_predict(
job_display_name="traffic_" + TIMESTAMP,
gcs_source=gcs_input_uri,
gcs_destination_prefix=BUCKET_URI,
sync=False,
)
print(batch_predict_job)
"""
Explanation: Make the batch prediction request
Now that your Model resource is trained, you can make a batch prediction by invoking the batch_predict() method, with the following parameters:
job_display_name: The human readable name for the batch prediction job.
gcs_source: A list of one or more batch request input files.
gcs_destination_prefix: The Cloud Storage location for storing the batch prediction results.
sync: If set to True, the call will block while waiting for the asynchronous batch job to complete.
End of explanation
"""
batch_predict_job.wait()
"""
Explanation: Wait for completion of batch prediction job
Next, wait for the batch job to complete. Alternatively, one can set the parameter sync to True in the batch_predict() method to block until the batch prediction job is completed.
End of explanation
"""
bp_iter_outputs = batch_predict_job.iter_outputs()
prediction_results = list()
for blob in bp_iter_outputs:
if blob.name.split("/")[-1].startswith("prediction"):
prediction_results.append(blob.name)
tags = list()
for prediction_result in prediction_results:
gfile_name = f"gs://{bp_iter_outputs.bucket.name}/{prediction_result}".replace(
BUCKET_URI + "/", ""
)
data = bucket.get_blob(gfile_name).download_as_string()
data = json.loads(data)
print(data)
"""
Explanation: Get the predictions
Next, get the results from the completed batch prediction job.
The results are written to the Cloud Storage output bucket you specified in the batch prediction request. You call the method iter_outputs() to get a list of each Cloud Storage file generated with the results. Each file contains one or more prediction requests in a JSON format:
content: The prediction request.
prediction: The prediction response.
id: The internal assigned unique identifiers for each prediction request.
displayName: The class names for the predicted label.
confidences: The predicted confidence, between 0 and 1, per class label.
timeSegmentStart: The time offset in the video to the start of the video sequence.
timeSegmentEnd: The time offset in the video to the end of the video sequence.
frames: Location with frames of the tracked object.
End of explanation
"""
# Delete the dataset using the Vertex dataset object
dataset.delete()
# Delete the model using the Vertex model object
model.delete()
# Delete the AutoML or Pipeline training job
job.delete()
# Delete the batch prediction job using the Vertex batch prediction object
batch_predict_job.delete()
if os.getenv("IS_TESTING"):
! gsutil -m rm -r $BUCKET_URI
"""
Explanation: Cleaning up
To clean up all Google Cloud resources used in this project, you can delete the Google Cloud
project you used for the tutorial.
Otherwise, you can delete the individual resources you created in this tutorial:
Dataset
Pipeline
Model
Endpoint
AutoML Training Job
Batch Job
Custom Job
Hyperparameter Tuning Job
Cloud Storage Bucket
End of explanation
"""
|
ModestoCabrera/IS360_W7Assignment
|
Week7_Assignment.ipynb
|
gpl-2.0
|
import urllib2, argparse
from bs4 import BeautifulSoup
import pandas as pd
link = "https://www.globalpolicy.org/component/content/article/109/27519.html"
"""
Explanation: Reading HTML Tables into DataFrame
End of explanation
"""
from week_7_code import *
"""
Explanation: I'd previously coded this in a Python file so it would be easier for me to find errors.
End of explanation
"""
download_link = url_download(link)
"""
Explanation: I defined a function that can be used on any URL to load the link into Python.
End of explanation
"""
table_data = parse_site(download_link)
"""
Explanation: After making the request, I've prepared a function to parse the site, find the table, and return the values, index, and title to Python as a tuple.
End of explanation
"""
table_inf = zip(table_data[1], [n[0] for n in table_data[2]], [n[1] for n in table_data[2]])
"""
Explanation: After parsing the table, what is returned is a tuple of lists that includes the index, data, and title of the table.
End of explanation
"""
table = pd.DataFrame(data = table_inf, columns = list(table_data[0]), index = table_data[1])
print table
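# As an alternative to hand-rolled parsing, pandas can often read HTML tables
# directly. This is a sketch only: it requires lxml or html5lib to be installed,
# and whether it works depends on how the page is structured.
# tables = pd.read_html(link)
# print tables[0].head()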
"""
Explanation: In order to create a DataFrame, I've zipped the cells in the data with the year columns so that they can be used in the next step of instantiating the DataFrame in the variable table.
End of explanation
"""
|
Applied-Groundwater-Modeling-2nd-Ed/Chapter_5_problems-1
|
P5.3_Flopy_Industrial_pond.ipynb
|
gpl-2.0
|
%matplotlib inline
import sys
import os
import shutil
import numpy as np
from subprocess import check_output
# Import flopy
import flopy
"""
Explanation: <img src="AW&H2015.tiff" style="float: left">
<img src="flopylogo.png" style="float: center">
Problem P5.3 Industrial Pond Leakage
In Problem P5.3 from pages 246-247 in Anderson, Woessner and Hunt (2015), we are asked to construct an areal 2D model to represent an industrial facility in an arid setting. It is disposing of fluids in a 900 m by 900 m pond that is leaking at a rate of 0.2 m/d (Fig. P5.2). Recharge from precipitation in this area is negligible. The pond is located in the center of the horizontal problem domain and is underlain by a sequence of sediment layers consisting of sand, clay, and sand and gravel. Wet areas around the pond perimeter at the land surface are causing some local water-logging of the soils and impacting vegetation. The owners of the pond believe that the water-logging is caused by seepage out of the pond through the berms around the sides of the pond. The state regulatory agency, however, suspects that leakage through the bottom of the pond has created a water table mound that intersects the land surface. The objective of modeling is to determine whether the groundwater mound beneath the pond reaches the land surface and is water-logging the soil.
Part a.
The consulting firm hired by the industrial facility recommends a 2D areal steady-state unconfined model as a quick and easy way to address the modeling objective. As a newly hired hydrogeologist of the consulting firm, you are
instructed to construct the model. The width of the problem domain is 11,700 m. Use no flow boundary conditions at the north and south ends of the problem domain and specified heads along the sides (Fig. P5.2). Use a uniform nodal spacing of 900 m. Use Eqns (B5.3.2) and (B5.3.3) in Box 5.3 to compute the average horizontal and vertical hydraulic conductivity for the layer. Although the vertical hydraulic conductivity is not used in a one-layer 2D areal model, the vertical anisotropy ratio of the layer is of interest. Produce a contour map of the water table (use 1-m contour intervals) using the computed heads. Under this representation does the water table intersect the land surface?
<img src="P5.3_figure.tiff" style="float: center">
Below is an iPython Notebook that builds a Python MODFLOW model for this problem and plots results. See the Github wiki associated with this Chapter for information on one suggested installation and setup configuration for Python and iPython Notebook.
[Acknowledgements: This tutorial was created by Randy Hunt and all failings are mine. The exercise here has benefited greatly from the online Flopy tutorial and example notebooks developed by Chris Langevin and Joe Hughes for the USGS Spring 2015 Python Training course GW1774]
Creating the Model
In this example, we will create a simple groundwater flow model using the Flopy website approach. Visit the tutorial website here.
Setup the Notebook Environment and Import Flopy
Load a few standard libraries, and then load flopy.
End of explanation
"""
# Set the name of the path to the model working directory
dirname = "P5-3_Industrial_pond"
datapath = os.getcwd()
modelpath = os.path.join(datapath, dirname)
print 'Name of model path: ', modelpath
# Now let's check if this directory exists. If not, then we will create it.
if os.path.exists(modelpath):
print 'Model working directory already exists.'
else:
print 'Creating model working directory.'
os.mkdir(modelpath)
"""
Explanation: Setup a New Directory and Change Paths
For this tutorial, we will work in a new subdirectory underneath the directory where the notebook is located. We can use some fancy Python tools to help us manage the directory creation. Note that if you encounter path problems with this workbook, you can stop and then restart the kernel and the paths will be reset.
End of explanation
"""
# model domain and grid definition
# for clarity, user entered variables are typically all caps except for layer thickness (b) and K direction (Kh or Kv); python syntax are lower case or mixed case
# This is an unconfined areal 2D model.
LX = 13500. # aquifer width of 11700 m plus two 900 m nodes for the constant head boundary conditions on each end
LY = 9000. # height of aquifer of 9000 m, no flow boundaries do not require explicit cells in MODFLOW
ZTOP = 130. # the system is unconfined
ZBOT = 0.
NLAY = 1
NROW = 10
NCOL = 15
DELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)
DELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)
DELV = (ZTOP - ZBOT) / NLAY
BOTM = np.linspace(ZTOP, ZBOT, NLAY + 1)
# RCH = 0.0 #not needed for Problem P5.3
# WELLQ = 0. #not needed for Problem P5.3
POND_SEEP=0.2
print "DELR =", DELR, " DELC =", DELC, ' DELV =', DELV
print "BOTM =", BOTM
#print "Recharge =", RCH
print "Pond Seepage =", POND_SEEP, "m/d"
#print "Pumping well rate =", WELLQ
"""
Explanation: Define the Model Extent, Grid Resolution, and Characteristics
It is normally good practice to group things that you might want to change into a single code block. This makes it easier to make changes and rerun the code.
End of explanation
"""
LAY1b=((120.-80)+(90-80))/2 #this calculates the average saturated thickness of Layer 1
print "The average Layer 1 thickness =", LAY1b, "m"
"""
Explanation: In this problem we are required to calculate the equivalent 1-layer hydraulic conductivity from the 3-layer system shown in Figure P5.2 using the correct equation from Box 5.3 (equation B5.3.2).
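For reference, that equation is the thickness-weighted arithmetic mean of the layer horizontal hydraulic conductivities (matching the calculation coded below):
$$K_{h,eq} = \frac{1}{B}\sum_{i=1}^{n} K_{h,i}\, b_i$$
where $b_i$ is the thickness of layer $i$ and $B = \sum_i b_i$ is the total aquifer thickness.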
End of explanation
"""
LAY2b = 80.-40
LAY3b = 40.-0
print "Layer 2 thickness =", LAY2b, "m"
print "Layer 3 thickness =", LAY3b, "m"
#In equation B5.3.2 we also need the total thickness of the aquifer B
TOT_THICK = LAY1b+LAY2b+LAY3b
print "The total aquifer thickness B =", TOT_THICK, "m"
#Now we can assign the Kh for each layer from Figure P5.2
LAY1Kh=44.
LAY2Kh=0.13
LAY3Kh=113.
#now we can calculate the equivalent Kh from Box 5.3's equation B5.3.2
EQUIV_Kh=(LAY1Kh*LAY1b/TOT_THICK)+(LAY2Kh*LAY2b/TOT_THICK)+(LAY3Kh*LAY3b/TOT_THICK)
print "The 1-layer equivlant Kh =", EQUIV_Kh, "m/d"
"""
Explanation: This agrees with the value given in the caption in figure P5.2
End of explanation
"""
# Assign name and create modflow model object
modelname = 'P5-3'
#exe_name = os.path.join(datapath, 'mf2005.exe') # for Windows OS
exe_name = os.path.join(datapath, 'mf2005') # for Mac OS
print 'Model executable: ', exe_name
MF = flopy.modflow.Modflow(modelname, exe_name=exe_name, model_ws=modelpath)
"""
Explanation: Don't forget that we are asked to do a similar calculation for equivalent Kv. It is not needed for a 1-layer MODFLOW model, but a quick check using the harmonic mean is sketched in the code just below.
Create the MODFLOW Model Object
Create a flopy MODFLOW object: flopy.modflow.Modflow.
End of explanation
"""
# Create the discretization object
TOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM[1:], laycbd=0)
# print DIS_PACKAGE #uncomment this on far left to see information about the flopy object
"""
Explanation: Discretization Package
Create a flopy discretization package object: flopy.modflow.ModflowDis.
End of explanation
"""
# Variables for the BAS package
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int32) # all nodes are active (IBOUND = 1)
# make the left and right columns specified head by setting IBOUND = -1
IBOUND[:, :, 0] = -1 #don't forget arrays are zero-based! Sets first column
IBOUND[:, :, -1] = -1 # Sets last column
print IBOUND
STRT = 130 * np.ones((NLAY, NROW, NCOL), dtype=np.float32) # set starting head to landsurface (130 m) throughout model domain
STRT[:, :, 0] = 90. # leftmost constant head
STRT[:, :, -1] = 120. # rightmost constant head
print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
"""
Explanation: Basic Package
Create a flopy basic package object: flopy.modflow.ModflowBas.
End of explanation
"""
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=EQUIV_Kh) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
"""
Explanation: Layer Property Flow Package
Create a flopy layer property flow package object: flopy.modflow.ModflowLpf.
End of explanation
"""
#WEL_PACKAGE = flopy.modflow.ModflowWel(MF, stress_period_data=[0,0,0,WELLQ]) # remember python 0 index, layer 0 = layer 1 in MF
#print WEL_PACKAGE # uncomment this at far left to see the information about the flopy WEL object
"""
Explanation: Well Package
This is not needed for Problem P5.3
End of explanation
"""
OC_PACKAGE = flopy.modflow.ModflowOc(MF) # we'll use the defaults for the model output
# print OC_PACKAGE # uncomment this at far left to see the information about the flopy OC object
"""
Explanation: Output Control
Create a flopy output control object: flopy.modflow.ModflowOc.
End of explanation
"""
PCG_PACKAGE = flopy.modflow.ModflowPcg(MF, mxiter=500, iter1=100, hclose=1e-04, rclose=1e-03, relax=0.98, damp=0.5)
# print PCG_PACKAGE # uncomment this at far left to see the information about the flopy PCG object
"""
Explanation: Preconditioned Conjugate Gradient Solver
Create a flopy pcg package object: flopy.modflow.ModflowPcg.
End of explanation
"""
SEEP_ARRAY = 0 * np.ones((NROW, NCOL), dtype=np.float32) # set seepage to 0 over model grid
SEEP_ARRAY[4, 7] = POND_SEEP # add Pond seepage at the pond location
print SEEP_ARRAY
"""
Explanation: Recharge Package
Because the pond seepage is given as a flux over the cell (L/T) rather than a volumetric flow rate (L3/T), the Recharge Package is easier to use than MODFLOW's Well Package. However, we only want to add pond seepage to the cells that represent the pond. Therefore, we'll need to make an array like we do for starting heads.
End of explanation
"""
RCH_PACKAGE = flopy.modflow.ModflowRch(MF, rech=SEEP_ARRAY)
# print RCH_PACKAGE # uncomment this at far left to see the information about the flopy RCH object
"""
Explanation: Create a flopy recharge package object: flopy.modflow.ModflowRch.
End of explanation
"""
#Before writing input, destroy all files in folder to prevent reusing old files
#Here's the working directory
print modelpath
#Here's what's currently in the working directory
modelfiles = os.listdir(modelpath)
print modelfiles
#delete these files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files
MF.write_input()
"""
Explanation: Writing the MODFLOW Input Files
Before we create the model input datasets, we can do some directory cleanup to make sure that we don't accidently use old files.
End of explanation
"""
# return current working directory
print "You can check the newly created files in", modelpath
"""
Explanation: The model datasets are written using a single command (mf.write_input).
Check in the model working directory and verify that the input files have been created. Or you might just add another cell, right after this one, that prints a list of all the files in our model directory. The path we are working in is returned by this next block.
End of explanation
"""
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
"""
Explanation: Running the Model
Flopy has several methods attached to the model object that can be used to run the model. They are run_model, run_model2, and run_model3. Here we use run_model, which will write output to the notebook.
End of explanation
"""
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
"""
Explanation: Post Processing the Results
To read heads from the MODFLOW binary output file, we can use the flopy.utils.binaryfile module. Specifically, we can use the HeadFile object from that module to extract head data arrays.
End of explanation
"""
#Create a contour plot of heads
FIG = plt.figure(figsize=(12,10))
#setup contour levels and plot extent
LEVELS = np.arange(90., 126., 1.)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 9000, 1000)
AX1.set_yticks(YTICKS)
AX1.set_title("P5.3 1-layer Industrial Pond Problem")
AX1.text(6000, 5500, r"pond", fontsize=10, color="blue")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("P5.3 color flood")
AX2.text(6000, 5500, r"pond", fontsize=10, color="white")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#Using code from P4.4, let's plot a cross section of head in row = 4 from headobj, recall that Python is zero based
#so that MODFLOW row 5 with the pond is equal to Python row 4
#define Y as HEAD along the row and then print; ROW is a variable that allows us to change the row plotted easily
ROW = 4
Y = HEAD[0,ROW,:]
print Y
#in order to plot the cross section we'll need to create X-coordinates to match with heads at the node centers
XCOORD = np.arange(0, 13500, 900) + 450
print XCOORD
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = '1-layer model 900m x 900m grid: cross section of head along Row = ({0})'.format(ROW) #this allows the title to be updated as ROW changes
ax.set_title(TITLE)
ax.set_xlabel('X')
ax.set_ylabel('head')
ax.set_xlim(0, 13500.)
ax.set_ylim(0.,150.)
ax.text(6400,125, r"pond", fontsize=12, color="black")
ax.text(2030,132, r"land surface", fontsize=15, color="green")
ax.text(13200, 80, r"constant head = 120m", fontsize=10, color="blue",rotation='vertical')
ax.text(150, 80, r"constant head = 90m", fontsize=10, color="blue",rotation='vertical')
ax.plot(XCOORD, Y)
ax.plot(XCOORD, XCOORD*0+130) #land surface
"""
Explanation: Land surface elevation = 130 m, which is significantly higher than the maximum head in the model.
End of explanation
"""
#we have to redefine the layering
ZTOP = 130. # the system is unconfined so set the top above land surface so that the water table never > layer top
ZBOT = 0.
NLAY = 3
BOTM = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
BOTM[0,:,:] = 80.
BOTM[1,:,:] = 40.
BOTM[2,:,:] = 0.
print BOTM
# Create the discretization object
TOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM, laycbd=0)
"""
Explanation: P5.3b
The state regulatory agency insists that a 3D model be developed to examine how vertical flow and anisotropy influence the height of the groundwater mound. They point out that the low hydraulic conductivity of layer 2 and the vertical anisotropy present in the layered sequence of units might cause the mound to rise to the surface. Construct a three-layer steady-state model based on the information in Fig. P5.2. Specified head and no flow boundaries
extend to all layers. Generate an equipotential surface (using 1-m contour intervals) for each layer. Also show the head distribution in a cross section that passes through the specified head boundaries and the pond.
End of explanation
"""
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int)
IBOUND[:,:,0] = -1
IBOUND[:,:,-1] = -1
print IBOUND
STRT = 130. * np.ones((NLAY, NROW, NCOL), dtype=np.float)
STRT[:,:,0] = 90.
STRT[:,:,-1] = 120.
print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
"""
Explanation: Now we must do the same for MODFLOW's Basic Package
End of explanation
"""
#Now we can assign the Kh for each layer from Figure P5.2
LAY1Kh=44.
LAY2Kh=0.13
LAY3Kh=113.
KH_ARRAY = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
KH_ARRAY[0,:,:] = LAY1Kh
KH_ARRAY[1,:,:] = LAY2Kh
KH_ARRAY[2,:,:] = LAY3Kh
print KH_ARRAY
#Now we can assign the Kv for each layer from Figure P5.2
LAY1Kv=4.4
LAY2Kv=0.013
LAY3Kv=11.3
KV_ARRAY = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
KV_ARRAY[0,:,:] = LAY1Kv
KV_ARRAY[1,:,:] = LAY2Kv
KV_ARRAY[2,:,:] = LAY3Kv
print KV_ARRAY
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=KH_ARRAY, vka=KV_ARRAY) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
#Before writing input, destroy all files in folder to prevent reusing old files
#Here's the working directory
print modelpath
#Here's what's currently in the working directory
modelfiles = os.listdir(modelpath)
print modelfiles
#delete these files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now write the model input files and run MODFLOW
MF.write_input()
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
"""
Explanation: Now we must do the same for hydraulic conductivity.
End of explanation
"""
#Create a contour plot of heads
FIG = plt.figure(figsize=(12,10))
#setup contour levels and plot extent
LEVELS = np.arange(90., 146., 1.)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 9000, 1000)
AX1.set_yticks(YTICKS)
AX1.set_title("3-layer P5.3 Industrial Pond Problem 900x900m grid")
AX1.text(6000, 5500, r"pond", fontsize=10, color="blue")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("P5.3 color flood")
AX2.text(6000, 5500, r"pond", fontsize=10, color="white")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#Using code from P4.4, let's again plot a cross section of head in row = 4 from headobj, recall that Python is zero based
#so that MODFLOW row 5 with the pond is equal to Python row 4
#define Y as HEAD along the row and then print; ROW is a variable that allows us to change the row plotted easily
ROW = 4
Y = HEAD[0,ROW,:]
print Y
#in order to plot the cross section we'll need to create X-coordinates to match with heads at the node centers
#(we could have just used the XCOORD calculated above as the X-Y spacing did not change, but we'll do it again
#to make sure we have the correct X coordinates)
XCOORD = np.arange(0, 13500, 900) + 450
print XCOORD
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = '3-layer model 900m x 900m grid: cross section of head along Row = ({0})'.format(ROW) #this allows the title to be updated as ROW changes
ax.set_title(TITLE)
ax.set_xlabel('X')
ax.set_ylabel('head')
ax.set_xlim(0, 13500.)
ax.set_ylim(0.,150.)
ax.text(6150,125, r"pond", fontsize=12, color="black")
ax.text(2030,132, r"land surface", fontsize=15, color="green")
ax.text(2030,75, r"Layer 1 bottom", fontsize=10, color="red")
ax.text(2030,35, r"Layer 2 bottom", fontsize=10, color="teal")
ax.text(13200, 80, r"constant head = 120m", fontsize=10, color="blue",rotation='vertical')
ax.text(150, 80, r"constant head = 90m", fontsize=10, color="blue",rotation='vertical')
ax.plot(XCOORD, Y)
ax.plot(XCOORD, XCOORD*0+130) #land surface
ax.plot(XCOORD, XCOORD*0+80) #Bottom of Layer 1
ax.plot(XCOORD, XCOORD*0+40) #Bottom of Layer 2
"""
Explanation: Note the new maximum head value (recall land surface elevation = 130).
End of explanation
"""
# let's redefine the grid
LX = 12300. # aquifer width of 11700 m plus two 300 m nodes for the constant head boundary conditions on each end
LY = 9000. # height of aquifer of 9000 m, no flow boundaries do not require explicit cells in MODFLOW
ZTOP = 130. # the system is unconfined
NROW = 30 #these two lines are the only things we need to change to refine
NCOL = 41 # the grid
DELR = LX / NCOL # recall that MODFLOW convention is DELR is along a row, thus has items = NCOL; see page XXX in AW&H (2015)
DELC = LY / NROW # recall that MODFLOW convention is DELC is along a column, thus has items = NROW; see page XXX in AW&H (2015)
POND_SEEP=0.2
print "DELR =", DELR, " DELC =", DELC
print "Pond Seepage =", POND_SEEP, "m/d"
#repeat the modeling building steps from P5.3b to make sure all model definition is current
NLAY = 3
ZBOT = 0.
BOTM = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
BOTM[0,:,:] = 80.
BOTM[1,:,:] = 40.
BOTM[2,:,:] = 0.
#print BOTM
# Create the discretization object
TOP = ZTOP * np.ones((NROW, NCOL),dtype=np.float)
DIS_PACKAGE = flopy.modflow.ModflowDis(MF, NLAY, NROW, NCOL, delr=DELR, delc=DELC,
top=TOP, botm=BOTM, laycbd=0)
IBOUND = np.ones((NLAY, NROW, NCOL), dtype=np.int)
IBOUND[:,:,0] = -1
IBOUND[:,:,-1] = -1
#print IBOUND
STRT = 130. * np.ones((NLAY, NROW, NCOL), dtype=np.float)
STRT[:,:,0] = 90.
STRT[:,:,-1] = 120.
#print STRT
BAS_PACKAGE = flopy.modflow.ModflowBas(MF, ibound=IBOUND, strt=STRT)
# print BAS_PACKAGE # uncomment this at far left to see the information about the flopy BAS object
#Now we can assign the Kh for each layer from Figure P5.2
LAY1Kh=44.
LAY2Kh=0.13
LAY3Kh=113.
KH_ARRAY = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
KH_ARRAY[0,:,:] = LAY1Kh
KH_ARRAY[1,:,:] = LAY2Kh
KH_ARRAY[2,:,:] = LAY3Kh
#print KH_ARRAY
#Now we can assign the Kv for each layer from Figure P5.2
LAY1Kv=4.4
LAY2Kv=0.013
LAY3Kv=11.3
KV_ARRAY = np.zeros((NLAY, NROW, NCOL), dtype=np.float)
KV_ARRAY[0,:,:] = LAY1Kv
KV_ARRAY[1,:,:] = LAY2Kv
KV_ARRAY[2,:,:] = LAY3Kv
#print KV_ARRAY
LPF_PACKAGE = flopy.modflow.ModflowLpf(MF, laytyp=1, hk=KH_ARRAY, vka=KV_ARRAY) # we defined the K and anisotropy at top of file
# print LPF_PACKAGE # uncomment this at far left to see the information about the flopy LPF object
#We have to place the pond in the new grid too via MODFLOW's Recharge Package (like in P5.3a)
SEEP_ARRAY = 0 * np.ones((NROW, NCOL), dtype=np.float32) # set seepage to 0 over model grid
SEEP_ARRAY[12, 20] = POND_SEEP # add Pond seepage at the pond location (have 9 nodes now with finer grid)
SEEP_ARRAY[13, 20] = POND_SEEP
SEEP_ARRAY[14, 20] = POND_SEEP
SEEP_ARRAY[12, 21] = POND_SEEP
SEEP_ARRAY[13, 21] = POND_SEEP
SEEP_ARRAY[14, 21] = POND_SEEP
SEEP_ARRAY[12, 22] = POND_SEEP
SEEP_ARRAY[13, 22] = POND_SEEP
SEEP_ARRAY[14, 22] = POND_SEEP
RCH_PACKAGE = flopy.modflow.ModflowRch(MF, rech=SEEP_ARRAY)
#Before writing input, destroy all files in folder to prevent reusing old files
#Here's the working directory
print modelpath
#Here's what's currently in the working directory
modelfiles = os.listdir(modelpath)
print modelfiles
#delete these files to prevent us from reading old results
modelfiles = os.listdir(modelpath)
for filename in modelfiles:
f = os.path.join(modelpath, filename)
if modelname in f:
try:
os.remove(f)
print 'Deleted: ', filename
except:
print 'Unable to delete: ', filename
#Now we are ready to write input and re-run MODFLOW with the 300 x 300 m grid
MF.write_input()
silent = False #Print model output to screen?
pause = False #Require user to hit enter? Doesn't mean much in Ipython notebook
report = True #Store the output from the model in buff
success, buff = MF.run_model(silent=silent, pause=pause, report=report)
#imports for plotting and reading the MODFLOW binary output file
import matplotlib.pyplot as plt
import flopy.utils.binaryfile as bf
#Create the headfile object and grab the results for last time.
headfile = os.path.join(modelpath, modelname + '.hds')
headfileobj = bf.HeadFile(headfile)
#Get a list of times that are contained in the model
times = headfileobj.get_times()
print 'Headfile (' + modelname + '.hds' + ') contains the following list of times: ', times
#Get a numpy array of heads for totim = 1.0
#The get_data method will extract head data from the binary file.
HEAD = headfileobj.get_data(totim=1.0)
#Print statistics on the head
print 'Head statistics'
print ' min: ', HEAD.min()
print ' max: ', HEAD.max()
print ' std: ', HEAD.std()
#Create a contour plot of heads
FIG = plt.figure(figsize=(12,10))
#setup contour levels and plot extent
LEVELS = np.arange(90., 146., 1.)
EXTENT = (DELR/2., LX - DELR/2., DELC/2., LY - DELC/2.)
print 'Contour Levels: ', LEVELS
print 'Extent of domain: ', EXTENT
#Make a contour plot on the first axis
AX1 = FIG.add_subplot(1, 2, 1, aspect='equal')
AX1.set_xlabel("x")
AX1.set_ylabel("y")
YTICKS = np.arange(0, 9000, 1000)
AX1.set_yticks(YTICKS)
AX1.set_title("3-layer P5.3 Industrial Pond Problem 300x300m grid")
AX1.text(6000, 5500, r"pond", fontsize=10, color="blue")
AX1.contour(np.flipud(HEAD[0, :, :]), levels=LEVELS, extent=EXTENT)
#Make a color flood on the second axis
AX2 = FIG.add_subplot(1, 2, 2, aspect='equal')
AX2.set_xlabel("x")
AX2.set_ylabel("y")
AX2.set_yticks(YTICKS)
AX2.set_title("P5.3 color flood")
AX2.text(6000, 5500, r"pond", fontsize=10, color="white")
cax = AX2.imshow(HEAD[0, :, :], extent=EXTENT, interpolation='nearest')
cbar = FIG.colorbar(cax, orientation='vertical', shrink=0.45)
#Using code from P4.4, let's look at the head in row = 13 from headobj, and then plot it
#define Y as HEAD along the row; ROW is a variable that allows us to change this easily
ROW = 13
Y = HEAD[0,ROW,:]
print Y
#for our cross section create X-coordinates to match with heads
XCOORD = np.arange(0, 12300, 300) + 150
print XCOORD
fig = plt.figure(figsize=(10, 5))
ax = fig.add_subplot(1, 1, 1)
TITLE = '3-layer model 300m x 300m grid: cross section of head along Row = ({0})'.format(ROW) #this allows the title to be updated as ROW changes
ax.set_title(TITLE)
ax.set_xlabel('X')
ax.set_ylabel('head')
ax.set_xlim(0, 12300.)
ax.set_ylim(0.,150.)
ax.text(6150,125, r"pond", fontsize=12, color="black")
ax.text(2030,132, r"land surface", fontsize=15, color="green")
ax.text(2030,75, r"Layer 1 bottom", fontsize=10, color="red")
ax.text(2030,35, r"Layer 2 bottom", fontsize=10, color="teal")
ax.text(12000, 80, r"constant head = 120m", fontsize=10, color="blue",rotation='vertical')
ax.text(150, 80, r"constant head = 90m", fontsize=10, color="blue",rotation='vertical')
ax.plot(XCOORD, Y)
ax.plot(XCOORD, XCOORD*0+130) #land surface
ax.plot(XCOORD, XCOORD*0+80) #Bottom of Layer 1
ax.plot(XCOORD, XCOORD*0+40) #Bottom of Layer 2
"""
Explanation: Examine your results and answer the following:
i. Explain why the results of the 2D model are different from the 3D model. What are the main factors that control the height of the water table mound under the pond? Discuss whether the 2D model is appropriate for this problem.
ii. Is it likely that the water table intersects the land surface away from the pond? If so, use shading on a map of the land surface in the vicinity of the pond to show the area affected by leakage.
P5.3c
When the modeling report is sent out for review, reviewers question whether
the large nodal spacing of 900 m sufficiently captures the head gradient that
defines the mound. They say that the surface area affected by the mound is
underestimated. Use the three-layer model developed in (b) to assess the effect
of nodal spacing on the solution. Reduce the nodal spacing uniformly over
the grid/mesh to 300 m, or construct an irregular FD grid, unstructured FD
grid, or FE mesh with fine nodal spacing in the vicinity of the pond. Run the
model and generate equipotential maps for each layer (use a 1-m contour interval)
and a cross section that passes through the pond and constant head boundaries.
If the mound intersects the land surface, show the area impacted by shading
on a map of the land surface. Compare and contrast results with those of parts
(a) and (b).
End of explanation
"""
|
MBARIMike/biofloat
|
notebooks/build_biofloat_cache.ipynb
|
mit
|
from biofloat import ArgoData
ad = ArgoData(verbosity=2)
"""
Explanation: Build local cache file from Argo data sources - first in a series of Notebooks
Execute commands to pull data from the Internet into a local HDF cache file so that we can better interact with the data
Import the ArgoData class and instantiate an ArgoData object (ad) with verbosity set to 2 so that we get INFO messages.
End of explanation
"""
%%time
floats340 = ad.get_oxy_floats_from_status(age_gte=340)
print('{} floats at least 340 days old'.format(len(floats340)))
"""
Explanation: You can now explore what methods the ad object has by typing "ad." in a cell and pressing the tab key. One of the methods is get_oxy_floats_from_status(); to see what it does, select it and press shift-tab with the cursor in the parentheses of "ad.get_oxy_floats_from_status()". Let's get a list of all the floats that have been out for at least 340 days and print the length of that list.
End of explanation
"""
%%time
floats730 = ad.get_oxy_floats_from_status(age_gte=730)
print('{} floats at least 730 days old'.format(len(floats730)))
"""
Explanation: If this is the first time you've executed the cell, it will take a minute or so to read the Argo status information from the Internet (the PerformanceWarning can be ignored - for this small table it doesn't matter much).
Once the status information is read it is cached locally and further calls to get_oxy_floats_from_status() will execute much faster. To demonstrate, let's count all the oxygen labeled floats that have been out for at least 2 years.
End of explanation
"""
%%time
dac_urls = ad.get_dac_urls(floats340)
print(len(dac_urls))
"""
Explanation: Now let's find the Data Assembly Center URL for each of the floats in our list. (The returned dictionary of URLs is also locally cached.)
End of explanation
"""
%%time
wmo_list = ['1900650']
ad.set_verbosity(0)
df = ad.get_float_dataframe(wmo_list, max_profiles=20)
"""
Explanation: Now, whenever we need to get profile data, our lookups for status and Data Assembly Centers will be serviced from the local cache. Let's get a Pandas DataFrame (df) of 20 profiles from the float with WMO number 1900650.
End of explanation
"""
%%time
df = ad.get_float_dataframe(wmo_list, max_profiles=20)
"""
Explanation: Profile data is also cached locally. To demonstrate, perform the same command as in the previous cell and note the time difference.
End of explanation
"""
df.head()
"""
Explanation: Examine the first 5 records of the float data.
End of explanation
"""
time_range = '{} to {}'.format(df.index.get_level_values('time').min(),
df.index.get_level_values('time').max())
df.query('pressure < 10')
"""
Explanation: There's a lot that can be done with the profile data in this DataFrame structure. We can construct a time_range string and query for all the data values from pressures less than 10 decibars:
End of explanation
"""
df.query('pressure < 10').groupby(level=['wmo', 'time']).mean()
"""
Explanation: In one command we can take the mean of all the values from the upper 10 decibars:
End of explanation
"""
%pylab inline
import pylab as plt
# Parameter long_name and units copied from attributes in NetCDF files
parms = {'TEMP_ADJUSTED': 'SEA TEMPERATURE IN SITU ITS-90 SCALE (degree_Celsius)',
'PSAL_ADJUSTED': 'PRACTICAL SALINITY (psu)',
'DOXY_ADJUSTED': 'DISSOLVED OXYGEN (micromole/kg)'}
plt.rcParams['figure.figsize'] = (18.0, 8.0)
fig, ax = plt.subplots(1, len(parms), sharey=True)
ax[0].invert_yaxis()
ax[0].set_ylabel('SEA PRESSURE (decibar)')
for i, (p, label) in enumerate(parms.iteritems()):
ax[i].set_xlabel(label)
ax[i].plot(df[p], df.index.get_level_values('pressure'), '.')
plt.suptitle('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)
"""
Explanation: We can plot the profiles:
End of explanation
"""
from mpl_toolkits.basemap import Basemap
m = Basemap(llcrnrlon=15, llcrnrlat=-90, urcrnrlon=390, urcrnrlat=90, projection='cyl')
m.fillcontinents(color='0.8')
m.scatter(df.index.get_level_values('lon'), df.index.get_level_values('lat'), latlon=True)
plt.title('Float(s) ' + ' '.join(wmo_list) + ' from ' + time_range)
"""
Explanation: We can plot the location of these profiles on a map:
End of explanation
"""
|
marknabil/B31XI-SI-Clustering
|
03-clustering.ipynb
|
gpl-2.0
|
%matplotlib inline
%pprint off
# Matplotlib library
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.ticker import LinearLocator, FormatStrFormatter
import matplotlib.pyplot as plt
# MPLD3 extension
import mpld3
# Numpy library
import numpy as np
# Import the Scipy library for griddata
from scipy.interpolate import griddata
"""
Explanation: Pattern Recognition - ViBOT MsCV
Guillaume Lemaitre - Fabrice Meriaudeau - Johan Massich
Clustering
End of explanation
"""
# Import k-means clustering method from scikit-learn
from sklearn.cluster import KMeans
# Import fuzzy c-means from scikit-fuzzy
import skfuzzy as fuzz
"""
Explanation: Import the library to perform the clustering with k-means and fuzzy c-means.
End of explanation
"""
# Size of points in the dataset
N = 1000
# Define the property of the gaussian distribution
mean1, mean2 = np.array([1., 1.]), np.array([-1., -1.])
cov1, cov2 = np.diagflat([1, 1]), np.diagflat([1, 1])
class_1 = np.random.multivariate_normal(mean1, cov1, N / 2)
class_2 = np.random.multivariate_normal(mean2, cov2, N / 2)
data = np.concatenate((class_1, class_2), axis=0)
gt = np.squeeze(np.concatenate((np.zeros((1, N / 2), dtype = int), np.ones((1, N / 2), dtype = int)), axis = 1))
fig = plt.figure()
# Find the indexes of the first cluster
plt.plot(class_1[:, 0], class_1[:, 1], 'xb', label='Cluster #1')
plt.plot(class_2[:, 0], class_2[:, 1], 'xr', label='Cluster #2')
plt.legend()
# Show the figure
plt.show()
"""
Explanation: Assuming the following generated points:
Two classes with respective labels 0 and 1,
Class #1 follows with labels 0 a multivariate normal distribution with:
$$\mu_1 = \left[ 1, 1 \right]$$
$$\Sigma_1 = \left[ \begin{matrix} 1 && 0 \ 0 && 1 \end{matrix} \right]$$
Class #2 with labels 1 follows a multivariate normal distribution with:
$$\mu_2 = \left[ -1, -1 \right]$$
$$\Sigma_2 = \left[ \begin{matrix} 1 && 0 \ 0 && 1 \end{matrix} \right]$$
End of explanation
"""
# Define the number of clusters k
k = 2
# Define the parameters of k-means
### use init 'random' and only one try
k_means_cluster = KMeans(k, n_init=1,init='random') # ,n_jobs=4
# number of iterations
# Run k-means
### Use the function predict()
k_means_cluster.fit(data)
labels = k_means_cluster.predict(data)
# Get the centers of k-means
centers_k_means = k_means_cluster.cluster_centers_
print 'The centers found by k-means are \n {}'.format(centers_k_means)
"""
Explanation: Clustering via k-means
(a) Use k-means clustering method to find the cluster centers for $k=2$. To do so, you will:
Call the constructor KMeans(),
Use the function predict of the object build in order to apply the clustering,
Get the centers of each cluster,
Display these centers.
End of explanation
"""
plt.plot(centers_k_means[0][0],centers_k_means[0][1],'ob', label='Centers of clusters 1')
plt.plot(centers_k_means[1][0],centers_k_means[1][1],'or', label='Centers of clusters 2')
plt.legend()
# Show the figure
plt.show()
"""
Explanation: (b) Plot the cluster centers and the data labelled by the k-means fitting.
End of explanation
"""
# Compute the misclassification rate
def compute_error_rate(k_means_labels, gt_labels):
### Use the function nonzero()
return float(np.size(np.nonzero(np.squeeze(k_means_labels != gt_labels)))) / float(np.size(gt_labels)) * 100.
"""
Explanation: (c) Complete the following function to compute the misclassification rate.
End of explanation
"""
# Show the misclassification rate
print 'The error rate is {} %'.format(compute_error_rate(k_means_cluster.labels_, gt))
# Plot the misclassified samples
# Find the samples
idx_wellclass = np.ravel(np.nonzero(np.squeeze(k_means_cluster.labels_==gt)))
idx_misclass = np.ravel(np.nonzero(np.squeeze(k_means_cluster.labels_!=gt)))
# Maybe we have to swap the cluster
if (np.size(idx_misclass) > np.size(idx_wellclass)):
tmp = idx_wellclass[:]
idx_wellclass = idx_misclass[:]
idx_misclass = tmp[:]
del tmp
# Get the data
data_wellclass = data[idx_wellclass,:]
data_misclass = data[idx_misclass,:]
# Make the plot
fig = plt.figure()
# Find the indexes of the first cluster
legend_tptn = plt.plot(data_wellclass[:,0],data_wellclass[:,1],'+r')
legend_fpfn = plt.plot(data_misclass[:,0],data_misclass[:,1],'ob')
plt.legend([legend_tptn[0], legend_fpfn[0]], ["TP & TN", "FP & FN"])
# Show the figure
plt.show()
"""
Explanation: (d) What is the misclassification rate for the current fitting? Highlight inside a plot the elements which have been misclassified.
Hint: think about swapping the labels if the error rate is really high. The label assignment is performed in an unsupervised manner.
End of explanation
"""
# Define the number of repetitions
rep_t = 10
# Accumulate the error
acc_err = 0.
for rep in range(0, rep_t):
    # Run k-means fit() and predict()
    k_means_cluster.fit(data)
    k_means_cluster.predict(data)
    # Check the error (allowing for swapped cluster labels) and accumulate
    err = compute_error_rate(k_means_cluster.labels_, gt)
    acc_err += np.minimum(err, 100. - err)
    print 'The error rate is {} %'.format(np.minimum(err, 100. - err))
# Average the error
acc_err_avg = acc_err / rep_t
# Show the mean misclassification rate
print 'The mean error rate is {} %'.format(acc_err_avg)
"""
Explanation: (e) Repeat the k-means fitting 10 times and compute the mean error.
End of explanation
"""
# Define the number of clusters
c = 2
# Exponentiation parameter
m = 2.
# Run the fuzzy c-means - skfuzzy expects the data transposed (features x samples)
cntr, U, u0, d, jm, n_iter, fpc = fuzz.cluster.cmeans(np.transpose(data), c, m, error=0.005, maxiter=1000)
"""
Explanation: Clustering via fuzzy c-means
(a) Use fuzzy c-means clustering method to find the cluster centers for $c=2$. Check the following link for an example:
https://github.com/scikit-fuzzy/scikit-fuzzy/blob/master/skfuzzy/cluster/tests/test_cmeans.py
End of explanation
"""
# Plot a representation depending of the membership
### Create a mesh grid using np.grid()
grid_x, grid_y = np.mgrid[-4.:5.:200j, -4.:5.:200j]
### Use the function griddata() in order to create the surface based on the membership degree
grid_z0 = griddata(data, U[0], (grid_x, grid_y), method='cubic')
grid_z1 = griddata(data, U[1], (grid_x, grid_y), method='cubic')
fig = plt.figure()
plt.imshow(grid_z0.T, extent=(-4,5,-4,5), origin='lower')
plt.title('Membership to belong to the class #1')
plt.figure()
plt.imshow(grid_z1.T, extent=(-4,5,-4,5), origin='lower')
plt.title('Membership to belong to the class #2')
plt.show()
"""
Explanation: (b) Plot the cluster centers and the membership degree of the data to each one of the two clusters.
End of explanation
"""
...
"""
Explanation: (c) Plot in each data point to the most probable cluster to which it will belongs. Plot also the centroids.
End of explanation
"""
...
"""
Explanation: (d) Compute the misclassification error rate.
End of explanation
"""
# Import scikit-image for input-output manipulation
from skimage import io
from skimage import img_as_float
"""
Explanation: Retina segmentation using k-means and fuzzy c-means
End of explanation
"""
# Number of classes
nb_classes = 4
"""
Explanation: Assuming that the image can be clustered with four classes:
One cluster with artefacts at the edges of the image
One cluster with the optic nerve and other artefacts
One cluster with noise across the image
One cluster with the vessels
End of explanation
"""
# Load the images
# Use the function img_as_float()
# Use the function io.imread()
retina_im = img_as_float(io.imread('data/retina.jpg'))
# Show the results
fig, ax = plt.subplots()
ax.imshow(retina_im)
ax.set_title('Original image')
ax.axis('off')
plt.show()
"""
Explanation: (a) From the data folder, load the retina image retina.jpg. Convert it into float type.
End of explanation
"""
# Import morpho element
from skimage.morphology import square
# Import the median filtering
from skimage.filter.rank import median
# Function to pre process the images
def PreProcessing(rgb_image):
output = np.zeros(np.shape(rgb_image))
# Obtain the background image for each channel through median filtering
background_im_r = ...
background_im_g = ...
background_im_b = ...
# Remove the background to the original channels
output[:, :, 0] = ...
output[:, :, 1] = ...
output[:, :, 2] = ...
# Normalise the image
output[:, :, 0] = normalise_im(...)
output[:, :, 1] = normalise_im(...)
output[:, :, 2] = normalise_im(...)
return output
# Function to apply min-max normalisation
def normalise_im(im_2d):
return ...
"""
Explanation: (b) Complete the following Python function.
Compute a background image using a median filtering for each colour channel with a square kernel of size 30.
Subtract each background channel to the original channel.
Normalise each channel using min-max normalisation.
End of explanation
"""
...
"""
Explanation: (c) Apply the pre-processing to the retina image and plot the resulting image.
End of explanation
"""
# Extraction of the data
### You can use np.reshape()
data = ...
"""
Explanation: (d) Extract the characteristic features from the pre-processed image.
End of explanation
"""
...
"""
Explanation: (e) Run k-means with 10 iterations and k-means++ as initialisation of the cluster.
End of explanation
"""
...
"""
Explanation: (f) Plot each cluster to observe the segmentation.
End of explanation
"""
...
"""
Explanation: (g) Run fuzzy c-means.
End of explanation
"""
...
"""
Explanation: (h) Plot the degree of membership for each cluster to depict the segmentation.
End of explanation
"""
|
prasants/pyds
|
04.String_me_along.ipynb
|
mit
|
print("Hello World!")
"""
Explanation: Table of Contents
<p><div class="lev1 toc-item"><a href="#Strings" data-toc-modified-id="Strings-1"><span class="toc-item-num">1 </span>Strings</a></div><div class="lev2 toc-item"><a href="#Switching-between-Single,-Double-and-Triple-Quotes" data-toc-modified-id="Switching-between-Single,-Double-and-Triple-Quotes-11"><span class="toc-item-num">1.1 </span>Switching between Single, Double and Triple Quotes</a></div><div class="lev2 toc-item"><a href="#"Raw"-Strings" data-toc-modified-id=""Raw"-Strings-12"><span class="toc-item-num">1.2 </span>"Raw" Strings</a></div><div class="lev2 toc-item"><a href="#String-Substitution" data-toc-modified-id="String-Substitution-13"><span class="toc-item-num">1.3 </span>String Substitution</a></div><div class="lev1 toc-item"><a href="#Indexing-Strings" data-toc-modified-id="Indexing-Strings-2"><span class="toc-item-num">2 </span>Indexing Strings</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-201"><span class="toc-item-num">2.0.1 </span>Exercise</a></div><div class="lev1 toc-item"><a href="#String-Operations" data-toc-modified-id="String-Operations-3"><span class="toc-item-num">3 </span>String Operations</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-31"><span class="toc-item-num">3.1 </span>Exercise</a></div><div class="lev2 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-32"><span class="toc-item-num">3.2 </span>Exercise</a></div><div class="lev1 toc-item"><a href="#Splitting-Strings" data-toc-modified-id="Splitting-Strings-4"><span class="toc-item-num">4 </span>Splitting Strings</a></div><div class="lev3 toc-item"><a href="#More-Splits" data-toc-modified-id="More-Splits-401"><span class="toc-item-num">4.0.1 </span>More Splits</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-402"><span class="toc-item-num">4.0.2 </span>Exercise</a></div><div class="lev3 toc-item"><a href="#Exercise" data-toc-modified-id="Exercise-403"><span class="toc-item-num">4.0.3 </span>Exercise</a></div><div class="lev3 toc-item"><a href="#Solution-for-Email-Splitting-Problem" data-toc-modified-id="Solution-for-Email-Splitting-Problem-404"><span class="toc-item-num">4.0.4 </span>Solution for Email Splitting Problem</a></div>
# Strings
We have encountered strings already. Remember our "Hello World!" program?
End of explanation
"""
str01 = "Hello World!"
str02 = "22"
str03 = "This is so c00l!"
print(str01, str02, str03)
print(type(str02))
"""
Explanation: "Hello World!" was the string. Absolutely any character can be a string, including numbers, as long as they are within single, double, or triple quotation marks.
End of explanation
"""
children = 5
type(children)
new_children = float(children)
type(new_children)
"""
Explanation: Now remember how we converted an int to a float, and vice versa?
End of explanation
"""
stringy_kids = str(children)
type(stringy_kids)
"""
Explanation: You can convert them to strings too!
End of explanation
"""
text01 = "The name's Bond, James Bond."
print(text01)
text02 = """The name's Bond, James Bond."""
text03 = """Text here"""
text04 = """Here we go!"""
print(text02)
print(text03)
print(text04)
text05 = 'The name's Bond, James Bond.'
print(text05)
"""
Explanation: Switching between Single, Double and Triple Quotes
Text = The name's Bond, James Bond.
Note that the last assignment to text05 fails with a SyntaxError, because the apostrophe in "name's" closes the single-quoted string early. The next cell fixes this with an escape character.
End of explanation
"""
text05 = 'The name\'s Bond, James Bond.'
print(text05)
text06 = 'The name\'s Bond, James Bond.\nYes really!'
print(text06)
text07 = 'The name\'s Bond, Tabbed Bond.\tYes really!'
print(text07)
text08 = "The name\'s Bond, Tabbed Bond. Yes really!"
print(text08)
"""
Explanation: Say hello to escape characters!
End of explanation
"""
print("Please Access C:\home\test instead of C:\\games\n\hello")
print(r"Please Access C:\home\test instead of C:\\games\n\hello")
"""
Explanation: "Raw" Strings
What if a character like a backslash (\) is part of a string?
End of explanation
"""
age = 42
print("I am {} years old.".format(age))
print("I am {} years old".format(age))
print("Hello World!")
"""
Explanation: String Substitution
End of explanation
"""
fighter1 = input()
fighter2 = input()
type(fighter1)
print("We went to see a bout between {} and {}. {} totally kicked ass!".format(fighter1,fighter2, fighter1))
"""
Explanation: Did you see above how we used .format to substitute a string? This is so useful when writing more complex functions with strings. Let's see another example.
End of explanation
"""
show = 'Monty Python'
show[0]
show[-1]
show[0:4]
show[0:5]
show[-7:]
show[-3:]
show[1:]
"""
Explanation: Indexing Strings
This is a very important topic! <br>
Indexing and slicing form the basis of a lot of data manipulation techniques, so make sure you spend a lot of time practicing this.
Example 1: Indexing always begins at 0. Do you see why the answer here is "k"?
<img src="images/str/str01.png">
Example 2: Indexing can have a lower bound, and an upper bound. The lower bound is always inclusive, and the upper bound is always exclusive.
<img src="images/str/str02.png">
Example 3: Similarly, -1 would imply the very last character. -2 would imply the 2nd last character, and so on.
<img src="images/str/str03.png">
Example 4: Again, the upper bound is exclusive. Always!
<img src="images/str/str04.png">
Example 5: You can skip strings too. The format? [Lower Bound : Upper Bound : Character-to-Skip]
<img src="images/str/str05.png">
Example 6 : You can also index all characters.
<img src="images/str/str06.png">
Example 7 : Index all characters, and then skip every 2nd (or 3rd or 4th, as you wish).
<img src="images/str/str07.png">
Example 8 : Reversing strings
<img src="images/str/str08.png">
End of explanation
"""
# Enter your code below:
"""
Explanation: Exercise
create a string with the value 'Welcome to the Jungle'
print the word 'Welcome' using indexing
print the word 'Jungle' using negative indexing
print the phrase 'come to the Jungle' using slicing
End of explanation
"""
sherlocked = "To Sherlock Holmes she is always the woman."
print(sherlocked)
print(len(sherlocked))
print(sherlocked.upper())
print(sherlocked.lower())
sherlocked.find("she")
"""
Explanation: String Operations
+ : concatenate two (or more) strings
len(string): find the number of characters in a string
string.upper(): returns an uppercase version of a string
string.lower(): returns a lowercase version of a string
haystack.find(needle): searches haystack for needle and returns the position of the first occurrence, indexed from 0. Returns -1 if not found
string_1.count(string_2): counts the number of occurrences of one string in another.
haystack.startswith(needle): does the haystack string start with the needle string?
haystack.endswith(needle): does the haystack string end with the needle string?
string_1.split(string_2): split the first string at every occurrence of the second string. Outputs a list (see below).
==: are the two operand strings the same?
string.strip(): remove any whitespace from the left or right of the string, including newlines.
Read more about string operations here: https://docs.python.org/3/library/string.html
End of explanation
"""
pronoun = sherlocked.find("she")
print(pronoun)
# Where can we find the first occurrence of 'she'?
print("The word 'she' first appears at index", pronoun)
watson = "In the year 1878 I took my degree of Doctor of Medicine of the University of London, and proceeded to Netley to go through the course prescribed for surgeons in the army. Having completed my studies there, I was duly attached to the Fifth Northumberland Fusiliers as Assistant Surgeon. The regiment was stationed in India at the time, and before I could join it, the second Afghan war had broken out. On landing at Bombay, I learned that my corps had advanced through the passes, and was already deep in the enemy’s country. I followed, however, with many other officers who were in the same situation as myself, and succeeded in reaching Candahar in safety, where I found my regiment, and at once entered upon my new duties."
print(watson)
first_appearance = watson.find("my")
first_appearance
second_appearance = watson.find("my", first_appearance + 1)
second_appearance
watson.count("on")
"""
Explanation: Like most things in Python, we can assign that to a variable
End of explanation
"""
print("A Study in Scarlet".split(" "))
watson.count("me")
watson.startswith("Sherlock")
watson.endswith("duties.")
watson.split("and")
watson2 = " Hello "
watson2.strip()
watson3 = watson.replace(",","")
watson3 = watson3.replace(".", "")
watson3
watson3.split(" ")
"""
Explanation: Exercise
Find out how many times the word "the" occurs.
Exercise
Find the midpoint of the passage. Is there a difference in the occurrence of the word 'the' in the first half, versus the second half?
(Hint: The midpoint would be int(len(watson)/2). Why int? Because practically speaking, 104.5 being the midpoint is meaningless. Converting to int means you have a whole number.)
Splitting Strings
End of explanation
"""
ice_cream = "chocolate vanilla banana caramel"
ice_cream = ice_cream.split(" ")
print(ice_cream)
ice_cream[0]
ice_cream[-1]
ice_cream[1]
len(ice_cream)
"""
Explanation: More Splits
End of explanation
"""
# gavin@hooley.com
"""
Explanation: Exercise
Create a program that takes an email address as input and prints the username and domain as outputs.
Example: gavin@hooley.com should return
* Username: gavin
* Domain: hooley.com
Hint: Use the .split method, then use indexing. This is not something we have covered in great detail, but think of it as a challenge. The answer is provided at the end of this notebook.
End of explanation
"""
passage = """During the first week or so we had no callers, and I had begun to think that my companion was as friendless a man as I was myself. Presently, however, I found that he had many acquaintances, and those in the most different classes of society. There was one little sallow rat-faced, dark-eyed fellow who was introduced to me as Mr. Lestrade, and who came three or four times in a single week. One morning a young girl called, fashionably dressed, and stayed for half an hour or more. The same afternoon brought a grey-headed, seedy visitor, looking like a Jew pedlar, who appeared to me to be much excited, and who was closely followed by a slip-shod elderly woman. On another occasion an old white-haired gentleman had an interview with my companion; and on another a railway porter in his velveteen uniform. When any of these nondescript individuals put in an appearance, Sherlock Holmes used to beg for the use of the sitting-room, and I would retire to my bed-room. He always apologized to me for putting me to this inconvenience. “I have to use this room as a place of business,” he said, “and these people are my clients.” Again I had an opportunity of asking him a point blank question, and again my delicacy prevented me from forcing another man to confide in me. I imagined at the time that he had some strong reason for not alluding to it, but he soon dispelled the idea by coming round to the subject of his own accord.
It was upon the 4th of March, as I have good reason to remember, that I rose somewhat earlier than usual, and found that Sherlock Holmes had not yet finished his breakfast. The landlady had become so accustomed to my late habits that my place had not been laid nor my coffee prepared. With the unreasonable petulance of mankind I rang the bell and gave a curt intimation that I was ready. Then I picked up a magazine from the table and attempted to while away the time with it, while my companion munched silently at his toast. One of the articles had a pencil mark at the heading, and I naturally began to run my eye through it.
Its somewhat ambitious title was “The Book of Life,” and it attempted to show how much an observant man might learn by an accurate and systematic examination of all that came in his way. It struck me as being a remarkable mixture of shrewdness and of absurdity. The reasoning was close and intense, but the deductions appeared to me to be far-fetched and exaggerated. The writer claimed by a momentary expression, a twitch of a muscle or a glance of an eye, to fathom a man’s inmost thoughts. Deceit, according to him, was an impossibility in the case of one trained to observation and analysis. His conclusions were as infallible as so many propositions of Euclid. So startling would his results appear to the uninitiated that until they learned the processes by which he had arrived at them they might well consider him as a necromancer.
“From a drop of water,” said the writer, “a logician could infer the possibility of an Atlantic or a Niagara without having seen or heard of one or the other. So all life is a great chain, the nature of which is known whenever we are shown a single link of it. Like all other arts, the Science of Deduction and Analysis is one which can only be acquired by long and patient study nor is life long enough to allow any mortal to attain the highest possible perfection in it. Before turning to those moral and mental aspects of the matter which present the greatest difficulties, let the enquirer begin by mastering more elementary problems. Let him, on meeting a fellow-mortal, learn at a glance to distinguish the history of the man, and the trade or profession to which he belongs. Puerile as such an exercise may seem, it sharpens the faculties of observation, and teaches one where to look and what to look for. By a man’s finger nails, by his coat-sleeve, by his boot, by his trouser knees, by the callosities of his forefinger and thumb, by his expression, by his shirt cuffs—by each of these things a man’s calling is plainly revealed. That all united should fail to enlighten the competent enquirer in any case is almost inconceivable.”
“What ineffable twaddle!” I cried, slapping the magazine down on the table, “I never read such rubbish in my life.”
“What is it?” asked Sherlock Holmes.
“Why, this article,” I said, pointing at it with my egg spoon as I sat down to my breakfast. “I see that you have read it since you have marked it. I don’t deny that it is smartly written. It irritates me though. It is evidently the theory of some arm-chair lounger who evolves all these neat little paradoxes in the seclusion of his own study. It is not practical. I should like to see him clapped down in a third class carriage on the Underground, and asked to give the trades of all his fellow-travellers. I would lay a thousand to one against him.”
“You would lose your money,” Sherlock Holmes remarked calmly. “As for the article I wrote it myself.”
“You!”
“Yes, I have a turn both for observation and for deduction. The theories which I have expressed there, and which appear to you to be so chimerical are really extremely practical—so practical that I depend upon them for my bread and cheese.”
“And how?” I asked involuntarily.
“Well, I have a trade of my own. I suppose I am the only one in the world. I’m a consulting detective, if you can understand what that is. Here in London we have lots of Government detectives and lots of private ones. When these fellows are at fault they come to me, and I manage to put them on the right scent. They lay all the evidence before me, and I am generally able, by the help of my knowledge of the history of crime, to set them straight. There is a strong family resemblance about misdeeds, and if you have all the details of a thousand at your finger ends, it is odd if you can’t unravel the thousand and first. Lestrade is a well-known detective. He got himself into a fog recently over a forgery case, and that was what brought him here.”
“And these other people?”
“They are mostly sent on by private inquiry agencies. They are all people who are in trouble about something, and want a little enlightening. I listen to their story, they listen to my comments, and then I pocket my fee.”
“But do you mean to say,” I said, “that without leaving your room you can unravel some knot which other men can make nothing of, although they have seen every detail for themselves?”
“Quite so. I have a kind of intuition that way. Now and again a case turns up which is a little more complex. Then I have to bustle about and see things with my own eyes. You see I have a lot of special knowledge which I apply to the problem, and which facilitates matters wonderfully. Those rules of deduction laid down in that article which aroused your scorn, are invaluable to me in practical work. Observation with me is second nature. You appeared to be surprised when I told you, on our first meeting, that you had come from Afghanistan.”
“You were told, no doubt.”
“Nothing of the sort. I knew you came from Afghanistan. From long habit the train of thoughts ran so swiftly through my mind, that I arrived at the conclusion without being conscious of intermediate steps. There were such steps, however. The train of reasoning ran, ‘Here is a gentleman of a medical type, but with the air of a military man. Clearly an army doctor, then. He has just come from the tropics, for his face is dark, and that is not the natural tint of his skin, for his wrists are fair. He has undergone hardship and sickness, as his haggard face says clearly. His left arm has been injured. He holds it in a stiff and unnatural manner. Where in the tropics could an English army doctor have seen much hardship and got his arm wounded? Clearly in Afghanistan.’ The whole train of thought did not occupy a second. I then remarked that you came from Afghanistan, and you were astonished.”
“It is simple enough as you explain it,” I said, smiling. “You remind me of Edgar Allen Poe’s Dupin. I had no idea that such individuals did exist outside of stories.”
Sherlock Holmes rose and lit his pipe. “No doubt you think that you are complimenting me in comparing me to Dupin,” he observed. “Now, in my opinion, Dupin was a very inferior fellow. That trick of his of breaking in on his friends’ thoughts with an apropos remark after a quarter of an hour’s silence is really very showy and superficial. He had some analytical genius, no doubt; but he was by no means such a phenomenon as Poe appeared to imagine.”
“Have you read Gaboriau’s works?” I asked. “Does Lecoq come up to your idea of a detective?”
Sherlock Holmes sniffed sardonically. “Lecoq was a miserable bungler,” he said, in an angry voice; “he had only one thing to recommend him, and that was his energy. That book made me positively ill. The question was how to identify an unknown prisoner. I could have done it in twenty-four hours. Lecoq took six months or so. It might be made a text-book for detectives to teach them what to avoid.”
I felt rather indignant at having two characters whom I had admired treated in this cavalier style. I walked over to the window, and stood looking out into the busy street. “This fellow may be very clever,” I said to myself, “but he is certainly very conceited.”
“There are no crimes and no criminals in these days,” he said, querulously. “What is the use of having brains in our profession. I know well that I have it in me to make my name famous. No man lives or has ever lived who has brought the same amount of study and of natural talent to the detection of crime which I have done. And what is the result? There is no crime to detect, or, at most, some bungling villainy with a motive so transparent that even a Scotland Yard official can see through it.”
I was still annoyed at his bumptious style of conversation. I thought it best to change the topic.
“I wonder what that fellow is looking for?” I asked, pointing to a stalwart, plainly-dressed individual who was walking slowly down the other side of the street, looking anxiously at the numbers. He had a large blue envelope in his hand, and was evidently the bearer of a message.
“You mean the retired sergeant of Marines,” said Sherlock Holmes.
“Brag and bounce!” thought I to myself. “He knows that I cannot verify his guess.”
The thought had hardly passed through my mind when the man whom we were watching caught sight of the number on our door, and ran rapidly across the roadway. We heard a loud knock, a deep voice below, and heavy steps ascending the stair.
“For Mr. Sherlock Holmes,” he said, stepping into the room and handing my friend the letter.
Here was an opportunity of taking the conceit out of him. He little thought of this when he made that random shot. “May I ask, my lad,” I said, in the blandest voice, “what your trade may be?”
“Commissionaire, sir,” he said, gruffly. “Uniform away for repairs.”
“And you were?” I asked, with a slightly malicious glance at my companion.
“A sergeant, sir, Royal Marine Light Infantry, sir. No answer? Right, sir.”
He clicked his heels together, raised his hand in a salute, and was gone."""
# Count Sherlock
# Count Lestrade
"""
Explanation: Exercise
How many times does the name "Sherlock" appear in the passage below?
How many times does the name "Lestrade" appear in the passage below?
End of explanation
"""
mail1 = "gavin@hooley.com".split("@")
mail1
mail1[0]
Username = mail1[0]
Domain = mail1[1]
print("Username:",Username, "\nDomain:",Domain)
# Solution for Sherlock and Lestrade
print("Number of times 'Sherlock' appears in passage:", passage.count("Sherlock"))
print("Number of times 'Lestrade' appears in passage:", passage.count("Lestrade"))
"""
Explanation: Solution for Email Splitting Problem
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.24/_downloads/09a8b0bb7a57481cdd1f7832f0291ee6/brain.ipynb
|
bsd-3-clause
|
# Author: Alex Rockhill <aprockhill@mailbox.org>
#
# License: BSD-3-Clause
"""
Explanation: Plotting with mne.viz.Brain
In this example, we'll show how to use :class:mne.viz.Brain.
End of explanation
"""
import os.path as op
import matplotlib.pyplot as plt
import mne
from mne.datasets import sample
print(__doc__)
data_path = sample.data_path()
subjects_dir = op.join(data_path, 'subjects')
sample_dir = op.join(data_path, 'MEG', 'sample')
"""
Explanation: Plot a brain
In this example we use the sample dataset, recorded from a subject presented with
auditory and visual stimuli, to demonstrate the functionality
of :class:mne.viz.Brain for plotting data on a brain.
End of explanation
"""
brain_kwargs = dict(alpha=0.1, background='white', cortex='low_contrast')
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
stc = mne.read_source_estimate(op.join(sample_dir, 'sample_audvis-meg'))
stc.crop(0.09, 0.1)
kwargs = dict(fmin=stc.data.min(), fmax=stc.data.max(), alpha=0.25,
smoothing_steps='nearest', time=stc.times)
brain.add_data(stc.lh_data, hemi='lh', vertices=stc.lh_vertno, **kwargs)
brain.add_data(stc.rh_data, hemi='rh', vertices=stc.rh_vertno, **kwargs)
"""
Explanation: Add source information
Plot source information.
End of explanation
"""
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
brain.show_view(azimuth=190, elevation=70, distance=350, focalpoint=(0, 0, 20))
"""
Explanation: Modify the view of the brain
You can adjust the view of the brain using the show_view method.
End of explanation
"""
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
brain.add_label('BA44', hemi='lh', color='green', borders=True)
brain.show_view(azimuth=190, elevation=70, distance=350, focalpoint=(0, 0, 20))
"""
Explanation: Highlight a region on the brain
It can be useful to highlight a region of the brain for analyses.
To highlight a region on the brain you can use the add_label method.
Labels are stored in the Freesurfer label directory from the recon-all
for that subject. Labels can also be made following the
Freesurfer instructions
<https://surfer.nmr.mgh.harvard.edu/fswiki/mri_vol2label>_
Here we will show Brodmann Area 44.
<div class="alert alert-info"><h4>Note</h4><p>The MNE sample dataset contains only a subselection of the
Freesurfer labels created during the ``recon-all``.</p></div>
End of explanation
"""
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
brain.add_head(alpha=0.5)
"""
Explanation: Include the head in the image
Add a head image using the add_head method.
End of explanation
"""
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
evoked = mne.read_evokeds(op.join(sample_dir, 'sample_audvis-ave.fif'))[0]
trans = mne.read_trans(op.join(sample_dir, 'sample_audvis_raw-trans.fif'))
brain.add_sensors(evoked.info, trans)
brain.show_view(distance=500) # move back to show sensors
"""
Explanation: Add sensors positions
To put into context the data that generated the source time course,
the sensor positions can be displayed as well.
End of explanation
"""
brain = mne.viz.Brain('sample', subjects_dir=subjects_dir, **brain_kwargs)
img = brain.screenshot()
fig, ax = plt.subplots()
ax.imshow(img)
ax.axis('off')
fig.suptitle('Brain')
"""
Explanation: Create a screenshot for exporting the brain image
For publication you may wish to take a static image of the brain;
for this, use the screenshot method.
End of explanation
"""
|
synthicity/activitysim
|
activitysim/examples/example_estimation/notebooks/15_non_mand_tour_freq.ipynb
|
agpl-3.0
|
import os
import larch # !conda install larch -c conda-forge # for estimation
import pandas as pd
"""
Explanation: Estimating Non-Mandatory Tour Frequency
This notebook illustrates how to re-estimate a single model component for ActivitySim. This process
includes running ActivitySim in estimation mode to read household travel survey files and write out
the estimation data bundles used in this notebook. To review how to do so, please visit the other
notebooks in this directory.
Load libraries
End of explanation
"""
os.chdir('test')
"""
Explanation: We'll work in our test directory, where ActivitySim has saved the estimation data bundles.
End of explanation
"""
modelname = "nonmand_tour_freq"
from activitysim.estimation.larch import component_model
model, data = component_model(modelname, return_data=True)
"""
Explanation: Load data and prep model for estimation
End of explanation
"""
type(model)
model.keys()
"""
Explanation: This component actually has a distinct choice model for each person type, so
instead of a single model there's a dict of models.
End of explanation
"""
data.coefficients['PTYPE_FULL']
"""
Explanation: Review data loaded from the EDB
We can review the loaded data as well; similarly, there is separate data
for each person type.
Coefficients
End of explanation
"""
data.spec['PTYPE_FULL']
"""
Explanation: Utility specification
End of explanation
"""
data.chooser_data['PTYPE_FULL']
"""
Explanation: Chooser data
End of explanation
"""
for k, m in model.items():
m.estimate(method='SLSQP')
"""
Explanation: Estimate
With the model set up for estimation, the next step is to estimate the model coefficients. Make sure to use a sufficiently large household sample and set of zones to avoid an over-specified model, which does not have a numerically stable likelihood-maximizing solution. Larch has built-in estimation methods including BHHH, and also offers access to more advanced general-purpose non-linear optimizers in the scipy package, including SLSQP, which allows for bounds and constraints on parameters. BHHH is the default and typically runs faster, but does not follow constraints on parameters.
End of explanation
"""
model['PTYPE_FULL'].parameter_summary()
"""
Explanation: Estimated coefficients
End of explanation
"""
from activitysim.estimation.larch import update_coefficients
for k, m in model.items():
result_dir = data.edb_directory/k/"estimated"
update_coefficients(
m, data.coefficients[k], result_dir,
output_file=f"{modelname}_{k}_coefficients_revised.csv",
);
"""
Explanation: Output Estimation Results
End of explanation
"""
for k, m in model.items():
result_dir = data.edb_directory/k/"estimated"
m.to_xlsx(
result_dir/f"{modelname}_{k}_model_estimation.xlsx",
data_statistics=False,
)
"""
Explanation: Write the model estimation report, including coefficient t-statistics and log likelihood
End of explanation
"""
result_dir = data.edb_directory/'PTYPE_FULL'/"estimated"
pd.read_csv(result_dir/f"{modelname}_PTYPE_FULL_coefficients_revised.csv")
"""
Explanation: Next Steps
The final step is to either manually or automatically copy the *_coefficients_revised.csv file to the configs folder, rename it to *_coefficients.csv, and run ActivitySim in simulation mode.
End of explanation
"""
|
yugangzhang/CHX_Pipelines
|
Working_Pipleines/XPCS_Single_2017_V8_debug.ipynb
|
bsd-3-clause
|
from chxanalys.chx_packages import *
%matplotlib notebook
plt.rcParams.update({'figure.max_open_warning': 0})
plt.rcParams.update({ 'image.origin': 'lower' })
plt.rcParams.update({ 'image.interpolation': 'none' })
import pickle as cpk
from chxanalys.chx_xpcs_xsvs_jupyter_V1 import *
Javascript( '''
var nb = IPython.notebook;
var kernel = IPython.notebook.kernel;
var command = "NFP = '" + nb.base_url + nb.notebook_path + "'";
kernel.execute(command);
''' )
#print( 'The current running pipeline is: %s' %NFP)
#%reset -f -s dhist in out array
"""
Explanation: XPCS&XSVS Pipeline for Single-(Gi)-SAXS Run
"This notebook corresponds to version {{ version }} of the pipeline tool: https://github.com/NSLS-II/pipelines"
This notebook begins with a raw time-series of images and ends with $g_2(t)$ for a range of $q$, fit to an exponential or stretched exponential, and a two-time correlation function.
Overview
Setup: load packages/setup path
Load Metadata & Image Data
Apply Mask
Clean Data: shutter open/bad frames
Get Q-Map
Get 1D curve
Define Q-ROI (qr, qz)
Check beam damage
One-time Correlation
Fitting
Two-time Correlation
The important scientific code is imported from the chxanalys and scikit-beam projects. Refer to chxanalys and scikit-beam for additional documentation and citation information.
DEV
V8: Update visibility error bar calculation using p_i = h_i/N +/- sqrt(h_i)/N
Update normalization in the g2 calculation using a 2D Savitzky-Golay (SG) smooth
CHX Olog Notebook
CHX Olog (https://logbook.nsls2.bnl.gov/11-ID/)
Setup
Import packages for I/O, visualization, and analysis.
End of explanation
"""
#scat_geometry = 'saxs' #support 'saxs', 'gi_saxs', 'ang_saxs' (for anisotropic SAXS or flow-XPCS)
scat_geometry = 'saxs'
qphi_analysis = False
#scat_geometry = 'ang_saxs' #support 'saxs', 'gi_saxs', 'ang_saxs' (for anisotropic SAXS or flow-XPCS)
#scat_geometry = 'gi_waxs' #support 'saxs', 'gi_saxs', 'ang_saxs' (for anisotropic SAXS or flow-XPCS)
# gi_waxs defines a simple box-shaped ROI
#scat_geometry = 'gi_saxs'
force_compress = False #True #force to compress data
bin_frame = False #generally make bin_frame as False
para_compress = True #parallel compress
run_fit_form = False #run fit form factor
run_waterfall = False #run waterfall analysis
run_profile_plot = False #run profile plot for gi-saxs
run_t_ROI_Inten = True #run ROI intensity as a function of time
run_get_mass_center = False # Analysis for mass center of reflective beam center
run_invariant_analysis = False
run_one_time = True #run one-time
#run_fit_g2 = True #run fit one-time, the default function is "stretched exponential"
fit_g2_func = 'stretched'
run_two_time = True #run two-time
run_four_time = True #True #False #run four-time
run_xsvs= False #False #run visibility analysis
att_pdf_report = True #attach the pdf report to CHX olog
qth_interest = 1 #the single qth of interest
use_sqnorm = True #if True, use sq to normalize intensity
use_SG = False #if True, use the Savitzky-Golay filter for <I(pix)>
use_imgsum_norm= True #if True use imgsum to normalize intensity for one-time calculation
pdf_version='_%s'%get_today_date() #for pdf report name
run_dose = True #run dose_depend analysis
if scat_geometry == 'gi_saxs':run_xsvs= False;use_sqnorm=False
if scat_geometry == 'gi_waxs':use_sqnorm = False
if scat_geometry != 'saxs':qphi_analysis = False;scat_geometry_ = scat_geometry
else:scat_geometry_ = ['','ang_'][qphi_analysis]+ scat_geometry
if scat_geometry != 'gi_saxs':run_profile_plot = False
#%run ~/chxanalys_link/chxanalys/chx_generic_functions.py
scat_geometry
taus=None;g2=None;tausb=None;g2b=None;g12b=None;taus4=None;g4=None;times_xsv=None;contrast_factorL=None; lag_steps = None
"""
Explanation: Control Runs Here
End of explanation
"""
CYCLE= '2017_3' #change cycle here
#CYCLE= '2017_2' #change cycle here
path = '/XF11ID/analysis/%s/masks/'%CYCLE
username = getpass.getuser()
#username = 'rmhanna'
username = 'hkoerner'
#username = 'rheadric'
data_dir0 = create_user_folder(CYCLE, username)
print( data_dir0 )
"""
Explanation: Make a directory for saving results
End of explanation
"""
# dynamic mask
fp = '/XF11ID/analysis/2017_3/masks/roi_mask_Nov17_Rings.pkl'
roi_mask,qval_dict = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print(fp)
# q map file
if scat_geometry =='gi_saxs':
# static mask
fp = data_dir0 + 'June_2017_Sam3_Graphene_no1_C60_200C_roi_static.pkl'
roi_masks,qval_dicts = cpk.load( open(fp, 'rb' ) ) #for load the saved roi data
print(fp)
fp = data_dir0 + 'June_2017_Sam3_Graphene_no1_C60_200C_gisaxs_qmap.pkl'
print(fp)
qr_map, qz_map, ticks, Qrs, Qzs, Qr, Qz, inc_x0,refl_x0, refl_y0 = cpk.load( open(fp, 'rb' ) )
#%run chxanalys_link/chxanalys/chx_generic_functions.py
"""
Explanation: Load ROI defined by "XPCS_Setup" Pipeline
End of explanation
"""
uid = '76f314' # (scan num: 9401) (Measurement: 750Hz 1k frames mbs =.05x.4 CoralPor )
uid = 'ec4b0c' # (scan num: 9402) (Measurement: 750Hz 1k frames mbs =.1x.4 CoralPor )
uid = '6d0761' #(scan num: 9403 (Measurement: 750Hz 5k frames 1000A "
uid = 'c298d2' #(scan num: 9404 (Measurement: 100Hz 5k T=.2 1000A
uid = 'f03425' #(scan num: 9405 (Measurement: 10Hz 5k T=.2 1000A
uid = 'b3ea84' #(scan num: 9406 (Measurement: 10Hz 5k T=.036 1000A "
uid = 'be6e4c' #(scan num: 9407 (Measurement: 10Hz 5k T=.036 1000E "
get_last_uids( -1 )
sud = get_sid_filenames(db[uid])
print ('scan_id, full-uid, data path are: %s--%s--%s'%(sud[0], sud[1], sud[2][0] ))
#start_time, stop_time = '2017-2-24 12:23:00', '2017-2-24 13:42:00'
#sids, uids, fuids = find_uids(start_time, stop_time)
data_dir = os.path.join(data_dir0, '%s/'%uid)
os.makedirs(data_dir, exist_ok=True)
print('Results from this analysis will be stashed in the directory %s' % data_dir)
uidstr = 'uid=%s'%uid
"""
Explanation: Load Metadata & Image Data
Change this line to give a uid
End of explanation
"""
def get_meta_data( uid,*argv,**kwargs ):
'''
Y.G. Dev Dec 8, 2016
Get metadata from a uid
Parameters:
uid: the unique data acquisition id
kwargs: overwrite the meta data, for example
get_meta_data( uid = uid, sample = 'test') --> will overwrite the meta's sample to test
return:
meta data of the uid: a dictionary
with keys:
detector
suid: the simple given uid
uid: full uid
filename: the full path of the data
start_time: the data acquisition starting time in a human readable manner
And all the input metadata
'''
import time
md ={}
md['detector'] = get_detector( db[uid ] )
md['suid'] = uid #short uid
md['filename'] = get_sid_filenames(db[uid])[2][0]
#print( md )
ev, = get_events(db[uid], [md['detector']], fill= False)
dec = list( ev['descriptor']['configuration'].keys() )[0]
for k,v in ev['descriptor']['configuration'][dec]['data'].items():
md[ k[len(dec)+1:] ]= v
print(k)
for k,v in ev['descriptor']['run_start'].items():
if k!= 'plan_args':
md[k]= v
print(k)
md['start_time'] = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime(md['time']))
md['stop_time'] = time.strftime('%Y-%m-%d %H:%M:%S', time.localtime( ev['time'] ))
md['img_shape'] = ev['descriptor']['data_keys'][md['detector']]['shape'][:2][::-1]
for k,v in kwargs.items():
md[k] =v
#print(k)
return md
md={}
md = get_meta_data( uid )
get_meta_data??
md
get_meta_data?
uid
ev, = get_events(db[uid], [md['detector']], fill= False)
ev
get_meta_data??
ev['descriptor']['configuration']['eiger4m_single']['data']
print(db[uid].stop)
uid
md
"""
Explanation: Don't Change these lines below here
get metadata
End of explanation
"""
imgs = load_data( uid, md['detector'], reverse= True )
md.update( imgs.md );Nimg = len(imgs);
#if 'number of images' not in list(md.keys()):
md['number of images'] = Nimg
pixel_mask = 1- np.int_( np.array( imgs.md['pixel_mask'], dtype= bool) )
print( 'The data are: %s' %imgs )
md['acquire period' ] = md['cam_acquire_period']
md['exposure time'] = md['cam_acquire_time']
print_dict( md, ['suid', 'number of images', 'uid', 'scan_id', 'start_time', 'stop_time', 'sample', 'Measurement',
'acquire period', 'exposure time',
'det_distance', 'beam_center_x', 'beam_center_y', ] )
"""
Explanation: get data
End of explanation
"""
if scat_geometry =='gi_saxs':
inc_x0 = md['beam_center_x']
inc_y0 = imgs[0].shape[0] - md['beam_center_y']
refl_x0 = 1541 #md['beam_center_x']
refl_y0 = 960 #imgs[0].shape[0] - 1758
print( "inc_x0, inc_y0, ref_x0,ref_y0 are: %s %s %s %s."%(inc_x0, inc_y0, refl_x0, refl_y0) )
else:
inc_x0 = imgs[0].shape[0] - md['beam_center_y']
inc_y0= md['beam_center_x']
dpix, lambda_, Ldet, exposuretime, timeperframe, center = check_lost_metadata(
md, Nimg, inc_x0 = inc_x0, inc_y0= inc_y0, pixelsize = 7.5*10**(-5) )
if scat_geometry =='gi_saxs':center=center[::-1]
setup_pargs=dict(uid=uidstr, dpix= dpix, Ldet=Ldet, lambda_= lambda_, exposuretime=exposuretime,
timeperframe=timeperframe, center=center, path= data_dir)
print_dict( setup_pargs )
setup_pargs
"""
Explanation: Overwrite Some Metadata if Wrong Input
Define incident beam center (also define reflection beam center for gisaxs)
End of explanation
"""
if scat_geometry == 'gi_saxs':
mask_path = '/XF11ID/analysis/2017_2/masks/'
mask_name = 'Jun4_2_GiSAXS.npy'
elif scat_geometry == 'saxs':
mask_path = '/XF11ID/analysis/2017_3/masks/'
mask_name = 'Nov17_SAXS.npy'
#mask_path = '/XF11ID/analysis/2017_2/masks/'
#mask_name = 'Jul26_SAXS.npy'
mask = load_mask(mask_path, mask_name, plot_ = False, image_name = uidstr + '_mask', reverse= True )
mask *= pixel_mask
show_img(mask,image_name = uidstr + '_mask', save=True, path=data_dir, aspect=1, center=center[::-1])
mask_load=mask.copy()
imgsa = apply_mask( imgs, mask )
"""
Explanation: Apply Mask
Load and plot the mask if it exists
otherwise create a mask using the Mask pipeline
Reverse the mask in the y-direction due to the coordinate convention difference between Python and the Eiger software
Reverse the images in the y-direction
Apply the mask
Change the line below to give the mask filename
End of explanation
"""
img_choice_N = 3
img_samp_index = random.sample( range(len(imgs)), img_choice_N)
avg_img = get_avg_img( imgsa, img_samp_index, plot_ = False, uid =uidstr)
if avg_img.max() == 0:
print('There are no photons recorded for this uid: %s'%uid)
print('The data analysis should be terminated! Please try another uid.')
#show_img( imgsa[1000], vmin=.1, vmax= 1e1, logs=True, aspect=1,
# image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap = cmap_albula )
show_img( imgs[10], vmin=.0, vmax= 1e1, logs=False, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap=cmap_albula,center=center[::-1] )
"""
Explanation: Check the average intensity of several frames
End of explanation
"""
compress=True
photon_occ = len( np.where(avg_img)[0] ) / ( imgsa[0].size)
#compress = photon_occ < .4 #if the photon occupation is low (< 0.4), do compress
print ("The non-zeros photon occupation is %s."%( photon_occ))
print("Will " + 'Always ' + ['NOT', 'DO'][compress] + " apply compress process.")
good_start = 5 #5 #make the good_start at least 0
bin_frame = False # True #generally make bin_frame as False
if bin_frame:
bin_frame_number=4
acquisition_period = md['acquire period']
timeperframe = acquisition_period * bin_frame_number
else:
bin_frame_number =1
import time
t0= time.time()
if bin_frame_number==1:
filename = '/XF11ID/analysis/Compressed_Data' +'/uid_%s.cmp'%md['uid']
else:
filename = '/XF11ID/analysis/Compressed_Data' +'/uid_%s_bined--%s.cmp'%(md['uid'],bin_frame_number)
mask, avg_img, imgsum, bad_frame_list = compress_eigerdata(imgs, mask, md, filename,
force_compress= force_compress, para_compress= para_compress, bad_pixel_threshold = 1e14,
bins=bin_frame_number, num_sub= 100, num_max_para_process= 500, with_pickle=True )
min_inten = 10
good_start = max(good_start, np.where( np.array(imgsum) > min_inten )[0][0] )
print ('The good_start frame number is: %s '%good_start)
FD = Multifile(filename, good_start, len(imgs)//bin_frame_number)
#FD = Multifile(filename, good_start, 100)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
plot1D( y = imgsum[ np.array( [i for i in np.arange(good_start, len(imgsum)) if i not in bad_frame_list])],
title =uidstr + '_imgsum', xlabel='Frame', ylabel='Total_Intensity', legend='imgsum' )
Nimg = Nimg/bin_frame_number
run_time(t0)
show_img( avg_img, vmin=.0001, vmax= 5e4, logs=True, aspect=1, #save_format='tif',
image_name= uidstr + '_img_avg', save=True, path=data_dir, cmap = cmap_albula, center=center[::-1] )
"""
Explanation: Compress Data
Generate compressed data with the given filename
Replace the old mask with a new mask that has hot pixels removed
Compute the average image
Compute the sum of each image
Find the bad_frame_list where the image sum is above bad_pixel_threshold
Check the shutter-open frame to get a good time series
End of explanation
"""
good_end= None # 2000
if good_end is not None:
FD = Multifile(filename, good_start, min( len(imgs)//bin_frame_number, good_end) )
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
re_define_good_start =False
if re_define_good_start:
good_start = 10
good_end = 19700
FD = Multifile(filename, good_start, good_end)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( FD.beg, FD.end)
bad_frame_list = get_bad_frame_list( imgsum, fit='both', plot=True,polyfit_order = 30,
scale= 3.5, good_start = good_start, good_end=good_end, uid= uidstr, path=data_dir)
print( 'The bad frame list length is: %s'%len(bad_frame_list) )
"""
Explanation: Get the bad frame list by a polynomial fit
End of explanation
"""
imgsum_y = imgsum[ np.array( [i for i in np.arange( len(imgsum)) if i not in bad_frame_list])]
imgsum_x = np.arange( len( imgsum_y))
save_lists( [imgsum_x, imgsum_y], label=['Frame', 'Total_Intensity'],
filename=uidstr + '_img_sum_t', path= data_dir )
"""
Explanation: Create a new mask by masking the bad pixels and get a new avg_img
End of explanation
"""
plot1D( y = imgsum_y, title = uidstr + '_img_sum_t', xlabel='Frame', c='b',
ylabel='Total_Intensity', legend='imgsum', save=True, path=data_dir)
"""
Explanation: Plot the total intensity of each frame versus time
End of explanation
"""
if scat_geometry =='saxs':
## Get circular average; plot and save q ~ I(q)
hmask = create_hot_pixel_mask( avg_img, threshold = 1e2, center=center, center_radius= 100)
mask = mask * hmask
qp_saxs, iq_saxs, q_saxs = get_circular_average( avg_img, mask * hmask, pargs=setup_pargs )
plot_circular_average( qp_saxs, iq_saxs, q_saxs, pargs=setup_pargs,
xlim=[q_saxs.min(), q_saxs.max()*1.0], ylim = [iq_saxs.min(), iq_saxs.max()] )
#mask =np.array( mask * hmask, dtype=bool)
#%run ~/chxanalys_link/chxanalys/chx_compress_analysis.py
if scat_geometry =='saxs':
if run_fit_form:
form_res = fit_form_factor( q_saxs,iq_saxs, guess_values={'radius': 2500, 'sigma':0.05,
'delta_rho':1E-10 }, fit_range=[0.0001, 0.015], fit_variables={'radius': T, 'sigma':T,
'delta_rho':T}, res_pargs=setup_pargs, xlim=[0.0001, 0.015])
qr = np.array( [qval_dict[k][0] for k in sorted( qval_dict.keys())] )
print(len(qr))
show_ROI_on_image( avg_img, roi_mask, center, label_on = False, rwidth = 840, alpha=.9,
save=True, path=data_dir, uid=uidstr, vmin= 1e-3,
vmax= 1e3, #np.max(avg_img),
aspect=1,
show_roi_edge=True,
show_ang_cor = True)
plot_qIq_with_ROI( q_saxs, iq_saxs, np.unique(qr), logs=True, uid=uidstr, xlim=[0.0001,0.08],
ylim = [iq_saxs.min(), iq_saxs.max()*2], save=True, path=data_dir)
"""
Explanation: Static Analysis
SAXS Scattering Geometry
End of explanation
"""
if scat_geometry =='saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( Nimg, slice_num= 4, slice_width= 1, edges = None )
time_edge = np.array( time_edge ) + good_start
#print( time_edge )
qpt, iqst, qt = get_t_iqc( FD, time_edge, mask, pargs=setup_pargs, nx=1500, show_progress= False )
plot_t_iqc( qt, iqst, time_edge, pargs=setup_pargs, xlim=[qt.min(), qt.max()],
ylim = [iqst.min(), iqst.max()], save=True )
if run_invariant_analysis:
if scat_geometry =='saxs':
invariant = get_iq_invariant( qt, iqst )
time_stamp = time_edge[:,0] * timeperframe
if scat_geometry =='saxs':
plot_q2_iq( qt, iqst, time_stamp,pargs=setup_pargs,ylim=[ -0.001, 0.01] ,
xlim=[0.007,0.2],legend_size= 6 )
if scat_geometry =='saxs':
plot_time_iq_invariant( time_stamp, invariant, pargs=setup_pargs, )
if False:
iq_int = np.zeros( len(iqst) )
fig, ax = plt.subplots()
q = qt
for i in range(iqst.shape[0]):
yi = iqst[i] * q**2
iq_int[i] = yi.sum()
time_labeli = 'time_%s s'%( round( time_edge[i][0] * timeperframe, 3) )
plot1D( x = q, y = yi, legend= time_labeli, xlabel='Q (A-1)', ylabel='I(q)*Q^2', title='I(q)*Q^2 ~ time',
m=markers[i], c = colors[i], ax=ax, ylim=[ -0.001, 0.01] , xlim=[0.007,0.2],
legend_size=4)
#print( iq_int )
"""
Explanation: Time Dependent I(q) Analysis
End of explanation
"""
if scat_geometry =='gi_saxs':
plot_qzr_map( qr_map, qz_map, inc_x0, ticks = ticks, data= avg_img, uid= uidstr, path = data_dir )
"""
Explanation: GiSAXS Scattering Geometry
End of explanation
"""
if scat_geometry =='gi_saxs':
#roi_masks, qval_dicts = get_gisaxs_roi( Qrs, Qzs, qr_map, qz_map, mask= mask )
show_qzr_roi( avg_img, roi_masks, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
if scat_geometry =='gi_saxs':
Nimg = FD.end - FD.beg
time_edge = create_time_slice( N= Nimg, slice_num= 2, slice_width= 2, edges = None )
time_edge = np.array( time_edge ) + good_start
print( time_edge )
qrt_pds = get_t_qrc( FD, time_edge, Qrs, Qzs, qr_map, qz_map, mask=mask, path=data_dir, uid = uidstr )
plot_qrt_pds( qrt_pds, time_edge, qz_index = 0, uid = uidstr, path = data_dir )
"""
Explanation: Static Analysis for gisaxs
End of explanation
"""
if scat_geometry =='gi_saxs':
if run_profile_plot:
xcorners= [ 1100, 1250, 1250, 1100 ]
ycorners= [ 850, 850, 950, 950 ]
waterfall_roi_size = [ xcorners[1] - xcorners[0], ycorners[2] - ycorners[1] ]
waterfall_roi = create_rectangle_mask( avg_img, xcorners, ycorners )
#show_img( waterfall_roi * avg_img, aspect=1,vmin=.001, vmax=1, logs=True, )
wat = cal_waterfallc( FD, waterfall_roi, qindex= 1, bin_waterfall=True,
waterfall_roi_size = waterfall_roi_size,save =True, path=data_dir, uid=uidstr)
if scat_geometry =='gi_saxs':
if run_profile_plot:
plot_waterfallc( wat, qindex=1, aspect=None, vmin=1, vmax= np.max( wat), uid=uidstr, save =True,
path=data_dir, beg= FD.beg)
"""
Explanation: Make a Profile Plot
End of explanation
"""
if scat_geometry =='gi_saxs':
show_qzr_roi( avg_img, roi_mask, inc_x0, ticks[:4], alpha=0.5, save=True, path=data_dir, uid=uidstr )
## Get 1D Curve (Q||-intensity¶)
qr_1d_pds = cal_1d_qr( avg_img, Qr, Qz, qr_map, qz_map, inc_x0= None, mask=mask, setup_pargs=setup_pargs )
plot_qr_1d_with_ROI( qr_1d_pds, qr_center=np.unique( np.array(list( qval_dict.values() ) )[:,0] ),
loglog=False, save=True, uid=uidstr, path = data_dir)
"""
Explanation: Dynamic Analysis for gi_saxs
End of explanation
"""
if scat_geometry =='gi_waxs':
badpixel = np.where( avg_img[:600,:] >=300 )
roi_mask[badpixel] = 0
show_ROI_on_image( avg_img, roi_mask, label_on = True, alpha=.5,
save=True, path=data_dir, uid=uidstr, vmin=0.1, vmax=5)
"""
Explanation: GiWAXS Scattering Geometry
End of explanation
"""
qind, pixelist = roi.extract_label_indices(roi_mask)
noqs = len(np.unique(qind))
"""
Explanation: Extract the labeled array
End of explanation
"""
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
nopr
"""
Explanation: Number of pixels in each q box
End of explanation
"""
roi_inten = check_ROI_intensity( avg_img, roi_mask, ring_number= 2, uid =uidstr ) #roi starting from 1
"""
Explanation: Check one ROI intensity
End of explanation
"""
#run_waterfall = False
qth_interest = 5 #the second ring. #qth_interest starting from 1
if scat_geometry =='saxs' or scat_geometry =='gi_waxs':
if run_waterfall:
wat = cal_waterfallc( FD, roi_mask, qindex= qth_interest, save =True, path=data_dir, uid=uidstr)
plot_waterfallc( wat, qth_interest, aspect= None, vmin=1e-1, vmax= wat.max(), uid=uidstr, save =True,
path=data_dir, beg= FD.beg, cmap = cmap_vge )
ring_avg = None
if run_t_ROI_Inten:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, multi_cor=True )
plot_each_ring_mean_intensityc( times_roi, mean_int_sets, uid = uidstr, save=True, path=data_dir )
roi_avg = np.average( mean_int_sets, axis=0)
"""
Explanation: Do a waterfall analysis
End of explanation
"""
if run_get_mass_center:
cx, cy = get_mass_center_one_roi(FD, roi_mask, roi_ind=25)
if run_get_mass_center:
fig,ax=plt.subplots(2)
plot1D( cx, m='o', c='b',ax=ax[0], legend='mass center-refl_X',
ylim=[940, 960], ylabel='posX (pixel)')
plot1D( cy, m='s', c='r',ax=ax[1], legend='mass center-refl_Y',
ylim=[1540, 1544], xlabel='frames',ylabel='posY (pixel)')
"""
Explanation: Analysis of the mass center of the reflected beam
End of explanation
"""
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 200
FD = Multifile(filename, beg = good_start, end = 800) #end=1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
#%run /home/yuzhang/chxanalys_link/chxanalys/chx_generic_functions.py
if use_sqnorm:#for transmision SAXS
norm = get_pixelist_interp_iq( qp_saxs, iq_saxs, roi_mask, center)
print('Using circular average in the normalization of G2 for SAXS scattering.')
elif use_SG:#for Gi-SAXS or WAXS
avg_imgf = sgolay2d( avg_img, window_size= 11, order= 5) * mask
norm=np.ravel(avg_imgf)[pixelist]
print('Using smoothed image by SavitzkyGolay filter in the normalization of G2.')
else:
norm= None
print('Using simple (average) normalization of G2.')
if use_imgsum_norm:
imgsum_ = imgsum
else:
imgsum_ = None
import time
#show_img( FD.rdframe(10), label_array=roi_mask, aspect=1, center=center )
if run_one_time:
t0 = time.time()
g2, lag_steps = cal_g2p( FD, roi_mask, bad_frame_list,good_start, num_buf = 8, num_lev= None,
imgsum= imgsum_, norm=norm )
run_time(t0)
lag_steps = lag_steps[:g2.shape[0]]
if run_one_time:
taus = lag_steps * timeperframe
try:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:,0],
qz = np.array( list( qval_dict.values() ) )[:,1],
uid=uid_+'_g2.csv', path= data_dir, return_res=True )
except:
g2_pds = save_g2_general( g2, taus=taus,qr= np.array( list( qval_dict.values() ) )[:,0],
uid=uid_+'_g2.csv', path= data_dir, return_res=True )
#g2.shape
"""
Explanation: One time Correlation
Note: Enter the number of buffers for multi-tau one-time correlation.
The number of buffers has to be even. More details in https://github.com/scikit-beam/scikit-beam/blob/master/skbeam/core/correlation.py
If defining another good series
End of explanation
"""
if run_one_time:
g2_fit_result, taus_fit, g2_fit = get_g2_fit_general( g2, taus,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None,
fit_variables={'baseline':True, 'beta': True, 'alpha':True,'relaxation_rate':True,},
guess_values={'baseline':1.0,'beta': 0.1,'alpha':1.0,'relaxation_rate':0.0100,},
guess_limits = dict( baseline =[1, 1.8], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [0.00001, 5000]) ,)
g2_fit_paras = save_g2_fit_para_tocsv(g2_fit_result, filename= uid_ +'_g2_fit_paras.csv', path=data_dir )
print(scat_geometry_)
fit_g2_func
if run_one_time:
plot_g2_general( g2_dict={1:g2, 2:g2_fit}, taus_dict={1:taus, 2:taus_fit}, vlim=[0.95, 1.05],
qval_dict = qval_dict, fit_res= g2_fit_result, geometry= scat_geometry_,filename= uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_fit')
if run_one_time:
if False:
fs, fe = 0, 9
fs,fe=0, 12
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe],
geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2_fit_paras['relaxation_rate'][fs:fe], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ , path= data_dir )
else:
D0, qrate_fit_res = get_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'],
fit_range=[0, 26], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2_fit_paras['relaxation_rate'], qrate_fit_res,
geometry= scat_geometry_,uid=uid_ ,
show_fit=False, path= data_dir, plot_all_range=False)
#plot1D( x= qr, y=g2_fit_paras['beta'], ls='-', m = 'o', c='b', ylabel=r'$\beta$', xlabel=r'$Q( \AA^{-1} ) $' )
"""
Explanation: Fit g2
End of explanation
"""
define_good_series = False
#define_good_series = True
if define_good_series:
good_start = 5
FD = Multifile(filename, beg = good_start, end = 1000)
uid_ = uidstr + '_fra_%s_%s'%(FD.beg, FD.end)
print( uid_ )
#%run chxanalys_link/chxanalys/chx_generic_functions.py
data_pixel = None
if run_two_time:
data_pixel = Get_Pixel_Arrayc( FD, pixelist, norm= norm ).get_data()
import time
t0=time.time()
g12b=None
if run_two_time:
g12b = auto_two_Arrayc( data_pixel, roi_mask, index = None )
if run_dose:
np.save( data_dir + 'uid=%s_g12b'%uid, g12b)
run_time( t0 )
#%run chxanalys_link/chxanalys/Two_Time_Correlation_Function.py
if run_two_time:
show_C12(g12b, q_ind=3, qlabel=qval_dict,N1= FD.beg,logs=False, N2=min( FD.end,10000), vmin= 1.01, vmax=1.12,
timeperframe=timeperframe,save=True, path= data_dir, uid = uid_ ,cmap=cmap_albula)
multi_tau_steps = True
if run_two_time:
if lag_steps is None:
num_bufs=8
noframes = FD.end - FD.beg
num_levels = int(np.log( noframes/(num_bufs-1))/np.log(2) +1) +1
tot_channels, lag_steps, dict_lag = multi_tau_lags(num_levels, num_bufs)
max_taus= lag_steps.max()
#max_taus= lag_steps.max()
max_taus = Nimg
t0=time.time()
#tausb = np.arange( g2b.shape[0])[:max_taus] *timeperframe
if multi_tau_steps:
lag_steps_ = lag_steps[ lag_steps <= g12b.shape[0] ]
g2b = get_one_time_from_two_time(g12b)[lag_steps_]
tausb = lag_steps_ *timeperframe
else:
tausb = (np.arange( g12b.shape[0]) *timeperframe)[:-200]
g2b = (get_one_time_from_two_time(g12b))[:-200]
run_time(t0)
g2b_pds = save_g2_general( g2b, taus=tausb, qr= np.array( list( qval_dict.values() ) )[:,0],
qz=None, uid=uid_ +'_g2b.csv', path= data_dir, return_res=True )
if run_two_time:
g2b_fit_result, tausb_fit, g2b_fit = get_g2_fit_general( g2b, tausb,
function = fit_g2_func, vlim=[0.95, 1.05], fit_range= None,
fit_variables={'baseline':False, 'beta': True, 'alpha':False,'relaxation_rate':True},
guess_values={'baseline':1.0,'beta': 0.15,'alpha':1.0,'relaxation_rate':1,},
guess_limits = dict( baseline =[1, 1.8], alpha=[0, 2],
beta = [0, 1], relaxation_rate= [0.000001, 5000]) )
g2b_fit_paras = save_g2_fit_para_tocsv(g2b_fit_result, filename= uid_ +'_g2b_fit_paras.csv', path=data_dir )
#plot1D( x = tausb[1:], y =g2b[1:,0], ylim=[0.95, 1.46], xlim = [0.0001, 10], m='', c='r', ls = '-',
# logx=True, title='one_time_corelation', xlabel = r"$\tau $ $(s)$", )
if run_two_time:
plot_g2_general( g2_dict={1:g2b, 2:g2b_fit}, taus_dict={1:tausb, 2:tausb_fit}, vlim=[0.95, 1.05],
qval_dict=qval_dict, fit_res= g2b_fit_result, geometry=scat_geometry_,filename=uid_+'_g2',
path= data_dir, function= fit_g2_func, ylabel='g2', append_name= '_b_fit')
if run_two_time:
if False:
fs, fe = 0,9
fs, fe = 0,12
qval_dict_ = {k:qval_dict[k] for k in list(qval_dict.keys())[fs:fe] }
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict_, g2b_fit_paras['relaxation_rate'][fs:fe], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time' , path= data_dir )
else:
D0b, qrate_fit_resb = get_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'],
fit_range=[0, 10], geometry= scat_geometry_ )
plot_q_rate_fit_general( qval_dict, g2b_fit_paras['relaxation_rate'], qrate_fit_resb,
geometry= scat_geometry_,uid=uid_ +'_two_time', show_fit=False,path= data_dir, plot_all_range= True )
if run_two_time and run_one_time:
plot_g2_general( g2_dict={1:g2, 2:g2b}, taus_dict={1:taus, 2:tausb},vlim=[0.99, 1.007],
qval_dict=qval_dict, g2_labels=['from_one_time', 'from_two_time'],
geometry=scat_geometry_,filename=uid_+'_g2_two_g2', path= data_dir, ylabel='g2', )
"""
Explanation: Two-time Correlation
End of explanation
"""
if run_dose:
get_two_time_mulit_uids( [uid], roi_mask, norm= norm, bin_frame_number=1,
path= data_dir0, force_generate=False )
try:
print( md['transmission'] )
except:
md['transmission'] =1
exposuretime
if run_dose:
N = len(imgs)
print(N)
#exposure_dose = md['transmission'] * exposuretime* np.int_([ N/32, N/16, N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
exposure_dose = md['transmission'] * exposuretime* np.int_([ N/8, N/4 ,N/2, 3*N/4, N*0.99 ])
print( exposure_dose )
if run_dose:
taus_uids, g2_uids = get_series_one_time_mulit_uids( [ uid ], qval_dict, good_start=good_start,
path= data_dir0, exposure_dose = exposure_dose, num_bufs =8, save_g2= False,
dead_time = 0, trans = [ md['transmission'] ] )
if run_dose:
plot_dose_g2( taus_uids, g2_uids, ylim=[0.98, 1.2], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uid_,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
if run_dose:
qth_interest = 2
plot_dose_g2( taus_uids, g2_uids, qth_interest= qth_interest, ylim=[0.98, 1.25], vshift= 0.00,
qval_dict = qval_dict, fit_res= None, geometry= scat_geometry_,
filename= '%s_dose_analysis'%uidstr,
path= data_dir, function= None, ylabel='g2_Dose', g2_labels= None, append_name= '' )
"""
Explanation: Run dose-dependent analysis
End of explanation
"""
if run_four_time:
t0=time.time()
g4 = get_four_time_from_two_time(g12b, g2=g2b)[:max_taus]
run_time(t0)
if run_four_time:
taus4 = np.arange( g4.shape[0])*timeperframe
g4_pds = save_g2_general( g4, taus=taus4, qr=np.array( list( qval_dict.values() ) )[:,0],
qz=None, uid=uid_ +'_g4.csv', path= data_dir, return_res=True )
if run_four_time:
plot_g2_general( g2_dict={1:g4}, taus_dict={1:taus4},vlim=[0.95, 1.05], qval_dict=qval_dict, fit_res= None,
geometry=scat_geometry_,filename=uid_+'_g4',path= data_dir, ylabel='g4')
"""
Explanation: Four Time Correlation
End of explanation
"""
#run_xsvs =True
if run_xsvs:
max_cts = get_max_countc(FD, roi_mask )
#max_cts = 15 #for eiger 500 K
qind, pixelist = roi.extract_label_indices( roi_mask )
noqs = len( np.unique(qind) )
nopr = np.bincount(qind, minlength=(noqs+1))[1:]
#time_steps = np.array( utils.geometric_series(2, len(imgs) ) )
time_steps = [0,1] #only run the first two levels
num_times = len(time_steps)
times_xsvs = exposuretime + (2**( np.arange( len(time_steps) ) ) -1 ) * timeperframe
print( 'The max counts are: %s'%max_cts )
"""
Explanation: Speckle Visibility
End of explanation
"""
if run_xsvs:
if roi_avg is None:
times_roi, mean_int_sets = cal_each_ring_mean_intensityc(FD, roi_mask, timeperframe = None, )
roi_avg = np.average( mean_int_sets, axis=0)
t0=time.time()
spec_bins, spec_his, spec_std, spec_sum = xsvsp( FD, np.int_(roi_mask), norm=None,
max_cts=int(max_cts+2), bad_images=bad_frame_list, only_two_levels=True )
spec_kmean = np.array( [roi_avg * 2**j for j in range( spec_his.shape[0] )] )
run_time(t0)
spec_pds = save_bin_his_std( spec_bins, spec_his, spec_std, filename=uid_+'_spec_res.csv', path=data_dir )
"""
Explanation: Do histogram
End of explanation
"""
if run_xsvs:
ML_val, KL_val,K_ = get_xsvs_fit( spec_his, spec_sum, spec_kmean,
spec_std, max_bins=2, fit_range=[1,60], varyK= False )
#print( 'The observed average photon counts are: %s'%np.round(K_mean,4))
#print( 'The fitted average photon counts are: %s'%np.round(K_,4))
print( 'The difference sum of average photon counts between fit and data are: %s'%np.round(
abs(np.sum( spec_kmean[0,:] - K_ )),4))
print( '#'*30)
qth= 0
print( 'The fitted M for Qth= %s are: %s'%(qth, ML_val[qth]) )
print( K_[qth])
print( '#'*30)
"""
Explanation: Do histogram fit with a negative binomial function using the maximum likelihood method
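For reference, a sketch of the standard XSVS model (the exact parameterization used by get_xsvs_fit may differ): the photon-count histograms are fit with a negative-binomial (Poisson-gamma) distribution,
$$
P(K) = \frac{\Gamma(K + M)}{\Gamma(M)\,\Gamma(K + 1)} \left( \frac{M}{M + \langle K \rangle} \right)^{M} \left( \frac{\langle K \rangle}{M + \langle K \rangle} \right)^{K}
$$
where $\langle K \rangle$ is the mean photon count per bin and $M$ is the effective number of speckle modes, so that the fitted $M$ gives the speckle contrast $\beta = 1/M$ used in the cells below.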
End of explanation
"""
if run_xsvs:
qr = [qval_dict[k][0] for k in list(qval_dict.keys()) ]
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std=spec_std,
xlim = [0,10], vlim =[.9, 1.1],
uid=uid_, qth= qth_interest, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir)
plot_xsvs_fit( spec_his, ML_val, KL_val, K_mean = spec_kmean, spec_std = spec_std,
xlim = [0,15], vlim =[.9, 1.1],
uid=uid_, qth= None, logy= True, times= times_xsvs, q_ring_center=qr, path=data_dir )
"""
Explanation: Plot fit results
End of explanation
"""
if run_xsvs:
contrast_factorL = get_contrast( ML_val)
spec_km_pds = save_KM( spec_kmean, KL_val, ML_val, qs=qr, level_time=times_xsvs, uid=uid_, path = data_dir )
#spec_km_pds
"""
Explanation: Get contrast
End of explanation
"""
if run_xsvs:
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = qth_interest, uid=uid_,path = data_dir, legend_size=14)
plot_g2_contrast( contrast_factorL, g2b, times_xsvs, tausb, qr,
vlim=[0.8,1.2], qth = None, uid=uid_,path = data_dir, legend_size=4)
#from chxanalys.chx_libs import cmap_vge, cmap_albula, Javascript
"""
Explanation: Plot contrast with g2 results
End of explanation
"""
md['mask_file']= mask_path + mask_name
md['roi_mask_file']= fp
md['mask'] = mask
md['NOTEBOOK_FULL_PATH'] = data_dir + get_current_pipeline_fullpath(NFP).split('/')[-1]
md['good_start'] = good_start
md['bad_frame_list'] = bad_frame_list
md['avg_img'] = avg_img
md['roi_mask'] = roi_mask
md['setup_pargs'] = setup_pargs
if scat_geometry == 'gi_saxs':
md['Qr'] = Qr
md['Qz'] = Qz
md['qval_dict'] = qval_dict
md['beam_center_x'] = inc_x0
md['beam_center_y']= inc_y0
md['beam_refl_center_x'] = refl_x0
md['beam_refl_center_y'] = refl_y0
elif scat_geometry == 'gi_waxs':
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
else:
md['qr']= qr
#md['qr_edge'] = qr_edge
md['qval_dict'] = qval_dict
md['beam_center_x'] = center[1]
md['beam_center_y']= center[0]
md['beg'] = FD.beg
md['end'] = FD.end
md['qth_interest'] = qth_interest
md['metadata_file'] = data_dir + 'uid=%s_md.pkl'%uid
psave_obj( md, data_dir + 'uid=%s_md.pkl'%uid ) #save the setup parameters
save_dict_csv( md, data_dir + 'uid=%s_md.csv'%uid, 'w')
Exdt = {}
if scat_geometry == 'gi_saxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list', 'qr_1d_pds'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list, qr_1d_pds] ):
Exdt[ k ] = v
elif scat_geometry == 'saxs':
for k,v in zip( ['md', 'q_saxs', 'iq_saxs','iqst','qt','roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, q_saxs, iq_saxs, iqst, qt,roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
elif scat_geometry == 'gi_waxs':
for k,v in zip( ['md', 'roi_mask','qval_dict','avg_img','mask','pixel_mask', 'imgsum', 'bad_frame_list'],
[md, roi_mask, qval_dict, avg_img,mask,pixel_mask, imgsum, bad_frame_list] ):
Exdt[ k ] = v
if run_waterfall:Exdt['wat'] = wat
if run_t_ROI_Inten:Exdt['times_roi'] = times_roi;Exdt['mean_int_sets']=mean_int_sets
if run_one_time:
if run_invariant_analysis:
for k,v in zip( ['taus','g2','g2_fit_paras', 'time_stamp','invariant'], [taus,g2,g2_fit_paras,time_stamp,invariant] ):Exdt[ k ] = v
else:
for k,v in zip( ['taus','g2','g2_fit_paras' ], [taus,g2,g2_fit_paras ] ):Exdt[ k ] = v
if run_two_time:
for k,v in zip( ['tausb','g2b','g2b_fit_paras', 'g12b'], [tausb,g2b,g2b_fit_paras,g12b] ):Exdt[ k ] = v
#for k,v in zip( ['tausb','g2b','g2b_fit_paras', ], [tausb,g2b,g2b_fit_paras] ):Exdt[ k ] = v
if run_dose:
for k,v in zip( [ 'taus_uids', 'g2_uids' ], [taus_uids, g2_uids] ):Exdt[ k ] = v
if run_four_time:
for k,v in zip( ['taus4','g4'], [taus4,g4] ):Exdt[ k ] = v
if run_xsvs:
for k,v in zip( ['spec_kmean','spec_pds','times_xsvs','spec_km_pds','contrast_factorL'],
[ spec_kmean,spec_pds,times_xsvs,spec_km_pds,contrast_factorL] ):Exdt[ k ] = v
#%run chxanalys_link/chxanalys/Create_Report.py
export_xpcs_results_to_h5( 'uid=%s_Isotropic_Res.h5'%md['uid'], data_dir, export_dict = Exdt )
#extract_dict = extract_xpcs_results_from_h5( filename = 'uid=%s_Res.h5'%md['uid'], import_dir = data_dir )
"""
Explanation: Export Results to an HDF5 File
End of explanation
"""
uid
pdf_out_dir = os.path.join('/XF11ID/analysis/', CYCLE, username, 'Results/')
pdf_filename = "XPCS_Analysis_Report2_for_uid=%s%s.pdf"%(uid,pdf_version)
if run_xsvs:
pdf_filename = "XPCS_XSVS_Analysis_Report_for_uid=%s%s.pdf"%(uid,pdf_version)
%run /home/yuzhang/chxanalys_link/chxanalys/Create_Report.py
#md['detector_distance'] = 4.8884902
make_pdf_report( data_dir, uid, pdf_out_dir, pdf_filename, username,
run_fit_form,run_one_time, run_two_time, run_four_time, run_xsvs, run_dose,
report_type= scat_geometry, report_invariant= run_invariant_analysis,
md = md )
"""
Explanation: Create PDF Report
End of explanation
"""
#%run /home/yuzhang/chxanalys_link/chxanalys/chx_olog.py
if att_pdf_report:
os.environ['HTTPS_PROXY'] = 'https://proxy:8888'
os.environ['no_proxy'] = 'cs.nsls2.local,localhost,127.0.0.1'
update_olog_uid_with_file( uid[:6], text='Add XPCS Analysis PDF Report',
filename=pdf_out_dir + pdf_filename, append_name='_r1' )
"""
Explanation: Attach the PDF report to Olog
End of explanation
"""
uid
"""
Explanation: The End!
End of explanation
"""
save_current_pipeline( NFP, data_dir)
get_current_pipeline_fullpath(NFP)
"""
Explanation: Save the current pipeline in the Results folder
End of explanation
"""
|
ComputationalModeling/spring-2017-danielak
|
past-semesters/fall_2016/day-by-day/day17-analyzing-tweets-with-string-processing/In-Class-Strings-SOLUTION.ipynb
|
agpl-3.0
|
%matplotlib inline
import matplotlib.pyplot as plt
from string import punctuation
"""
Explanation: Day 17 In-class assignment: Data analysis and Modeling in Social Sciences
Part 3
The first part of this notebook is a copy of a blog post tutorial written by Dr. Neal Caren (University of North Carolina, Chapel Hill). The format was modified to fit into a Jupyter Notebook, ported from python2 to python3, and adjusted to meet the goals of this class. Here is a link to the original tutorial:
http://nealcaren.web.unc.edu/an-introduction-to-text-analysis-with-python-part-3/
Student Names
//Put the names of everybody in your group here!
Learning Goals
Natural Language Processing can be tricky to model compared to known physical processes with mathematical rules. A large part of modeling is trying to understand a model's limitations and determining what can be learned from a model despite its limitations:
Apply what we have learned from the Pre-class notebooks to build a Twitter "bag of words" model on real Twitter data.
Introduce you to a method for downloading data from the Internet.
Gain practice doing string manipulation.
Learn how to make Pie Charts in Python.
This assignment explains how to expand the code written in your pre-class assignment so that you can use it to explore the positive and negative sentiment of any set of texts. Specifically, we’ll look at looping over more than one tweet, incorporating a more complete dictionary, and exporting the results.
Earlier, we used a small list of words to measure positive sentiment. While the study in Science used the commercial LIWC dictionary, an alternate sentiment dictionary is produced by Theresa Wilson, Janyce Wiebe, and Paul Hoffmann at the University of Pittsburgh and is freely available. In both cases, the sentiment dictionaries are used in a fairly straightforward way: the more positive words in the text, the higher the text scores on the positive sentiment scale. While this has some drawbacks, the method is quite popular: the LIWC database has over 1,000 citations in Google Scholar, and the Wilson et al. database has more than 600.
Do the following individually
First, load some libraries we will be using in this notebook.
End of explanation
"""
import urllib.request
"""
Explanation: Downloading
Since the Wilson et al. list combines negative and positive polarity words in one list, and includes both words and word stems, Dr. Caren cleaned it up a bit for us. You can download the positive list and the negative list using your browser, but you don’t have to. Python can do that.
First, you need to import one of the modules that Python uses to communicate with the Internet:
End of explanation
"""
url='http://www.unc.edu/~ncaren/haphazard/negative.txt'
"""
Explanation: Like many commands, Python won’t return anything unless something went wrong. In this case, the In [*] should change to a number like In [2]. Next, store the web address that you want to access in a string. You don’t have to do this, but it’s the type of thing that makes your code easier to read and allows you to scale up quickly when you want to download thousands of urls.
End of explanation
"""
file_name='negative.txt'
"""
Explanation: You can also create a string with the name you want the file to have on your hard drive:
End of explanation
"""
urllib.request.urlretrieve(url, file_name)
"""
Explanation: To download and save the file:
End of explanation
"""
urllib.request.urlretrieve('http://www.unc.edu/~ncaren/haphazard/negative.txt','negative.txt')
"""
Explanation: This will download the file into your current directory. If you want it to go somewhere else, you can put the full path in the file_name string. You didn’t have to enter the url and the file name in the prior lines. Something like the following would have worked exactly the same:
End of explanation
"""
files=['negative.txt','positive.txt','obama_tweets.txt']
path='http://www.unc.edu/~ncaren/haphazard/'
for file_name in files:
urllib.request.urlretrieve(path+file_name,file_name)
files=['BarackObama_tweets.txt','HillaryClinton_tweets.txt',
'realDonaldTrump_tweets.txt','mike_pence_tweets.txt',
'timkaine_tweets.txt']
path='https://raw.githubusercontent.com/bwoshea/CMSE201_datasets/master/pres_tweets/'
for file_name in files:
urllib.request.urlretrieve(path+file_name,file_name)
"""
Explanation: Note that the location and filename are both surrounded by quotation marks because you want Python to use this information literally; they aren’t referring to a string object, like in our previous code. This line of code is actually quite readable, and in most circumstances this would be the most efficient thing to do. But there are actually three files that we want to get: the negative list, the positive list, and the list of tweets. And we can download the three using a pretty simple loop:
End of explanation
"""
tweets = open("CHOOSE_YOUR_FILE_NAME_tweets.txt").read()
"""
Explanation: The first line creates a new list with three items - the names of the three files to be downloaded. The second line creates a string object that stores the url path that they all share. The third line starts a loop over each of the items in the files list using file_name to reference each item in turn. The fourth line is indented, because it happens once for each item in the list as a result of the loop, and downloads the file. This is the same as the original download line, except the URL is now the combination of two strings, path and file_name. As noted previously, Python can combine strings with a plus sign, so the result from the first pass through the loop will be http://www.unc.edu/~ncaren/haphazard/negative.txt, which is where the file can be found. Note that this takes advantage of the fact that we don’t mind reusing the original file name. If we wanted to change it, or if there were different paths to each of the files, things would get slightly trickier.
The second set of files, url path, and loop will download collections of tweets from various politicians involved in the current Presidential election: the sitting president (Barack Obama), the two candidates (Hillary Clinton and Donald Trump) and their vice-presidential running mates (Tim Kaine and Mike Pence). Everybody should pick one of these people to analyze - coordinate with your group members!
More fun with lists
Let’s take a look at the list of Tweets that we just downloaded. First, pick one of the politicians to analyze and open the appropriate file (you're going to have to change some stuff to do so):
End of explanation
"""
tweets_list = tweets.split('\n')
"""
Explanation: As you might have guessed, this line is actually doing double duty. It opens the file and reads it into memory before it is stored in tweets. Since the file has one tweet on each line, we can turn it into a list of tweets by splitting it at the end of line character. The file was originally created on a Mac, so the end of line character is an \n (think \n for new line). On a Windows computer, the end of line character is an \r\n (think \r for return and \n for new line). So if the file was created on a Windows computer, you might need to strip out the extra character with something like windows_file=windows_file.replace('\r','') before you split the lines, but you don’t need to worry about that here, no matter what operating system you are using. The end of line character comes from the computer that made the file, not the computer you are currently using. To split the tweets into a list:
End of explanation
"""
len(tweets_list)
"""
Explanation: As always, you can check how many items are in the list:
End of explanation
"""
for tweet in tweets_list[0:5]:
print(tweet)
"""
Explanation: You can print the entire list by typing print(tweets_list), but it will be very long. A more useful way to look at it is to print just some of the items. Since it’s a list, we can loop through the first few items so that each one prints on its own line.
End of explanation
"""
print(tweets_list[1:2])
"""
Explanation: Note the new [0:5] after the tweets_list but before the : that begins the loop. The first number tells Python where to make the first cut in the list. The potentially counterintuitive part is that this number doesn’t reference an actual item in the list, but rather a position between each item in the list–think about where the comma goes when lists are created or printed. Adding to the confusion, the position at the start of the list is 0. So, in this case, we are telling Python we want to slice our list starting at the beginning and continuing until the fifth comma, which is after the fifth item in the list.
So, if you wanted to just print the second item in the list, you could type:
End of explanation
"""
print(tweets_list[1])
"""
Explanation: OR
End of explanation
"""
pos_sent = open("positive.txt").read()
positive_words=pos_sent.split('\n')
print(positive_words[:10])
"""
Explanation: This slices the list from the first comma to the second comma, so the result is the second item in the list. Unless you have a computer science background, this may be confusing as it’s not the common way to think of items in lists.
As a shorthand, you can leave out the first number in the pair if you want to start at the very beginning or leave out the last number if you want to go until the end. So, if you want to print out the first five tweets, you could just type print(tweet_list[:5]). There are several other shortcuts along these lines that are available. We will cover some of them in other tutorials.
Now that we have our tweet list expanded, let’s load up the positive sentiment list and print out the first few entries:
End of explanation
"""
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
print(positive_counter/len(words))
"""
Explanation: Like the tweet list, this file contained each entry on its own line, so it loads exactly the same way. If you typed len(positive_words) you would find out that this list has 2,230 entries.
Preprocessing
In the pre-class assignment, we explored how to preprocess the tweets: remove the punctuation, convert to lower case, and examine whether or not each word was in the positive sentiment list. We can use this exact same code here with our long list. The one alteration is that instead of having just one tweet, we now have a list of 1,365 tweets, so we have to loop over that list.
End of explanation
"""
positive_counts=[]
"""
Explanation: Do the next part with your partner
If you saw a string of numbers roll past you, it worked! To review, we start by looping over each item of the list. We set up a counter to hold the running total of the number of positive words found in the tweet. Then we make everything lower case and store it in tweet_processed. To strip out the punctuation, we loop over every item of punctuation, swapping out the punctuation mark with nothing.
The cleaned tweet is then converted to a list of words, split at the white spaces. Finally, we loop through each word in the tweet, and if the word is in our new and expanded list of positive words, we increase the counter by one. After cycling through each of the tweet words, the proportion of positive words is computed and printed.
The major problem with this script is that it is currently useless. It prints the positive sentiment results, but then doesn’t do anything with it. A more practical solution would be to store the results somehow. In a standard statistical package, we would generate a new variable that held our results. We can do something similar here by storing the results in a new list. Before we start the tweet loop, we add the line:
End of explanation
"""
#Put your code here
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
positive_counts.append(positive_counter/word_count)
"""
Explanation: Then, instead of printing the proportion, we can append it to the list using the following command:
positive_counts.append(positive_counter/word_count)
Step 1: make a list of counts. Copy and paste the loop from above and rewrite it to use the append command shown above.
End of explanation
"""
len(positive_counts)
"""
Explanation: The next time we run through the loop, it shouldn't produce any output, but it will create a list of the proportions. Let's do a quick check of how many entries the list has (it should be one per tweet):
End of explanation
"""
#Put your code here
plt.hist(positive_counts, 100, facecolor='green');
"""
Explanation: The next step is to plot a histogram of the data to see the distribution of positive texts:
Step 2: make a histogram of the positive counts.
End of explanation
"""
#Put your code here
neg_sent = open("negative.txt").read()
negative_words=neg_sent.split('\n')
positive_counts=[]   # reset the list so it only holds the combined positive/negative scores
for tweet in tweets_list:
positive_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
if word in negative_words:
positive_counter=positive_counter-1
positive_counts.append(positive_counter/word_count)
"""
Explanation: Step 3: Subtract negative values. Now redo the calculation in Step 1 but also subtract negative words (i.e. your measurement can now have a positive or negative value):
End of explanation
"""
#Put your code here
plt.hist(positive_counts, 20, facecolor='green', range=[-5, 5]);
"""
Explanation: Step 4: Generate positive/negative histogram. Generate a second histogram using range -5 to 5 and 20 bins.
End of explanation
"""
only_positive=0;
only_negative=0;
both_pos_and_neg=0;
neither_pos_nor_neg=0;
#Put your code here.
for tweet in tweets_list:
positive_counter=0
negative_counter=0
tweet_processed=tweet.lower()
for p in punctuation:
tweet_processed=tweet_processed.replace(p,'')
words=tweet_processed.split(' ')
word_count = len(words)
for word in words:
if word in positive_words:
positive_counter=positive_counter+1
if word in negative_words:
negative_counter=negative_counter+1
if(positive_counter > 0):
if(negative_counter > 0):
both_pos_and_neg=both_pos_and_neg+1
else:
only_positive=only_positive+1;
else:
if(negative_counter > 0):
only_negative=only_negative+1;
else:
neither_pos_nor_neg=neither_pos_nor_neg+1;
"""
Explanation: Another way to model the "bag of words" is to evaluate if the tweet has only positive words, only negative words, both positive and negative words or neither positive nor negative words. Rewrite your code to keep track of all four totals.
Step 5: Count "types" of tweets. Rewrite the code from steps 1 & 3 and determine whether each tweet has only positive words, only negative words, both positive and negative words, or neither positive nor negative words. Keep a running total of the number of each kind of tweet.
End of explanation
"""
#Run this code. It should output True.
print(only_positive)
print(only_negative)
print(both_pos_and_neg)
print(neither_pos_nor_neg)
only_positive + only_negative + both_pos_and_neg + neither_pos_nor_neg == len(tweets_list)
"""
Explanation: Step 6: Check your answer. If everything went as planned, you should be able to add all four totals and it will be equal to the total number of tweets!
End of explanation
"""
# The slices will be ordered and plotted counter-clockwise.
labels = 'positive', 'both', 'negative', 'neither'
sizes = [only_positive, both_pos_and_neg, only_negative, neither_pos_nor_neg]
colors = ['yellowgreen', 'yellow','red', 'lightcyan']
explode = (0.1, 0, 0.1, 0)
plt.pie(sizes, explode=explode, labels=labels, colors=colors,
autopct='%1.1f%%', shadow=True, startangle=90);
"""
Explanation: Step 7: Make a Pie Graph of your results. Now we are just going to plot the results using matplotlib pie function. If you used the variables above this should just work.
End of explanation
"""
from IPython.display import HTML
HTML(
"""
<iframe
src="https://goo.gl/forms/MEOZvOwBcY7CEfEj1?embedded=true"
width="80%"
height="1200px"
frameborder="0"
marginheight="0"
marginwidth="0">
Loading...
</iframe>
"""
)
"""
Explanation: Assignment wrapup
Please fill out the form that appears when you run the code below. You must completely fill this out in order to receive credit for the assignment!
End of explanation
"""
|
ctn-waterloo/best-practices
|
Confidence Intervals - bootstrap.ipynb
|
mit
|
%matplotlib inline
import pylab
import numpy as np
"""
Explanation: Confidence Intervals
Purpose: take data from multiple runs and create aggregate data that is useful for drawing conclusions
End of explanation
"""
rng = np.random.RandomState(seed=0)
data = rng.normal(size=3)
pylab.scatter(np.zeros_like(data), data, marker='x', s=100, c='k')
pylab.xticks([])
pylab.yticks([])
pylab.show()
"""
Explanation: Let's say we have some model of some task and we want to know how well it's doing. Maybe we're about to change some parameter and want to to know whether changing that parameter makes the model works better. But every time we run our model we get a slightly different result, due to random variation.
Instead of running the model once, we might run the model 3 times and take the average. But what does that actually tell us? It tells us the average of those 3 runs. We don't care about those particular 3 runs: we care about what the actual average is: i.e. if we ran it an infinite number of times and took the average. But that might take too long.
The goal of this notebook is to deal with exactly this problem: what can you conclude about the underlying distribution, given only samples of that distribution.
An example sampling
Here are 3 samples from a normal distribution. Since it's the normal distribution, we know that the actual mean if we did an infinite number of samples is 0.
End of explanation
"""
rng = np.random.RandomState(seed=0)
data = rng.normal(size=3)
pylab.scatter(np.zeros_like(data), data, marker='x', s=100, c='k')
pylab.xticks([])
pylab.show()
"""
Explanation: For this simple example, where do you think the actual mean is on this graph? Note: this is just meant to be a typical example and I'm not doing anything tricky here like searching for a random number seed that gives horrible results (seed=0).
What do you think the chances are that the mean is above the largest value or below the smallest value?
End of explanation
"""
def test():
data = rng.normal(size=3)
if np.min(data) > 0 or np.max(data) < 0:
return 1
else:
return 0
results = [test() for i in range(100000)]
print np.mean(results)
"""
Explanation: Ouch. Even without trying to find an example where the sampling is horribly off, we have one. If I was working with a model and I made a change and the average used to be zero, and now it's this, then I would have convinced myself that I made an improvement, when I actually haven't.
Let's see how common this is.
End of explanation
"""
def test():
data = rng.uniform(size=3)
if np.min(data) > 0.5 or np.max(data) < 0.5:
return 1
else:
return 0
results = [test() for i in range(100000)]
print np.mean(results)
"""
Explanation: Does this depend on the underlying distribution?
Let's try it with a uniform distribution.
End of explanation
"""
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
if np.min(data) > 0 or np.max(data) < 0:
return 1
else:
return 0
Ss = [2, 3, 4, 5, 6, 7, 8, 9, 10]
rs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean(results))
print S, rs[-1]
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.show()
"""
Explanation: Doesn't seem to depend on the distribution, which is good (because in a real situation, we don't know what the underlying distribution is!)
Why doesn't it rely on the distribution? And how does it change with the number of samples?
End of explanation
"""
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = np.min(data), np.max(data)
if np.min(data) > 0 or np.max(data) < 0:
return 1, ci
else:
return 0, ci
Ss = [2, 3, 4, 5, 6, 7, 8, 9, 10]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: This is for the incredibly conservative scenario where we just take the largest and smallest sample value and only conclude that our actual mean is somewhere in between those values.
Can you see why this decreases by around 50% each time?
?
This method only fails if the samples happen to all be above the mean or all be below the mean. What are the chances of that happening? (This also explains why it doesn't depend on the distribution!)
However, this also produces huge ranges, since the more samples you have the wider the bounds will be.
End of explanation
"""
import scipy.stats
def normal_ci(data, p=0.95):
    # two-sided critical values from Student's t-distribution for the requested level
    t = np.array(scipy.stats.t.interval(p, len(data)-1))
    ci = np.mean(data)+np.std(data)/np.sqrt(len(data))*t
    return ci
print data
print normal_ci(data)
"""
Explanation: What could we do instead?
Confidence Intervals
In your stats class, they dealt with this problem by computing a "confidence interval".
A 95% confidence interval says that "If you assume the actual value for whatever distribution you are sampling from is inside this range, then you will only be wrong 5% of the time".
Note that this does not tell you whether you're wrong in this particular case. There's no way to know that. But on average, this should give you a range that has the right answer in it 95% of the time.
If the assumptions hold. This is a big IF.
The main one they covered in your stats class was IF the data is normally distributed, then the confidence interval is $\bar x \pm {s \over \sqrt N}t_{N-1, 95\%}$ where:
- $\bar x$ is the sample mean
- $s$ is the sample standard deviation
- $N$ is the number of samples
- $t_{N-1, 95\%}$ is the magic scaling factor that we look up on the table for Student's t-distribution.
Here it is computed for the samples we started with
End of explanation
"""
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = normal_ci(data)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [2, 3, 4, 5, 6, 7, 8, 9, 10]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: So this says that the mean is somewhere between -0.34 and 2.44, which is correct, as the mean is actually 0!
What happens with more samples?
End of explanation
"""
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = normal_ci(data)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('Normal distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: Okay, that's better than before, but it's still a bit problematic.
Why is it wrong more than 5% of the time?
We're fine on the assumption that it's normally distributed data...
But there's another assumption: that you have enough samples. What is enough?
End of explanation
"""
def needed_samples(p=0.95):
return 1.0 / (1-p)
print 0.95, needed_samples(p=0.95)
print 0.8, needed_samples(p=0.8)
"""
Explanation: General rule of thumb: Need about 20 samples to do a 95% confidence interval
Why? What would the rule of thumb be for an 80% confidence interval?
End of explanation
"""
def test_inside_bounds(n_samples):
data = rng.uniform(-1,1, size=n_samples)
ci = normal_ci(data)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('Uniform distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
def test_inside_bounds(n_samples):
data = rng.binomial(1, p=0.5, size=n_samples)-0.5
ci = normal_ci(data)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('1-sample binomial distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
def test_inside_bounds(n_samples):
data = rng.gamma(0.1, size=n_samples)-0.1
ci = normal_ci(data)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('gamma distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: The idea here is that you really should have enough samples that you should expect one to be outside the range. If you don't have any, then it's very hard to figure out where the range should be.
Other distributions
But the algorithm described only works if the data is normally distributed. What happens if it isn't?
Uniform distribution
End of explanation
"""
def bootstrap_ci(data, func, n=3000, p=0.95):
    # number of bootstrap statistics to drop from each tail (2.5% per side for p=0.95)
    index = int(n*(1-p)/2)
    # resample the data with replacement: n simulated "re-runs" of the experiment,
    # each the same size as the original sample
    samples = np.random.choice(data, size=(n, len(data)))
    try:
        r = func(samples, axis=1) # if the function supports axis
    except TypeError:
        r = [func(s) for s in samples] # otherwise do it the slow way
    # sort the n bootstrap statistics and keep the central p fraction as the interval
    r.sort()
    return r[index], r[-index]
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = bootstrap_ci(data, np.mean)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(1000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('Normal distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
def test_inside_bounds(n_samples):
data = rng.binomial(1, p=0.5, size=n_samples)-0.5
ci = bootstrap_ci(data, np.mean)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('1-sample binomial distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
def test_inside_bounds(n_samples):
data = rng.gamma(0.1, size=n_samples)-0.1
ci = bootstrap_ci(data, np.mean)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(10000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('gamma distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: It's not too bad for other distributions, unless they're really pathological. Still, it doesn't get the expected 5% error unless all the assumptions hold: normally distributed data and enough samples (~50 to be safe).
Other statistics
Oh, and one other assumption: it only works for computing the mean.
What if I want a confidence interval on my standard deviation? Or on the median? Or on the regression coefficient? Or something more complicated than that?
There are analytical results for a few of these, but we'd really like something that works for whatever statistic I want to do. This is especially the case for situations where we're looking at human data and it's the distribution of performance that matters, not just the mean.
Bootstrap Confidence Intervals
The term "bootstrap" refers to a whole family of algorithms that involve re-sampling from the samples you already have to approximate the underlying distribution.
The core idea for a bootstrap confidence interval is to simulate the process of running your experiment an infinite number of times. However, we can't actually re-run the experiment. Instead, what we do is to take your sample data and pretend that is the exact distribution of results. So we can simulate re-running the experiment by sampling from the results we already have.
In other words, if I have 20 data points, I can estimate what would happen if I re-ran that full experiment once by just sampling 20 values from those 20 data points. This, of course, has to be sampling with replacement, otherwise I'd just get the original data back again. So I can repeat the experiment and do that same measure on it (say, compute the mean).
Just doing it once doesn't tell me what would happen if I did it an infinite number of times. So let's do it over and over again. The general number is 3000 (which is pretty close to infinity). Now you have 3000 computations of the mean. To determine your range, sort them, throw out the bottom 2.5% and the top 2.5%, and you're left with the 95% confidence interval.
End of explanation
"""
import scikits.bootstrap
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = scikits.bootstrap.ci(data, np.average, method='bca', n_samples=3000)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(1000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('Normal distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: The bootstrap CI is worse for normally distributed data (unsurprising, since that's exactly the assumption the normal approach makes), but better for other distributions.
Also, notice that for the gamma distribution, the confidence interval is non-symmetric! This highlights another problem with the standard approach: it always gives symmetric intervals. The bootstrap does not.
However, also note that all these methods are having a hard time with the gamma distribution, and are giving errors much more than 5% of the time. Horrible distributions are horrible.
Bootstrap CI Variants
There are a few common variants of Boostrap CI. The one discussed above is the default, sometimes referred to as the "Percentile Interval" method.
Another common one is $BC_a$, or the Bias-Corrected Accelerated bootstrap confidence interval. It attempts to cancel out the sampling bias in the bootstrap, using a similar trick as in normal stats when you replace the z-distribution with the Student's t-distribution.
Implementing it is a pain, so I'm using someone else's implementation here, taken from
https://github.com/cgevans/scikits-bootstrap
It is slower (although that might just be the implementation) and does give slightly better results, but I don't find it makes that much of a difference. But lots of people recommend it anyway, so I'm tempted to start doing it when presenting results in papers.
End of explanation
"""
import scikits.bootstrap
def test_inside_bounds(n_samples):
data = rng.normal(size=n_samples)
ci = scikits.bootstrap.ci(data, np.average, method='abc', n_samples=3000)
if ci[0] > 0 or ci[1] < 0:
return 1, ci
else:
return 0, ci
Ss = [5, 10, 15, 20, 25, 30, 35, 40]
rs = []
mins = []
maxs = []
for S in Ss:
results = [test_inside_bounds(n_samples=S) for i in range(1000)]
rs.append(np.mean([r[0] for r in results]))
mins.append(np.mean([r[1][0] for r in results]))
maxs.append(np.mean([r[1][1] for r in results]))
print S, rs[-1]
pylab.figure()
pylab.title('Normal distribution')
pylab.plot(Ss, rs)
pylab.xlabel('number of samples')
pylab.ylabel('probability of error')
pylab.figure()
pylab.fill_between(Ss, mins, maxs, color='#888888')
pylab.xlabel('number of samples')
pylab.ylabel('confidence interval')
pylab.show()
"""
Explanation: Another variant is ABC, or Approximate Bootstrap Confidence. This attempts to approximate what bootstrapping would do without actually bootstrapping. I have no idea how it works and what assumptions are being made, but it can be handy for getting quick estimates. Again, I'm using the implementation from https://github.com/cgevans/scikits-bootstrap which doesn't quite seem to be as fast as it could be, so I'm not seeing much speedup over standard bootstrap, but that might be able to be improved.
End of explanation
"""
|
mroberge/hydrofunctions
|
docs/notebooks/Hydrofunctions_Comparing_Stream_Environments.ipynb
|
mit
|
import hydrofunctions as hf
%matplotlib inline
"""
Explanation: Comparing Different Stream Environments
This Jupyter Notebook compares four streams in different environments in the U.S.
Using hydrofunctions, we are able to plot the flow duration graphs for all four streams and compare them.
End of explanation
"""
streams = ['09073400','11480390','01074520','09498502']
sites = hf.NWIS(streams, 'dv', start_date='2001-01-01', end_date='2003-12-31')
sites
#Create a dataframe of the four sites
Q = sites.df('discharge')
#Show the first few lines of the dataframe
Q.head()
# rename the columns based on the names of the sites from HydroCloud
Q.columns=['White Mountains National Park', 'White River National Forest', 'Tonto National Forest', 'Mendicino National Park']
# show the first few rows of the data to confirm the changes
Q.head()
#use the built-in functions from hydrofunctions to create a flow duration graph for the dataframe.
hf.flow_duration(Q)
#Pull the stats for each of the four sites.
Q.describe()
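# A rough sketch, not part of hydrofunctions, of what the flow-duration plot above represents:
# sort each site's daily mean discharge from largest to smallest and plot it against the
# fraction of days on which that discharge is equaled or exceeded. Assumes the dataframe Q
# built above; hf.flow_duration takes care of these details for us.
import numpy as np
import matplotlib.pyplot as plt
fig, ax = plt.subplots()
for site in Q.columns:
    flows = Q[site].dropna().sort_values(ascending=False)
    exceedance = np.arange(1, len(flows) + 1) / len(flows)
    ax.plot(exceedance, flows.values, label=site)
ax.set_yscale('log')
ax.set_xlabel('fraction of time flow is equaled or exceeded')
ax.set_ylabel('discharge')
ax.legend()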
"""
Explanation: Choose four streams from different environments from HydroCloud. Import data for three years.
In this example, all four streams are in places with low development:
Colorado Western Slopes: ROARING FORK RIVER NEAR ASPEN, CO.
California Mendicino National Park: MAD R AB RUTH RES NR FOREST GLEN CA
White Mountains, NH: EAST BRANCH PEMIGEWASSET RIVER AT LINCOLN, NH
PINTO CREEK NEAR MIAMI, AZ
End of explanation
"""
|
robertoalotufo/ia898
|
master/tutorial_img_ds.ipynb
|
mit
|
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
import numpy as np
!ls ../data
f = mpimg.imread('../data/cameraman.tif')
print('Size of f: ', f.shape)
print('Pixel type:', f.dtype)
print('Total number of pixels:', f.size)
print('Pixels:\n', f)
"""
Explanation: Table of Contents
- ATTENTION: this notebook is not ready yet
- Representation, Reading and Visualization of Images
  - Image as a matrix
  - Reading an image
  - Visualizing an image
  - Numerically visualizing a small region of interest of the image
  - Creating an image caption with printed variables
# ATTENTION: this notebook is not ready yet
# Representation, Reading and Visualization of Images
A digital image can be represented by a two-dimensional matrix whose elements are called pixels
(short for *picture elements*). There are several image processing packages in which the image is represented
by a specific data structure. In our case, on Adessowiki we will use the matrix available as the NumPy *ndarray*.
The advantage is that every operation available for matrix processing can also be used for image
processing. This is one of the main goals of this course: how to use matrix-processing languages to do
image processing.
## Image as a matrix
In this course, an image is defined by its header (matrix size and pixel type) and by the pixels themselves. This
information is inherent to the NumPy ``ndarray`` type.
The size of the matrix is characterized by its dimensions: vertical and horizontal.
The vertical dimension is defined by the number of rows, or height H, and the horizontal
dimension is defined by the number of columns (cols), or width W. In NumPy, the dimensions are stored
in the ``shape`` of the matrix as a tuple (H,W).
An image can have pixel values stored in several data types:
a binary image has only two possible values, often assigned to black and white; a grayscale image has positive integer values, often from 0 up to a
maximum value. It is possible to have pixels with negative values, with real numbers, and even pixels with complex values.
An example of an image with negative pixel values is a thermal image with negative temperatures.
Images whose pixels are real numbers appear, for example, in images that represent a sine wave with values
ranging from -1 to +1. Images with complex pixel values appear in some image transforms such as
the Discrete Fourier Transform.
Since images usually have hundreds of thousands or millions of pixels, it is important to choose the smallest
representation for the pixel, to save computer memory, and to use the representation that is most efficient for processing.
In NumPy, the pixel type is stored in ``dtype``, which can take several values. The four types we will use most in this course
are listed in the table:

| dtype  | values                             |
|--------|------------------------------------|
| bool   | True, False                        |
| uint8  | 8 bits, unsigned, from 0 to 255    |
| uint16 | 16 bits, unsigned, from 0 to 65535 |
| int    | 64 bits, signed                    |
| float  | floating point                     |

## Reading an image
In this course we will work with synthetically created images and with images stored in files. On Adessowiki an
image is read with the functions ``adread`` and ``adreadgray``, which use the
[PIL](http://effbot.org/imagingbook/) image processing package. In this course we will not use PIL's image processing
functions; instead we will use NumPy's matrix operations. There are several common file formats for saving an image
and we will use the most common ones: png, jpg, tif. The available images can be seen in the ia636 toolbox on Adessowiki:
`ia636:iaimages`.
Below is an example of reading an image and printing its header and its pixels:
End of explanation
"""
plt.imshow(f,cmap='gray')
"""
Explanation: Note that the image has 174 rows and 314 columns, totaling more than 54 thousand pixels. The pixel representation is the
uint8 type, that is, unsigned 8-bit values from 0 to 255. Note also that printing all the pixels is handled in a
special way. If all 54 thousand pixels had to be printed, the output would be prohibitively long. In this case, when
the image (matrix) is very large, NumPy prints only the pixels at the four corners of the image.
Visualizing an image
On Adessowiki, an image is displayed exclusively with the adshow function, which internally uses the PIL package already
mentioned. Displaying an image creates a graphical representation of the matrix
in which each pixel value is mapped to a gray level (monochrome image) or to a particular color. When the image pixel
is uint8, the value zero is mapped to black, the value 255 to white, and intermediate values produce a gray tone proportional to the pixel value.
Below is the visualization of the cookies.tif image already read in the previous code snippet. Note that the adshow function takes
two parameters: the image and a string to be displayed as the caption of the image visualization.
End of explanation
"""
f_bin = f > 128
print('Pixel type:', f_bin.dtype)
plt.imshow(f_bin,cmap='gray')
plt.colorbar()
print(f_bin.min(), f_bin.max())
f_f = f_bin.astype(float)
f_i = f_bin.astype(int)
print(f_f.min(),f_f.max())
print(f_i.min(),f_i.max())
"""
Explanation: The second type of image that adshow displays is an image with boolean pixels. As an illustration, we will
compare each pixel of the cookies image with the value 128, generating a new image f_bin in which each pixel is
True or False depending on the result of the comparison. adshow maps the True pixels to white and the False
pixels to black:
End of explanation
"""
f_cor = mpimg.imread('../data/boat.tif')
print('Dimensions: ', f_cor.shape)
print('Pixel type:', f_cor.dtype)
plt.imshow(f_cor)
f_roi = f_cor[:2,:3,:]
print(f_roi)
"""
Explanation: Finally, in addition to these two display modes, adshow can also display color images in RGB format with uint8 pixels.
In NumPy an RGB image is represented as three images stacked along the depth dimension. In this case the array has 3
dimensions and, as read by matplotlib here, its shape has the format (H,W,3).
End of explanation
"""
f= mpimg.imread('../data/gull.pgm')
plt.imshow(f,cmap='gray')
g = f[:7,:10]
print('g=')
print(g)
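# A minimal sketch (not part of the original tutorial) of the explicit conversion mentioned
# in the text below: an image whose values do not fit the 0-255 range (here, a synthetic
# float image with values from -1 to 1) is linearly rescaled and cast to uint8 before display.
g_float = np.linspace(-1.0, 1.0, 256).reshape(16, 16)                  # synthetic float image
g_norm = (g_float - g_float.min()) / (g_float.max() - g_float.min())   # rescale to [0, 1]
g_uint8 = (255 * g_norm).astype(np.uint8)                              # cast to [0, 255]
print('dtype:', g_uint8.dtype, 'min:', g_uint8.min(), 'max:', g_uint8.max())
plt.imshow(g_uint8, cmap='gray')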
"""
Explanation: In this course, for didactic reasons, adshow only displays these 3 types of images. Any other type of image,
whether with values greater than 255, negative values, or complex values, must be explicitly converted to values between 0 and 255
or to True and False.
More information on the use of adshow can be found at ia636:adshow.
.. note:: One of the main causes of errors in image processing is not paying attention to the pixel type or to the dimensions of the
image. It is recommended to check this information. A very useful function is ia636:iaimginfo, which was created to
quickly check the pixel type, the dimensions, and the minimum and maximum values of an image. Below is an example of its use
on the three images processed above:
import ia636
print 'f: ', ia636.iaimginfo(f)
print 'f_bin:', ia636.iaimginfo(f_bin)
print 'f_cor:', ia636.iaimginfo(f_cor)
Numerically visualizing a small region of interest of the image
To verify that the image we read is composed of values between 0 and 255, we will numerically print
only a small region of 7 rows and 10 columns from the upper-left corner of the image. We do this
with slicing:
End of explanation
"""
|
ChadFulton/statsmodels
|
examples/notebooks/statespace_concentrated_scale.ipynb
|
bsd-3-clause
|
import numpy as np
import pandas as pd
import statsmodels.api as sm
dta = sm.datasets.macrodata.load_pandas().data
dta.index = pd.PeriodIndex(start='1959Q1', end='2009Q3', freq='Q')
"""
Explanation: State space models - concentrating the scale out of the likelihood function
End of explanation
"""
class LocalLevel(sm.tsa.statespace.MLEModel):
_start_params = [1., 1.]
_param_names = ['var.level', 'var.irregular']
def __init__(self, endog):
super(LocalLevel, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevel, self).update(params, **kwargs)
self['state_cov', 0, 0] = params[0]
self['obs_cov', 0, 0] = params[1]
"""
Explanation: Introduction
(much of this is based on Harvey (1989); see especially section 3.4)
State space models can generically be written as follows (here we focus on time-invariant state space models, but similar results apply also to time-varying models):
$$
\begin{align}
y_t & = Z \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, H) \
\alpha_{t+1} & = T \alpha_t + R \eta_t \quad \eta_t \sim N(0, Q)
\end{align}
$$
Often, some or all of the values in the matrices $Z, H, T, R, Q$ are unknown and must be estimated; in Statsmodels, estimation is often done by finding the parameters that maximize the likelihood function. In particular, if we collect the parameters in a vector $\psi$, then each of these matrices can be thought of as functions of those parameters, for example $Z = Z(\psi)$, etc.
Usually, the likelihood function is maximized numerically, for example by applying quasi-Newton "hill-climbing" algorithms, and this becomes more and more difficult the more parameters there are. It turns out that in many cases we can reparameterize the model as $[\psi_*', \sigma_*^2]'$, where $\sigma_*^2$ is the "scale" of the model (usually, it replaces one of the error variance terms) and it is possible to find the maximum likelihood estimate of $\sigma_*^2$ analytically, by differentiating the likelihood function. This implies that numerical methods are only required to estimate the parameters $\psi_*$, which has dimension one less than that of $\psi$.
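As a sketch of how the analytic step works (following Harvey's treatment; the notation here is illustrative rather than Statsmodels' internal notation): writing $H = \sigma_*^2 \tilde H$ and $Q = \sigma_*^2 \tilde Q$, and letting $\tilde v_t$ and $\tilde F_t$ denote the prediction errors and prediction error variances from a Kalman filter run with the scale fixed at one, the (univariate) log-likelihood is
$$
\log L(\psi_*, \sigma_*^2) = -\frac{T}{2} \log 2 \pi - \frac{1}{2} \sum_{t=1}^T \log \left( \sigma_*^2 \tilde F_t \right) - \frac{1}{2 \sigma_*^2} \sum_{t=1}^T \frac{\tilde v_t^2}{\tilde F_t}
$$
and setting its derivative with respect to $\sigma_*^2$ to zero gives the closed-form estimate
$$
\hat \sigma_*^2 = \frac{1}{T} \sum_{t=1}^T \frac{\tilde v_t^2}{\tilde F_t}
$$
so that only $\psi_*$ has to be found numerically.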
Example: local level model
(see, for example, section 4.2 of Harvey (1989))
As a specific example, consider the local level model, which can be written as:
$$
\begin{align}
y_t & = \alpha_t + \varepsilon_t, \quad \varepsilon_t \sim N(0, \sigma_\varepsilon^2) \
\alpha_{t+1} & = \alpha_t + \eta_t \quad \eta_t \sim N(0, \sigma_\eta^2)
\end{align}
$$
In this model, $Z, T,$ and $R$ are all fixed to be equal to $1$, and there are two unknown parameters, so that $\psi = [\sigma_\varepsilon^2, \sigma_\eta^2]$.
Typical approach
First, we show how to define this model without concentrating out the scale, using Statsmodels' state space library:
End of explanation
"""
mod = LocalLevel(dta.infl)
res = mod.fit()
print(res.summary())
"""
Explanation: There are two parameters in this model that must be chosen: var.level $(\sigma_\eta^2)$ and var.irregular $(\sigma_\varepsilon^2)$. We can use the built-in fit method to choose them by numerically maximizing the likelihood function.
In our example, we are applying the local level model to consumer price index inflation.
End of explanation
"""
print(res.mle_retvals)
"""
Explanation: We can look at the results from the numerical optimizer in the results attribute mle_retvals:
End of explanation
"""
class LocalLevelConcentrated(sm.tsa.statespace.MLEModel):
_start_params = [1.]
_param_names = ['ratio.irregular']
def __init__(self, endog):
super(LocalLevelConcentrated, self).__init__(endog, k_states=1, initialization='diffuse')
self['design', 0, 0] = 1
self['transition', 0, 0] = 1
self['selection', 0, 0] = 1
self['state_cov', 0, 0] = 1
self.ssm.filter_concentrated = True
def transform_params(self, unconstrained):
return unconstrained**2
def untransform_params(self, unconstrained):
return unconstrained**0.5
def update(self, params, **kwargs):
params = super(LocalLevelConcentrated, self).update(params, **kwargs)
self['obs_cov', 0, 0] = params[0]
"""
Explanation: Concentrating out the scale
Now, there are two ways to reparameterize this model as above:
The first way is to set $\sigma_*^2 \equiv \sigma_\varepsilon^2$ so that $\psi_* = \psi / \sigma_\varepsilon^2 = [1, q_\eta]$ where $q_\eta = \sigma_\eta^2 / \sigma_\varepsilon^2$.
The second way is to set $\sigma_*^2 \equiv \sigma_\eta^2$ so that $\psi_* = \psi / \sigma_\eta^2 = [h, 1]$ where $h = \sigma_\varepsilon^2 / \sigma_\eta^2$.
In the first case, we only need to numerically maximize the likelihood with respect to $q_\eta$, and in the second case we only need to numerically maximize the likelihood with respect to $h$.
Either approach would work well in most cases, and in the example below we will use the second method.
To reformulate the model to take advantage of the concentrated likelihood function, we need to write the model in terms of the parameter vector $\psi_* = [h, 1]$. Because this parameterization fixes the state variance at one, we now include a new line self['state_cov', 0, 0] = 1, and the only unknown parameter is $h$. Because our parameter $h$ is no longer a variance, we renamed it here to be ratio.irregular.
The key piece that is required to formulate the model so that the scale can be computed from the Kalman filter recursions (rather than selected numerically) is setting the flag self.ssm.filter_concentrated = True.
End of explanation
"""
mod_conc = LocalLevelConcentrated(dta.infl)
res_conc = mod_conc.fit()
print(res_conc.summary())
"""
Explanation: Again, we can use the built-in fit method to find the maximum likelihood estimate of $h$.
End of explanation
"""
print(res_conc.mle_retvals)
"""
Explanation: The estimate of $h$ is provided in the middle table of parameters (ratio.irregular), while the estimate of the scale is provided in the upper table. Below, we will show that these estimates are consistent with those from the previous approach.
And we can again look at the results from the numerical optimizer in the results attribute mle_retvals. It turns out that two fewer iterations were required in this case, since there was one fewer parameter to select. Moreover, since the numerical maximization problem was easier, the optimizer was able to find a value that made the gradient for this parameter slightly closer to zero than it was above.
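As a quick check (a sketch, assuming the results objects res and res_conc from above are still in scope), the optimizer effort can be compared directly:
```python
# Compare the number of optimizer iterations for the two parameterizations
print('Iterations, original model:     ', res.mle_retvals['iterations'])
print('Iterations, concentrated model: ', res_conc.mle_retvals['iterations'])
```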
End of explanation
"""
print('Original model')
print('var.level = %.5f' % res.params[0])
print('var.irregular = %.5f' % res.params[1])
print('\nConcentrated model')
print('scale = %.5f' % res_conc.scale)
print('h * scale = %.5f' % (res_conc.params[0] * res_conc.scale))
"""
Explanation: Comparing estimates
Recall that $h = \sigma_\varepsilon^2 / \sigma_\eta^2$ and the scale is $\sigma_*^2 = \sigma_\eta^2$. Using these definitions, we can see that both models produce nearly identical results:
End of explanation
"""
# Typical approach
mod_ar = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct')
res_ar = mod_ar.fit()
# Estimating the model with the scale concentrated out
mod_ar_conc = sm.tsa.SARIMAX(dta.cpi, order=(1, 0, 0), trend='ct', concentrate_scale=True)
res_ar_conc = mod_ar_conc.fit()
"""
Explanation: Example: SARIMAX
By default in SARIMAX models, the variance term is chosen by numerically maximizing the likelihood function, but an option has been added to allow concentrating the scale out.
End of explanation
"""
print('Loglikelihood')
print('- Original model: %.4f' % res_ar.llf)
print('- Concentrated model: %.4f' % res_ar_conc.llf)
print('\nParameters')
print('- Original model: %.4f, %.4f, %.4f, %.4f' % tuple(res_ar.params))
print('- Concentrated model: %.4f, %.4f, %.4f, %.4f' % (tuple(res_ar_conc.params) + (res_ar_conc.scale,)))
"""
Explanation: These two approaches produce about the same loglikelihood and parameters, although the model with the concentrated scale was able to improve the fit very slightly:
End of explanation
"""
print('Optimizer iterations')
print('- Original model: %d' % res_ar.mle_retvals['iterations'])
print('- Concentrated model: %d' % res_ar_conc.mle_retvals['iterations'])
"""
Explanation: This time, about 1/3 fewer iterations of the optimizer are required under the concentrated approach:
End of explanation
"""
|
tensorflow/docs-l10n
|
site/ko/guide/migrate.ipynb
|
apache-2.0
|
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: Copyright 2018 The TensorFlow Authors.
End of explanation
"""
import tensorflow as tf
import tensorflow_datasets as tfds
"""
Explanation: Converting TensorFlow 1 code to TensorFlow 2
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/guide/migrate">
<img src="https://www.tensorflow.org/images/tf_logo_32px.png" />
View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/ko/guide/migrate.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ko/guide/migrate.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: This document was translated by the TensorFlow community. Because community translations are best-effort, there is no guarantee that this is an accurate and up-to-date reflection of the official English documentation.
If you have suggestions to improve this translation, please send a pull request to the
tensorflow/docs-l10n GitHub repository.
To volunteer to write or review translations, please email
docs-ko@tensorflow.org.
This guide is for users of the low-level TensorFlow API.
If you are using the high-level API (tf.keras), there may be little or no action you need to take to make your code TensorFlow 2.0 compatible:
Check your optimizers' default learning rates.
Note that the "name" that metrics are logged to may have changed.
It is still possible to run 1.X code, unmodified (except for contrib), in TensorFlow 2.0:
import tensorflow.compat.v1 as tf
tf.disable_v2_behavior()
However, this does not let you take advantage of many of the improvements made in TensorFlow 2.0. This guide will help you upgrade your code, making it simpler, more performant, and easier to maintain.
Automatic conversion script
The first step is to try running the upgrade script.
This is a useful first pass at upgrading your code to TensorFlow 2.0, but it cannot make your code idiomatic 2.0 style. Your code may still make use of tf.compat.v1 endpoints to access placeholders, sessions, collections, and other 1.x-style functionality.
Top-level behavioral changes
If you run your code in TensorFlow 2.0 using tf.compat.v1.disable_v2_behavior(), there are still global behavioral changes you should be aware of. The major changes are:
Eager execution, v1.enable_eager_execution(): Any code that implicitly uses a tf.Graph will fail. Be sure to wrap this code in a with tf.Graph().as_default() context.
Resource variables, v1.enable_resource_variables(): Some code may depend on non-deterministic behaviors of TF reference variables.
Resource variables are locked while being written to, and so provide more intuitive consistency guarantees.
This may change behavior in edge cases.
This may create extra copies and can have higher memory usage.
This can be disabled by passing use_resource=False to the tf.Variable constructor.
Tensor shapes, v1.enable_v2_tensorshape(): TF 2.0 simplifies the behavior of tensor shapes. Instead of t.shape[0].value you can say t.shape[0]. These changes should be small, and it makes sense to fix them right away. See the TensorShape examples below.
Control flow, v1.enable_control_flow_v2(): The TF 2.0 control flow implementation has been simplified, and so produces different graph representations. Please file bugs for any issues.
Make the code 2.0-native
This guide walks through several examples of converting TensorFlow 1.x code to TensorFlow 2.0. These changes will let your code take advantage of performance optimizations and simplified API calls.
In each case, the pattern is:
1. Replace tf.Session.run calls
Every tf.Session.run call should be replaced by a Python function.
The feed_dict and tf.placeholders become function arguments.
The fetches become the function's return value.
During the conversion, eager execution allows easy debugging with standard Python tools like pdb.
After that, add a tf.function decorator to make it run efficiently in graph mode. See the AutoGraph guide for more on how this works.
Note:
Unlike v1.Session.run, a tf.function has a fixed return signature and always returns all outputs. If this causes performance problems, create two separate functions.
There is no need for tf.control_dependencies or similar operations: a tf.function behaves as if it were run in the order written. tf.Variable assignments and tf.asserts, for example, are executed automatically.
2. Use Python objects to track variables and losses
Name-based variable tracking is strongly discouraged in TF 2.0. Use Python objects to track variables.
Use tf.Variable instead of v1.get_variable.
Every v1.variable_scope should be converted to a Python object. Typically this will be one of:
tf.keras.layers.Layer
tf.keras.Model
tf.Module
If you need to aggregate lists of variables (like tf.Graph.get_collection(tf.GraphKeys.VARIABLES)), use the .variables and .trainable_variables attributes of the Layer and Model objects.
These Layer and Model classes implement several other properties that remove the need for global collections. Their .losses property can be a replacement for the tf.GraphKeys.LOSSES collection.
See the Keras guides for details.
Warning: Many tf.compat.v1 symbols use global collections implicitly.
3. Upgrade your training loops
Use the highest-level API that works for your use case. Prefer tf.keras.Model.fit over building your own training loops.
These high-level functions manage a lot of the low-level details that might be easy to miss if you write your own training loop. For example, they automatically collect the regularization losses and set the training=True argument when calling the model.
4. Upgrade your data input pipelines
Use tf.data datasets for data input. These objects are efficient, expressive, and integrate well with TensorFlow.
They can be passed directly to the tf.keras.Model.fit method:
model.fit(dataset, epochs=5)
They can be iterated over directly in standard Python:
for example_batch, label_batch in dataset:
break
5. Migrate off compat.v1
The tf.compat.v1 module contains the complete TensorFlow 1.x API.
The TF2 upgrade script converts symbols to their 2.0 equivalents if such a conversion is safe, i.e., if it can determine that the behavior of the 2.0 version is exactly equivalent (for instance, it will rename v1.arg_max to tf.argmax, since those are the same function).
After the upgrade script is done with a piece of code, it is likely there are many mentions of compat.v1. It is worth going through the code and converting these manually to the 2.0 equivalent (it should be mentioned in the log if one exists).
Converting models
Setup
End of explanation
"""
W = tf.Variable(tf.ones(shape=(2,2)), name="W")
b = tf.Variable(tf.zeros(shape=(2)), name="b")
@tf.function
def forward(x):
return W * x + b
out_a = forward([1,0])
print(out_a)
out_b = forward([0,1])
regularizer = tf.keras.regularizers.l2(0.04)
reg_loss = regularizer(W)
"""
Explanation: Low-level variables & operator execution
Examples of low-level API use include:
Using variable scopes to control reuse
Creating variables with v1.get_variable
Accessing collections explicitly
Accessing collections implicitly with methods like:
v1.global_variables
v1.losses.get_regularization_loss
Using v1.placeholder to set up graph inputs
Executing graphs with session.run
Initializing variables manually
Before converting
Here is what these patterns may look like in code using TensorFlow 1.x.
```python
in_a = tf.placeholder(dtype=tf.float32, shape=(2))
in_b = tf.placeholder(dtype=tf.float32, shape=(2))
def forward(x):
with tf.variable_scope("matmul", reuse=tf.AUTO_REUSE):
W = tf.get_variable("W", initializer=tf.ones(shape=(2,2)),
regularizer=tf.contrib.layers.l2_regularizer(0.04))
b = tf.get_variable("b", initializer=tf.zeros(shape=(2)))
return W * x + b
out_a = forward(in_a)
out_b = forward(in_b)
reg_loss = tf.losses.get_regularization_loss(scope="matmul")
with tf.Session() as sess:
sess.run(tf.global_variables_initializer())
outs = sess.run([out_a, out_b, reg_loss],
feed_dict={in_a: [1, 0], in_b: [0, 1]})
```
After converting
In the converted code:
The variables are local Python objects.
The forward function still defines the calculation.
The Session.run call is replaced with a call to forward.
The optional tf.function decorator can be added for performance.
The regularizations are calculated manually, without referring to any global collection.
There are no sessions or placeholders.
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.04),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
train_out = model(train_data, training=True)
print(train_out)
test_out = model(test_data, training=False)
print(test_out)
# Total trainable variables
len(model.trainable_variables)
# Regularization losses
model.losses
"""
Explanation: Models based on tf.layers
The v1.layers module contains layer functions that relied on v1.variable_scope to define and reuse variables.
Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu,
kernel_regularizer=tf.contrib.layers.l2_regularizer(0.04))
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
x = tf.layers.dropout(x, 0.1, training=training)
x = tf.layers.dense(x, 64, activation=tf.nn.relu)
x = tf.layers.batch_normalization(x, training=training)
x = tf.layers.dense(x, 10, activation=tf.nn.softmax)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
After converting
tf.keras.Sequential is a good fit for a simple stack of layers. (For more complex models, see custom layers and models and the functional API.)
The model tracks the variables and regularization losses.
The conversion is one-to-one because there is a direct mapping from v1.layers to tf.keras.layers.
Most arguments stayed the same. The main differences:
The training argument is passed to each layer by the model when it runs.
The first argument to the original model function (the input x) is gone, because layer objects separate building the model from calling the model.
Also note:
If you were using regularizers or initializers from tf.contrib, these have more argument changes than others.
The code no longer writes to collections, so functions like v1.losses.get_regularization_loss will no longer return these values, potentially breaking your training loops.
End of explanation
"""
# Create a custom layer to add to the model.
class CustomLayer(tf.keras.layers.Layer):
def __init__(self, *args, **kwargs):
super(CustomLayer, self).__init__(*args, **kwargs)
def build(self, input_shape):
self.w = self.add_weight(
shape=input_shape[1:],
dtype=tf.float32,
initializer=tf.keras.initializers.ones(),
regularizer=tf.keras.regularizers.l2(0.04),
trainable=True)
# If the call method is used in graph mode,
# the training variable will be a tensor.
@tf.function
def call(self, inputs, training=None):
if training:
return inputs + self.w
else:
return inputs + self.w * 0.5
custom_layer = CustomLayer()
print(custom_layer([1]).numpy())
print(custom_layer([1], training=True).numpy())
train_data = tf.ones(shape=(1, 28, 28, 1))
test_data = tf.ones(shape=(1, 28, 28, 1))
# Build a model that includes the custom layer.
model = tf.keras.Sequential([
CustomLayer(input_shape=(28, 28, 1)),
tf.keras.layers.Conv2D(32, 3, activation='relu'),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
])
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
"""
Explanation: Mixed variables & v1.layers
Existing code often mixes lower-level TF 1.x variables and operations with higher-level v1.layers.
Before converting
```python
def model(x, training, scope='model'):
with tf.variable_scope(scope, reuse=tf.AUTO_REUSE):
W = tf.get_variable(
"W", dtype=tf.float32,
initializer=tf.ones(shape=x.shape),
regularizer=tf.contrib.layers.l2_regularizer(0.04),
trainable=True)
if training:
x = x + W
else:
x = x + W * 0.5
x = tf.layers.conv2d(x, 32, 3, activation=tf.nn.relu)
x = tf.layers.max_pooling2d(x, (2, 2), 1)
x = tf.layers.flatten(x)
return x
train_out = model(train_data, training=True)
test_out = model(test_data, training=False)
```
After converting
To convert this code, follow the pattern of mapping layers to layers as in the previous example.
A v1.variable_scope is effectively a layer of its own, so rewrite it as a tf.keras.layers.Layer. See the custom layers guide for details.
The general pattern is:
Collect layer parameters in __init__.
Build the variables in build.
Execute the calculations in call, and return the result.
End of explanation
"""
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
"""
Explanation: Some things to note:
Subclassed Keras models and layers need to run in both v1 graphs (where automatic control dependencies are not available) and in eager mode.
Wrap the call() in a tf.function() to get autograph and automatic control dependencies.
Don't forget to accept a training argument to call.
Sometimes it is a tf.Tensor.
Sometimes it is a Python boolean.
Create model variables in the constructor or in the build method using self.add_weight().
In build you have access to the input shape, so you can create weights with matching shapes.
Using tf.keras.layers.Layer.add_weight lets Keras track variables and regularization losses.
Don't keep tf.Tensors in your custom layers.
They might get created either in a tf.function or in the eager context, and these tensors behave differently.
Use tf.Variables for state; they are always usable from both contexts.
tf.Tensors are only for intermediate values; a small sketch of keeping state in a variable follows below.
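For instance, a minimal sketch (not part of the original guide) of a layer that keeps its state in a tf.Variable rather than a tf.Tensor:
```python
import tensorflow as tf

class CallCounter(tf.keras.layers.Layer):
    """Toy layer: counts how many times it has been called."""
    def build(self, input_shape):
        # State lives in a tf.Variable, so it works eagerly and inside tf.function.
        self.count = tf.Variable(0, trainable=False, name='count')
    def call(self, inputs):
        self.count.assign_add(1)
        return inputs
```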
A note on Slim & contrib.layers
A large amount of older TensorFlow 1.x code uses the Slim library, which was packaged with TensorFlow 1.x as tf.contrib.layers. As a contrib module, this is no longer supported in TensorFlow 2.0 and is not included in tf.compat.v1. Converting code using Slim to TF 2.0 is more involved than converting code that uses v1.layers. In fact, it may make sense to convert your Slim code to v1.layers first, then to Keras.
Remove arg_scopes; all args need to be explicit.
If you use normalizer_fn and activation_fn, split them into their own layers.
Separable conv layers map to one or more different Keras layers (depthwise, pointwise, and separable Keras layers).
Slim and v1.layers have different argument names and default values.
Some args have different scales.
If you use Slim pre-trained models, try out tf.keras.applications or TFHub.
Some tf.contrib layers might not have been moved to core TensorFlow but have instead been moved to the TF add-ons package.
Training
There are many ways to feed data to a tf.keras model. They will accept Python generators and NumPy arrays as input.
The recommended way to feed data to a model is to use the tf.data package, which contains a collection of high-performance classes for manipulating data.
tf.queue is only supported as a data structure, not as an input pipeline.
Using Datasets
The TensorFlow Datasets package (tfds) contains utilities for loading predefined datasets as tf.data.Dataset objects.
For example, here is how to load the MNIST dataset using tfds:
End of explanation
"""
BUFFER_SIZE = 10 # Use a much larger value for real code.
BATCH_SIZE = 64
NUM_EPOCHS = 5
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label
"""
Explanation: Then prepare the data for training:
Re-scale each image.
Shuffle the order of the examples.
Collect batches of images and labels.
End of explanation
"""
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
test_data = mnist_test.map(scale).batch(BATCH_SIZE)
STEPS_PER_EPOCH = 5
train_data = train_data.take(STEPS_PER_EPOCH)
test_data = test_data.take(STEPS_PER_EPOCH)
image_batch, label_batch = next(iter(train_data))
"""
Explanation: To keep the example short, trim the dataset to only return 5 batches:
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.02),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
# The model has no custom layers.
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
model.fit(train_data, epochs=NUM_EPOCHS)
loss, acc = model.evaluate(test_data)
print("손실 {}, 정확도 {}".format(loss, acc))
"""
Explanation: Use Keras training loops
If you don't need low-level control of your training process, using Keras's built-in fit, evaluate, and predict methods is recommended. These methods provide a uniform interface to train the model regardless of the implementation (Sequential, functional, or subclassed).
The advantages of these methods include:
They accept NumPy arrays, Python generators, and tf.data.Datasets.
They apply regularization and activation losses automatically.
They support tf.distribute for multi-device training.
They support arbitrary callables as losses and metrics.
They support callbacks like tf.keras.callbacks.TensorBoard, and custom callbacks.
They are performant, automatically using TensorFlow graphs.
Here is an example of training a model using a Dataset. (For details on how this works, see the tutorials.)
End of explanation
"""
# The model has no custom layers.
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
for epoch in range(NUM_EPOCHS):
# Reset the accumulated metrics.
model.reset_metrics()
for image_batch, label_batch in train_data:
result = model.train_on_batch(image_batch, label_batch)
metrics_names = model.metrics_names
print("훈련: ",
"{}: {:.3f}".format(metrics_names[0], result[0]),
"{}: {:.3f}".format(metrics_names[1], result[1]))
for image_batch, label_batch in test_data:
result = model.test_on_batch(image_batch, label_batch,
# return accumulated metrics
reset_metrics=False)
metrics_names = model.metrics_names
print("\n평가: ",
"{}: {:.3f}".format(metrics_names[0], result[0]),
"{}: {:.3f}".format(metrics_names[1], result[1]))
"""
Explanation: Write your own loop
If the Keras model's training step works for you, but you need more control outside that step, consider using the tf.keras.Model.train_on_batch method in your own data-iteration loop.
Remember: many things can be implemented as a tf.keras.Callback.
This method has many of the advantages of the methods mentioned in the previous section, but gives you control of the outer loop.
You can also use tf.keras.Model.test_on_batch or tf.keras.Model.evaluate to check performance during training.
Note: train_on_batch and test_on_batch, by default, return the loss and metrics for the single batch. If you pass reset_metrics=False they return accumulated metrics, and you must then remember to reset the metric accumulators appropriately. Some metrics like AUC also require reset_metrics=False to be calculated correctly.
To continue training the model from above:
End of explanation
"""
model = tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.02),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
optimizer = tf.keras.optimizers.Adam(0.001)
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
@tf.function
def train_step(inputs, labels):
with tf.GradientTape() as tape:
predictions = model(inputs, training=True)
regularization_loss = tf.math.add_n(model.losses)
pred_loss = loss_fn(labels, predictions)
total_loss = pred_loss + regularization_loss
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
for epoch in range(NUM_EPOCHS):
for inputs, labels in train_data:
train_step(inputs, labels)
print("마지막 에포크", epoch)
"""
Explanation: <a id="custom_loops"/>
Customize the training step
If you need more flexibility and control, you can have it by implementing your own training loop. There are three steps:
Iterate over a Python generator or tf.data.Dataset to get batches of examples.
Use tf.GradientTape to collect gradients.
Use a tf.keras.optimizer to apply weight updates to the model's variables.
Remember:
Always include a training argument on the call method of subclassed layers and models.
Make sure to call the model with the training argument set correctly.
Depending on usage, model variables may not exist until the model is run on a batch of data.
You need to manually handle things like regularization losses for the model.
Note the simplifications relative to v1:
There is no need to run variable initializers; variables are initialized on creation.
There is no need to add manual control dependencies; even in tf.function, operations act as in eager mode.
End of explanation
"""
cce = tf.keras.losses.CategoricalCrossentropy(from_logits=True)
cce([[1, 0]], [[-1.0,3.0]]).numpy()
"""
Explanation: New-style metrics and losses
In TensorFlow 2.0, metrics and losses are objects. They work both eagerly and in tf.functions.
A loss object is callable and expects (y_true, y_pred) as arguments:
End of explanation
"""
# Create the metric objects.
loss_metric = tf.keras.metrics.Mean(name='train_loss')
accuracy_metric = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
@tf.function
def train_step(inputs, labels):
with tf.GradientTape() as tape:
predictions = model(inputs, training=True)
regularization_loss = tf.math.add_n(model.losses)
pred_loss = loss_fn(labels, predictions)
total_loss = pred_loss + regularization_loss
gradients = tape.gradient(total_loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
# Update the metrics.
loss_metric.update_state(total_loss)
accuracy_metric.update_state(labels, predictions)
for epoch in range(NUM_EPOCHS):
# Reset the metrics.
loss_metric.reset_states()
accuracy_metric.reset_states()
for inputs, labels in train_data:
train_step(inputs, labels)
# Get the metric results.
mean_loss = loss_metric.result()
mean_accuracy = accuracy_metric.result()
print('에포크: ', epoch)
print(' 손실: {:.3f}'.format(mean_loss))
print(' 정확도: {:.3f}'.format(mean_accuracy))
"""
Explanation: The metric object has the following methods:
update_state() — add new observations.
result() — get the current result of the metric, given the observed values.
reset_states() — clear all observations.
The object itself is callable. Calling it updates the state with new observations, as with update_state, and returns the new result of the metric.
You don't have to manually initialize a metric's variables, and because TensorFlow 2.0 has automatic control dependencies, you don't need to worry about those either.
The code below uses a metric to keep track of the mean loss seen within a custom training loop.
End of explanation
"""
model.compile(
optimizer = tf.keras.optimizers.Adam(0.001),
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics = ['acc', 'accuracy', tf.keras.metrics.SparseCategoricalAccuracy(name="my_accuracy")])
history = model.fit(train_data)
history.history.keys()
"""
Explanation: <a id="keras_metric_names"></a>
Keras metric names
In TensorFlow 2.0, Keras models are more consistent about handling metric names.
Now when you pass a string in the list of metrics, that exact string is used as the metric's name. These names are visible in the history object returned by model.fit and in the logs passed to keras.callbacks, set to the string you passed in the metric list.
End of explanation
"""
def wrap_frozen_graph(graph_def, inputs, outputs):
def _imports_graph_def():
tf.compat.v1.import_graph_def(graph_def, name="")
wrapped_import = tf.compat.v1.wrap_function(_imports_graph_def, [])
import_graph = wrapped_import.graph
return wrapped_import.prune(
tf.nest.map_structure(import_graph.as_graph_element, inputs),
tf.nest.map_structure(import_graph.as_graph_element, outputs))
"""
Explanation: This differs from previous versions, where passing metrics=["accuracy"] would result in dict_keys(['loss', 'acc']).
Keras optimizers
The optimizers in v1.train, like v1.train.AdamOptimizer and v1.train.GradientDescentOptimizer, have equivalents in tf.keras.optimizers.
Convert v1.train to keras.optimizers
Here are things to keep in mind when converting your optimizers:
Upgrading your optimizers may make old checkpoints incompatible.
All epsilon defaults have changed from 1e-8 to 1e-7 (this is negligible in most use cases).
v1.train.GradientDescentOptimizer can be directly replaced by tf.keras.optimizers.SGD.
v1.train.MomentumOptimizer can be replaced by the SGD optimizer using the momentum argument: tf.keras.optimizers.SGD(..., momentum=...).
v1.train.AdamOptimizer can be converted to tf.keras.optimizers.Adam. The beta1 and beta2 arguments have been renamed to beta_1 and beta_2.
v1.train.RMSPropOptimizer can be converted to tf.keras.optimizers.RMSprop. The decay argument has been renamed to rho.
v1.train.AdadeltaOptimizer can be converted directly to tf.keras.optimizers.Adadelta.
tf.train.AdagradOptimizer can be converted directly to tf.keras.optimizers.Adagrad.
tf.train.FtrlOptimizer can be converted directly to tf.keras.optimizers.Ftrl. The accum_name and linear_name arguments have been removed.
tf.contrib.AdamaxOptimizer and tf.contrib.NadamOptimizer can be converted to tf.keras.optimizers.Adamax and tf.keras.optimizers.Nadam. The beta1 and beta2 arguments have been renamed to beta_1 and beta_2.
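For example (a sketch; the learning-rate and momentum values here are chosen only for illustration):
```python
# TF 1.x
opt = tf.compat.v1.train.MomentumOptimizer(learning_rate=0.01, momentum=0.9)

# TF 2.0 equivalent
opt = tf.keras.optimizers.SGD(learning_rate=0.01, momentum=0.9)
```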
New defaults for some tf.keras.optimizers
<a id="keras_optimizer_lr"></a>
Warning: If you see a change in convergence behavior for your models, check the default learning rates.
There are no changes for optimizers.SGD, optimizers.Adam, or optimizers.RMSprop.
The following default learning rates have changed:
optimizers.Adagrad from 0.01 to 0.001
optimizers.Adadelta from 1.0 to 0.001
optimizers.Adamax from 0.002 to 0.001
optimizers.Nadam from 0.002 to 0.001
TensorBoard
TensorFlow 2 includes significant changes to the tf.summary API used to write summary data for visualization in TensorBoard. For a general introduction to the new tf.summary, see the getting-started tutorial for the TF 2 API and the TensorBoard TF 2 migration guide.
Saving and loading
Checkpoint compatibility
TensorFlow 2.0 uses object-based checkpoints.
Old-style name-based checkpoints can still be loaded, if you're careful.
The code conversion process may result in variable name changes, but there are workarounds.
The simplest approach is to line up the names of the new model with the names in the checkpoint:
Variables still all have a name argument you can set.
Keras models also take a name argument, which they set as the prefix for their variables.
The v1.name_scope function can be used to set variable name prefixes. This is very different from tf.variable_scope: it only affects names, and doesn't track variables or reuse.
If this does not work for your use-case, try the v1.train.init_from_checkpoint function. It takes an assignment_map argument, which specifies the mapping from old names to new names.
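For example (a hypothetical mapping; the checkpoint path and scope names here are placeholders):
```python
tf.compat.v1.train.init_from_checkpoint(
    '/path/to/old_checkpoint',                    # placeholder checkpoint path
    assignment_map={'old_scope/': 'new_scope/'})  # map old variable prefixes to new ones
```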
Note: Unlike object-based checkpoints, which can defer loading, name-based checkpoints require that all variables be built when the function is called. Some models defer building variables until you call build or run the model on a batch of data.
The TensorFlow Estimator repository includes a conversion tool to upgrade the checkpoints for premade estimators from TensorFlow 1.X to 2.0. It may serve as an example of how to build a tool for a similar use-case.
Saved models compatibility
There are no significant compatibility concerns for saved models.
TensorFlow 1.x saved_models work in TensorFlow 2.0.
TensorFlow 2.0 saved_models even work in TensorFlow 1.x, if all the ops are supported.
A Graph.pb or Graph.pbtxt
There is no straightforward way to upgrade a raw Graph.pb file to TensorFlow 2.0. Your best bet is to upgrade the code that generated the file.
But, if you have a "frozen graph" (a tf.Graph where the variables have been turned into constants), then it is possible to convert it to a concrete_function using v1.wrap_function:
End of explanation
"""
path = tf.keras.utils.get_file(
'inception_v1_2016_08_28_frozen.pb',
'http://storage.googleapis.com/download.tensorflow.org/models/inception_v1_2016_08_28_frozen.pb.tar.gz',
untar=True)
"""
Explanation: For example, here is a frozen graph for Inception v1, from 2016:
End of explanation
"""
graph_def = tf.compat.v1.GraphDef()
loaded = graph_def.ParseFromString(open(path,'rb').read())
"""
Explanation: Load the tf.GraphDef:
End of explanation
"""
inception_func = wrap_frozen_graph(
graph_def, inputs='input:0',
outputs='InceptionV1/InceptionV1/Mixed_3b/Branch_1/Conv2d_0a_1x1/Relu:0')
"""
Explanation: Wrap it in a concrete_function:
End of explanation
"""
input_img = tf.ones([1,224,224,3], dtype=tf.float32)
inception_func(input_img).shape
"""
Explanation: Pass it a tensor as input:
End of explanation
"""
# Define the estimator's input_fn.
def input_fn():
datasets, info = tfds.load(name='mnist', with_info=True, as_supervised=True)
mnist_train, mnist_test = datasets['train'], datasets['test']
BUFFER_SIZE = 10000
BATCH_SIZE = 64
def scale(image, label):
image = tf.cast(image, tf.float32)
image /= 255
return image, label[..., tf.newaxis]
train_data = mnist_train.map(scale).shuffle(BUFFER_SIZE).batch(BATCH_SIZE)
return train_data.repeat()
# Define the train and eval specs.
train_spec = tf.estimator.TrainSpec(input_fn=input_fn,
max_steps=STEPS_PER_EPOCH * NUM_EPOCHS)
eval_spec = tf.estimator.EvalSpec(input_fn=input_fn,
steps=STEPS_PER_EPOCH)
"""
Explanation: Estimators
Training with Estimators
Estimators are supported in TensorFlow 2.0.
When you use estimators, you can keep using input_fn(), tf.estimator.TrainSpec, and tf.estimator.EvalSpec from TensorFlow 1.x.
Here is an example using input_fn with train and evaluate specs.
Creating the input_fn and train/eval specs
End of explanation
"""
def make_model():
return tf.keras.Sequential([
tf.keras.layers.Conv2D(32, 3, activation='relu',
kernel_regularizer=tf.keras.regularizers.l2(0.02),
input_shape=(28, 28, 1)),
tf.keras.layers.MaxPooling2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.BatchNormalization(),
tf.keras.layers.Dense(10)
])
model = make_model()
model.compile(optimizer='adam',
loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
metrics=['accuracy'])
estimator = tf.keras.estimator.model_to_estimator(
keras_model = model
)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
"""
Explanation: Using a Keras model definition
There are some differences in how to construct your estimators in TensorFlow 2.0.
We recommend that you define your model using Keras, then use the tf.keras.estimator.model_to_estimator utility to turn your model into an estimator. The code below shows how to use this utility when creating and training an estimator.
End of explanation
"""
def my_model_fn(features, labels, mode):
model = make_model()
optimizer = tf.compat.v1.train.AdamOptimizer()
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
training = (mode == tf.estimator.ModeKeys.TRAIN)
predictions = model(features, training=training)
if mode == tf.estimator.ModeKeys.PREDICT:
return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
total_loss=loss_fn(labels, predictions) + tf.math.add_n(reg_losses)
accuracy = tf.compat.v1.metrics.accuracy(labels=labels,
predictions=tf.math.argmax(predictions, axis=1),
name='acc_op')
update_ops = model.get_updates_for(None) + model.get_updates_for(features)
minimize_op = optimizer.minimize(
total_loss,
var_list=model.trainable_variables,
global_step=tf.compat.v1.train.get_or_create_global_step())
train_op = tf.group(minimize_op, update_ops)
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
loss=total_loss,
train_op=train_op, eval_metric_ops={'accuracy': accuracy})
# Create the Estimator and train it.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
"""
Explanation: Note: Weighted metrics are not supported in Keras and will not be converted to weighted metrics in the estimator spec via model_to_estimator. You have to create these metrics directly on the estimator spec using the add_metrics function.
Using a custom model_fn
If you have an existing custom estimator model_fn that you need to maintain, you can convert your model_fn to use a Keras model.
However, for compatibility reasons, a custom model_fn will still run in 1.x-style graph mode. This means there is no eager execution and no automatic control dependencies.
<a name="minimal_changes"></a>
Custom model_fn with minimal changes
To make your custom model_fn work in TF 2.0 with minimal changes, the tf.compat.v1 optimizers and metrics can be used.
Using a Keras model in a custom model_fn is similar to using it in a custom training loop:
Set the training phase appropriately, based on the mode argument.
Explicitly pass the model's trainable_variables to the optimizer.
But there are important differences, relative to a custom loop:
Instead of using model.losses, extract the losses using tf.keras.Model.get_losses_for.
Extract the model's updates using tf.keras.Model.get_updates_for.
Note: "updates" are changes that need to be applied to a model after each batch. For example, the moving averages of the mean and variance in a tf.keras.layers.BatchNormalization layer.
The following code creates an estimator from a custom model_fn, illustrating all of these concerns.
End of explanation
"""
def my_model_fn(features, labels, mode):
model = make_model()
training = (mode == tf.estimator.ModeKeys.TRAIN)
loss_obj = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
predictions = model(features, training=training)
# Get both the unconditional losses (the None part)
# and the input-conditional losses (the features part).
reg_losses = model.get_losses_for(None) + model.get_losses_for(features)
total_loss=loss_obj(labels, predictions) + tf.math.add_n(reg_losses)
# Upgrade to tf.keras.metrics.
accuracy_obj = tf.keras.metrics.Accuracy(name='acc_obj')
accuracy = accuracy_obj.update_state(
y_true=labels, y_pred=tf.math.argmax(predictions, axis=1))
train_op = None
if training:
# Upgrade to tf.keras.optimizers.
optimizer = tf.keras.optimizers.Adam()
# Manually assign tf.compat.v1.global_step variable to optimizer.iterations
# so that tf.compat.v1.train.global_step is increased correctly.
# This assignment is a must for any `tf.train.SessionRunHook` specified in
# the estimator, as SessionRunHooks rely on global_step.
optimizer.iterations = tf.compat.v1.train.get_or_create_global_step()
# Get both the unconditional updates (the None part)
# and the input-conditional updates (the features part).
update_ops = model.get_updates_for(None) + model.get_updates_for(features)
# Compute the minimize_op.
minimize_op = optimizer.get_updates(
total_loss,
model.trainable_variables)[0]
train_op = tf.group(minimize_op, *update_ops)
return tf.estimator.EstimatorSpec(
mode=mode,
predictions=predictions,
loss=total_loss,
train_op=train_op,
eval_metric_ops={'Accuracy': accuracy_obj})
# Create the Estimator and train it.
estimator = tf.estimator.Estimator(model_fn=my_model_fn)
tf.estimator.train_and_evaluate(estimator, train_spec, eval_spec)
"""
Explanation: Custom model_fn with TF 2.0 symbols
If you want to get rid of all TF 1.x symbols and upgrade your custom model_fn to native TF 2.0, you need to update the optimizer and metrics to tf.keras.optimizers and tf.keras.metrics.
Besides the minimal changes above, a few more upgrades are needed in the custom model_fn:
Use tf.keras.optimizers instead of v1.train.Optimizer.
Explicitly pass the model's trainable_variables to the tf.keras.optimizers.
To compute the train_op/minimize_op,
use Optimizer.get_updates() if the loss is a scalar loss Tensor (not a callable). The first element of the returned list is the desired train_op/minimize_op.
If the loss is a callable (such as a function), use Optimizer.minimize() to get the train_op/minimize_op.
Use tf.keras.metrics instead of tf.compat.v1.metrics for evaluation.
For the my_model_fn above, the migrated code with 2.0 symbols is shown below:
End of explanation
"""
! curl -O https://raw.githubusercontent.com/tensorflow/estimator/master/tensorflow_estimator/python/estimator/tools/checkpoint_converter.py
"""
Explanation: Premade Estimators
Premade Estimators in the family of tf.estimator.DNN*, tf.estimator.Linear*, and tf.estimator.DNNLinearCombined* are still supported in the TensorFlow 2.0 API. However, some arguments have changed:
input_layer_partitioner: removed in 2.0.
loss_reduction: updated to tf.keras.losses.Reduction instead of tf.compat.v1.losses.Reduction. Its default value has also changed, from tf.compat.v1.losses.Reduction.SUM to tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE.
optimizer, dnn_optimizer, and linear_optimizer: these args have been updated to tf.keras.optimizers instead of tf.compat.v1.train.Optimizer.
To migrate the above changes (a sketch follows this list):
1. No migration is needed for input_layer_partitioner, since Distribution Strategy handles it automatically in TF 2.0.
2. For loss_reduction, check tf.keras.losses.Reduction for the supported options.
3. For the optimizer args, if you do not pass in an optimizer, dnn_optimizer, or linear_optimizer arg, or if you specify the optimizer arg as a string, you don't need to change anything; tf.keras.optimizers is used by default. Otherwise, update it from tf.compat.v1.train.Optimizer to the corresponding tf.keras.optimizers class.
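For example, a sketch of setting these arguments explicitly on a premade estimator (the feature column, hidden-unit, and learning-rate values are placeholders):
```python
feature_columns = [tf.feature_column.numeric_column('x')]
estimator = tf.estimator.DNNClassifier(
    hidden_units=[32, 16],
    feature_columns=feature_columns,
    optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
    loss_reduction=tf.keras.losses.Reduction.SUM_OVER_BATCH_SIZE)
```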
Checkpoint Converter
<a id="checkpoint_converter"></a>
The migration to keras.optimizers will break checkpoints saved with TF 1.X,
because tf.keras.optimizers generates a different set of variables to be saved in checkpoints.
To make an old checkpoint reusable after your migration to TF 2.0, try the checkpoint converter tool.
End of explanation
"""
! python checkpoint_converter.py -h
"""
Explanation: The tool has built-in help:
End of explanation
"""
# Create a TensorShape object and index into it.
i = 0
shape = tf.TensorShape([16, None, 256])
shape
"""
Explanation: TensorShape
This class was simplified to hold ints, instead of tf.compat.v1.Dimension objects, so there is no need to access .value to get an int.
Individual tf.compat.v1.Dimension objects are still accessible from tf.TensorShape.dims.
The following demonstrates the differences between TensorFlow 1.x and TensorFlow 2.0.
End of explanation
"""
value = shape[i]
value
"""
Explanation: If you had this in TF 1.x:
```python
value = shape[i].value
```
Do this in TF 2.0:
End of explanation
"""
for value in shape:
print(value)
"""
Explanation: If you had this in TF 1.x:
```python
for dim in shape:
value = dim.value
print(value)
```
Do this in TF 2.0:
End of explanation
"""
other_dim = 16
Dimension = tf.compat.v1.Dimension
if shape.rank is None:
dim = Dimension(None)
else:
dim = shape.dims[i]
dim.is_compatible_with(other_dim) # 다른 Dimension 메서드도 동일
shape = tf.TensorShape(None)
if shape:
dim = shape.dims[i]
dim.is_compatible_with(other_dim) # 다른 Dimension 메서드도 동일
"""
Explanation: If you had this in TF 1.x (or used any other Dimension method):
```python
dim = shape[i]
dim.assert_is_compatible_with(other_dim)
```
Do this in TF 2.0:
End of explanation
"""
print(bool(tf.TensorShape([]))) # 스칼라
print(bool(tf.TensorShape([0]))) # 길이 0인 벡터
print(bool(tf.TensorShape([1]))) # 길이 1인 벡터
print(bool(tf.TensorShape([None]))) # 길이를 알 수 없는 벡터
print(bool(tf.TensorShape([1, 10, 100]))) # 3D 텐서
print(bool(tf.TensorShape([None, None, None]))) # 크기를 모르는 3D 텐서
print()
print(bool(tf.TensorShape(None))) # 랭크를 알 수 없는 텐서
"""
Explanation: The boolean value of a tf.TensorShape is True if the rank is known, and False otherwise.
End of explanation
"""
|
deepmind/spurious_normativity
|
spurious_normativity_figures.ipynb
|
apache-2.0
|
import numpy as np
import matplotlib.pyplot as plt
import pickle
import scipy.stats
import seaborn as sns
import tempfile
from google.colab import files
import warnings
warnings.simplefilter('ignore', category=RuntimeWarning)
"""
Explanation: Copyright 2021 DeepMind Technologies Limited.
Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.
This colab accompanies the paper 'Spurious normativity enhances learning of compliance and enforcement behavior in artificial agents' in PNAS 2022 by Koster et al.
End of explanation
"""
f = tempfile.NamedTemporaryFile()
!gsutil cp "gs://dm_spurious_normativity/spurious_normativity.pkl" {f.name}
with open(f.name, 'rb') as pickle_file:
data = pickle.load(pickle_file)
population_data = data[0]
probe_data = data[1]
"""
Explanation: Load Data.
End of explanation
"""
n_rows = 2
n_cols = 3
condition_legends = ['no rule', 'important rule', 'silly rule']
colors_per_condition = [(.8, .9, 25./255),
(230./255, 25./255, 75./255),
(60./255, 180./255, 75./255)]
metrics_titles = ['Total Misdirected Punishing',
'Total Punishments',
'Fraction of Time Spent Marked',
'Fracton of Time Spent Poisoned',
'Total Taboo Berries Eaten',
'Collective Return']
alphabet = ['A. ', 'B. ', 'C. ', 'D. ', 'E. ', 'F. ']
y_lims_per_metric = [(0, 10),
(0, 60),
(0, 1),
(0, 0.7),
(0, 120),
(0, 5500)]
plotcounter = 1
plt.figure(facecolor='white')
fig, ax = plt.subplots(n_rows, n_cols, figsize=(25, n_rows*7), facecolor='w')
for metric, letter, y_lims in zip(
metrics_titles, alphabet, y_lims_per_metric):
plt.subplot(n_rows, n_cols, plotcounter)
for condition, line_color in zip(condition_legends, colors_per_condition):
entry = condition + ' ' + metric
condition_data = population_data[entry]
# The data do not have the same shape so we need to put them on a
# canvas of nans to concatenate them.
data_frame_for_mean = np.empty((int(1e5), len(condition_data)))
data_frame_for_mean.fill(np.nan)
for p, population in enumerate(condition_data):
trajectory = condition_data[p][1]
data_frame_for_mean[0:trajectory.shape[0], p] = trajectory
y = np.nanmean(data_frame_for_mean, axis=1)
# SEM
y_error = np.divide(
np.nanstd(data_frame_for_mean, axis=1),
np.sqrt(len(condition_data)))
x = np.arange(0, 1e9, 1e4)
plt.plot(x, y, color=line_color)
plt.fill_between(x, y-y_error, y+y_error, alpha=0.4,
color=line_color, label='_nolegend_')
plt.title(letter + metric, fontsize=18, fontweight='bold')
plt.legend(condition_legends, loc='best', fontsize=12)
plt.xlabel('Timesteps trained', fontsize=14)
plt.xlim(0, 1e9)
plt.ylim(y_lims)
plt.xticks(fontsize=14)
plt.yticks(fontsize=14)
plotcounter += 1
plt.savefig('fig4.png', dpi=500)
files.download('fig4.png')
# Statistics for Fig 4 plot, t-tests per timebin
populations = 15
timebins = 10
start_x = 0
end_x = 1e8
for tb in range(timebins):
silly_means = np.zeros((populations))
important_means = np.zeros((populations))
for p in range(populations):
silly_x = population_data['silly rule Collective Return'][p][0]
silly_y = population_data['silly rule Collective Return'][p][1]
silly_index = (silly_x > start_x) & (silly_x < end_x)
silly_mean = np.nanmean(silly_y[silly_index])
silly_means[p] = silly_mean
important_x = population_data['important rule Collective Return'][p][0]
important_y = population_data['important rule Collective Return'][p][1]
important_index = (important_x > start_x) & (important_x < end_x)
important_mean = np.nanmean(important_y[important_index])
important_means[p] = important_mean
t, p = scipy.stats.ttest_ind(silly_means, important_means)
print('For timebin ', tb+1, ' from ', start_x, ' to ', end_x)
print('Difference between silly and important rule condition:')
print('t =', np.round(t, decimals=3), ', p =', np.round(p, decimals=4))
start_x += 1e8
end_x += 1e8
"""
Explanation: Population Data
population_data contains data from the 3 conditions:
'no rule',
'important rule',
'silly rule'
Each of those conditions has 7 variables that were logged for each
population.
Collective Return
Total Berries Eaten
Total Taboo Berries Eaten
Total Punishments
Total Misdirected Punishing
Fraction of Time Spent Marked
Fracton of Time Spent Poisoned
Each entry is indexed by a combination of the condition and metric, e.g.:
'important rule Collective Return'
Each of those entries contains a list, containing different populations.
5 for no rule, 15 for the other two conditions.
Each population consists of a tuple: the data of the x and y axis to plot this metric in that particular condition of one population.
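For example (a sketch, assuming the pickle has been loaded as above), one trajectory can be pulled out like this:
```python
# x: training timesteps, y: metric values for the first 'silly rule' population
x, y = population_data['silly rule Collective Return'][0]
print(x.shape, y.shape)
```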
Probe data
probe_data contains 15 variables that respond to a probe task in one
experimental condition. The variables are, for the no rules condition:
'no_rule_berry_1' - how quickly berry 1 was approached, the actually poisonous berry.
'no_rule_berry_2' - how quickly berry 2 was approached, the harmless berry that
is taboo in the silly rules condition.
'no_rule_berry_healthy' - how quickly other berries were approached.
'no_rule_zap_marked' - how quickly a marked player was zapped.
'no_rule_zap_unmarked' - how quickly the unmarked players were zapped.
These metrics are repeated for the important_rule and silly_rule condition:
'important_rule_berry_1'
'important_rule_berry_2'
'important_rule_healthy'
'important_zap_marked'
'important_zap_unmarked'
'silly_rule_berry_1'
'silly_rule_berry_2'
'silly_rule_healthy'
'silly_zap_marked'
'silly_zap_unmarked'
Each entry contains an array with the shape [N, 20]. N is the number of
independent populations that were run and the 20 refers to the number of samples
along the training trajectory for which the probes were run. N = 5 for no_rule
and N = 15 for silly_rule and important_rule.
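For example (a sketch assuming the data above are loaded):
```python
# 15 independent 'silly rule' populations, each probed 20 times during training
print(probe_data['silly_rule_berry_1'].shape)  # expected (15, 20)
```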
Figure 4
End of explanation
"""
fig_5_conditions = ['important rule', 'silly rule']
fig_5_metrics = ['Total Punishments', 'Fracton of Time Spent Poisoned']
cutoffs = [(0, 2e8), (2e8, 4e8)]
data_for_correlation = {}
for condition in fig_5_conditions:
for metric, cutoff in zip(fig_5_metrics, cutoffs):
entry_name = condition + ' ' + metric
data_in_entry = population_data[entry_name]
mean_values = np.zeros(len(data_in_entry))
for i, d in enumerate(data_in_entry):
x = d[0]
y = d[1]
index_vec = np.where((x > cutoff[0]) & (x < cutoff[1]))
mean_values[i] = np.mean(y[index_vec])
data_for_correlation[entry_name] = mean_values
fig = plt.figure(figsize=(5, 5), facecolor='white')
ax = fig.add_subplot(111)
sns.regplot(x=data_for_correlation['silly rule Fracton of Time Spent Poisoned'],
y=data_for_correlation['silly rule Total Punishments'],
color='green')
sns.regplot(x=data_for_correlation['important rule Fracton of Time Spent Poisoned'],
y=data_for_correlation['important rule Total Punishments'],
color='red')
plt.xlabel('Fraction of time spent poisoned (later)', fontsize=12)
plt.ylabel('Punishments of players (early)', fontsize=12, labelpad=0)
plt.title('Early punishment reduces later poisoning', fontweight='bold')
plt.legend(['silly rule', 'important rule'], loc='upper right')
plt.xticks([0, .2, .4, .6])
plt.savefig('fig5.png', dpi=500)
files.download('fig5.png')
# Statistics for Fig 5 plot
sr_corr_pop = scipy.stats.pearsonr(
data_for_correlation['silly rule Fracton of Time Spent Poisoned'],
data_for_correlation['silly rule Total Punishments'])
ir_corr_pop = scipy.stats.pearsonr(
data_for_correlation['important rule Fracton of Time Spent Poisoned'],
data_for_correlation['important rule Total Punishments'])
print('Silly Rule: r =', np.round(sr_corr_pop[0], decimals=3),
'p =', np.round(sr_corr_pop[1], decimals=3))
print('Important Rule: r =', np.round(ir_corr_pop[0], decimals=3),
'p =', np.round(ir_corr_pop[1], decimals=3))
"""
Explanation: Figure 5
End of explanation
"""
def error_line(var, color):
ym = np.mean(var, axis=0)
plt.plot(ym, color=color)
ye = np.divide(np.nanstd(var, axis=0), np.sqrt(5))
# x axis is always 20
# because that is how often the agent was sampled during learning
plt.fill_between(range(20), ym-ye, ym+ye, alpha=0.2, color=color)
def populate_axis():
plt.ylim((0, 1))
y_label = 'Timesteps until termination'
plt.ylabel(y_label, fontsize=12, labelpad=-10)
plt.yticks([0, 1], [0, 1])
plt.yticks([0, 1], [30, 0])
plt.xlim((0, 20))
plt.xlabel('Timesteps trained', fontsize=12, labelpad=-10)
plt.xticks([0, 20], ['0', '1e9'])
plt.figure(facecolor='w')
fig, ax = plt.subplots(2, 3, figsize=(15, 10), facecolor='w')
plt.subplot(2, 3, 1)
error_line(probe_data['no_rule_berry_1'], 'pink')
error_line(probe_data['no_rule_berry_2'], 'teal')
error_line(probe_data['no_rule_berry_healthy'], 'blue')
populate_axis()
plt.legend(['Poisonous',
'Healthy in this condition',
'Healthy in all conditions'])
plt.title('B. Berries: No Rule', fontweight='bold')
plt.subplot(2, 3, 2)
error_line(probe_data['important_rule_berry_1'], 'pink')
error_line(probe_data['important_rule_berry_2'], 'teal')
error_line(probe_data['important_rule_healthy'], 'blue')
populate_axis()
plt.legend(['Poisonous and Taboo',
'Healthy in this condition',
'Healthy in all conditions'])
plt.title('C. Berries: Important Rule', fontweight='bold')
plt.subplot(2, 3, 3)
error_line(probe_data['silly_rule_berry_1'], 'pink')
error_line(probe_data['silly_rule_berry_2'], 'teal')
error_line(probe_data['silly_rule_healthy'], 'blue')
populate_axis()
plt.legend(['Poisonous and Taboo',
'Taboo in this condition',
'Healthy in all conditions'])
plt.title('D. Berries: Silly Rule', fontweight='bold')
plt.subplot(2, 3, 4)
error_line(probe_data['important_rule_berry_1'], 'red')
error_line(probe_data['silly_rule_berry_1'], 'green')
populate_axis()
plt.legend(['important rule', 'silly rule'])
plt.title('E. Poison berry', fontweight='bold')
plt.subplot(2, 3, 5)
error_line(probe_data['important_zap_marked'], 'red')
error_line(probe_data['silly_zap_marked'], 'green')
populate_axis()
plt.legend(['important rule', 'silly rule'])
plt.title('F. Punishing marked player', fontweight='bold')
marked_player_important_mean = np.mean(
probe_data['important_zap_marked'][:, 0:4], axis=1)
marked_player_silly_mean = np.mean(
probe_data['silly_zap_marked'][:, 0:4], axis=1)
berry1_important_mean = np.mean(
probe_data['important_rule_berry_1'][:, 4:8], axis=1)
berry_1_silly_mean = np.mean(
probe_data['silly_rule_berry_1'][:, 4:8], axis=1)
ax = plt.subplot(2, 3, 6)
# Multiply values by 30 because the probe episodes have 30 timesteps.
sns.regplot(x=berry_1_silly_mean*30, y=marked_player_silly_mean*30,
color='green', label='silly rule')
sns.regplot(x=berry1_important_mean*30, y=marked_player_important_mean*30,
color='red', label='important rule')
plt.xlabel('Timesteps to approach poisoned berry (later)', fontsize=12)
plt.ylabel('Timesteps to punish marked player (early)', fontsize=12, labelpad=0)
plt.xticks([0, 5, 10, 15], [30, 25, 20, 15])
plt.yticks([2, 5, 8], [28, 25, 22])
plt.title('G. Early punishment reduces later poisoning', fontweight='bold')
h, l = ax.get_legend_handles_labels()
ax.legend(reversed(h), reversed(l), loc='upper right')
plt.savefig('fig6.png', dpi=500)
files.download('fig6.png')
# Stats for Figure 6 F
sr_corr_probe = scipy.stats.pearsonr(
berry_1_silly_mean, marked_player_silly_mean)
ir_corr_probe = scipy.stats.pearsonr(
berry1_important_mean, marked_player_important_mean)
print('Silly Rule: r =', np.round(sr_corr_probe[0], decimals=3),
'p =', np.round(sr_corr_probe[1], decimals=3))
print('Important Rule: r =', np.round(ir_corr_probe[0], decimals=3),
'p =', np.round(ir_corr_probe[1], decimals=3))
"""
Explanation: Figure 6
End of explanation
"""
|
vbarua/PythonWorkshop
|
Code/Introduction To Python/1 - Strings, Numbers and Booleans.ipynb
|
mit
|
"This is a string!!!"
'This is also a string!!!'
"This string contains single 'quotation' marks!!!"
'This string contains double "quotation" marks!!!'
"""
Explanation: Strings, Numbers and Booleans
Strings
Python has strings, which are written using either single or double quotes.
End of explanation
"""
7
42
"""
Explanation: Numbers
Python has two types of numbers, integers and floats, which are analogous to mathematical integers and real numbers, respectively.
Integers
End of explanation
"""
5 + 3
5 - 3
5 * 3
5 / 3 # This is weird. More on this later.
5 % 3 # The remainder of 5 / 3
"""
Explanation: Integers support all of the operations you would expect:
* + Addition
* - Subtraction
* * Multiplication
* / Division
* % Modulo
End of explanation
"""
7.
42.
1.5
"""
Explanation: Floats (Floating-Point Numbers)
End of explanation
"""
3/4
"""
Explanation: Floats support the same operations as integers.
Gotchas
One thing to look out for is division involving integers.
End of explanation
"""
3./4.
"""
Explanation: Mathematically this should be 0.75, but because the computation uses integers the result rounds down (this is Python 2 behavior; in Python 3, / performs true division and // performs floor division). Using floats, on the other hand, yields a more accurate result.
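A quick check (assuming Python 3 here):
```python
print(5 / 3)   # 1.666... : true division always returns a float in Python 3
print(5 // 3)  # 1        : floor division reproduces the old integer behaviour
```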
End of explanation
"""
0.1 + 0.2
"""
Explanation: Note that just using floats isn't enough to give you a mathematically correct answer. For example
End of explanation
"""
x = 1
y = 2.
z = x + y
print(type(x))
print(type(y))
print(type(z))
3./4
"""
Explanation: On a computer, floating-point numbers are represented using a finite number of bits (ie. memory). This means that not every real number has a floating-point number associated with it, meaning that computers need to perform rounding to represent floating-point numbers and the results of computations on floating-point numbers. This is a source of error that needs to be taken into account when performing numerical computations.
For more details look here.
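A practical consequence (a short sketch) is that equality tests on floats should usually be replaced by tolerance-based comparisons:
```python
import math

print(0.1 + 0.2 == 0.3)              # False, because of rounding error
print(math.isclose(0.1 + 0.2, 0.3))  # True: compares within a small tolerance
```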
Computations with Mixed Number Types
In computations involving integers and floats, the results will be "upgraded" to floats.
End of explanation
"""
True
False
"""
Explanation: Booleans
Booleans represent true or false values.
End of explanation
"""
True and False
True or False
not True
"""
Explanation: They support the kinds of operations you would expect from boolean logic.
End of explanation
"""
1 == 2 # Check equality.
1 != 2 # Checks non-equality
2 > 4 # Size comparison
"zebra" == "zebra" # Comparing strings
(1, 2, 3) == (1, 2, 3) # Comparing tuples
"""
Explanation: Booleans can be generated from equality comparisons.
End of explanation
"""
|
LSSTC-DSFP/LSSTC-DSFP-Sessions
|
Sessions/Session11/Day2/MeasuringCentroidsAndProperMotionSolutions.ipynb
|
mit
|
# Load the packages we will use
import numpy as np
import astropy.io.fits as pf
import astropy.coordinates as co
from astropy.wcs import WCS
from matplotlib import pyplot as pl
%matplotlib inline
"""
Explanation: Practice with stellar astrometry
To accompany astrometry lecture from the Rubin Observatory Data Science Fellows Program, July 2020.
All questions and corrections can be directed to me at garyb@physics.upenn.edu
Enjoy!
Gary Bernstein, 16 July 2020
End of explanation
"""
# Make a Moffat class to draw
class MoffatPSF:
def __init__(self,half_light_radius=4., x0=15.75,y0=15.25):
# Create a Moffat PSF model that is centered at [y0,x0]
# with the specified half-light radius (given in pixels)
self.beta = 2.5
# Calculate r0 from half-light radius
tmp = np.power(0.5,1./(1-self.beta)) - 1
self.r0 = half_light_radius / np.sqrt(tmp)
self.x0 = x0
self.y0 = y0
# Factor that makes the integral of PSF be unity:
self.norm = (self.beta-1) / (np.pi * self.r0**2)
def draw(self,n_pix=32):
# Return an array of shape (n_pix,n_pix) drawing the PSF
# (using numpy convention
# of zero-indexing and fastest index, x, last)
# First make arrays holding the x and y coordinates
xy=np.indices( (n_pix,n_pix),dtype=float)
x = xy[1].copy()
y = xy[0].copy()
rsq = np.square(x-self.x0)+np.square(y-self.y0)
# Here's the basic formla for Moffat:
result = self.norm * np.power(1+rsq/(self.r0**2), -self.beta)
# Tune up the normalization to unity:
result /= np.sum(result)
return result
def realizeXY(self,n_photon=1e3):
# Produce an array of the arrival positions of
# photons, assuming that they are Poisson sample
# of a mean of n_photon arrivals.
# The returned array has shape (2,n_detected)
# where mean of n_detected is n_photon
# First pick total number of photons:
n_detected = np.random.poisson(n_photon)
# Then distribute them in radius using the trick
# of inverse cumulative distribution of uniform distribution.
r_uniform = np.random.random(n_detected)
rsq = np.power(r_uniform, 1./(1-self.beta))-1.
r = self.r0 * np.sqrt(rsq)
# Draw position angles at random and produce output
theta = np.random.random(n_detected) * (2*np.pi)
return np.stack((r*np.sin(theta)+self.y0,r*np.cos(theta)+self.x0))
def realizeImage(self,n_pix=32,n_photon=1e3):
# Returns an image of shape (n_pix,n_pix) which is a
# Poisson-noise realization of the PSF having
# a mean expected flux of n_photon.
# There are two ways to do this: just bin
# photons from realize_XY, or
# do a separate Poisson draw for each pixel.
if n_photon < n_pix*n_pix:
# The first way is probably faster
xy = self.realizeXY(n_photon=n_photon)
# Pixelize the arrival positions
xy_pix = np.floor(xy+0.5).astype(int)
# Discard photons out of bounds
keep = np.logical_and(xy_pix>=0,xy_pix<n_pix)
keep = np.logical_and(keep[0,:],keep[1,:])
xy_pix = xy_pix[:,keep]
# Do some fancy footwork to count the photons into 2d
# bins
counts = np.bincount(xy_pix[0]*n_pix+xy_pix[1],minlength=n_pix*n_pix)
counts = counts.reshape(n_pix,n_pix)
return counts.astype(float)
else:
# Better to do a Poisson draw of each pixel
counts = np.random.poisson(lam=n_photon*self.draw(n_pix),
size=(n_pix,n_pix))
return counts.astype(float)
def addBackground(image, variance):
# Add Gaussian noise with given variance to each pixel of the image
image += np.random.normal(scale=np.sqrt(variance),size=image.shape)
return
"""
Explanation: Useful tools
First I'm going to give you one class and one function that will be needed for the first couple of problems. The class called MoffatPSF describes a PSF with the following circular profile:
$$ I(r) \propto \left[1 + (r/r_0)^2\right]^{-\beta}.$$
Many ground-based PSFs look more like this Moffat function than they do like a Gaussian, so let's use this for our astrometry experiments. We'll stick with $\beta=2.5$ When you create a MoffatPSF, you give it the half_light_radius $r_{1/2}$ which encloses half of the light, rather than giving it $r_0.$ You also give the coordinates $(x_0,y_0)$ of the center of the star. All of these arguments we'll assume to be in units of pixels.
Once you have made a MoffatPSF, you can do a few things with it:
* draw a picture of the PSF as a 2d image (such that it sums to unity)
* realizeXY will return you the (x,y) pixel locations of arrival for one observation of the star - including Poisson noise.
* realizeImage will return you a 2d array that is the pixelized image of the arriving photons.
If you're curious you can look inside the class to see how it's done - I've used a few non-obvious tricks to make this fast since we want to create a lot of star images. There is a common trick of random-number generation when we are picking photon locations, where we transform a uniform deviate (which computers are good at) using a function that creates some desired distribution instead. But you don't need to understand this to do the problems.
Then there is a function addBackground which will add background noise of a chosen level (denoted as $n$ in the lecture notes) to any image.
End of explanation
"""
xy=np.indices( (32,32),dtype=float)
x = xy[1].copy()
y = xy[0].copy()
pl.imshow(x,origin='lower',interpolation='nearest')
pl.colorbar()
"""
Explanation: Here is one more very helpful hint. A lot of our activities require us to do sums like $\sum_{x,y} x I_{xy}.$ To do that it'll be very helpful to have numpy arrays that hold the x and y coordinate values of each pixel in an image. Here I'll make them for you for the case of a 32x32 square image.
Note that the numpy indexing convention places the y coordinate first in the [y,x] indexing. Also the first pixel is number 0. This differs from the convention used in FITS images, which is that the first pixel is number 1, and the first axis is the x axis.
End of explanation
"""
def centroid(photon_list):
# Returns (y,x) centroids in an array of length 2
return np.mean(photon_list, axis=1)
"""
Explanation: Exercise 1: source-dominated centroiding
(a) Write a function centroid(photon_list) which will estimate the position of a star by taking the average of its x and y photon arrival positions, if it's given a (2,N) array of photon positions such as produced by MoffatPSF.realizeXY.
End of explanation
"""
xy = []
x0,y0 = 15.22,16.17
star = MoffatPSF(x0=x0, y0=y0)
# Run my loop of realizations
for i in range(10000):
xy_i = centroid(star.realizeXY(n_photon=1000.))
xy.append(xy_i)
# Make the xy estimates into a 2xN numpy array
xy = np.stack(xy).T
# Draw our histograms
h = pl.hist(xy[1]-x0,bins=100,range=(-1,1),histtype='step',label='X')
h = pl.hist(xy[0]-y0,bins=100,range=(-1,1),histtype='step',label='Y')
pl.xlabel('Position error')
pl.ylabel('Frequency')
pl.legend()
# Calculate the standard deviations and means of the estimates
print('X mean, standard deviation:',np.mean(xy[1,:]),np.std(xy[1,:]))
print('Y mean, standard deviation:',np.mean(xy[0,:]),np.std(xy[0,:]))
"""
Explanation: (b) Now create a MoffatPSF instance with a true center of your choice (keep the default half_light_radius=4, which fits nicely into 32x32 images). Make a loop that will create photon lists for each of 10,000 observations of a star with an average flux n_photon=1e3. Use your function from (a) to create estimates $(\hat x_0, \hat y_0)$ of each star. Then produce a histogram of these 10,000 estimates (one for $x$ and one for $y$), and figure out the standard deviations $\sigma_x$ and $\sigma_y$ of your measurements.
Tip: Draw your photon lists 1 at a time, then measure each and throw it away before you draw the next one. We don't want to use up our memory keeping them all around at once.
End of explanation
"""
# Draw a PSF with a lot of photons to measure its std in each axis (they should be the same)
lots_of_photons = star.realizeXY(1e6)
sigma_psf = np.std(lots_of_photons,axis=1)
print("PSF width:",sigma_psf)
# And divide by sqrt(N)
print("Estimated centroid error:",sigma_psf / np.sqrt(1e3))
"""
Explanation: (c) Now see whether our formula for the accuracy of source-limiting centroiding is accurate:
$$ \sigma_x = \frac{\sigma_{\rm PSF}}{\sqrt{N_\gamma}}$$
in this case. You'll need to use your MoffatPSF instance to estimate its standard deviation width $\sigma_{\rm PSF}$.
End of explanation
"""
x0,y0 = 15.22,16.17
hlr = 4.
flux = 4e4
n_bg = 4e3
star = MoffatPSF(half_light_radius=4.,x0=x0, y0=y0)
img = star.realizeImage(n_pix=32,n_photon=flux)
addBackground(img, variance=n_bg)
pl.imshow(img,origin='lower',interpolation='nearest')
pl.colorbar()
"""
Explanation: Exercise 2: aperture centroids for background-limited stars
Now we're going to work with pixelized images and add background noise (say, $n=4000$). We know that simple centroiding will not be a good idea so we're going to need to use some apertures. Before proceeding, let's just draw a background-noisy star:
End of explanation
"""
# The half-light radius of the star is 4 pixels.
# The source noise within the HLR is the shot noise from half the photons:
var_src = flux / 2.
# The total background variance within the HLR is
var_bg = np.pi * hlr**2 * n_bg
print("Background variance:",var_bg)
print("Ratio of bg to src variance:",var_bg/var_src)
# Background noise dominates!
"""
Explanation: (a) Is this stellar image background-limited or source-limited? To answer, consider which one contributes more noise within the half-light radius of the star.
End of explanation
"""
# Here's my weight function class
# Remember, to use a class, you first create an "instance"
# of it, e.g.: weight = GaussAp(sigma=2.6)
# and then you can call this instance like a function of dx,dy
class GaussAp:
    # A Gaussian weight function with scale sigma
def __init__(self,sigma):
self.sigma = sigma
return
def __call__(self,dx,dy):
# Given equal-shaped arrays dx=x-x0, dy=y-y0,
# returns an array of the same shape giving weight function.
# Calculate distance of a pixel from the center
rsq = np.square(dx) + np.square(dy)
# Now return the weight
return np.exp(-rsq/(2.*self.sigma*self.sigma))
# Now a function that calculates the weighted moments
def weighted_moments(img, weight, x, y, x0, y0):
# img is the input image
# weight is a weight function
# x,y are the coordinates of the image
# x0, y0 are the centers for the weight function
w = weight(x-x0,y-y0)
mf = np.sum(img*w)
mx = np.sum(img*w*(x-x0))
my = np.sum(img*w*(y-y0))
return mf,mx,my
# Now my function that will iterate over x0,y0 to converge
def aperture_centroid(img, x, y, weight):
# Iteratively apply the weighted centroiding to the
# image (with coordinate arrays x,y given) until
# convergence, where the Mx and My moments are nulled.
# Start at the center of the image
x0 = 0.5*(img.shape[1]-1)
y0 = 0.5*(img.shape[0]-1)
iteration = 0
MAX_ITERATIONS = 20
tolerance = 0.001 # Quit when centroid moves less than this
while iteration < MAX_ITERATIONS:
mf,mx,my = weighted_moments(img,weight,x,y,x0,y0)
dx = mx / mf
dy = my / mf
if abs(dx)<tolerance and abs(dy)<tolerance:
# Our moments are now close to zero - done!
return np.array((y0,x0))
# otherwise update centroid and try again
x0 = x0 + dx
y0 = y0 + dy
iteration = iteration+1
# If we get here, we've exceeded our iteration count. This
# might happen if we have a low-S/N star. Raise an exception.
raise RuntimeError('Did not converge')
# Test that my routine runs once:
weight = GaussAp(5.)
aperture_centroid(img,x,y,weight)
"""
Explanation: (b) Write a function aperture_centroid(img,R) that will return the values of $(y0,x0)$ that satisfy
$$ M_x = \sum_{xy} I_{xy} W(x-x0,y-y0) (x-x0)=0$$
(and the same for y0), where $W$ is a weight function. I provide for you below a class GaussAp which you can use to create a Gaussian-aperture weight function with some $\sigma_w$:
$$ W(x,y) = \exp{\left(-\frac{x^2+y^2}{2\sigma_w^2}\right)}$$
To do this, you'll need to start with an initial guess for $(y0,x0),$ and then calculate the values of $M_f, M_x$ and $M_y$. Then move the aperture's $x_0,y_0$ by $(M_x/M_f, M_y/M_f)$ and try again. Iterate until the centroid doesn't move any more (or until you have done 10 iterations and need to quit!).
End of explanation
"""
weight_list = np.arange(2.0,5.,0.5)
x0,y0 = 15.22,15.88
nTrials = 4000
star = MoffatPSF(x0=x0, y0=y0)
sigma_x = []
sigma_y = []
for sigma in weight_list:
weight = GaussAp(sigma)
xy = []
# Run my loop of realizations
for i in range(nTrials):
img = star.realizeImage(n_pix=32,n_photon=flux)
addBackground(img, n_bg)
xy_i = aperture_centroid(img,x,y,weight)
xy.append(xy_i)
# Put the results into a 2d array
xy = np.vstack(xy).T
# And get the centroid error
sigma_x.append( np.std(xy[1]))
sigma_y.append( np.std(xy[0]))
# print out the mean positions, as a check
print("At sigma",sigma,"mean x,y are ({:.3f},{:.3f})".format(np.mean(xy[1]),
np.mean(xy[0])))
# Now we can look at the results:
pl.plot(weight_list,sigma_x,'ro',label='X')
pl.plot(weight_list,sigma_y,'bs',label='Y')
pl.grid()
pl.xlabel('Aperture sigma (pixels)')
pl.ylabel('Centroid error (pixels)')
pl.legend(framealpha=0.5)
"""
Explanation: (c) Now your job is to find the weight size radius $\sigma_w$ that yields the best accuracy on the centroid. To do so, use the Moffat star as created above, and measure the centroid of 4,000 realizations contaminated with background noise at the specified n_bg.
Calculate the resultant $\sigma_x,\sigma_y$ for these 4,000 stars, trying values of $\sigma_w=2,2.5,\ldots,4.5.$ Plot the astrometric accuracy vs $\sigma_w$.
Which $\sigma_w$ is best? See if you can make a rough comparison to the rule-of-thumb $\sigma_x\approx \sigma_{\rm PSF} / \nu.$
End of explanation
"""
# I need the derivative of MoffatPSF with respect to x.
# Let me make 2 Moffats that have x differing by a small amount +-dx from our nominal case.
# I'll use the values of x0,y0,hlr,n_bg,flux defined above.
# Small shift:
dx = 0.02
starplus = MoffatPSF(half_light_radius=hlr, x0=x0+dx, y0=y0)
starminus = MoffatPSF(half_light_radius=hlr, x0=x0-dx, y0=y0)
# Calculate the numerical derivative of the PSF by subtracting images
dPSF_dx = (starplus.draw(n_pix=32)-starminus.draw(n_pix=32)) / (2*dx)
# Now calculate the Fisher info
Fxx = (flux * flux / n_bg) * np.sum(dPSF_dx * dPSF_dx)
print("Optimal sigma_x:",1./np.sqrt(Fxx))
# Above I found that the best Gaussian weight obtained \sigma_x=0.084.
print("Ratio of Gaussian aperture to optimal:", 0.084 * np.sqrt(Fxx))
# So we could do about 10% better than the Gaussian aperture,
# which would be equivalent to having 20% more observing time or 2 more years of LSST!
"""
Explanation: Bonus question: Using the formula in the notes, derive the optimal centroid accuracy $\sigma_e$ attainable for the Moffat profile with the chosen size, flux, and background noise. How close is your best-choice Gaussian aperture to the optimum noise limit?
According to the notes, the Cramer-Rao bound for the variance of the x center of a star is the inverse of the Fisher information
$$F_{xx} = \frac{f^2}{n} \sum_{x,y} \left(\frac{\partial \textrm{PSF}}{\partial x}\right)^2.$$
We have two viable options here: compute this analytically by taking the derivative of the Moffat profile and then turning the sum into an integral; or compute this numerically. Let's do this numerically, since we already have the code to calculate the Moffat PSF.
End of explanation
"""
# I'll start you off by constructing an astropy WCS from the first image
# (imports assumed here; in the full notebook they were likely done in an earlier cell)
from astropy.io import fits as pf
from astropy.wcs import WCS
from astropy import coordinates as co
h = pf.getheader('old_image.fits')
wcs_old = WCS(h)
mjd_old = h['MJD-OBS']
# And here's an example of using the WCS to map to the sky:
radec = wcs_old.pixel_to_world(1834,1620)
print(radec)
print(radec.ra.degree)
# Here's the second image:
h = pf.getheader('new_image.fits')
wcs_new = WCS(h)
mjd_new = h['MJD-OBS']
"""
Explanation: Exercise 3: Using a WCS
This exercise requires four fits files: old_image.fits, new_image.fits, old_catalog.fits, and new_catalog.fits. Please download these files, and place them in the same directory as this notebook.
The 2 FITS images are from the Dark Energy Survey (DES), named old_image.fits and new_image.fits, and they overlap on the sky. According to Gaia DR2, there is a fast-moving star located at roughly $(\alpha,\delta)=$(29.91148,-8.212267). If you display these two images using DS9, then align them using the Frame->Match->Frame->WCS option, you will be able to move your cursor to those coordinates and actually see the star move.
We also have catalogs (FITS tables) named old_catalog.fits and new_catalog.fits that contain the precise pixel centroids measured by SExtractor on these frames. Your exercise is to extract the WCS from each image's header using astropy, apply it to the $(x,y)$ coordinates to obtain $(\alpha,\delta)$ for this star in each image, and then estimate its proper motion.
(a) Get a WCS and the MJD-OBS out of headers. [MJD is Modified Julian Date. Basically it gives the time of the exposure in units of days since some reference moment.]
End of explanation
"""
# I'll start you off by giving the command to read in the catalog to a table:
cat_old = pf.getdata('old_catalog.fits',1)
# Let's locate the desired star by mapping the whole catalog to sky coordinates, and using
# astropy's ability to give distances between SkyCoords and find the one that's close
# to the Gaia position.
radec = wcs_old.pixel_to_world(cat_old['XWIN_IMAGE'],cat_old['YWIN_IMAGE'])
# Find the mininum distance between a catalog object and our Gaia target
target = co.SkyCoord(ra=29.91148,dec=-8.212267,frame='icrs',unit='deg')
dist = radec.separation(target)
index_old = np.argmin(dist)
# Save the coordinates of this star
coords_old = radec[index_old]
sigma_old = cat_old['ERRAWIN_IMAGE'][index_old] * 0.264 # Change into arcseconds
print("Star is index",index_old,"in old image at",coords_old)
# Now repeat for the new catalog
cat_new = pf.getdata('new_catalog.fits',1)
radec = wcs_new.pixel_to_world(cat_new['XWIN_IMAGE'],cat_new['YWIN_IMAGE'])
dist = radec.separation(target)
index_new = np.argmin(dist)
print(dist[index_new])  # check that the matched object is close to the Gaia position
# Save the coordinates of this star in the new image
coords_new = radec[index_new]
sigma_new = cat_new['ERRAWIN_IMAGE'][index_new] * 0.264 # Change into arcseconds
print("Star is index",index_new,"in new image at",coords_new)
"""
Explanation: (b) Now read the FITS catalogs into tables. We will use the columns XWIN_IMAGE and YWIN_IMAGE to give us the pixel coordinates, with ERRAWIN_IMAGE giving the rough uncertainty on each (in units of pixels, 1 pixel = 0.264 arcsec).
By whatever means you choose, figure out which row in each catalog corresponds to the fast-moving star at the coordinates given above. Then use the WCS's to obtain the ICRS RA & Dec measured for this star on these two images.
End of explanation
"""
# First get the time interval, in years:
dt = (mjd_new - mjd_old) / 365.25 #(number of days in a Julian year)
print("Years between observations:",dt)
# Now get the change in dec and RA, which we convert to milliarcsec
dRA = (coords_new.ra.deg - coords_old.ra.deg) * 3600 * 1000. * np.cos(coords_new.dec.rad)
dDec = (coords_new.dec.deg - coords_old.dec.deg) * 3600 * 1000.
# The uncertainty in the difference is the quadrature sum of the individual errors
sigma_diff = np.sqrt( sigma_new**2 + sigma_old**2) * 1000. # convert to mas.
# Add 2 exposures' worth of 10 mas turbulence to the sigma
sigma_diff = np.sqrt(sigma_diff**2 + 2 * 10.**2)
# Now divide by time to get the rates of motion.
print("PMRA: {:.2f} +- {:.2f}".format(dRA/dt, sigma_diff/dt))
print("PMDec: {:.2f} +- {:.2f}".format(dDec/dt, sigma_diff/dt))
"""
Explanation: (c) Using these two sky coordinates, estimate the rate of proper motion (in mas/yr) of the star between the two exposures. Note that proper motions are traditionally given in real angular motions, so that (RA PM) = (difference in RA) * cos(dec) / (time interval). [You can ignore the fact that there might be parallax motion mixed in here.]
Give an uncertainty for our measurement as well. Here we have to be careful, because ERRAWIN_IMAGE includes only the shot-noise errors. It does not include the contribution from atmospheric turbulence, which we know is dominant for brighter stars. So let's add another 10 mas of uncertainty in quadrature to each coordinate of each measurement as a guess of the size of the turbulence.
Does the DES measurement agree with Gaia's estimates? Are there any reasons it might disagree?
pmra = 326.06 +- 0.94 mas/yr
pmdec = -124.12 +- 0.81 mas/yr
End of explanation
"""
|
gcgruen/homework
|
foundations-homework/05/homework-05-gruen-nyt_graded.ipynb
|
mit
|
#API Key: 0c3ba2a8848c44eea6a3443a17e57448
"""
Explanation: All API's: http://developer.nytimes.com/
Article search API: http://developer.nytimes.com/article_search_v2.json
Best-seller API: http://developer.nytimes.com/books_api.json#/Documentation
Test/build queries: http://developer.nytimes.com/
Tip: Remember to include your API key in all requests! And their interactive web thing is pretty bad. You'll need to register for the API key.
1) What books topped the Hardcover Fiction NYT best-sellers list on Mother's Day in 2009 and 2010? How about Father's Day?
End of explanation
"""
import requests
bestseller_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/2009-05-10/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')
bestseller_data = bestseller_response.json()
print("The type of bestseller_data is:", type(bestseller_data))
print("The keys of bestseller_data are:", bestseller_data.keys())
# Exploring the data structure further
bestseller_books = bestseller_data['results']
print(type(bestseller_books))
print(bestseller_books[0])
for book in bestseller_books:
#print("NEW BOOK!!!")
#print(book['book_details'])
#print(book['rank'])
if book['rank'] == 1:
for element in book['book_details']:
print("The book that topped the hardcover fiction NYT Beststeller list on Mothers Day in 2009 was", element['title'], "written by", element['author'])
"""
Explanation: Graded = 8/8
End of explanation
"""
def bestseller(x, y):
bestsellerA_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/'+ x +'/hardcover-fiction?api-key=0c3ba2a8848c44eea6a3443a17e57448')
bestsellerA_data = bestsellerA_response.json()
bestsellerA_books = bestsellerA_data['results']
for book in bestsellerA_books:
if book['rank'] == 1:
for element in book['book_details']:
print("The book that topped the hardcover fiction NYT Beststeller list on", y, "was",
element['title'], "written by", element['author'])
bestseller('2009-05-10', "Mothers Day 2009")
bestseller('2010-05-09', "Mothers Day 2010")
bestseller('2009-06-21', "Fathers Day 2009")
bestseller('2010-06-20', "Fathers Day 2010")
#Alternative solution would be, instead of putting this code into a function to loop it:
#1) to create a dictionary called dates containing y as keys and x as values to these keys
#2) to take the above code and nest it into a for loop that loops through the dates, each time using the next key:value pair
# for date in dates:
# replace value in URL and run the above code used inside the function
# replace key in print statement
"""
Explanation: After writing code that returns the result for one date, we now automate it for the various dates using a function:
End of explanation
"""
# STEP 1: Exploring the data structure using just one of the dates from the question
bookcat_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=2009-06-06&api-key=0c3ba2a8848c44eea6a3443a17e57448')
bookcat_data = bookcat_response.json()
print(type(bookcat_data))
print(bookcat_data.keys())
bookcat = bookcat_data['results']
print(type(bookcat))
print(bookcat[0])
# STEP 2: Writing a loop that runs the same code for both dates (no function, as only one variable)
dates = ['2009-06-06', '2015-06-15']
for date in dates:
bookcatN_response = requests.get('http://api.nytimes.com/svc/books/v2/lists/names.json?published-date=' + date + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')
bookcatN_data = bookcatN_response.json()
bookcatN = bookcatN_data['results']
category_listN = []
for category in bookcatN:
category_listN.append(category['display_name'])
print(" ")
print("THESE WERE THE DIFFERENT BOOK CATEGORIES THE NYT RANKED ON", date)
for cat in category_listN:
print(cat)
# STEP 1a: EXPLORING THE DATA
test_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=Gaddafi+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')
test_data = test_response.json()
print(type(test_data))
print(test_data.keys())
test_hits = test_data['response']
print(type(test_hits))
print(test_hits.keys())
# STEP 1b: EXPLORING THE META DATA
test_hits_meta = test_data['response']['meta']
print("The meta data of the search request is a", type(test_hits_meta))
print("The dictionary despot_hits_meta has the following keys:", test_hits_meta.keys())
print("The search requests with the TEST URL yields total:")
test_hit_count = test_hits_meta['hits']
print(test_hit_count)
# STEP 2: BUILDING THE CODE TO LOOP THROUGH DIFFERENT SPELLINGS
despot_names = ['Gadafi', 'Gaddafi', 'Kadafi', 'Qaddafi']
for name in despot_names:
despot_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=' + name +'+Libya&api-key=0c3ba2a8848c44eea6a3443a17e57448')
despot_data = despot_response.json()
despot_hits_meta = despot_data['response']['meta']
despot_hit_count = despot_hits_meta['hits']
print("The NYT has referred to the Libyan despot", despot_hit_count, "times using the spelling", name)
"""
Explanation: 2) What are all the different book categories the NYT ranked on June 6, 2009? How about June 6, 2015?
End of explanation
"""
hip_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=hipster&fq=pub_year:1995&api-key=0c3ba2a8848c44eea6a3443a17e57448')
hip_data = hip_response.json()
print(type(hip_data))
print(hip_data.keys())
# STEP 1: EXPLORING THE DATA STRUCTURE:
hipsters = hip_data['response']
#print(hipsters)
#hipsters_meta = hipsters['meta']
#print(type(hipsters_meta))
hipsters_results = hipsters['docs']
print(hipsters_results[0].keys())
#print(type(hipsters_results))
#STEP 2: LOOPING FOR THE ANSWER:
earliest_date = '1996-01-01'
for mention in hipsters_results:
if mention['pub_date'] < earliest_date:
earliest_date = mention['pub_date']
print("This is the headline of the first text to mention 'hipster' in 1995:", mention['headline']['main'])
print("It was published on:", mention['pub_date'])
print("This is its lead paragraph:")
print(mention['lead_paragraph'])
"""
Explanation: 4) What's the title of the first story to mention the word 'hipster' in 1995? What's the first paragraph?
End of explanation
"""
# data structure requested same as in task 3, just this time loop though different date ranges
def countmention(a, b, c):
if b == ' ':
marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date='+ a +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
else:
marry_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q="gay marriage"&begin_date='+ a +'&end_date='+ b +'&api-key=0c3ba2a8848c44eea6a3443a17e57448')
marry_data = marry_response.json()
marry_hits_meta = marry_data['response']['meta']
marry_hit_count = marry_hits_meta['hits']
print("The count for NYT articles mentioning 'gay marriage' between", c, "is", marry_hit_count)
#supposedly, there's a way to solve the following part in a more efficient way, but those I tried did not work,
#so it ended up being more time-efficient just to type it:
countmention('19500101', '19591231', '1950 and 1959')
countmention('19600101', '19691231', '1960 and 1969')
countmention('19700101', '19791231', '1970 and 1979')
countmention('19800101', '19891231', '1980 and 1989')
countmention('19900101', '19991231', '1990 and 1999')
countmention('20000101', '20091231', '2000 and 2009')
countmention('20100101', ' ', '2010 and present')
"""
Explanation: 5) How many times was gay marriage mentioned in the NYT between 1950-1959, 1960-1969, 1970-1979, 1980-1989, 1990-1999, 2000-2009, and 2010-present?
Tip: You'll want to put quotes around the search term so it isn't just looking for "gay" and "marriage" in the same article.
Tip: Write code to find the number of mentions between Jan 1, 1950 and Dec 31, 1959.
End of explanation
"""
moto_response = requests.get('http://api.nytimes.com/svc/search/v2/articlesearch.json?q=motorcycle&facet_field=section_name&facet_filter=true&api-key=0c3ba2a8848c44eea6a3443a17e57448')
moto_data = moto_response.json()
#STEP 1: EXPLORING DATA STRUCTURE
#print(type(moto_data))
#print(moto_data.keys())
#print(moto_data['response'])
#print(moto_data['response'].keys())
#print(moto_data['response']['facets'])
#STEP 2: Code to get to the answer
moto_facets = moto_data['response']['facets']
#print(moto_facets)
#print(moto_facets.keys())
moto_sections = moto_facets['section_name']['terms']
#print(moto_sections)
#this for loop is not necessary, but it's nice to know the counts
#(also to check whether the next loop identifies the right section)
for section in moto_sections:
print("The section", section['term'], "mentions motorcycles", section['count'], "times.")
most_motorcycles = 0
for section in moto_sections:
if section['count'] > most_motorcycles:
most_motorcycles = section['count']
print(" ")
print("That means the section", section['term'], "mentions motorcycles the most, namely", section['count'], "times.")
"""
Explanation: 6) What section talks about motorcycles the most?
Tip: You'll be using facets
End of explanation
"""
picks_offset_values = [0, 20, 40]
picks_review_list = []
for value in picks_offset_values:
picks_response = requests.get ('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')
picks_data = picks_response.json()
#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)
#print(picks_data.keys())
#print(picks_data['num_results'])
#print(picks_data['results'])
#print(type(picks_data['results']))
#print(picks_data['results'][0].keys())
#STEP 2: After writing a test code (not shown) without the loop, now CODING THE LOOP
last_reviews = picks_data['num_results']
picks_results = picks_data['results']
critics_pick_count = 0
for review in picks_results:
if review['critics_pick'] == 1:
critics_pick_count = critics_pick_count + 1
picks_new_count = critics_pick_count
picks_review_list.append(picks_new_count)
print("Out of the last", last_reviews + value, "movie reviews,", sum(picks_review_list), "were Critics' picks.")
"""
Explanation: 7) How many of the last 20 movies reviewed by the NYT were Critics' Picks? How about the last 40? The last 60?
Tip: You really don't want to do this 3 separate times (1-20, 21-40 and 41-60) and add them together. What if, perhaps, you were able to figure out how to combine two lists? Then you could have a 1-20 list, a 1-40 list, and a 1-60 list, and then just run similar code for each of them.
End of explanation
"""
#STEP 1: EXPLORING THE DATA STRUCTURE (without the loop)
#critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=0&api-key=0c3ba2a8848c44eea6a3443a17e57448')
#critics_data = critics_response.json()
#print(critics_data.keys())
#print(critics_data['num_results'])
#print(critics_data['results'])
#print(type(critics_data['results']))
#print(critics_data['results'][0].keys())
#STEP 2: CREATE A LOOP, THAT GOES THROUGH THE SEARCH RESULTS FOR EACH OFFSET VALUE AND STORES THE RESULTS IN THE SAME LIST
#(That list is then passed on to step 3)
critics_offset_value = [0, 20]
critics_list = [ ]
for value in critics_offset_value:
critics_response = requests.get('http://api.nytimes.com/svc/movies/v2/reviews/search.json?&offset=' + str(value) + '&api-key=0c3ba2a8848c44eea6a3443a17e57448')
critics_data = critics_response.json()
critics = critics_data['results']
for review in critics:
critics_list.append(review['byline'])
#print(critics_list)
unique_critics = set(critics_list)
#print(unique_critics)
#STEP 3: FOR EVERY NAME IN THE UNIQUE CRITICS LIST, LOOP THROUGH NON-UNIQUE LIST TO COUNT HOW OFTEN THEY OCCUR
#STEP 4: SELECT THE ONE THAT HAS WRITTEN THE MOST (from the #print statement below, I know it's two people with same score)
max_count = 0
for name in unique_critics:
name_count = 0
for critic in critics_list:
if critic == name:
name_count = name_count + 1
if name_count > max_count:
max_count = name_count
max_name = name
if name_count == max_count:
same_count = name_count
same_name = name
#print(name, "has written", name_count, "reviews out of the last 40 reviews.")
print(max_name, "has written the most of the last 40 reviews:", max_count)
print(same_name, "has written the most of the last 40 reviews:", same_count)
"""
Explanation: 8) Out of the last 40 movie reviews from the NYT, which critic has written the most reviews?
End of explanation
"""
|
valentina-s/GLM_PythonModules
|
notebooks/MLE_multipleNeuronsWeights.ipynb
|
bsd-2-clause
|
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import random
import csv
%matplotlib inline
import os
import sys
sys.path.append(os.path.join(os.getcwd(),'..'))
sys.path.append(os.path.join(os.getcwd(),'..','code'))
sys.path.append(os.path.join(os.getcwd(),'..','data'))
import filters
import likelihood_functions as lk
import PoissonProcessClasses as PP
import auxiliary_functions as auxfun
import imp
imp.reload(filters)
imp.reload(lk)
imp.reload(auxfun)
imp.reload(PP)
# Number of neurons
nofCells = 2
"""
Explanation: This notebook presents how to perform maximum-likelihood parameter estimation for multiple neurons. The neurons depend on each other through a set of weights.
End of explanation
"""
# creating the path to the data
data_path = os.path.join(os.getcwd(),'..','data')
# reading stimulus
Stim = np.array(pd.read_csv(os.path.join(data_path,'Stim2.csv'),header = None))
# reading location of spikes
# (lengths of tsp sequences are not equal so reading them line by line)
tsp_list = []
with open(os.path.join(data_path,'tsp2.csv')) as csvfile:
tspreader = csv.reader(csvfile)
for row in tspreader:
tsp_list.append(row)
"""
Explanation: Reading input-output data:
End of explanation
"""
dt = 0.01
y_list = []
for tsp in tsp_list:
tsp = np.array(tsp).astype(np.float)
tsp_int = np.ceil((tsp - dt*0.001)/dt)
tsp_int = np.reshape(tsp_int,(tsp_int.shape[0],1))
tsp_int = tsp_int.astype(int)
y_list.append(np.array([item in tsp_int for item in np.arange(Stim.shape[0]/dt)+1]).astype(int))
"""
Explanation: Extracting a spike train from spike positions:
End of explanation
"""
# create a stimulus filter
kpeaks = np.array([0,round(20/3)])
pars_k = {'neye':5,'n':5,'kpeaks':kpeaks,'b':3}
K,K_orth,kt_domain = filters.createStimulusBasis(pars_k, nkt = 20)
# create a post-spike filter
hpeaks = np.array([0.1,2])
pars_h = {'n':5,'hpeaks':hpeaks,'b':.4}
H,H_orth,ht_domain = filters.createPostSpikeBasis(pars_h,dt)
# Interpolate Post Spike Filter
MSP = auxfun.makeInterpMatrix(len(ht_domain),1)
MSP[0,0] = 0
H_orth = np.dot(MSP,H_orth)
"""
Explanation: Creating filters:
End of explanation
"""
M_k = lk.construct_M_k(Stim,K,dt)
M_h_list = []
for tsp in tsp_list:
tsp = np.array(tsp).astype(np.float)
M_h_list.append(lk.construct_M_h(tsp,H_orth,dt,Stim))
# creating a matrix of output covariates
Y = np.array(y_list).T
"""
Explanation: Conditional Intensity (spike rate):
$$\lambda_{\beta}(i) = \exp\left(K(\beta_k)Stim + H(\beta_h)y + \sum_{j\ne i}w_j I(\beta_{I})*y_j + \mu\right)$$
$$\lambda_{\beta}(i) = \exp(M_k\beta_k + M_h \beta_h + Y w + \mu)$$
Creating a matrix of covariates:
End of explanation
"""
# tsp_list = []
# for i in range(nofCells):
# tsp_list.append(auxfun.simSpikes(np.hstack((coeff_k,coeff_h)),M,dt))
M_list = []
for i in range(len(M_h_list)):
# exclude the i'th spike-train
M_list.append(np.hstack((M_k,M_h_list[i],np.delete(Y,i,1),np.ones((M_k.shape[0],1)))))
#M_list.append(np.hstack((M_k,M_h_list[i],np.ones((M_h.shape[0],1)))))
"""
Explanation: <!---Simulating a neuron spike trains:-->
End of explanation
"""
coeff_k0 = np.array([ 0.061453,0.284916,0.860335,1.256983,0.910615,0.488660,-0.887091,0.097441,0.026607,-0.090147])
coeff_h0 = np.zeros((5,))
coeff_w0 = np.zeros((nofCells,))
mu_0 = 0
# Several candidate starting points were tried; the last assignment
# (all zeros, length 17) is the one actually used for the fits below.
pars0 = np.hstack((coeff_k0,coeff_h0,coeff_w0,mu_0))
pars0 = np.hstack((coeff_k0,coeff_h0,mu_0))
pars0 = np.zeros((17,))
"""
Explanation: Conditional intensity as a function of the covariates:
$$ \lambda_{\beta} = \exp(M\beta) $$
Create a Poisson process model with this intensity:
Setting initial parameters:
End of explanation
"""
res_list = []
for i in range(len(y_list)):
model = PP.PPModel(M_list[i].T,dt = dt/100)
res_list.append(model.fit(y_list[i],start_coef = pars0,maxiter = 500, method = 'L-BFGS-B'))
"""
Explanation: Fitting the likelihood:
End of explanation
"""
k_coeff = np.array([0.061453, 0.284916, 0.860335, 1.256983, 0.910615, 0.488660, -0.887091, 0.097441, 0.026607, -0.090147])
h_coeff = np.array([-15.18,38.24,-67.58,-14.06,-3.36])
for i in range(len(res_list)):
k_coeff_predicted = res_list[i].x[:10]
h_coeff_predicted = res_list[i].x[10:15]
print('Estimated dc for neuron '+str(i)+': '+str(res_list[i].x[-1]))
fig,axs = plt.subplots(1,2,figsize = (10,5))
fig.suptitle('Neuron%d'%(i+1))
axs[0].plot(-kt_domain[::-1],np.dot(K,k_coeff_predicted),'r',label = 'predicted')
axs[0].set_title('Stimulus Filter')
axs[0].hold(True)
axs[0].plot(-kt_domain[::-1],np.dot(K,k_coeff),'b',label = 'true')
axs[0].plot(-kt_domain[::-1],np.dot(K,pars0[:10]),'g',label = 'initial')
axs[0].set_xlabel('Time')
axs[0].legend(loc = 'upper left')
axs[1].set_title('Post-Spike Filter')
axs[1].plot(ht_domain,np.dot(H_orth,h_coeff_predicted),'r',label = 'predicted')
axs[1].plot(ht_domain,np.dot(H_orth,h_coeff),'b',label = 'true')
axs[1].plot(ht_domain,np.dot(H_orth,coeff_h0[:H_orth.shape[1]]),'g',label = 'initial')
axs[1].set_title('Post-Spike Filter')
axs[1].set_xlabel('Time')
axs[1].legend(loc = 'upper right')
"""
Explanation: Specifying the true parameters:
End of explanation
"""
W = np.array([np.hstack((res_list[i].x[-(nofCells):-nofCells+i],0,res_list[i].x[-nofCells+i:-1])) for i in range(len(res_list))])
print(W)
"""
Explanation: Extracting the weight matrix:
End of explanation
"""
|
PMEAL/OpenPNM
|
examples/simulations/steady_state/continuum_heat_transfer.ipynb
|
mit
|
%matplotlib inline
import numpy as np
import scipy as sp
import openpnm as op
%config InlineBackend.figure_formats = ['svg']
np.random.seed(10)
ws = op.Workspace()
ws.settings["loglevel"] = 40
np.set_printoptions(precision=5)
"""
Explanation: Fourier Conduction
This examples shows how OpenPNM can be used to simulate thermal conduction on a generic grid of nodes. The result obtained from OpenPNM is compared to the analytical result.
As usual, start by importing OpenPNM, and the SciPy library.
End of explanation
"""
divs = [10, 50]
Lc = 0.1 # cm
pn = op.network.Cubic(shape=divs, spacing=Lc)
pn.add_boundary_pores(['left', 'right', 'front', 'back'])
"""
Explanation: Generating the Network object
Next, a 2D Network is generated with dimensions of 10x50 elements. The lattice spacing is given by Lc. Boundaries are added all around the edges of the Network object using the add_boundary_pores method.
End of explanation
"""
# Create Phase object and associate with a Physics object
Cu = op.phases.GenericPhase(network=pn)
"""
Explanation: Creating a Phase object
All simulations require a Phase object which possesses the thermophysical properties of the system. In this case, we'll create a generic phase object and call it copper, though it has no properties yet; we'll add these by hand later.
End of explanation
"""
# Add a unit conductance to all connections
Cu['throat.thermal_conductance'] = 1
# Overwrite boundary conductances since those connections are half as long
Ps = pn.pores('*boundary')
Ts = pn.find_neighbor_throats(pores=Ps)
Cu['throat.thermal_conductance'][Ts] = 2
"""
Explanation: Assigning Thermal Conductance to Copper
In a proper OpenPNM model we would create a Geometry object to manage all the geometrical properties, and a Physics object to calculate the thermal conductance based on the geometric information and the thermophysical properties of copper. In the present case, however, we'll just calculate the conductance manually and assign it to Cu.
End of explanation
"""
# Setup Algorithm object
alg = op.algorithms.FourierConduction(network=pn, phase=Cu)
inlets = pn.pores('right_boundary')
outlets = pn.pores(['front_boundary', 'back_boundary', 'left_boundary'])
T_in = 30*np.sin(np.pi*pn['pore.coords'][inlets, 1]/5)+50
alg.set_value_BC(values=T_in, pores=inlets)
alg.set_value_BC(values=50, pores=outlets)
alg.run()
"""
Explanation: Generating the algorithm objects and running the simulation
The last step in the OpenPNM simulation involves the generation of an Algorithm object and running the simulation.
End of explanation
"""
import matplotlib.pyplot as plt
sim = alg['pore.temperature'][pn.pores('internal')]
temp_map = np.reshape(a=sim, newshape=divs)
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(temp_map, cmap=plt.cm.plasma);
plt.colorbar();
"""
Explanation: This is the last step usually required in an OpenPNM simulation. The algorithm was run, and now the simulation data can be analyzed. For illustrative purposes, the results obtained using OpenPNM are compared to the analytical solution of the problem below.
First let's reshape the 'pore.temperature' array into the shape of the network while also extracting only the internal pores to avoid showing the boundaries.
End of explanation
"""
print(f"T_average (numerical): {alg['pore.temperature'][pn.pores('internal')].mean():.5f}")
"""
Explanation: Also, let's take a look at the average temperature:
End of explanation
"""
# Calculate analytical solution over the same domain spacing
X = pn['pore.coords'][:, 0]
Y = pn['pore.coords'][:, 1]
soln = 30*np.sinh(np.pi*X/5)/np.sinh(np.pi/5)*np.sin(np.pi*Y/5) + 50
soln = soln[pn.pores('internal')]
soln = np.reshape(soln, (divs[0], divs[1]))
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(soln, cmap=plt.cm.plasma);
plt.colorbar();
"""
Explanation: The analytical solution is computed at the coordinates of all pores; the internal pores are then extracted and reshaped to match the numerical temperature map.
End of explanation
"""
print(f"T_average (analytical): {soln.mean():.5f}")
"""
Explanation: Also, let's take a look at the average temperature:
End of explanation
"""
diff = soln - temp_map
plt.subplots(1, 1, figsize=(10, 5))
plt.imshow(diff, cmap=plt.cm.plasma);
plt.colorbar();
print(f"Minimum error: {diff.min():.5f}, maximum error: {diff.max():.5f}")
"""
Explanation: Finally, the analytical solution and the OpenPNM result are subtracted from each other to show the difference between the two.
End of explanation
"""
|
Ruediger-Braun/compana16
|
Lektion12-Fehler.ipynb
|
gpl-3.0
|
from sympy import *
init_printing()
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
"""
Explanation: Lektion 12
End of explanation
"""
x = Symbol('x', real=True)
A = Matrix(3,3, [x,x,0,0,x,x,0,0,x])
A
A.exp()
"""
Explanation: Matrix exponentials
End of explanation
"""
A = Matrix(4,4,[0,1,0,0,-1,0,1,0,0,0,0,1,1,0,-3, 0])
A
A.eigenvals()
"""
Explanation: Coupled pendulums
\begin{align}
y'' &= w - y + \cos(2t)\\
w'' &= y - 3w
\end{align}
This translates into
\begin{align}
y_0' &= y_1 \\
y_1' &= y_2 - y_0 + \cos(2t) \\
y_2' &= y_3 \\
y_3' &= y_0 - 3 y_2
\end{align}
End of explanation
"""
%time Phi = (x*A).exp() # fundamental system for the ODE system
"""
Explanation: Fundamental system
End of explanation
"""
%time len(latex(Phi))
t = Symbol('t', real=True)
mu = list(A.eigenvals())
mu
phi = [exp(mm*t) for mm in mu]
phi
def element(i, j):
f = phi[j]
return f.diff(t, i)
Phi = Matrix(4, 4, element)
Phi
P1 = Phi**(-1)
len(latex(P1))
P4 = Phi.inv()
len(latex(P4))
A3 = P1*Phi
A3[0,0].n()
A4 = simplify(A3)
A4[0,0].n()
A3[0,0].simplify()
Out[44].n()
len(latex(A3[0,0]))
A2 = simplify(P1*Phi)
A2[0,0]
A2[0,0].n()
P2 = simplify(P1.expand())
len(latex(P2))
P2
(P2*Phi).simplify()
A = Out[31]
A[0,0].n()
B = Matrix([0, cos(2*t), 0, 0])
B
P2*B
P3 = Integral(P2*B, t).doit()
P3
tmp = (Phi*P3)[0]
tmp = tmp.simplify()
expand(tmp).collect([sin(2*t), cos(2*t)])
psi2 = (Phi*P3)[2]
psi2.simplify().expand()
im(psi2.simplify()).expand()
M = Matrix([0,1,t])
Integral(M, t).doit()
"""
Explanation: Unfortunately, the fundamental system becomes too complicated
End of explanation
"""
x = Symbol('x')
y = Function('y')
dgl = Eq(y(x).diff(x,2), -sin(y(x)))
dgl
#dsolve(dgl) # NotImplementedError
"""
Explanation: Numerical solutions
End of explanation
"""
import mpmath  # assumed import; in the original notebook mpmath may have been imported in an earlier cell
def F(x, y):
y0, y1 = y
w0 = y1
w1 = -mpmath.sin(y0)
return [w0, w1]
F(0,[0,1])
ab = [mpmath.pi/2, 0]
x0 = 0
phi = mpmath.odefun(F, x0, ab)
phi(1)
xn = np.linspace(0, 25, 200)
wn = [phi(xx)[0] for xx in xn]
dwn = [phi(xx)[1] for xx in xn]
plt.plot(xn, wn, label="$y$")
plt.plot(xn, dwn, label="$y'$")
plt.legend();
"""
Explanation: The function mpmath.odefun solves the differential equation $[y_0', \dots, y_n'] = F(x, [y_0, \dots, y_n])$.
End of explanation
"""
%time phi(50)
%time phi(60)
%time phi(40)
"""
Explanation: Results are stored internally (cached)
End of explanation
"""
dgl
eta = Symbol('eta')
y0 = Symbol('y0')
"""
Explanation: The pendulum equation
End of explanation
"""
H = Integral(-sin(eta), eta).doit()
H
E = y(x).diff(x)**2/2 - H.subs(eta, y(x)) # Energie
E
E.diff(x)
E.diff(x).subs(dgl.lhs, dgl.rhs)
"""
Explanation: We solve the initial value problem $y'' = -\sin(y)$, $y(0) = y_0$, $y'(0) = 0$.
End of explanation
"""
E0 = E.subs({y(x): y0, y(x).diff(x): 0})
E0
dgl_E = Eq(E, E0)
dgl_E
# dsolve(dgl_E) # abgebrochen
"""
Explanation: The energy is a conserved quantity.
End of explanation
"""
Lsg = solve(dgl_E, y(x).diff(x))
Lsg
h = Lsg[0].subs(y(x), eta)
h
I1 = Integral(1/h, eta).doit()
I1
"""
Explanation: We solve it with the method of separation of variables.
End of explanation
"""
I2 = Integral(1/h, (eta, y0, -y0))
I2
def T(ypsilon0):
return 2*re(I2.subs(y0, ypsilon0).n())
T(pi/2)
phi(T(pi/2)), mpmath.pi/2
xn = np.linspace(0.1, .95*np.pi, 5)
wn = [T(yy) for yy in xn]
plt.plot(xn, wn);
"""
Explanation: Indeed, it is not elementarily integrable.
Separation of variables leads to
$$ -\frac{\sqrt2}2 \int_{y_0}^{y(x)} \frac{d\eta}{\sqrt{\cos(\eta)-\cos(y_0)}} = x. $$
In particular,
$$ -\frac{\sqrt2}2 \int_{y_0}^{-y_0} \frac{d\eta}{\sqrt{\cos(\eta)-\cos(y_0)}} $$
equals half the oscillation period.
End of explanation
"""
|
IvarsKarpics/mxcube
|
bin/mxcube_jupyter_notebook.ipynb
|
lgpl-3.0
|
import os
import sys
cwd = os.getcwd()
print cwd
mxcube_root = cwd[:-4]
print mxcube_root
sys.path.insert(0, mxcube_root)
from HardwareRepository import HardwareRepository
#print "MXCuBE home directory: %s" % cwd
hwr_server = mxcube_root + "/HardwareRepository/configuration/xml-qt"
HardwareRepository.setHardwareRepositoryServer(hwr_server)
hardware_repository = HardwareRepository.HardwareRepository()
hardware_repository.connect()
HardwareRepository.add_hardware_objects_dirs([mxcube_root + "/HardwareObjects"])
"""
Explanation: Welcome to the MXCuBE jupyter Notebook service!
Press "Shift + Enter" to proceed
End of explanation
"""
energy_hwobj = hardware_repository.get_hardware_object("energy-mockup")
attenuators_hwobj = hardware_repository.get_hardware_object("attenuators-mockup")
detector_hwobj = hardware_repository.get_hardware_object("detector-mockup")
mach_info_hwobj = hardware_repository.get_hardware_object("mach-info-mockup")
resolution_hwobj = hardware_repository.get_hardware_object("resolution-mockup")
transmission_hwobj = hardware_repository.get_hardware_object("transmission-mockup")
print energy_hwobj.energy_value
print attenuators_hwobj.value
print resolution_hwobj.currentResolution
"""
Explanation: Try to load some hardware objects defined in the xml-qt:
End of explanation
"""
print dir(energy_hwobj)
energy_hwobj.getChannel("chanTEST")
"""
Explanation: Use dir to see available methods and variables
End of explanation
"""
|
mattilyra/gensim
|
docs/notebooks/doc2vec-wikipedia.ipynb
|
lgpl-2.1
|
from gensim.corpora.wikicorpus import WikiCorpus
from gensim.models.doc2vec import Doc2Vec, TaggedDocument
from pprint import pprint
import multiprocessing
"""
Explanation: Doc2Vec on Wikipedia articles
We replicate the experiments of Document Embedding with Paragraph Vectors (http://arxiv.org/abs/1507.07998).
In that paper, only DBOW results on Wikipedia data were reported, so we run the experiments using not only DBOW but also DM.
Basic Setup
Let's import the Doc2Vec module.
End of explanation
"""
wiki = WikiCorpus("enwiki-latest-pages-articles.xml.bz2")
#wiki = WikiCorpus("enwiki-YYYYMMDD-pages-articles.xml.bz2")
"""
Explanation: Preparing the corpus
First, download the dump of all Wikipedia articles from here (you want the file enwiki-latest-pages-articles.xml.bz2, or enwiki-YYYYMMDD-pages-articles.xml.bz2 for date-specific dumps).
Second, convert the articles to a WikiCorpus. WikiCorpus constructs a corpus from a Wikipedia (or other MediaWiki-based) database dump.
For more details on WikiCorpus, see Corpus from a Wikipedia dump.
End of explanation
"""
class TaggedWikiDocument(object):
def __init__(self, wiki):
self.wiki = wiki
self.wiki.metadata = True
def __iter__(self):
for content, (page_id, title) in self.wiki.get_texts():
yield TaggedDocument([c.decode("utf-8") for c in content], [title])
documents = TaggedWikiDocument(wiki)
"""
Explanation: Define a TaggedWikiDocument class to convert the WikiCorpus into a form suitable for Doc2Vec.
End of explanation
"""
pre = Doc2Vec(min_count=0)
pre.scan_vocab(documents)
for num in range(0, 20):
print('min_count: {}, size of vocab: '.format(num), pre.scale_vocab(min_count=num, dry_run=True)['memory']['vocab']/700)
"""
Explanation: Preprocessing
To match the vocabulary size of the original paper, we first determine a suitable min_count parameter.
End of explanation
"""
cores = multiprocessing.cpu_count()
models = [
# PV-DBOW
Doc2Vec(dm=0, dbow_words=1, size=200, window=8, min_count=19, iter=10, workers=cores),
# PV-DM w/average
Doc2Vec(dm=1, dm_mean=1, size=200, window=8, min_count=19, iter =10, workers=cores),
]
models[0].build_vocab(documents)
print(str(models[0]))
models[1].reset_from(models[0])
print(str(models[1]))
"""
Explanation: In the original paper, the vocabulary size was set to 915,715. We obtain a similar vocabulary size if we set min_count = 19 (size of vocab = 898,725).
Training the Doc2Vec Model
To train the Doc2Vec model with both methods, DBOW and DM, we define a list of models.
End of explanation
"""
for model in models:
    %time model.train(documents, total_examples=model.corpus_count, epochs=model.iter)
"""
Explanation: Now we're ready to train Doc2Vec on the English Wikipedia.
End of explanation
"""
for model in models:
print(str(model))
pprint(model.docvecs.most_similar(positive=["Machine learning"], topn=20))
"""
Explanation: Similarity interface
After that, let's test both models! DBOW model show similar results with the original paper. First, calculating cosine similarity of "Machine learning" using Paragraph Vector. Word Vector and Document Vector are separately stored. We have to add .docvecs after model name to extract Document Vector from Doc2Vec Model.
End of explanation
"""
for model in models:
print(str(model))
pprint(model.docvecs.most_similar(positive=["Lady Gaga"], topn=10))
"""
Explanation: The DBOW model interprets 'Machine learning' as part of the Computer Science field, while the DM model places it closer to Data Science.
Second, we calculate the cosine similarity of "Lady Gaga" using the Paragraph Vector.
End of explanation
"""
for model in models:
print(str(model))
vec = [model.docvecs["Lady Gaga"] - model["american"] + model["japanese"]]
pprint([m for m in model.docvecs.most_similar(vec, topn=11) if m[0] != "Lady Gaga"])
"""
Explanation: The DBOW model returns similar singers in the U.S., while the DM model recognizes that many of Lady Gaga's songs are closely related to the word "Lady Gaga".
Third, we calculate the cosine similarity of "Lady Gaga" - "American" + "Japanese" using a Document Vector and Word Vectors. "American" and "Japanese" are Word Vectors, not Paragraph Vectors. Word Vectors are already converted to lowercase by WikiCorpus.
End of explanation
"""
|
mne-tools/mne-tools.github.io
|
0.18/_downloads/66fec418bceb5ce89704fb8b44930330/plot_3d_to_2d.ipynb
|
bsd-3-clause
|
# Authors: Christopher Holdgraf <choldgraf@berkeley.edu>
#
# License: BSD (3-clause)
from scipy.io import loadmat
import numpy as np
from mayavi import mlab
from matplotlib import pyplot as plt
from os import path as op
import mne
from mne.viz import ClickableImage # noqa
from mne.viz import plot_alignment, snapshot_brain_montage
print(__doc__)
subjects_dir = mne.datasets.sample.data_path() + '/subjects'
path_data = mne.datasets.misc.data_path() + '/ecog/sample_ecog.mat'
# We've already clicked and exported
layout_path = op.join(op.dirname(mne.__file__), 'data', 'image')
layout_name = 'custom_layout.lout'
"""
Explanation: ====================================================
How to convert 3D electrode positions to a 2D image.
====================================================
Sometimes we want to convert a 3D representation of electrodes into a 2D
image. For example, if we are using electrocorticography it is common to
create scatterplots on top of a brain, with each point representing an
electrode.
In this example, we'll show two ways of doing this in MNE-Python. First,
if we have the 3D locations of each electrode then we can use Mayavi to
take a snapshot of a view of the brain. If we do not have these 3D locations,
and only have a 2D image of the electrodes on the brain, we can use the
:class:`mne.viz.ClickableImage` class to choose our own electrode positions
on the image.
End of explanation
"""
mat = loadmat(path_data)
ch_names = mat['ch_names'].tolist()
elec = mat['elec'] # electrode coordinates in meters
dig_ch_pos = dict(zip(ch_names, elec))
mon = mne.channels.DigMontage(dig_ch_pos=dig_ch_pos)
info = mne.create_info(ch_names, 1000., 'ecog', montage=mon)
print('Created %s channel positions' % len(ch_names))
"""
Explanation: Load data
First we'll load a sample ECoG dataset which we'll use for generating
a 2D snapshot.
End of explanation
"""
fig = plot_alignment(info, subject='sample', subjects_dir=subjects_dir,
surfaces=['pial'], meg=False)
mlab.view(200, 70)
xy, im = snapshot_brain_montage(fig, mon)
# Convert from a dictionary to array to plot
xy_pts = np.vstack([xy[ch] for ch in info['ch_names']])
# Define an arbitrary "activity" pattern for viz
activity = np.linspace(100, 200, xy_pts.shape[0])
# This allows us to use matplotlib to create arbitrary 2d scatterplots
fig2, ax = plt.subplots(figsize=(10, 10))
ax.imshow(im)
ax.scatter(*xy_pts.T, c=activity, s=200, cmap='coolwarm')
ax.set_axis_off()
# fig2.savefig('./brain.png', bbox_inches='tight') # For ClickableImage
"""
Explanation: Project 3D electrodes to a 2D snapshot
Because we have the 3D location of each electrode, we can use the
:func:`mne.viz.snapshot_brain_montage` function to return a 2D image along
with the electrode positions on that image. We use this in conjunction with
:func:`mne.viz.plot_alignment`, which visualizes electrode positions.
End of explanation
"""
# This code opens the image so you can click on it. Commented out
# because we've stored the clicks as a layout file already.
# # The click coordinates are stored as a list of tuples
# im = plt.imread('./brain.png')
# click = ClickableImage(im)
# click.plot_clicks()
# # Generate a layout from our clicks and normalize by the image
# print('Generating and saving layout...')
# lt = click.to_layout()
# lt.save(op.join(layout_path, layout_name)) # To save if we want
# # We've already got the layout, load it
lt = mne.channels.read_layout(layout_name, path=layout_path, scale=False)
x = lt.pos[:, 0] * float(im.shape[1])
y = (1 - lt.pos[:, 1]) * float(im.shape[0]) # Flip the y-position
fig, ax = plt.subplots()
ax.imshow(im)
ax.scatter(x, y, s=120, color='r')
plt.autoscale(tight=True)
ax.set_axis_off()
plt.show()
"""
Explanation: Manually creating 2D electrode positions
If we don't have the 3D electrode positions then we can still create a
2D representation of the electrodes. Assuming that you can see the electrodes
on the 2D image, we can use :class:`mne.viz.ClickableImage` to open the image
interactively. You can click points on the image and the x/y coordinate will
be stored.
We'll open an image file, then use ClickableImage to
return 2D locations of mouse clicks (or load a file already created).
Then, we'll return these xy positions as a layout for use with plotting topo
maps.
End of explanation
"""
|
tritemio/multispot_paper
|
out_notebooks/usALEX-5samples-E-corrected-all-ph-out-7d.ipynb
|
mit
|
ph_sel_name = "None"
data_id = "7d"
# data_id = "7d"
"""
Explanation: Executed: Mon Mar 27 11:39:17 2017
Duration: 7 seconds.
usALEX-5samples - Template
This notebook is executed through 8-spots paper analysis.
For a direct execution, uncomment the cell below.
End of explanation
"""
from fretbursts import *
init_notebook()
from IPython.display import display
"""
Explanation: Load software and filenames definitions
End of explanation
"""
data_dir = './data/singlespot/'
import os
data_dir = os.path.abspath(data_dir) + '/'
assert os.path.exists(data_dir), "Path '%s' does not exist." % data_dir
"""
Explanation: Data folder:
End of explanation
"""
from glob import glob
file_list = sorted(f for f in glob(data_dir + '*.hdf5') if '_BKG' not in f)
## Selection for POLIMI 2012-11-26 datatset
labels = ['17d', '27d', '7d', '12d', '22d']
files_dict = {lab: fname for lab, fname in zip(labels, file_list)}
files_dict
data_id
"""
Explanation: List of data files:
End of explanation
"""
d = loader.photon_hdf5(filename=files_dict[data_id])
"""
Explanation: Data load
Initial loading of the data:
End of explanation
"""
leakage_coeff_fname = 'results/usALEX - leakage coefficient DexDem.csv'
leakage = np.loadtxt(leakage_coeff_fname)
print('Leakage coefficient:', leakage)
"""
Explanation: Load the leakage coefficient from disk:
End of explanation
"""
dir_ex_coeff_fname = 'results/usALEX - direct excitation coefficient dir_ex_aa.csv'
dir_ex_aa = np.loadtxt(dir_ex_coeff_fname)
print('Direct excitation coefficient (dir_ex_aa):', dir_ex_aa)
"""
Explanation: Load the direct excitation coefficient ($d_{exAA}$) from disk:
End of explanation
"""
gamma_fname = 'results/usALEX - gamma factor - all-ph.csv'
gamma = np.loadtxt(gamma_fname)
print('Gamma-factor:', gamma)
"""
Explanation: Load the gamma-factor ($\gamma$) from disk:
End of explanation
"""
d.leakage = leakage
d.dir_ex = dir_ex_aa
d.gamma = gamma
"""
Explanation: Update d with the correction coefficients:
End of explanation
"""
d.ph_times_t[0][:3], d.ph_times_t[0][-3:]#, d.det_t
print('First and last timestamps: {:10,} {:10,}'.format(d.ph_times_t[0][0], d.ph_times_t[0][-1]))
print('Total number of timestamps: {:10,}'.format(d.ph_times_t[0].size))
"""
Explanation: Laser alternation selection
At this point we have only the timestamps and the detector numbers:
End of explanation
"""
d.add(det_donor_accept=(0, 1), alex_period=4000, D_ON=(2850, 580), A_ON=(900, 2580), offset=0)
"""
Explanation: We need to define some parameters: donor and acceptor channels, alternation period, and donor and acceptor excitation periods:
End of explanation
"""
plot_alternation_hist(d)
"""
Explanation: We should check if everything is OK with an alternation histogram:
End of explanation
"""
loader.alex_apply_period(d)
print('D+A photons in D-excitation period: {:10,}'.format(d.D_ex[0].sum()))
print('D+A photons in A-excitation period: {:10,}'.format(d.A_ex[0].sum()))
"""
Explanation: If the plot looks good we can apply the parameters with:
End of explanation
"""
d
"""
Explanation: Measurements infos
All the measurement data is in the d variable. We can print it:
End of explanation
"""
d.time_max
"""
Explanation: Or check the measurement duration:
End of explanation
"""
d.calc_bg(bg.exp_fit, time_s=60, tail_min_us='auto', F_bg=1.7)
dplot(d, timetrace_bg)
d.rate_m, d.rate_dd, d.rate_ad, d.rate_aa
"""
Explanation: Compute background
Compute the background using automatic threshold:
End of explanation
"""
d.burst_search(L=10, m=10, F=7, ph_sel=Ph_sel('all'))
print(d.ph_sel)
dplot(d, hist_fret);
# if data_id in ['7d', '27d']:
# ds = d.select_bursts(select_bursts.size, th1=20)
# else:
# ds = d.select_bursts(select_bursts.size, th1=30)
ds = d.select_bursts(select_bursts.size, add_naa=False, th1=30)
n_bursts_all = ds.num_bursts[0]
def select_and_plot_ES(fret_sel, do_sel):
ds_fret= ds.select_bursts(select_bursts.ES, **fret_sel)
ds_do = ds.select_bursts(select_bursts.ES, **do_sel)
bpl.plot_ES_selection(ax, **fret_sel)
bpl.plot_ES_selection(ax, **do_sel)
return ds_fret, ds_do
ax = dplot(ds, hist2d_alex, S_max_norm=2, scatter_alpha=0.1)
if data_id == '7d':
fret_sel = dict(E1=0.60, E2=1.2, S1=0.2, S2=0.9, rect=False)
do_sel = dict(E1=-0.2, E2=0.5, S1=0.8, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '12d':
fret_sel = dict(E1=0.30,E2=1.2,S1=0.131,S2=0.9, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.8, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '17d':
fret_sel = dict(E1=0.01, E2=0.98, S1=0.14, S2=0.88, rect=False)
do_sel = dict(E1=-0.4, E2=0.4, S1=0.80, S2=2, rect=False)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '22d':
fret_sel = dict(E1=-0.16, E2=0.6, S1=0.2, S2=0.80, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.85, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
elif data_id == '27d':
fret_sel = dict(E1=-0.1, E2=0.5, S1=0.2, S2=0.82, rect=False)
do_sel = dict(E1=-0.2, E2=0.4, S1=0.88, S2=2, rect=True)
ds_fret, ds_do = select_and_plot_ES(fret_sel, do_sel)
n_bursts_do = ds_do.num_bursts[0]
n_bursts_fret = ds_fret.num_bursts[0]
n_bursts_do, n_bursts_fret
d_only_frac = 1.*n_bursts_do/(n_bursts_do + n_bursts_fret)
print('D-only fraction:', d_only_frac)
dplot(ds_fret, hist2d_alex, scatter_alpha=0.1);
dplot(ds_do, hist2d_alex, S_max_norm=2, scatter=False);
"""
Explanation: Burst search and selection
End of explanation
"""
bandwidth = 0.03
E_range_do = (-0.1, 0.15)
E_ax = np.r_[-0.2:0.401:0.0002]
E_pr_do_kde = bext.fit_bursts_kde_peak(ds_do, bandwidth=bandwidth, weights='size',
x_range=E_range_do, x_ax=E_ax, save_fitter=True)
mfit.plot_mfit(ds_do.E_fitter, plot_kde=True, bins=np.r_[E_ax.min(): E_ax.max(): bandwidth])
plt.xlim(-0.3, 0.5)
print("%s: E_peak = %.2f%%" % (ds.ph_sel, E_pr_do_kde*100))
"""
Explanation: Donor Leakage fit
End of explanation
"""
nt_th1 = 50
dplot(ds_fret, hist_size, which='all', add_naa=False)
plt.xlim(0, 250)
plt.axvline(nt_th1)
Th_nt = np.arange(35, 120)
nt_th = np.zeros(Th_nt.size)
for i, th in enumerate(Th_nt):
ds_nt = ds_fret.select_bursts(select_bursts.size, th1=th)
nt_th[i] = (ds_nt.nd[0] + ds_nt.na[0]).mean() - th
plt.figure()
plt.plot(Th_nt, nt_th)
plt.axvline(nt_th1)
nt_mean = nt_th[np.where(Th_nt == nt_th1)][0]
nt_mean
"""
Explanation: Burst sizes
End of explanation
"""
E_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, bandwidth=bandwidth, weights='size')
E_fitter = ds_fret.E_fitter
E_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
E_fitter.fit_histogram(mfit.factory_gaussian(center=0.5))
E_fitter.fit_res[0].params.pretty_print()
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(E_fitter, ax=ax[0])
mfit.plot_mfit(E_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, E_pr_fret_kde*100))
display(E_fitter.params*100)
"""
Explanation: FRET fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
ds_fret.fit_E_m(weights='size')
"""
Explanation: Weighted mean of $E$ of each burst:
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.03], weights=None)
"""
Explanation: Gaussian fit (no weights):
End of explanation
"""
ds_fret.fit_E_generic(fit_fun=bl.gaussian_fit_hist, bins=np.r_[-0.1:1.1:0.005], weights='size')
E_kde_w = E_fitter.kde_max_pos[0]
E_gauss_w = E_fitter.params.loc[0, 'center']
E_gauss_w_sig = E_fitter.params.loc[0, 'sigma']
E_gauss_w_err = float(E_gauss_w_sig/np.sqrt(ds_fret.num_bursts[0]))
E_gauss_w_fiterr = E_fitter.fit_res[0].params['center'].stderr
E_kde_w, E_gauss_w, E_gauss_w_sig, E_gauss_w_err, E_gauss_w_fiterr
"""
Explanation: Gaussian fit (using burst size as weights):
End of explanation
"""
S_pr_fret_kde = bext.fit_bursts_kde_peak(ds_fret, burst_data='S', bandwidth=0.03) #weights='size', add_naa=True)
S_fitter = ds_fret.S_fitter
S_fitter.histogram(bins=np.r_[-0.1:1.1:0.03])
S_fitter.fit_histogram(mfit.factory_gaussian(), center=0.5)
fig, ax = plt.subplots(1, 2, figsize=(14, 4.5))
mfit.plot_mfit(S_fitter, ax=ax[0])
mfit.plot_mfit(S_fitter, plot_model=False, plot_kde=True, ax=ax[1])
print('%s\nKDE peak %.2f ' % (ds_fret.ph_sel, S_pr_fret_kde*100))
display(S_fitter.params*100)
S_kde = S_fitter.kde_max_pos[0]
S_gauss = S_fitter.params.loc[0, 'center']
S_gauss_sig = S_fitter.params.loc[0, 'sigma']
S_gauss_err = float(S_gauss_sig/np.sqrt(ds_fret.num_bursts[0]))
S_gauss_fiterr = S_fitter.fit_res[0].params['center'].stderr
S_kde, S_gauss, S_gauss_sig, S_gauss_err, S_gauss_fiterr
"""
Explanation: Stoichiometry fit
Max position of the Kernel Density Estimation (KDE):
End of explanation
"""
S = ds_fret.S[0]
S_ml_fit = (S.mean(), S.std())
S_ml_fit
"""
Explanation: For a Gaussian population, the maximum-likelihood estimates are simply the sample mean and standard deviation:
End of explanation
"""
weights = bl.fret_fit.get_weights(ds_fret.nd[0], ds_fret.na[0], weights='size', naa=ds_fret.naa[0], gamma=1.)
S_mean = np.dot(weights, S)/weights.sum()
S_std_dev = np.sqrt(
np.dot(weights, (S - S_mean)**2)/weights.sum())
S_wmean_fit = [S_mean, S_std_dev]
S_wmean_fit
"""
Explanation: Computing the weighted mean and weighted standard deviation we get:
End of explanation
"""
sample = data_id
"""
Explanation: Save data to file
End of explanation
"""
variables = ('sample n_bursts_all n_bursts_do n_bursts_fret '
'E_kde_w E_gauss_w E_gauss_w_sig E_gauss_w_err E_gauss_w_fiterr '
'S_kde S_gauss S_gauss_sig S_gauss_err S_gauss_fiterr '
'E_pr_do_kde nt_mean\n')
"""
Explanation: The following string contains the list of variables to be saved. When saving, the order of the variables is preserved.
End of explanation
"""
variables_csv = variables.replace(' ', ',')
fmt_float = '{%s:.6f}'
fmt_int = '{%s:d}'
fmt_str = '{%s}'
fmt_dict = {**{'sample': fmt_str},
**{k: fmt_int for k in variables.split() if k.startswith('n_bursts')}}
var_dict = {name: eval(name) for name in variables.split()}
var_fmt = ', '.join([fmt_dict.get(name, fmt_float) % name for name in variables.split()]) + '\n'
data_str = var_fmt.format(**var_dict)
print(variables_csv)
print(data_str)
# NOTE: The file name should be the notebook name but with .csv extension
with open('results/usALEX-5samples-E-corrected-all-ph.csv', 'a') as f:
f.seek(0, 2)
if f.tell() == 0:
f.write(variables_csv)
f.write(data_str)
"""
Explanation: This is just a trick to format the different variables:
End of explanation
"""
|
matthijsvk/multimodalSR
|
code/Experiments/Tutorials/EbenOlsen_TheanoLasagne/2 - Lasagne Basics/Digit Recognizer.ipynb
|
mit
|
# Uncomment and execute this cell for an example solution
# %load spoilers/logreg.py
"""
Explanation: Exercises
1. Logistic regression
The simple network we created is similar to a logistic regression model. Verify that the accuracy is close to that of sklearn.linear_model.LogisticRegression.
End of explanation
"""
# Uncomment and execute this cell for an example solution
# %load spoilers/hiddenlayer.py
"""
Explanation: 2. Hidden layer
Try adding one or more "hidden" DenseLayers between the input and output. Experiment with different numbers of hidden units.
End of explanation
"""
# Uncomment and execute this cell for an example solution
# %load spoilers/optimizer.py
"""
Explanation: 3. Optimizer
Try one of the other algorithms available in lasagne.updates. You may also want to adjust the learning rate.
Visualize and compare the trained weights. Different optimization trajectories may lead to very different results, even if the performance is similar. This can be important when training more complicated networks.
End of explanation
"""
|
AlbanoCastroSousa/RESSPyLab
|
examples/Old_RESSPyLab_Parameter_Calibration_Orientation_Notebook.ipynb
|
mit
|
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
import RESSPyLab
"""
Explanation: Import modules
End of explanation
"""
testFileNames=['example_1.csv']
listCleanTests=[]
for testFileName in testFileNames:
test=pd.read_csv(testFileName)
listCleanTests.append(test)
"""
Explanation: 1 - Load an experiment
Make a list of pandas DataFrames with (clean) experimental data read from csv files using the pandas package. Each csv file should include two columns: true strain ("e_true") and true stress ("Sigma_true").
End of explanation
"""
x_0=[200e3,355,1e-1,1e-1,1e-1,1e-1]
sol=RESSPyLab.VCopt_SVD(x_0,listCleanTests)
print(sol)
x_0=[200e3,355,1e-1,1e-1,1e-1,1e-1]
sol=RESSPyLab.VCopt_J(x_0,listCleanTests)
print(sol)
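# A sketch (not from the original example, so treat the values as placeholders): the same
# kind of elastic-perfectly-plastic starting point, but with two backstress terms, following
# the parameter ordering [E, sy0, Qinf, b, C_1, gamma_1, C_2, gamma_2].
x_0_two_backstresses = [200e3, 355, 1e-1, 1e-1, 1e-1, 1e-1, 1e-1, 1e-1]
# Uncomment to run the optimization with this longer parameter vector:
# sol_two = RESSPyLab.VCopt_SVD(x_0_two_backstresses, listCleanTests)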
"""
Explanation: 2 - Determine Voce and Chaboche material parameters with either VCopt_SVD or VCopt_J
There are two arguments to VCopt: an initial starting point for the parameters ("x_0") and the list of tests previously assembled.
The parameters are gathered in list in the following order:
[E, sy0, Qinf, b, C_1, gamma_1, C_2, gamma_2, ..., ..., C_k, gamma_k]
A recommended initial point is an elastic, perfectly plastic model with the nominal values of the elastic modulus and the yield stress. All other values would then be zero; for numerical reasons a minimum of 1e-1 is used instead.
The examples herein are from an S355J2 steel. Nominal values are therefore E = 200e3 MPa and sy0 = 355 MPa.
End of explanation
"""
simCurve=RESSPyLab.VCsimCurve(sol,test)
plt.plot(test['e_true'],test['Sigma_true'],c='r',label='Test')
plt.plot(simCurve['e_true'],simCurve['Sigma_true'],c='k',label='RESSPyLab')
plt.legend(loc='best')
plt.xlabel('True strain')
plt.ylabel('True stress')
"""
Explanation: 3 - Use the solution point to plot experiment vs simulation
End of explanation
"""
testFileNames=['example_1.csv','example_2.csv']
listCleanTests=[]
for testFileName in testFileNames:
test=pd.read_csv(testFileName)
listCleanTests.append(test)
x_0=[200e3,355,1e-1,1e-1,1e-1,1e-1]
sol=RESSPyLab.VCopt_SVD(x_0,listCleanTests)
print(sol)
x_0=[200e3,355,1e-1,1e-1,1e-1,1e-1]
sol=RESSPyLab.VCopt_J(x_0,listCleanTests)
print(sol)
test=pd.read_csv('example_1.csv')
simCurve=RESSPyLab.VCsimCurve(sol,test)
plt.plot(test['e_true'],test['Sigma_true'],c='r',label='Test')
plt.plot(simCurve['e_true'],simCurve['Sigma_true'],c='k',label='RESSPyLab')
plt.legend(loc='best')
plt.xlabel('True strain')
plt.ylabel('True stress')
test=pd.read_csv('example_2.csv')
simCurve=RESSPyLab.VCsimCurve(sol,test)
plt.plot(test['e_true'],test['Sigma_true'],c='r',label='Test')
plt.plot(simCurve['e_true'],simCurve['Sigma_true'],c='k',label='RESSPyLab')
plt.legend(loc='best')
plt.xlabel('True strain')
plt.ylabel('True stress')
"""
Explanation: 4 - Example with multiple tests
End of explanation
"""
|
dagrha/textual-analysis
|
textblob_lovecraft.ipynb
|
mit
|
from textblob import TextBlob
import pandas as pd
import pylab as plt
import collections
import re
%matplotlib inline
"""
Explanation: Sentiment analysis on
H.P. Lovecraft's The Shunned House
For this, we'll use the TextBlob library (http://textblob.readthedocs.org/en/dev/) and pandas (http://pandas.pydata.org/)
End of explanation
"""
with open(r'lovecraft.txt', 'r', encoding='utf-8') as myfile:
    ushunned = myfile.read()
tb = TextBlob(ushunned)
"""
Explanation: I've already pulled down The Shunned House from Project Gutenberg (https://www.gutenberg.org/wiki/Main_Page) and saved it as a text file called 'lovecraft.txt'. Here we'll load it, decoding it as UTF-8. Lastly, we'll instantiate a TextBlob object:
End of explanation
"""
paragraph = tb.sentences
i = -1
for sentence in paragraph:
i += 1
pol = sentence.sentiment.polarity
if i == 0:
write_type = 'w'
with open('shunned.csv', write_type) as text_file:
header = 'number,polarity\n'
text_file.write(str(header))
write_type = 'a'
with open('shunned.csv', write_type) as text_file:
newline = str(i) + ',' + str(pol) + '\n'
text_file.write(str(newline))
"""
Explanation: Now we'll go through every sentence in the story and get the 'sentiment' of each one. Sentiment analysis in TextBlob returns a polarity and a subjectivity number. Here we'll just extract the polarity:
End of explanation
"""
df = pd.read_csv('shunned.csv', index_col=0)
"""
Explanation: Now we instantiate a dataframe by pulling in that csv:
End of explanation
"""
df.polarity.plot(figsize=(12,5), color='b', title='Sentiment Polarity for HP Lovecraft\'s The Shunned House')
plt.xlabel('Sentence number')
plt.ylabel('Sentiment polarity')
"""
Explanation: Let's plot our data! First let's just look at how the sentiment polarity changes from sentence to sentence:
End of explanation
"""
df['cum_sum'] = df.polarity.cumsum()
"""
Explanation: Very up and down from sentence to sentence! Some dark sentences (the ones below 0.0 polarity), some positive sentences (greater than 0.0 polarity) but overall kind of hovers around 0.0 polarity.
One thing that may be interesting to look at is how the sentiment changes over the course of the book. To examine that further, I'm going to create a new column in the dataframe which is the cumulative summation of the polarity rating, using the cumsum() pandas method:
End of explanation
"""
df.cum_sum.plot(figsize=(12,5), color='r',
title='Sentiment Polarity cumulative summation for HP Lovecraft\'s The Shunned House')
plt.xlabel('Sentence number')
plt.ylabel('Cumulative sum of sentiment polarity')
"""
Explanation: So, now let's plot the results-- How does the sentiment of Lovecraft's story change over the course of the book?
End of explanation
"""
df.head()
"""
Explanation: The climax of Lovecraft's story appears to be around sentence 255 or so. Things really drop off at that point and get dark, according to the TextBlob sentiment analysis.
What's the dataframe look like?
End of explanation
"""
df.describe()
"""
Explanation: Let's get some basic statistical information about sentence sentiments:
End of explanation
"""
for i in df[df.polarity < -0.5].index:
    print(i, tb.sentences[i])
words = re.findall(r'\w+', open('lovecraft.txt').read().lower())
collections.Counter(words).most_common(10)
"""
Explanation: For fun, let's just see what TextBlob thinks are the most negatively polar sentences in the short story:
End of explanation
"""
words = re.findall(r'\w+', ushunned.lower())
common = collections.Counter(words).most_common()
df_freq = pd.DataFrame(common, columns=['word', 'freq'])
df_freq.set_index('word').head()
"""
Explanation: Let's take a quick peek at word frequencies using the re and collections libraries. Here we'll use Counter() and its most_common() method to return a list of tuples of the most common words in the story:
End of explanation
"""
|
AllenDowney/ThinkStats2
|
homeworks/homework01.ipynb
|
gpl-3.0
|
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style='white')
import utils
from utils import decorate
from thinkstats2 import Pmf, Cdf
"""
Explanation: Homework 1
Load and validate GSS data
Allen Downey
MIT License
End of explanation
"""
def read_gss(dirname):
"""Reads GSS files from the given directory.
dirname: string
returns: DataFrame
"""
dct = utils.read_stata_dict(dirname + '/GSS.dct')
gss = dct.read_fixed_width(dirname + '/GSS.dat.gz',
compression='gzip')
return gss
"""
Explanation: Loading and validation
End of explanation
"""
gss = read_gss('gss_eda')
print(gss.shape)
gss.head()
"""
Explanation: Read the variables I selected from the GSS dataset. You can look up these variables at https://gssdataexplorer.norc.org/variables/vfilter
End of explanation
"""
def replace_invalid(df):
df.realinc.replace([0], np.nan, inplace=True)
df.educ.replace([98,99], np.nan, inplace=True)
# 89 means 89 or older
df.age.replace([98, 99], np.nan, inplace=True)
df.cohort.replace([9999], np.nan, inplace=True)
df.adults.replace([9], np.nan, inplace=True)
replace_invalid(gss)
"""
Explanation: Most variables use special codes to indicate missing data. We have to be careful not to use these codes as numerical data; one way to manage that is to replace them with NaN, which Pandas recognizes as a missing value.
End of explanation
"""
gss['year'].describe()
gss['sex'].describe()
gss['age'].describe()
gss['cohort'].describe()
gss['race'].describe()
gss['educ'].describe()
gss['realinc'].describe()
gss['wtssall'].describe()
"""
Explanation: Here are summary statistics for the variables I have validated and cleaned.
End of explanation
"""
from thinkstats2 import Hist, Pmf, Cdf
import thinkplot
hist_educ = Hist(gss.educ)
thinkplot.hist(hist_educ)
decorate(xlabel='Years of education',
ylabel='Count')
"""
Explanation: Exercise
Look through the column headings to find a few variables that look interesting. Look them up on the GSS data explorer.
Use value_counts to see what values appear in the dataset, and compare the results with the counts in the code book.
Identify special values that indicate missing data and replace them with NaN.
Use describe to compute summary statistics. What do you notice?
Visualize distributions
Let's visualize the distributions of the variables we've selected.
Here's a Hist of the values in educ:
End of explanation
"""
import matplotlib.pyplot as plt
plt.hist(gss.educ.dropna())
decorate(xlabel='Years of education',
ylabel='Count')
"""
Explanation: Hist as defined in thinkstats2 is different from hist as defined in Matplotlib. The difference is that Hist keeps all unique values and does not put them in bins. Also, hist does not handle NaN.
One of the hazards of using hist is that the shape of the result depends on the bin size.
Exercise:
Run the following cell and compare the result to the Hist above.
Add the keyword argument bins=11 to plt.hist and see how it changes the results.
Experiment with other numbers of bins.
End of explanation
"""
hist_realinc = Hist(gss.realinc)
thinkplot.hist(hist_realinc)
decorate(xlabel='Real income (1986 USD)',
ylabel='Count')
"""
Explanation: However, a drawback of Hist and Pmf is that they basically don't work when the number of unique values is large, as in this example:
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Make and plot a Hist of age.
Make and plot a Pmf of educ.
What fraction of people have 12, 14, and 16 years of education?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Make and plot a Cdf of educ.
What fraction of people have more than 12 years of education?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Make and plot a Cdf of age.
What is the median age? What is the inter-quartile range (IQR)?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Find another numerical variable, plot a histogram, PMF, and CDF, and compute any statistics of interest.
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Compute the CDF of realinc for male and female respondents, and plot both CDFs on the same axes.
What is the difference in median income between the two groups?
End of explanation
"""
# Solution goes here
# Solution goes here
# Solution goes here
# Solution goes here
"""
Explanation: Exercise:
Use a variable to break the dataset into groups and plot multiple CDFs to compare distribution of something within groups.
Note: Try to find something interesting, but be cautious about overinterpreting the results. Between any two groups, there are often many differences, with many possible causes.
End of explanation
"""
np.random.seed(19)
sample = utils.resample_by_year(gss, 'wtssall')
"""
Explanation: Save the cleaned data
Now that we have the data in good shape, we'll save it in a binary format (HDF5), which will make it faster to load later.
Also, we have to do some resampling to make the results representative. We'll talk about this in class.
End of explanation
"""
!rm gss.hdf5
sample.to_hdf('gss.hdf5', 'gss')
"""
Explanation: Save the file.
End of explanation
"""
%time gss = pd.read_hdf('gss.hdf5', 'gss')
gss.shape
"""
Explanation: Load it and see how fast it is!
End of explanation
"""
|
stefan-balke/librosa
|
examples/LibROSA demo.ipynb
|
isc
|
from __future__ import print_function
# We'll need numpy for some mathematical operations
import numpy as np
# matplotlib for displaying the output
import matplotlib.pyplot as plt
import matplotlib.style as ms
ms.use('seaborn-muted')
%matplotlib inline
# and IPython.display for audio output
import IPython.display
# Librosa for audio
import librosa
# And the display module for visualization
import librosa.display
audio_path = librosa.util.example_audio_file()
# or uncomment the line below and point it at your favorite song:
#
# audio_path = '/path/to/your/favorite/song.mp3'
y, sr = librosa.load(audio_path)
"""
Explanation: Librosa demo
This notebook demonstrates some of the basic functionality of librosa version 0.4.
Following through this example, you'll learn how to:
Load audio input
Compute mel spectrogram, MFCC, delta features, chroma
Locate beat events
Compute beat-synchronous features
Display features
Save beat tracker output to a CSV file
End of explanation
"""
# Let's make and display a mel-scaled power (energy-squared) spectrogram
S = librosa.feature.melspectrogram(y, sr=sr, n_mels=128)
# Convert to log scale (dB). We'll use the peak power as reference.
log_S = librosa.logamplitude(S, ref_power=np.max)
# Make a new figure
plt.figure(figsize=(12,4))
# Display the spectrogram on a mel scale
# sample rate and hop length parameters are used to render the time axis
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
# Put a descriptive title on the plot
plt.title('mel power spectrogram')
# draw a color bar
plt.colorbar(format='%+02.0f dB')
# Make the figure layout compact
plt.tight_layout()
"""
Explanation: By default, librosa will resample the signal to 22050Hz.
You can change this behavior by saying:
librosa.load(audio_path, sr=44100)
to resample at 44.1KHz, or
librosa.load(audio_path, sr=None)
to disable resampling.
Mel spectrogram
This first step will show how to compute a Mel spectrogram from an audio waveform.
End of explanation
"""
y_harmonic, y_percussive = librosa.effects.hpss(y)
# What do the spectrograms look like?
# Let's make and display a mel-scaled power (energy-squared) spectrogram
S_harmonic = librosa.feature.melspectrogram(y_harmonic, sr=sr)
S_percussive = librosa.feature.melspectrogram(y_percussive, sr=sr)
# Convert to log scale (dB). We'll use the peak power as reference.
log_Sh = librosa.logamplitude(S_harmonic, ref_power=np.max)
log_Sp = librosa.logamplitude(S_percussive, ref_power=np.max)
# Make a new figure
plt.figure(figsize=(12,6))
plt.subplot(2,1,1)
# Display the spectrogram on a mel scale
librosa.display.specshow(log_Sh, sr=sr, y_axis='mel')
# Put a descriptive title on the plot
plt.title('mel power spectrogram (Harmonic)')
# draw a color bar
plt.colorbar(format='%+02.0f dB')
plt.subplot(2,1,2)
librosa.display.specshow(log_Sp, sr=sr, x_axis='time', y_axis='mel')
# Put a descriptive title on the plot
plt.title('mel power spectrogram (Percussive)')
# draw a color bar
plt.colorbar(format='%+02.0f dB')
# Make the figure layout compact
plt.tight_layout()
"""
Explanation: Harmonic-percussive source separation
Before doing any signal analysis, let's pull apart the harmonic and percussive components of the audio. This is pretty easy to do with the effects module.
End of explanation
"""
# We'll use a CQT-based chromagram here. An STFT-based implementation also exists in chroma_stft()
# We'll use the harmonic component to avoid pollution from transients
C = librosa.feature.chroma_cqt(y=y_harmonic, sr=sr)
# Make a new figure
plt.figure(figsize=(12,4))
# Display the chromagram: the energy in each chromatic pitch class as a function of time
# To make sure that the colors span the full range of chroma values, set vmin and vmax
librosa.display.specshow(C, sr=sr, x_axis='time', y_axis='chroma', vmin=0, vmax=1)
plt.title('Chromagram')
plt.colorbar()
plt.tight_layout()
"""
Explanation: Chromagram
Next, we'll extract Chroma features to represent pitch class information.
End of explanation
"""
# Next, we'll extract the top 13 Mel-frequency cepstral coefficients (MFCCs)
mfcc = librosa.feature.mfcc(S=log_S, n_mfcc=13)
# Let's pad on the first and second deltas while we're at it
delta_mfcc = librosa.feature.delta(mfcc)
delta2_mfcc = librosa.feature.delta(mfcc, order=2)
# How do they look? We'll show each in its own subplot
plt.figure(figsize=(12, 6))
plt.subplot(3,1,1)
librosa.display.specshow(mfcc)
plt.ylabel('MFCC')
plt.colorbar()
plt.subplot(3,1,2)
librosa.display.specshow(delta_mfcc)
plt.ylabel('MFCC-$\Delta$')
plt.colorbar()
plt.subplot(3,1,3)
librosa.display.specshow(delta2_mfcc, sr=sr, x_axis='time')
plt.ylabel('MFCC-$\Delta^2$')
plt.colorbar()
plt.tight_layout()
# For future use, we'll stack these together into one matrix
M = np.vstack([mfcc, delta_mfcc, delta2_mfcc])
"""
Explanation: MFCC
Mel-frequency cepstral coefficients are commonly used to represent texture or timbre of sound.
End of explanation
"""
# Now, let's run the beat tracker.
# We'll use the percussive component for this part
plt.figure(figsize=(12, 6))
tempo, beats = librosa.beat.beat_track(y=y_percussive, sr=sr)
# Let's re-draw the spectrogram, but this time, overlay the detected beats
plt.figure(figsize=(12,4))
librosa.display.specshow(log_S, sr=sr, x_axis='time', y_axis='mel')
# Let's draw transparent lines over the beat frames
plt.vlines(librosa.frames_to_time(beats),
1, 0.5 * sr,
colors='w', linestyles='-', linewidth=2, alpha=0.5)
plt.axis('tight')
plt.colorbar(format='%+02.0f dB')
plt.tight_layout()
"""
Explanation: Beat tracking
The beat tracker returns an estimate of the tempo (in beats per minute) and frame indices of beat events.
The input can be either an audio time series (as we do below), or an onset strength envelope as calculated by librosa.onset.onset_strength().
End of explanation
"""
print('Estimated tempo: %.2f BPM' % tempo)
print('First 5 beat frames: ', beats[:5])
# Frame numbers are great and all, but when do those beats occur?
print('First 5 beat times: ', librosa.frames_to_time(beats[:5], sr=sr))
# We could also get frame numbers from times by librosa.time_to_frames()
"""
Explanation: By default, the beat tracker will trim away any leading or trailing beats that don't appear strong enough.
To disable this behavior, call beat_track() with trim=False.
End of explanation
"""
# feature.sync will summarize each beat event by the mean feature vector within that beat
M_sync = librosa.util.sync(M, beats)
plt.figure(figsize=(12,6))
# Let's plot the original and beat-synchronous features against each other
plt.subplot(2,1,1)
librosa.display.specshow(M)
plt.title('MFCC-$\Delta$-$\Delta^2$')
# We can also use pyplot *ticks directly
# Let's mark off the raw MFCC and the delta features
plt.yticks(np.arange(0, M.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$'])
plt.colorbar()
plt.subplot(2,1,2)
# librosa can generate axis ticks from arbitrary timestamps and beat events also
librosa.display.specshow(M_sync, x_axis='time',
x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats)))
plt.yticks(np.arange(0, M_sync.shape[0], 13), ['MFCC', '$\Delta$', '$\Delta^2$'])
plt.title('Beat-synchronous MFCC-$\Delta$-$\Delta^2$')
plt.colorbar()
plt.tight_layout()
# Beat synchronization is flexible.
# Instead of computing the mean delta-MFCC within each beat, let's do beat-synchronous chroma
# We can replace the mean with any statistical aggregation function, such as min, max, or median.
C_sync = librosa.util.sync(C, beats, aggregate=np.median)
plt.figure(figsize=(12,6))
plt.subplot(2, 1, 1)
librosa.display.specshow(C, sr=sr, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time')
plt.title('Chroma')
plt.colorbar()
plt.subplot(2, 1, 2)
librosa.display.specshow(C_sync, y_axis='chroma', vmin=0.0, vmax=1.0, x_axis='time',
x_coords=librosa.frames_to_time(librosa.util.fix_frames(beats)))
plt.title('Beat-synchronous Chroma (median aggregation)')
plt.colorbar()
plt.tight_layout()
"""
Explanation: Beat-synchronous feature aggregation
Once we've located the beat events, we can use them to summarize the feature content of each beat.
This can be useful for reducing data dimensionality, and removing transient noise from the features.
End of explanation
"""
|
gmonce/datascience
|
src/Mentiras.ipynb
|
gpl-3.0
|
# Data
# We consider the vote shares reported in September of different years, to see how they change
votaciones_factum_2014={'votoFA':0.42,'votoPN':0.32,'votoPC':0.15,'votoPI':0.03,'votoIndefinidos':0.04,'votoOtros':0.02}
votaciones_factum_julio_2014={'votoFA':0.42,'votoPN':0.30,'votoPC':0.14,'votoPI':0.03,'votoIndefinidos':0.04,'votoOtros':0.02}
votaciones_factum_2013={'votoFA':0.43,'votoPN':0.23,'votoPC':0.16,'votoPI':0.02,'votoIndefinidos':0.08,'votoOtros':0.08}
votaciones_factum_2010={'votoFA':0.49,'votoPN':0.22,'votoPC':0.13,'votoPI':0.00,'votoIndefinidos':0.00,'votoOtros':0.00}
votaciones_factum_2009={'votoFA':0.46,'votoPN':0.34,'votoPC':0.10,'votoPI':0.02,'votoIndefinidos':0.06,'votoOtros':0.02}
"""
Explanation: Lies, damned lies, and polls (*)
Guillermo Moncecchi (@gmonce)
Election season, polling season. And poll-analysis season. For a long (long) time I have had the same impression: it seems to me that most of the claims made by political analysts (and, even more so, by the media) are plainly wrong, or at least do not follow from the data. Two polls in a row show a one-point drop and suddenly: "The Frente Amplio consolidates its decline". But, of course, my claim is as imprecise as the original ones.
Although I work with probabilities regularly, I am far from an expert in (or even a deep connoisseur of) statistical methods. So I read a bit, searched a bit more... and finally found exactly what I wanted: an article on how to analyze poll results and how to take the reported "margin of error" into account. If you are interested in reading it, it is called "The 'Margin of Error' for Differences in Polls", by Charles Franklin, and is available at https://abcnews.go.com/images/PollingUnit/MOEFranklin.pdf.
A few comments before we start:
A poll is a survey of a group of people, hoping that the group is representative of the whole population (in this case, the voters).
The published number is the proportion of voters who chose each party (for example: 0.32 for the Partido Nacional means that 32 out of every 100 respondents picked that option).
In addition, a "margin of error" (of the form +/- x%) and a "confidence level" (typically 95%) are published, which means the following (read carefully, because it is a mouthful): if we ran 100 polls like this one, in 95 of them the number would fall between the proportion plus/minus the margin of error. This is usually called a confidence interval. For example, if the margin of error is +/- 3.2% and the confidence level is 95%, the 0.32 becomes a range between 0.288 and 0.352 (usually written [0.288, 0.352]).
If we read the above again, we see that it means, no more and no less, that one out of every 20 times I run this poll I will be wrong and the number will fall outside the range. It does not mean there is 95% certainty about the results (which is why the term "confidence" is not a great fit).
When we say a value or a difference is statistically significant, we mean that it is unlikely to have occurred by chance alone.
In other words, polls should be taken with a grain of salt. I will not keep repeating it, but every time we give a value or an interval, remember that missing 1 out of every 20 times is normal. Normal.
Let's go to the data to see some examples: we consider various editions of the Factum poll (I choose the same pollster so that the comparisons are valid, since one assumes the methodology is the same):
End of explanation
"""
#!wget franklin.py https://raw.githubusercontent.com/gmonce/datascience/master/src/franklin.py
from franklin import *
"""
Explanation: The franklin.py library (in Python) contains the definitions of the functions we are going to use. For anyone interested in the code to play with their own numbers, it is available here; it is meant to implement exactly the formulas mentioned in the paper. A challenge for programmers with more skill (and time) than me: build a web page that does these calculations for any pair of poll values.
End of explanation
"""
# Let's look at the confidence intervals for the vote shares as of today
for (key,value) in votaciones_factum_2014.items():
print (key,votaciones_factum_2014[key],ci(votaciones_factum_2014[key]))
"""
Explanation: Let's start by looking at the confidence intervals for each party, according to the most recent poll (September 2014):
End of explanation
"""
cidif=ci_dif(votaciones_factum_2014['votoFA'],votaciones_factum_2014['votoPN'])
"""
Explanation: For example, the Frente Amplio is between 0.389 and 0.451. The Partido Colorado is between 0.128 and 0.17. But keep in mind that these ranges apply to the calculated proportion, that is, to the value obtained by each individual party. You cannot use that range to compare values (because of variance, standard errors, and other statistician things). For that, some extra calculations are needed (which are described in the paper and programmed above). Let's look at some examples:
Question 1: Is the difference between the FA and the PN statistically significant? This question can be translated as "can we confidently claim that the FA has more votes than the PN?" (always within the aforementioned 95%, which I said I would not mention again, but I cannot help it...). Let's compute the interval for the difference between the two vote shares. If the range does not include 0, then the difference is significant:
End of explanation
"""
cidif=ci_dif(votaciones_factum_2014['votoFA'],votaciones_factum_2014['votoPN']+votaciones_factum_2014['votoPC'])
"""
Explanation: It is. We are fairly sure that (according to the polls) the FA is ahead. Question 2: Is the difference between the FA and the combined traditional parties significant?
End of explanation
"""
cidif=ci_dif_between(votaciones_factum_2010['votoPN'],votaciones_factum_2014['votoPN'])
"""
Explanation: First observation: if the Blancos and Colorados add their votes together, we do not know what happens. Careful: when I say we do not know what happens, I am not talking about undecided voters. I am saying that, with the number of people surveyed, the statistical models we rely on tell us it is not enough to make the claim, at least not without being wrong more than 1 in 20 times.
We can also compare results between different polls, to see whether there were changes. With a different formula. Question 3: Did the Partido Nacional improve from 2010 until now?
End of explanation
"""
cidif=ci_dif_between(votaciones_factum_2010['votoPC'],votaciones_factum_2014['votoPC'])
"""
Explanation: Yes, it improved. And the Partido Colorado?
End of explanation
"""
cidif=ci_dif_between(votaciones_factum_2010['votoFA'],votaciones_factum_2014['votoFA'])
"""
Explanation: No. And the FA?
End of explanation
"""
for partido in ['votoFA','votoPN','votoPC','votoPI']:
print (partido)
cidif=ci_dif_between(votaciones_factum_2009[partido],votaciones_factum_2014[partido])
"""
Explanation: The FA is worse off today than right after winning the election. But maybe it would be better to compare with 2009 (that is, before the election, at the same point in the cycle we are at now). In fact, we can compare all the parties.
End of explanation
"""
cidif=ci_dif(votaciones_factum_2014['votoFA'],votaciones_factum_2014['votoPN']+votaciones_factum_2014['votoPC'])
"""
Explanation: The only thing we could claim with "statistical certainty" is that the Partido Colorado is doing better than in the last election. Beyond that, everything is more or less the same.
Uruguayan newspapers make claims about polls that seem to ignore the margin of error. If a party drops one point, "its image has deteriorated". If it gains two, "it shows a rebound". Most of those claims are statistically very risky. Let's look at an example:
"Blancos and Colorados combined still lead the Frente Amplio" ("Blancos y Colorados sumados siguen aventajando al Frente Amplio"), El País, 8/9/2014. http://www.elpais.com.uy/informacion/encuesta-factum-intencion-voto-septiembre.html
End of explanation
"""
cidif=ci_dif_between(votaciones_factum_julio_2014['votoFA'],votaciones_factum_2014['votoFA'])
"""
Explanation: As we said before, the numbers are too close to claim that this value is not just chance. From the same article: "the Frente Amplio halted its decline"
End of explanation
"""
cidif=ci_dif_between(votaciones_factum_julio_2014['votoPN'],votaciones_factum_2014['votoPN'])
"""
Explanation: We do not know. In fact, we do not even know whether it was really falling, because the difference between the numbers is always very small. From the same article: "The Partido Nacional keeps growing"
End of explanation
"""
|
google/applied-machine-learning-intensive
|
content/02_data/05_exploratory_data_analysis/colab-part1.ipynb
|
apache-2.0
|
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
"""
Explanation: <a href="https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/02_data/05_exploratory_data_analysis/colab-part1.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
Copyright 2020 Google LLC.
End of explanation
"""
! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'
"""
Explanation: Exploratory Data Analysis
Exploratory Data Analysis, often shortened to EDA, is a term that you'll hear quite a bit in the field of data science. EDA is the process of examining a dataset to find facts about the data and communicating those facts, often through visualizations.
In order to explore the data and visualize it, some modifications might need to be made to the data along the way. This is often referred to as data preprocessing. Though data preprocessing is technically different from EDA, EDA often exposes problems with the data that need to be fixed in order to continue exploring. Because of this tight coupling, we'll clean the data as necessary to help understand the data.
In this lab we will apply our Pandas knowledge to explore a dataset about chocolate. Part 1 of the lab will explore each column in our dataset individually. Part 2 will take the results of our preprocessed data and search for patterns across columns and rows.
Introduction
The Dataset: Chocolate Bar Ratings
In this lab we will use a chocolate bar ratings dataset. The dataset is from the Flavors of Cacao data.
On the Kaggle page for the dataset, we can find some basic information about the dataset. For instance, there are over 1,700 chocolate bars that have been rated. We can also preview the columns found in the dataset:
Column | Data Type | Description
-------|-----------|-------------
Company (Maker-if known) | String | Name of the company manufacturing the bar.
Specific Bean Origin or Bar Name | String | The specific geo-region of origin for the bar.
REF | Number | A value linked to when the review was entered in the database. Higher = more recent.
Review Date | Number | Date of publication of the review.
Cocoa Percent | String | Cocoa percentage (darkness) of the chocolate bar being reviewed.
Company Location | String | Manufacturer base country.
Rating | Number | Expert rating for the bar.
BeanType | String | The variety (breed) of bean used, if provided.
Broad Bean Origin | String | The broad geo-region of origin for the bean.
This is an interesting dataset. Think of the questions that you might be able to answer! A few could be:
Is there a relationship between numeric rating and properties such as percentage of cocoa, bean type, origin, and maker?
Are some of the properties of cacao beans correlated?
Where are the top chocolate bars from?
Are there multiple entries for the same bar from the same maker, but with different ratings over the years? If so, has there been any change in the chocolate bar that could account for the differences?
Do makers who produce a wide variety of bars have a higher chance of creating a top-rated chocolate bar?
I'm sure you can think of even more. So, what are we waiting for? Let's load the data!
Acquiring the Data
The data is hosted on Kaggle, so we can use our Kaggle credentials to download the data into the lab. The dataset is located at https://www.kaggle.com/rtatman/chocolate-bar-ratings. We can use the kaggle command line utility to do this.
First off, upload your kaggle.json file into the lab now.
Next, run the following command to get the credential files set to the right permissions and located in the correct spot.
End of explanation
"""
! kaggle datasets download rtatman/chocolate-bar-ratings
! ls
"""
Explanation: Now we can run the kaggle command to actually download the data.
End of explanation
"""
import pandas as pd
df = pd.read_csv('chocolate-bar-ratings.zip')
df
"""
Explanation: We now have our data downloaded to our virtual machine and stored in the file chocolate-bar-ratings.zip.
Creating a DataFrame
We now need to load the data into memory. We can do this easily using Pandas' read_csv() function.
End of explanation
"""
df.dtypes
"""
Explanation: Let's also make sure that our data types match what was documented:
End of explanation
"""
df.columns = [
'Company',
'Specific Bean Origin',
'REF',
'Review Date',
'Cocoa Percent',
'Company Location',
'Rating',
'Bean Type',
'Broad Bean Origin'
]
df
"""
Explanation: In this output, object types are strings while int64 types are whole numbers and float64 types are fractional numbers. This seems to match the documentation that we saw for the dataset.
From just a glance at the DataFrame, we can see a few facts about our data:
There are 1,795 rows and 9 columns.
The columns are the columns we expected based on the documentation, though some have \n (new line) embedded in them. We'll need to clean that up.
The data seems to be sorted by the 'Company' column.
There is definitely some missing data, as we can see in the 'Bean Type' column.
We will look more closely at each column throughout this lab.
Cleaning Up Column Names
One of the more frustrating aspects of this dataset is the poor format of the column names. Typing 'Specific Bean Origin\nor Bar Name' in order to access the column is painful.
So our first order of business will be to update the column names.
End of explanation
"""
df = df[[
'Company',
'Company Location',
'Bean Type',
'Specific Bean Origin',
'Broad Bean Origin',
'Cocoa Percent',
'REF',
'Review Date',
'Rating',
]]
df
"""
Explanation: That's much better, but the columns are also in an odd order. Information about the company is spread across the columns, and so is the information about the cacao bean. Let's order the columns a little more meaningfully.
This order makes a little more sense:
Company Information:
* Company
* Company Location
Chocolate Bar Information
* Bean Type
* Specific Bean Origin
* Broad Bean Origin
* Cocoa Percent
Review Information
* REF
* Review Date
* Rating
We can reorder the columns by specifically selecting the columns in order and reassigning them to the df variable:
End of explanation
"""
df['Company'].isnull().any()
"""
Explanation: Examining Each Column
In this section we will examine each column to learn about the data in the column. We will also make changes to the data as needed.
Column: Company
The 'Company' column is the first in the list, so let's look at it first.
We can tell that the column contains string values. Let's see if any are missing:
End of explanation
"""
df['Company'].unique().size
"""
Explanation: No data is missing. Let's now see how many distinct values there are:
End of explanation
"""
for company in sorted(df['Company'].unique()):
print(company)
"""
Explanation: A few hundred is not a terribly long list. Let's print the list in alphabetical order to see how it looks.
End of explanation
"""
import pandas as pd
df = pd.read_csv('chocolate-bar-ratings.zip')
df.columns = ['Company', 'Specific Bean Origin', 'REF', 'Review Date',
'Cocoa Percent', 'Company Location', 'Rating', 'Bean Type',
'Broad Bean Origin']
df = df[['Company', 'Company Location', 'Bean Type', 'Specific Bean Origin',
'Broad Bean Origin', 'Cocoa Percent', 'REF', 'Review Date', 'Rating']]
# Change 'Shattel' to 'Shattell'
# Change 'Cacao de Origin' to 'Cacao de Origen'
# Print the number of unique company names
"""
Explanation: This is some interesting data. Looking at it raises many questions. For instance:
Should company names like 'Vintage Plantations' and 'Vintage Plantations (Tulicorp)' be changed to the same name?
Is 'Cacao de Origin' a misspelling of 'Cacao de Origen'?
Is 'Shattel' a misspelling of 'Shattell'?
These are the types of things you'll see and questions you'll ask when you encounter a new dataset. Rarely is the data in perfect condition. Often you'll spend a considerable amount of time researching topics related to the data in order to make a call about repairing aspects of the data.
In this particular case, it would be great if we could find a master list of all of the chocolate makers in the world. We could then cross reference the names in the dataset with the names in the master list.
Unfortunately, we don't have a master list of chocolate makers. Instead, we will have to rely on manually inspecting the data and researching when things don't look right.
Let's say that for now we are confident that 'Cacao de Origin' and 'Shattel' are misspellings, so we will correct that data. We aren't confident enough to change any of the names with parentheses in them though.
Let's fix our misspellings!
Exercise 1: Fixing Misspellings
We have decided that we would like to change every instance of 'Cacao de Origin' to 'Cacao de Origen' and every instance of 'Shattel' to 'Shattell' in the 'Company' column of our dataset. Write the code to modify the values. Make sure your code doesn't have any warnings. At the end of the code block, print the number of unique company names when you are done. There should be two fewer unique names than you saw above.
Student Solution
End of explanation
"""
df['Company Location'].isna().any()
"""
Explanation: Column: Company Location
The documentation describes the 'Company Location' column as "Manufacturer base country."
Let's take a look at the data. As always, we'll first check to see if any data is missing.
End of explanation
"""
df['Company Location'].unique().shape
"""
Explanation: No missing data.
Now we can see how many unique values there are:
End of explanation
"""
for location in sorted(df['Company Location'].unique()):
print(location)
"""
Explanation: There are just 60 locations, which is small enough that we can manually inspect the values. Let's print the data.
End of explanation
"""
# Fix at least two issues with the 'Company Location' data
"""
Explanation: Overall, the data looks pretty clean. The column is supposed to contain countries and most entries are countries. There are a few problems with the country data though. We found at least five errors in the data. Let's see what you can find.
Exercise 2: Fixing Company Location Data
There are at least five errors in the company location data that need to be fixed. Some are fairly easy to spot (spelling errors), but some do require knowledge of what constitutes a country. Take some time to look at the data, and see if you can spot at least two of the issues. Write code to fix the issues.
Student Solution
End of explanation
"""
df['Bean Type'].isna().any()
"""
Explanation: Column: Bean Type
Now that our company data is looking a little better, let's move into data about the cocoa going into the chocolate bar itself. The first piece of data is the 'Bean Type'. 'Bean Type' is defined as "The variety (breed) of bean used, if provided". This hints that there will be some missing data. Let's check and see.
End of explanation
"""
df[df['Bean Type'].isna()].count()
"""
Explanation: Indeed, we have missing data. Let's see how much is missing.
End of explanation
"""
df[df['Bean Type'].isna()]
"""
Explanation: Only one row of data is missing 'Bean Type'. Let's take a look at that row.
End of explanation
"""
df.loc[df['Bean Type'].isna(), 'Bean Type'] = 'Unknown'
df[df['Bean Type'].isna()]
"""
Explanation: Now we have a choice to make about how to handle this missing data. Some options include:
Leave it as is
Remove the entire row
Fill in the data with some value
Leaving undefined values lying around in our data can be problematic. Missing values are not counted and can be tricky to program around.
Removing the entire row actually isn't a bad option in this case. Since it is only one row out of over 1,700, it likely won't have too much effect on any analysis that we do.
As for filling in the row, we can:
Use 'Unknown' or some other placeholder value
Actually do research to find the true missing value
See if there is a reasonable value already in the data
In this case, we are just going to replace the missing value with 'Unknown'.
End of explanation
"""
df['Bean Type'].unique().size
"""
Explanation: Now we can see how many unique bean types we have.
End of explanation
"""
for t in sorted(df['Bean Type'].unique()):
print(t)
"""
Explanation: Only 42, let's print them out.
End of explanation
"""
space = sorted(df['Bean Type'].unique())[-1]
print(", ".join("0x{:02x}".format(ord(c)) for c in space))
"""
Explanation: The data looks pretty good. But there is a small little problem. After 'Unknown' there seems to be an empty line. What is that?
It turns out that it is a whitespace character. We thought we had only one missing value, but it looks like there are some values that are present but are white space. Let's see how many.
White space can be tricky because there are many different encodings that render as white space. Let's find out exactly which space character this is.
To get the space(s) we can sort the 'Bean Type' values again and get the last one, since we see the space last in the list. We can then print the space as hexadecimal characters.
End of explanation
"""
df[df['Bean Type'] == chr(0xa0)]
"""
Explanation: We get 0xa0, which is the character code for a non-breaking space (it is not part of ASCII, which only covers 0x00 through 0x7f). This is different from the white space that you get when you hit the space bar, which is encoded as 0x20.
Let's see how many of these there are:
End of explanation
"""
# Your Code Goes Here
"""
Explanation: Almost 900! Let's encode those as 'Unknown' also.
Exercise 3: Fixing Non-Breaking Space
There are non-breaking space characters (0xa0) in the 'Bean Type' column. Replace these values with the word 'Unknown'.
Student Solution
End of explanation
"""
df['Specific Bean Origin'].isna().any()
"""
Explanation: Column: Specific Bean Origin
Let's look at our next column: 'Specific Bean Origin'. 'Specific Bean Origin' is a string column that contains the "specific geo-region of origin for the bar."
First, we'll see if we are missing any data in the 'Specific Bean Origin' column.
End of explanation
"""
df[df['Specific Bean Origin'].apply(lambda x: x.strip()).str.len() == 0]
"""
Explanation: Good, we don't have any 'N/A' data. But we learned from the 'Bean' column that we also need to check string columns for being only white space.
A good way to do this is to apply a function that strips leading and trailing white space from every value in a column, and see if the resulting string is zero-length.
End of explanation
"""
df['Specific Bean Origin'].unique().size
"""
Explanation: Here we can see that no data was returned, so we don't have any 'Specific Bean Origin' values that are only spaces.
If you run this function and get an error about numbers/floats not having a strip function, you likely have N/A values in your column. Always check isna() first.
Now that we know that every row has a 'Specific Bean Origin' value, let's see how many unique values we have.
End of explanation
"""
for origin in sorted(df['Specific Bean Origin'].unique()):
if origin.startswith('B'):
break
print(origin)
"""
Explanation: Over 1,000 values! That is quite a bit of data to manually sift through. Let's look at the first bit of data, up until the first origin that starts with 'B'.
End of explanation
"""
df[(df['Specific Bean Origin'] == 'Akesson Estate') | \
(df['Specific Bean Origin'] == "Akesson's Estate")]
"""
Explanation: This is some pretty ugly data. Most (but not all) rows contain the bean's geographical origin, but some seem to include the year and/or batch numbers as well, and some seem to contain different information entirely ("100 percent").
Looking at the data, we can also see some things that look odd. For instance, "Akesson Estate" and "Akesson's Estate" are likely the same origin. Also, "Ambolikapkly P." clearly looks like a misspelling of "Ambolikapiky P."
We could make all of the "Akesson" origins look the same, but should we? First, let's look at the entire rows for the offending data.
End of explanation
"""
df.loc[df['Specific Bean Origin'] == 'Ambolikapkly P.',
'Specific Bean Origin'] = 'Ambolikapiky P.'
"""
Explanation: It is interesting that all of the bean types and origins are alike. It looks like Akesson('s) Estate serves many companies though.
It is tempting to go ahead and change the "Specific Bean Origin" values to make them match, but it is better to do more research into the industry before making those sorts of changes. You might disagree with this decision, and that is perfectly fine. When working with datasets, you will often have to make difficult calls to deal with ambiguous data. Different people will make different decisions, and that's okay.
The "Ambolikapkly P." issue is a little more obvious and can be validated with a quick internet search. The "Ambolikapkly" spelling shows up very few times and always in the context of this data set. The other spelling is much more common. Let's go ahead and fix that.
End of explanation
"""
for origin in sorted(df['Specific Bean Origin'].unique()):
print(origin)
"""
Explanation: Exercise 4: Finding and Repairing Bad Data
There are a few more obvious errors in the 'Specific Bean Origin' column of the dataset. Print out the column, scan the output, and see if you can find any more errors. Write the code to fix the errors. Find at least one error to fix.
The code to print the dataset is below.
End of explanation
"""
# Repair the data
"""
Explanation: Student Solution
End of explanation
"""
# Find the top 5 bar origins
"""
Explanation: Exercise 5: Top Specific Bean Origins
There are just over 1,000 unique specific bean origins and over 1,700 entries in the dataset. Write code to find the top five most repeated origins. Print the origins and the number of times that each appears in the dataset.
Student Solution
End of explanation
"""
df[df['Broad Bean Origin'].isna()].count()
"""
Explanation: Column: Broad Bean Origin
The 'Broad Bean Origin' is the "broad geo-region of origin for the bean." In theory, this should be broader regions than the 'Specific Bean Origin' that we just worked with.
Let's dive in. First things first, let's check for N/A values.
End of explanation
"""
df[df['Broad Bean Origin'].isna()]
"""
Explanation: It looks like we are missing one origin. Let's take a look at the record.
End of explanation
"""
df[df['Specific Bean Origin'] == 'Madagascar']
"""
Explanation: The one record has a 'Specific Bean Origin' of 'Madagascar'. Let's see if there are any other chocolates from that same specific origin.
End of explanation
"""
df.loc[(df['Specific Bean Origin'] == 'Madagascar') &
(df['Broad Bean Origin'].isna()),
'Broad Bean Origin'] = 'Madagascar'
df[df['Broad Bean Origin'].isna()]
"""
Explanation: Quite a few! And they all have a 'Broad Bean Origin' of 'Madagascar', except for our one missing value. It is probably safe to just set the missing value to 'Madagascar' also.
End of explanation
"""
df[df['Broad Bean Origin'].apply(lambda x: x.strip()).str.len() == 0]
"""
Explanation: Now that we have all of the N/A values handled, let's see if we have an issue with spaces.
End of explanation
"""
spaces_df = df[df['Broad Bean Origin'].apply(
lambda x: x.strip()).str.len() == 0]
for space in spaces_df['Broad Bean Origin'].unique():
print(", ".join("0x{:02x}".format(ord(c)) for c in space))
"""
Explanation: There are spaces in 73 rows of the data. Let's see what those space values are.
End of explanation
"""
has_bbo_idx = df['Broad Bean Origin'].apply(lambda x: x.strip()).str.len() > 0
sbo_bbo = df[has_bbo_idx]['Specific Bean Origin']
sbo_no_bbo = df[~has_bbo_idx]['Specific Bean Origin']
pd.merge(sbo_bbo, sbo_no_bbo)
"""
Explanation: It is that pesky 0xa0 again.
We can fix this by replacing all of the 0xa0 values with 'Unknown'. However, an even better fix would be if we could find similar chocolates with the same 'Specific Bean Origin' and then derive the 'Broad Bean Origin' from that.
Let's see if it is even possible. To do that we can find all of the 'Specific Bean Origin' values for rows with 'Broad Bean Origin' and those without. Then we can use pd.merge() to combine the two. If you remember, pd.merge() returns only the values which appear in both of the given Series. This means that the return value will show us which values appear both in columns with 'Broad Bean Origin' values and those without.
End of explanation
"""
df[(df['Specific Bean Origin'] == 'Orinoco') |
(df['Specific Bean Origin'] == 'Amazonas')]
"""
Explanation: We have overlap, which is good. In theory, we could use the 'Broad Bean Origin' values from bars that have that value to fill in the 'Broad Bean Origin' for bars from the same specific region that don't have it.
But look closely at those 'Specific Bean Origin' values. Dark? Raw? Blend?
Those are specific origins. The only two origins that seem even close to regions are 'Amazonas' and 'Orinoco'. Let's look closer at the data for those regions.
End of explanation
"""
# Your Code Goes Here
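# A minimal sketch of one possible solution to Exercise 6 (described below):
# reuse the whitespace test from earlier to find the 0xa0-only
# 'Broad Bean Origin' values and replace them with the literal string 'Unknown'.
no_bbo_idx = df['Broad Bean Origin'].apply(lambda x: x.strip()).str.len() == 0
df.loc[no_bbo_idx, 'Broad Bean Origin'] = 'Unknown'
df[df['Broad Bean Origin'] == 'Unknown'].head()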
"""
Explanation: Yuck! Amazonas turns out to be a very common location. There are states called Amazonas in Brazil, Venezuela, and Peru. Orinoco is a river that runs through both Venezuela and Colombia.
In neither case do we have definitive data to make the call about the 'Broad Bean Origin' for these rows.
Unfortunately that is how it goes when working with data. You get imperfect data into your system, and then you try to research and find the best fix. But you sometimes just have to accept that you are missing data.
Exercise 6: Unknown Broad Bean Origins
We have a few 'Broad Bean Origin' values of 0xa0. Change those values to the literal string 'Unknown'.
Student Solution
End of explanation
"""
df['Cocoa Percent'].isna().any()
"""
Explanation: Column: Cocoa Percent
Next we will check out the 'Cocoa Percent' column. Remember that 'Cocoa Percent' is "Cocoa percentage (darkness) of the chocolate bar."
As usual, we'll first see if there is any missing data:
End of explanation
"""
df['Cocoa Percent'].sample(10)
"""
Explanation: Nothing missing. Great!
Next, we should probably check to make sure that the percentages fall within a valid range: 0-100 or 0.0-1.0. You might recall that 'Cocoa Percent' isn't actually a numeric column, though, so we can't easily find the range. If we sample the data, we see that it looks like percentages from 0 to 100, but they are stored as strings with '%' symbols appended.
End of explanation
"""
df['Cocoa Percent'].apply(lambda s: float(s[:-1]))
"""
Explanation: We need to remove those percentage signs and convert the digits that remain into numbers. There are a few ways that we can accomplish this.
One is to apply a lambda to each value. The lambda can slice all but the last character of each value and then convert it to a float using core Python syntax.
End of explanation
"""
pd.to_numeric(df['Cocoa Percent'].str.strip('%'))
"""
Explanation: An alternative is to use .str.strip('%') on the Series to remove the percentage sign and then pass the resultant Series to pd.to_numeric() in order to convert the string values to numbers.
End of explanation
"""
df['Cocoa Percent'] = df['Cocoa Percent'].apply(lambda s: float(s[:-1]))
df['Cocoa Percent'].describe()
"""
Explanation: Is one way better than the other? Not necessarily. Feel free to choose whichever feels more natural to you.
Either way, we need to do the conversion and save the new values to 'Cocoa Percent'.
End of explanation
"""
df['REF'].isna().any()
"""
Explanation: We have now converted our 'Cocoa Percent' column from a string to a floating point number. We can see in the output of the call to describe() that the minimum cocoa percentage that we have is 42% and that the maximum is 100%. Both seem like reasonable values for cocoa content in a chocolate bar, so our work here is done.
Column: REF
The 'REF' column is "A value linked to when the review was entered in the database. Higher = more recent." Let's take a look at it.
As always, we should check and see if there are any values missing.
End of explanation
"""
df['REF'].describe()
"""
Explanation: We can describe() the data to see some basic statistics about it.
End of explanation
"""
df['REF'].unique().size
"""
Explanation: Here we can see that the data ranges from 5 through 1952 and that the mean is pretty high.
Are the values unique?
End of explanation
"""
import matplotlib.pyplot as plt
ref_counts = df['REF'].groupby(df['REF']).count()
plt.figure(figsize=(20,10))
plt.bar(ref_counts.index.values, ref_counts)
plt.show()
"""
Explanation: Not unique. So 'REF' isn't a unique identifier for our rows of data.
There isn't much more that we can do with this column. We might want to visualize it to see if we can find any meaning. The numbers themselves aren't particularly interesting, but the quantity of each number might be. Let's find and plot the count of each 'REF'.
End of explanation
"""
df['Review Date'].isna().any()
"""
Explanation: From this chart we can see that 'REF' values repeat between 1 and 9 times with 4 being the most common. Overall, there isn't much interesting data or data repair for this column.
Column: Review Date
Review date is the date that the review for a given row was actually published. It is a numeric column.
First, let's see if any data is missing.
End of explanation
"""
df['Review Date'].describe()
"""
Explanation: No missing data. Good.
Now we can check some basic statistics about the data.
End of explanation
"""
# Reviews Per Year Visualization
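# A minimal sketch of one possible solution to Exercise 7 (described below):
# count the reviews per year and plot them as a bar chart, mirroring the
# approach used for the 'REF' column above.
year_counts = df['Review Date'].groupby(df['Review Date']).count()
plt.figure(figsize=(10, 5))
plt.bar(year_counts.index.values, year_counts)
plt.xlabel('Review Year')
plt.ylabel('Number of Reviews')
plt.show()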
"""
Explanation: We can see publication dates ranging from 2006 through 2017, which seems like reasonable years. If we had seen dates from the 1800s or the future, we should be worried. This range seems well within reason, though.
There isn't much else that we need to do for this column. Since we only have a few years when reviews were posted, we can create a visualization showing how many reviews were posted each year.
Exercise 7: Reviews Per Year
Create a visualization that shows the number of reviews that were created each year.
Student Solution
End of explanation
"""
df['Rating'].isna().any()
"""
Explanation: Column: Rating
We have now made it to the rating column. The rating is the "expert rating for the bar." From the documentation, the possible ratings are:
Rating | Meaning
-------|---------
5 | Elite (Transcending beyond the ordinary limits)
4 | Premium (Superior flavor development, character and style)
3 | Satisfactory (3.0) to praiseworthy (3.75) (well made with special qualities)
2 | Disappointing (Passable but contains at least one significant flaw)
1 | Unpleasant (mostly unpalatable)
Let's take a look at ratings. First off, are any missing?
End of explanation
"""
df['Rating'].describe()
"""
Explanation: Nothing missing. Let's describe the column of data.
End of explanation
"""
sorted(df['Rating'].unique())
"""
Explanation: It looks like our ratings are indeed floating point values and that they range from 1.0 to 5.0. But are they really continuous?
End of explanation
"""
# Your Code Goes Here
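# A minimal sketch of one possible solution to Exercise 8 (described below):
# map each quarter-point rating to its letter grade using the table from the
# exercise, then attach the result as a new 'Grade' column.
rating_to_grade = {
    5.00: 'A', 4.75: 'B', 4.50: 'C', 4.25: 'D', 4.00: 'E', 3.75: 'F',
    3.50: 'G', 3.25: 'H', 3.00: 'I', 2.75: 'J', 2.50: 'K', 2.25: 'L',
    2.00: 'M', 1.75: 'N', 1.50: 'O', 1.25: 'P', 1.00: 'Q',
}
df['Grade'] = df['Rating'].map(rating_to_grade)
df[['Rating', 'Grade']].head()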
"""
Explanation: Interestingly enough, the values don't seem to be continuous, but instead seem to be divided into quarters. Instead of infinite possible values between 1.0 and 5.0, we really have 17 possible values: 1.0, 1.25, 1.5, 1.75, 2.0, 2.25, 2.5, 2.75, 3.0, 3.25, 3.5, 3.75, 4.0, 4.25, 4.5, 4.75, 5.0.
What does this mean for us?
It means that downstream we might be able to use a regression or categorical model in order to predict these values.
If we think about the ratings numbers, their relative position matters. For example, a 4.0 chocolate is better than a 2.0 chocolate. But does the magnitude matter? Is a 4.0 chocolate twice as good as a 2.0 chocolate? What does that even mean?
Let's set our modelers up for success and create a new column that they can use to potentially build models for our data.
Exercise 8: Ratings as Categories
In this exercise we are going to create a new column called 'Grade'. Grade is a categorical rating system that maps the following ratings to grades:
Rating | Grade
-------|------
5.00 | A
4.75 | B
4.50 | C
4.25 | D
4.00 | E
3.75 | F
3.50 | G
3.25 | H
3.00 | I
2.75 | J
2.50 | K
2.25 | L
2.00 | M
1.75 | N
1.50 | O
1.25 | P
1.00 | Q
Create the 'Grade' column and add it to our chocolate bar DataFrame.
Student Solution
End of explanation
"""
| MIT-LCP/mimic-code | mimic-iii/notebooks/aline-aws/aline-awsathena.ipynb | mit |
# Install Python dependencies. This only needs to be run once for each new notebook instance.
!pip install PyAthena
from pyathena import connect
from pyathena.util import as_pandas
from __future__ import print_function
# Import libraries
import datetime
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import os
import boto3
from botocore.client import ClientError
# below is used to print out pretty pandas dataframes
from IPython.display import display, HTML
%matplotlib inline
s3 = boto3.resource('s3')
client = boto3.client("sts")
account_id = client.get_caller_identity()["Account"]
my_session = boto3.session.Session()
region = my_session.region_name
athena_query_results_bucket = 'aws-athena-query-results-'+account_id+'-'+region
try:
s3.meta.client.head_bucket(Bucket=athena_query_results_bucket)
except ClientError:
bucket = s3.create_bucket(Bucket=athena_query_results_bucket)
print('Creating bucket '+athena_query_results_bucket)
cursor = connect(s3_staging_dir='s3://'+athena_query_results_bucket+'/athena/temp').cursor()
# The Glue database name of your MIMIC-III parquet data
gluedatabase="mimiciii"
# location of the queries to generate aline specific materialized views
aline_path = './'
# location of the queries to generate materialized views from the MIMIC code repository
concepts_path = './concepts/'
"""
Explanation: Arterial line study
This notebook reproduces the arterial line study in MIMIC-III. The following is an outline of the notebook:
Generate necessary materialized views in SQL
Combine materialized views and acquire a single dataframe
Write this data to file for use in R
The R code then evaluates whether an arterial line is associated with mortality after propensity matching.
Note that the original arterial line study used a genetic algorithm to select the covariates in the propensity score. We omit the genetic algorithm step, and instead use the final set of covariates described by the authors. For more detail, see:
Hsu DJ, Feng M, Kothari R, Zhou H, Chen KP, Celi LA. The association between indwelling arterial catheters and mortality in hemodynamically stable patients with respiratory failure: a propensity score analysis. CHEST Journal. 2015 Dec 1;148(6):1470-6.
End of explanation
"""
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.angus_sepsis;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'sepsis/angus-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'angus_sepsis\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.heightweight;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'demographics/HeightWeightQuery-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'heightweight\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_vaso_flag;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_vaso_flag-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_vaso_flag\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.ventsettings;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'durations/ventilation-settings-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'ventsettings\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.ventdurations;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(concepts_path,'durations/ventilation-durations-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'ventdurations\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
"""
Explanation: 1 - Generate materialized views
Before generating the aline cohort, we require the following materialized views to be already generated:
angus - from angus.sql
heightweight - from HeightWeightQuery.sql
aline_vaso_flag - from aline_vaso_flag.sql
ventsettings - from ventilation-settings.sql
ventdurations - from ventilation-durations.sql
You can generate the above by executing the below codeblock. If you haven't changed the directory structure, the below should work, otherwise you may need to modify the concepts_path variable above.
End of explanation
"""
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_cohort_all;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_cohort-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_cohort_all\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
# Load in the query from file
query='DROP TABLE IF EXISTS DATABASE.aline_cohort;'
cursor.execute(query.replace("DATABASE", gluedatabase))
f = os.path.join(aline_path,'aline_final_cohort-awsathena.sql')
with open(f) as fp:
query = ''.join(fp.readlines())
# Execute the query
print('Generating table \'aline_cohort\' using {} ...'.format(f),end=' ')
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
query = """
select
icustay_id
, exclusion_readmission
, exclusion_shortstay
, exclusion_vasopressors
, exclusion_septic
, exclusion_aline_before_admission
, exclusion_not_ventilated_first24hr
, exclusion_service_surgical
from DATABASE.aline_cohort_all
"""
cursor.execute(query.replace("DATABASE", gluedatabase))
# Load the result of the query into a dataframe
df = as_pandas(cursor)
# print out exclusions
idxRem = df['icustay_id'].isnull()
for c in df.columns:
if 'exclusion_' in c:
print('{:5d} - {}'.format(df[c].sum(), c))
idxRem[df[c]==1] = True
# final exclusion (excl sepsis/something else)
print('Will remove {} of {} patients.'.format(np.sum(idxRem), df.shape[0]))
print('')
print('')
print('Reproducing the flow of the flowchart from Chest paper.')
# first stay
idxRem = (df['exclusion_readmission']==1) | (df['exclusion_shortstay']==1)
print('{:5d} - removing {:5d} ({:2.2f}%) patients - short stay // readmission.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[~idxRem,:]
idxRem = df['exclusion_not_ventilated_first24hr']==1
print('{:5d} - removing {:5d} ({:2.2f}%) patients - not ventilated in first 24 hours.'.format(
df.shape[0], np.sum(idxRem), 100.0*np.mean(idxRem)))
df = df.loc[df['exclusion_not_ventilated_first24hr']==0,:]
print('{:5d}'.format(df.shape[0]))
idxRem = df['icustay_id'].isnull()
for c in ['exclusion_septic', 'exclusion_vasopressors',
'exclusion_aline_before_admission', 'exclusion_service_surgical']:
print('{:5s} - removing {:5d} ({:2.2f}%) patients - additional {:5d} {:2.2f}% - {}'.format(
'', df[c].sum(), 100.0*df[c].mean(),
np.sum((idxRem==0)&(df[c]==1)), 100.0*np.mean((idxRem==0)&(df[c]==1)),
c))
idxRem = idxRem | (df[c]==1)
df = df.loc[~idxRem,:]
print('{} - final cohort.'.format(df.shape[0]))
"""
Explanation: Now we generate the aline_cohort table using the aline_cohort.sql file.
Afterwards, we can generate the remaining 6 materialized views in any order, as they all depend on only aline_cohort and raw MIMIC-III data.
End of explanation
"""
# get a list of all files in the subfolder
aline_queries = [f for f in os.listdir(aline_path)
# only keep the filename if it is actually a file (and not a directory)
if os.path.isfile(os.path.join(aline_path,f))
# and only keep the filename if it is an SQL file
& f.endswith('.sql')
# and we do *not* want aline_cohort - it's generated above
& (f != 'aline_cohort-awsathena.sql') & (f != 'aline_final_cohort-awsathena.sql') & (f != 'aline_vaso_flag-awsathena.sql')]
for f in aline_queries:
# Load in the query from file
table=f.split('-')
query='DROP TABLE IF EXISTS DATABASE.{};'.format(table[0])
cursor.execute(query.replace("DATABASE", gluedatabase))
print('Executing {} ...'.format(f), end=' ')
with open(os.path.join(aline_path,f)) as fp:
query = ''.join(fp.readlines())
cursor.execute(query.replace("DATABASE", gluedatabase))
print('done.')
"""
Explanation: The following codeblock loads in the SQL from each file in the aline subfolder and executes the query to generate the materialized view. We specifically exclude the aline_cohort, aline_final_cohort, and aline_vaso_flag queries, as we have already executed them above. Again, the order of query execution does not matter for these queries. Note also that the filenames are the same as the created materialized view names for convenience.
End of explanation
"""
# Load in the query from file
query = """
--FINAL QUERY
select
co.subject_id, co.hadm_id, co.icustay_id
-- static variables from patient tracking tables
, co.age
, co.gender
-- , co.gender_num -- gender, 0=F, 1=M
, co.intime as icustay_intime
, co.day_icu_intime -- day of week, text
--, co.day_icu_intime_num -- day of week, numeric (0=Sun, 6=Sat)
, co.hour_icu_intime -- hour of ICU admission (24 hour clock)
, case
when co.hour_icu_intime >= 7
and co.hour_icu_intime < 19
then 1
else 0
end as icu_hour_flag
, co.outtime as icustay_outtime
-- outcome variables
, co.icu_los_day
, co.hospital_los_day
, co.hosp_exp_flag -- 1/0 patient died within current hospital stay
, co.icu_exp_flag -- 1/0 patient died within current ICU stay
, co.mort_day -- days from ICU admission to mortality, if they died
, co.day_28_flag -- 1/0 whether the patient died 28 days after *ICU* admission
, co.mort_day_censored -- days until patient died *or* 150 days (150 days is our censor time)
, co.censor_flag -- 1/0 did this patient have 150 imputed in mort_day_censored
-- aline flags
-- , co.initial_aline_flag -- always 0, we remove patients admitted w/ aline
, co.aline_flag -- 1/0 did the patient receive an aline
, co.aline_time_day -- if the patient received aline, fractional days until aline put in
-- demographics extracted using regex + echos
, bmi.weight as weight_first
, bmi.height as height_first
, bmi.bmi
-- service patient was admitted to the ICU under
, co.service_unit
-- severity of illness just before ventilation
, so.sofa as sofa_first
-- vital sign value just preceding ventilation
, vi.map as map_first
, vi.heartrate as hr_first
, vi.temperature as temp_first
, vi.spo2 as spo2_first
-- labs!
, labs.bun_first
, labs.creatinine_first
, labs.chloride_first
, labs.hgb_first
, labs.platelet_first
, labs.potassium_first
, labs.sodium_first
, labs.tco2_first
, labs.wbc_first
-- comorbidities extracted using ICD-9 codes
, icd.chf as chf_flag
, icd.afib as afib_flag
, icd.renal as renal_flag
, icd.liver as liver_flag
, icd.copd as copd_flag
, icd.cad as cad_flag
, icd.stroke as stroke_flag
, icd.malignancy as malignancy_flag
, icd.respfail as respfail_flag
, icd.endocarditis as endocarditis_flag
, icd.ards as ards_flag
, icd.pneumonia as pneumonia_flag
-- sedative use
, sed.sedative_flag
, sed.midazolam_flag
, sed.fentanyl_flag
, sed.propofol_flag
from DATABASE.aline_cohort co
-- The following tables are generated by code within this repository
left join DATABASE.aline_sofa so
on co.icustay_id = so.icustay_id
left join DATABASE.aline_bmi bmi
on co.icustay_id = bmi.icustay_id
left join DATABASE.aline_icd icd
on co.hadm_id = icd.hadm_id
left join DATABASE.aline_vitals vi
on co.icustay_id = vi.icustay_id
left join DATABASE.aline_labs labs
on co.icustay_id = labs.icustay_id
left join DATABASE.aline_sedatives sed
on co.icustay_id = sed.icustay_id
order by co.icustay_id
"""
cursor.execute(query.replace("DATABASE", gluedatabase))
# Load the result of the query into a dataframe
df = as_pandas(cursor)
df.describe().T
"""
Explanation: Summarize the cohort exclusions before we pull all the data together.
2 - Extract all covariates and outcome measures
We now aggregate all the data from the various views into a single dataframe.
End of explanation
"""
# plot the rest of the distributions
for col in df.columns:
if df.dtypes[col] in ('int64','float64'):
plt.figure(figsize=[12,6])
        plt.hist(df[col].dropna(), bins=50, density=True)  # density replaces the deprecated 'normed' argument
plt.xlabel(col,fontsize=24)
plt.show()
# apply corrections
df.loc[df['age']>89, 'age'] = 91.4
"""
Explanation: Now we need to remove obvious outliers, including correcting the anonymized ages (recorded as implausibly high values, > 200) to 91.4, the median age of patients older than 89. The code simply resets any age > 89 to 91.4.
End of explanation
"""
df.to_csv('aline_data.csv',index=False)
"""
Explanation: 3 - Write to file
End of explanation
"""
| y2ee201/Deep-Learning-Nanodegree | intro-to-tensorflow/intro_to_tensorflow.ipynb | mit |
import hashlib
import os
import pickle
from urllib.request import urlretrieve
import numpy as np
from PIL import Image
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import LabelBinarizer
from sklearn.utils import resample
from tqdm import tqdm
from zipfile import ZipFile
print('All modules imported.')
"""
Explanation: <h1 align="center">TensorFlow Neural Network Lab</h1>
<img src="image/notmnist.png">
In this lab, you'll use all the tools you learned from Introduction to TensorFlow to label images of English letters! The data you are using, <a href="http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html">notMNIST</a>, consists of images of a letter from A to J in different fonts.
The above images are a few examples of the data you'll be training on. After training the network, you will compare your prediction model against test data. Your goal, by the end of this lab, is to make predictions against that test set with at least an 80% accuracy. Let's jump in!
To start this lab, you first need to import all the necessary modules. Run the code below. If it runs successfully, it will print "All modules imported".
End of explanation
"""
def download(url, file):
"""
Download file from <url>
:param url: URL to file
:param file: Local file path
"""
if not os.path.isfile(file):
print('Downloading ' + file + '...')
urlretrieve(url, file)
print('Download Finished')
# Download the training and test dataset.
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_train.zip', 'notMNIST_train.zip')
download('https://s3.amazonaws.com/udacity-sdc/notMNIST_test.zip', 'notMNIST_test.zip')
# Make sure the files aren't corrupted
assert hashlib.md5(open('notMNIST_train.zip', 'rb').read()).hexdigest() == 'c8673b3f28f489e9cdf3a3d74e2ac8fa',\
'notMNIST_train.zip file is corrupted. Remove the file and try again.'
assert hashlib.md5(open('notMNIST_test.zip', 'rb').read()).hexdigest() == '5d3c7e653e63471c88df796156a9dfa9',\
'notMNIST_test.zip file is corrupted. Remove the file and try again.'
# Wait until you see that all files have been downloaded.
print('All files downloaded.')
def uncompress_features_labels(file):
"""
Uncompress features and labels from a zip file
:param file: The zip file to extract the data from
"""
features = []
labels = []
with ZipFile(file) as zipf:
# Progress Bar
filenames_pbar = tqdm(zipf.namelist(), unit='files')
# Get features and labels from all files
for filename in filenames_pbar:
# Check if the file is a directory
if not filename.endswith('/'):
with zipf.open(filename) as image_file:
image = Image.open(image_file)
image.load()
# Load image data as 1 dimensional array
# We're using float32 to save on memory space
feature = np.array(image, dtype=np.float32).flatten()
# Get the the letter from the filename. This is the letter of the image.
label = os.path.split(filename)[1][0]
features.append(feature)
labels.append(label)
return np.array(features), np.array(labels)
# Get the features and labels from the zip files
train_features, train_labels = uncompress_features_labels('notMNIST_train.zip')
test_features, test_labels = uncompress_features_labels('notMNIST_test.zip')
# Limit the amount of data to work with a docker container
docker_size_limit = 150000
train_features, train_labels = resample(train_features, train_labels, n_samples=docker_size_limit)
# Set flags for feature engineering. This will prevent you from skipping an important step.
is_features_normal = False
is_labels_encod = False
# Wait until you see that all features and labels have been uncompressed.
print('All features and labels uncompressed.')
"""
Explanation: The notMNIST dataset is too large for many computers to handle. It contains 500,000 images for just training. You'll be using a subset of this data, 15,000 images for each label (A-J).
End of explanation
"""
# Problem 1 - Implement Min-Max scaling for grayscale image data
def normalize_grayscale(image_data):
"""
Normalize the image data with Min-Max scaling to a range of [0.1, 0.9]
:param image_data: The image data to be normalized
:return: Normalized image data
"""
# TODO: Implement Min-Max scaling for grayscale image data
x_min = image_data.min()
x_max = image_data.max()
a = 0.1
b = 0.9
mult = (b - a)/(x_max - x_min)
return np.add(np.multiply(np.subtract(image_data, x_min), mult),0.1)
### DON'T MODIFY ANYTHING BELOW ###
# Test Cases
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 255])),
[0.1, 0.103137254902, 0.106274509804, 0.109411764706, 0.112549019608, 0.11568627451, 0.118823529412, 0.121960784314,
0.125098039216, 0.128235294118, 0.13137254902, 0.9],
decimal=3)
np.testing.assert_array_almost_equal(
normalize_grayscale(np.array([0, 1, 10, 20, 30, 40, 233, 244, 254,255])),
[0.1, 0.103137254902, 0.13137254902, 0.162745098039, 0.194117647059, 0.225490196078, 0.830980392157, 0.865490196078,
0.896862745098, 0.9])
if not is_features_normal:
train_features = normalize_grayscale(train_features)
test_features = normalize_grayscale(test_features)
is_features_normal = True
print('Tests Passed!')
if not is_labels_encod:
# Turn labels into numbers and apply One-Hot Encoding
encoder = LabelBinarizer()
encoder.fit(train_labels)
train_labels = encoder.transform(train_labels)
test_labels = encoder.transform(test_labels)
# Change to float32, so it can be multiplied against the features in TensorFlow, which are float32
train_labels = train_labels.astype(np.float32)
test_labels = test_labels.astype(np.float32)
is_labels_encod = True
print('Labels One-Hot Encoded')
assert is_features_normal, 'You skipped the step to normalize the features'
assert is_labels_encod, 'You skipped the step to One-Hot Encode the labels'
# Get randomized datasets for training and validation
train_features, valid_features, train_labels, valid_labels = train_test_split(
train_features,
train_labels,
test_size=0.05,
random_state=832289)
print('Training features and labels randomized and split.')
# Save the data for easy access
pickle_file = 'notMNIST.pickle'
if not os.path.isfile(pickle_file):
print('Saving data to pickle file...')
try:
with open('notMNIST.pickle', 'wb') as pfile:
pickle.dump(
{
'train_dataset': train_features,
'train_labels': train_labels,
'valid_dataset': valid_features,
'valid_labels': valid_labels,
'test_dataset': test_features,
'test_labels': test_labels,
},
pfile, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
print('Data cached in pickle file.')
"""
Explanation: <img src="image/Mean Variance - Image.png" style="height: 75%;width: 75%; position: relative; right: 5%">
Problem 1
The first problem involves normalizing the features for your training and test data.
Implement Min-Max scaling in the normalize_grayscale() function to a range of a=0.1 and b=0.9. After scaling, the values of the pixels in the input data should range from 0.1 to 0.9.
Since the raw notMNIST image data is in grayscale, the current values range from a min of 0 to a max of 255.
Min-Max Scaling:
$
X'=a+{\frac {\left(X-X_{\min }\right)\left(b-a\right)}{X_{\max }-X_{\min }}}
$
If you're having trouble solving problem 1, you can view the solution here.
End of explanation
"""
%matplotlib inline
# Load the modules
import pickle
import math
import numpy as np
import tensorflow as tf
from tqdm import tqdm
import matplotlib.pyplot as plt
# Reload the data
pickle_file = 'notMNIST.pickle'
with open(pickle_file, 'rb') as f:
pickle_data = pickle.load(f)
train_features = pickle_data['train_dataset']
train_labels = pickle_data['train_labels']
valid_features = pickle_data['valid_dataset']
valid_labels = pickle_data['valid_labels']
test_features = pickle_data['test_dataset']
test_labels = pickle_data['test_labels']
del pickle_data # Free up memory
print('Data and modules loaded.')
"""
Explanation: Checkpoint
All your progress is now saved to the pickle file. If you need to leave and comeback to this lab, you no longer have to start from the beginning. Just run the code block below and it will load all the data and modules required to proceed.
End of explanation
"""
# All the pixels in the image (28 * 28 = 784)
features_count = 784
# All the labels
labels_count = 10
# TODO: Set the features and labels tensors
features = tf.placeholder(tf.float32)
labels = tf.placeholder(tf.float32)
# TODO: Set the weights and biases tensors
weights = tf.Variable(tf.truncated_normal((features_count, labels_count)))
biases = tf.Variable(tf.zeros(labels_count))
### DON'T MODIFY ANYTHING BELOW ###
#Test Cases
from tensorflow.python.ops.variables import Variable
assert features._op.name.startswith('Placeholder'), 'features must be a placeholder'
assert labels._op.name.startswith('Placeholder'), 'labels must be a placeholder'
assert isinstance(weights, Variable), 'weights must be a TensorFlow variable'
assert isinstance(biases, Variable), 'biases must be a TensorFlow variable'
assert features._shape == None or (\
features._shape.dims[0].value is None and\
features._shape.dims[1].value in [None, 784]), 'The shape of features is incorrect'
assert labels._shape == None or (\
labels._shape.dims[0].value is None and\
labels._shape.dims[1].value in [None, 10]), 'The shape of labels is incorrect'
assert weights._variable._shape == (784, 10), 'The shape of weights is incorrect'
assert biases._variable._shape == (10), 'The shape of biases is incorrect'
assert features._dtype == tf.float32, 'features must be type float32'
assert labels._dtype == tf.float32, 'labels must be type float32'
# Feed dicts for training, validation, and test session
train_feed_dict = {features: train_features, labels: train_labels}
valid_feed_dict = {features: valid_features, labels: valid_labels}
test_feed_dict = {features: test_features, labels: test_labels}
# Linear Function WX + b
logits = tf.matmul(features, weights) + biases
prediction = tf.nn.softmax(logits)
# Cross entropy
cross_entropy = -tf.reduce_sum(labels * tf.log(prediction), reduction_indices=1)
# Training loss
loss = tf.reduce_mean(cross_entropy)
# Create an operation that initializes all variables
init = tf.global_variables_initializer()
# Test Cases
with tf.Session() as session:
session.run(init)
session.run(loss, feed_dict=train_feed_dict)
session.run(loss, feed_dict=valid_feed_dict)
session.run(loss, feed_dict=test_feed_dict)
biases_data = session.run(biases)
assert not np.count_nonzero(biases_data), 'biases must be zeros'
print('Tests Passed!')
# Determine if the predictions are correct
is_correct_prediction = tf.equal(tf.argmax(prediction, 1), tf.argmax(labels, 1))
# Calculate the accuracy of the predictions
accuracy = tf.reduce_mean(tf.cast(is_correct_prediction, tf.float32))
print('Accuracy function created.')
"""
Explanation: Problem 2
Now it's time to build a simple neural network using TensorFlow. Here, your network will be just an input layer and an output layer.
<img src="image/network_diagram.png" style="height: 40%;width: 40%; position: relative; right: 10%">
For the input here, the images have been flattened into a vector of $28 \times 28 = 784$ features. Then, we're trying to predict the letter in each image, so there are 10 output units, one for each label (A through J). Of course, feel free to add hidden layers if you want, but this notebook is built to guide you through a single layer network.
For the neural network to train on your data, you need the following <a href="https://www.tensorflow.org/resources/dims_types.html#data-types">float32</a> tensors:
- features
- Placeholder tensor for feature data (train_features/valid_features/test_features)
- labels
- Placeholder tensor for label data (train_labels/valid_labels/test_labels)
- weights
- Variable Tensor with random numbers from a truncated normal distribution.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#truncated_normal">tf.truncated_normal() documentation</a> for help.
- biases
- Variable Tensor with all zeros.
- See <a href="https://www.tensorflow.org/api_docs/python/constant_op.html#zeros"> tf.zeros() documentation</a> for help.
If you're having trouble solving problem 2, review "TensorFlow Linear Function" section of the class. If that doesn't help, the solution for this problem is available here.
End of explanation
"""
# Change if you have memory restrictions
batch_size = 128
# TODO: Find the best parameters for each configuration
epochs = 5
learning_rate = 0.2
### DON'T MODIFY ANYTHING BELOW ###
# Gradient Descent
optimizer = tf.train.GradientDescentOptimizer(learning_rate).minimize(loss)
# The accuracy measured against the validation set
validation_accuracy = 0.0
# Measurements use for graphing loss and accuracy
log_batch_step = 50
batches = []
loss_batch = []
train_acc_batch = []
valid_acc_batch = []
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer and get loss
_, l = session.run(
[optimizer, loss],
feed_dict={features: batch_features, labels: batch_labels})
# Log every 50 batches
if not batch_i % log_batch_step:
# Calculate Training and Validation accuracy
training_accuracy = session.run(accuracy, feed_dict=train_feed_dict)
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
# Log batches
previous_batch = batches[-1] if batches else 0
batches.append(log_batch_step + previous_batch)
loss_batch.append(l)
train_acc_batch.append(training_accuracy)
valid_acc_batch.append(validation_accuracy)
# Check accuracy against Validation data
validation_accuracy = session.run(accuracy, feed_dict=valid_feed_dict)
loss_plot = plt.subplot(211)
loss_plot.set_title('Loss')
loss_plot.plot(batches, loss_batch, 'g')
loss_plot.set_xlim([batches[0], batches[-1]])
acc_plot = plt.subplot(212)
acc_plot.set_title('Accuracy')
acc_plot.plot(batches, train_acc_batch, 'r', label='Training Accuracy')
acc_plot.plot(batches, valid_acc_batch, 'x', label='Validation Accuracy')
acc_plot.set_ylim([0, 1.0])
acc_plot.set_xlim([batches[0], batches[-1]])
acc_plot.legend(loc=4)
plt.tight_layout()
plt.show()
print('Validation accuracy at {}'.format(validation_accuracy))
"""
Explanation: <img src="image/Learn Rate Tune - Image.png" style="height: 70%;width: 70%">
Problem 3
Below are 2 parameter configurations for training the neural network. In each configuration, one of the parameters has multiple options. For each configuration, choose the option that gives the best accuracy.
Parameter configurations:
Configuration 1
* Epochs: 1
* Learning Rate:
* 0.8
* 0.5
* 0.1
* 0.05
* 0.01
Configuration 2
* Epochs:
* 1
* 2
* 3
* 4
* 5
* Learning Rate: 0.2
The code will print out a Loss and Accuracy graph, so you can see how well the neural network performed.
If you're having trouble solving problem 3, you can view the solution here.
End of explanation
"""
### DON'T MODIFY ANYTHING BELOW ###
# The accuracy measured against the test set
test_accuracy = 0.0
with tf.Session() as session:
session.run(init)
batch_count = int(math.ceil(len(train_features)/batch_size))
for epoch_i in range(epochs):
# Progress bar
batches_pbar = tqdm(range(batch_count), desc='Epoch {:>2}/{}'.format(epoch_i+1, epochs), unit='batches')
# The training cycle
for batch_i in batches_pbar:
# Get a batch of training features and labels
batch_start = batch_i*batch_size
batch_features = train_features[batch_start:batch_start + batch_size]
batch_labels = train_labels[batch_start:batch_start + batch_size]
# Run optimizer
_ = session.run(optimizer, feed_dict={features: batch_features, labels: batch_labels})
# Check accuracy against Test data
test_accuracy = session.run(accuracy, feed_dict=test_feed_dict)
assert test_accuracy >= 0.80, 'Test accuracy at {}, should be equal to or greater than 0.80'.format(test_accuracy)
print('Nice Job! Test Accuracy is {}'.format(test_accuracy))
"""
Explanation: Test
You're going to test your model against your hold out dataset/testing data. This will give you a good indicator of how well the model will do in the real world. You should have a test accuracy of at least 80%.
End of explanation
"""
| OpenWeavers/openanalysis | doc/OpenAnalysis/04 - String Matching.ipynb | gpl-3.0 |
x = 'this is some random text used for illustrative purposes'
x
'this' in x
'not' in x
x.index('is')
x.index('not')
"""
Explanation: String Matching Analysis
Consider a text string $T$ of finite length $m$. Finding whether a pattern string $P$ of length $n$ exists in $T$ is known as string matching. The following are some of the comparison-based string matching algorithms:
Brute Force String Matching Algorithm
Horspool String Matching
Boyer - Moore String Matching
Before looking at the analysis part, we shall examine the language's built-in methods for string matching.
The in operator and str.index()
We have already seen the in operator in several contexts. Let's see the in operator at work again.
End of explanation
"""
from openanalysis.string_matching import StringMatchingAlgorithm,StringMatchingAnalyzer
%matplotlib inline
%config InlineBackend.figure_formats={"svg", "pdf"}
"""
Explanation: Standard import statement
End of explanation
"""
class Horspool(StringMatchingAlgorithm): # Must derive from StringMatchingAlgorithm
def __init__(self):
        StringMatchingAlgorithm.__init__(self, "Horspool String Matching")
self.shift_table = {}
self.pattern = ''
def generate_shift_table(self, pattern): # class is needed to include helper methods
self.pattern = pattern
for i in range(0, len(pattern) - 1):
self.shift_table[pattern[i]] = len(pattern) -i - 1
def match(self, text: str, pattern: str):
StringMatchingAlgorithm.match(self, text, pattern)
self.generate_shift_table(pattern)
i = len(self.pattern) - 1
while i < len(text):
j = 0
while j < len(self.pattern) and text[i-j] == self.pattern[len(self.pattern)-1-j]:
j += 1
self.count += j # Increment count here
if j == len(self.pattern):
return i-len(self.pattern)+1
if text[i] in self.shift_table:
i += self.shift_table[text[i]]
else:
i += len(self.pattern)
return -1
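# For comparison, an illustrative brute-force matcher sketched against the same
# base-class usage demonstrated by the Horspool example above (derive from
# StringMatchingAlgorithm, call the base match() to reset the count, and add
# each character comparison to count); it assumes nothing beyond that example.
class BruteForce(StringMatchingAlgorithm):
    def __init__(self):
        StringMatchingAlgorithm.__init__(self, "Brute Force String Matching")
    def match(self, text: str, pattern: str):
        StringMatchingAlgorithm.match(self, text, pattern)
        for i in range(0, len(text) - len(pattern) + 1):
            j = 0
            while j < len(pattern) and text[i + j] == pattern[j]:
                j += 1
            self.count += j
            if j == len(pattern):
                return i
        return -1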
"""
Explanation: StringMatchingAlgorithm is the base class providing the standard interface for implementing string matching algorithms; StringMatchingAnalyzer analyzes the algorithm and plots the results.
StringMatchingAlgorithm class
Any string matching algorithm to be implemented has to be derived from this class. Now we shall see the data members and member functions of this class.
Data Members
name - Name of the string matching algorithm
count - Holds the number of basic operations performed
Member Functions
__init__(self, name): - Initializes the algorithm with a name
match(self, text, pattern) - The base string matching function. Sets count to 0.
An example: the Horspool String Matching Algorithm
Now we shall implement the class Horspool
End of explanation
"""
StringMatchingAnalyzer(Horspool).analyze(progress=False)
"""
Explanation: StringMatchingAnalyzer class
This class provides the visualization and analysis methods. Let's see its methods in detail
__init__(self, matching): Initializes visualizer with a String Matching Algorithm.
matching is a class derived from StringMatchingAlgorithm
analyze(self,progress = True):
Plots the number of basic operations performed
Both Text length and Pattern Length are varied
Samples are chosen randomly from predefined large data
progress indicates whether Progress Bar has to be shown or not
End of explanation
"""
| UWashington-Astro300/Astro300-W17 | 08_Images_In_Python.ipynb | mit |
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy import linalg
plt.style.use('ggplot')
plt.rc('axes', grid=False) # turn off the background grid for images
"""
Explanation: Multidimensional data - Matrices and Images
End of explanation
"""
my_matrix = np.array([[1,2],[1,1]])
print(my_matrix.shape)
print(my_matrix)
my_matrix_transposed = np.transpose(my_matrix)
print(my_matrix_transposed)
my_matrix_inverse = linalg.inv(my_matrix)
print(my_matrix_inverse)
"""
Explanation: Let us work with the matrix:
$
\left[
\begin{array}{cc}
1 & 2 \\
1 & 1
\end{array}
\right]
$
End of explanation
"""
my_matrix_inverse.dot(my_matrix)
"""
Explanation: numpy matrix multiply uses the dot() function:
End of explanation
"""
my_matrix_inverse * my_matrix_inverse
"""
Explanation: Caution: the * operator will just multiply the matrices on an element-by-element basis:
End of explanation
"""
A = np.array([[1,2],[1,1]])
print(A)
b = np.array([[4],[3]])
print(b)
# Solve by inverting A and then mulitply by b
linalg.inv(A).dot(b)
# Cleaner looking
linalg.solve(A,b)
"""
Explanation: Solving system of linear equations
$$
\begin{array}{c}
x + 2y = 4 \\
x + y = 3 \\
\end{array}
\hspace{2cm}
\left[
\begin{array}{cc}
1 & 2 \\
1 & 1 \\
\end{array}
\right]
\left[
\begin{array}{c}
x\\
y
\end{array}
\right]
=
\left[
\begin{array}{c}
4\\
3\\
\end{array}
\right]
\hspace{2cm}
{\bf A}x = {\bf b}
\hspace{2cm}
\left[
\begin{array}{c}
x\\
y
\end{array}
\right]
=
\left[
\begin{array}{cc}
1 & 2 \\
1 & 1 \\
\end{array}
\right]^{-1}
\left[
\begin{array}{c}
4\\
3\\
\end{array}
\right]
=
\left[
\begin{array}{c}
2\\
1
\end{array}
\right]
$$
End of explanation
"""
A = np.array([[1,3,5],[2,5,1],[2,3,8]])
b = np.array([[10],[8],[3]])
print(linalg.inv(A))
print(linalg.solve(A,b))
"""
Explanation: System of 3 equations
$$
\begin{array}{c}
x + 3y + 5z = 10 \\
2x + 5y + z = 8 \\
2x + 3y + 8z = 3 \\
\end{array}
\hspace{3cm}
\left[
\begin{array}{ccc}
1 & 3 & 5 \\
2 & 5 & 1 \\
2 & 3 & 8
\end{array}
\right]
\left[
\begin{array}{c}
x\\
y\\
z
\end{array}
\right]
=
\left[
\begin{array}{c}
10\\
8\\
3
\end{array}
\right]
$$
End of explanation
"""
print(A)
plt.imshow(A, interpolation='nearest', cmap=plt.cm.Blues);
"""
Explanation: Images are just 2-d arrays - imshow will display 2-d arrays as images
End of explanation
"""
test_image = np.load("./MyData/test_data.npy") # load in a saved numpy array
test_image.ndim, test_image.shape, test_image.dtype
print("The minimum value of the image is {0:.2f}".format(test_image.min()))
print("The maximum value of the image is {0:.2f}".format(test_image.max()))
print("The mean value of the image is {0:.2f}".format(test_image.mean()))
print("The standard deviation of the image is {0:.2f}".format(test_image.std()))
# flatten() collapses n-dimensional data into 1-d
plt.hist(test_image.flatten(),bins=30);
"""
Explanation: Read in some data
End of explanation
"""
another_test_image = test_image + 8
print("The minimum value of the other image is {0:.2f}".format(another_test_image.min()))
print("The maximum value of the other image is {0:.2f}".format(another_test_image.max()))
print("The mean value of the other image is {0:.2f}".format(another_test_image.mean()))
print("The standard deviation of the other image is {0:.2f}".format(another_test_image.std()))
"""
Explanation: Math on images applies to every value (pixel)
End of explanation
"""
plt.imshow(test_image, cmap=plt.cm.gray)
plt.colorbar();
"""
Explanation: Show the image representation of test_image with a colorbar
End of explanation
"""
fig, ax = plt.subplots(1,5,sharey=True)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(test_image, cmap=plt.cm.viridis)
ax[0].set_xlabel('viridis')
ax[1].imshow(test_image, cmap=plt.cm.hot)
ax[1].set_xlabel('hot')
ax[2].imshow(test_image, cmap=plt.cm.magma)
ax[2].set_xlabel('magma')
ax[3].imshow(test_image, cmap=plt.cm.spectral)
ax[3].set_xlabel('spectral')
ax[4].imshow(test_image, cmap=plt.cm.gray)
ax[4].set_xlabel('gray')
"""
Explanation: Colormap reference: http://matplotlib.org/examples/color/colormaps_reference.html
End of explanation
"""
plt.imsave('Splash.png', test_image, cmap=plt.cm.gray) # Write the array test_image to a PNG file
my_png = plt.imread('Splash.png') # Read in the PNG file
print("The original data has a min = {0:.2f} and a max = {1:.2f}".format(test_image.min(), test_image.max()))
print("The PNG file has a min = {0:.2f} and a max = {1:.2f}".format(my_png.min(), my_png.max()))
"""
Explanation: WARNING! Common image formats DO NOT preserve dynamic range of original data!!
Common image formats: jpg, gif, png, tiff
Common image formats will re-scale your data values to [0:1]
Common image formats are NOT suitable for scientific data!
End of explanation
"""
X = np.linspace(-5, 5, 500)
Y = np.linspace(-5, 5, 500)
X, Y = np.meshgrid(X, Y) # turns two 1-d arrays (X, Y) into one 2-d grid
Z = np.sqrt(X**2+Y**2)+np.sin(X**2+Y**2)
Z.min(), Z.max(), Z.mean()
"""
Explanation: Creating images from math
End of explanation
"""
from matplotlib.colors import LightSource
ls = LightSource(azdeg=0,altdeg=40)
shadedfig = ls.shade(Z,plt.cm.copper)
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(shadedfig)
contlevels = [1,2,Z.mean()]
ax[1].axis('equal')
ax[1].contour(Z,contlevels)
ax[2].imshow(shadedfig)
ax[2].contour(Z,contlevels);
"""
Explanation: Fancy Image Display
End of explanation
"""
my_doctor = plt.imread('./MyData/doctor5.png')
print("The image my_doctor has a shape [height,width] of {0}".format(my_doctor.shape))
print("The image my_doctor is made up of data of type {0}".format(my_doctor.dtype))
print("The image my_doctor has a maximum value of {0}".format(my_doctor.max()))
print("The image my_doctor has a minimum value of {0}".format(my_doctor.min()))
plt.imshow(my_doctor,cmap=plt.cm.gray);
"""
Explanation: Reading in images (imread) - Common Formats
End of explanation
"""
fig, ax = plt.subplots(1,4)
fig.set_size_inches(12,6)
fig.tight_layout()
# You can show just slices of the image - Remember: The origin is the upper left corner
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
ax[0].set_xlabel('Original')
ax[1].imshow(my_doctor[0:300,0:100], cmap=plt.cm.gray)
ax[1].set_xlabel('[0:300,0:100]') # 300 rows, 100 columns
ax[2].imshow(my_doctor[:,0:100], cmap=plt.cm.gray) # ":" = whole range
ax[2].set_xlabel('[:,0:100]') # all rows, 100 columns
ax[3].imshow(my_doctor[:,::-1], cmap=plt.cm.gray);
ax[3].set_xlabel('[:,::-1]') # reverse the columns
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
CutLine = 300
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
ax[0].hlines(CutLine, 0, 194, color='b', linewidth=3)
ax[1].plot(my_doctor[CutLine,:], color='b', linewidth=3)
ax[1].set_xlabel("X Value")
ax[1].set_ylabel("Pixel Value")
"""
Explanation: Images are just arrays that can be sliced.
For common image formats the origin is the upper left hand corner
End of explanation
"""
from scipy import ndimage
fig, ax = plt.subplots(1,5)
fig.set_size_inches(14,6)
fig.tight_layout()
ax[0].imshow(my_doctor, cmap=plt.cm.gray)
my_doctor_2 = ndimage.rotate(my_doctor,45,cval=0.75) # cval is the value to set pixels outside of image
ax[1].imshow(my_doctor_2, cmap=plt.cm.gray) # Rotate and reshape
my_doctor_3 = ndimage.rotate(my_doctor,45,reshape=False,cval=0.75) # Rotate and do not reshape
ax[2].imshow(my_doctor_3, cmap=plt.cm.gray)
my_doctor_4 = ndimage.shift(my_doctor,(10,30),cval=0.75) # Shift image
ax[3].imshow(my_doctor_4, cmap=plt.cm.gray)
my_doctor_5 = ndimage.gaussian_filter(my_doctor,5) # Blur image
ax[4].imshow(my_doctor_5, cmap=plt.cm.gray);
"""
Explanation: Simple image manipulation
End of explanation
"""
import astropy.io.fits as fits
my_image_file = "./MyData/bsg01.fits"
image_data = fits.getdata(my_image_file)
image_header = fits.getheader(my_image_file)
image_header
print("The image has a shape [height,width] of {0}".format(image_data.shape))
print("The image is made up of data of type {0}".format(image_data.dtype))
print("The image has a maximum value of {0}".format(image_data.max()))
print("The image has a minimum value of {0}".format(image_data.min()))
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(image_data,cmap=plt.cm.gray)
ax[1].hist(image_data.flatten(),bins=20);
"""
Explanation: ndimage can do much more: http://scipy-lectures.github.io/advanced/image_processing/
FITS file (Flexible Image Transport System) - Standard Astro File Format
FITS format preserves dynamic range of data
FITS format can include lists, tables, images, and combinations of different types of data
FITS files consist of at least two parts - A Header and Data
End of explanation
"""
CopyData = np.copy(image_data)
CutOff = 40
mask = np.where(CopyData > CutOff)
CopyData[mask] = 50 # You can not just throw data away, you have to set it to something.
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(CopyData,cmap=plt.cm.gray)
ax[1].hist(CopyData.flatten(),bins=20);
"""
Explanation: You can use masks on images
End of explanation
"""
another_image_file = "./MyData/bsg02.fits"
another_image_data = fits.getdata(another_image_file)
fig, ax = plt.subplots(1,2)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(image_data, cmap=plt.cm.gray)
ax[1].imshow(another_image_data, cmap=plt.cm.gray);
"""
Explanation: You can add and subtract images
End of explanation
"""
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,6)
fig.tight_layout()
ax[0].imshow(image_data, cmap=plt.cm.gray)
ax[1].imshow(another_image_data, cmap=plt.cm.gray);
real_image = image_data - another_image_data # Subtract the images pixel by pixel
ax[2].imshow(real_image, cmap=plt.cm.gray);
"""
Explanation: The two images above may look the same but they are not! Subtracting the two images reveals the truth.
End of explanation
"""
my_spectra_file = './MyData/Star_G5.fits'
spectra_data = fits.getdata(my_spectra_file)
spectra_header = fits.getheader(my_spectra_file)
spectra_header
# The FITS header has the information to make an array of wavelengths
Start = spectra_header['CRVAL1']
Number = spectra_header['NAXIS1']
Delta = spectra_header['CDELT1']
End = Start + (Number * Delta)
Wavelength = np.arange(Start,End,Delta)
fig, ax = plt.subplots(2,1)
fig.set_size_inches(11,8.5)
fig.tight_layout()
# Full spectra
ax[0].plot(Wavelength, spectra_data, color='b')
ax[0].set_ylabel("Flux")
ax[0].set_xlabel("Wavelength [angstroms]")
# Just the visible range with the hydrogen Balmer lines
ax[1].set_xlim(4000,7000)
ax[1].set_ylim(0.6,1.2)
ax[1].plot(Wavelength, spectra_data, color='b')
ax[1].set_ylabel("Flux")
ax[1].set_xlabel("Wavelength [angstroms]")
H_Balmer = [6563,4861,4341,4102,3970,3889,3835,3646]
ax[1].vlines(H_Balmer,0,2, color='r', linewidth=3, alpha = 0.25)
"""
Explanation: FITS Tables - An astronomical example
Stellar spectra data from the ESO Library of Stellar Spectra
End of explanation
"""
import glob
star_list = glob.glob('./MyData/Star_*.fits')
star_list
fig, ax = plt.subplots(1,1)
fig.set_size_inches(9,5)
fig.tight_layout()
# Full spectra
ax.set_ylabel("Flux")
ax.set_xlabel("Wavelength [angstroms]")
ax.set_ylim(0.0, 1.05)
for file in star_list:
spectra = fits.getdata(file)
spectra_normalized = spectra / spectra.max()
ax.plot(Wavelength, spectra_normalized, label=file)
ax.legend(loc=0,shadow=True);
"""
Explanation: Stellar spectral classes
End of explanation
"""
from astropy.wcs import WCS
from matplotlib.colors import LogNorm
my_image_file = "./MyData/m51.fits"
image_data = fits.getdata(my_image_file)
image_header = fits.getheader(my_image_file)
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,4)
fig.tight_layout()
ax[0].set_title("Upside down!")
ax[1].set_title("Right side up!")
ax[2].set_title("LOG image intensity")
ax[0].imshow(image_data, cmap=plt.cm.gray)
ax[1].imshow(image_data, origin='lower', cmap=plt.cm.gray)
ax[2].imshow(image_data, origin='lower', cmap=plt.cm.gray, norm=LogNorm());
image_header
my_wcs = WCS(image_header)
my_wcs
fig = plt.figure()
ax = fig.add_subplot(111, projection=my_wcs)
fig.set_size_inches(6,6)
fig.tight_layout()
ax.grid(color='r', ls='--')
ax.set_xlabel('Right Ascension (J2000)')
ax.set_ylabel('Declination (J2000)')
ax.imshow(image_data, origin='lower', cmap=plt.cm.gray);
fig = plt.figure()
ax = fig.add_subplot(111, projection=my_wcs)
fig.set_size_inches(6,6)
fig.tight_layout()
ax.set_xlabel('Right Ascension (J2000)')
ax.set_ylabel('Declination (J2000)')
ax.grid(color='r', ls='--')
overlay = ax.get_coords_overlay('galactic')
overlay.grid(color='y', ls='dotted')
overlay[0].set_axislabel('Galactic Longitude')
overlay[1].set_axislabel('Galactic Latitude')
ax.imshow(image_data, origin='lower', cmap=plt.cm.gray);
"""
Explanation: FITS Images - An astronomical example
World Coordinate System wcs
End of explanation
"""
redfilter = plt.imread("./MyData/sphereR.jpg")
redfilter.shape,redfilter.dtype
"""
Explanation: Pseudocolor - All color astronomy images are fake.
Color images are composed of three 2-d images: <img src="images/Layers.png" width="150">
JPG images are 3-d, even grayscale images
End of explanation
"""
redfilter = plt.imread("./MyData/sphereR.jpg")[:,:,0]
redfilter.shape,redfilter.dtype
plt.imshow(redfilter,cmap=plt.cm.gray);
greenfilter = plt.imread("./MyData/sphereG.jpg")[:,:,0]
bluefilter = plt.imread("./MyData/sphereB.jpg")[:,:,0]
fig, ax = plt.subplots(1,3)
fig.set_size_inches(12,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray);
"""
Explanation: We just want to read in one of the three channels
End of explanation
"""
rgb = np.zeros((480,640,3),dtype='uint8')
print(rgb.shape, rgb.dtype)
plt.imshow(rgb,cmap=plt.cm.gray);
"""
Explanation: Need to create a blank 3-d array to hold all of the images
End of explanation
"""
rgb[:,:,0] = redfilter
rgb[:,:,1] = greenfilter
rgb[:,:,2] = bluefilter
fig, ax = plt.subplots(1,4)
fig.set_size_inches(14,3)
fig.tight_layout()
ax[0].set_title("Red Filter")
ax[1].set_title("Green Filter")
ax[2].set_title("Blue Filter")
ax[3].set_title("All Filters Stacked")
ax[0].imshow(redfilter,cmap=plt.cm.gray)
ax[1].imshow(greenfilter,cmap=plt.cm.gray)
ax[2].imshow(bluefilter,cmap=plt.cm.gray)
ax[3].imshow(rgb,cmap=plt.cm.gray);
print("The image rgb has a shape [height,width] of {0}".format(rgb.shape))
print("The image rgb is made up of data of type {0}".format(rgb.dtype))
print("The image rgb has a maximum value of {0}".format(rgb.max()))
print("The image rgb has a minimum value of {0}".format(rgb.min()))
rgb[:,:,0] = redfilter * 1.5
plt.imshow(rgb)
"""
Explanation: Fill the array with the filtered images
End of explanation
"""
| google/starthinker | colabs/cm360_report_replicate.ipynb | apache-2.0 |
!pip install git+https://github.com/google/starthinker
"""
Explanation: CM360 Report Replicate
Replicate a report across multiple networks and advertisers.
License
Copyright 2020 Google LLC,
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
Disclaimer
This is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.
This code generated (see starthinker/scripts for possible source):
- Command: "python starthinker_ui/manage.py colab"
- Command: "python starthinker/tools/colab.py [JSON RECIPE]"
1. Install Dependencies
First install the libraries needed to execute recipes, this only needs to be done once, then click play.
End of explanation
"""
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
"""
Explanation: 2. Set Configuration
This code is required to initialize the project. Fill in required fields and press play.
If the recipe uses a Google Cloud Project:
Set the configuration project value to the project identifier from these instructions.
If the recipe has auth set to user:
If you have user credentials:
Set the configuration user value to your user credentials JSON.
If you DO NOT have user credentials:
Set the configuration client value to downloaded client credentials.
If the recipe has auth set to service:
Set the configuration service value to downloaded service credentials.
End of explanation
"""
FIELDS = {
'auth_read':'user', # Credentials used for reading data.
'recipe_name':'', # Sheet to read ids from.
'auth_write':'service', # Credentials used for writing data.
'account':'', # CM network id.
'recipe_slug':'',
'report_id':'', # CM template report id, for template
'report_name':'', # CM template report name, empty if using id instead.
'delete':False, # Use only to reset the reports if setup changes.
'Aggregate':False, # Append report data to existing table, requires Date column.
}
print("Parameters Set To: %s" % FIELDS)
"""
Explanation: 3. Enter CM360 Report Replicate Recipe Parameters
Provide the name or ID of an existing report.
Run the recipe once to generate the input sheet called .
Enter network and advertiser ids to replicate the report.
Data will be written to BigQuery > > > _All
Modify the values below for your use case; this can be done multiple times, then click play.
End of explanation
"""
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'drive':{
'auth':'user',
'copy':{
'source':'https://docs.google.com/spreadsheets/d/1Su3t2YUWV_GG9RD63Wa3GNANmQZswTHstFY6aDPm6qE/',
'destination':{'field':{'name':'recipe_name','kind':'string','order':1,'description':'Name of document to deploy to.','default':''}}
}
}
},
{
'dataset':{
'auth':{'field':{'name':'auth_write','kind':'authentication','order':1,'default':'service','description':'Credentials used for writing data.'}},
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':2,'default':'','description':'Name of Google BigQuery dataset to create.'}}
}
},
{
'cm_report_replicate':{
'auth':{'field':{'name':'auth_read','kind':'authentication','order':0,'default':'user','description':'Credentials used for reading data.'}},
'report':{
'account':{'field':{'name':'account','kind':'integer','order':3,'default':'','description':'CM network id.'}},
'id':{'field':{'name':'report_id','kind':'integer','order':4,'default':'','description':'CM template report id, for template'}},
'name':{'field':{'name':'report_name','kind':'string','order':5,'default':'','description':'CM template report name, empty if using id instead.'}},
'delete':{'field':{'name':'delete','kind':'boolean','order':6,'default':False,'description':'Use only to reset the reports if setup changes.'}}
},
'replicate':{
'sheets':{
'sheet':{'field':{'name':'recipe_name','kind':'string','order':1,'default':'','description':'Sheet to read ids from.'}},
'tab':'Accounts',
'range':''
}
},
'write':{
'bigquery':{
'dataset':{'field':{'name':'recipe_slug','kind':'string','order':4,'default':''}},
'is_incremental_load':{'field':{'name':'Aggregate','kind':'boolean','order':7,'default':False,'description':'Append report data to existing table, requires Date column.'}}
}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
"""
Explanation: 4. Execute CM360 Report Replicate
This does NOT need to be modified unless you are changing the recipe, click play.
End of explanation
"""
|
Neuroglycerin/neukrill-net-work
|
notebooks/model_modifications/Adding MLP Results.ipynb
|
mit
|
import pylearn2.utils
import pylearn2.config
import theano
import neukrill_net.dense_dataset
import neukrill_net.utils
import numpy as np
%matplotlib inline
import matplotlib.pyplot as plt
import holoviews as hl
%load_ext holoviews.ipython
import sklearn.metrics
cd ..
settings = neukrill_net.utils.Settings("settings.json")
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_norm_global_8aug_pmlp1.json", settings, force=True)
model = pylearn2.utils.serial.load(run_settings['alt_picklepath'])
def plot_monitor(model,c = 'valid_y_nll'):
channel = model.monitor.channels[c]
plt.title(c)
plt.grid(which="both")
plt.plot(channel.example_record,channel.val_record)
return None
"""
Explanation: It seemed like we could handle some extra capacity in our network, so these are the results of adding extra MLP layers.
Loading the pickle
End of explanation
"""
%run ~/repos/pylearn2/pylearn2/scripts/print_monitor.py /disk/scratch/neuroglycerin/models/alexnet_based_norm_global_8aug_pmlp1.pkl
plot_monitor(model,c="valid_objective")
plot_monitor(model,c="valid_y_nll")
plot_monitor(model,c="train_objective")
plot_monitor(model,c="train_y_nll")
"""
Explanation: Adding one MLP layer
The following are the final logs from the best pickle file saved.
End of explanation
"""
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_norm_global_8aug.json", settings, force=True)
old = pylearn2.utils.serial.load(run_settings['pickle abspath'])
plot_monitor(old,c="train_y_nll")
"""
Explanation: A bit of overfitting is happening here, but is it more than normal? Let's look at the network before adding that layer:
End of explanation
"""
run_settings = neukrill_net.utils.load_run_settings(
"run_settings/alexnet_based_norm_global_8aug_pmlp2.json", settings, force=True)
twomlp = pylearn2.utils.serial.load(run_settings['alt_picklepath'])
%run ~/repos/pylearn2/pylearn2/scripts/print_monitor.py /disk/scratch/neuroglycerin/models/alexnet_based_norm_global_8aug_pmlp2.pkl
plot_monitor(twomlp, c="valid_objective")
plot_monitor(twomlp, c="train_objective")
"""
Explanation: It drops faster and ends lower than without the MLP layer. We suspect that, whatever is happening here, the extra MLP layer is not having much of an effect.
Two More MLP Layers
We also ran a model adding two more MLP layers; the following tracks the results of that model:
End of explanation
"""
|
tpin3694/tpin3694.github.io
|
python/data_structure_basics.ipynb
|
mit
|
# Create a list of countries, then print the results
allies = ['USA','UK','France','New Zealand',
'Australia','Canada','Poland']; allies
# Print the length of the list
len(allies)
# Add an item to the list, then print the results
allies.append('China'); allies
# Sort list, then print the results
allies.sort(); allies
# Reverse sort list, then print the results
allies.reverse(); allies
# View the first item of the list
allies[0]
# View the last item of the list
allies[-1]
# Delete the item in the list
del allies[0]; allies
# Add a numeric value to a list of strings
allies.append(3442); allies
"""
Explanation: Title: Data Structure Basics
Slug: data_structure_basics
Summary: Data Structure Basics
Date: 2016-05-01 12:00
Category: Python
Tags: Basics
Authors: Chris Albon
Lists
"A list is a data structure that holds an ordered collection of items i.e. you can store a sequence of items in a list." - A Byte Of Python
Lists are mutable.
End of explanation
"""
# Create a tuple of state names
usa = ('Texas', 'California', 'Maryland'); usa
# Create a tuple of countries
# (notice the USA entry has state names in the nested tuple)
countries = ('canada', 'mexico', usa); countries
# View the third item of the top tuple
countries[2]
# View the third item of the third tuple
countries[2][2]
"""
Explanation: Tuples
"Though tuples may seem similar to lists, they are often used in different situations and for different purposes. Tuples are immutable, and usually contain an heterogeneous sequence of elements that are accessed via unpacking (or indexing (or even by attribute in the case of namedtuples). Lists are mutable, and their elements are usually homogeneous and are accessed by iterating over the list." - Python Documentation
"Tuples are heterogeneous data structures (i.e., their entries have different meanings), while lists are homogeneous sequences." - StackOverflow
Parentheses are optional, but useful.
End of explanation
"""
# Create a dictionary with key:value combos
staff = {'Chris' : 'chris@stater.org',
'Jake' : 'jake@stater.org',
'Ashley' : 'ashley@stater.org',
'Shelly' : 'shelly@stater.org'
}
# Print the value using the key
staff['Chris']
# Delete a dictionary entry based on the key
del staff['Chris']; staff
# Add an item to the dictionary
staff['Guido'] = 'guido@python.org'; staff
"""
Explanation: Dictionaries
"A dictionary is like an address-book where you can find the address or contact details of a person by knowing only his/her name i.e. we associate keys (name) with values (details). Note that the key must be unique just like you cannot find out the correct information if you have two persons with the exact same name." - A Byte Of Python
End of explanation
"""
# Create a set of BRI countries
BRI = set(['brazil', 'russia', 'india'])
# Is India in the set BRI?
'india' in BRI
# Is the US in the set BRI?
'usa' in BRI
# Create a copy of BRI called BRIC
BRIC = BRI.copy()
# Add China to BRIC
BRIC.add('china')
# Is BRIC a super-set of BRI?
BRIC.issuperset(BRI)
# Remove Russia from BRI
BRI.remove('russia')
# What items are in both BRI and BRIC? (& is the set intersection)
BRI & BRIC
"""
Explanation: Sets
Sets are unordered collections of simple objects.
End of explanation
"""
|
Diyago/Machine-Learning-scripts
|
DEEP LEARNING/Pytorch from scratch/word2vec-embeddings/Negative_Sampling.ipynb
|
apache-2.0
|
# read in the extracted text file
with open('data/text8') as f:
text = f.read()
# print out the first 100 characters
print(text[:100])
"""
Explanation: Skip-gram Word2Vec
In this notebook, I'll lead you through using PyTorch to implement the Word2Vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.
Readings
Here are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.
A really good conceptual overview of Word2Vec from Chris McCormick
First Word2Vec paper from Mikolov et al.
Neural Information Processing Systems, paper with improvements for Word2Vec also from Mikolov et al.
Word embeddings
When you're dealing with words in text, you end up with tens of thousands of word classes to analyze; one for each word in a vocabulary. Trying to one-hot encode these words is massively inefficient because most values in a one-hot vector will be set to zero. So, the matrix multiplication that happens in between a one-hot input vector and a first, hidden layer will result in mostly zero-valued hidden outputs.
To solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the "on" input unit.
<img src='assets/lookup_matrix.png' width=50%>
Instead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example "heart" is encoded as 958, "mind" as 18094. Then to get hidden layer values for "heart", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.
There is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix.
Embeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.
Word2Vec
The Word2Vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words.
<img src="assets/context_drink.png" width=40%>
Words that show up in similar contexts, such as "coffee", "tea", and "water" will have vectors near each other. Different words will be further away from one another, and relationships can be represented by distance in vector space.
There are two architectures for implementing Word2Vec:
CBOW (Continuous Bag-Of-Words) and
Skip-gram
<img src="assets/word2vec_architectures.png" width=60%>
In this implementation, we'll be using the skip-gram architecture with negative sampling because it performs better than CBOW and trains faster with negative sampling. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.
Loading Data
Next, we'll ask you to load in data and place it in the data directory
Load the text8 dataset, a file of cleaned-up Wikipedia article text from Matt Mahoney.
Place that data in the data folder in the home directory.
Then you can extract it and delete the archive zip file to save storage space.
After following these steps, you should have one file in your data directory: data/text8.
End of explanation
"""
import utils
# get list of words
words = utils.preprocess(text)
print(words[:30])
# print some stats about this word data
print("Total words in text: {}".format(len(words)))
print("Unique words: {}".format(len(set(words)))) # `set` removes any duplicate words
"""
Explanation: Pre-processing
Here I'm fixing up the text to make training easier. This comes from the utils.py file. The preprocess function does a few things:
It converts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems.
It removes all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations.
It returns a list of words in the text.
This may take a few seconds to run, since our text file is quite large. If you want to write your own functions for this stuff, go for it!
End of explanation
"""
vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)
int_words = [vocab_to_int[word] for word in words]
print(int_words[:30])
"""
Explanation: Dictionaries
Next, I'm creating two dictionaries to convert words to integers and back again (integers to words). This is again done with a function in the utils.py file. create_lookup_tables takes in a list of words in a text and returns two dictionaries.
The integers are assigned in descending frequency order, so the most frequent word ("the") is given the integer 0 and the next most frequent is 1, and so on.
Once we have our dictionaries, the words are converted to integers and stored in the list int_words.
End of explanation
"""
from collections import Counter
import random
import numpy as np
threshold = 1e-5
word_counts = Counter(int_words)
#print(list(word_counts.items())[0]) # dictionary of int_words, how many times they appear
total_count = len(int_words)
freqs = {word: count/total_count for word, count in word_counts.items()}
p_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}
# discard some frequent words, according to the subsampling equation
# create a new list of words for training
train_words = [word for word in int_words if random.random() < (1 - p_drop[word])]
print(train_words[:30])
"""
Explanation: Subsampling
Words that show up often such as "the", "of", and "for" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by
$$ P(w_i) = 1 - \sqrt{\frac{t}{f(w_i)}} $$
where $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.
Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probability $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.
End of explanation
"""
def get_target(words, idx, window_size=5):
''' Get a list of words in a window around an index. '''
R = np.random.randint(1, window_size+1)
start = idx - R if (idx - R) > 0 else 0
stop = idx + R
target_words = words[start:idx] + words[idx+1:stop+1]
return list(target_words)
# test your code!
# run this cell multiple times to check for random window selection
int_text = [i for i in range(10)]
print('Input: ', int_text)
idx=5 # word index of interest
target = get_target(int_text, idx=idx, window_size=5)
print('Target: ', target) # you should get some indices around the idx
"""
Explanation: Making batches
Now that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to define a surrounding context and grab all the words in a window around that word, with size $C$.
From Mikolov et al.:
"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $[ 1: C ]$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels."
Exercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.
Say, we have an input and we're interested in the idx=2 token, 741:
[5233, 58, 741, 10571, 27349, 0, 15067, 58112, 3580, 58, 10712]
For R=2, get_target should return a list of four values:
[5233, 58, 10571, 27349]
End of explanation
"""
def get_batches(words, batch_size, window_size=5):
''' Create a generator of word batches as a tuple (inputs, targets) '''
n_batches = len(words)//batch_size
# only full batches
words = words[:n_batches*batch_size]
for idx in range(0, len(words), batch_size):
x, y = [], []
batch = words[idx:idx+batch_size]
for ii in range(len(batch)):
batch_x = batch[ii]
batch_y = get_target(batch, ii, window_size)
y.extend(batch_y)
x.extend([batch_x]*len(batch_y))
yield x, y
int_text = [i for i in range(20)]
x,y = next(get_batches(int_text, batch_size=4, window_size=5))
print('x\n', x)
print('y\n', y)
"""
Explanation: Generating Batches
Here's a generator function that returns batches of input and target data for our model, using the get_target function from above. The idea is that it grabs batch_size words from a words list. Then for each of those batches, it gets the target words in a window.
End of explanation
"""
def cosine_similarity(embedding, valid_size=16, valid_window=100, device='cpu'):
""" Returns the cosine similarity of validation words with words in the embedding matrix.
Here, embedding should be a PyTorch embedding module.
"""
# Here we're calculating the cosine similarity between some random words and
# our embedding vectors. With the similarities, we can look at what words are
# close to our random words.
# sim = (a . b) / |a||b|
embed_vectors = embedding.weight
# magnitude of embedding vectors, |b|
magnitudes = embed_vectors.pow(2).sum(dim=1).sqrt().unsqueeze(0)
# pick N words from our ranges (0,window) and (1000,1000+window). lower id implies more frequent
valid_examples = np.array(random.sample(range(valid_window), valid_size//2))
valid_examples = np.append(valid_examples,
random.sample(range(1000,1000+valid_window), valid_size//2))
valid_examples = torch.LongTensor(valid_examples).to(device)
valid_vectors = embedding(valid_examples)
similarities = torch.mm(valid_vectors, embed_vectors.t())/magnitudes
return valid_examples, similarities
"""
Explanation: Validation
Here, I'm creating a function that will help us observe our model as it learns. We're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them using the cosine similarity:
<img src="assets/two_vectors.png" width=30%>
$$
\mathrm{similarity} = \cos(\theta) = \frac{\vec{a} \cdot \vec{b}}{|\vec{a}||\vec{b}|}
$$
We can encode the validation words as vectors $\vec{a}$ using the embedding table, then calculate the similarity with each word vector $\vec{b}$ in the embedding table. With the similarities, we can print out the validation words and words in our embedding table semantically similar to those words. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.
End of explanation
"""
import torch
from torch import nn
import torch.optim as optim
class SkipGramNeg(nn.Module):
def __init__(self, n_vocab, n_embed, noise_dist=None):
super().__init__()
self.n_vocab = n_vocab
self.n_embed = n_embed
self.noise_dist = noise_dist
# define embedding layers for input and output words
self.in_embed = nn.Embedding(n_vocab, n_embed)
self.out_embed = nn.Embedding(n_vocab, n_embed)
# Initialize embedding tables with uniform distribution
# I believe this helps with convergence
self.in_embed.weight.data.uniform_(-1, 1)
self.out_embed.weight.data.uniform_(-1, 1)
def forward_input(self, input_words):
input_vectors = self.in_embed(input_words)
return input_vectors
def forward_output(self, output_words):
output_vectors = self.out_embed(output_words)
return output_vectors
def forward_noise(self, batch_size, n_samples):
""" Generate noise vectors with shape (batch_size, n_samples, n_embed)"""
if self.noise_dist is None:
# Sample words uniformly
noise_dist = torch.ones(self.n_vocab)
else:
noise_dist = self.noise_dist
# Sample words from our noise distribution
noise_words = torch.multinomial(noise_dist,
batch_size * n_samples,
replacement=True)
device = "cuda" if model.out_embed.weight.is_cuda else "cpu"
noise_words = noise_words.to(device)
noise_vectors = self.out_embed(noise_words).view(batch_size, n_samples, self.n_embed)
return noise_vectors
class NegativeSamplingLoss(nn.Module):
def __init__(self):
super().__init__()
def forward(self, input_vectors, output_vectors, noise_vectors):
batch_size, embed_size = input_vectors.shape
# Input vectors should be a batch of column vectors
input_vectors = input_vectors.view(batch_size, embed_size, 1)
# Output vectors should be a batch of row vectors
output_vectors = output_vectors.view(batch_size, 1, embed_size)
# bmm = batch matrix multiplication
# correct log-sigmoid loss
out_loss = torch.bmm(output_vectors, input_vectors).sigmoid().log()
out_loss = out_loss.squeeze()
# incorrect log-sigmoid loss
noise_loss = torch.bmm(noise_vectors.neg(), input_vectors).sigmoid().log()
noise_loss = noise_loss.squeeze().sum(1) # sum the losses over the sample of noise vectors
# negate and sum correct and noisy log-sigmoid losses
# return average batch loss
return -(out_loss + noise_loss).mean()
"""
Explanation: SkipGram model
Define and train the SkipGram model.
You'll need to define an embedding layer and a final, softmax output layer.
An Embedding layer takes in a number of inputs, importantly:
* num_embeddings – the size of the dictionary of embeddings, or how many rows you'll want in the embedding weight matrix
* embedding_dim – the size of each embedding vector; the embedding dimension
Below is an approximate diagram of the general structure of our network.
<img src="assets/skip_gram_arch.png" width=60%>
The input words are passed in as batches of input word tokens.
This will go into a hidden layer of linear units (our embedding layer).
Then, finally into a softmax output layer.
We'll use the softmax layer to make a prediction about the context words by sampling, as usual.
Negative Sampling
For every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct example, but only a small number of incorrect, or noise, examples. This is called "negative sampling".
There are two modifications we need to make. First, since we're not taking the softmax output over all the words, we're really only concerned with one output word at a time. Similar to how we use an embedding table to map the input word to the hidden layer, we can now use another embedding table to map the hidden layer to the output word. Now we have two embedding layers, one for input words and one for output words. Secondly, we use a modified loss function where we only care about the true example and a small subset of noise examples.
$$
- \large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)} -
\sum_i^N \mathbb{E}_{w_i \sim P_n(w)}\log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)}
$$
This is a little complicated so I'll go through it bit by bit. $u_{w_O}\hspace{0.001em}^\top$ is the embedding vector for our "output" target word (transposed, that's the $^\top$ symbol) and $v_{w_I}$ is the embedding vector for the "input" word. Then the first term
$$\large \log{\sigma\left(u_{w_O}\hspace{0.001em}^\top v_{w_I}\right)}$$
says we take the log-sigmoid of the inner product of the output word vector and the input word vector. Now the second term, let's first look at
$$\large \sum_i^N \mathbb{E}_{w_i \sim P_n(w)}$$
This means we're going to take a sum over words $w_i$ drawn from a noise distribution $w_i \sim P_n(w)$. The noise distribution is basically our vocabulary of words that aren't in the context of our input word. In effect, we can randomly sample words from our vocabulary to get these words. $P_n(w)$ is an arbitrary probability distribution though, which means we get to decide how to weight the words that we're sampling. This could be a uniform distribution, where we sample all words with equal probability. Or it could be according to the frequency that each word shows up in our text corpus, the unigram distribution $U(w)$. The authors found the best distribution to be $U(w)^{3/4}$, empirically.
Finally, in
$$\large \log{\sigma\left(-u_{w_i}\hspace{0.001em}^\top v_{w_I}\right)},$$
we take the log-sigmoid of the negated inner product of a noise vector with the input vector.
<img src="assets/neg_sampling_loss.png" width=50%>
To give you an intuition for what we're doing here, remember that the sigmoid function returns a probability between 0 and 1. The first term in the loss pushes the probability that our network will predict the correct word $w_O$ towards 1. In the second term, since we are negating the sigmoid input, we're pushing the probabilities of the noise words towards 0.
End of explanation
"""
device = 'cuda' if torch.cuda.is_available() else 'cpu'
# Get our noise distribution
# Using word frequencies calculated earlier in the notebook
word_freqs = np.array(sorted(freqs.values(), reverse=True))
unigram_dist = word_freqs/word_freqs.sum()
noise_dist = torch.from_numpy(unigram_dist**(0.75)/np.sum(unigram_dist**(0.75)))
# instantiating the model
embedding_dim = 300
model = SkipGramNeg(len(vocab_to_int), embedding_dim, noise_dist=noise_dist).to(device)
# using the loss that we defined
criterion = NegativeSamplingLoss()
optimizer = optim.Adam(model.parameters(), lr=0.003)
print_every = 1500
steps = 0
epochs = 5
# train for some number of epochs
for e in range(epochs):
# get our input, target batches
for input_words, target_words in get_batches(train_words, 512):
steps += 1
inputs, targets = torch.LongTensor(input_words), torch.LongTensor(target_words)
inputs, targets = inputs.to(device), targets.to(device)
# input, output, and noise vectors
input_vectors = model.forward_input(inputs)
output_vectors = model.forward_output(targets)
noise_vectors = model.forward_noise(inputs.shape[0], 5)
# negative sampling loss
loss = criterion(input_vectors, output_vectors, noise_vectors)
optimizer.zero_grad()
loss.backward()
optimizer.step()
# loss stats
if steps % print_every == 0:
print("Epoch: {}/{}".format(e+1, epochs))
print("Loss: ", loss.item()) # avg batch loss at this point in training
valid_examples, valid_similarities = cosine_similarity(model.in_embed, device=device)
_, closest_idxs = valid_similarities.topk(6)
valid_examples, closest_idxs = valid_examples.to('cpu'), closest_idxs.to('cpu')
for ii, valid_idx in enumerate(valid_examples):
closest_words = [int_to_vocab[idx.item()] for idx in closest_idxs[ii]][1:]
print(int_to_vocab[valid_idx.item()] + " | " + ', '.join(closest_words))
print("...\n")
"""
Explanation: Training
Below is our training loop, and I recommend that you train on GPU, if available.
End of explanation
"""
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
# getting embeddings from the embedding layer of our model, by name
embeddings = model.in_embed.weight.to('cpu').data.numpy()
viz_words = 380
tsne = TSNE()
embed_tsne = tsne.fit_transform(embeddings[:viz_words, :])
fig, ax = plt.subplots(figsize=(16, 16))
for idx in range(viz_words):
plt.scatter(*embed_tsne[idx, :], color='steelblue')
plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)
"""
Explanation: Visualizing the word vectors
Below we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.
End of explanation
"""
|
rjleveque/binder_experiments
|
clawpack_tests/pyclaw1.ipynb
|
bsd-2-clause
|
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
from clawpack import pyclaw
from clawpack import riemann
"""
Explanation: A quick introduction to PyClaw
PyClaw is a solver for hyperbolic PDEs, based on Clawpack. You can read more about PyClaw in this paper (free version here).
In this notebook, we explore some basic PyClaw functionality. Before running the notebook, you should install Clawpack. The quick way is to just
pip install clawpack
End of explanation
"""
claw = pyclaw.Controller()
claw.tfinal = 1.0
claw.keep_copy = True # Keep solution data in memory for plotting
claw.output_format = None # Don't write solution data to file
claw.num_output_times = 50 # Write 50 output frames
"""
Explanation: Setting up a problem
To solve a problem, we'll need to create the following:
A controller, which handles the running, output, and can be used for plotting (you don't absolutely need a controller, but it makes life simpler)
A solver, which is responsible for actually evolving the solution in time. Here we'll need to specify the equations to be solved and the boundary conditions.
A domain over which to solve the problem
A solution, where we will provide the initial data. After running, the solution will contain -- you guessed it! -- the solution.
Let's start by creating a controller and specifying the simulation end time:
End of explanation
"""
riemann.
"""
Explanation: Riemann solvers
Like many solvers for nonlinear hyperbolic PDEs, PyClaw uses Riemann solvers. By specifying a Riemann solver, we will specify the system of PDEs that we want to solve.
Place your cursor at the end of the line in the box below and hit 'Tab' (for autocompletion). You'll see a dropdown list of all the Riemann solvers currently available in PyClaw. The ones with 'py' at the end of the name are written in pure Python; the others are Fortran, wrapped with f2py.
Note that this won't work if you're viewing the notebook online as HTML; you need to actually be running it.
End of explanation
"""
riemann_solver = riemann.acoustics_1D
claw.solver = pyclaw.ClawSolver1D(riemann_solver)
"""
Explanation: We'll solve the one-dimensional acoustics equations:
$$\begin{align}
p_t + K u_x & = 0 \
u_t + \frac{1}{\rho} p_x & = 0.
\end{align}$$
Here $p, u$ are the pressure and velocity as functions of $x,t$, while $\rho, K$ are constants representing the density and bulk modulus of the material transmitting the waves. We'll specify these constants later.
We can do this using the first solver in the list. Notice that the solver we create here belongs to the controller that we created above.
End of explanation
"""
claw.solver.all_bcs = pyclaw.BC.periodic
"""
Explanation: We also need to specify boundary conditions. We'll use periodic BCs, so that waves that go off one side of the domain come back in at the other:
End of explanation
"""
domain = pyclaw.Domain( (0.,), (1.,), (100,))
"""
Explanation: The problem domain
Next we need to specify the domain and the grid. We'll solve on the unit line $[0,1]$ using 100 grid cells. Note that each argument to the Domain constructor must be a tuple:
End of explanation
"""
claw.solution = pyclaw.Solution(claw.solver.num_eqn,domain)
"""
Explanation: The initial solution
Next we create a solution object that belongs to the controller and extends over the domain we specified:
End of explanation
"""
x=domain.grid.x.centers
bet=100; gam=5; x0=0.75
claw.solution.q[0,:] = np.exp(-bet * (x-x0)**2) * np.cos(gam * (x - x0))
claw.solution.q[1,:] = 0.
plt.plot(x, claw.solution.q[0,:],'-o')
"""
Explanation: The initial data is specified in an array named $q$. The pressure is contained in q[0,:] and the velocity in q[1,:]. We'll specify a wavepacket for the pressure and zero velocity.
End of explanation
"""
riemann_solver.cparam.
"""
Explanation: Problem-specific parameters
The Riemann solver we've chosen requires some physical parameters to be specified. Press 'Tab' in the box below and you'll see what they are.
End of explanation
"""
import numpy as np
density = 1.0
bulk_modulus = 1.0
impedance = np.sqrt(density*bulk_modulus)
sound_speed = np.sqrt(bulk_modulus/density)
claw.solution.state.problem_data = {
    'rho' : density,
    'bulk': bulk_modulus,
    'zz' : impedance,
    'cc' : sound_speed
}
"""
Explanation: Two of these parameters are $\rho$ and $K$ in the equations above. The other two are the impedance $Z = \sqrt{\rho K}$ and sound speed $c = \sqrt{K/\rho}$. We specify these parameters in a dictionary that belongs to the solution object:
End of explanation
"""
status = claw.run()
"""
Explanation: Finally, let's run the simulation.
End of explanation
"""
pressure = claw.frames[50].q[0,:]
plt.plot(x,pressure,'-o')
"""
Explanation: Plotting
Now we'll plot the results, which are contained in claw.frames[:]. It's simple to plot a single frame with matplotlib:
End of explanation
"""
from matplotlib import animation
import matplotlib.pyplot as plt
from clawpack.visclaw.JSAnimation import IPython_display
import numpy as np
fig = plt.figure()
ax = plt.axes(xlim=(0, 1), ylim=(-0.2, 1.2))
frame = claw.frames[0]
pressure = frame.q[0,:]
line, = ax.plot([], [], lw=2)
def fplot(frame_number):
frame = claw.frames[frame_number]
pressure = frame.q[0,:]
line.set_data(x,pressure)
return line,
animation.FuncAnimation(fig, fplot, frames=len(claw.frames), interval=30)
"""
Explanation: To examine the evolution more thoroughly, it's nice to see all the frames in sequence. We can do this as follows.
NOTE: The JSAnimation does not work below. If you execute this cell and try to start the animation, the javascript goes into an infinite loop.
End of explanation
"""
|
WomensCodingCircle/CodingCirclePython
|
Lesson14_NumpyAndMatplotlib/numpy.ipynb
|
mit
|
# by convention, we typically import numpy as the alias np
import numpy as np
"""
Explanation: Adapted from Scientific Python: Part 1 (lessons/thw-numpy/numpy.ipynb)
Introducing NumPy
NumPy is a Python package implementing efficient collections of specific types of data (generally numerical), similar to the standard array
module (but with many more features). NumPy arrays differ from lists and tuples in that the data is contiguous in memory. A Python list,
[0, 1, 2], in contrast, is actually an array of pointers to Python objects representing each number. This allows NumPy arrays to be
considerably faster for numerical operations than Python lists/tuples.
End of explanation
"""
#np?
#np.
"""
Explanation: Let's see what numpy can do.
End of explanation
"""
print((np.sqrt(4)))
print((np.pi)) # a constant
print((np.sin(np.pi)))
"""
Explanation: We can try out some of those constants and functions:
End of explanation
"""
arr1 = np.array([1, 2.3, 4])
# Type of a numpy array
print((type(arr1)))
# Type of the data inside a numpy array dtype=data type
print((arr1.dtype))
"""
Explanation: "That's great," you're thinking. "math already has all of those functions and constants." But that's not the real beauty of NumPy.
TRY IT
Find the square root of pi using numpy functions and constants
Numpy arrays (ndarrays)
Creating a NumPy array is as simple as passing a sequence to numpy.array:
Numpy arrays are collections of things, all of which must be the same type, that work
similarly to lists (as we've described them so far). The most important are:
You can easily perform elementwise operations (and matrix algebra) on arrays
Arrays can be n-dimensional
Arrays must be pre-allocated (ie, there is no equivalent to append)
Arrays can be created from existing collections such as lists, or instantiated "from scratch" in a
few useful ways.
End of explanation
"""
print(('2 rows, 3 columns of zeros:\n', np.zeros((2,3))))
print(('4x4 identity matrix:\n', np.identity(4)))
squared = []
for x in range(5):
squared.append(x**2)
print(squared)
a = np.array(squared)
b = np.zeros_like(a)
print(('a:\n', a))
print(('b:\n', b))
"""
Explanation: TRY IT
Create an array from the list [0,1,2] and print out its dtype
Datatype options
Choose your datatype based on how large the largest values could be, and how much memory you expect to use
bool_ - Boolean (True or False) stored as a byte
int_ - Default integer type (same as C long; normally either int64 or int32)
int8 - Byte (-128 to 127)
int16 - Integer (-32768 to 32767)
int32 - Integer (-2147483648 to 2147483647)
int64 - Integer (-9223372036854775808 to 9223372036854775807)
uint8 - Unsigned integer (0 to 255)
uint16 - Unsigned integer (0 to 65535)
uint32 - Unsigned integer (0 to 4294967295)
uint64 - Unsigned integer (0 to 18446744073709551615)
float_ - Shorthand for float64.
float16 - Half precision float: sign bit, 5 bits exponent, 10 bits mantissa
float32 - Single precision float: sign bit, 8 bits exponent, 23 bits mantissa
float64 - Double precision float: sign bit, 11 bits exponent, 52 bits mantissa
complex_ - Shorthand for complex128.
complex64 - Complex number, represented by two 32-bit floats (real and imaginary components)
complex128 - Complex number, represented by two 64-bit floats (real and imaginary components)
Creating Arrays
There are many other ways to create NumPy arrays, such as np.identity, np.zeros, np.zeros_like, np.ones, np.ones_like
End of explanation
"""
c = np.ones((15, 30))
print(('number of dimensions of c:', c.ndim))
print(('length of c in each dimension:', c.shape))
x = np.array([[[1,2,3],[4,5,6],[7,8,9]] , [[0,0,0],[0,0,0],[0,0,0]]])
print(('number of dimensions of x:', x.ndim))
print(('length of x in each dimension:', x.shape))
"""
Explanation: These arrays have attributes, like .ndim and .shape that tell us about the number and length of the dimensions.
The dimension of an array is the number of indices needed to select an element. Thus, if the array is seen as a function on a set of possible index combinations, it is the dimension of the space of which its domain is a discrete subset. Thus a one-dimensional array is a list of data, a two-dimensional array a rectangle of data, a three-dimensional array a block of data, etc.
The shape is the number of elements in each dimension of data
End of explanation
"""
print("Arange")
print((np.arange(5)))
# Args: start, stop, number of elements
print("Linspace")
print((np.linspace(5, 10, 5)))
# logspace can also take a base argument, by default it is 10
print("Logspace")
print((np.logspace(0, 1, 5)))
print((np.logspace(0, 1, 5, base=2)))
"""
Explanation: NumPy has its own range() function, np.arange() (stands for array-range), that is more efficient for building larger arrays. It functions in much the same way as range().
NumPy also has linspace() and logspace(), that can generate equally spaced samples between a start-point and an end-point. Find out more with np.linspace?.
End of explanation
"""
np.loadtxt?
"""
Explanation: TRY IT
Create a numpy array with 8 rows and 50 columns of 0's
Creating numpy arrays from text files
You can use loadtxt to load data from a text file (csv or tab-delimited data)
End of explanation
"""
np.loadtxt('simple.csv', delimiter=',')
"""
Explanation: The simplest way to use it is to just give it a file name. By default, your data will be loaded as floats with whitespace being the delimiter
my_arr = np.loadtxt('myfile.txt')
More likely you will need to use some of the keyword arguments. like dtype, delimiter, skiprows, or usecols Docs available here: http://docs.scipy.org/doc/numpy/reference/generated/numpy.loadtxt.html
my_array = loadtxt('myfile.csv', usecols=[1,2,3,4,5,6,7,8,9,10,11,12], delimiter=',')
End of explanation
"""
A = np.arange(5)
B = np.arange(5, 10)
print(('A', A))
print(('B', B))
print(('A+B', A+B))
print(('B-A', B-A))
print(('A*B', A*B))
"""
Explanation: TRY IT
Load the file 'example.tsv', a tab-delimited file. Once you have that working, only load the odd-numbered columns (1, 3, 5).
Arithmetic with ndarrays
Standard arithmetic operators perform element-wise operations on arrays of the same size.
End of explanation
"""
A = np.arange(5)
print(('A', A))
print(('A+10', A+10))
print(('2 * A', 2*A))
print(('A ** 2', A**2))
"""
Explanation: In addition, if one of the arguments is a scalar, that value will be applied to all the elements of the array.
scalar - a quantity possessing only magnitude (in this case we mean a single number, either an int or a float)
End of explanation
"""
print((A.dot(B)))
print((np.dot(A, B)))
"""
Explanation: Linear algebra with arrays
You can use arrays as vectors and matrices in linear algebra operations
Specifically, you can perform matrix/vector multiplication between arrays, by using the .dot method, or the np.dot function
dot product - the dot product between two vectors is based on the projection of one vector onto another.
End of explanation
"""
# Numpy arrays
A = np.arange(5)*2
print(A)
# Lists
B = list(range(5))*2
print(B)
"""
Explanation: If you are planning on doing serious linear algebra, you might be better off using the np.matrix object instead of np.array.
Numpy 'gotchas'
Multiplication and Addition
As you may have noticed above, since NumPy arrays are modeled more closely after vectors and matrices, multiplying by a scalar will multiply each element of the array, whereas multiplying a list by a scalar will repeat that list N times.
End of explanation
"""
# Numpy arrays
A = np.arange(5) + np.arange(5)
print(A)
# Lists
B = list(range(5)) + list(range(5))
print(B)
"""
Explanation: Similarly, when adding two numpy arrays together, we get the vector sum back, whereas when adding two lists together, we get the concatenation back.
End of explanation
"""
arr1 = np.array([1, 2, 3, 4, 5])
arr2 = np.array([1, 1, 3, 3, 5])
print((arr1 == arr2))
c = (arr1 == arr2)
print((type(c)))
print((c.dtype))
"""
Explanation: Boolean operators work on arrays too, and they return boolean arrays
Much like the basic arithmetic operations we discussed above, comparison operations are performed element-wise. That is, rather than returning a
single boolean, comparison operators compare each element in both arrays pairwise, and return an array of booleans (if the sizes of the input
arrays are incompatible, the comparison will simply return False). For example:
End of explanation
"""
print(arr1)
print(c)
print((arr1[c]))
"""
Explanation: You can get a portion of an array by using a boolean array as the index. This returns a new array containing only the elements at positions where the mask is True.
End of explanation
"""
print((np.all(c)))
print((c.all()))
print((c.any()))
"""
Explanation: Note: You can use the methods .any() and .all() or the functions np.any and np.all to return a single boolean indicating whether any or all values in the array are True, respectively.
End of explanation
"""
A = np.arange(5)
B = A[0:1]
B[0] = 42
print(A)
A = list(range(5))
B = A[0:1]
B[0] = 42
print(A)
"""
Explanation: TRY IT
Create a boolean array for arr1 showing where values are >= 3
Views vs. Copies
In order to be as efficient as possible, numpy uses "views" instead of copies wherever possible. That is, numpy arrays derived from another base array generally refer to the ''exact same data'' as the base array. The consequence of this is that modification of these derived arrays will also modify the base array. Slices of arrays are always views, unlike slices of lists or tuples, which are always copies. Indexing an array with an array of integer indices or with a boolean array, on the other hand, returns a copy.
End of explanation
"""
a = np.array([1,2,3])
print((a[0:2]))
"""
Explanation: Indexing arrays
In addition to the usual methods of indexing lists with an integer (or with a series of colon-separated integers for a slice), numpy allows you
to index arrays in a wide variety of different ways for more advanced operations.
First, the simple way:
End of explanation
"""
c = np.random.rand(3,3)
print(c)
print((c[1:3,0:2]))
print(a)
c[0,:] = a
print(c)
"""
Explanation: How can we index if the array has more than one dimension?
End of explanation
"""
|
UserAd/data_science
|
Twitter bots/Botnet search.ipynb
|
mit
|
seeds = ['volya_belousova', 'egor4rgurev', 'kirillfrolovdw', 'ilyazhuchhj']
auth = tweepy.OAuthHandler(OAUTH_KEY, OAUTH_SECRET)
auth.set_access_token(ACCESS_TOKEN, ACCESS_TOKEN_SECRET)
api = tweepy.API(auth, wait_on_rate_limit=True, wait_on_rate_limit_notify=True)
graph = Graph(user=NEO4J_USER, password=NEO4J_SECRET)
def get_follwers_by_id(account_id):
ids = []
for page in tweepy.Cursor(api.followers_ids, user_id=account_id).pages():
print("FOLLOWERS: Next page for %s" % account_id)
ids.extend(page)
return ids
def get_friends_by_id(account_id):
ids = []
for page in tweepy.Cursor(api.friends_ids, user_id=account_id).pages():
print("FRIENDS: Next page for %s" % account_id)
ids.extend(page)
return ids
def get_friends(account):
ids = []
for page in tweepy.Cursor(api.friends_ids, screen_name=account).pages():
print("Next page for %s" % account)
ids.extend(page)
return ids
def chunks(l, n):
for i in range(0, len(l), n):
yield l[i:i + n]
"""
Explanation: Select seeds for the network search
I select a small bot network (1000-1500 accounts) and pick 4 random members from it as seeds
End of explanation
"""
friend_ids = {}
for account in seeds:
friend_ids[account] = get_friends(account)
commons = {}
for first in seeds:
for second in seeds:
if first != second:
commons[(first, second)] = list(set(friend_ids[first]) & set(friend_ids[second]))
all_users = friend_ids[seeds[0]]
for name in seeds:
all_users = list(set(all_users) | set(friend_ids[name]))
"""
Explanation: Now search for friends of seed users
End of explanation
"""
display("Common users: {0}".format(len(all_users)))
html = ["<table width=100%>"]
html.append('<tr><td></td>')
for name in seeds:
html.append('<td>{0}</td>'.format(name))
html.append('</tr>')
for first in seeds:
html.append('<tr><td>{0}</td>'.format(first))
for second in seeds:
if first != second:
html.append('<td>{0}</td>'.format(len(commons[(first,second)])))
else:
html.append('<td>x</td>')
html.append("</tr>")
html.append('</table>')
HTML(''.join(html))
"""
Explanation: Show common users in total and per seed user
End of explanation
"""
graph.run("CREATE CONSTRAINT ON (u:UserRes) ASSERT u.id IS UNIQUE")
processed_users = []
for user_id in all_users:
if user_id not in processed_users:
user = Node("UserRes", id=user_id)
graph.merge(user)
try:
for friend_id in get_follwers_by_id(user_id):
if friend_id in all_users:
friend = Node("UserRes", id=friend_id)
graph.merge(friend)
graph.merge(Relationship(friend, "FRIEND_OF", user))
for friend_id in get_friends_by_id(user_id):
if friend_id in all_users:
friend = Node("UserRes", id=friend_id)
graph.merge(friend)
graph.merge(Relationship(user, "FRIEND_OF", friend))
except tweepy.TweepError:
print("User {0} has protected followers/friends".format(user_id))
processed_users.append(user_id)
print(float(len(processed_users)) / float(len(all_users)) * 100.0)
"""
Explanation: Now we crawl each user's followers and friends and populate the Neo4j database
End of explanation
"""
query = """
MATCH (user1:UserRes)-[:FRIEND_OF]->(user2:UserRes),
(user2:UserRes)-[:FRIEND_OF]->(user1)
RETURN user1.id, user2.id
"""
data = graph.run(query)
ig = IGraph.TupleList(data, weights=False)
ig.es["width"] = 1
ig.simplify(combine_edges={ "width": "sum" })
"""
Explanation: Get all mutually connected users from Neo4j and build the graph
End of explanation
"""
clusters = IGraph.community_fastgreedy(ig)
clusters = clusters.as_clustering()
print("Found %d clusters" % len(clusters))
"""
Explanation: Let's cluster the graph and search for communities
End of explanation
"""
nodes = [{"id": node.index, "name": node["name"]} for node in ig.vs]
for node in nodes:
node["cluster"] = clusters.membership[node["id"]]
nodes_df = pd.DataFrame(nodes)
edges = [{"source": x[0], "target": x[1]} for x in ig.get_edgelist()]
edges_df = pd.DataFrame(edges)
edges_counts = edges_df.groupby('source').count().reset_index().rename(columns = {'target': 'count'})
"""
Explanation: Let's build dataframes of nodes (with their cluster membership) and edges
End of explanation
"""
nodes_df.groupby('cluster').count()
"""
Explanation: Let's look at all the clusters more closely
End of explanation
"""
first_cluster = nodes_df[nodes_df["cluster"] == 0][["id", "name"]]
"""
Explanation: We have only two clusters with a significant user count.
Let's check the first one
End of explanation
"""
first_cluster_counts = first_cluster.set_index('id').join(edges_counts.set_index('source')).reset_index()
first_cluster_counts["count"].hist()
"""
Explanation: Join the edge counts to the users
End of explanation
"""
for group in range(20):
start = group * 100
stop = (group + 1) * 100
users_slice = first_cluster_counts[(first_cluster_counts["count"] > start) & (first_cluster_counts["count"] < stop)]
print("Users from %d to %d has %d" %(start, stop, users_slice.count()[0]))
display(users_slice[:10])
"""
Explanation: Let's look at all the groups
End of explanation
"""
filtered_bots = first_cluster_counts[(first_cluster_counts["count"] > 1200) & (first_cluster_counts["count"] < 1900)]
print("We found %s bots in first approximation" % filtered_bots.count()[0])
"""
Explanation: It looks like most bot accounts have follower/following counts between 1200 and 1900.
Let's filter on that
End of explanation
"""
first_cluster_bots = []
for group in chunks(filtered_bots["name"].values, 100):
for user in api.lookup_users(user_ids=list(group)):
first_cluster_bots.append(user)
locations = [user.location for user in first_cluster_bots]
first_cluster_bots[0].favourites_count
possible_bot_users = pd.DataFrame([{'name': user.name, 'id': user.id, 'location': user.location, 'screen_name': user.screen_name, 'followers': user.followers_count, 'friends': user.friends_count, 'created_at': user.created_at, 'favorites': user.favourites_count} for user in first_cluster_bots])
possible_bot_users.hist()
possible_bot_users[["id", "location"]].groupby('location').count().plot(kind='bar')
"""
Explanation: Now we collect all the profile information for these accounts and search for correlations
End of explanation
"""
moscow_users = possible_bot_users[possible_bot_users["location"] == u'Москва']
moscow_users.hist()
moscow_users[:10]
"""
Explanation: OK, we have two significant location values: Moscow and New York. Let's split the dataset
End of explanation
"""
ny_users = possible_bot_users[possible_bot_users["location"] == u'New York, USA']
ny_users.hist()
ny_users[:10]
"""
Explanation: Now check NY users
End of explanation
"""
print("Moscow bots: %d, NY bots: %d, Total: %d" % (moscow_users.count()[0], ny_users.count()[0], moscow_users.count()[0] + ny_users.count()[0]))
"""
Explanation: Conclusion
We have one Twitter bot network operating in two languages: Russian and English.
All of the bots use deep linking and post random sentences every hour.
End of explanation
"""
ny_users.append(moscow_users).to_csv("./moscow_ny_bots.csv", encoding='utf8')
"""
Explanation: Now export the Moscow and NY users to CSV
End of explanation
"""
|
fweik/espresso
|
doc/tutorials/visualization/visualization.ipynb
|
gpl-3.0
|
from matplotlib import pyplot
import espressomd
import numpy
espressomd.assert_features("LENNARD_JONES")
# system parameters (10000 particles)
box_l = 10.7437
density = 0.7
# interaction parameters (repulsive Lennard-Jones)
lj_eps = 1.0
lj_sig = 1.0
lj_cut = 1.12246
lj_cap = 20
# integration parameters
system = espressomd.System(box_l=[box_l, box_l, box_l])
system.time_step = 0.0001
system.cell_system.skin = 0.4
system.thermostat.set_langevin(kT=1.0, gamma=1.0, seed=42)
# warmup integration (with capped LJ potential)
warm_steps = 100
warm_n_times = 30
# do the warmup until the particles have at least the distance min_dist
min_dist = 0.9
# integration
int_steps = 1000
int_n_times = 100
#############################################################
# Setup System #
#############################################################
# interaction setup
system.non_bonded_inter[0, 0].lennard_jones.set_params(
epsilon=lj_eps, sigma=lj_sig,
cutoff=lj_cut, shift="auto")
system.force_cap = lj_cap
# particle setup
volume = box_l * box_l * box_l
n_part = int(volume * density)
for i in range(n_part):
system.part.add(id=i, pos=numpy.random.random(3) * system.box_l)
act_min_dist = system.analysis.min_dist()
#############################################################
# Warmup Integration #
#############################################################
# set LJ cap
lj_cap = 20
system.force_cap = lj_cap
# warmup integration loop
i = 0
while (i < warm_n_times and act_min_dist < min_dist):
system.integrator.run(warm_steps)
# warmup criterion
act_min_dist = system.analysis.min_dist()
i += 1
# increase LJ cap
lj_cap = lj_cap + 10
system.force_cap = lj_cap
#############################################################
# Integration #
#############################################################
# remove force capping
lj_cap = 0
system.force_cap = lj_cap
def main():
for i in range(int_n_times):
print("\rrun %d at time=%.0f " % (i, system.time), end='')
system.integrator.run(int_steps)
print('\rSimulation complete')
main()
"""
Explanation: Visualization
Introduction
When you are running a simulation, it is often useful to see what is going on
by visualizing particles in a 3D view or by plotting observables over time.
That way, you can easily determine things like whether your choice of parameters
has led to a stable simulation or whether your system has equilibrated. You may
even be able to do your complete data analysis in real time as the simulation progresses.
Thanks to ESPResSo's Python interface, we can make use of standard libraries
like Mayavi or OpenGL (for interactive 3D views) and Matplotlib (for line graphs)
for this purpose. We will also use NumPy, which both of these libraries depend on,
to store data and perform some basic analysis.
Simulation
First, we need to set up a simulation.
We will simulate a simple Lennard-Jones liquid.
End of explanation
"""
matplotlib_notebook = True # toggle this off when outside IPython/Jupyter
# setup matplotlib canvas
pyplot.xlabel("Time")
pyplot.ylabel("Energy")
plot, = pyplot.plot([0], [0])
if matplotlib_notebook:
from IPython import display
else:
pyplot.show(block=False)
# setup matplotlib update function
current_time = -1
def update_plot():
i = current_time
if i < 3:
return None
plot.set_xdata(energies[:i + 1, 0])
plot.set_ydata(energies[:i + 1, 1])
pyplot.xlim(0, energies[i, 0])
pyplot.ylim(energies[:i + 1, 1].min(), energies[:i + 1, 1].max())
# refresh matplotlib GUI
if matplotlib_notebook:
display.clear_output(wait=True)
display.display(pyplot.gcf())
else:
pyplot.draw()
pyplot.pause(0.01)
# re-define the main() function
def main():
global current_time
for i in range(int_n_times):
system.integrator.run(int_steps)
energies[i] = (system.time, system.analysis.energy()['total'])
current_time = i
update_plot()
if matplotlib_notebook:
display.clear_output(wait=True)
system.time = 0 # reset system timer
energies = numpy.zeros((int_n_times, 2))
main()
if not matplotlib_notebook:
pyplot.close()
"""
Explanation: Live plotting
Let's have a look at the total energy of the simulation. We can determine the
individual energies in the system using <tt>system.analysis.energy()</tt>.
We will adapt the <tt>main()</tt> function to store the total energy at each
integration run into a NumPy array. We will also create a function to draw a
plot after each integration run.
End of explanation
"""
from espressomd import visualization
from threading import Thread
visualizer = visualization.openGLLive(system)
# alternative: visualization.mayaviLive(system)
"""
Explanation: Live visualization and plotting
To interact with a live visualization, we need to move the main integration loop into a secondary thread and run the visualizer in the main thread (note that visualization or plotting cannot be run in secondary threads). First, choose a visualizer:
End of explanation
"""
def main():
global current_time
for i in range(int_n_times):
system.integrator.run(int_steps)
energies[i] = (system.time, system.analysis.energy()['total'])
current_time = i
visualizer.update()
system.time = 0 # reset system timer
"""
Explanation: Then, re-define the <tt>main()</tt> function to run the visualizer:
End of explanation
"""
# setup new matplotlib canvas
if matplotlib_notebook:
pyplot.xlabel("Time")
pyplot.ylabel("Energy")
plot, = pyplot.plot([0], [0])
# execute main() in a secondary thread
t = Thread(target=main)
t.daemon = True
t.start()
# execute the visualizer in the main thread
visualizer.register_callback(update_plot, interval=int_steps // 2)
visualizer.start()
"""
Explanation: Next, create a secondary thread for the <tt>main()</tt> function. However,
as we now have multiple threads, and the first thread is already used by
the visualizer, we cannot call <tt>update_plot()</tt> from
the <tt>main()</tt> anymore.
The solution is to register the <tt>update_plot()</tt> function as a
callback of the visualizer:
End of explanation
"""
|
mahieke/maschinelles_lernen
|
a3/Aufgabe_3.1.ipynb
|
mit
|
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import math
from numpy import linalg as LA
import scipy as sp
import urllib2
from urllib2 import urlopen, URLError, HTTPError
import zipfile
import tarfile
import sys
import os
from skimage import data, io, filter
from PIL import Image
"""
Explanation: Machine Learning Lab (Praktikum Maschinelles Lernen) WS 15/16
<table>
<tr>
<td>Last name</td>
<td>First name</td>
<td>Student ID</td>
<td>Date</td>
</tr>
<tr>
<td>Alt</td>
<td>Tobias</td>
<td>282385</td>
<td>18.12.2015</td>
</tr>
<tr>
<td>Hieke</td>
<td>Manuel</td>
<td>283912</td>
<td>08.01.2016</td>
</tr>
</table>
<b>Exercise 3.1 - Perceptron</b>
End of explanation
"""
# Function for creating the toy data set
#-----------------------------------------------------------------------------
# loc : float Mean (“centre”) of the distribution.
# scale : float Standard deviation (spread or “width”) of the distribution.
# size : int or tuple of ints, optional
# numpy.random.normal(loc=0.0, scale=1.0, size=None)
def createToyDataSet(ypos, numberOfData, clusterDistance, varianz):
    mu = clusterDistance         # loc parameter -> distance of the clusters from the origin
    sigma = np.sqrt(varianz)     # scale parameter -> cluster width
    sizeOfData = numberOfData    # number of data points per cluster
    # np.vstack -> stack arrays in sequence vertically
    X = np.vstack([np.random.normal(ypos + mu, sigma, (sizeOfData, 2)),
                   np.random.normal(ypos - mu, sigma, (sizeOfData, 2))])
    return X
# Graphical representation
#-------------------------------------------------------------------------
def plotToyData(data, mu, varianz):
    fig, ax = plt.subplots(figsize=(14, 6))
    # histogram of the x/y values
    ax = plt.subplot(1, 2, 1)
    ax.set_title('x/y histogram')
    count, bins, ignored = ax.hist(data, 30, normed=True)
    ax.plot(bins, 1 / (np.sqrt(varianz) * np.sqrt(2 * np.pi)) * np.exp(-(bins) ** 2 / (2 * varianz)),
            linewidth=2, color='g')
    # 1st Gaussian - cluster 1
    x_plot = np.linspace(mu - 4 * np.sqrt(varianz), mu + 4 * np.sqrt(varianz), 100)  # x-values used in the plot
    # compute the values of this density at the locations given by x_plot and plot it
    py = 1 / np.sqrt(4 * np.pi * varianz) * np.exp(-0.5 * (x_plot - mu) ** 2 / varianz)
    ax.plot(x_plot, py)
    # 2nd Gaussian - cluster 2
    x_plot = np.linspace(-mu - 4 * np.sqrt(varianz), -mu + 4 * np.sqrt(varianz), 100)
    py = 1 / np.sqrt(4 * np.pi * varianz) * np.exp(-0.5 * (x_plot + mu) ** 2 / varianz)
    ax.plot(x_plot, py)
    # scatter plot of the two clusters
    ax = plt.subplot(1, 2, 2)
    colors = np.hstack([np.zeros(len(data) / 2), np.ones(len(data) / 2)])
    plt.scatter(data[:, 0], data[:, 1], c=colors, edgecolors='none', cmap=plt.cm.Accent)
# Generate the data (parameters adjustable as desired)
#---------------------------------------------------------------------------
varianz = 0.5        # cluster width (variance)
numberOfData = 200   # number of new data points per cluster
mean = 1.5           # distance of the clusters from the origin
ypos = 0             # shift along the y axis
toyData = createToyDataSet(ypos, numberOfData, mean, varianz)
plotToyData(toyData, mean, varianz)
# Create the corresponding label vector with the values ±1
#-------------------------------------------------------------------
labelvector = np.ones(len(toyData))
labelvector[len(toyData)/2:] *= -1
print 'toyData shape :', np.shape(toyData), ' class 1: ', toyData[0][0], ' class 2: ', toyData[numberOfData][0]
print 'label vector shape:', np.shape(labelvector), ' class 1: ', labelvector[0], '\t\tclass 2: ', labelvector[numberOfData]
"""
Explanation: <b>Part A - Toy Dataset</b>
End of explanation
"""
|
rringham/deep-learning-notebooks
|
udacity/1_notmnist.ipynb
|
mit
|
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
%matplotlib inline
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
"""
Explanation: Deep Learning
Assignment 1
The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.
This notebook uses the notMNIST dataset to be used with python experiments. This dataset is designed to look like the classic MNIST dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
End of explanation
"""
url = 'http://yaroslavvb.com/upload/notMNIST/'
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
        'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
"""
Explanation: First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
End of explanation
"""
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
"""
Explanation: Extract the dataset from the compressed .tar.gz file.
This should give you a set of directories, labelled A through J.
End of explanation
"""
Image("notMNIST_large/a/emxhZGRpLnR0Zg==.png")
"""
Explanation: Problem 1
Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.
End of explanation
"""
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
image_index = 0
print(folder)
for image in os.listdir(folder):
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[image_index, :, :] = image_data
image_index += 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
num_images = image_index
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
"""
Explanation: Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.
We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road.
A few images might not be readable; we'll just skip them.
End of explanation
"""
with open("notMNIST_large/A.pickle", 'rb') as a_pickle:
a_set = pickle.load(a_pickle)
print("images in a_set: %d" % len(a_set))
plt.imshow(a_set[234])
"""
Explanation: Problem 2
Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.
End of explanation
"""
def pickled_dataset_size(data_file):
with open(data_file, 'rb') as current_pickle:
pickled_set = pickle.load(current_pickle)
print("images in %s: %d" % (data_file, len(pickled_set)))
# large data sets
pickled_dataset_size("notMNIST_large/A.pickle")
pickled_dataset_size("notMNIST_large/B.pickle")
pickled_dataset_size("notMNIST_large/C.pickle")
pickled_dataset_size("notMNIST_large/D.pickle")
pickled_dataset_size("notMNIST_large/E.pickle")
pickled_dataset_size("notMNIST_large/F.pickle")
pickled_dataset_size("notMNIST_large/G.pickle")
pickled_dataset_size("notMNIST_large/H.pickle")
pickled_dataset_size("notMNIST_large/I.pickle")
pickled_dataset_size("notMNIST_large/J.pickle")
# small data sets
pickled_dataset_size("notMNIST_small/A.pickle")
pickled_dataset_size("notMNIST_small/B.pickle")
pickled_dataset_size("notMNIST_small/C.pickle")
pickled_dataset_size("notMNIST_small/D.pickle")
pickled_dataset_size("notMNIST_small/E.pickle")
pickled_dataset_size("notMNIST_small/F.pickle")
pickled_dataset_size("notMNIST_small/G.pickle")
pickled_dataset_size("notMNIST_small/H.pickle")
pickled_dataset_size("notMNIST_small/I.pickle")
pickled_dataset_size("notMNIST_small/J.pickle")
"""
Explanation: Problem 3
Another check: we expect the data to be balanced across classes. Verify that.
End of explanation
"""
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
print('label: %s' % label)
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
"""
Explanation: Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune train_size as needed. The labels will be stored into a separate array of integers 0 through 9.
Also create a validation dataset for hyperparameter tuning.
End of explanation
"""
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
"""
Explanation: Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
End of explanation
"""
import random

def print_random_examples(name, labels, size, count=3):
  # random.randint is inclusive on both ends, so use size - 1 to stay within bounds
  for _ in range(count):
    index = random.randint(0, size - 1)
    print('%s example: %d, index: %d' % (name, labels[index], index))
  print()

print_random_examples('training', train_labels, train_size)
print_random_examples('validation', valid_labels, valid_size)
print_random_examples('test', test_labels, test_size)
"""
Explanation: Problem 4
Convince yourself that the data is still good after shuffling!
End of explanation
"""
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
"""
Explanation: Finally, let's save the data for later reuse:
End of explanation
"""
import time
import hashlib
t1 = time.time()
train_hashes = [hashlib.sha1(x).digest() for x in train_dataset]
valid_hashes = [hashlib.sha1(x).digest() for x in valid_dataset]
test_hashes = [hashlib.sha1(x).digest() for x in test_dataset]
print("train_hashes count: %d" % len(train_hashes))
print("valid_hashes count: %d" % len(valid_hashes))
print("test_hashes count: %d" % len(test_hashes))
valid_in_train = np.in1d(valid_hashes, train_hashes)
test_in_train = np.in1d(test_hashes, train_hashes)
test_in_valid = np.in1d(test_hashes, valid_hashes)
unique_train_count = len(train_dataset) - (valid_in_train.sum() + test_in_train.sum())
print("unique train samples: %d out of %d total samples\n" % (unique_train_count, len(train_dataset)))
print("valid_in_train count: %d" % len(valid_in_train))
print("test_in_train count: %d" % len(test_in_train))
print("test_in_valid count: %d" % len(test_in_valid))
valid_keep = ~valid_in_train
test_keep = ~(test_in_train | test_in_valid)
valid_dataset_clean = valid_dataset[valid_keep]
valid_labels_clean = valid_labels[valid_keep]
test_dataset_clean = test_dataset[test_keep]
test_labels_clean = test_labels[test_keep]
t2 = time.time()
print("Time: %0.2fs\n" % (t2 - t1))
print("valid -> train overlap %d samples" % valid_in_train.sum())
print("test -> train overlap %d samples" % test_in_train.sum())
print("test -> valid overlap %d samples\n" % test_in_valid.sum())
print("valid_dataset_clean size: %d" % len(valid_dataset_clean))
print("valid_labels_clean size: %d" % len(valid_labels_clean))
print("test_dataset_clean size: %d" % len(test_dataset_clean))
print("test_labels_clean size: %d" % len(test_labels_clean))
# write clean dataset
clean_pickle_file = 'notMNIST_clean.pickle'
try:
f = open(clean_pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset_clean,
'valid_labels': valid_labels_clean,
'test_dataset': test_dataset_clean,
'test_labels': test_labels_clean,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', clean_pickle_file, ':', e)
raise
statinfo = os.stat(clean_pickle_file)
print('Compressed clean pickle size:', statinfo.st_size)
"""
Explanation: Problem 5
By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.
Measure how much overlap there is between training, validation and test samples.
Optional questions:
- What about near duplicates between datasets? (images that are almost identical; one possible approach is sketched in the cell after this one)
- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.
End of explanation
"""
def train_and_evaluate(sample_count, tr_set, tr_labels, v_set, v_labels, t_set, t_labels):
    print('model trained with %d samples' % sample_count)
    (train_samples, train_width, train_height) = tr_set.shape
    (valid_samples, valid_width, valid_height) = v_set.shape
    (test_samples, test_width, test_height) = t_set.shape
    X_valid = np.reshape(v_set, (valid_samples, valid_width*valid_height))
    X_test = np.reshape(t_set, (test_samples, test_width*test_height))
    # use the passed-in training set and labels rather than the global arrays
    X = np.reshape(tr_set, (train_samples, train_width*train_height))[0:sample_count]
    Y = tr_labels[0:sample_count]
    model = LogisticRegression()
    model.fit(X, Y)
    train_pred = model.predict(X)
    train_pred_match = train_pred == Y
    valid_pred = model.predict(X_valid)
    valid_pred_match = valid_pred == v_labels
    test_pred = model.predict(X_test)
    test_pred_match = test_pred == t_labels
    print(" train set prediction rate: %f" % (train_pred_match.sum() / float(len(Y))))
    print(" validation set prediction rate: %f" % (valid_pred_match.sum() / float(len(v_labels))))
    print(" test set prediction rate: %f\n" % (test_pred_match.sum() / float(len(t_labels))))
train_and_evaluate(50, train_dataset, train_labels, valid_dataset_clean, valid_labels_clean, test_dataset_clean, test_labels_clean)
train_and_evaluate(100, train_dataset, train_labels, valid_dataset_clean, valid_labels_clean, test_dataset_clean, test_labels_clean)
train_and_evaluate(1000, train_dataset, train_labels, valid_dataset_clean, valid_labels_clean, test_dataset_clean, test_labels_clean)
# train_and_evaluate(5000, train_dataset, train_labels, valid_dataset_clean, valid_labels_clean, test_dataset_clean, test_labels_clean)
# train_and_evaluate(200000, train_dataset, train_labels, valid_dataset_clean, valid_labels_clean, test_dataset_clean, test_labels_clean)
"""
Explanation: Problem 6
Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.
Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.
Optional question: train an off-the-shelf model on all the data!
End of explanation
"""
|
ikegami-yukino/madoka-python
|
Benchmark.ipynb
|
bsd-3-clause
|
import collections
import subprocess
import itertools
import os
import time
import madoka
import numpy as np
import redis
ALPHANUM = 'abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ0123456789'
NUM_ALPHANUM_COMBINATION = 238328
zipf_array = np.random.zipf(1.5, NUM_ALPHANUM_COMBINATION)
def python_memory_usage():
return int(subprocess.getoutput('ps up %s' % os.getpid()).split()[15])
def redis_memory_usage():
lines = subprocess.getoutput('ps').splitlines()
for line in lines:
if 'redis-server' in line:
pid = line.split()[0]
break
return int(subprocess.getoutput('ps up %s' % pid).split()[15])
def count(counter):
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
counter[chars] = int(zipf_array[i])
return counter
def benchmark(counter, start_mem_usage):
counter = count(counter)
end_mem_usage = python_memory_usage()
diff = end_mem_usage - start_mem_usage
print('memory consumption is {:,d} KB'.format(diff))
return counter
def redis_benchmark():
db = redis.Redis()
db.flushall()
start_mem_usage = redis_memory_usage()
with db.pipeline() as pipe:
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
pipe.set(chars, int(zipf_array[i]))
pipe.execute()
end_mem_usage = redis_memory_usage()
diff = end_mem_usage - start_mem_usage
print('memory consumption is {:,d} KB'.format(diff))
print('collections.Counter')
start_mem_usage = python_memory_usage()
start_time = time.process_time()
counter = collections.Counter()
benchmark(counter, start_mem_usage)
end_time = time.process_time()
print('Processing Time is %5f sec.' % (end_time - start_time))
del counter
print('*' * 30)
print('madoka.Sketch')
start_mem_usage = python_memory_usage()
start_time = time.process_time()
sketch = madoka.Sketch()
benchmark(sketch, start_mem_usage)
end_time = time.process_time()
print('Processing Time is %5f sec.' % (end_time - start_time))
del sketch
print('*' * 30)
print('Redis')
start_time = time.process_time()
redis_benchmark()
end_time = time.process_time()
print('Processing Time is %5f sec.' % (end_time - start_time))
"""
Explanation: Memory consumption
End of explanation
"""
sketch = madoka.Sketch()
diffs = []
for (i, chars) in enumerate(itertools.product(ALPHANUM, repeat=3)):
chars = ''.join(chars)
sketch[chars] = int(zipf_array[i])
diff = abs(sketch[chars] - int(zipf_array[i]))
if diff > 0:
diffs.append(diff / int(zipf_array[i]) * 100)
else:
diffs.append(0)
print(np.average(diffs))
"""
Explanation: Counting error rate
End of explanation
"""
|
stevetjoa/stanford-mir
|
why_mir.ipynb
|
mit
|
ipd.display( ipd.YouTubeVideo("grL4JMs0hDc", start=75) )
"""
Explanation: ← Back to Index
What is Music Information Retrieval?
While you listen to these excerpts, name as many of their musical characteristics as you can. Can you name the genre? tempo? instruments? mood? time signature? key signature? chord progression? tuning frequency? song structure?
End of explanation
"""
ipd.display( ipd.YouTubeVideo("PrVu9WKs498", start=8) )
"""
Explanation: Another:
End of explanation
"""
ipd.display( ipd.YouTubeVideo("Cxj8vSS2ELU", start=540) )
"""
Explanation: One more:
End of explanation
"""
ipd.display( ipd.YouTubeVideo("ECvinPjmBVE", start=6) )
"""
Explanation: What is MIR?
Here is a sampling of tasks found in music information retrieval:
fingerprinting
cover song detection
genre recognition
transcription
recommendation
symbolic melodic similarity
mood
source separation
instrument recognition
pitch tracking
tempo estimation
score alignment
song structure/form
beat tracking
key detection
query by humming
Why MIR?
discover, organize, monetize media collections
search ("find me something that sounds like this") songs, loops, speech, environmental sounds, sound effects
workflows in consumer products through machine hearing
automatic control of software and mobile devices
Commercial Applications
Example: RiffStation
End of explanation
"""
ipd.display( ipd.YouTubeVideo("DiW6XVFeFgo", start=60))
"""
Explanation: Example: Melodyne
End of explanation
"""
ipd.display( ipd.YouTubeVideo("A0cfugW4DbE", start=150))
"""
Explanation: Example: Auto-Tune
End of explanation
"""
ipd.display( ipd.YouTubeVideo("TG-ivjyyYhM", start=35))
"""
Explanation: Example: Key Detection and Auto-harmonization with iZotope Nectar 2
End of explanation
"""
|
sympy/scipy-2017-codegen-tutorial
|
notebooks/cython-examples.ipynb
|
bsd-3-clause
|
import numpy as np
x = np.random.randn(10000)
"""
Explanation: Writing Cython
In this notebook, we'll take a look at how to implement a simple function using Cython. The operation we'll implement is the first-order diff, which takes in an array of length $n$:
$$\mathbf{x} = \begin{bmatrix} x_1 \\ x_2 \\ \vdots \\ x_n \end{bmatrix}$$
and returns the following:
$$\mathbf{y} = \begin{bmatrix} x_2 - x_1 \\ x_3 - x_2 \\ \vdots \\ x_n - x_{n-1} \end{bmatrix}$$
First we'll import everything we'll need and generate some data to work with.
End of explanation
"""
def py_diff(x):
n = x.size
y = np.zeros(n-1)
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit py_diff(x)
"""
Explanation: Below is a simple implementation using pure Python (no NumPy). The %timeit magic command lets us see how long it takes the function to run on the 10,000-element array defined above.
End of explanation
"""
%load_ext cython
%%cython
import numpy as np
def cy_diff_naive(x):
n = x.size
y = np.zeros(n-1)
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff_naive(x)
"""
Explanation: Now use the exact same function body but add the %%cython magic at the top of the code cell. How much of a difference does simply pre-compiling make?
End of explanation
"""
%%cython
import numpy as np
def cy_diff(double[::1] x):
cdef int n = x.size
cdef double[::1] y = np.zeros(n-1)
cdef int i
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff(x)
"""
Explanation: So it didn't make much of a difference. That's because Cython really shines when you specify data types. We do this by annotating the variables used in the function with cdef <type> .... Let's see how much this improves things.
Note: array types (like for the input arg x) can be declared using the memoryview syntax double[::1] or using np.ndarray[cnp.float64_t, ndim=1].
End of explanation
"""
%%cython
from cython import wraparound, boundscheck
import numpy as np
@boundscheck(False)
@wraparound(False)
def cy_diff2(double[::1] x):
cdef int n = x.size
cdef double[::1] y = np.zeros(n-1)
cdef int i
for i in range(n-1):
y[i] = x[i+1] - x[i]
return y
%timeit cy_diff2(x)
"""
Explanation: That made a huge difference! There are a couple more things we can do to speed up our diff implementation, including disabling some safety checks. The combination of disabling bounds checking (making sure you don't try access an index of an array that doesn't exist) and disabling wraparound (disabling use of negative indices) can really improve things when we are sure neither condition will occur. Let's try that.
End of explanation
"""
def np_diff(x):
return np.diff(x)
%timeit np_diff(x)
"""
Explanation: Finally, let's see how NumPy's diff performs for comparison.
End of explanation
"""
|
CNS-OIST/STEPS_Example
|
user_manual/source/API_2/Interface_Tutorial_4_Complexes.ipynb
|
gpl-2.0
|
import steps.interface
from steps.model import *
mdl = Model()
with mdl:
A0, A1, A2 = SubUnitState.Create()
ASU = SubUnit.Create([A0, A1, A2])
CA = Complex.Create([ASU, ASU, ASU, ASU], statesAsSpecies=True)
"""
Explanation: Multi-state complexes
<div class="admonition note">
**Topics**: Complexes, complex reactions.
</div>
In this chapter, we will introduce a concise way of declaring reactions between molecules that can be in a high number of distinct functional states. We will use the Complex class and its subconstituents SubUnits and SubUnitStates to specify the state space of these molecules.
We will first introduce Complexes in a general way and compare them to other forms of rule-based modeling frameworks. We will then present their use in an IP3 receptor example that builds on the one used in a previous chapter.
Complex declaration
Complexes are composed of an arbitrary number of subunits that can themselves be in an arbitrary number of states. In this guide, we will represent complexes as collections of geometric shapes, like in the following examples:
<img src="images/complex_examples.png"/>
Each complex consists of a list of subunits, represented by different geometrical shapes in the second column of the figure. These subunits can be in various states (represented by colors), as shown in the third column. Specific instances of complexes can thus be in various states, resulting from all the possible combinations of subunit states. The last column only shows a few examples of such states for each complex.
In order to declare a complex, we first need to declare all its subunits along with their subunit states. We then need to provide a list of subunits that the complex is made of. Consider the following example, corresponding to the first row of the figure:
End of explanation
"""
def printStates(cs):
print(f'{len(cs)} states')
for state in cs:
print(state)
printStates(CA[...])
"""
Explanation: As usual, we first need to import the required modules and define a Model object. The creation of subunit states and subunits is then straightforward with the SubUnitState and SubUnit classes. SubUnitState behaves like Species: we do not need to specify any parameters for their creation. SubUnit takes a list of SubUnitStates as a parameter. Finally, the complex is created with the Complex class, which takes a list of SubUnit objects as argument as well as the statesAsSpecies keyword argument that specifies that all states of the complex should be automatically declared in STEPS as Species. This keyword parameter is required in the current version of STEPS since multi-state complexes are not natively supported yet.
Note that the list of SubUnit objects that is given to the Complex constructor can contain duplicates, since complexes can be composed of several identical subunits. In addition, the order in which the subunits are given is important when these subunits are not identical, as it will later be used to identify specific subunits in a complex. In our graphical representations, we will assume that the first element of this list is the subunit in the top right corner of the complex and that the remaining subunits are read in clockwise order from there.
We can then list all the states that this complex can be in with:
End of explanation
"""
with mdl:
B0, B1, R0, R1 = SubUnitState.Create()
BSU, RSU = SubUnit.Create([B0, B1], [R0, R1])
CB = Complex.Create([BSU, RSU, BSU, RSU], statesAsSpecies=True, order=RotationalSymmetryOrdering)
"""
Explanation: We used the square bracket notation after the complex name to access its states: CA[...]. This notation returns an object that describes a set of states of the complex; when it is used only with the ellipsis ... object, this corresponds to all possible states of the complex. We will see how to use this notation later in the chapter.
Note that instead of the $3^4 = 81$ states that should result from all possible combinations of 3 subunit states for 4 subunits, we only have 15 states. This is due to the fact that, by default, complex states do not take the order of subunits into account. The state CA_A0_A0_A0_A1 is equivalent to the state CA_A0_A0_A1_A0 since they are both composed of 3 subunits in state A0 and one subunit in state A1. Only one of the four equivalent states is conserved and declared in STEPS as a Species.
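As a quick aside (my own check, not part of the original text): with unordered states, the count is the number of multisets of size 4 drawn from the 3 subunit states, i.e. $\binom{4 + 3 - 1}{4} = \binom{6}{4} = 15$, which matches the output above.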
Complex ordering
This behavior is however not always desirable as neighboring relations between subunits can sometimes be considered important. The Complex constructor can thus take an additional keyword argument order. This argument makes it possible to specify groups of complex states that will be considered equivalent. STEPS comes with 3 built-in choices for this parameter: NoOrdering, the default ; StrongOrdering, that considers all possible ordered states ; and RotationalSymmetryOrdering, that we will explain below. It is also possible to implement a custom order function, more details are given in the documentation.
The following figure shows how states are grouped in the 3 order functions for a complex with 4 identical subunits with 2 states:
<img src="images/complex_states_1.png"/>
Columns correspond to the number of subunits in state S1 (dark blue), starting with all subunits in state S0 (light blue). The last two columns are omitted since they are identical to the first two if states are inverted. Grey lines represent which states are grouped together under the different ordering functions. The first row contains all the possible ordered states and the last one contains the unordered states. Since subunits can only be in two states, there are only 5 states under the NoOrdering function: all subunits in S0, one subunit in S1, two in S1, etc. The RotationalSymmetryOrdering function is a bit trickier: it groups all states that are identical under rotation. When only one subunit is in S1, all states can be made equivalent with quarter turn rotations. This is not the case when two subunits are in S1: there are then two distinct states that cannot be made identical with quarter turn rotations, one in which the two subunits in S1 are adjacent, and another in which they are opposite. Note that this rotational symmetry still takes into account handedness:
<img src="images/complex_states_2.png"/>
In the above figure, 4 identical subunits can be in 3 different states and we only consider the case in which two subunits are in S0 (light blue), one in S1 (dark blue) and one in S2 (teal). Note that under rotational symmetry, there are two complex states in which S1 and S2 are adjacent but these states are not identical: the left one has S1 then S2 while the other has S2 then S1 (in clockwise direction). When complexes contain different subunits, and depending in which order the subunits are declared in the complex, it becomes less likely for complex states to be rotationaly equivalent:
<img src="images/complex_states_3.png"/>
We can declare the complex described in this last figure in STEPS with the rotational symmetry ordering function:
End of explanation
"""
printStates(CB[...])
"""
Explanation: We can then print all the corresponding states:
End of explanation
"""
with mdl:
S0, S1, S2, T0, T1, T2 = SubUnitState.Create()
SSU, TSU = SubUnit.Create([S0, S1, S2], [T0, T1, T2])
CC = Complex.Create([SSU, SSU, TSU], statesAsSpecies=True, order=StrongOrdering)
"""
Explanation: We get 10 states, as expected from the figure. Again, we used the square bracket notation CB[...] to access complex states ; in the next section, we describe how this notation works.
Complex selectors
A complex selector is an instance of the ComplexSelector class and is created when using the square bracket notation on a complex. Simply put, the square bracket notation allows to slice the complex state space in a way that is similar to array slicing in numpy (see the numpy documentation for more details). As we will see later, these complex selectors can then be used for declaring reactions that apply to a subset of complex states without having to enumerate all the states. The following figure shows how various square bracket notations select various part of the complex state space:
<img src="images/complex_selectors.png"/>
For simplicity of representation, the complex used in these examples has 3 subunits: two identical subunits S and one subunit T, both these subunits can be in 3 different states. The same principles of course apply for complexes with more than 3 subunits. While in these examples, the full ordered state space is represented, the complex states selected by a complex selectors will depend on the specific ordering function used during the creation of the Complex. The states are organized spatially as if they were part of a three dimensional matrix, to make the analogy with numpy slicing easier to see.
Let us declare this complex in STEPS and evaluate these complex selectors:
End of explanation
"""
printStates(CC[...])
"""
Explanation: The first example A corresponds to the complex selector we used so far; it returns all the possible states of the complex. Like for numpy slicing, the easiest way to select all 'dimensions' from the complex is to use a colon : for each dimension, meaning we want to select everything in this 'dimension'. The complex has 3 subunits / 'dimensions' so we need 3 colons in the square bracket: CC[:, :, :]. The order of dimensions is the same as the one used when declaring the complex. The ellipsis object ... can be used, like in numpy, to avoid repeating colons when the number of subunits / dimensions is high. It is equivalent to typing comma separated colons for the remaining dimensions. Note however that only one ellipsis object can be used in a square bracket notation since using several could lead to ambiguities (in CC[..., S0, ...] it would not be clear which dimension should correspond to S0). If no ellipsis object is used, the number of comma separated values should always match the number of subunits in the complex.
We can thus get all $3^3 = 27$ complex states with:
End of explanation
"""
printStates(CC[:, :, T1])
"""
Explanation: Examples B and C slice the state space in one dimension. The complex selector in example B has colons for the first two dimensions, meaning all subunit states are selected, and the last one has the T1 SubUnitState object, indicating that only complex states in which the third subunit is in state T1 should be selected. Again, the two colons can be replaced by an ellipsis object ....
End of explanation
"""
printStates(CC[:, S1, T2])
"""
Explanation: Example D specifies two out of three dimensions: it selects all states in which the second S subunit is in state S1 and the T subunit is in state T1. Note that if all subunits / 'dimensions' are uniquely specified, the square bracket notation returns a ComplexState instead of a ComplexSelector. As expected, example D returns 3 states:
End of explanation
"""
printStates(CC[S0 | S2, :, T1])
printStates(CC[~S1, :, T1])
"""
Explanation: Example E combines two SubUnitStates with the union operator | in order to select states for which the first subunit is either in state S0 or S2. Alternatively, since there are only 3 possible states for this subunit, we can use the negation operator ~S1 to select all subunit states that are not S1:
End of explanation
"""
printStates(CC[:, :, T1] | CC[:, S1, :])
"""
Explanation: Note that both | and ~ operators return a SubUnitSelector object (see documentation) that represents a subset of the SubUnitStates associated to a given SubUnit.
Examples F and G illustrate the possibility of combining complex selectors. Example F shows the intersection between two complex selectors with the & operator while example G shows the union with the | operator. In both cases the result object is a complex selector itself and can thus be further combined with other complex selectors. As expected, the union from example G yields 15 states:
End of explanation
"""
printStates(CC[...] << S0)
"""
Explanation: Note that, while example F can also be written as a single complex selector, example G cannot.
Example H illustrates the use of the << operator to inject subunit states in a complex selector. CC[...] << S0 should be read 'inject a subunit state S0 in any available position'. Since, in CC[...], there are 2 free positions that can be in state S0, it is equivalent to CC[S0, :, :] | CC[:, S0, :]. It is not very useful in our example but becomes convenient for bigger complexes. Note that the right hand side of the << operator can also be a SubUnitSelector: CC[...] << (S0 | S1). Finally, several subunit states can be injected at once with e.g. CC[...] << 2 * S0. Detailed explanations and examples are available in the documentation. In example H we have:
End of explanation
"""
%reset -f
"""
Explanation: Complexes and rule based modeling
Although STEPS complexes offer capabilities similar to those of rule-based modeling frameworks like bionetgen, they are not completely equivalent. STEPS complexes require the explicit declaration of all complexes before any simulation takes place. In contrast, bionetgen allows the creation of new complexes through the binding of smaller complexes. Thus, STEPS complexes are more suited to cases in which the complex has a set structure and its state space is known before simulation.
Having introduced the main concepts relative to the Complex class, we can now use multi-state complexes in a full example. We first reset the jupyter kernel to start from scratch:
End of explanation
"""
import steps.interface
from steps.model import *
from steps.geom import *
from steps.sim import *
from steps.saving import *
from steps.rng import *
nAvog = 6.02214076e23
nbIP3R = 5
nbPumps = 5
c0 = 2e-6
c1 = 0.185
cytVol = 1.6572e-19
ERVol = cytVol * c1
a1 = 400e6
a2 = 0.2e6
a3 = 400e6
a4 = 0.2e6
a5 = 20e6
b1 = 0.13e-6 * a1
b2 = 1.049e-6 * a2
b3 = 943.4e-9 * a3
b4 = 144.5e-9 * a4
b5 = 82.34e-9 * a5
v1 = 6
v2 = 0.11
v3 = 0.9e-6
k3 = 0.1e-6
rp = v3 * 1e3 * cytVol * nAvog / nbPumps / 2
rb = 10 * rp
rf = (rb + rp) / (k3 ** 2)
kip3 = 1e3 * nAvog * ERVol * v1 / nbIP3R
"""
Explanation: IP3 receptor model
In this section, we will implement the IP3 receptor model described in De Young and Keizer, A single-pool inositol 1, 4, 5-trisphosphate-receptor-based model for agonist-stimulated oscillations in Ca2+ concentration, PNAS, 1992. This model relies on a Markov chain description of IP3R subunits in which each of the 4 identical subunits has 3 binding sites, one for IP3 and two for Ca2+, one activating, the other inactivating. This results in $2^3 = 8$ possible states per subunit and the whole channel is deemed open if at least three of the subunits are in the state in which one IP3 and the activating Ca2+ are bound.
We first import the required modules and declare the parameters as specified in the original article:
End of explanation
"""
mdl = Model()
r = ReactionManager()
with mdl:
Ca, IP3, ERPump, ERPump2Ca = Species.Create()
R000, R100, R010, R001, R110, R101, R111, R011 = SubUnitState.Create()
IP3RSU = SubUnit.Create([R000, R100, R010, R001, R110, R101, R111, R011])
IP3R = Complex.Create([IP3RSU, IP3RSU, IP3RSU, IP3RSU], statesAsSpecies=True)
ssys = SurfaceSystem.Create()
"""
Explanation: We then declare the model, the species and most importantly, the complex that we will use to simulate IP3 receptors. The following figure describes the IP3R complex:
<img src="images/complex_ip3_structure.png"/>
As explained before, it is composed of 4 identical subunits which can be in 8 distinct states; we name the states according to what is bound to the subunit: for state $ijk$, $i$ is 1 if IP3 is bound, $j$ is 1 if the activating Ca2+ is bound, and $k$ is 1 if the inactivating Ca2+ is bound. State $110$ thus corresponds to the open state. Below the complex and its subunits, we represented the reaction network that governs the transitions between the subunit states. Each transition involves the binding or unbinding of either IP3 or Ca2+.
We then proceed to declaring the IP3R complex:
End of explanation
"""
len(IP3R[...])
"""
Explanation: The declaration of the complex itself follows what we saw in the first part of this chapter. We can count the number of distinct complex states:
End of explanation
"""
with mdl, ssys:
# Ca2+ passing through open IP3R channel
IP3R_1 = IP3R.get()
IP3R_1[R110, R110, R110, :].s + Ca.i <r['caflux']> IP3R_1[R110, R110, R110, :].s + Ca.o
r['caflux'].K = kip3, kip3
"""
Explanation: Note that, since the default ordering is NoOrdering, this is much lower than the $8^4=4096$ states that could be expected if StrongOrdering was used instead.
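As an aside (my own check, not part of the original text): with NoOrdering, the number of distinct IP3R states is the number of multisets of size 4 drawn from the 8 subunit states, i.e. $\binom{8 + 4 - 1}{4} = \binom{11}{4} = 330$, which is indeed far smaller than $8^4 = 4096$.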
The next step is to declare all the reactions involving the IP3R channel. Most of them correspond to IP3 and Ca2+ binding / unbinding that changes the states of subunits. In addition, we also need to write a reaction that will account for the Ca2+ flux from the endoplasmic reticulum (ER) through open IP3R channels. In the following section, we will see how to declare all these reactions.
Reactions involving complex states
The simplest way to declare a reaction involving a complex is to simply use a complex state as a reactant in a normal reaction. For example, if we wanted to only allow Ca2+ through the IP3R channel when the four subunits are in the open state, we would write:
python
with mdl, ssys:
IP3R[R110, R110, R110, R110].s + Ca.i <r[1]> IP3R[R110, R110, R110, R110].s + Ca.o
r[1].K = kip3, kip3
Both left hand side and right hand side of the reaction contain the IP3R complex in a fully specified state. In this case, no changes are made to the complex but there are a lot of cases in which changes to the complex are required. Let us imagine for example that some species X can react with the fully open IP3R channel and force the unbinding of IP3 and Ca from one of its subunits. We would have the following reaction:
python
with mdl, ssys:
IP3R[R110, R110, R110, R110].s + X.o >r[1]> X.o + IP3R[R110, R110, R110, R000].s + Ca.o + IP3.o
r[1].K = rate
Note that the specific position of the subunit that is changed does not matter since we declared the complex using the default NoOrdering setting. Complex states are thus used in reactions as if they were Species; this is convenient when only a single state of the complex can undergo a specific reaction but it quickly becomes impractical when several complex states can undergo the same reaction.
If, as is the case in the original De Young Keizer model, the IP3R channel opens when at least 3 subunits are in state R110, we would need to declare 8 reactions involving fully specified complex states:
python
with mdl, ssys:
IP3R[R110, R110, R110, R000].s + Ca.i <r[1]> IP3R[R110, R110, R110, R000].s + Ca.o
IP3R[R110, R110, R110, R001].s + Ca.i <r[2]> IP3R[R110, R110, R110, R001].s + Ca.o
...
IP3R[R110, R110, R110, R110].s + Ca.i <r[7]> IP3R[R110, R110, R110, R110].s + Ca.o
IP3R[R110, R110, R110, R111].s + Ca.i <r[8]> IP3R[R110, R110, R110, R111].s + Ca.o
r[1].K = kip3, kip3
...
r[8].K = kip3, kip3
This case needs to be tackled using complex selectors instead.
Reactions involving complex selectors
In order to group all these reactions in a single one, we could use the complex selector IP3R[R110, R110, R110, :] that encompasses all of the above 8 states. We would intuitively try to declare the reaction like so:
python
with mdl, ssys:
IP3R[R110, R110, R110, :].s + Ca.i <r[1]> IP3R[R110, R110, R110, :].s + Ca.o
r[1].K = kip3, kip3
<div class="warning alert alert-block alert-danger">
<b>This raises the following exception</b>: <code>Complex selector IP3R[R110, R110, R110, :] is used in the right hand side of a reaction but is not matching anything in the left hand side and is not fully defined. The reaction is ambiguous.</code>
</div>
When trying to declare the reaction in this way, STEPS throws an exception. This is due to the fact that, in general, STEPS does not know whether the two complex selectors refer to the same specific complex or to distinct ones. It is important here to make the distinction between the complex selectors during reaction declaration and the specific complexes that will exist during a simulation. Specific complexes in a simulation are always fully defined while complex selectors are only partially specified. In an actual simulation, specific complexes thus need to be matched to these partially specified objects.
Although it might not seem very important in the reaction we tried to declare above, it becomes critical when expressing reactions between 2 complexes of the same type. Consider the following reaction using the CC complex declared in the first part of this chapter:
python
CC[:, :, T0] + CC[:, :, T1] >r[1]> CC[:, :, T1] + CC[:, :, T2]
r[1].K = 1
This reaction would also result in the same exception being thrown. This reaction happens when two complexes of the same CC type meet and when one has its T subunit in state T0 and the other in state T1, ignoring the states of the S subunits. The intuitive way to read this reaction is that the T0 complex is changed to T1 and the T1 complex is changed to T2. It could however be read in a different way: maybe the T0 complex should be changed to T2 while the T1 should remain in T1. Imagine for example the specific reaction in which the left hand side is CC[S0, S0, T0] + CC[S1, S1, T1], should the right hand side be CC[S0, S0, T1] + CC[S1, S1, T2] or CC[S0, S0, T2] + CC[S1, S1, T1]?
In order to make it explicit, STEPS thus requires the user to use identified complexes in reactions involving complex selectors. To get an identified complex in the same example, we would write:
python
CC_1 = CC.get()
CC_2 = CC.get()
CC_1[:, :, T0] + CC_2[:, :, T1] >r[1]> CC_1[:, :, T1] + CC_2[:, :, T2]
r[1].K = 1
Calling the get() method on the complex returns an object that behaves like a Complex but keeps a specific identity so that, if it appears several times in a reaction, STEPS knows that it refers to the same specific complex. The reaction is now unambiguous and no exceptions are thrown. Coming back to our IP3R channel example, we can now declare the reaction associated to the Ca2+ flux through open IP3R channels with:
End of explanation
"""
with mdl, ssys:
# IP3R subunits reaction network
with IP3R[...]:
R000.s + IP3.o <r[1]> R100.s
R000.s + Ca.o <r[2]> R010.s
R000.s + Ca.o <r[3]> R001.s
R100.s + Ca.o <r[4]> R110.s
R100.s + Ca.o <r[5]> R101.s
R010.s + IP3.o <r[6]> R110.s
R010.s + Ca.o <r[7]> R011.s
R001.s + IP3.o <r[8]> R101.s
R001.s + Ca.o <r[9]> R011.s
R110.s + Ca.o <r[10]> R111.s
R101.s + Ca.o <r[11]> R111.s
R011.s + IP3.o <r[12]> R111.s
r[1].K = a1, b1
r[2].K = a5, b5
r[3].K = a4, b4
r[4].K = a5, b5
r[5].K = a2, b2
r[6].K = a1, b1
r[7].K = a4, b4
r[8].K = a3, b3
r[9].K = a5, b5
r[10].K = a2, b2
r[11].K = a5, b5
r[12].K = a3, b3
# Ca2+ leak
Ca.i <r[1]> Ca.o
r[1].K = v2, c1 * v2
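    # Ca2+ pump: two Ca2+ on the outer ('o') side bind the pump and are released on the inner ('i') side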
2*Ca.o + ERPump.s <r[1]> ERPump2Ca.s >r[2]> 2*Ca.i + ERPump.s
r[1].K = rf, rb
r[2].K = rp
"""
Explanation: The next step is to declare the reactions associated with IP3 and Ca2+ binding / unbinding to IP3R subunits, as described in the figure.
Let us first consider all reactions linked to IP3 binding to IP3R subunits, and let us specifically focus on IP3 binding to subunits in the R000 state; the rate of these reactions will depend on the number of subunits in this state. We can tackle this by writing a complex selector that constrains the number of subunits in this state. For example, IP3R[R000, ~R000, ~R000, ~R000] corresponds to all states in which only one subunit is in the R000 state. We could thus write all IP3 binding reactions to R000 with:
python
with mdl, ssys:
IP3R_1 = IP3R.get()
IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o >r[1]> IP3R_1[R100, ~R000, ~R000, ~R000].s
IP3R_1[R000, R000, ~R000, ~R000].s + IP3.o >r[2]> IP3R_1[R100, R000, ~R000, ~R000].s
IP3R_1[R000, R000, R000, ~R000].s + IP3.o >r[3]> IP3R_1[R100, R000, R000, ~R000].s
IP3R_1[R000, R000, R000, R000].s + IP3.o >r[4]> IP3R_1[R100, R000, R000, R000].s
r[1].K = 1 * a1
r[2].K = 2 * a1
r[3].K = 3 * a1
r[4].K = 4 * a1
There are 4 reactions, corresponding to the cases in which the IP3R complex has 1, 2, 3 and 4 subunits in state R000. Since there are 4 ways to bind IP3 to an R000 subunit in a IP3R[R000, R000, R000, R000] complex state, the rate of the reaction should be 4 times the elementary rate $a_1$.
Expressing the unbinding reactions with this approach is however not trivial. Let us consider the first of these 4 reactions: making it bidirectional would be equivalent to adding the following reaction:
python
IP3R_1[R100, ~R000, ~R000, ~R000].s >r[1]> IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o
In contrast with the binding reactions, it is not clear which rate should be used for this reaction: we know that, in the left hand side, at least one subunit is in state R100, but the other subunits might also be in the same state, since this is not prevented by the ~R000 subunit selector. In order to be sure that e.g. only one subunit is in state R100, we would instead need to write:
python
IP3R_1[R100, ~R100, ~R100, ~R100].s >r[1]> IP3R_1[R000, ~R100, ~R100, ~R100].s + IP3.o
r[1].K = b1
The following tentative solution using a single bidirectional reaction will not work:
python
IP3R_1[R000, ~R000, ~R000, ~R000].s + IP3.o <r[1]> IP3R_1[R100, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s
r[1].K = a1, b1
This reaction is invalid because the right hand side is more restrictive than the left hand side. The left hand side matches e.g. IP3R[R000, R100, R100, R100] but the right hand side cannot match it. As a side note, the only way for a right hand side complex selector to be more restrictive is to constrain the subunits to a single state. In this case, there is no ambiguity and the reaction is valid.
We could try to fix this validity issue by using the same subunit selectors on the left hand side:
python
IP3R_1[R000, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s + IP3.o <r[1]> IP3R_1[R100, ~(R000 | R100), ~(R000 | R100), ~(R000 | R100)].s
r[1].K = a1, b1
This is a valid reaction but it does not cover all cases of IP3 binding to an IP3R in which only one subunit is in state R000. For example, IP3R[R000, R100, R111, R111] would not be taken into account because its second subunit is R100, which does not match with the subunit selector ~(R000 | R100).
From all these examples, it becomes clear that complex selectors are not well suited to declaring reactions that involve single subunits instead of full complexes. These reactions should instead be declared with their dedicated syntax.
Reactions involving subunits
In order to express reactions that involve subunits instead of full complexes, we can simply use subunit states as reactants. The IP3 binding reaction to R000 can thus be declared with:
python
with mdl, ssys:
with IP3R[...]:
R000.s + IP3.o <r[1]> R100.s
r[1].K = a1, b1
The reaction itself corresponds exactly to the reaction represented in the previous figure. The main difference with the full complex reactions we saw before is that the reaction declaration needs to be done inside a with block that uses a complex selector. This specifies the complex to which the reaction applies as well as the states that the complex needs to be in for the reaction to apply. In our case, the reaction applies to IP3R complexes in any state. We do not need to specify that at least one subunit should be in state R000 since it is already implicitly required by the presence of R000.s in the left hand side of the reaction.
Note that, in addition to being much simpler than our previous attempts using complex selectors, this syntax makes it very easy to declare the unbinding reaction; we just need to make the reaction bidirectional.
The rates are the per-subunit rates, as in the figure. STEPS will automatically compute the coefficients such that a complex with 2 subunits in state R000 will undergo the change of one of its subunits with rate $2a_1$. Finally, the position of the complex is indicated by adding the position indicator .s to the subunit state itself.
The following figure represents the full complex reactions that are equivalent to 2 examples of subunits reactions:
<img src="images/complex_reactions.png"/>
Note that in both cases, only a small fraction of the possible reactions is represented. In each case, the required coefficient is applied to the rate that was used in the subunit reaction. For example, the first complex reaction of the left column can happen in four different ways since all four subunits are in the R000 state; since all these ways result in the same equivalent state IP3R[R100, R000, R000, R000], the subunit reaction rate is multiplied by 4 to get the complex reaction rate. Note that if we used the StrongOrdering ordering function, IP3R[R100, R000, R000, R000] would be different from e.g. IP3R[R000, R100, R000, R000], so four distinct complex reactions with rate $a_1$ would be declared.
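To make the combinatorial factor concrete, here is a tiny plain-Python sketch (not STEPS code; the numeric rate is arbitrary) of the rule described above, where the pooled full-complex rate is the per-subunit rate multiplied by the number of subunits currently in the reactant state:
```python
# Plain-Python illustration of the combinatorial factor (not STEPS API code).
# If n_R000 subunits of a complex are in state R000, the pooled full-complex
# IP3-binding reaction fires with rate n_R000 * a1.
a1 = 1.0  # per-subunit binding rate, arbitrary units for this sketch
for n_R000 in range(1, 5):
    print(f"{n_R000} subunit(s) in R000 -> equivalent complex reaction rate {n_R000 * a1:.1f}")
```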
Expressing cooperativity with complex selectors
In our example, subunits bind IP3 and Ca2+ independently; a simple way to express cooperativity would be to use several with blocks with different complex selectors. For example, if the binding rate of IP3 to a R000 subunit depended on the number of subunits in the R100 state, we could write:
python
with mdl, ssys:
# Binding
with IP3R[~R100, ~R100, ~R100, ~R100]:
R000.s + IP3.o >r[1]> R100.s
r[1].K = a1_0
with IP3R[ R100, ~R100, ~R100, ~R100]:
R000.s + IP3.o >r[1]> R100.s
r[1].K = a1_1
with IP3R[ R100, R100, ~R100, ~R100]:
R000.s + IP3.o >r[1]> R100.s
r[1].K = a1_2
with IP3R[ R100, R100, R100, ~R100]:
R000.s + IP3.o >r[1]> R100.s
r[1].K = a1_3
# Unbinding
with IP3R[...]:
R100.s >r[1]> R000.s + IP3.o
r[1].K = b1
Here a1_0 is the IP3 binding rate to R000 when no subunits are in the R100 state, a1_1 when one subunit is in this state, and so on. Note that the unbinding reaction now needs to be declared separately because, for the with IP3R[~R100, ~R100, ~R100, ~R100]: block, the complex selector would be incompatible with the R100.s right hand side.
Expressing cooperativity with complex-dependent reaction rates
There is however a simpler way to express cooperativity: using complex-dependent reaction rates. The following example declares the same reactions as the previous one:
```python
rates = [a1_0, a1_1, a1_2, a1_3]
a1 = CompDepRate(lambda state: rates[state.Count(R100)], [IP3R])
with mdl, ssys:
with IP3R[...]:
R000.s + IP3.o <r[1]> R100.s
r[1].K = a1, b1
```
We first declare a list to hold all our a1_x rates; we then declare the a1 rate as a CompDepRate object. Its constructor (see documentation) takes two parameters: the first one is a function that takes one or several complex states as parameters and returns a reaction rate; the second is the list of complexes whose states influence the rate. In our case, the rate only depends on the state of the IP3R complex. Since it is possible to declare reactions between two complexes, the corresponding rate can be declared with CompDepRate(lambda state1, state2: ..., [Comp1, Comp2]). Note that the lambda function now takes two parameters, corresponding to the states of the two complexes. They are given in the same order as in the [Comp1, Comp2] list.
Note that the lambda function in the CompDepRate constructor makes use of the Count method (see documentation) from the ComplexState class. This method takes a SubUnitState or a SubUnitSelector as a parameter and returns the number of subunits whose state matches the one passed as a parameter.
The reaction can then be declared inside a with IP3R[...] block, meaning it applies to all complexes, no matter their state. The forward rate is then simply set to the CompDepRate object we declared.
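As a further illustration (a hypothetical rate, not part of the IP3R model declared in this chapter), Count can also be given a SubUnitSelector, for example to make a rate scale with the number of IP3-bound subunits:
```python
# Hypothetical cooperative rate, for illustration only; R100 | R110 is the
# SubUnitSelector matching IP3-bound subunit states used earlier in this chapter.
a_base = 1e6  # made-up elementary rate
a_coop = CompDepRate(lambda state: a_base * (1 + state.Count(R100 | R110)), [IP3R])
```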
Declaring reactions involving subunits can be done in a lot of different ways. We covered the most common cases in the previous subsections and advanced use cases are treated in a separate section, as appendix to this chapter.
Let us now come back to our main IP3R simulation example and declare the missing reactions:
End of explanation
"""
geom = Geometry()
with geom:
cyt, ER = Compartment.Create()
cyt.Vol = cytVol
ER.Vol = ERVol
memb = Patch.Create(ER, cyt, ssys)
memb.Area = 0.4143e-12
"""
Explanation: The full subunit reaction network is declared in the with IP3R[...]: block. The remaining lines declare the reactions associated with the Ca2+ leak from the endoplasmic reticulum (ER) as well as the Ca2+ pumping into the ER.
Geometry and simulation
The well-mixed geometry is declared easily with:
End of explanation
"""
rng = RNG('mt19937', 512, 7233)
sim = Simulation('Wmdirect', mdl, geom, rng)
rs = ResultSelector(sim)
cytCa = rs.cyt.Ca.Conc
caFlux = rs.SUM(rs.memb.caflux['fwd'].Extent) << rs.SUM(rs.memb.caflux['bkw'].Extent)
IP3RStates = rs.memb.IP3R[~R110, ~R110, ~R110, ~R110].Count
IP3RStates <<= rs.memb.IP3R[ R110, ~R110, ~R110, ~R110].Count
IP3RStates <<= rs.memb.IP3R[ R110, R110, ~R110, ~R110].Count
IP3RStates <<= rs.memb.IP3R[ R110, R110, R110, ~R110].Count
IP3RStates <<= rs.memb.IP3R[ R110, R110, R110, R110].Count
sim.toSave(cytCa, caFlux, IP3RStates, dt=0.05)
"""
Explanation: As in other chapters, we then declare the simulation object as well as the data to be saved:
End of explanation
"""
ENDT = 10.0
sim.newRun()
# Initial conditions
sim.cyt.Ca.Conc = 3.30657e-8
sim.cyt.IP3.Conc = 0.2e-6
sim.ER.Ca.Conc = c0/c1
sim.memb.ERPump.Count = nbPumps
sim.memb.IP3R[R000, R000, R000, R000].Count = nbIP3R
sim.run(ENDT)
"""
Explanation: Both cytCa and caFlux result selectors use syntaxes that were already presented in the previous chapters. Note however that we use rs.SUM() on caFlux paths because rs.memb.caflux['fwd'].Extent saves the extents of all reactions that are implied by the 'caflux' complex reaction. Since we want to look at the overall complex reaction extent, we sum these values with rs.SUM().
The data saving relative to complexes themselves is new but relatively easy to understand. In our example, we want to track how receptors are distributed in terms of number of subunits in the R110 open state. We save 5 values: the number of IP3R that have 0 subunits in the R110 state, the number of IP3R that have 1 subunit in this state, etc. Note that the rs.memb.IP3R.Count result selector would save the total number of IP3R on the ER membrane.
In addition to counting numbers of complexes, it is also possible to count numbers of subunits. rs.memb.IP3R.R110.Count would save the total number of subunits of IP3R that are in state R110.
Finally, if one wanted to save the separate counts of all states matching some complex selectors, one could use rs.memb.LIST(*IP3R[R110, R110, ...]).Count. This uses the LIST() function that we saw in previous chapters by feeding it all the states that we want to save.
We can then proceed to setting up initial conditions and running the simulation:
End of explanation
"""
from matplotlib import pyplot as plt
import numpy as np
plt.figure(figsize=(10, 7))
plt.plot(cytCa.time[0], cytCa.data[0]*1e6)
plt.legend(cytCa.labels)
plt.xlabel('Time [s]')
plt.ylabel('Concentration [μM]')
plt.show()
plt.figure(figsize=(10, 7))
plt.plot(caFlux.time[0], caFlux.data[0])
plt.legend(caFlux.labels)
plt.xlabel('Time [s]')
plt.ylabel('Reaction extent')
plt.show()
"""
Explanation: Note that injecting IP3R complexes requires specifying their states completely.
Plotting the results
We then plot the results from the cytCa and caFlux result selectors first:
End of explanation
"""
n = 20
plt.figure(figsize=(10, 7))
for i in range(IP3RStates.data[0].shape[1]):
sig = IP3RStates.data[0, :, i]
avg = np.convolve(sig, np.ones(n) / n, 'valid')
tme = IP3RStates.time[0, n//2:-n//2+1]
plt.plot(tme, avg, color=f'C{i}', label=IP3RStates.labels[i])
plt.plot(IP3RStates.time[0], sig, '--', linewidth=1, color=f'C{i}', alpha=0.4)
plt.legend(loc=1)
plt.xlabel('Time [s]')
plt.ylabel('Count')
plt.show()
"""
Explanation: We then plot the data from the IP3RStates result selector. In addition to the raw data, we compute a sliding window average to ease visualization:
End of explanation
"""
|
Kaggle/learntools
|
notebooks/data_cleaning/raw/tut1.ipynb
|
apache-2.0
|
# modules we'll use
import pandas as pd
import numpy as np
# read in all our data
nfl_data = pd.read_csv("../input/nflplaybyplay2009to2016/NFL Play by Play 2009-2017 (v4).csv")
# set seed for reproducibility
np.random.seed(0)
"""
Explanation: Welcome to the Data Cleaning course on Kaggle Learn!
Data cleaning is a key part of data science, but it can be deeply frustrating. Why are some of your text fields garbled? What should you do about those missing values? Why aren’t your dates formatted correctly? How can you quickly clean up inconsistent data entry? In this course, you'll learn why you've run into these problems and, more importantly, how to fix them!
In this course, you’ll learn how to tackle some of the most common data cleaning problems so you can get to actually analyzing your data faster. You’ll work through five hands-on exercises with real, messy data and answer some of your most commonly-asked data cleaning questions.
In this notebook, we'll look at how to deal with missing values.
Take a first look at the data
The first thing we'll need to do is load in the libraries and dataset we'll be using.
For demonstration, we'll use a dataset of events that occurred in American Football games. In the following exercise, you'll apply your new skills to a dataset of building permits issued in San Francisco.
End of explanation
"""
# look at the first five rows of the nfl_data file.
# I can see a handful of missing data already!
nfl_data.head()
"""
Explanation: The first thing to do when you get a new dataset is take a look at some of it. This lets you see that it all read in correctly and gives an idea of what's going on with the data. In this case, let's see if there are any missing values, which will be represented with NaN or None.
End of explanation
"""
# get the number of missing data points per column
missing_values_count = nfl_data.isnull().sum()
# look at the # of missing points in the first ten columns
missing_values_count[0:10]
"""
Explanation: Yep, it looks like there's some missing values.
How many missing data points do we have?
Ok, now we know that we do have some missing values. Let's see how many we have in each column.
End of explanation
"""
# how many total missing values do we have?
total_cells = np.prod(nfl_data.shape)
total_missing = missing_values_count.sum()
# percent of data that is missing
percent_missing = (total_missing/total_cells) * 100
print(percent_missing)
"""
Explanation: That seems like a lot! It might be helpful to see what percentage of the values in our dataset were missing to give us a better sense of the scale of this problem:
End of explanation
"""
# look at the # of missing points in the first ten columns
missing_values_count[0:10]
"""
Explanation: Wow, almost a quarter of the cells in this dataset are empty! In the next step, we're going to take a closer look at some of the columns with missing values and try to figure out what might be going on with them.
Figure out why the data is missing
This is the point at which we get into the part of data science that I like to call "data intution", by which I mean "really looking at your data and trying to figure out why it is the way it is and how that will affect your analysis". It can be a frustrating part of data science, especially if you're newer to the field and don't have a lot of experience. For dealing with missing values, you'll need to use your intution to figure out why the value is missing. One of the most important questions you can ask yourself to help figure this out is this:
Is this value missing because it wasn't recorded or because it doesn't exist?
If a value is missing because it doesn't exist (like the height of the oldest child of someone who doesn't have any children) then it doesn't make sense to try and guess what it might be. These values you probably do want to keep as NaN. On the other hand, if a value is missing because it wasn't recorded, then you can try to guess what it might have been based on the other values in that column and row. This is called imputation, and we'll learn how to do it next! :)
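Just to illustrate the idea on a tiny toy example (this isn't part of the NFL dataset), one of the simplest imputation strategies is to fill a numeric column's missing values with that column's mean:
```python
import numpy as np
import pandas as pd

# toy data: one numeric column with a couple of missing values
toy = pd.DataFrame({"yards": [3, np.nan, 7, np.nan, 5]})
toy["yards"] = toy["yards"].fillna(toy["yards"].mean())  # mean imputation
print(toy)
```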
Let's work through an example. Looking at the number of missing values in the nfl_data dataframe, I notice that the column "TimesSec" has a lot of missing values in it:
End of explanation
"""
# remove all the rows that contain a missing value
nfl_data.dropna()
"""
Explanation: By looking at the documentation, I can see that this column has information on the number of seconds left in the game when the play was made. This means that these values are probably missing because they were not recorded, rather than because they don't exist. So, it would make sense for us to try and guess what they should be rather than just leaving them as NA's.
On the other hand, there are other fields, like "PenalizedTeam", that also have a lot of missing values. In this case, though, the field is missing because if there was no penalty then it doesn't make sense to say which team was penalized. For this column, it would make more sense to either leave it empty or to add a third value like "neither" and use that to replace the NA's.
Tip: This is a great place to read over the dataset documentation if you haven't already! If you're working with a dataset that you've gotten from another person, you can also try reaching out to them to get more information.
If you're doing very careful data analysis, this is the point at which you'd look at each column individually to figure out the best strategy for filling those missing values. For the rest of this notebook, we'll cover some "quick and dirty" techniques that can help you with missing values but will probably also end up removing some useful information or adding some noise to your data.
Drop missing values
If you're in a hurry or don't have a reason to figure out why your values are missing, one option you have is to just remove any rows or columns that contain missing values. (Note: I don't generally recommend this approch for important projects! It's usually worth it to take the time to go through your data and really look at all the columns with missing values one-by-one to really get to know your dataset.)
If you're sure you want to drop rows with missing values, pandas does have a handy function, dropna() to help you do this. Let's try it out on our NFL dataset!
End of explanation
"""
# remove all columns with at least one missing value
columns_with_na_dropped = nfl_data.dropna(axis=1)
columns_with_na_dropped.head()
# just how much data did we lose?
print("Columns in original dataset: %d \n" % nfl_data.shape[1])
print("Columns with na's dropped: %d" % columns_with_na_dropped.shape[1])
"""
Explanation: Oh dear, it looks like that's removed all our data! 😱 This is because every row in our dataset had at least one missing value. We might have better luck removing all the columns that have at least one missing value instead.
End of explanation
"""
# get a small subset of the NFL dataset
subset_nfl_data = nfl_data.loc[:, 'EPA':'Season'].head()
subset_nfl_data
"""
Explanation: We've lost quite a bit of data, but at this point we have successfully removed all the NaN's from our data.
Filling in missing values automatically
Another option is to try and fill in the missing values. For this next bit, I'm getting a small sub-section of the NFL data so that it will print well.
End of explanation
"""
# replace all NA's with 0
subset_nfl_data.fillna(0)
"""
Explanation: We can use the pandas fillna() function to fill in missing values in a dataframe for us. One option we have is to specify what we want the NaN values to be replaced with. Here, I'm saying that I would like to replace all the NaN values with 0.
End of explanation
"""
# replace all NA's with the value that comes directly after it in the same column,
# then replace all the remaining na's with 0
subset_nfl_data.fillna(method='bfill', axis=0).fillna(0)
"""
Explanation: I could also be a bit more savvy and replace missing values with whatever value comes directly after it in the same column. (This makes a lot of sense for datasets where the observations have some sort of logical order to them.)
End of explanation
"""
|
folivetti/PIPYTHON
|
Aula08Recursividade.ipynb
|
mit
|
def imprime(i):
print (i)
def imprimeLista(l):
for e in l:
imprime (e)
imprimeLista([1, 3, 5, 7])
"""
Explanation: Introduction to Programming in Python
Recursion
In a program, it is very common to call a function from inside another function.
End of explanation
"""
def fatorial(n):
fat = 1
while n > 1:
fat *= n
n -= 1
return fat
print(fatorial(3))
print(fatorial(6))
"""
Explanation: However, nothing prevents a function from calling itself!
As an example, let us think about the factorial function.
The factorials of 3 and 6, respectively, can be computed as described below:
$3! = 3 \times 2 \times 1 = 6$
$6! = 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 720$
In general, we can compute the factorial as:
$n! = n \times (n-1) \times (n-2) \times \ldots \times 3 \times 2 \times 1$
This way, we can easily implement a function to compute the factorial:
End of explanation
"""
import sys
sys.setrecursionlimit(50)
# When this function is executed, Python will keep processing until
# a stack overflow occurs.
def fatorial(n):
return n * fatorial(n-1)
print(fatorial(6))
sys.setrecursionlimit(1000)
"""
Explanation: Another way to compute the factorial is:
$3! = 3 \times 2! = 3 \times 2 = 6$
$6! = 6 \times 5! = 6 \times 120 = 720$
In general, we have:
$n! = n \times (n-1)!$
From the equation above, we can deduce that any factorial can be computed as the number itself multiplied by the factorial of that number minus one.
And what is the factorial of $n-1$?
A: $(n-1)! = (n-1) \times (n-2)!$
Let us propose a function that tries to compute the equation above:
python
def fatorial(n)
    return n * (n-1)!
But how can we compute (n-1)!?
Isn't that exactly the goal of the fatorial(n) function we are developing?
So, why not use this very function to compute the factorial of (n-1)?
With that in mind, we would have:
python
def fatorial(n)
    return n * fatorial(n-1)
However, there is a serious error in this function: if the function calls itself, when does it ever finish and return?
A: The way it is written, it never finishes... :'(
End of explanation
"""
def fatorial(n):
    # TRIVIAL part
    if n == 0:
        return 1
    # GENERAL or RECURSIVE part
else:
return n * fatorial(n-1)
print(fatorial(3))
print(fatorial(6))
print(fatorial(20))
"""
Explanation: The main problem with the previous example is that the function does not know when to stop!
Therefore, every recursive function needs a termination condition.
Every recursive function should follow a logic similar to the one shown below:
```python
def funcRecursiva():
    # TRIVIAL part
    if <we_know_the_answer>:
        return <the_answer>
    # GENERAL part
    else:
        # use recursion
        return funcRecursiva()
```
In the case of the factorial, we know that the factorial of 0 is 1, right?
That is our TRIVIAL part.
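To reinforce the pattern, here is one more small example (an extra illustration, not part of the original exercises) that follows the same TRIVIAL / GENERAL structure: recursively summing the elements of a list.
```python
def recursive_sum(l):
    # TRIVIAL part: the sum of an empty list is 0
    if len(l) == 0:
        return 0
    # GENERAL (recursive) part: the first element plus the sum of the rest
    else:
        return l[0] + recursive_sum(l[1:])

print(recursive_sum([1, 3, 5, 7]))  # prints 16
```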
End of explanation
"""
def hanoi(num_discos, hasteOrigem, hasteDestino, hasteAuxiliar):
    # TRIVIAL part
    if num_discos == 1:
        print ("Move the disk from peg " + hasteOrigem + " to peg " + hasteDestino + ".")
    # GENERAL part
    else:
        hanoi(num_discos-1, hasteOrigem, hasteAuxiliar, hasteDestino)
        print ("Move the disk from peg " + hasteOrigem + " to peg " + hasteDestino + ".")
        hanoi(num_discos-1, hasteAuxiliar, hasteDestino, hasteOrigem)
hanoi(4, "A", "C", "B")
"""
Explanation: Although recursion may seem to complicate the solution, as shown with the factorial (previous example), a recursive function can make many problems much easier to implement, because it lets us divide a problem into smaller problems in order to solve it.
To better understand the motivation for using recursion, let us take a more complex example. Do you remember the Tower of Hanoi?
If you do not, click here before you keep reading.
How can we build an algorithm to solve the Tower of Hanoi?
It is not trivial. But we can use recursion in order to divide and conquer.
The key to solving this problem is to break the challenge into smaller challenges until the challenge becomes trivial.
In this case, the trivial case is when we have only a single disk to move (then we just move the disk directly).
If we want to move the disks from peg 1 to peg 3, and there are $n$ disks on peg 1, we can move $n-1$ disks to the auxiliary peg 2, move the last disk from peg 1 to peg 3 and then take the $n-1$ disks from the auxiliary peg 2 to peg 3. See the figure below:
In other words, the TRIVIAL part of our recursion is moving a single disk from the origin peg to the destination peg.
The GENERAL (recursive) part is moving the $n-1$ disks from peg 1 to peg 2 and then moving them from peg 2 to peg 3.
```python
def hanoi():
    # TRIVIAL part
    if number_of_disks == 1:
        <move_the_disk_from_origin_to_destination>
    # GENERAL part
    else:
        hanoi(<move_n-1_disks_from_origin_to_auxiliary>)
        <move_the_disk_from_origin_to_destination>
        hanoi(<move_n-1_disks_from_auxiliary_to_destination>)
```
See the correct implementation below.
End of explanation
"""
from time import sleep
def contadorRegressivo(n):
if n == 0:
print ("BOOM!")
else:
print (str(n) + "s")
sleep(1)
contadorRegressivo(n-1)
contadorRegressivo(3)
"""
Explanation: Did it work? Check it:
Now try to develop another solution to the Tower of Hanoi problem without using recursion.
Think about it for a while and you will realize that it is very hard (though not impossible).
Now that we understand the idea of recursion, let us analyze, step by step and from a computational point of view, what happens to the program during a recursion.
Let us start with a very simple recursive function: a countdown!
End of explanation
"""
space = ""
from time import sleep
def contadorRegressivo(n):
space = "||||" * (3-n)
    print(space + "contadorRegressivo(n = {}) # Call number {}!".format(n,4-n))
if n == 0:
        print(space + " if {} == 0: (TRUE)".format(n))
print(space + " print (\"BOOM!\")")
else:
        print(space + " if {} == 0: (FALSE)".format(n))
print(space + " else:")
print(space + " print (str(n = {}) + \"s\")".format(n))
print(space + " sleep(1)")
        print(space + " contadorRegressivo(n-1) # Recursion!")
contadorRegressivo(n-1)
if n < 3:
        print(space + " # Returning to contadorRegressivo(n = {})".format(n+1))
else:
        print(" # End of execution")
contadorRegressivo(3)
"""
Explanation: In this example, when calling contadorRegressivo(n = 3), Python will check that n is different from 0, print "3s", sleep for one second and call contadorRegressivo(n = 2).
The call contadorRegressivo(n = 2) will follow the same script, but with n equal to 2.
The same happens with contadorRegressivo(n = 1).
Finally, contadorRegressivo(n = 0) falls into the trivial part of the recursion, which prints "BOOM!" and returns (without recursing).
At this point, contadorRegressivo(n = 0) returns to contadorRegressivo(n = 1), which returns to contadorRegressivo(n = 2), which finally returns to contadorRegressivo(n = 3).
Note that although the function name is the same, the parameters are different; that is, even though the same code is being executed, all the values inside each call are different, because each call is a different instance of the function.
The code below is similar to the previous example, but it prints the recursion step by step.
End of explanation
"""
n1 = 1
n2 = 1
for i in range(1, 20):
n1, n2 = n2, (n1+n2)
print (n1)
print ("...")
"""
Explanation: Activity 1
The Fibonacci sequence is a series that follows the rule $x_{n}=x_{n-1}+x_{n-2}$, with $x_{1} = 1$ and $x_{2} = 1$. The first elements of the Fibonacci sequence are: 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, ...
A possible piece of code to print the Fibonacci sequence is shown below:
End of explanation
"""
def Fibonacci(n):
"""
    Your code here
"""
Fibonacci(5)
Fibonacci(7)
"""
Explanation: Write a piece of code that computes the Fibonacci sequence using recursion.
End of explanation
"""
def Euclides (a, b):
while b != 0:
a, b = b, a % b
return a
print(Euclides(10, 8))
print(Euclides(21, 13))
print(Euclides(63, 108))
"""
Explanation: Activity 2
The greatest common divisor (GCD) of two positive integers can be computed using Euclid's algorithm.
See the implementation of Euclid's algorithm below:
End of explanation
"""
|
moble/spherical_functions
|
Notes/conventions.ipynb
|
mit
|
import csv
import sympy
from sympy import sin, cos
from sympy.parsing.mathematica import mathematica
from sympy.physics.quantum.spin import Rotation
from sympy.abc import _clash
import numpy as np
import quaternion
import spherical_functions as sf
"""
Explanation: NOTE: I've run this notebook with the correction to sympy's trig simplification routines found in this pull request, which has not yet made it into a released version. I just ran python -c "import sympy; print(sympy.__file__)" to find where on my system the actual files are, then edited .../simplify/fu.py as given in the PR.
End of explanation
"""
Rotation.D(j, m, mp, alpha, beta, gamma)
"""
Explanation: Mathematica documentation
The Mathematica documentation for WignerD says
WignerD[{j, m_1, m_2}, psi, theta, phi] gives the Wigner D-function $D^j_{m_1, m_2}(\psi, \theta, \phi)$.
The Wigner D-function $D^j_{m_1, m_2}$ gives the matrix element of a rotation operator parametrized by Euler angles in a $2 j+1$-dimensional unitary representation of a rotation group when parameters $j, m_1, m_2$ are physical, i.e. all integers or half-integers such that $-j \leq m_1,m_2 \leq j$.
The Wolfram Language uses phase conventions where $D^j_{m_1, m_2}(\psi, \theta, \phi) = e^{i m_1 \psi + i m_2 \phi} D^j_{m_1, m_2}(0, \theta, 0)$.
WignerD[{j, m1, m2}, psi, theta, phi] == (-1)^(m1 - m2) Conjugate[WignerD[{j, -m1, -m2}, psi, theta, phi]]
WignerD[{j, m1, m2}, psi, theta, phi] == (-1)^(m1 - m2) WignerD[{j, m2, m1}, phi, theta, psi]
There are no more specifics about what the Euler angles mean in this function's documentation, but the documentation for EulerMatrix[{alpha, beta, gamma}] says that it "gives the Euler 3D rotation matrix formed by rotating by $\alpha$ around the current $z$ axis, then by $\beta$ around the current $y$ axis, and then by $\gamma$ around the current $z$ axis." This is ambiguous, but
we later find that EulerMatrix[{alpha, beta, gamma}, {a, b, c}] is equivalent to $R_{\alpha, a} R_{\beta, b} R_{\gamma, c}$. Evidently, "current" refers to the rotating body axes, and so EulerMatrix[{alpha, beta, gamma}] is what I would write in quaternion form as
\begin{equation}
e^{\alpha \hat{z}/2} e^{\beta \hat{y}/2} e^{\gamma \hat{z}/2} = e^{\gamma \hat{z}''/2} e^{\beta \hat{y}'/2} e^{\alpha \hat{z}/2}
\end{equation}
I've created a CSV file with the analytic expressions for j from 0 through 5, using this code:
mathematica
SetDirectory[NotebookDirectory[]];
Export[
"conventions_mathematica.csv",
Flatten[
Table[{j, m1, m2, ToString[WignerD[{j, m1, m2}, psi, theta, phi], InputForm]},
{j, 0, 5}, {m1, -j, j}, {m2, -j, j} ], 2],
TableHeadings -> {"j", "m1", "m2", "WignerD[{j, m1, m2}, psi, theta, phi]"}
];
I'll be comparing these expressions to SymPy's, and then evaluating them to compare to the results from spherical_functions.
SymPy documentation
The SymPy documentation is unclear and a little self-contradictory. The main docstring for sympy.physics.quantum.spin.Rotation says that it
Defines the rotation operator in terms of the Euler angles defined by the z-y-z convention for a passive transformation. That is the coordinate axes are rotated first about the z-axis, giving the new x'-y'-z' axes. Then this new coordinate system is rotated about the new y'-axis, giving new x''-y''-z'' axes. Then this new coordinate system is rotated about the z''-axis. Conventions follow those laid out in Varshalovich.
* alpha: First Euler Angle
* beta: Second Euler angle
* gamma: Third Euler angle
The docstring for sympy.physics.quantum.spin.WignerD says
The Wigner D-function gives the matrix elements of the rotation operator in the jm-representation. For the Euler angles $\alpha$, $\beta$, $\gamma$, the D-function is defined such that:
\begin{equation}
\left\langle j,m| \mathcal{R}(\alpha, \beta, \gamma ) |j',m' \right \rangle
= \delta_{jj'} D(j, m, m', \alpha, \beta, \gamma)
\end{equation}
Where the rotation operator is as defined by the Rotation class.
The Wigner D-function defined in this way gives:
\begin{equation}
D(j, m, m', \alpha, \beta, \gamma) = e^{-i m \alpha} d(j, m, m', \beta) e^{-i m' \gamma}
\end{equation}
Where d is the Wigner small-d function, which is given by Rotation.d.
The Wigner small-d function gives the component of the Wigner D-function that is determined by the second Euler angle. That is the Wigner D-function is:
\begin{equation}
D(j, m, m', \alpha, \beta, \gamma) = e^{-i m \alpha} d(j, m, m', \beta) e^{-i m' \gamma}
\end{equation}
Where d is the small-d function. The Wigner D-function is given by Rotation.D.
* j: Total angular momentum
* m: Eigenvalue of angular momentum along axis after rotation
* mp: Eigenvalue of angular momentum along rotated axis
Again, this is all pretty ambiguous, regarding exactly which angle is supposed to go with which rotation, but my best guess is that it looks like this:
\begin{equation}
e^{\gamma \hat{z}''/2} e^{\beta \hat{y}'/2} e^{\alpha \hat{z}/2} = e^{\alpha \hat{z}/2} e^{\beta \hat{y}/2} e^{\gamma \hat{z}/2},
\end{equation}
which is precisely the same as my interpretation of Mathematica's convention.
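As a quick numerical sanity check of this reading (a sketch that assumes the quaternion package's from_euler_angles and its x, y, z unit constants follow the convention copied into QuaternionFromEuler above), the two quaternions below should agree:
```python
import numpy as np
import quaternion

alpha_, beta_, gamma_ = 0.3, 1.1, 2.4
q_euler = quaternion.from_euler_angles(alpha_, beta_, gamma_)
q_product = (np.exp(alpha_ * quaternion.z / 2)
             * np.exp(beta_ * quaternion.y / 2)
             * np.exp(gamma_ * quaternion.z / 2))
print(q_euler - q_product)  # should be numerically zero if the convention matches
```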
Here's the notation SymPy uses for the D matrices:
End of explanation
"""
class QuaternionFromEuler(object):
def __init__(self, alpha, beta, gamma):
# This is essentially copied from the quaternion code
self.w = cos(beta/2)*cos((alpha+gamma)/2)
self.x = -sin(beta/2)*sin((alpha-gamma)/2)
self.y = sin(beta/2)*cos((alpha-gamma)/2)
self.z = cos(beta/2)*sin((alpha+gamma)/2)
q = QuaternionFromEuler(psi, theta, phi)
sympy.Matrix([
[simplify(1 - 2*(q.y**2 + q.z**2)), simplify(2*(q.x*q.y - q.z*q.w)), simplify(2*(q.x*q.z + q.y*q.w))],
[simplify(2*(q.x*q.y + q.z*q.w)), simplify(1 - 2*(q.x**2 + q.z**2)), simplify(2*(q.y*q.z - q.x*q.w))],
[simplify(2*(q.x*q.z - q.y*q.w)), simplify(2*(q.y*q.z + q.x*q.w)), simplify(1 - 2*(q.x**2 + q.y**2))]
])
"""
Explanation: Compare quaternion's Euler angles to Mathematica's
Although the Mathematica documentation doesn't explicitly relate its WignerD and EulerMatrix functions, I think enough of Mathematica to guess that they at least use consistent conventions. And spherical_functions explicitly takes a quaternion object, so to the extent that I use Euler angles at all, we can stay consistent in this way.
So first, we check the rotation matrix that comes out of quaternion via Euler angles:
End of explanation
"""
j, m1, m2 = sympy.symbols('j, m1, m2', integer=True)
#psi, theta, phi = sympy.symbols('psi, theta, phi', real=True)
with open('conventions_mathematica.csv', 'r') as csvfile:
reader = csv.reader(csvfile)
header = next(reader, None)
print('Mathematica header:', ', '.join(header))
mathematica_wignerD = {
tuple(int(s) for s in row[:3]): sympy.sympify(mathematica(row[3]))
for row in reader
}
"""
Explanation: This is precisely the same matrix as Mathematica returns from EulerMatrix[{ψ, θ, ϕ}], which would suggest to me that my Euler conventions are the same as Mathematica's.
Compare Mathematica's expressions to SymPy's
End of explanation
"""
free_symbols = list(set(symbol for jm1m2 in mathematica_wignerD for symbol in mathematica_wignerD[jm1m2].free_symbols))
sorted(free_symbols, key=lambda s: str(s))
phi, psi, theta = sorted(free_symbols, key=lambda s: str(s))
j, m, mp = sympy.symbols('j, m, mprime', integer=True)
alpha, beta, gamma = sympy.symbols('alpha, beta, gamma', real=True)
"""
Explanation: Unfortunately, I can't get sympify to correctly use locals, so I have to just grab all the symbols that it created in the previous cell, as follows:
End of explanation
"""
half_angle_replacements = ([]
+[(sin(theta/2)**n, ((1-cos(theta))/2)**(n//2)) for n in [2, 4, 6, 8]]
+[(cos(theta/2)**n, ((1+cos(theta))/2)**(n//2)) for n in [2, 4, 6, 8]]
+[(sin(theta/2)*cos(theta/2), sin(theta)/2)]
#+[(sin(3*theta/2), 3*sin(theta/2)-4*sin(theta/2)**3)]
#+[(sin(5*theta/2), 5*cos(theta/2)**4*sin(theta/2)-10*cos(theta/2)**2*sin(theta/2)**3+sin(theta/2)**5)]
)
def simplify(difference):
from sympy import trigsimp, expand
difference = trigsimp(expand(sympy.simplify(difference), trig=True))
difference = sympy.simplify(difference.subs(half_angle_replacements, simultaneous=True))
return difference
"""
Explanation: As always, SymPy isn't good enough at simplifying trig functions, so I have to jump through some extra hoops:
End of explanation
"""
for j, m1, m2 in mathematica_wignerD:
mathematica_value = sympy.expand(sympy.trigsimp(sympy.simplify(mathematica_wignerD[(j, m1, m2)])), trig=True)
sympy_value = sympy.expand(sympy.trigsimp(sympy.simplify(Rotation.D(j, m1, m2, -psi, -theta, -phi).doit())), trig=True)
ratio = sympy.simplify(mathematica_value/sympy_value)
mathematica_value, sympy_value = sympy.fraction(ratio)
mathematica_value = sympy.simplify(sympy.simplify(mathematica_value.subs(half_angle_replacements)).subs(half_angle_replacements))
sympy_value = sympy.simplify(sympy.simplify(sympy_value.subs(half_angle_replacements)).subs(half_angle_replacements))
difference = simplify(mathematica_value - sympy_value)
print('Checking (j, m1, m2) = ({0}, {1}, {2})'.format(j, m1, m2))
if difference:
display(mathematica_value, sympy_value, difference)
print()
"""
Explanation: Finally, we can go through and check each and every expression (even though we could have assumed some symmetries to skip certain combinations) to ensure that the Mathematica expression returned by
WignerD[{j, m1, m2}, psi, theta, phi]
is identical to the SymPy expression returned by
Rotation.D(j, m1, m2, -psi, -theta, -phi)
End of explanation
"""
for j in range(6):
for m1 in range(-j, j+1):
for m2 in range(-j, j+1):
#print(j, m1, m2)
difference = sympy.simplify(Rotation.D(j, m1, m2, -psi, -theta, -phi).doit()
- (-1)**(m1+m2)*Rotation.D(j, m2, m1, -phi, -theta, -psi).doit())
if difference:
display(difference)
"""
Explanation: All cases show agreement.
I'm not quite sure how to interpret this weird sign difference. Flipping the signs is one of the things you do when inverting a rotation, but you also flip the order of the angles. This could essentially be done here as well if we also flip the order of m1 and m2 — except that we need an additional factor of $(-1)^{m_1+m_2}$. So we could think of this as saying that one of these provides the D matrix for the inverse rotation of the other, and they swap the order of the m arguments, and there's a (Condon-Shortley) phase difference.
I'll just quickly verify that Rotation.D actually satisfies this symmetry:
End of explanation
"""
np.random.seed(1234)
for _ in range(100): # Test for 100 sets of random Euler angles
ψ, θ, ϕ = np.random.rand(3) * np.array([2*np.pi, np.pi, 2*np.pi])
for j, m1, m2 in mathematica_wignerD:
mathematica_value = mathematica_wignerD[(j, m1, m2)].subs({psi: ψ, theta: θ, phi: ϕ}).evalf()
spherical_functions_value = (-1)**(m1+m2) * sf.Wigner_D_element(quaternion.from_euler_angles(ψ, θ, ϕ), j, m1, m2)
diff = abs(mathematica_value - spherical_functions_value)
if diff > 3e-13:
print(j, m1, m2, ψ, θ, ϕ, diff)
"""
Explanation: So another way to say this is that SymPy takes the inverse rotation, and returns the transpose with that weird phase.
Compare Mathematica to spherical_functions
I find both Mathematica's and SymPy's descriptions to be ambiguous, but it looks like Mathematica's is closer to my thinking — except that other places in the documentation make me think that their Condon-Shortley phases are weird, so I'll play around with that until I get some agreement. I'll check by simply evaluating on random numbers.
End of explanation
"""
|
conferency/find-my-reviewers
|
tutorials/Preprocessing_and_Training_LDA.ipynb
|
mit
|
# Loading metadata from training database
con = sqlite3.connect("F:/FMR/data.sqlite")
db_documents = pd.read_sql_query("SELECT * from documents", con)
db_authors = pd.read_sql_query("SELECT * from authors", con)
data = db_documents # just a handy alias
data.head()
"""
Explanation: Preparing Data
In this step, we are going to load data from disk into memory and properly format it so that we can process it in the next "preprocessing" stage.
End of explanation
"""
tokenised = load_json("abstract_tokenised.json")
# Let's have a peek
tokenised["acis2001/1"][:10]
"""
Explanation: Loading Tokenised Full Text
In the previous tutorial (Jupyter notebook), we generated a bunch of .json files storing our tokenised full texts. Now we are going to load them.
End of explanation
"""
from textblob import TextBlob
non_en = [] # a list of ids of the documents in other languages
count = 0
for id_, entry in data.iterrows():
count += 1
try:
lang = TextBlob(entry["title"] + " " + entry["abstract"]).detect_language()
except:
raise
if lang != 'en':
non_en.append(id_)
print(lang, data.iloc[id_]["title"])
if (count % 100) == 0:
print("Progress: ", count)
save_pkl(non_en, "non_en.list.pkl")
non_en = load_pkl("non_en.list.pkl")
# Convert our dict-based structure to be a list-based structure that are readable by Gensim and at the same time,
# filter out those non-English documents
tokenised_list = [tokenised[i] for i in data["submission_path"] if i not in non_en]
"""
Explanation: Preprocessing Data for Gensim and Finetuning
In this stage, we preprocess the data so it can be read by Gensim. Then we will further clean up the data to better train the model.
First of all, we need a dictionary of our corpus, i.e., the whole collection of our full texts. However, there are documents in our dataset written in other languages. We need to stick to one language (in this example, English) in order to best train the model, so let's filter the others out first.
Language Detection
TextBlob ships with a handy API wrapper of Google's language detection service. We will store the id of these non-English documents in a list called non_en and save it as a pickled file for later use.
End of explanation
"""
def remove_hyphenation(l):
return [i.replace("- ", "").replace("-", "") for i in l]
tokenised_list = [remove_hyphenation(i) for i in tokenised_list]
"""
Explanation: Although we tried to handle these hyphenations in the previous tutorial, some of them are still present. The most convenient way to deal with them is to remove them from the corpus and rebuild the dictionary, then re-apply our previous filter.
End of explanation
"""
from nltk.stem.wordnet import WordNetLemmatizer
lemmatizer = WordNetLemmatizer()
def lemmatize(l):
return [" ".join([lemmatizer.lemmatize(token)
for token
in phrase.split(" ")])
for phrase in l]
def lemmatize_all(tokenised):
# Lemmatize the documents.
lemmatized = [lemmatize(entry) for entry in tokenised]
return lemmatized
" ".join([lemmatizer.lemmatize(token)
for token
in 'assistive technologies'.split(" ")])
tokenised_list = lemmatize_all(tokenised_list)
# In case we need it in the future
save_json(tokenised_list, "abstract_lemmatized.json")
# To load it:
tokenised_list = load_json("abstract_lemmatized.json")
"""
Explanation: Lemmatization
But before building the vocabulary, we need to unify some variants of the same phrases. For example, "technologies" should be mapped to "technology". This process is called lemmatization.
End of explanation
"""
from gensim.corpora import Dictionary
# Create a dictionary for all the documents. This might take a while.
dictionary = Dictionary(tokenised_list)
# Let's see what's inside, note the spelling :)
# But there is really nothing we can do with that.
dictionary[0]
len(dictionary)
"""
Explanation: Then we can create our lemmatized vocabulary.
End of explanation
"""
# remove tokens that appear in fewer than 2 documents and tokens that appear in more than 50% of the documents.
dictionary.filter_extremes(no_below=2, no_above=0.5, keep_n=None)
len(dictionary)
"""
Explanation: Obviously the vocabulary is way too large. This is because the algorithm used in TextBlob's noun phrase extraction is not very robust in complicated scenarios. Let's see what we can do about this.
Filtering Vocabulary
First of all, let's rule out the most obvious ones: words and phrases that appear in too many documents and ones that appear in only a handful of documents. Gensim provides a very convenient built-in function to filter them out:
End of explanation
"""
# Helpers
display_limit = 10
def shorter_than(n):
bad = []
count = 0
for i in dictionary:
if len(dictionary[i]) < n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def if_in(symbol):
bad = []
count = 0
for i in dictionary:
if symbol in dictionary[i]:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
def more_than(symbol, n):
bad = []
count = 0
for i in dictionary:
if dictionary[i].count(symbol) > n:
count += 1
if count < display_limit:
print(dictionary[i])
bad.append(i)
print(count)
return bad
bad = shorter_than(3)
"""
Explanation: Now we have drastically reduced the size of the vocabulary from 2936116 to 102508. However this is not enough. For example:
End of explanation
"""
dictionary.filter_tokens(bad_ids=bad)
display_limit = 10
bad = if_in("*")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("<")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in(">")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("%")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("/")
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("[")
bad += if_in("]")
bad += if_in("}")
bad += if_in("{")
dictionary.filter_tokens(bad_ids=bad)
display_limit = 20
bad = more_than(" ", 3)
dictionary.filter_tokens(bad_ids=bad)
bad = if_in("- ") # verify that there is no hyphenation problem
bad = if_in("quarter")
dictionary.filter_tokens(bad_ids=bad)
"""
Explanation: We have 752 such meaningless tokens in our vocabulary. Presumably this is because, during the extraction of the PDFs, some mathematical equations were parsed as plain text (of course).
Now we are going to remove these:
End of explanation
"""
names = load_json("names.json")
name_ids = [i for i, v in dictionary.iteritems() if v in names]
dictionary.filter_tokens(bad_ids=name_ids)
locations = load_json("locations.json")
location_ids = [i for i, v in dictionary.iteritems() if v in locations]
dictionary.filter_tokens(bad_ids=location_ids)
locations[:10]
names[:15] # not looking good, but it seems like it won't do much harm either
"""
Explanation: Removing Names & Locations
There are a lot of citations and references in the PDFs, and they are extremely difficult to recognise given that they come in many variants.
We will demonstrate how to identify these names and locations in another tutorial (see TOC) using a Stanford NLP library, and eventually we can get a list of names and locations in names.json and locations.json respectively.
End of explanation
"""
corpus = [dictionary.doc2bow(l) for l in tokenised_list]
# Save it for future usage
from gensim.corpora.mmcorpus import MmCorpus
MmCorpus.serialize("aisnet_abstract_np_cleaned.mm", corpus)
# Also save the dictionary
dictionary.save("aisnet_abstract_np_cleaned.ldamodel.dictionary")
# To load the corpus:
from gensim.corpora.mmcorpus import MmCorpus
corpus = MmCorpus("aisnet_abstract_np_cleaned.mm")
# To load the dictionary:
from gensim.corpora import Dictionary
dictionary = Dictionary.load("aisnet_abstract_np_cleaned.ldamodel.dictionary")
"""
Explanation: Building Corpus in Gensim Format
Since we already have a dictionary, each distinct token can be expressed as an id in the dictionary. Then we can compress the corpus using this new representation and convert each document into a BoW (bag of words).
End of explanation
"""
# Train LDA model.
from gensim.models import LdaModel
# Set training parameters.
num_topics = 150
chunksize = 2000
passes = 1
iterations = 150
eval_every = None # Don't evaluate model perplexity, takes too much time.
# Make a index to word dictionary.
print("Dictionary test: " + dictionary[0]) # This is only to "load" the dictionary.
id2word = dictionary.id2token
model = LdaModel(corpus=corpus, id2word=id2word, chunksize=chunksize, \
alpha='auto', eta='auto', \
iterations=iterations, num_topics=num_topics, \
passes=passes, eval_every=eval_every)
# Save the LDA model
model.save("aisnet_abstract_150_cleaned.ldamodel")
"""
Explanation: Train the LDA Model
Now we have the dictionary and the corpus, we are ready to train our LDA model. We take the LDA model with 150 topics for example.
End of explanation
"""
from gensim.models import LdaModel
model = LdaModel.load("aisnet_abstract_150_cleaned.ldamodel")
import pyLDAvis.gensim
vis = pyLDAvis.gensim.prepare(model, corpus, dictionary)
pyLDAvis.display(vis)
"""
Explanation: Visualize the LDA Model
There is a convenient library called pyLDAvis that allows us to visualize our trained LDA model.
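If you want to share the visualization outside of the notebook, pyLDAvis can also write it to a standalone HTML file (a short sketch; the output filename is arbitrary):
```python
import pyLDAvis
pyLDAvis.save_html(vis, "aisnet_abstract_150_cleaned_ldavis.html")
```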
End of explanation
"""
|
phoebe-project/phoebe2-docs
|
development/tutorials/dpdt.ipynb
|
gpl-3.0
|
#!pip install "phoebe>=2.4,<2.5"
"""
Explanation: Period Change (dpdt)
Setup
Let's first make sure we have the latest version of PHOEBE 2.4 installed (uncomment this line if running in an online notebook session such as colab).
End of explanation
"""
import phoebe
from phoebe import u # units
import numpy as np
import matplotlib.pyplot as plt
b = phoebe.default_binary()
"""
Explanation: As always, let's do imports and initialize a new Bundle.
End of explanation
"""
b.set_value('q', 0.8)
b.set_value('teff', component='secondary', value=5000)
"""
Explanation: In order to easily differentiate between the components in a light curve and in the orbits, we'll set the secondary temperature and the mass ratio.
End of explanation
"""
b.set_value('per0', 60)
b.set_value('dpdt', 0.005*u.d/u.d)
"""
Explanation: and set dpdt to an unrealistically large value so that we can easily see the effect over just a few orbits.
End of explanation
"""
for i in range(3):
b.add_dataset('lc', compute_phases=phoebe.linspace(i,i+1,101))
b.add_dataset('rv', compute_phases=phoebe.linspace(i,i+1,101))
b.add_dataset('orb', compute_phases=phoebe.linspace(i,i+1,101))
"""
Explanation: We'll add several light curve, RV, and orbit datasets, each covering successive cycles of the orbit so that we can differentiate between them later when plotting.
End of explanation
"""
print(b.get_parameter(qualifier='dpdt').description)
"""
Explanation: It is important to note that the dpdt parameter is the time-derivative of period (the anomalistic period). However, the anomalistic and sidereal periods are only different in the case of apsidal motion.
End of explanation
"""
print(b.filter('t0', context='system'))
print(b.get_parameter('t0', context='system').description)
"""
Explanation: Zero-point for orbital period
The orbital period itself, period, is defined at time t0 (in the system context). If the dataset times are far from t0@system, we begin to lose precision on the period parameter: a small change in the period, propagated through dpdt * (times - t0), can cause a large effect. It is important to try to define the system at a time t0@system that is near the dataset times (and near the other various t0s). By default, t0@system is set to 0 and we have set our dataset times to start at zero as well.
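For example, if the observations started around a (hypothetical) time of 2455000 instead of 0, you would want to re-anchor the system epoch accordingly (a sketch; the value here is made up):
```python
# hypothetical epoch near the start of the data
b.set_value('t0', context='system', value=2455000.0)
```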
End of explanation
"""
print(b.filter(qualifier='compute_times', kind='lc', context='dataset'))
"""
Explanation: Considerations in compute_phases and mask_phases
By default, the mapping between compute_times and compute_phases will account for dpdt. In this case, we set compute_phases to cover successive orbits... so therefore the resulting compute_times will adjust as necessary.
End of explanation
"""
print(b.filter(qualifier='phases_dpdt'))
print(b.get_parameter(qualifier='phases_dpdt', dataset='lc01').description)
"""
Explanation: For the case of this tutorial, we would rather the compute_times be even cycles based on period alone, so that we can color by cycles of period and easily visualize the effect of dpdt. We could have set compute_times directly instead, but then we would need to keep the period fixed and know it in advance. Alternatively, we can set phases_dpdt = 'none' to tell this mapping to ignore dpdt.
End of explanation
"""
b.set_value_all('phases_dpdt', 'none')
"""
Explanation: As noted in the description, the phases_dpdt parameter will also affect phase-masking.
End of explanation
"""
print(b.filter(qualifier='compute_times', kind='lc', context='dataset'))
"""
Explanation: Now we see that our resulting compute_times are direct multiples of the period (at time=t0@system).
End of explanation
"""
b.run_compute(ltte=False)
_ = b.plot(kind='lc', x='times', legend=True, show=True)
"""
Explanation: Contribution to Eclipse Timings in Light Curves
Now we'll run the forward model, but with light travel time effects disabled, just to avoid any confusion with small contributions from the finite speed of light.
End of explanation
"""
_ = b.plot(kind='lc', x='phases', legend=True, show=True)
"""
Explanation: By default, the phasing in plotting accounts for dpdt.
End of explanation
"""
_ = b.plot(kind='lc', x='phases', dpdt=0.0, legend=True, show=True)
"""
Explanation: To override this behavior, we can pass dpdt=0.0 so that we can see the eclipses spread across the phase-space. dpdt is passed directly to b.to_phase (see also: b.to_time and b.get_ephemeris)
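For example (a small sketch; the time value is arbitrary), you can compare the phase of a single time with and without the dpdt contribution:
```python
t = 2.5
print(b.to_phase(t), b.to_phase(t, dpdt=0.0))
```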
End of explanation
"""
b.set_value('dpdt', 0.1*u.d/u.d)
b.run_compute(ltte=False)
_ = b.plot(kind='orb',
x='us', y='ws',
time=b.get_value('t0_supconj@component')+1*np.arange(0,3),
linestyle={'primary': 'solid', 'secondary': 'dotted'},
color={'orb01': 'blue', 'orb02': 'orange', 'orb03': 'green'},
#color='dataset', # TODO: we should support this to say color BY dataset
legend=True,
show=True)
"""
Explanation: Contribution to Orbits and Mass-Conservation
As the orbital period is instantaneously changing, the instantaneous semi-major axis of the orbit is also adjusted in order to conserve the total mass in the system (under Kepler's third law). This results in an automatic in- or out-spiral of the system whenever dpdt != 0.0. Note that, like the period, sma is defined at t0@system.
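The scaling itself follows from Kepler's third law at fixed total mass: $a^3 \propto P^2$, so $a(t) = a_0 \left(P(t)/P_0\right)^{2/3}$ with $P(t) = P_0 + \dot{P}\,(t-t_0)$. A small sketch of that arithmetic (plain Python with illustrative values, not PHOEBE's internal code):
```python
P0 = 1.0      # orbital period at t0 [d]
a0 = 5.3      # semi-major axis at t0 [solRad], illustrative value
dpdt = 0.005  # [d/d]
for t in [0.0, 1.0, 2.0, 3.0]:
    P_t = P0 + dpdt * t                 # instantaneous anomalistic period
    a_t = a0 * (P_t / P0)**(2.0 / 3.0)  # sma implied by mass conservation
    print(f"t={t:.0f} d: P={P_t:.3f} d, a={a_t:.4f} solRad")
```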
Just for visualization purposes, let's rerun our forward model, but this time with an even more exaggerated value for dpdt
End of explanation
"""
_ = b.plot(kind='orb',
time=b.get_value('t0_supconj@component')+1*np.arange(0,3),
linestyle={'primary': 'solid', 'secondary': 'dotted'},
color={'orb01': 'blue', 'orb02': 'orange', 'orb03': 'green'},
x='times', y='us',
show=True)
"""
Explanation: By plotting us vs times, we can see the position of the stars at integer periods (when we'd expect eclipses if it weren't for dpdt) as well as the times of the resulting eclipses (when the two stars cross at u=0, ignoring ltte, etc). Here we clearly see the increasing orbit size as a function of time.
End of explanation
"""
_ = b.plot(kind='rv', x='times',
linestyle={'primary': 'solid', 'secondary': 'dotted'},
color={'rv01': 'blue', 'rv02': 'orange', 'rv03': 'green'},
show=True)
"""
Explanation: Contributions to RVs
Due to the changing size of the orbit required by mass conservation (which increases the RV amplitude for a positive dpdt), as well as the changing orbital period (which decreases the RV amplitude for a positive dpdt), the RVs will also show a change in amplitude as a function of time (in addition to the phase effects seen for the light curve above).
End of explanation
"""
|
ES-DOC/esdoc-jupyterhub
|
notebooks/inpe/cmip6/models/sandbox-3/ocean.ipynb
|
gpl-3.0
|
# DO NOT EDIT !
from pyesdoc.ipython.model_topic import NotebookOutput
# DO NOT EDIT !
DOC = NotebookOutput('cmip6', 'inpe', 'sandbox-3', 'ocean')
"""
Explanation: ES-DOC CMIP6 Model Properties - Ocean
MIP Era: CMIP6
Institute: INPE
Source ID: SANDBOX-3
Topic: Ocean
Sub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing.
Properties: 133 (101 required)
Model descriptions: Model description details
Initialized From: --
Notebook Help: Goto notebook help page
Notebook Initialised: 2018-02-15 16:54:07
Document Setup
IMPORTANT: to be executed each time you run the notebook
End of explanation
"""
# Set as follows: DOC.set_author("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Authors
Set document authors
End of explanation
"""
# Set as follows: DOC.set_contributor("name", "email")
# TODO - please enter value(s)
"""
Explanation: Document Contributors
Specify document contributors
End of explanation
"""
# Set publication status:
# 0=do not publish, 1=publish.
DOC.set_publication_status(0)
"""
Explanation: Document Publication
Specify document publication status
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: Document Table of Contents
1. Key Properties
2. Key Properties --> Seawater Properties
3. Key Properties --> Bathymetry
4. Key Properties --> Nonoceanic Waters
5. Key Properties --> Software Properties
6. Key Properties --> Resolution
7. Key Properties --> Tuning Applied
8. Key Properties --> Conservation
9. Grid
10. Grid --> Discretisation --> Vertical
11. Grid --> Discretisation --> Horizontal
12. Timestepping Framework
13. Timestepping Framework --> Tracers
14. Timestepping Framework --> Baroclinic Dynamics
15. Timestepping Framework --> Barotropic
16. Timestepping Framework --> Vertical Physics
17. Advection
18. Advection --> Momentum
19. Advection --> Lateral Tracers
20. Advection --> Vertical Tracers
21. Lateral Physics
22. Lateral Physics --> Momentum --> Operator
23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
24. Lateral Physics --> Tracers
25. Lateral Physics --> Tracers --> Operator
26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff
27. Lateral Physics --> Tracers --> Eddy Induced Velocity
28. Vertical Physics
29. Vertical Physics --> Boundary Layer Mixing --> Details
30. Vertical Physics --> Boundary Layer Mixing --> Tracers
31. Vertical Physics --> Boundary Layer Mixing --> Momentum
32. Vertical Physics --> Interior Mixing --> Details
33. Vertical Physics --> Interior Mixing --> Tracers
34. Vertical Physics --> Interior Mixing --> Momentum
35. Uplow Boundaries --> Free Surface
36. Uplow Boundaries --> Bottom Boundary Layer
37. Boundary Forcing
38. Boundary Forcing --> Momentum --> Bottom Friction
39. Boundary Forcing --> Momentum --> Lateral Friction
40. Boundary Forcing --> Tracers --> Sunlight Penetration
41. Boundary Forcing --> Tracers --> Fresh Water Forcing
1. Key Properties
Ocean key properties
1.1. Model Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 1.2. Model Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean model code (NEMO 3.6, MOM 5.0,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.model_family')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "OGCM"
# "slab ocean"
# "mixed layer ocean"
# "Other: [Please specify]"
# TODO - please enter value(s)
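# Hypothetical example using one of the valid choices listed above (illustrative only):
# DOC.set_value("OGCM")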
"""
Explanation: 1.3. Model Family
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of ocean model.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.basic_approximations')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Primitive equations"
# "Non-hydrostatic"
# "Boussinesq"
# "Other: [Please specify]"
# TODO - please enter value(s)
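# For a 1.N property the setter is presumably called once per selected value; a
# purely illustrative combination of the choices above might look like:
# DOC.set_value("Primitive equations")
# DOC.set_value("Boussinesq")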
"""
Explanation: 1.4. Basic Approximations
Is Required: TRUE Type: ENUM Cardinality: 1.N
Basic approximations made in the ocean.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.prognostic_variables')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# "Salinity"
# "U-velocity"
# "V-velocity"
# "W-velocity"
# "SSH"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 1.5. Prognostic Variables
Is Required: TRUE Type: ENUM Cardinality: 1.N
List of prognostic variables in the ocean component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Wright, 1997"
# "Mc Dougall et al."
# "Jackett et al. 2006"
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2. Key Properties --> Seawater Properties
Physical properties of seawater in ocean
2.1. Eos Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Potential temperature"
# "Conservative temperature"
# TODO - please enter value(s)
"""
Explanation: 2.2. Eos Functional Temp
Is Required: TRUE Type: ENUM Cardinality: 1.1
Temperature used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Practical salinity Sp"
# "Absolute salinity Sa"
# TODO - please enter value(s)
"""
Explanation: 2.3. Eos Functional Salt
Is Required: TRUE Type: ENUM Cardinality: 1.1
Salinity used in EOS for sea water
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Pressure (dbars)"
# "Depth (meters)"
# TODO - please enter value(s)
"""
Explanation: 2.4. Eos Functional Depth
Is Required: TRUE Type: ENUM Cardinality: 1.1
Depth or pressure used in EOS for sea water ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "TEOS 2010"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 2.5. Ocean Freezing Point
Is Required: TRUE Type: ENUM Cardinality: 1.1
Equation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
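# FLOAT properties take an unquoted number. The figure below is a typical seawater
# specific heat (close to 4000 J/(kg K)) shown only as an illustration, not the
# value used by this model:
# DOC.set_value(3992.0)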
"""
Explanation: 2.6. Ocean Specific Heat
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Specific heat in ocean (cpocean) in J/(kg K)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 2.7. Ocean Reference Density
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Boussinesq reference density (rhozero) in kg / m3
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Present day"
# "21000 years BP"
# "6000 years BP"
# "LGM"
# "Pliocene"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 3. Key Properties --> Bathymetry
Properties of bathymetry in ocean
3.1. Reference Dates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Reference date of bathymetry
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
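# BOOLEAN properties take a Python literal rather than a quoted string (illustrative only):
# DOC.set_value(True)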
"""
Explanation: 3.2. Type
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the bathymetry fixed in time in the ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.3. Ocean Smoothing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any smoothing or hand editing of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.bathymetry.source')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 3.4. Source
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe source of bathymetry in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4. Key Properties --> Nonoceanic Waters
Non oceanic waters treatment in ocean
4.1. Isolated Seas
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how treatment of isolated seas is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 4.2. River Mouth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe if/how river mouth mixing or estuaries specific treatment is performed
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.repository')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5. Key Properties --> Software Properties
Software properties of ocean code
5.1. Repository
Is Required: FALSE Type: STRING Cardinality: 0.1
Location of code for this component.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_version')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 5.2. Code Version
Is Required: FALSE Type: STRING Cardinality: 0.1
Code version identifier.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
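# Hypothetical example for a 0.N string property (presumably one call per value);
# the languages below are illustrative, not a statement about the actual code base:
# DOC.set_value("Fortran 90")
# DOC.set_value("C")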
"""
Explanation: 5.3. Code Languages
Is Required: FALSE Type: STRING Cardinality: 0.N
Code language(s).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6. Key Properties --> Resolution
Resolution in the ocean grid
6.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
This is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.2. Canonical Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Expression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 6.3. Range Horizontal Resolution
Is Required: TRUE Type: STRING Cardinality: 1.1
Range of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.4. Number Of Horizontal Gridpoints
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Total number of horizontal (XY) points (or degrees of freedom) on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.5. Number Of Vertical Levels
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Number of vertical levels resolved on computational grid.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 6.6. Is Adaptive Grid
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Default is False. Set true if grid resolution changes during execution.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 6.7. Thickness Level 1
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Thickness of first surface ocean level (in meters)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7. Key Properties --> Tuning Applied
Tuning methodology for ocean component
7.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
General overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.2. Global Mean Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List set of metrics of the global mean state used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.3. Regional Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 7.4. Trend Metrics Used
Is Required: FALSE Type: STRING Cardinality: 0.N
List observed trend metrics used in tuning model/component
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.description')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8. Key Properties --> Conservation
Conservation in the ocean component
8.1. Description
Is Required: TRUE Type: STRING Cardinality: 1.1
Brief description of conservation methodology
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.scheme')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Energy"
# "Enstrophy"
# "Salt"
# "Volume of ocean"
# "Momentum"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 8.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.N
Properties conserved in the ocean by the numerical schemes
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.3. Consistency Properties
Is Required: FALSE Type: STRING Cardinality: 0.1
Any additional consistency properties (energy conversion, pressure gradient discretisation, ...)?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 8.4. Corrected Conserved Prognostic Variables
Is Required: FALSE Type: STRING Cardinality: 0.1
Set of variables which are conserved by more than the numerical scheme alone.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 8.5. Was Flux Correction Used
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Does conservation involve flux correction ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 9. Grid
Ocean grid
9.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of grid in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Z-coordinate"
# "Z*-coordinate"
# "S-coordinate"
# "Isopycnic - sigma 0"
# "Isopycnic - sigma 2"
# "Isopycnic - sigma 4"
# "Isopycnic - other"
# "Hybrid / Z+S"
# "Hybrid / Z+isopycnic"
# "Hybrid / other"
# "Pressure referenced (P)"
# "P*"
# "Z**"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 10. Grid --> Discretisation --> Vertical
Properties of vertical discretisation in ocean
10.1. Coordinates
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical coordinates in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 10.2. Partial Steps
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Using partial steps with Z or Z* vertical coordinate in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Lat-lon"
# "Rotated north pole"
# "Two north poles (ORCA-style)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11. Grid --> Discretisation --> Horizontal
Type of horizontal discretisation scheme in ocean
11.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal grid type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Arakawa B-grid"
# "Arakawa C-grid"
# "Arakawa E-grid"
# "N/a"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.2. Staggering
Is Required: FALSE Type: ENUM Cardinality: 0.1
Horizontal grid staggering type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Finite difference"
# "Finite volumes"
# "Finite elements"
# "Unstructured grid"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 11.3. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Horizontal discretisation scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 12. Timestepping Framework
Ocean Timestepping Framework
12.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Via coupling"
# "Specific treatment"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 12.2. Diurnal Cycle
Is Required: TRUE Type: ENUM Cardinality: 1.1
Diurnal cycle type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 13. Timestepping Framework --> Tracers
Properties of tracers time stepping in ocean
13.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Tracers time stepping scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
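# INTEGER example, purely illustrative (a one-hour tracer time step expressed in seconds):
# DOC.set_value(3600)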
"""
Explanation: 13.2. Time Step
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Tracers time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Preconditioned conjugate gradient"
# "Sub cyling"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14. Timestepping Framework --> Baroclinic Dynamics
Baroclinic dynamics in ocean
14.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics type
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Leap-frog + Asselin filter"
# "Leap-frog + Periodic Euler"
# "Predictor-corrector"
# "Runge-Kutta 2"
# "AM3-LF"
# "Forward-backward"
# "Forward operator"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 14.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Baroclinic dynamics scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 14.3. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Baroclinic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "split explicit"
# "implicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 15. Timestepping Framework --> Barotropic
Barotropic time stepping in ocean
15.1. Splitting
Is Required: TRUE Type: ENUM Cardinality: 1.1
Time splitting method
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 15.2. Time Step
Is Required: FALSE Type: INTEGER Cardinality: 0.1
Barotropic time step (in seconds)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 16. Timestepping Framework --> Vertical Physics
Vertical physics time stepping in ocean
16.1. Method
Is Required: TRUE Type: STRING Cardinality: 1.1
Details of vertical time stepping in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 17. Advection
Ocean advection
17.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of advection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Flux form"
# "Vector form"
# TODO - please enter value(s)
"""
Explanation: 18. Advection --> Momentum
Properties of lateral momentum advection scheme in ocean
18.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of lateral momentum advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.scheme_name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 18.2. Scheme Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Name of ocean momentum advection scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.momentum.ALE')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 18.3. ALE
Is Required: FALSE Type: BOOLEAN Cardinality: 0.1
Using ALE for vertical advection ? (if vertical coordinates are sigma)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19. Advection --> Lateral Tracers
Properties of lateral tracer advection scheme in ocean
19.1. Order
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Order of lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 19.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for lateral tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 19.3. Effective Order
Is Required: TRUE Type: FLOAT Cardinality: 1.1
Effective order of limited lateral tracer advection scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.4. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers')
# PROPERTY VALUE(S):
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Ideal age"
# "CFC 11"
# "CFC 12"
# "SF6"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 19.5. Passive Tracers
Is Required: FALSE Type: ENUM Cardinality: 0.N
Passive tracers advected
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 19.6. Passive Tracers Advection
Is Required: FALSE Type: STRING Cardinality: 0.1
Is advection of passive tracers different from that of active tracers ? If so, describe.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.name')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 20. Advection --> Vertical Tracers
Properties of vertical tracer advection scheme in ocean
20.1. Name
Is Required: TRUE Type: STRING Cardinality: 1.1
Descriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 20.2. Flux Limiter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Monotonic flux limiter for vertical tracer advection scheme in ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 21. Lateral Physics
Ocean lateral physics
21.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of lateral physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Eddy active"
# "Eddy admitting"
# TODO - please enter value(s)
"""
Explanation: 21.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of transient eddy representation in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22. Lateral Physics --> Momentum --> Operator
Properties of lateral physics operator for momentum in ocean
22.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 22.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics momentum scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff
Properties of eddy viscosity coeff in lateral physics momentum scheme in the ocean
23.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics momentum eddy viscosity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 23.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy viscosity coeff in lateral physics momentum scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy viscosity coeff in lateral physics momentum scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 23.4. Coeff Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe background eddy viscosity coeff in lateral physics momentum scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 23.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy viscosity coeff in lateral physics momentum scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24. Lateral Physics --> Tracers
Properties of lateral physics for tracers in ocean
24.1. Mesoscale Closure
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a mesoscale closure in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 24.2. Submesoscale Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Horizontal"
# "Isopycnal"
# "Isoneutral"
# "Geopotential"
# "Iso-level"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25. Lateral Physics --> Tracers --> Operator
Properties of lateral physics operator for tracers in ocean
25.1. Direction
Is Required: TRUE Type: ENUM Cardinality: 1.1
Direction of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Harmonic"
# "Bi-harmonic"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.2. Order
Is Required: TRUE Type: ENUM Cardinality: 1.1
Order of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Second order"
# "Higher order"
# "Flux limiter"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 25.3. Discretisation
Is Required: TRUE Type: ENUM Cardinality: 1.1
Discretisation of lateral physics tracers scheme in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant"
# "Space varying"
# "Time + space varying (Smagorinsky)"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 26. Lateral Physics --> Tracers --> Eddy Diffusivity Coeff
Properties of eddy diffusivity coeff in lateral physics tracers scheme in the ocean
26.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Lateral physics tracers eddy diffusivity coeff type in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.2. Constant Coefficient
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant, value of eddy diffusivity coeff in lateral physics tracers scheme (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 26.3. Variable Coefficient
Is Required: FALSE Type: STRING Cardinality: 0.1
If space-varying, describe variations of eddy diffusivity coeff in lateral physics tracers scheme
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 26.4. Coeff Background
Is Required: TRUE Type: INTEGER Cardinality: 1.1
Describe background eddy diffusivity coeff in lateral physics tracers scheme (give values in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 26.5. Coeff Backscatter
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there backscatter in eddy diffusivity coeff in lateral physics tracers scheme ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "GM"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 27. Lateral Physics --> Tracers --> Eddy Induced Velocity
Properties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean
27.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of EIV in lateral physics tracers in the ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 27.2. Constant Val
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If EIV scheme for tracers is constant, specify coefficient value (M2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.3. Flux Type
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV flux (advective or skew)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 27.4. Added Diffusivity
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of EIV added diffusivity (constant, flow dependent or none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 28. Vertical Physics
Ocean Vertical Physics
28.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of vertical physics in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 29. Vertical Physics --> Boundary Layer Mixing --> Details
Properties of vertical physics in ocean
29.1. Langmuir Cells Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there Langmuir cells mixing in upper ocean ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 30. Vertical Physics --> Boundary Layer Mixing --> Tracers
Properties of boundary layer (BL) mixing on tracers in the ocean
30.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 30.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 30.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure - TKE"
# "Turbulent closure - KPP"
# "Turbulent closure - Mellor-Yamada"
# "Turbulent closure - Bulk Mixed Layer"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 31. Vertical Physics --> Boundary Layer Mixing --> Momentum
Properties of boundary layer (BL) mixing on momentum in the ocean
31.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of boundary layer mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.2. Closure Order
Is Required: FALSE Type: FLOAT Cardinality: 0.1
If turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 31.3. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant BL mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 31.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background BL mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Non-penetrative convective adjustment"
# "Enhanced vertical diffusion"
# "Included in turbulence closure"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 32. Vertical Physics --> Interior Mixing --> Details
Properties of interior mixing in the ocean
32.1. Convection Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of vertical convection in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 32.2. Tide Induced Mixing
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how tide induced mixing is modelled (barotropic, baroclinic, none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.3. Double Diffusion
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there double diffusion ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 32.4. Shear Mixing
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is there interior shear mixing ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 33. Vertical Physics --> Interior Mixing --> Tracers
Properties of interior mixing on tracers in the ocean
33.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for tracers in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 33.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of tracers, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for tracers (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 33.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of tracers coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Constant value"
# "Turbulent closure / TKE"
# "Turbulent closure - Mellor-Yamada"
# "Richardson number dependent - PP"
# "Richardson number dependent - KT"
# "Imbeded as isopycnic vertical coordinate"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 34. Vertical Physics --> Interior Mixing --> Momentum
Properties of interior mixing on momentum in the ocean
34.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of interior mixing for momentum in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 34.2. Constant
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If constant interior mixing of momentum, specific coefficient (m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.3. Profile
Is Required: TRUE Type: STRING Cardinality: 1.1
Does the background interior mixing use a vertical profile for momentum (i.e. is NOT constant) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 34.4. Background
Is Required: TRUE Type: STRING Cardinality: 1.1
Background interior mixing of momentum coefficient (schema and value in m2/s - may be none)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 35. Uplow Boundaries --> Free Surface
Properties of free surface in ocean
35.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of free surface in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear implicit"
# "Linear filtered"
# "Linear semi-explicit"
# "Non-linear implicit"
# "Non-linear filtered"
# "Non-linear semi-explicit"
# "Fully explicit"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 35.2. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Free surface scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 35.3. Embeded Seaice
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the sea-ice embedded in the ocean model (instead of levitating) ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36. Uplow Boundaries --> Bottom Boundary Layer
Properties of bottom boundary layer in ocean
36.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Diffusive"
# "Acvective"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 36.2. Type Of Bbl
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of bottom boundary layer in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# TODO - please enter value(s)
"""
Explanation: 36.3. Lateral Mixing Coef
Is Required: FALSE Type: INTEGER Cardinality: 0.1
If bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 36.4. Sill Overflow
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe any specific treatment of sill overflows
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.overview')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37. Boundary Forcing
Ocean boundary forcing
37.1. Overview
Is Required: TRUE Type: STRING Cardinality: 1.1
Overview of boundary forcing in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.2. Surface Pressure
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.3. Momentum Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.4. Tracers Flux Correction
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.wave_effects')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.5. Wave Effects
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how wave effects are modelled at ocean surface.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.6. River Runoff Budget
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe how river runoff from land surface is routed to ocean and any global adjustment done.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 37.7. Geothermal Heating
Is Required: TRUE Type: STRING Cardinality: 1.1
Describe if/how geothermal heating is present at ocean bottom.
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Linear"
# "Non-linear"
# "Non-linear (drag function of speed of tides)"
# "Constant drag coefficient"
# "None"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 38. Boundary Forcing --> Momentum --> Bottom Friction
Properties of momentum bottom friction in ocean
38.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum bottom friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "None"
# "Free-slip"
# "No-slip"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 39. Boundary Forcing --> Momentum --> Lateral Friction
Properties of momentum lateral friction in ocean
39.1. Type
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of momentum lateral friction in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "1 extinction depth"
# "2 extinction depth"
# "3 extinction depth"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 40. Boundary Forcing --> Tracers --> Sunlight Penetration
Properties of sunlight penetration scheme in ocean
40.1. Scheme
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of sunlight penetration scheme in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour')
# PROPERTY VALUE:
# Set as follows: DOC.set_value(value)
# Valid Choices:
# True
# False
# TODO - please enter value(s)
"""
Explanation: 40.2. Ocean Colour
Is Required: TRUE Type: BOOLEAN Cardinality: 1.1
Is the ocean sunlight penetration scheme ocean colour dependent ?
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 40.3. Extinction Depth
Is Required: FALSE Type: STRING Cardinality: 0.1
Describe and list extinction depths for sunlight penetration scheme (if applicable).
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41. Boundary Forcing --> Tracers --> Fresh Water Forcing
Properties of surface fresh water forcing in ocean
41.1. From Atmopshere
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from atmos in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# Valid Choices:
# "Freshwater flux"
# "Virtual salt flux"
# "Real salt flux"
# "Other: [Please specify]"
# TODO - please enter value(s)
"""
Explanation: 41.2. From Sea Ice
Is Required: TRUE Type: ENUM Cardinality: 1.1
Type of surface fresh water forcing from sea-ice in ocean
End of explanation
"""
# PROPERTY ID - DO NOT EDIT !
DOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring')
# PROPERTY VALUE:
# Set as follows: DOC.set_value("value")
# TODO - please enter value(s)
"""
Explanation: 41.3. Forced Mode Restoring
Is Required: TRUE Type: STRING Cardinality: 1.1
Type of surface salinity restoring in forced mode (OMIP)
End of explanation
"""
repo_name: GoogleCloudPlatform/ai-platform-samples
path: notebooks/templates/ai_platform_notebooks_template_hybrid.ipynb
license: apache-2.0
%pip install -U missing_or_updating_package --user
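# For example, if this notebook happened to use TensorFlow and XGBoost (hypothetical
# choices for this template), the line above might instead read:
# %pip install -U tensorflow xgboost --user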
"""
Explanation: <table align="left">
<td>
<a href="https://colab.research.google.com/github/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/templates/ai_platform_notebooks_template_hybrid.ipynb"">
<img src="https://cloud.google.com/ml-engine/images/colab-logo-32px.png" alt="Colab logo"> Run in Colab
</a>
</td>
<td>
<a href="https://github.com/GoogleCloudPlatform/ai-platform-samples/blob/main/notebooks/templates/ai_platform_notebooks_template_hybrid.ipynb">
<img src="https://cloud.google.com/ml-engine/images/github-logo-32px.png" alt="GitHub logo">
View on GitHub
</a>
</td>
</table>
Overview
{Include a paragraph or two explaining what this example demonstrates, who should be interested in it, and what you need to know before you get started.}
Dataset
{Include a paragraph with Dataset information and where to obtain it}
Objective
In this notebook, you will learn how to {Complete the sentence explaining briefly what you will learn from the notebook, e.g.
ML training, HP tuning, serving}. The steps performed include:
* { add high level bullets for the steps of what you will perform in the notebook }
Costs
Example:
This tutorial uses billable components of Google Cloud Platform (GCP):
Cloud AI Platform
Cloud Storage
Learn about Cloud AI Platform
pricing and Cloud Storage
pricing, and use the Pricing
Calculator
to generate a cost estimate based on your projected usage.
Set up your local development environment
If you are using Colab or AI Platform Notebooks, your environment already meets
all the requirements to run this notebook. You can skip this step.
Otherwise, make sure your environment meets this notebook's requirements.
You need the following:
The Google Cloud SDK
Git
Python 3
virtualenv
Jupyter notebook running in a virtual environment with Python 3
The Google Cloud guide to Setting up a Python development
environment and the Jupyter
installation guide provide detailed instructions
for meeting these requirements. The following steps provide a condensed set of
instructions:
Install and initialize the Cloud SDK.
Install Python 3.
Install
virtualenv
and create a virtual environment that uses Python 3.
Activate that environment and run pip install jupyter in a shell to install
Jupyter.
Run jupyter notebook in a shell to launch Jupyter.
Open this notebook in the Jupyter Notebook Dashboard.
Installation
Install additional dependencies not installed in Notebook environment
(e.g. XGBoost, adanet, tf-hub)
Use the latest major GA version of the framework.
End of explanation
"""
# Automatically restart kernel after installs
import IPython
app = IPython.Application.instance()
app.kernel.do_shutdown(True)
"""
Explanation: Restart the Kernel
Once you've installed the {packages}, you need to restart the notebook kernel so it can find the packages.
End of explanation
"""
# Get your GCP project id from gcloud
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
PROJECT_ID=shell_output[0]
print("Project ID: ", PROJECT_ID)
"""
Explanation: Before you begin
GPU run-time
Make sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime --> Change runtime type
Follow before you begin in Guide
{ add link to any online before you begin tutorial on the product }
Set up your GCP project
The following steps are required, regardless of your notebook environment.
Select or create a GCP project. When you first create an account, you get a $300 free credit towards your compute/storage costs.
Make sure that billing is enabled for your project.
Enable the AI Platform APIs and Compute Engine APIs.
If you are running this notebook locally, you will need to install Google Cloud SDK.
Enter your project ID in the cell below. Then run the cell to make sure the
Cloud SDK uses the right project for all the commands in this notebook.
Note: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.
Project ID
If you don't know your project ID, you may be able to get your PROJECT_ID using gcloud.
End of explanation
"""
if PROJECT_ID == "" or PROJECT_ID is None:
PROJECT_ID = "[your-project-id]" #@param {type:"string"}
"""
Explanation: Otherwise, set your project id here.
End of explanation
"""
import sys, os
# If you are running this notebook in Colab, run this cell and follow the
# instructions to authenticate your GCP account. This provides access to your
# Cloud Storage bucket and lets you submit training jobs and prediction
# requests.
# If on AI Platform, then don't execute this code
if not os.path.exists('/opt/deeplearning/metadata/env_version'):
if 'google.colab' in sys.modules:
from google.colab import auth as google_auth
google_auth.authenticate_user()
# If you are running this notebook locally, replace the string below with the
# path to your service account key and run this cell to authenticate your GCP
# account.
else:
%env GOOGLE_APPLICATION_CREDENTIALS ''
"""
Explanation: Authenticate your GCP account
If you are using AI Platform Notebooks, your environment is already
authenticated. Skip this step.
If you are using Colab, run the cell below and follow the instructions
when prompted to authenticate your account via OAuth.
Otherwise, follow these steps:
In the GCP Console, go to the Create service account key
page.
From the Service account drop-down list, select New service account.
In the Service account name field, enter a name.
From the Role drop-down list, select
Machine Learning Engine > AI Platform Admin and
Storage > Storage Object Admin.
Click Create. A JSON file that contains your key downloads to your
local environment.
Enter the path to your service account key as the
GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.
End of explanation
"""
BUCKET_NAME = "[your-bucket-name]" #@param {type:"string"}
REGION = "us-central1" #@param {type:"string"}
"""
Explanation: Create a Cloud Storage bucket
The following steps are required, regardless of your notebook environment.
When you submit a training job using the Cloud SDK, you upload a Python package
containing your training code to a Cloud Storage bucket. AI Platform runs
the code from this package. In this tutorial, AI Platform also saves the
trained model that results from your job in the same bucket. You can then
create an AI Platform model version based on this output in order to serve
online predictions.
Set the name of your Cloud Storage bucket below. It must be unique across all
Cloud Storage buckets.
You may also change the REGION variable, which is used for operations
throughout the rest of this notebook. Make sure to choose a region where Cloud
AI Platform services are
available. You may
not use a Multi-Regional Storage bucket for training with AI Platform.
End of explanation
"""
! gsutil mb -l $REGION gs://$BUCKET_NAME
"""
Explanation: Only if your bucket doesn't already exist: Run the following cell to create your Cloud Storage bucket.
End of explanation
"""
! gsutil ls -al gs://$BUCKET_NAME
"""
Explanation: Finally, validate access to your Cloud Storage bucket by examining its contents:
End of explanation
"""
from __future__ import absolute_import
from __future__ import division
from __future__ import print_function
import tensorflow as tf
import numpy as np
import sys, os
"""
Explanation: Import libraries and define constants
{Put all your imports and installs up into a setup section.}
End of explanation
"""
# Build the model
model = tf.keras.Sequential([
    tf.keras.layers.Dense(10, activation='relu', input_shape=(5,)),
    tf.keras.layers.Dense(3)
])
# Run the model on a single batch of data, and inspect the output.
result = model(tf.constant(np.random.randn(10,5), dtype = tf.float32)).numpy()
print("min:", result.min())
print("max:", result.max())
print("mean:", result.mean())
print("shape:", result.shape)
# Compile the model for training
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss=tf.keras.losses.categorical_crossentropy)
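# A minimal sketch (not part of the original template): fit the compiled model on a small
# batch of synthetic data, purely to show that it trains end to end. A real notebook would
# substitute its actual dataset here.
x_demo = np.random.randn(32, 5).astype(np.float32)
y_demo = tf.keras.utils.to_categorical(np.random.randint(0, 3, size=32), num_classes=3)
model.fit(x_demo, y_demo, epochs=2, batch_size=8, verbose=0)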
"""
Explanation: Notes
The tips below are specific to notebooks for Tensorflow/Scikit-Learn/PyTorch/XGBoost code.
General
Include the collapsed license at the top (this uses Colab's "Form" mode to hide the cells).
Only include a single H1 title.
Include the button-bar immediately under the H1.
Include an overview section before any code.
Put all your installs and imports in a setup section.
Always include the three future imports.
Save the notebook with the Table of Contents open.
Write python3 compatible code.
Keep cells small (~max 20 lines).
Python Style guide
As Guido van Rossum said, “Code is read much more often than it is written.” Please make sure you follow
the Python style guide when writing Python code.
Writing readable code is critical, especially when working with notebooks: it helps other people read and understand your code, and following guidelines that others recognize makes your code easier to read.
Google Python Style guide
Code content
Use the highest level API that gets the job done (unless the goal is to demonstrate the low level API). For example, when using Tensorflow:
Use tf.keras.Sequential > Keras functional API > Keras model subclassing > ...
Use model.fit > model.train_on_batch > manual GradientTapes.
Use eager-style code.
Use tensorflow_datasets and tf.data where possible.
Text
Use an imperative style. "Run a batch of images through the model."
Use sentence case in titles/headings.
Use short titles/headings: "Download the data", "Build the Model", "Train the model".
Code Style
Notebooks are for people. Write code optimized for clarity.
Demonstrate small parts before combining them into something more complex. Like below:
End of explanation
"""
# Delete model version resource
! gcloud ai-platform versions delete $MODEL_VERSION --quiet --model $MODEL_NAME
# Delete model resource
! gcloud ai-platform models delete $MODEL_NAME --quiet
# Delete Cloud Storage objects that were created
! gsutil -m rm -r $JOB_DIR
# If training job is still running, cancel it
! gcloud ai-platform jobs cancel $JOB_NAME --quiet
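# Alternatively (not part of the original template), delete the whole project. This removes
# ALL resources in the project, so only do it for a dedicated tutorial project:
# ! gcloud projects delete $PROJECT_ID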
"""
Explanation: Keep examples quick. Use small datasets, or small slices of datasets. You don't need to train to convergence; train until it's obvious it's making progress.
For a large example, don't try to fit all the code in the notebook. Add Python files to the tensorflow/examples repository, and in the notebook run:
! pip install git+https://github.com/tensorflow/examples
Cleaning up
To clean up all GCP resources used in this project, you can delete the GCP
project you used for the tutorial.
{Include commands to delete individual resources below}
End of explanation
"""