| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
raschuetz/foundations-homework
|
Data_and_Databases_homework/03/homework_3_schuetz_graded.ipynb
|
mit
|
[
"Grade: 9 / 9 -- great job!\nHomework assignment #3\nThese problem sets focus on using the Beautiful Soup library to scrape web pages.\nProblem Set #1: Basic scraping\nI've made a web page for you to scrape. It's available here. The page concerns the catalog of a famous widget company. You'll be answering several questions about this web page. In the cell below, I've written some code so that you end up with a variable called html_str that contains the HTML source code of the page, and a variable document that stores a Beautiful Soup object.",
"from bs4 import BeautifulSoup\nfrom urllib.request import urlopen\nhtml_str = urlopen(\"http://static.decontextualize.com/widgets2016.html\").read()\ndocument = BeautifulSoup(html_str, \"html.parser\")",
"Now, in the cell below, use Beautiful Soup to write an expression that evaluates to the number of <h3> tags contained in widgets2016.html.",
"# TA-COMMENT: All in one line! Beautiful! \nprint(len(document.find_all('h3')))",
"Now, in the cell below, write an expression or series of statements that displays the telephone number beneath the \"Widget Catalog\" header.",
"print(document.find('a', {'class': 'tel'}).string)",
"In the cell below, use Beautiful Soup to write some code that prints the names of all the widgets on the page. After your code has executed, widget_names should evaluate to a list that looks like this (though not necessarily in this order):\nSkinner Widget\nWidget For Furtiveness\nWidget For Strawman\nJittery Widget\nSilver Widget\nDivided Widget\nManicurist Widget\nInfinite Widget\nYellow-Tipped Widget\nUnshakable Widget\nSelf-Knowledge Widget\nWidget For Cinema",
"widgets = document.find_all('td', {'class': 'wname'})\nfor widget in widgets:\n print(widget.string)",
"Problem set #2: Widget dictionaries\nFor this problem set, we'll continue to use the HTML page from the previous problem set. In the cell below, I've made an empty list and assigned it to a variable called widgets. Write code that populates this list with dictionaries, one dictionary per widget in the source file. The keys of each dictionary should be partno, wname, price, and quantity, and the value for each of the keys should be the value for the corresponding column for each row. After executing the cell, your list should look something like this:\n[{'partno': 'C1-9476',\n 'price': '$2.70',\n 'quantity': u'512',\n 'wname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': '$9.36',\n 'quantity': '967',\n 'wname': u'Widget For Furtiveness'},\n ...several items omitted...\n {'partno': '5B-941/F',\n 'price': '$13.26',\n 'quantity': '919',\n 'wname': 'Widget For Cinema'}]\nAnd this expression:\nwidgets[5]['partno']\n\n... should evaluate to:\nLH-74/O",
"widgets = []\n\nwidget_table = document.find_all('tr', {'class': 'winfo'})\nfor row in widget_table:\n partno = row.find('td', {'class': 'partno'}).string\n wname = row.find('td', {'class': 'wname'}).string\n price = row.find('td', {'class': 'price'}).string\n quantity = row.find('td', {'class': 'quantity'}).string\n widgets.append({'partno': partno, 'wname': wname, 'price': price, 'quantity': quantity})\n\nwidgets\n\nwidgets[5]['partno']",
"In the cell below, duplicate your code from the previous question. Modify the code to ensure that the values for price and quantity in each dictionary are floating-point numbers and integers, respectively. I.e., after executing the cell, your code should display something like this:\n[{'partno': 'C1-9476',\n 'price': 2.7,\n 'quantity': 512,\n 'widgetname': 'Skinner Widget'},\n {'partno': 'JDJ-32/V',\n 'price': 9.36,\n 'quantity': 967,\n 'widgetname': 'Widget For Furtiveness'},\n ... some items omitted ...\n {'partno': '5B-941/F',\n 'price': 13.26,\n 'quantity': 919,\n 'widgetname': 'Widget For Cinema'}]\n\n(Hint: Use the float() and int() functions. You may need to use string slices to convert the price field to a floating-point number.)",
"widgets = []\n\nwidget_table = document.find_all('tr', {'class': 'winfo'})\nfor row in widget_table:\n partno = row.find('td', {'class': 'partno'}).string\n wname = row.find('td', {'class': 'wname'}).string\n price = float(row.find('td', {'class': 'price'}).string[1:])\n quantity = int(row.find('td', {'class': 'quantity'}).string)\n widgets.append({'partno': partno, 'wname': wname, 'price': price, 'quantity': quantity})\n\nwidgets",
"Great! I hope you're having fun. In the cell below, write an expression or series of statements that uses the widgets list created in the cell above to calculate the total number of widgets that the factory has in its warehouse.\nExpected output: 7928",
"quantity_total = 0\n\nfor widget in widgets:\n quantity_total += widget['quantity'] # TA-COMMENT: Yassss, putting += to use! \n\nprint(quantity_total)",
"In the cell below, write some Python code that prints the names of widgets whose price is above $9.30.\nExpected output:\nWidget For Furtiveness\nJittery Widget\nSilver Widget\nInfinite Widget\nWidget For Cinema",
"for widget in widgets:\n if widget['price'] > 9.3:\n print(widget['wname'])",
"Problem set #3: Sibling rivalries\nIn the following problem set, you will yet again be working with the data in widgets2016.html. In order to accomplish the tasks in this problem set, you'll need to learn about Beautiful Soup's .find_next_sibling() method. Here's some information about that method, cribbed from the notes:\nOften, the tags we're looking for don't have a distinguishing characteristic, like a class attribute, that allows us to find them using .find() and .find_all(), and the tags also aren't in a parent-child relationship. This can be tricky! For example, take the following HTML snippet, (which I've assigned to a string called example_html):",
"example_html = \"\"\"\n<h2>Camembert</h2>\n<p>A soft cheese made in the Camembert region of France.</p>\n\n<h2>Cheddar</h2>\n<p>A yellow cheese made in the Cheddar region of... France, probably, idk whatevs.</p>\n\"\"\"",
"If our task was to create a dictionary that maps the name of the cheese to the description that follows in the <p> tag directly afterward, we'd be out of luck. Fortunately, Beautiful Soup has a .find_next_sibling() method, which allows us to search for the next tag that is a sibling of the tag you're calling it on (i.e., the two tags share a parent), that also matches particular criteria. So, for example, to accomplish the task outlined above:",
"example_doc = BeautifulSoup(example_html, \"html.parser\")\ncheese_dict = {}\nfor h2_tag in example_doc.find_all('h2'):\n cheese_name = h2_tag.string\n cheese_desc_tag = h2_tag.find_next_sibling('p')\n cheese_dict[cheese_name] = cheese_desc_tag.string\n\ncheese_dict",
"With that knowledge in mind, let's go back to our widgets. In the cell below, write code that uses Beautiful Soup, and in particular the .find_next_sibling() method, to print the part numbers of the widgets that are in the table just beneath the header \"Hallowed Widgets.\"\nExpected output:\nMZ-556/B\nQV-730\nT1-9731\n5B-941/F",
"h3_tags = document.find_all('h3')\nfor h3_tag in h3_tags:\n if h3_tag.string == 'Hallowed widgets':\n table = h3_tag.find_next_sibling('table')\n part_numbers = table.find_all('td', {'class': 'partno'})\n for part in part_numbers:\n print(part.string)",
"Okay, now, the final task. If you can accomplish this, you are truly an expert web scraper. I'll have little web scraper certificates made up and I'll give you one, if you manage to do this thing. And I know you can do it!\nIn the cell below, I've created a variable category_counts and assigned to it an empty dictionary. Write code to populate this dictionary so that its keys are \"categories\" of widgets (e.g., the contents of the <h3> tags on the page: \"Forensic Widgets\", \"Mood widgets\", \"Hallowed Widgets\") and the value for each key is the number of widgets that occur in that category. I.e., after your code has been executed, the dictionary category_counts should look like this:\n{'Forensic Widgets': 3,\n 'Hallowed widgets': 4,\n 'Mood widgets': 2,\n 'Wondrous widgets': 3}",
"category_counts = {}\n\n# TA-COMMENT: Beautiful! \nfor h3_tag in h3_tags:\n table = h3_tag.find_next_sibling('table')\n list_of_widgets = table.find_all('tr', {'class': 'winfo'})\n category_counts[h3_tag.string] = len(list_of_widgets)\n\ncategory_counts",
"Congratulations! You're done."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
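Aside on the notebook above: the Problem Set #1 prompt mentions a `widget_names` variable, but the graded cell only prints the names. A minimal sketch, reusing the same page and Beautiful Soup calls shown in the notebook, that also stores them in a list:

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup

# Same page as in the notebook above.
html_str = urlopen("http://static.decontextualize.com/widgets2016.html").read()
document = BeautifulSoup(html_str, "html.parser")

# Store the widget names in a list (the prompt's widget_names) and print them.
widget_names = [td.string for td in document.find_all('td', {'class': 'wname'})]
print(widget_names)
```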
kit-cel/lecture-examples
|
mloc/ch4_Deep_Learning/pytorch/Deep_NN_Detection_QAM.ipynb
|
gpl-2.0
|
[
"QAM Demodulation in Nonlinear Channels with Deep Neural Networks\nThis code is provided as supplementary material of the lecture Machine Learning and Optimization in Communications (MLOC).<br>\nThis code illustrates:\n* demodulation of QAM symbols in highly nonlinear channels using an artificial neural network\n* utilization of softmax layer\n* variable batch size to improve learning towards lower error rates",
"import torch\nimport torch.nn as nn\nimport torch.optim as optim\nimport numpy as np\nimport matplotlib\nimport matplotlib.pyplot as plt\nfrom ipywidgets import interactive\nimport ipywidgets as widgets\n\ndevice = 'cuda' if torch.cuda.is_available() else 'cpu'\nprint(\"We are using the following device for learning:\",device)",
"Specify the parameters of the transmission as the fiber length $L$ (in km), the fiber nonlinearity coefficienty $\\gamma$ (given in 1/W/km) and the total noise power $P_n$ (given in dBM. The noise is due to amplified spontaneous emission in amplifiers along the link). We assume a model of a dispersion-less fiber affected by nonlinearity. The model, which is described for instance in [1] is given by an iterative application of the equation\n$$\nx_{k+1} = x_k\\exp\\left(\\jmath\\frac{L}{K}\\gamma|x_k|^2\\right) + n_{k+1},\\qquad 0 \\leq k < K\n$$\nwhere $x_0$ is the channel input (the modulated, complex symbols) and $x_K$ is the channel output. $K$ denotes the number of steps taken to simulate the channel Usually $K=50$ gives a good approximation.\n[1] S. Li, C. Häger, N. Garcia, and H. Wymeersch, \"Achievable Information Rates for Nonlinear Fiber Communication via End-to-end Autoencoder Learning,\" Proc. ECOC, Rome, Sep. 2018",
"# Length of transmission (in km)\nL = 5000\n\n# fiber nonlinearity coefficient\ngamma = 1.27\n\nPn = -21.3 # noise power (in dBm)\n\nKstep = 50 # number of steps used in the channel model\n\n# noise variance per step \nsigma_n = np.sqrt((10**((Pn-30)/10)) / Kstep / 2) \n\nconstellations = {'16-QAM': np.array([-3,-3,-3,-3,-1,-1,-1,-1,1,1,1,1,3,3,3,3]) + 1j*np.array([-3,-1,1,3,-3,-1,1,3,-3,-1,1,3,-3,-1,1,3]), \\\n '16-APSK': np.array([1,-1,0,0,1.4,1.4,-1.4,-1.4,3,-3,0,0,5,-5,0,0]) + 1j*np.array([0,0,1,-1,1.4,-1.4,1.4,-1.4,0,0,4,-4,0,0,6,-6]), \\\n '4-test' : np.array([-1,2,0,4]) + 1j*np.array([0,0,3,0])}\n\n# permute constellations so that it is visually more appealing with the chosen colormap\nfor cname in constellations.keys():\n constellations[cname] = constellations[cname][np.random.permutation(len(constellations[cname]))]\n\n\ndef simulate_channel(x, Pin, constellation): \n # modulate bpsk\n input_power_linear = 10**((Pin-30)/10)\n norm_factor = 1 / np.sqrt(np.mean(np.abs(constellation)**2)/input_power_linear)\n modulated = constellation[x] * norm_factor\n \n\n temp = np.array(modulated, copy=True)\n for i in range(Kstep):\n power = np.absolute(temp)**2\n rotcoff = (L / Kstep) * gamma * power\n \n temp = temp * np.exp(1j*rotcoff) + sigma_n*(np.random.randn(len(x)) + 1j*np.random.randn(len(x)))\n return temp",
"We consider BPSK transmission over this channel.\nShow constellation as a function of the fiber input power. When the input power is small, the effect of the nonlinearity is small (as $\\jmath\\frac{L}{K}\\gamma|x_k|^2 \\approx 0$) and the transmission is dominated by the additive noise. If the input power becomes larger, the effect of the noise (the noise power is constant) becomes less pronounced, but the constellation rotates due to the larger input power and hence effect of the nonlinearity.",
"length_plot = 4000\n\ndef plot_constellation(Pin, constellation_name):\n constellation = constellations[constellation_name]\n \n t = np.random.randint(len(constellation),size=length_plot)\n r = simulate_channel(t, Pin, constellation)\n\n plt.figure(figsize=(12,6))\n font = {'size' : 14}\n plt.rc('font', **font)\n plt.rc('text', usetex=matplotlib.checkdep_usetex(True))\n plt.subplot(1,2,1)\n r_tx = constellation[range(len(constellation))]\n plt.scatter(np.real(r_tx), np.imag(r_tx), c=range(len(constellation)), marker='o', s=200, cmap='tab20')\n plt.xticks(())\n plt.yticks(())\n plt.axis('equal')\n plt.xlabel(r'$\\Re\\{r\\}$',fontsize=14)\n plt.ylabel(r'$\\Im\\{r\\}$',fontsize=14)\n plt.title('Transmitted constellation')\n \n plt.subplot(1,2,2)\n plt.scatter(np.real(r), np.imag(r), c=t, cmap='tab20',s=4)\n plt.xlabel(r'$\\Re\\{r\\}$',fontsize=14)\n plt.ylabel(r'$\\Im\\{r\\}$',fontsize=14)\n plt.axis('equal')\n plt.title('Received constellation ($L = %d$\\,km, $P_{in} = %1.2f$\\,dBm)' % (L, Pin)) \n #plt.savefig('%s_received_zd_%1.2f.pdf' % (constellation_name.replace('-','_'),Pin),bbox_inches='tight')\n \ninteractive_update = interactive(plot_constellation, \\\n Pin = widgets.FloatSlider(min=-10.0,max=10.0,step=0.1,value=1, continuous_update=False, description='Input Power Pin (dBm)', style={'description_width': 'initial'}, layout=widgets.Layout(width='50%')), \\\n constellation_name = widgets.RadioButtons(options=['16-QAM','16-APSK','4-test'], value='16-QAM',continuous_update=False,description='Constellation'))\n\n\noutput = interactive_update.children[-1]\noutput.layout.height = '400px'\ninteractive_update",
"Helper function to compute Bit Error Rate (BER)",
"# helper function to compute the symbol error rate\ndef SER(predictions, labels):\n return (np.sum(np.argmax(predictions, 1) != labels) / predictions.shape[0])",
"Here, we define the parameters of the neural network and training, generate the validation set and a helping set to show the decision regions",
"# set input power\nPin = -5\n\n#define constellation\nconstellation = constellations['16-APSK']\n\ninput_power_linear = 10**((Pin-30)/10)\nnorm_factor = 1 / np.sqrt(np.mean(np.abs(constellation)**2)/input_power_linear)\nsigma = np.sqrt((10**((Pn-30)/10)) / Kstep / 2)\n\n\nconstellation_mat = np.stack([constellation.real * norm_factor, constellation.imag * norm_factor],axis=1)\n\n\n# validation set. Training examples are generated on the fly\nN_valid = 100000\n\n# number of neurons in hidden layers\nhidden_neurons_1 = 50\nhidden_neurons_2 = 50\nhidden_neurons_3 = 50\nhidden_neurons_4 = 50\n\n\n\ny_valid = np.random.randint(len(constellation),size=N_valid)\nr = simulate_channel(y_valid, Pin, constellation)\n\n# find extension of data (for normalization and plotting)\next_x = max(abs(np.real(r)))\next_y = max(abs(np.imag(r)))\next_max = max(ext_x,ext_y)*1.2\n\n# scale data to be between 0 and 1\nX_valid = torch.from_numpy(np.column_stack((np.real(r), np.imag(r))) / ext_max).float().to(device)\n\n\n# meshgrid for plotting\nmgx,mgy = np.meshgrid(np.linspace(-ext_max,ext_max,200), np.linspace(-ext_max,ext_max,200))\nmeshgrid = torch.from_numpy(np.column_stack((np.reshape(mgx,(-1,1)),np.reshape(mgy,(-1,1)))) / ext_max).float().to(device)",
"This is the main neural network with 4 hidden layers, each with ELU activation function. Note that the final layer does not use a softmax function, as this function is already included in the CrossEntropyLoss.",
"class Receiver_Network(nn.Module):\n def __init__(self, hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4):\n super(Receiver_Network, self).__init__()\n # Linear function, 2 input neurons (real and imaginary part) \n self.fc1 = nn.Linear(2, hidden_neurons_1) \n\n # Non-linearity\n self.activation_function = nn.ELU()\n \n # Linear function (hidden layer)\n self.fc2 = nn.Linear(hidden_neurons_1, hidden_neurons_2) \n \n # Another hidden layer\n self.fc3 = nn.Linear(hidden_neurons_2, hidden_neurons_3)\n \n # Another hidden layer\n self.fc4 = nn.Linear(hidden_neurons_3, hidden_neurons_4)\n \n # Output layer\n self.fc5 = nn.Linear(hidden_neurons_4, len(constellation))\n \n\n def forward(self, x):\n # Linear function, first layer\n out = self.fc1(x)\n\n # Non-linearity, first layer\n out = self.activation_function(out)\n \n # Linear function, second layer\n out = self.fc2(out)\n \n # Non-linearity, second layer\n out = self.activation_function(out)\n \n # Linear function, third layer\n out = self.fc3(out)\n\n # Non-linearity, third layer\n out = self.activation_function(out)\n \n # Linear function, fourth layer\n out = self.fc4(out)\n \n # Non-linearity, fourth layer\n out = self.activation_function(out)\n\n # Linear function, output layer\n out = self.fc5(out)\n \n # Do *not* apply softmax, as it is already included in the CrossEntropyLoss\n \n return out",
"This is the main learning function, generate the data directly on the GPU (if available) and the run the neural network. We use a variable batch size that varies during training. In the first iterations, we start with a small batch size to rapidly get to a working solution. The closer we come towards the end of the training we increase the batch size. If keeping the batch size small, it may happen that there are no misclassifications in a small batch and there is no incentive of the training to improve. A larger batch size will most likely contain errors in the batch and hence there will be incentive to keep on training and improving. \nHere, the data is generated on the fly inside the graph, by using PyTorchs random number generation. As PyTorch does not natively support complex numbers (at least in early versions), we decided to replace the complex number operations in the channel by an equivalent simple rotation matrix and treating real and imaginary parts separately.\nWe employ the Adam optimization algorithm. Here, the epochs are not defined in the classical way, as we do not have a training set per se. We generate new data on the fly and never reuse data.",
"model = Receiver_Network(hidden_neurons_1, hidden_neurons_2, hidden_neurons_3, hidden_neurons_4)\nmodel.to(device)\n\n# Cross Entropy loss accepting logits at input\nloss_fn = nn.CrossEntropyLoss()\n\n# Adam Optimizer\noptimizer = optim.Adam(model.parameters()) \n\n# Softmax function\nsoftmax = nn.Softmax(dim=1)\n\nnum_epochs = 100\nbatches_per_epoch = 500\n\n# increase batch size while learning from 100 up to 10000\nbatch_size_per_epoch = np.linspace(100,10000,num=num_epochs)\n\nvalidation_SERs = np.zeros(num_epochs)\ndecision_region_evolution = []\n\n\nconstellation_tensor = torch.from_numpy(constellation_mat).float().to(device)\n\nfor epoch in range(num_epochs):\n batch_labels = torch.empty(int(batch_size_per_epoch[epoch]), device=device)\n noise = torch.empty((int(batch_size_per_epoch[epoch]),2), device=device, requires_grad=False) \n\n for step in range(batches_per_epoch):\n # sample new mini-batch directory on the GPU (if available) \n batch_labels.random_(len(constellation))\n\n temp_onehot = torch.zeros(int(batch_size_per_epoch[epoch]), len(constellation), device=device)\n temp_onehot[range(temp_onehot.shape[0]), batch_labels.long()]=1\n \n # channel simulation directly on the GPU\n qam = (temp_onehot @ constellation_tensor).to(device)\n \n for i in range(Kstep):\n power = torch.norm(qam, dim=1) ** 2\n rotcoff = (L / Kstep) * gamma * power\n noise.normal_(mean=0, std=sigma) # sample noise\n \n # phase rotation due to nonlinearity\n temp1 = qam[:,0] * torch.cos(rotcoff) - qam[:,1] * torch.sin(rotcoff) \n temp2 = qam[:,0] * torch.sin(rotcoff) + qam[:,1] * torch.cos(rotcoff) \n qam = torch.stack([temp1, temp2], dim=1) + noise\n\n qam = qam / ext_max\n outputs = model(qam)\n\n # compute loss\n loss = loss_fn(outputs.squeeze(), batch_labels.long())\n \n # compute gradients\n loss.backward()\n \n optimizer.step()\n # reset gradients\n optimizer.zero_grad()\n \n # compute validation SER\n out_valid = softmax(model(X_valid))\n validation_SERs[epoch] = SER(out_valid.detach().cpu().numpy(), y_valid)\n \n print('Validation SER after epoch %d: %f (loss %1.8f)' % (epoch, validation_SERs[epoch], loss.detach().cpu().numpy())) \n \n # store decision region for generating the animation \n mesh_prediction = softmax(model(meshgrid)) \n decision_region_evolution.append(0.195*mesh_prediction.detach().cpu().numpy() + 0.4)\n",
"Plt decision region and scatter plot of the validation set. Note that the validation set is only used for computing BERs and plotting, there is no feedback towards the training!",
"cmap = matplotlib.cm.tab20\nbase = plt.cm.get_cmap(cmap)\ncolor_list = base.colors\nnew_color_list = [[t/2 + 0.5 for t in color_list[k]] for k in range(len(color_list))]\n\n# find minimum SER from validation set\nmin_SER_iter = np.argmin(validation_SERs)\n\nplt.figure(figsize=(16,8))\nplt.subplot(121)\n#plt.contourf(mgx,mgy,decision_region_evolution[-1].reshape(mgy.shape).T,cmap='coolwarm',vmin=0.3,vmax=0.7)\nplt.scatter(X_valid.cpu()[:,0]*ext_max, X_valid.cpu()[:,1]*ext_max, c=y_valid, cmap='tab20',s=4)\nplt.axis('scaled')\nplt.xlabel(r'$\\Re\\{r\\}$',fontsize=16)\nplt.ylabel(r'$\\Im\\{r\\}$',fontsize=16)\nplt.xlim((-ext_max,ext_max))\nplt.ylim((-ext_max,ext_max))\nplt.title('Received constellation',fontsize=16)\n\n#light_tab20 = cmap_map(lambda x: x/2 + 0.5, matplotlib.cm.tab20)\nplt.subplot(122)\ndecision_scatter = np.argmax(decision_region_evolution[min_SER_iter], 1)\nplt.scatter(meshgrid.cpu()[:,0] * ext_max, meshgrid.cpu()[:,1] * ext_max, c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4)\nplt.scatter(X_valid.cpu()[0:4000,0]*ext_max, X_valid.cpu()[0:4000,1]*ext_max, c=y_valid[0:4000], cmap='tab20',s=4)\nplt.axis('scaled')\nplt.xlim((-ext_max,ext_max))\nplt.ylim((-ext_max,ext_max))\nplt.xlabel(r'$\\Re\\{r\\}$',fontsize=16)\nplt.ylabel(r'$\\Im\\{r\\}$',fontsize=16)\nplt.title('Decision region after learning',fontsize=16)\n\n#plt.savefig('decision_region_16APSK_Pin%d.pdf' % Pin,bbox_inches='tight')",
"Generate animation and save as a gif.",
"%matplotlib notebook\n%matplotlib notebook\n# Generate animation\nfrom matplotlib import animation, rc\nfrom matplotlib.animation import PillowWriter # Disable if you don't want to save any GIFs.\n\nfont = {'size' : 18}\nplt.rc('font', **font)\n\nfig, ax = plt.subplots(1, figsize=(8,8))\nax.axis('scaled')\n\nwritten = False\ndef animate(i):\n ax.clear()\n decision_scatter = np.argmax(decision_region_evolution[i], 1)\n \n plt.scatter(meshgrid.cpu()[:,0] * ext_max, meshgrid.cpu()[:,1] * ext_max, c=decision_scatter, cmap=matplotlib.colors.ListedColormap(colors=new_color_list),s=4, marker='s')\n plt.scatter(X_valid.cpu()[0:4000,0]*ext_max, X_valid.cpu()[0:4000,1]*ext_max, c=y_valid[0:4000], cmap='tab20',s=4)\n ax.set_xlim(( -ext_max, ext_max))\n ax.set_ylim(( -ext_max, ext_max))\n\n ax.set_xlabel(r'$\\Re\\{r\\}$',fontsize=18)\n ax.set_ylabel(r'$\\Im\\{r\\}$',fontsize=18)\n\n \nanim = animation.FuncAnimation(fig, animate, frames=min_SER_iter+1, interval=200, blit=False)\nfig.show()\n#anim.save('learning_decision_16APSK_Pin%d_varbatch.gif' % Pin, writer=PillowWriter(fps=5))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
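Aside on the notebook above: the rotation-matrix trick mentioned in the training-loop description is just the real-valued form of a complex phase rotation. Writing a symbol as $x = a + \jmath b$ and the nonlinear phase as $\theta = \frac{L}{K}\gamma|x|^2$,

$$
x\,e^{\jmath\theta} = (a\cos\theta - b\sin\theta) + \jmath\,(a\sin\theta + b\cos\theta)
\quad\Longleftrightarrow\quad
\begin{pmatrix} a' \\ b' \end{pmatrix} =
\begin{pmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{pmatrix}
\begin{pmatrix} a \\ b \end{pmatrix},
$$

which is exactly what the two `temp1`/`temp2` lines in the training loop compute at each step before the Gaussian noise is added.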
ehthiede/PyEDGAR
|
examples/Committor_10d/Committor_10d.ipynb
|
mit
|
[
"Committor Estimate on the Muller-Brown Potential",
"import matplotlib.pyplot as plt\nimport numpy as np\nimport pyedgar\nfrom pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist\n\n%matplotlib inline",
"Load Data and set Hyperparameters\nWe first load in the pre-sampled data. The data consists of 1000 short trajectories, each with 5 datapoints. The precise sampling procedure is described in \"Galerkin Approximation of Dynamical Quantities using Trajectory Data\" by Thiede et al. Note that this is a smaller dataset than in the paper. We use a smallar dataset to ensure the diffusion map basis construction runs in a reasonably short time.\nSet Hyperparameters\nHere we specify a few hyperparameters. Thes can be varied to study the behavior of the scheme in various limits by the user.",
"ntraj = 1000\ntrajectory_length = 5\ndim = 10",
"Load and format the data",
"trajs = np.load('data/muller_brown_trajs.npy')[:ntraj, :trajectory_length, :dim] # Raw trajectory\nstateA = np.load('data/muller_brown_stateA.npy')[:ntraj, :trajectory_length] # 1 if in state A, 0 otherwise\nstateB = np.load('data/muller_brown_stateB.npy')[:ntraj, :trajectory_length] # 1 if in state B, 0 otherwise\n\nprint(\"Data shape: \", trajs.shape)\n\ntrajs = [traj_i for traj_i in trajs]\nstateA = [A_i for A_i in stateA]\nstateB = [B_i for B_i in stateB]\nin_domain = [1. - B_i - A_i for (A_i, B_i) in zip(stateA, stateB)]",
"We also convert the data into the flattened format. This converts the data into a 2D array, which allows the data to be passed into many ML packages that require a two-dimensional dataset. In particular, this is the format accepted by the Diffusion Atlas object. Trajectory start/stop points are then stored in the traj_edges array.\nFinally, we load the reference, \"true\" committor for comparison.",
"ref_comm = np.load('reference/reference_committor.npy')\nref_potential = np.load('reference/potential.npy')\nxgrid = np.load('reference/xgrid.npy')\nygrid = np.load('reference/ygrid.npy')\n\n # Plot the true committor.\nfig, ax = plt.subplots(1)\nHM = ax.pcolor(xgrid, ygrid, ref_comm, vmin=0, vmax=1)\nax.contour(xgrid, ygrid, ref_potential, levels=np.linspace(0, 10., 11), colors='k') # Contour lines every 1 k_B T\nax.set_aspect('equal')\ncbar = plt.colorbar(HM, ax=ax)\n\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('True Committor')",
"Construct DGA Committor\nWe now use PyEDGAR to build an estimate for the forward committor.\nBuild Basis Set\nWe first build the basis set required for the DGA Calculation. In this demo, we will use the diffusion map basis.",
"diff_atlas = pyedgar.basis.DiffusionAtlas.from_sklearn(alpha=0, k=500, bandwidth_type='-1/d', epsilon='bgh_generous')\ndiff_atlas.fit(trajs)",
"Here, we construct the basis and guess functions, and convert them back into lists of trajectories. The domain is the set of all sets out side of $(A\\cup B)^c$.",
"basis, evals = diff_atlas.make_dirichlet_basis(300, in_domain=in_domain, return_evals=True)\nguess = diff_atlas.make_FK_soln(stateB, in_domain=in_domain)\n\nflat_basis = np.vstack(basis)\nflat_guess = np.hstack(guess)",
"We plot the guess function and the first few basis functions.",
"# Flatten the basis, guess, and trajectories functions for easy plotting.\nflattened_trajs = np.vstack(trajs)\nflat_basis = np.vstack(basis)\nflat_guess = np.hstack(guess)\n\nfig, axes= plt.subplots(1, 5, figsize=(14,4.), sharex=True, sharey=True)\naxes[0].scatter(flattened_trajs[:,0], flattened_trajs[:,1], \n c=flat_guess, s=3)\naxes[0].set_title('Guess')\naxes[0].set_ylabel(\"y\")\n\nfor i, ax in enumerate(axes[1:]):\n vm = np.max(np.abs(flat_basis[:, i]))\n ax.scatter(flattened_trajs[:,0], flattened_trajs[:,1], \n c=flat_basis[:, i], s=3, cmap='coolwarm', \n vmin=-1*vm, vmax=vm)\n ax.set_title(r\"$\\phi_%d$\" % (i+1))\n\nfor ax in axes:\n ax.set_aspect('equal')\n# ax.\naxes[2].set_xlabel(\"x\")",
"The third basis function looks like noise from the perspective of the $x$ and $y$ coordinates. This is because it correlates most strongly with the harmonic degrees of freedom. Note that due to the boundary conditions, it is not precisely the dominant eigenvector of the harmonic degrees of freedom.",
"fig, (ax1) = plt.subplots(1, figsize=(3.5,3.5))\n\nvm = np.max(np.abs(flat_basis[:,2]))\nax1.scatter(flattened_trajs[:,3], flattened_trajs[:,5], \n c=flat_basis[:, 2], s=3, cmap='coolwarm', \n vmin=-1*vm, vmax=vm)\n\nax1.set_aspect('equal')\nax1.set_title(r\"$\\phi_%d$\" % 3)\nax1.set_xlabel(\"$z_2$\")\nax1.set_ylabel(\"$z_4$\")",
"Build the committor function\nWe are ready to compute the committor function using DGA. This can be done by passing the guess function and the basis to the the Galerkin module.",
"g = pyedgar.galerkin.compute_committor(basis, guess, lag=1)\n\nfig, (ax1) = plt.subplots(1, figsize=(5.5,3.5))\n\nSC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., vmax=1., s=3)\n\nax1.set_xlabel('x')\nax1.set_ylabel('y')\nax1.set_title('Estimated Committor')\nplt.colorbar(SC)\nax1.set_aspect('equal')",
"Here, we plot how much the DGA estimate perturbs the Guess function",
"fig, (ax1) = plt.subplots(1, figsize=(4.4,3.5))\n\nSC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel() - flat_guess, \n vmin=-.5, vmax=.5, cmap='bwr', s=3)\nax1.set_aspect('equal')\nax1.set_xlabel('x')\nax1.set_ylabel('y')\nax1.set_title('Estimate - Guess')\nplt.colorbar(SC, ax=ax1)",
"Compare against reference\nTo compare against the reference values, we will interpolate the reference onto the datapoints usingy scipy's interpolate package.",
"import scipy.interpolate as spi\n\nspline = spi.RectBivariateSpline(xgrid, ygrid, ref_comm.T)\nref_comm_on_data = np.array([spline.ev(c[0], c[1]) for c in flattened_trajs[:,:2]])\nref_comm_on_data[ref_comm_on_data < 0.] = 0.\nref_comm_on_data[ref_comm_on_data > 1.] = 1.",
"A comparison of our estimate with the True committor. While the estimate is good, we systematically underestimate the committor near (0, 0.5).",
"fig, axes = plt.subplots(1, 3, figsize=(16,3.5), sharex=True, sharey=True)\n(ax1, ax2, ax3) = axes\nSC = ax1.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=ref_comm_on_data, vmin=0., vmax=1., s=3)\nplt.colorbar(SC, ax=ax1)\nSC = ax2.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel(), vmin=0., vmax=1., s=3)\nplt.colorbar(SC, ax=ax2)\nSC = ax3.scatter(flattened_trajs[:,0], flattened_trajs[:,1], c=np.array(g).ravel() -ref_comm_on_data, \n vmin=-.5, vmax=.5, s=3, cmap='bwr')\nplt.colorbar(SC, ax=ax3)\n\n\n# ax1.set_aspect('equal')\nax2.set_xlabel('x')\nax1.set_ylabel('y')\nax1.set_title('True Committor')\nax2.set_title('DGA Estimate')\nax3.set_title('Estimate - True')\nplt.tight_layout(pad=-1.)\nfor ax in axes:\n ax.set_aspect('equal')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
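Aside on the notebook above: the flattening step described before the reference-loading cell is not shown in the excerpt. A minimal sketch, assuming that `tlist_to_flat` returns the flattened 2D array together with the `traj_edges` boundary array and that `flat_to_tlist` inverts the operation (both names are imported in the notebook's first code cell; the exact signatures are an assumption here):

```python
import numpy as np
from pyedgar.data_manipulation import tlist_to_flat, flat_to_tlist

# Toy data with the notebook's layout: a list of short trajectories,
# each an (n_frames, n_dimensions) array.
trajs = [np.random.randn(5, 10) for _ in range(3)]

# Assumed behavior: one stacked 2D array plus the trajectory start/stop indices.
flat_trajs, traj_edges = tlist_to_flat(trajs)
print(flat_trajs.shape)  # expected (15, 10)
print(traj_edges)        # boundaries of each trajectory within the flat array

# Assumed inverse: recover the list-of-trajectories format.
recovered = flat_to_tlist(flat_trajs, traj_edges)
print(len(recovered))    # expected 3
```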
lithiumdenis/MLSchool
|
3. Котики и собачки (исходные данные).ipynb
|
mit
|
[
"import matplotlib\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\nmatplotlib.rcParams.update({'font.size': 12})\n\n# увеличим дефолтный размер графиков\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 18, 6\nrcParams['font.size'] = 16\nrcParams['axes.labelsize'] = 14\nrcParams['xtick.labelsize'] = 13\nrcParams['ytick.labelsize'] = 13\n\nimport pandas as pd\nimport numpy as np",
"Данные\nВозьмите данные с https://www.kaggle.com/c/shelter-animal-outcomes .\nОбратите внимание, что в этот раз у нас много классов, почитайте в разделе Evaluation то, как вычисляется итоговый счет (score).\nВизуализация\n<div class=\"panel panel-info\" style=\"margin: 50px 0 0 0\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Задание 1.</h3> \n </div>\n</div>\n\nВыясните, построив необходимые графики, влияет ли возраст, пол или фертильность животного на его шансы быть взятыми из приюта.\nПодготовим данные",
"visual = pd.read_csv('data/CatsAndDogs/train.csv')\n\n#Сделаем числовой столбец Outcome, показывающий, взяли животное из приюта или нет\n#Сначала заполним единицами, типа во всех случах хорошо\nvisual['Outcome'] = 'true'\n#Неудачные случаи занулим\nvisual.loc[visual.OutcomeType == 'Euthanasia', 'Outcome'] = 'false'\nvisual.loc[visual.OutcomeType == 'Died', 'Outcome'] = 'false'\n\n#Заменим строки, где в SexuponOutcome NaN, на что-нибудь осмысленное\nvisual.loc[visual.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'\n\n#Сделаем два отдельных столбца для пола и фертильности\nvisual['Gender'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[-1])\nvisual['Fertility'] = visual.SexuponOutcome.apply(lambda s: s.split(' ')[0])",
"Сравним по возрасту",
"mergedByAges = visual.groupby('AgeuponOutcome')['Outcome'].value_counts().to_dict()\n\nresults = pd.DataFrame(data = mergedByAges, index=[0]).stack().fillna(0).transpose()\nresults.columns = pd.Index(['true', 'false'])\nresults['total'] = results.true + results.false\nresults.sort_values(by='true', ascending=False, inplace=True)\nresults[['true', 'false']].plot(kind='bar', stacked=False, rot=45);",
"Сравним по полу",
"mergedByGender = visual.groupby('Gender')['Outcome'].value_counts().to_dict()\n\nresults = pd.DataFrame(data = mergedByGender, index=[0]).stack().fillna(0).transpose()\nresults.columns = pd.Index(['true', 'false'])\nresults['total'] = results.true + results.false\nresults.sort_values(by='true', ascending=False, inplace=True)\nresults[['true', 'false']].plot(kind='bar', stacked=True, rot=45);",
"Сравним по фертильности",
"mergedByFert = visual.groupby('Fertility')['Outcome'].value_counts().to_dict()\n\nresults = pd.DataFrame(data = mergedByFert, index=[0]).stack().fillna(0).transpose()\nresults.columns = pd.Index(['true', 'false'])\nresults['total'] = results.true + results.false\nresults.sort_values(by='true', ascending=False, inplace=True)\nresults[['true', 'false']].plot(kind='bar', stacked=True, rot=45);",
"<b>Вывод по возрасту:</b> лучше берут не самых старых, но и не самых молодых\n<br>\n<b>Вывод по полу:</b> по большому счёту не имеет значения\n<br>\n<b>Вывод по фертильности:</b> лучше берут животных с ненарушенными репродуктивными способностями. Однако две следующие группы не сильно различаются по сути и, если их сложить, то разница не столь велика.\nПостроение моделей\n<div class=\"panel panel-info\" style=\"margin: 50px 0 0 0\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Задание 2.</h3> \n </div>\n</div>\n\nПосмотрите тетрадку с генерацией новых признаков. Сделайте как можно больше релевантных признаков из всех имеющихся.\nНе забудьте параллельно обрабатывать отложенную выборку (test), чтобы в ней были те же самые признаки, что и в обучающей.\n<b>Возьмем исходные данные</b>",
"train, test = pd.read_csv(\n 'data/CatsAndDogs/train.csv' #исходные данные\n), pd.read_csv(\n 'data/CatsAndDogs/test.csv' #исходные данные\n)\n\ntrain.head()\n\ntest.shape",
"<b>Добавим новые признаки в train</b>",
"#Сначала по-аналогии с визуализацией\n\n#Заменим строки, где в SexuponOutcome, Breed, Color NaN\ntrain.loc[train.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'\ntrain.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'\ntrain.loc[train.Breed.isnull(), 'Breed'] = 'Unknown'\ntrain.loc[train.Color.isnull(), 'Color'] = 'Unknown'\n\n#Сделаем два отдельных столбца для пола и фертильности\ntrain['Gender'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[-1])\ntrain['Fertility'] = train.SexuponOutcome.apply(lambda s: s.split(' ')[0])\n\n#Теперь что-то новое\n\n#Столбец, в котором отмечено, есть имя у животного или нет\ntrain['hasName'] = 1\ntrain.loc[train.Name.isnull(), 'hasName'] = 0\n\n#Столбец, в котором объединены порода и цвет\ntrain['breedColor'] = train.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)\n\n#Декомпозируем DateTime\n#Во-первых, конвертируем столбец в тип DateTime из строкового\ntrain['DateTime'] = pd.to_datetime(train['DateTime'])\n#А теперь декомпозируем\ntrain['dayOfWeek'] = train.DateTime.apply(lambda dt: dt.dayofweek)\ntrain['month'] = train.DateTime.apply(lambda dt: dt.month)\ntrain['day'] = train.DateTime.apply(lambda dt: dt.day)\ntrain['quarter'] = train.DateTime.apply(lambda dt: dt.quarter)\ntrain['hour'] = train.DateTime.apply(lambda dt: dt.hour)\ntrain['minute'] = train.DateTime.apply(lambda dt: dt.hour)\ntrain['year'] = train.DateTime.apply(lambda dt: dt.year)\n\n#Разбиение возраста\n#Сделаем два отдельных столбца для обозначения года/месяца и их количества\ntrain['AgeuponFirstPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[0])\ntrain['AgeuponSecondPart'] = train.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])\n#Переведем примерно в среднем месяцы, годы и недели в дни с учетом окончаний s\ntrain['AgeuponSecondPartInDays'] = 0\ntrain.loc[train.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365\ntrain.loc[train.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365\ntrain.loc[train.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30\ntrain.loc[train.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30\ntrain.loc[train.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7\ntrain.loc[train.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7\n#Во-первых, конвертируем столбец в числовой тип из строкового\ntrain['AgeuponFirstPart'] = pd.to_numeric(train['AgeuponFirstPart'])\ntrain['AgeuponSecondPartInDays'] = pd.to_numeric(train['AgeuponSecondPartInDays'])\n\n#А теперь получим нормальное время жизни в днях\ntrain['LifetimeInDays'] = train['AgeuponFirstPart'] * train['AgeuponSecondPartInDays']\n\n#Удалим уж совсем бессмысленные промежуточные столбцы\ntrain = train.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart', 'OutcomeSubtype'], axis=1)\ntrain.head()",
"<b>Добавим новые признаки в test по-аналогии</b>",
"#Сначала по-аналогии с визуализацией\n\n#Заменим строки, где в SexuponOutcome, Breed, Color NaN\ntest.loc[test.SexuponOutcome.isnull(), 'SexuponOutcome'] = 'Unknown Unknown'\ntest.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = '0 0'\ntest.loc[test.Breed.isnull(), 'Breed'] = 'Unknown'\ntest.loc[test.Color.isnull(), 'Color'] = 'Unknown'\n\n#Сделаем два отдельных столбца для пола и фертильности\ntest['Gender'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[-1])\ntest['Fertility'] = test.SexuponOutcome.apply(lambda s: s.split(' ')[0])\n\n#Теперь что-то новое\n\n#Столбец, в котором отмечено, есть имя у животного или нет\ntest['hasName'] = 1\ntest.loc[test.Name.isnull(), 'hasName'] = 0\n\n#Столбец, в котором объединены порода и цвет\ntest['breedColor'] = test.apply(lambda row: row['Breed'] + ' ' + str(row['Color']), axis=1)\n\n#Декомпозируем DateTime\n#Во-первых, конвертируем столбец в тип DateTime из строкового\ntest['DateTime'] = pd.to_datetime(test['DateTime'])\n#А теперь декомпозируем\ntest['dayOfWeek'] = test.DateTime.apply(lambda dt: dt.dayofweek)\ntest['month'] = test.DateTime.apply(lambda dt: dt.month)\ntest['day'] = test.DateTime.apply(lambda dt: dt.day)\ntest['quarter'] = test.DateTime.apply(lambda dt: dt.quarter)\ntest['hour'] = test.DateTime.apply(lambda dt: dt.hour)\ntest['minute'] = test.DateTime.apply(lambda dt: dt.hour)\ntest['year'] = test.DateTime.apply(lambda dt: dt.year)\n\n#Разбиение возраста\n#Сделаем два отдельных столбца для обозначения года/месяца и их количества\ntest['AgeuponFirstPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[0])\ntest['AgeuponSecondPart'] = test.AgeuponOutcome.apply(lambda s: s.split(' ')[-1])\n#Переведем примерно в среднем месяцы, годы и недели в дни с учетом окончаний s\ntest['AgeuponSecondPartInDays'] = 0\ntest.loc[test.AgeuponSecondPart == 'year', 'AgeuponSecondPartInDays'] = 365\ntest.loc[test.AgeuponSecondPart == 'years', 'AgeuponSecondPartInDays'] = 365\ntest.loc[test.AgeuponSecondPart == 'month', 'AgeuponSecondPartInDays'] = 30\ntest.loc[test.AgeuponSecondPart == 'months', 'AgeuponSecondPartInDays'] = 30\ntest.loc[test.AgeuponSecondPart == 'week', 'AgeuponSecondPartInDays'] = 7\ntest.loc[test.AgeuponSecondPart == 'weeks', 'AgeuponSecondPartInDays'] = 7\n#Во-первых, конвертируем столбец в числовой тип из строкового\ntest['AgeuponFirstPart'] = pd.to_numeric(test['AgeuponFirstPart'])\ntest['AgeuponSecondPartInDays'] = pd.to_numeric(test['AgeuponSecondPartInDays'])\n\n#А теперь получим нормальное время жизни в днях\ntest['LifetimeInDays'] = test['AgeuponFirstPart'] * test['AgeuponSecondPartInDays']\n\n#Удалим уж совсем бессмысленные промежуточные столбцы\ntest = test.drop(['AgeuponSecondPartInDays', 'AgeuponSecondPart', 'AgeuponFirstPart'], axis=1)\n\ntest.head()",
"<div class=\"panel panel-info\" style=\"margin: 50px 0 0 0\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Задание 3.</h3> \n </div>\n</div>\n\nВыполните отбор признаков, попробуйте различные методы. Проверьте качество на кросс-валидации. \nВыведите топ самых важных и самых незначащих признаков.\nПредобработка данных",
"np.random.seed = 1234\nfrom sklearn.preprocessing import LabelEncoder\nfrom sklearn import preprocessing\n\n#####################Заменим NaN значения на слово Unknown##################\n#Уберем Nan значения из train\ntrain.loc[train.AnimalID.isnull(), 'AnimalID'] = 'Unknown'\ntrain.loc[train.Name.isnull(), 'Name'] = 'Unknown'\ntrain.loc[train.OutcomeType.isnull(), 'OutcomeType'] = 'Unknown'\ntrain.loc[train.AnimalType.isnull(), 'AnimalType'] = 'Unknown'\ntrain.loc[train.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'\ntrain.loc[train.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'\n\n#Уберем Nan значения из test\ntest.loc[test.ID.isnull(), 'ID'] = 'Unknown'\ntest.loc[test.Name.isnull(), 'Name'] = 'Unknown'\ntest.loc[test.AnimalType.isnull(), 'AnimalType'] = 'Unknown'\ntest.loc[test.AgeuponOutcome.isnull(), 'AgeuponOutcome'] = 'Unknown'\ntest.loc[test.LifetimeInDays.isnull(), 'LifetimeInDays'] = 'Unknown'\n\n#####################Закодируем слова числами################################\n\n#Закодировали AnimalID цифрами вместо названий в test & train\n#encAnimalID = preprocessing.LabelEncoder()\n#encAnimalID.fit(pd.concat((test['AnimalID'], train['AnimalID'])))\n#test['AnimalID'] = encAnimalID.transform(test['AnimalID'])\n#train['AnimalID'] = encAnimalID.transform(train['AnimalID'])\n\n#Закодировали имя цифрами вместо названий в test & train\nencName = preprocessing.LabelEncoder()\nencName.fit(pd.concat((test['Name'], train['Name'])))\ntest['Name'] = encName.transform(test['Name'])\ntrain['Name'] = encName.transform(train['Name'])\n\n#Закодировали DateTime цифрами вместо названий в test & train\nencDateTime = preprocessing.LabelEncoder()\nencDateTime.fit(pd.concat((test['DateTime'], train['DateTime'])))\ntest['DateTime'] = encDateTime.transform(test['DateTime'])\ntrain['DateTime'] = encDateTime.transform(train['DateTime'])\n\n#Закодировали OutcomeType цифрами вместо названий в train, т.к. 
в test их нет\nencOutcomeType = preprocessing.LabelEncoder()\nencOutcomeType.fit(train['OutcomeType'])\ntrain['OutcomeType'] = encOutcomeType.transform(train['OutcomeType'])\n\n#Закодировали AnimalType цифрами вместо названий в test & train\nencAnimalType = preprocessing.LabelEncoder()\nencAnimalType.fit(pd.concat((test['AnimalType'], train['AnimalType'])))\ntest['AnimalType'] = encAnimalType.transform(test['AnimalType'])\ntrain['AnimalType'] = encAnimalType.transform(train['AnimalType'])\n\n#Закодировали SexuponOutcome цифрами вместо названий в test & train\nencSexuponOutcome = preprocessing.LabelEncoder()\nencSexuponOutcome.fit(pd.concat((test['SexuponOutcome'], train['SexuponOutcome'])))\ntest['SexuponOutcome'] = encSexuponOutcome.transform(test['SexuponOutcome'])\ntrain['SexuponOutcome'] = encSexuponOutcome.transform(train['SexuponOutcome'])\n\n#Закодировали AgeuponOutcome цифрами вместо названий в test & train\nencAgeuponOutcome = preprocessing.LabelEncoder()\nencAgeuponOutcome.fit(pd.concat((test['AgeuponOutcome'], train['AgeuponOutcome'])))\ntest['AgeuponOutcome'] = encAgeuponOutcome.transform(test['AgeuponOutcome'])\ntrain['AgeuponOutcome'] = encAgeuponOutcome.transform(train['AgeuponOutcome'])\n\n#Закодировали Breed цифрами вместо названий в test & train\nencBreed = preprocessing.LabelEncoder()\nencBreed.fit(pd.concat((test['Breed'], train['Breed'])))\ntest['Breed'] = encBreed.transform(test['Breed'])\ntrain['Breed'] = encBreed.transform(train['Breed'])\n\n#Закодировали Color цифрами вместо названий в test & train\nencColor = preprocessing.LabelEncoder()\nencColor.fit(pd.concat((test['Color'], train['Color'])))\ntest['Color'] = encColor.transform(test['Color'])\ntrain['Color'] = encColor.transform(train['Color'])\n\n#Закодировали Gender цифрами вместо названий в test & train\nencGender = preprocessing.LabelEncoder()\nencGender.fit(pd.concat((test['Gender'], train['Gender'])))\ntest['Gender'] = encGender.transform(test['Gender'])\ntrain['Gender'] = encGender.transform(train['Gender'])\n\n#Закодировали Fertility цифрами вместо названий в test & train\nencFertility = preprocessing.LabelEncoder()\nencFertility.fit(pd.concat((test['Fertility'], train['Fertility'])))\ntest['Fertility'] = encFertility.transform(test['Fertility'])\ntrain['Fertility'] = encFertility.transform(train['Fertility'])\n\n#Закодировали breedColor цифрами вместо названий в test & train\nencbreedColor = preprocessing.LabelEncoder()\nencbreedColor.fit(pd.concat((test['breedColor'], train['breedColor'])))\ntest['breedColor'] = encbreedColor.transform(test['breedColor'])\ntrain['breedColor'] = encbreedColor.transform(train['breedColor'])\n\n####################################Предобработка#################################\nfrom sklearn.model_selection import cross_val_score\n#poly_features = preprocessing.PolynomialFeatures(3)\n\n#Подготовили данные так, что X_tr - таблица без AnimalID и OutcomeType, а в y_tr сохранены OutcomeType\nX_tr, y_tr = train.drop(['AnimalID', 'OutcomeType'], axis=1), train['OutcomeType']\n\n#Типа перевели dataFrame в array и сдалали над ним предварительную обработку\n#X_tr = poly_features.fit_transform(X_tr)\nX_tr.head()",
"Статистические тесты",
"from sklearn.feature_selection import SelectKBest\nfrom sklearn.feature_selection import chi2, f_classif, mutual_info_classif\n\nskb = SelectKBest(mutual_info_classif, k=15)\nx_new = skb.fit_transform(X_tr, y_tr)\n\nx_new",
"Методы обертки",
"from sklearn.feature_selection import RFE\nfrom sklearn.linear_model import LinearRegression\n\nnames = X_tr.columns.values\nlr = LinearRegression()\nrfe = RFE(lr, n_features_to_select=1)\nrfe.fit(X_tr,y_tr);\nprint(\"Features sorted by their rank:\")\nprint(sorted(zip(map(lambda x: round(x, 4), rfe.ranking_), names)))",
"Отбор при помощи модели Lasso",
"from sklearn.linear_model import Lasso\nclf = Lasso()\nclf.fit(X_tr, y_tr);\nclf.coef_\n\nfeatures = X_tr.columns.values\nprint('Всего Lasso выкинуло %s переменных' % (clf.coef_ == 0).sum())\nprint('Это признаки:')\nfor s in features[np.where(clf.coef_ == 0)[0]]:\n print(' * ', s)",
"Отбор при помощи модели RandomForest",
"from sklearn.ensemble import RandomForestRegressor\nclf = RandomForestRegressor()\nclf.fit(X_tr, y_tr);\nclf.feature_importances_\n\nimp_feature_idx = clf.feature_importances_.argsort()\nimp_feature_idx\n\nfeatures = X_tr.columns.values\n\nk = 0\n\nwhile k < len(features):\n print(features[k], imp_feature_idx[k])\n k += 1",
"<b>Вывод по признакам:</b>\n<br>\n<b>Не нужны:</b> Name, DateTime, month, day, Breed, breedColor. Всё остальное менее однозначно, можно и оставить.\n<div class=\"panel panel-info\" style=\"margin: 50px 0 0 0\">\n <div class=\"panel-heading\">\n <h3 class=\"panel-title\">Задание 4.</h3> \n </div>\n</div>\n\nПопробуйте смешать разные модели с помощью <b>sklearn.ensemble.VotingClassifier</b>. Увеличилась ли точность? Изменилась ли дисперсия?",
"#Для начала выкинем ненужные признаки, выявленные на прошлом этапе\nX_tr = X_tr.drop(['Name'], axis=1) #, 'DateTime', 'breedColor', 'Breed'\ntest = test.drop(['Name'], axis=1) #, 'DateTime', 'breedColor', 'Breed'\nX_tr.head()\n\nfrom sklearn.ensemble import VotingClassifier\n\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.ensemble import RandomForestClassifier\nfrom sklearn.neighbors import KNeighborsClassifier\n\nclf1 = LogisticRegression(random_state=1234)\nclf3 = GaussianNB()\nclf4 = RandomForestClassifier(random_state=1234)\nclf5 = KNeighborsClassifier()\n\nfrom sklearn.ensemble import ExtraTreesClassifier\nclf6 = ExtraTreesClassifier(random_state=1234)\n\nfrom sklearn.tree import DecisionTreeClassifier\nclf7 = DecisionTreeClassifier(random_state=1234)\n\neclf = VotingClassifier(estimators=[\n ('lr', clf1), ('nb', clf3), ('knn', clf5), ('rf', clf4), ('etc', clf6), ('dtc', clf7)],\n voting='soft', weights=[1,1,2,2,2,1])\n\nscores = cross_val_score(eclf, X_tr, y_tr)\n\neclf = eclf.fit(X_tr, y_tr)\n\nprint('Mean score:', scores.mean())\n\n#delete AnimalID from test\nX_te = test.drop(['ID'], axis=1)\nX_te.head()\n\nids = test[['ID']]\n\nresult = pd.concat([ids,pd.DataFrame(data = eclf.predict_proba(X_te), columns = encOutcomeType.classes_)], axis=1)\nresult.head()\n\n#Сохраним\nresult.to_csv('ans_catdog_basic.csv', index=False)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
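Aside on the notebook above: Task 4 asks whether the VotingClassifier changed the accuracy and the variance, but the notebook only prints the mean cross-validation score. A minimal, self-contained sketch for comparing both the mean and the spread of the scores (toy data via `make_classification` stands in for the notebook's `X_tr`, `y_tr`, and the estimator list is shortened):

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

# Toy data standing in for the notebook's X_tr / y_tr.
X, y = make_classification(n_samples=500, n_features=10, n_classes=3,
                           n_informative=5, random_state=1234)

single_models = {
    'lr': LogisticRegression(max_iter=1000, random_state=1234),
    'nb': GaussianNB(),
    'rf': RandomForestClassifier(random_state=1234),
}
ensemble = VotingClassifier(estimators=list(single_models.items()), voting='soft')

# Compare the mean accuracy *and* its spread across folds.
for name, model in {**single_models, 'voting': ensemble}.items():
    scores = cross_val_score(model, X, y, cv=5)
    print('%-8s mean=%.3f  std=%.3f' % (name, scores.mean(), scores.std()))
```

A smaller standard deviation for the ensemble than for the individual models would indicate that the variance decreased as well.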
gabrielrezzonico/dogsandcats
|
notebooks/01. Data loading and analysis.ipynb
|
mit
|
[
"import os\nWORKING_DIRECTORY = os.getcwd()\nprint(\"Current working directory: {}\".format(WORKING_DIRECTORY))\n\nORIGINAL_TRAIN_DIRECTORY = \"../data/original_train/\"\nTRAIN_DIRECTORY = \"../data/train/\"\nVALID_DIRECTORY = \"../data/valid/\"\nTEST_DIRECTORY = \"../data/test/\"\n\nIMAGE_SIZE = (360,404)\nCLASSES = ['cat', 'dog']\n\nVALIDATION_SIZE = 0.2 # size of the validation we want to use\nTEST_SIZE = 0.1",
"Samples",
"plot_grid(imgs, titles=labels)\n\n%autosave 0",
"Data size",
"import pandas as pd\nimport glob\nfrom PIL import Image\n\nfiles = glob.glob(ORIGINAL_TRAIN_DIRECTORY + '*')\ndf = pd.DataFrame({'fpath':files,'width':0,'height':0})\ndf['category'] = df.fpath.str.extract('../data/original_train/([a-zA-Z]*).', expand=False) # extract class\nfor idx in df.index:\n im = Image.open(df.ix[idx].fpath)\n df.ix[idx,['width','height']] = im.size\n\ndf.head()\n\ndf.describe()",
"There are 25000 images in the dataset. We can see that the mean size of the images is (360.478080,404.09904).",
"df['category'].value_counts()\n\n%matplotlib inline\nimport seaborn as sns\n\nax = sns.countplot(\"category\", data=df)\n\nsns.jointplot(x='width', \n y='height', \n data=df,\n joint_kws={'s': 0.5}, \n marginal_kws=dict(bins=50), \n size=10,\n stat_func=None);",
"Data preparation\nThe dataset can be downloaded from https://www.kaggle.com/c/the-nature-conservancy-fisheries-monitoring/data.\nNumber of training examples:",
"import os\nTOTAL_NUMBER_FILES = sum([len(files) for r, d, files in os.walk(ORIGINAL_TRAIN_DIRECTORY)])\nprint(\"Total number of files in train folder:\", TOTAL_NUMBER_FILES)",
"Folder structure\nThe train directory consist of labelled data with the following convention for each image: \ndata/train/CLASS.id.jpg\nWe are going to use keras.preprocessing.image so we want the folder structure to be:\ndata/train/CLASS/image-name.jpg",
"import glob\nimport os\nimport shutil\nimport numpy as np\n\nshutil.rmtree(os.path.join(TEST_DIRECTORY, \"dog\"), ignore_errors=True)\nshutil.rmtree(os.path.join(TEST_DIRECTORY, \"cat\"), ignore_errors=True)\n\nshutil.rmtree(os.path.join(VALID_DIRECTORY, \"dog\"), ignore_errors=True)\nshutil.rmtree(os.path.join(VALID_DIRECTORY, \"cat\"), ignore_errors=True)\n\nshutil.rmtree(os.path.join(TRAIN_DIRECTORY, \"dog\"), ignore_errors=True)\nshutil.rmtree(os.path.join(TRAIN_DIRECTORY, \"cat\"), ignore_errors=True)\n\nos.mkdir(os.path.join(TEST_DIRECTORY, \"dog\"))\nos.mkdir(os.path.join(TEST_DIRECTORY, \"cat\"))\n\nos.mkdir(os.path.join(VALID_DIRECTORY, \"dog\"))\nos.mkdir(os.path.join(VALID_DIRECTORY, \"cat\"))\n\nos.mkdir(os.path.join(TRAIN_DIRECTORY, \"dog\"))\nos.mkdir(os.path.join(TRAIN_DIRECTORY, \"cat\"))\n\n\n\n#########################\n# DOGS\n##########\n#random list of dog files\ndog_pattern = ORIGINAL_TRAIN_DIRECTORY + \"dog.*\"\ndog_files = np.random.permutation(glob.glob(dog_pattern))\n\n# randomly split the files in train folder and move them to validation\nnumber_validation_dog_files = int(len(dog_files) * VALIDATION_SIZE)\nnumber_test_dog_files = int(len(dog_files) * TEST_SIZE)\n\nfor index, dog_file in enumerate(dog_files):\n file_name = os.path.split(dog_file)[1]\n if index < number_validation_dog_files:#validation files\n new_path = os.path.join(VALID_DIRECTORY, \"dog\", file_name)\n elif index >= number_validation_dog_files and index < (number_validation_dog_files + number_test_dog_files):\n new_path = os.path.join(TEST_DIRECTORY, \"dog\", file_name)\n else:\n new_path = os.path.join(TRAIN_DIRECTORY, \"dog\", file_name)\n shutil.copy(dog_file, new_path)\n\n#########################\n# CATS\n##########\n#random list of dog files\ncat_pattern = ORIGINAL_TRAIN_DIRECTORY + \"cat.*\"\ncat_files = np.random.permutation(glob.glob(cat_pattern))\n\n# randomly split the files in train folder and move them to validation\nnumber_validation_cat_files = int(len(cat_files) * VALIDATION_SIZE)\nnumber_test_cat_files = int(len(cat_files) * TEST_SIZE)\n\nfor index, cat_file in enumerate(cat_files):\n file_name = os.path.split(cat_file)[1]\n if index < number_validation_cat_files:\n new_path = os.path.join(VALID_DIRECTORY, \"cat\", file_name)\n elif index >= number_validation_cat_files and index < (number_validation_cat_files+number_test_cat_files):\n new_path = os.path.join(TEST_DIRECTORY, \"cat\", file_name)\n else:\n new_path = os.path.join(TRAIN_DIRECTORY, \"cat\", file_name)\n shutil.copy(cat_file, new_path) \n\n## Samples\n\nimport utils;\nfrom utils import *\n\nbatch_generator = get_keras_batch_generator(VALID_DIRECTORY, batch_size=2, target_size=(IMAGE_SIZE[0], IMAGE_SIZE[1]))\n\nimgs,labels = next(batch_generator)\n\n%matplotlib inline\nplot_grid(imgs, titles=labels)\n\nimgs,labels = next(batch_generator)\n%matplotlib inline\nplot_grid(imgs, titles=labels)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
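Aside on the notebook above: `plot_grid` comes from the repository's local `utils` module, which is not included in the excerpt. A minimal stand-in, assuming it simply tiles a batch of image arrays with their labels as titles (the signature `plot_grid(imgs, titles)` is inferred from the calls in the notebook):

```python
import math

import matplotlib.pyplot as plt

def plot_grid(imgs, titles=None, cols=4, figsize=(12, 6)):
    """Tile a batch of (H, W, C) image arrays in a grid with optional titles."""
    rows = math.ceil(len(imgs) / cols)
    fig, axes = plt.subplots(rows, cols, figsize=figsize)
    if titles is None:
        titles = [''] * len(imgs)
    for ax in axes.flat:
        ax.axis('off')
    for ax, img, title in zip(axes.flat, imgs, titles):
        # Assumes 0-255 pixel values, as Keras generators yield by default without rescaling.
        ax.imshow(img.astype('uint8'))
        ax.set_title(str(title))
    plt.tight_layout()
```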
llvll/beaconml
|
ip[y]/beaconml.ipynb
|
bsd-2-clause
|
[
"BeaconML\nMachine learning in action for iBeacon-based advertising\nThis is a simple demo to show the machine learning in action for iBeacon-based advertising.\nThe source code for this demo is available on GitHub \nThis IP[y] Notebook performs a step-by-step execution of 'beacon_test.py' file with extra comments.\nTo simplify the process of machine learning we're using TinyLearn framework, which wraps around Scikit-Learn and Pandas modules for easier classification tasks. The most optimal ML algorithm and parameters are selected automatically by CommonClassifier with the help of cross-validation approach using GridSearchCV.\nThe training data is supplied as CSV file, which contains some statistics from a shopping mall, where iBeacons are installed.\nEvery record in CSV defines the parameters for a successful case - visitor has entered a store or clicked on a mobile app's banner / button. \nOur goal is to predict 'Message Type' labels according to the supplied parameters - mobile app's context. Such labels will define the content type to be rendered on a smartphone: \n\nDiscount\nProduct Info\nJoke",
"# Let's inspect this CSV file\nimport pandas as pd\nsome_data =pd.read_csv(\"../data/beacon_data.csv\", header=0, index_col=None)\nsome_data.head(15)",
"We've loaded CSV file into Pandas DataFrame, which will contain train and test data for our model. \nBefore we will be able to start training we need to encode the strings into numeric values using LabelEncoder.",
"# Encode strings from CSV into numeric values\nfrom sklearn.preprocessing import LabelEncoder\nenc = LabelEncoder()\n\nfor col_name in some_data:\n some_data[col_name] = enc.fit_transform(some_data[col_name])",
"Now we split the DataFrame into train and test datasets.",
"# Split the data into training and test sets (the last 5 items)\ntrain_features, train_labels = some_data.iloc[:-5, :-1], some_data.iloc[:-5, -1]",
"Let's execute the model training and print the results.",
"# Create an instance of CommonClassifier, which will use the default list of estimators.\n# Removing the features with a weight smaller than 0.1.\nfrom tinylearn import CommonClassifier\n\nwrk = CommonClassifier(default=True, cv=3, reduce_func=lambda x: x < 0.1)\nwrk.fit(train_features, train_labels)\nwrk.print_fit_summary()",
"CommonClassifier has selected 'ExtraTreesClassifier' estimator. Let's do the actual prediction of labels on the test data:",
"# Predicting and decoding the labels back to strings\nprint(\"\\nPredicted data:\")\npredicted = wrk.predict(some_data.iloc[-5:, :-1])\nprint(enc.inverse_transform(predicted))",
"Pretty close to the actual labels ... with the following accuracy:",
"import numpy as np\nprint(\"\\nActual accuracy: \" +\n str(np.sum(predicted == some_data.iloc[-5:, -1])/predicted.size*100) + '%')",
"Let's take a look at the internals of TinyLearn and CommonClassifier specifically:",
"# %load tinylearn.py\n# Copyright (c) 2015, Oleg Puzanov\n# All rights reserved.\n#\n# Redistribution and use in source and binary forms, with or without\n# modification, are permitted provided that the following conditions are met:\n#\n# * Redistributions of source code must retain the above copyright notice,\n# this list of conditions and the following disclaimer.\n#\n# * Redistributions in binary form must reproduce the above copyright notice,\n# this list of conditions and the following disclaimer in the documentation\n# and/or other materials provided with the distribution.\n#\n# THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS \"AS IS\"\n# AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE\n# IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE\n# ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDER OR CONTRIBUTORS BE\n# LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR\n# CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF\n# SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS\n# INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN\n# CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)\n# ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE\n# POSSIBILITY OF SUCH DAMAGE.\n\n\"\"\"Helper classes for the basic classification tasks with Scikit-Learn and Pandas.\"\"\"\n\nimport logging\nimport numpy as np\nfrom fastdtw import fastdtw\nfrom sklearn.base import BaseEstimator, ClassifierMixin\nfrom sklearn.cross_validation import cross_val_score\nfrom sklearn.grid_search import GridSearchCV\nfrom sklearn.ensemble import (ExtraTreesClassifier,\n RandomForestClassifier)\nfrom sklearn.svm import SVC\nfrom sklearn.linear_model import (LogisticRegression,\n SGDClassifier)\n\n\nclass FeatureReducer(object):\n \"\"\" Removes the features (columns) from the supplied DataFrame according to the\n function 'reduce_func'.\n\n The default use case is about removing the features, which have a very small weight\n and won't be useful for classification tasks.\n\n Feature weighting is implemented using ExtraTreesClassifier.\n \"\"\"\n def __init__(self, df_features, df_targets, reduce_func=None):\n self.df_features = df_features\n self.df_targets = df_targets\n self.reduce_func = reduce_func\n self.dropped_cols = []\n\n def reduce(self, n_estimators=10):\n total_dropped = 0\n self.dropped_cols = []\n\n if self.reduce_func is not None:\n clf = ExtraTreesClassifier(n_estimators)\n clf.fit(self.df_features, self.df_targets).transform(self.df_features)\n\n for i in range(len(clf.feature_importances_)):\n if self.reduce_func(clf.feature_importances_[i]):\n total_dropped += 1\n logging.info(\"FeatureReducer: dropping column \\'\" +\n self.df_features.columns.values[i] + \"\\'\")\n self.dropped_cols.append(self.df_features.columns[i])\n\n [self.df_features.drop(c, axis=1, inplace=True) for c in self.dropped_cols]\n return total_dropped\n\n def print_weights(self, n_estimators=10):\n clf = ExtraTreesClassifier(n_estimators)\n clf.fit(self.df_features, self.df_targets).transform(self.df_features)\n [print(\"Feature \\'\" + self.df_features.columns.values[i] + \" has weight \" +\n clf.feature_importances_[i]) for i in range(len(clf.feature_importances_))]\n\n\nclass CrossValidator(object):\n \"\"\" Thin wrapper around 'cross_val_score' method of Scikit-Learn.\n \"\"\"\n def __init__(self, estimator, df_features, df_targets, cv=5):\n 
self.scores = np.empty\n self.estimator = estimator\n self.df_features = df_features\n self.df_targets = df_targets\n self.cv = cv\n\n def cross_validate(self):\n self.scores = cross_val_score(self.estimator, self.df_features, self.df_targets, cv=self.cv)\n return self.scores\n\n def print_summary(self):\n if self.scores.size == 0:\n print(\"No data, please execute 'cross_validate' at first.\")\n else:\n print(\"Cross-validation summary for \" + self.estimator.__class__.__name__)\n print(\"Mean score: %0.2f (+/- %0.2f)\" % (self.scores.mean(), self.scores.std() * 2))\n [print(\"Score #\" + i + \": %0.2f\", self.scores[i]) for i in range(len(self.scores))]\n\n\nclass CvEstimatorSelector(object):\n \"\"\"Executes the cross-validation procedures to discover the best performing estimator\n from the supplied ones.\n\n The best estimator is selected according to the highest mean score.\n \"\"\"\n def __init__(self, df_features, df_targets, cv=5):\n self.scores = {}\n self.estimators = {}\n self.df_features = df_features\n self.df_targets = df_targets\n self.cv = cv\n self.selected_name = None\n\n def add_estimator(self, name, instance):\n self.estimators[name] = instance\n\n def select_estimator(self):\n self.selected_name = None\n largest_val = 0\n\n for name in self.estimators:\n c_val = CrossValidator(self.estimators[name], self.df_features, self.df_targets, self.cv)\n self.scores[name] = c_val.cross_validate().mean()\n logging.info(\"Mean score for \\'\" + name + \"\\' estimator is \" + str(self.scores[name]))\n if largest_val < self.scores[name]:\n largest_val = self.scores[name]\n self.selected_name = name\n\n return self.selected_name\n\n def print_summary(self):\n if self.selected_name is None:\n print(\"No data, please execute 'select_estimator' at first.\")\n else:\n print(\"Selection summary based on the cross-validation of \" +\n str(len(self.estimators)) + \" estimators.\")\n print(\"Selected estimator \\'\" + self.selected_name +\n \"\\' with \" + str(self.scores[self.selected_name]) + \" mean score.\")\n print(\"Other scores ...\")\n [print(\"Estimator \\'\" + n + \" \\' has mean score \" +\n str(self.scores[n])) for n in self.estimators if (n != self.selected_name)]\n\n\nclass GridSearchEstimatorSelector(object):\n \"\"\"Thin wrapper around GridSearchCV class of Scikit-Learn for discovering\n the best performing estimator.\n \"\"\"\n def __init__(self, df_features, df_targets, cv=5):\n self.scores = {}\n self.estimators = {}\n self.df_features = df_features\n self.df_targets = df_targets\n self.cv = cv\n self.selected_name = None\n self.best_estimator = None\n\n def add_estimator(self, name, instance, params):\n self.estimators[name] = {'instance': instance, 'params': params}\n\n def select_estimator(self):\n self.selected_name = None\n largest_val = 0\n\n for name in self.estimators:\n est = self.estimators[name]\n clf = GridSearchCV(est['instance'], est['params'], cv=self.cv)\n clf.fit(self.df_features, self.df_targets)\n self.scores[name] = clf.best_score_\n logging.info(\"Best score for \\'\" + name + \"\\' estimator is \" + str(clf.best_score_))\n if largest_val < self.scores[name]:\n largest_val = self.scores[name]\n self.selected_name = name\n self.best_estimator = clf.best_estimator_\n\n return self.selected_name\n\n def print_summary(self):\n if self.selected_name is None:\n print(\"No data, please execute 'select_estimator' at first.\")\n else:\n print(\"Selection summary based on GridSearchCV and \" +\n str(len(self.estimators)) + \" estimators.\")\n print(\"Selected 
estimator \\'\" + self.selected_name +\n \"\\' with \" + str(self.scores[self.selected_name]) + \" mean score.\")\n print(self.best_estimator)\n print(\"\\nOther scores ...\")\n [print(\"Estimator \\'\" + n + \"\\' has mean score \" +\n str(self.scores[n])) for n in self.estimators.keys() if (n != self.selected_name)]\n\n\nclass KnnDtwClassifier(BaseEstimator, ClassifierMixin):\n \"\"\"Custom classifier implementation for Scikit-Learn using Dynamic Time Warping (DTW)\n and KNN (K-Nearest Neighbors) algorithms.\n\n This classifier can be used for labeling the varying-length sequences, like time series\n or motion data.\n\n FastDTW library is used for faster DTW calculations - linear instead of quadratic complexity.\n \"\"\"\n def __init__(self, n_neighbors=1):\n self.n_neighbors = n_neighbors\n self.features = []\n self.labels = []\n\n def get_distance(self, x, y):\n return fastdtw(x, y)[0]\n\n def fit(self, X, y=None):\n for index, l in enumerate(y):\n self.features.append(X[index])\n self.labels.append(l)\n return self\n\n def predict(self, X):\n dist = np.array([self.get_distance(X, seq) for seq in self.features])\n indices = dist.argsort()[:self.n_neighbors]\n return np.array(self.labels)[indices]\n\n def predict_ext(self, X):\n dist = np.array([self.get_distance(X, seq) for seq in self.features])\n indices = dist.argsort()[:self.n_neighbors]\n return (dist[indices],\n indices)\n\n\nclass CommonClassifier(object):\n \"\"\"Helper class to execute the common classification workflow - from training to prediction\n to metrics reporting with the popular ML algorithms, like SVM or Random Forest.\n\n Includes the default list of estimators with instances and parameters, which have been\n proven to work well.\n \"\"\"\n def __init__(self, default=True, cv=5, reduce_func=None):\n self.cv = cv\n self.default = default\n self.reduce_func = reduce_func\n self.reducer = None\n self.grid_search = None\n\n def add_estimator(self, name, instance, params):\n self.grid_search.add_estimator(name, instance, params)\n\n def fit(self, X, y=None):\n if self.default:\n self.grid_search = GridSearchEstimatorSelector(X, y, self.cv)\n self.grid_search.add_estimator('SVC', SVC(), {'kernel': [\"linear\", \"rbf\"],\n 'C': [1, 5, 10, 50],\n 'gamma': [0.0, 0.001, 0.0001]})\n self.grid_search.add_estimator('RandomForestClassifier', RandomForestClassifier(),\n {'n_estimators': [5, 10, 20, 50]})\n self.grid_search.add_estimator('ExtraTreeClassifier', ExtraTreesClassifier(),\n {'n_estimators': [5, 10, 20, 50]})\n self.grid_search.add_estimator('LogisticRegression', LogisticRegression(),\n {'C': [1, 5, 10, 50], 'solver': [\"lbfgs\", \"liblinear\"]})\n self.grid_search.add_estimator('SGDClassifier', SGDClassifier(),\n {'n_iter': [5, 10, 20, 50], 'alpha': [0.0001, 0.001],\n 'loss': [\"hinge\", \"modified_huber\",\n \"huber\", \"squared_hinge\", \"perceptron\"]})\n\n if self.reduce_func is not None:\n self.reducer = FeatureReducer(X, y, self.reduce_func)\n self.reducer.reduce(10)\n\n return self.grid_search.select_estimator()\n\n def print_fit_summary(self):\n return self.grid_search.print_summary()\n\n def predict(self, X):\n if self.grid_search.selected_name is not None:\n if self.reduce_func is not None and len(self.reducer.dropped_cols) > 0:\n X.drop(self.reducer.dropped_cols, axis=1, inplace=True)\n return self.grid_search.best_estimator.predict(X)\n else:\n return None\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai
|
notebooks/manning/U4-M1-Preparing TensorFlow models.ipynb
|
mit
|
[
"Preparing a TensorFlow model",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\nimport matplotlib.pyplot as plt\n\nimport pandas as pd\nprint(pd.__version__)\n\nimport tensorflow as tf\ntf.logging.set_verbosity(tf.logging.ERROR)\nprint(tf.__version__)\n\n# let's see what compute devices we have available, hopefully a GPU \nsess = tf.Session()\ndevices = sess.list_devices()\nfor d in devices:\n print(d.name)\n\n# a small sane check, does tf seem to work ok?\nhello = tf.constant('Hello TF!')\nprint(sess.run(hello))",
"Loading and validating our model",
"!curl -O https://raw.githubusercontent.com/DJCordhose/ai/master/notebooks/manning/model/insurance.hdf5\n\nmodel = tf.keras.models.load_model('insurance.hdf5')",
"Descison Boundaries for 2 Dimensions",
"# a little sane check, does it work at all?\n\n# within this code, we expect Olli to be a green customer with a high prabability\n# 0: red\n# 1: green\n# 2: yellow\n\nolli_data = [100, 47, 10]\n\nX = np.array([olli_data])\nmodel.predict(X)",
"Converting our Keras Model to the Alternative High-Level Estimator Model",
"# https://cloud.google.com/blog/products/gcp/new-in-tensorflow-14-converting-a-keras-model-to-a-tensorflow-estimator\nestimator_model = tf.keras.estimator.model_to_estimator(keras_model=model)\n\n# it still works the same, with a different style of API, though\nx = {\"hidden1_input\": X}\nlist(estimator_model.predict(input_fn=tf.estimator.inputs.numpy_input_fn(x, shuffle=False)))",
"Preparing our model for serving\n\nhttps://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/saved_model/README.md\nhttps://www.tensorflow.org/serving/serving_basic",
"!rm -rf tf\n\nimport os\n\nexport_path_base = 'tf'\nversion = 1\nexport_path = os.path.join(\n tf.compat.as_bytes(export_path_base),\n tf.compat.as_bytes(str(version)))\n\ntf.keras.backend.set_learning_phase(0)\nsess = tf.keras.backend.get_session()\n\nclassification_inputs = tf.saved_model.utils.build_tensor_info(model.input)\nclassification_outputs_scores = tf.saved_model.utils.build_tensor_info(model.output)\n\nsignature = tf.saved_model.signature_def_utils.build_signature_def(\n inputs={'inputs': classification_inputs},\n outputs={'scores': classification_outputs_scores},\n method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME)\n\nbuilder = tf.saved_model.builder.SavedModelBuilder(export_path)\nbuilder.add_meta_graph_and_variables(\n sess, [tf.saved_model.tag_constants.SERVING],\n signature_def_map={\n tf.saved_model.signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY: signature\n })\nbuilder.save()\n\ndel model\n\ndel estimator_model\n\nimport gc\ngc.collect()\n\ntf.keras.backend.clear_session()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
liufuyang/ManagingBigData_MySQL_DukeUniv
|
week4/MySQL_Exercise_11_Queries_that_Test_Relationships_Between_Test_Completion_and_Dog_Characterisitcs.ipynb
|
mit
|
[
"Copyright Jana Schaich Borg/Attribution-NonCommercial 4.0 International (CC BY-NC 4.0)\nMySQL Exercise 11: Queries that Test Relationships Between Test Completion and Dog Characteristics\nThis lesson we are going to integrate all the SQL syntax we've learned so far to start addressing questions in our Dognition Analysis Plan. I summarized the reasons having an analysis plan is so important in the \"Start with an Analysis Plan\" video accompanying this week's materials. Analysis plans ensure that you will address questions that are relevant to your business objectives as quickly and efficiently as possible. The quickest way to narrow in the factors in your analysis plan that are likely to create new insights is to combine simple SQL calculations with visualization programs, like Tableau, to identify which factors under consideration have the strongest effects on the business metric you are tasked with improving. You can then design more nuanced statistical models in other software, such as R, based on the factors you have confirmed are likely to be important for understanding and changing your business metric. \n<img src=\"https://duke.box.com/shared/static/davndrvd4jb1awwuq6sd1rgt0ck4o8nm.jpg\" width=400 alt=\"SELECT FROM WHERE ORDER BY\" />\nI describe a method for designing analysis plans in the Data Visualization and Communication with Tableau course earlier in this Specialization. I call that method Structured Pyramid Analysis Plans, or \"sPAPs\". I have provided a skeleton of an sPAP for the Dognition data set with the materials for this course that I will use as a road map for the queries we will design and practice in the next two lessons. To orient you, the SMART goal of the analysis project is at the top of the pyramid. This is a specific, measurable, attainable, relevant, and time-bound version of the general project objective, which is to make a recommendation to Dognition about what they could do to increase the number of tests customers complete. The variables you will use to assess the goal should be filled out right under where the SMART goal is written. Then under those variables, you will see ever-widening layers of categories and sub-categories of issues that will be important to analyze in order to achieve your SMART goal. \nIn this lesson, we will write queries to address the issues in the left-most branch of the sPAP. These issues all relate to \"Features of Dogs\" that could potentially influence the number of tests the dogs will ultimately complete. We will spend a lot of time discussing and practicing how to translate analysis questions described in words into queries written in SQL syntax.\nTo begin, load the sql library and database, and make the Dognition database your default database:",
"%load_ext sql\n%sql mysql://studentuser:studentpw@mysqlserver/dognitiondb\n%sql USE dognitiondb\n\n%config SqlMagic.displaylimit=25",
"<img src=\"https://duke.box.com/shared/static/p2eucjdttai08eeo7davbpfgqi3zrew0.jpg\" width=600 alt=\"SELECT FROM WHERE\" />\n1. Assess whether Dognition personality dimensions are related to the number of tests completed\nThe first variable in the Dognition sPAP we want to investigate is Dognition personality dimensions. Recall from the \"Meet Your Dognition Data\" video and the written description of the Dognition Data Set included with the Week 2 materials that Dognition personality dimensions represent distinct combinations of characteristics assessed by the Dognition tests. It is certainly plausible that certain personalities of dogs might be more or less likely to complete tests. For example, \"einstein\" dogs might be particularly likely to complete a lot of tests. \nTo test the relationship between Dognition personality dimensions and test completion totals, we need a query that will output a summary of the number of tests completed by dogs that have each of the Dognition personality dimensions. The features you will need to include in your query are foreshadowed by key words in this sentence. First, the fact that you need a summary of the number of tests completed suggests you will need an aggregation function. Next, the fact that you want a different summary for each personality dimension suggests that you will need a GROUP BY clause. Third, the fact that you need a \"summary of the number of tests completed\" rather than just a \"summary of the tests completed\" suggests that you might have to have multiple stages of aggegrations, which in turn might mean that you will need to use a subquery.\nLet's build the query step by step.\nQuestion 1: To get a feeling for what kind of values exist in the Dognition personality dimension column, write a query that will output all of the distinct values in the dimension column. Use your relational schema or the course materials to determine what table the dimension column is in. Your output should have 11 rows.",
"%%sql\nSELECT DISTINCT dimension\nFROM dogs",
"The results of the query above illustrate there are NULL values (indicated by the output value \"none\") in the dimension column. Keep that in mind in case it is relevant to future queries. \nWe want a summary of the total number of tests completed by dogs with each personality dimension. In order to calculate those summaries, we first need to calculate the total number of tests completed by each dog. We can achieve this using a subquery. The subquery will require data from both the dogs and the complete_tests table, so the subquery will need to include a join. We are only interested in dogs who have completed tests, so an inner join is appropriate in this case.\nQuestion 2: Use the equijoin syntax (described in MySQL Exercise 8) to write a query that will output the Dognition personality dimension and total number of tests completed by each unique DogID. This query will be used as an inner subquery in the next question. LIMIT your output to 100 rows for troubleshooting purposes.",
"%%sql\nSELECT d.dog_guid AS dogID, d.dimension AS dimension, count(c.created_at) AS\nnumtests\nFROM dogs d, complete_tests c\nWHERE d.dog_guid=c.dog_guid\nGROUP BY dogID",
"Question 3: Re-write the query in Question 2 using traditional join syntax (described in MySQL Exercise 8).",
"%%sql\nSELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests\nFROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\nGROUP BY d.dog_guid",
"Now we need to summarize the total number of tests completed by each unique DogID within each Dognition personality dimension. To do this we will need to choose an appropriate aggregation function for the count column of the query we just wrote. \nQuestion 4: To start, write a query that will output the average number of tests completed by unique dogs in each Dognition personality dimension. Choose either the query in Question 2 or 3 to serve as an inner query in your main query. If you have trouble, make sure you use the appropriate aliases in your GROUP BY and SELECT statements.",
"%%sql\nSELECT t.dimension AS dimension, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.dimension",
"You should retrieve an output of 11 rows with one of the dimensions labeled \"None\" and another labeled \"\" (nothing is between the quotation marks).\nQuestion 5: How many unique DogIDs are summarized in the Dognition dimensions labeled \"None\" or \"\"? (You should retrieve values of 13,705 and 71)",
"%%sql\nSELECT t.dimension AS dimension, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.dimension",
"It makes sense there would be many dogs with NULL values in the dimension column, because we learned from Dognition that personality dimensions can only be assigned after the initial \"Dognition Assessment\" is completed, which is comprised of the first 20 Dognition tests. If dogs did not complete the first 20 tests, they would retain a NULL value in the dimension column.\nThe non-NULL empty string values are more curious. It is not clear where those values would come from. \nQuestion 6: To determine whether there are any features that are common to all dogs that have non-NULL empty strings in the dimension column, write a query that outputs the breed, weight, value in the \"exclude\" column, first or minimum time stamp in the complete_tests table, last or maximum time stamp in the complete_tests table, and total number of tests completed by each unique DogID that has a non-NULL empty string in the dimension column.",
"%%sql\nSELECT d.dog_guid AS dogID, d.dimension AS dimension, d.breed, d.weight, d.exclude, \n MIN(ct.created_at), MAX(ct.created_at), COUNT(ct.created_at)\nFROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\nWHERE dimension=''\nGROUP BY d.dog_guid",
"A quick inspection of the output from the last query illustrates that almost all of the entries that have non-NULL empty strings in the dimension column also have \"exclude\" flags of 1, meaning that the entries are meant to be excluded due to factors monitored by the Dognition team. This provides a good argument for excluding the entire category of entries that have non-NULL empty strings in the dimension column from our analyses.\nQuestion 7: Rewrite the query in Question 4 to exclude DogIDs with (1) non-NULL empty strings in the dimension column, (2) NULL values in the dimension column, and (3) values of \"1\" in the exclude column. NOTES AND HINTS: You cannot use a clause that says d.exclude does not equal 1 to remove rows that have exclude flags, because Dognition clarified that both NULL values and 0 values in the \"exclude\" column are valid data. A clause that says you should only include values that are not equal to 1 would remove the rows that have NULL values in the exclude column, because NULL values are never included in equals statements (as we learned in the join lessons). In addition, although it should not matter for this query, practice including parentheses with your OR and AND statements that accurately reflect the logic you intend. Your results should return 402 DogIDs in the ace dimension and 626 dogs in the charmer dimension.",
"%%sql\nSELECT t.dimension AS dimension, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE (d.dimension != '' AND d.dimension != 'None') AND (d.exclude=0 OR d.exclude IS NULL)\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.dimension",
"The results of Question 7 suggest there are not appreciable differences in the number of tests completed by dogs with different Dognition personality dimensions. Although these analyses are not definitive on their own, these results suggest focusing on Dognition personality dimensions will not likely lead to significant insights about how to improve Dognition completion rates.\n2. Assess whether dog breeds are related to the number of tests completed\nThe next variable in the Dognition sPAP we want to investigate is Dog Breed. We will run one analysis with Breed Group and one analysis with Breed Type.\nFirst, determine how many distinct breed groups there are.\nQuestions 8: Write a query that will output all of the distinct values in the breed_group field.",
"%%sql\nSELECT DISTINCT d.breed_group\nFROM dogs d",
"You can see that there are NULL values in the breed_group field. Let's examine the properties of these entries with NULL values to determine whether they should be excluded from our analysis.\nQuestion 9: Write a query that outputs the breed, weight, value in the \"exclude\" column, first or minimum time stamp in the complete_tests table, last or maximum time stamp in the complete_tests table, and total number of tests completed by each unique DogID that has a NULL value in the breed_group column.",
"%%sql\nSELECT d.dog_guid AS dogID, d.dimension AS dimension, d.breed, d.weight, d.exclude, \n MIN(ct.created_at), MAX(ct.created_at), COUNT(ct.created_at)\nFROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\nWHERE d.breed_group IS NULL\nGROUP BY d.dog_guid",
"There are a lot of these entries and there is no obvious feature that is common to all of them, so at present, we do not have a good reason to exclude them from our analysis. Therefore, let's move on to question 10 now....\nQuestion 10: Adapt the query in Question 7 to examine the relationship between breed_group and number of tests completed. Exclude DogIDs with values of \"1\" in the exclude column. Your results should return 1774 DogIDs in the Herding breed group.",
"%%sql\nSELECT t.breed_group AS breed_group, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.breed_group AS breed_group, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.breed_group",
"The results show there are non-NULL entries of empty strings in breed_group column again. Ignoring them for now, Herding and Sporting breed_groups complete the most tests, while Toy breed groups complete the least tests. This suggests that one avenue an analyst might want to explore further is whether it is worth it to target marketing or certain types of Dognition tests to dog owners with dogs in the Herding and Sporting breed_groups. Later in this lesson we will discuss whether using a median instead of an average to summarize the number of completed tests might affect this potential course of action. \nQuestion 11: Adapt the query in Question 10 to only report results for Sporting, Hound, Herding, and Working breed_groups using an IN clause.",
"%%sql\nSELECT t.breed_group AS breed_group, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.breed_group AS breed_group, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.breed_group IN ('Sporting', 'Hound', 'Herding', 'Working')\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.breed_group",
"Next, let's examine the relationship between breed_type and number of completed tests. \nQuestions 12: Begin by writing a query that will output all of the distinct values in the breed_type field.",
"%%sql\nSELECT DISTINCT d.breed_type\nFROM dogs d",
"Question 13: Adapt the query in Question 7 to examine the relationship between breed_type and number of tests completed. Exclude DogIDs with values of \"1\" in the exclude column. Your results should return 8865 DogIDs in the Pure Breed group.",
"%%sql\nSELECT t.breed_type AS breed_type, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.exclude=0 OR d.exclude IS NULL\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.breed_type",
"There does not appear to be an appreciable difference between number of tests completed by dogs of different breed types.\n3. Assess whether dog breeds and neutering are related to the number of tests completed\nTo explore the results we found above a little further, let's run some queries that relabel the breed_types according to \"Pure_Breed\" and \"Not_Pure_Breed\". \nQuestion 14: For each unique DogID, output its dog_guid, breed_type, number of completed tests, and use a CASE statement to include an extra column with a string that reads \"Pure_Breed\" whenever breed_type equals 'Pure Breed\" and \"Not_Pure_Breed\" whenever breed_type equals anything else. LIMIT your output to 50 rows for troubleshooting.",
"%%sql\nSELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests,\n CASE d.breed_type\n WHEN 'Pure Breed' THEN 'Pure_Breed'\n ELSE 'Not_Pure_Breed'\n END AS pure_or_not\nFROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.exclude=0 OR d.exclude IS NULL\nGROUP BY d.dog_guid",
"Question 15: Adapt your queries from Questions 7 and 14 to examine the relationship between breed_type and number of tests completed by Pure_Breed dogs and non_Pure_Breed dogs. Your results should return 8336 DogIDs in the Not_Pure_Breed group.",
"%%sql\nSELECT t.pure_or_not AS pure_or_not, COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, COUNT(ct.created_at) AS numtests,\n CASE d.breed_type\n WHEN 'Pure Breed' THEN 'Pure_Breed'\n ELSE 'Not_Pure_Breed'\n END AS pure_or_not\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.exclude=0 OR d.exclude IS NULL\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.pure_or_not",
"Question 16: Adapt your query from Question 15 to examine the relationship between breed_type, whether or not a dog was neutered (indicated in the dog_fixed field), and number of tests completed by Pure_Breed dogs and non_Pure_Breed dogs. There are DogIDs with null values in the dog_fixed column, so your results should have 6 rows, and the average number of tests completed by non-pure-breeds who are neutered is 10.5681.",
"%%sql\nSELECT t.pure_or_not AS pure_or_not, t.neutered_or_not AS neutered_or_not,\n COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests\nFROM\n (SELECT d.dog_guid AS dogID, COUNT(ct.created_at) AS numtests,\n CASE d.breed_type\n WHEN 'Pure Breed' THEN 'Pure_Breed'\n ELSE 'Not_Pure_Breed'\n END AS pure_or_not,\n CASE d.dog_fixed\n WHEN 1 THEN 'neutered'\n WHEN 0 THEN 'not_neutered'\n ELSE 'None'\n END AS neutered_or_not\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.exclude=0 OR d.exclude IS NULL\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.pure_or_not, t.neutered_or_not\n\n%%sql\nSELECT numtests_per_dog.pure_breed AS pure_breed, neutered,\nAVG(numtests_per_dog.numtests) AS avg_tests_completed, COUNT(DISTINCT dogID)\nFROM( SELECT d.dog_guid AS dogID, d.breed_group AS breed_type, d.dog_fixed AS\nneutered,\nCASE WHEN d.breed_type='Pure Breed' THEN 'pure_breed'\nELSE 'not_pure_breed'\nEND AS pure_breed,\ncount(c.created_at) AS numtests\nFROM dogs d JOIN complete_tests c\nON d.dog_guid=c.dog_guid\nWHERE d.exclude IS NULL OR d.exclude=0\nGROUP BY dogID) AS numtests_per_dog\nGROUP BY pure_breed, neutered;",
"These results suggest that although a dog's breed_type doesn't seem to have a strong relationship with how many tests a dog completed, neutered dogs, on average, seem to finish 1-2 more tests than non-neutered dogs. It may be fruitful to explore further whether this effect is consistent across different segments of dogs broken up according to other variables. If the effects are consistent, the next step would be to seek evidence that could clarify whether neutered dogs are finishing more tests due to traits that arise when a dog is neutered, or instead, whether owners who are more likely to neuter their dogs have traits that make it more likely they will want to complete more tests.\n4. Other dog features that might be related to the number of tests completed, and a note about using averages as summary metrics\nTwo other dog features included in our sPAP were speed of game completion and previous behavioral training. Examing the relationship between the speed of game completion and number of games completed is best achieved through creating a scatter plot with a best fit line and/or running a statistical regression analysis. It is possible to achieve the statistical regression analysis through very advanced SQL queries, but the strategy that would be required is outside the scope of this course. Therefore, I would recommend exporting relevant data to a program like Tableau, R, or Matlab in order to assess the relationship between the speed of game completion and number of games completed. \nUnfortunately, there is no field available in the Dognition data that is relevant to a dog's previous behavioral training, so more data would need to be collected to examine whether previous behavioral training is related to the number of Dognition tests completed.\nOne last issue I would like to address in this lesson is the issue of whether an average is a good summary to use to represent the values of a certain group. Average calculations are very sensitive to extreme values, or outliers, in the data. This video provides a nice demonstration of how sensitive averages can be:\nhttp://www.statisticslectures.com/topics/outliereffects/\nIdeally, you would summarize the data in a group using a median calculation when you either don't know the distribution of values in your data or you already know that outliers are present (the definition of median is covered in the video above). Unfortunately, medians are more computationally intensive than averages, and there is no pre-made function that allows you to calculate medians using SQL. If you wanted to calculate the median, you would need to use an advanced strategy such as the ones described here:\nhttps://www.periscopedata.com/blog/medians-in-sql.html\nDespite the fact there is no simple way to calculate medians using SQL, there is a way to get a hint about whether average values are likely to be wildly misleading. As described in the first video (http://www.statisticslectures.com/topics/outliereffects/), strong outliers lead to large standard deviation values. Fortunately, we CAN calculate standard deviations in SQL easily using the STDDEV function. Therefore, it is good practice to include standard deviation columns with your outputs so that you have an idea whether the average values outputted by your queries are trustworthy. 
Whenever standard deviations are a significant portion of the average values of a field, and certainly when standard deviations are larger than the average values of a field, it's a good idea to export your data to a program that can handle more sophisticated statistical analyses before you interpret any results too strongly. \nLet's practice including standard deviations in our queries and interpreting their values.\nQuestion 17: Adapt your query from Question 7 to include a column with the standard deviation for the number of tests completed by each Dognition personality dimension.",
"%%sql\nSELECT t.dimension AS dimension, \n COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests,\n STDDEV(t.numtests) AS std_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.dimension AS dimension, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE (d.dimension != '' AND d.dimension != 'None') AND (d.exclude=0 OR d.exclude IS NULL)\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.dimension",
"The standard deviations are all around 20-25% of the average values of each personality dimension, and they are not appreciably different across the personality dimensions, so the average values are likely fairly trustworthy. Let's try calculating the standard deviation of a different measurement.\nQuestion 18: Write a query that calculates the average amount of time it took each dog breed_type to complete all of the tests in the exam_answers table. Exclude negative durations from the calculation, and include a column that calculates the standard deviation of durations for each breed_type group:",
"%%sql\nSELECT t.breed_type AS breed_type, \n COUNT(DISTINCT t.dogID) AS num_dog_guid, AVG(t.numtests) AS avg_numtests, STDDEV(t.numtests) AS std_numtests\nFROM\n (SELECT d.dog_guid AS dogID, d.breed_type AS breed_type, COUNT(ct.created_at) AS numtests\n FROM dogs d JOIN complete_tests ct on d.dog_guid=ct.dog_guid\n WHERE d.exclude=0 OR d.exclude IS NULL\n GROUP BY d.dog_guid)\n AS t\nGROUP BY t.breed_type",
"This time many of the standard deviations have larger magnitudes than the average duration values. This suggests there are outliers in the data that are significantly impacting the reported average values, so the average values are not likely trustworthy. These data should be exported to another program for more sophisticated statistical analysis.\nIn the next lesson, we will write queries that assess the relationship between testing circumstances and the number of tests completed. Until then, feel free to practice any additional queries you would like to below!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
carlosclavero/PySimplex
|
Documentation/Tutorial SimplexSolver con Python.ipynb
|
gpl-3.0
|
[
"SimplexSolver\nEn este tutorial, veremos como se puede resolver un problema de programación lineal, utilizando el software SimplexSolver. Para una correcta ejecución del sistema, por favor, consulte el manual de instalación presente en la misma localización que este manual.\nEl primer paso, será crear un archivo de texto que contenga el problema. Por ejemplo, imaginemos que nuestro problema de programación lineal, será el siguiente:\nEl problema deberá tener dentro del archivo la siguiente apariencia:\nComo podemos ver, lo único que se debe tener en cuenta es, que hay que rellenar con 0, aquellos lugares que corresponden a las variables que no aparecen, bien en la restricción o bien en la función. No se deben incluir, tampoco los nombres de las variables, ni de la función.\nPor otro lado,como vemos, se pueden incluir dentro del archivo comentarios, que comiencen bien por \"//\" o bien por \"#\". Estos comentarios, podrían ir en una única línea o al final de una de las líneas del problema. Por supuesto los comentarios, no son necesarios, sino que pueden incluirse si se desea.\nUna vez creado el archivo, simplemente lo guardamos dentro del directorio, donde tengamos el programa SimplexSolver.py. Una vez esté todo guardado en el mismo directorio, mediante línea de comandos accedemos a dicho directorio (cd ruta del directorio).\nUna vez dentro del directorio, solo nos queda ejecutar el programa. Vamos a suponer que el archivo donde guardamos el problema se llama file.txt. Tenemos varias posibilidades para obtener la solución. Las posibilidades son las siguientes(el resultado de las ejecuciones se mostrará en su propia consola de comandos):\n\n\nSi ejecutamos, el siguiente comando, simplemente se nos ofrecerá la solución del problema, acompañado de una frase que indica el tipo de solución del problema:\npython SimplexSolver.py --input file.txt\n\n\nA continuación, se puede ver la salida(nótese que en su ejecución debe cambiar \"%run\", por python y que en su caso no será necesarion poner la ruta completa de SimplexSolver, puesto que previamenta ya habrá accedido a dicha localización). Tenga en cuenta que deberá incluir la ruta del archivo en la ejecución, siempre que este no se encuentre en la misma localización que SimplexSolver.py:",
"%run ..\\PySimplex\\SimplexSolver.py --input ..\\Files\\file.txt ",
"Mediante la siguiente ejecución, podemos obtener también las solución, pero además obtendremos el desarrollo completo del problema:\npython SimplexSolver.py --input file.txt --expl\n\n\nA continuación, se puede ver la salida(nótese que en su ejecución debe cambiar \"%run\", por python):",
"%run ..\\PySimplex\\SimplexSolver.py --input ..\\Files\\file1.txt --expl",
"El siguiente comando, nos permite guardar la solución(bien con el desarrollo completo o solo con la solución) del problema en un archivo. El nombre del archivo se le indica mediante --output, y aparecerá en el directorio donde tengamos guardado SimplexSolver.py a no ser que indiquemos otra localización:\npython SimplexSolver.py --input archivo.txt --expl --output out.txt\n\n\nA continuación, se puede ver la salida(nótese que en su ejecución debe cambiar \"%run\", por python):",
"%run ..\\PySimplex\\SimplexSolver.py --input ..\\Files\\file1.txt --output out.txt",
"Mediante el siguiente comando, además de lo que nos proporcionaban las anteriores ejecuciones, se puede añadir la solución gráfica(solo se puede obtener cuando el problema tiene dos variables):\n python SimplexSolver.py --input file.txt --expl --graphic\n\nA continuación, se puede ver la salida(nótese que en su ejecución debe cambiar \"%run\", por python):",
"%matplotlib inline\n%run ..\\PySimplex\\SimplexSolver.py --input ..\\Files\\file2.txt --graphic",
"Por último, también podemos pedir que se nos muestre la solución del problema dual, al que estamos resolviendo. Para ello, simplemente tendremos que incluir --dual, al final de nuestra ejecución:\npython SimplexSolver.py --input file.txt --output out.txt --dual\no bien,\npython SimplexSolver.py --input file.txt --dual\n\n\nAquí se muestra una ejecución de lo anterior:",
"%run ..\\PySimplex\\SimplexSolver.py --input ..\\Files\\file.txt --dual",
"Se ha incluído también una ayuda que permite, recordar cuál es el significado de cada uno de los parámetros de entrada, así como, cuál es la forma de ejecución del programa. Para visualizar esta ayuda, simplemente se debe ejecutar\npython SimplexSolver.py --help\nA continuación se puede visualizar dicha ayuda:",
"%run ..\\PySimplex\\SimplexSolver.py --help"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tuanavu/coursera-university-of-washington
|
machine_learning/2_regression/assignment/week6/week-6-local-regression-assignment-exercise.ipynb
|
mit
|
[
"Predicting house prices using k-nearest neighbors regression\nIn this notebook, you will implement k-nearest neighbors regression. You will:\n * Find the k-nearest neighbors of a given query input\n * Predict the output for the query input using the k-nearest neighbors\n * Choose the best value of k using a validation set\nFire up GraphLab Create",
"import sys\nsys.path.append('C:\\Anaconda2\\envs\\dato-env\\Lib\\site-packages')\nimport graphlab",
"Load in house sales data\nFor this notebook, we use a subset of the King County housing dataset created by randomly selecting 40% of the houses in the full dataset.",
"sales = graphlab.SFrame('kc_house_data_small.gl/')",
"Import useful functions from previous notebooks\nTo efficiently compute pairwise distances among data points, we will convert the SFrame into a 2D Numpy array. First import the numpy library and then copy and paste get_numpy_data() from the second notebook of Week 2.",
"import numpy as np # note this allows us to refer to numpy as np instead\n\ndef get_numpy_data(data_sframe, features, output):\n data_sframe['constant'] = 1 # this is how you add a constant column to an SFrame\n # add the column 'constant' to the front of the features list so that we can extract it along with the others:\n features = ['constant'] + features # this is how you combine two lists\n # select the columns of data_SFrame given by the features list into the SFrame features_sframe (now including constant):\n features_sframe = data_sframe[features]\n # the following line will convert the features_SFrame into a numpy matrix:\n feature_matrix = features_sframe.to_numpy()\n # assign the column of data_sframe associated with the output to the SArray output_sarray\n output_sarray = data_sframe[output]\n # the following will convert the SArray into a numpy array by first converting it to a list\n output_array = output_sarray.to_numpy()\n return(feature_matrix, output_array)",
"We will also need the normalize_features() function from Week 5 that normalizes all feature columns to unit norm. Paste this function below.",
"def normalize_features(feature_matrix):\n norms = np.linalg.norm(feature_matrix, axis=0)\n normalized_features = feature_matrix/norms\n return (normalized_features, norms)",
"Split data into training, test, and validation sets",
"(train_and_validation, test) = sales.random_split(.8, seed=1) # initial train/test split\n(train, validation) = train_and_validation.random_split(.8, seed=1) # split training set into training and validation sets",
"Extract features and normalize\nUsing all of the numerical inputs listed in feature_list, transform the training, test, and validation SFrames into Numpy arrays:",
"feature_list = ['bedrooms', \n 'bathrooms', \n 'sqft_living', \n 'sqft_lot', \n 'floors',\n 'waterfront', \n 'view', \n 'condition', \n 'grade', \n 'sqft_above', \n 'sqft_basement',\n 'yr_built', \n 'yr_renovated', \n 'lat', \n 'long', \n 'sqft_living15', \n 'sqft_lot15']\nfeatures_train, output_train = get_numpy_data(train, feature_list, 'price')\nfeatures_test, output_test = get_numpy_data(test, feature_list, 'price')\nfeatures_valid, output_valid = get_numpy_data(validation, feature_list, 'price')",
"In computing distances, it is crucial to normalize features. Otherwise, for example, the sqft_living feature (typically on the order of thousands) would exert a much larger influence on distance than the bedrooms feature (typically on the order of ones). We divide each column of the training feature matrix by its 2-norm, so that the transformed column has unit norm.\nIMPORTANT: Make sure to store the norms of the features in the training set. The features in the test and validation sets must be divided by these same norms, so that the training, test, and validation sets are normalized consistently.",
"features_train, norms = normalize_features(features_train) # normalize training set features (columns)\nfeatures_test = features_test / norms # normalize test set by training set norms\nfeatures_valid = features_valid / norms # normalize validation set by training set norms",
"Compute a single distance\nTo start, let's just explore computing the \"distance\" between two given houses. We will take our query house to be the first house of the test set and look at the distance between this house and the 10th house of the training set.\nTo see the features associated with the query house, print the first row (index 0) of the test feature matrix. You should get an 18-dimensional vector whose components are between 0 and 1.",
"print features_test[0]",
"Now print the 10th row (index 9) of the training feature matrix. Again, you get an 18-dimensional vector with components between 0 and 1.",
"print features_train[9]",
"QUIZ QUESTION \nWhat is the Euclidean distance between the query house and the 10th house of the training set? \nNote: Do not use the np.linalg.norm function; use np.sqrt, np.sum, and the power operator (**) instead. The latter approach is more easily adapted to computing multiple distances at once.\n\nSlide 16\n\nEuclidean distance:\n$distance(x_j, x_q) \\sqrt{a_1(x_j[1]-x_q[1])^2 + ... + a_d(x_j[d]-x_q[d])^2)}$",
"euclidean_distance = np.sqrt(np.sum((features_train[9] - features_test[0])**2))\nprint euclidean_distance",
"Compute multiple distances\nOf course, to do nearest neighbor regression, we need to compute the distance between our query house and all houses in the training set. \nTo visualize this nearest-neighbor search, let's first compute the distance from our query house (features_test[0]) to the first 10 houses of the training set (features_train[0:10]) and then search for the nearest neighbor within this small set of houses. Through restricting ourselves to a small set of houses to begin with, we can visually scan the list of 10 distances to verify that our code for finding the nearest neighbor is working.\nWrite a loop to compute the Euclidean distance from the query house to each of the first 10 houses in the training set.",
"dist_dict = {}\nfor i in range(0,10):\n dist_dict[i] = np.sqrt(np.sum((features_train[i] - features_test[0])**2))\n print (i, np.sqrt(np.sum((features_train[i] - features_test[0])**2)))",
"QUIZ QUESTION \nAmong the first 10 training houses, which house is the closest to the query house?",
"print min(dist_dict.items(), key=lambda x: x[1]) ",
"It is computationally inefficient to loop over computing distances to all houses in our training dataset. Fortunately, many of the Numpy functions can be vectorized, applying the same operation over multiple values or vectors. We now walk through this process.\nConsider the following loop that computes the element-wise difference between the features of the query house (features_test[0]) and the first 3 training houses (features_train[0:3]):",
"for i in xrange(3):\n print features_train[i]-features_test[0]\n # should print 3 vectors of length 18",
"The subtraction operator (-) in Numpy is vectorized as follows:",
"print features_train[0:3] - features_test[0]",
"Note that the output of this vectorized operation is identical to that of the loop above, which can be verified below:",
"# verify that vectorization works\nresults = features_train[0:3] - features_test[0]\nprint results[0] - (features_train[0]-features_test[0])\n# should print all 0's if results[0] == (features_train[0]-features_test[0])\nprint results[1] - (features_train[1]-features_test[0])\n# should print all 0's if results[1] == (features_train[1]-features_test[0])\nprint results[2] - (features_train[2]-features_test[0])\n# should print all 0's if results[2] == (features_train[2]-features_test[0])",
"Aside: it is a good idea to write tests like this cell whenever you are vectorizing a complicated operation.\nPerform 1-nearest neighbor regression\nNow that we have the element-wise differences, it is not too hard to compute the Euclidean distances between our query house and all of the training houses. First, write a single-line expression to define a variable diff such that diff[i] gives the element-wise difference between the features of the query house and the i-th training house.",
"diff = features_train - features_test[0]",
"To test the code above, run the following cell, which should output a value -0.0934339605842:",
"print diff[-1].sum() # sum of the feature differences between the query and last training house\n# should print -0.0934339605842",
"The next step in computing the Euclidean distances is to take these feature-by-feature differences in diff, square each, and take the sum over feature indices. That is, compute the sum of square feature differences for each training house (row in diff).\nBy default, np.sum sums up everything in the matrix and returns a single number. To instead sum only over a row or column, we need to specifiy the axis parameter described in the np.sum documentation. In particular, axis=1 computes the sum across each row.\nBelow, we compute this sum of square feature differences for all training houses and verify that the output for the 16th house in the training set is equivalent to having examined only the 16th row of diff and computing the sum of squares on that row alone.",
"print np.sum(diff**2, axis=1)[15] # take sum of squares across each row, and print the 16th sum\nprint np.sum(diff[15]**2) # print the sum of squares for the 16th row -- should be same as above",
"With this result in mind, write a single-line expression to compute the Euclidean distances between the query house and all houses in the training set. Assign the result to a variable distances.\nHint: Do not forget to take the square root of the sum of squares.",
"distances = np.sqrt(np.sum(diff**2, axis=1))",
"To test the code above, run the following cell, which should output a value 0.0237082324496:",
"print distances[100] # Euclidean distance between the query house and the 101th training house\n# should print 0.0237082324496",
"Now you are ready to write a function that computes the distances from a query house to all training houses. The function should take two parameters: (i) the matrix of training features and (ii) the single feature vector associated with the query.",
"def compute_distances(train_matrix, query_vector):\n diff = train_matrix - query_vector\n distances = np.sqrt(np.sum(diff**2, axis=1))\n return distances",
"QUIZ QUESTIONS \n\nTake the query house to be third house of the test set (features_test[2]). What is the index of the house in the training set that is closest to this query house?\nWhat is the predicted value of the query house based on 1-nearest neighbor regression?",
"third_house_distance = compute_distances(features_train, features_test[2])\nprint third_house_distance.argsort()[:1], min(third_house_distance)\nprint third_house_distance[382]\n\nprint np.argsort(third_house_distance, axis = 0)[:4]\n\nprint output_train[382]",
"Perform k-nearest neighbor regression\nFor k-nearest neighbors, we need to find a set of k houses in the training set closest to a given query house. We then make predictions based on these k nearest neighbors.\nFetch k-nearest neighbors\nUsing the functions above, implement a function that takes in\n * the value of k;\n * the feature matrix for the training houses; and\n * the feature vector of the query house\nand returns the indices of the k closest training houses. For instance, with 2-nearest neighbor, a return value of [5, 10] would indicate that the 6th and 11th training houses are closest to the query house.\nHint: Look at the documentation for np.argsort.",
"def compute_k_nearest_neighbors(k, features_matrix, feature_vector):\n distances = compute_distances(features_matrix, feature_vector)\n return np.argsort(distances, axis = 0)[:k]",
"QUIZ QUESTION \nTake the query house to be third house of the test set (features_test[2]). What are the indices of the 4 training houses closest to the query house?",
"print compute_k_nearest_neighbors(4, features_train, features_test[2])",
"Make a single prediction by averaging k nearest neighbor outputs\nNow that we know how to find the k-nearest neighbors, write a function that predicts the value of a given query house. For simplicity, take the average of the prices of the k nearest neighbors in the training set. The function should have the following parameters:\n * the value of k;\n * the feature matrix for the training houses;\n * the output values (prices) of the training houses; and\n * the feature vector of the query house, whose price we are predicting.\nThe function should return a predicted value of the query house.\nHint: You can extract multiple items from a Numpy array using a list of indices. For instance, output_train[[6, 10]] returns the prices of the 7th and 11th training houses.",
"def compute_distances_k_avg(k, features_matrix, output_values, feature_vector):\n k_neigbors = compute_k_nearest_neighbors(k, features_matrix, feature_vector)\n avg_value = np.mean(output_values[k_neigbors])\n return avg_value \n ",
"QUIZ QUESTION \nAgain taking the query house to be third house of the test set (features_test[2]), predict the value of the query house using k-nearest neighbors with k=4 and the simple averaging method described and implemented above.",
"print compute_distances_k_avg(4, features_train, output_train, features_test[2])",
"Compare this predicted value using 4-nearest neighbors to the predicted value using 1-nearest neighbor computed earlier.\nMake multiple predictions\nWrite a function to predict the value of each and every house in a query set. (The query set can be any subset of the dataset, be it the test set or validation set.) The idea is to have a loop where we take each house in the query set as the query house and make a prediction for that specific house. The new function should take the following parameters:\n * the value of k;\n * the feature matrix for the training houses;\n * the output values (prices) of the training houses; and\n * the feature matrix for the query set.\nThe function should return a set of predicted values, one for each house in the query set.\nHint: To get the number of houses in the query set, use the .shape field of the query features matrix. See the documentation.",
"print features_test[0:10].shape[0]\n\ndef compute_distances_k_all(k, features_matrix, output_values, feature_vector):\n num_of_rows = feature_vector.shape[0]\n predicted_values = []\n for i in xrange(num_of_rows):\n avg_value = compute_distances_k_avg(k, features_train, output_train, features_test[i])\n predicted_values.append(avg_value)\n return predicted_values",
"QUIZ QUESTION \nMake predictions for the first 10 houses in the test set using k-nearest neighbors with k=10. \n\nWhat is the index of the house in this query set that has the lowest predicted value? \nWhat is the predicted value of this house?",
"predicted_values = compute_distances_k_all(10, features_train, output_train, features_test[0:10])\nprint predicted_values\nprint predicted_values.index(min(predicted_values))\n\nprint min(predicted_values)",
"Choosing the best value of k using a validation set\nThere remains a question of choosing the value of k to use in making predictions. Here, we use a validation set to choose this value. Write a loop that does the following:\n\nFor k in [1, 2, ..., 15]:\nMakes predictions for each house in the VALIDATION set using the k-nearest neighbors from the TRAINING set.\nComputes the RSS for these predictions on the VALIDATION set\nStores the RSS computed above in rss_all\n\n\nReport which k produced the lowest RSS on VALIDATION set.\n\n(Depending on your computing environment, this computation may take 10-15 minutes.)",
"rss_all = []\nfor k in range(1,16): \n predict_value = compute_distances_k_all(k, features_train, output_train, features_valid)\n residual = (output_valid - predict_value)\n rss = sum(residual**2)\n rss_all.append(rss)\n\nprint rss_all\n\nprint rss_all.index(min(rss_all))",
"To visualize the performance as a function of k, plot the RSS on the VALIDATION set for each considered k value:",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nkvals = range(1, 16)\nplt.plot(kvals, rss_all,'bo-')",
"QUIZ QUESTION \nWhat is the RSS on the TEST data using the value of k found above? To be clear, sum over all houses in the TEST set.",
"predict_value = compute_distances_k_all(14, features_train, output_train, features_test)\nresidual = (output_test - predict_value)\nrss = sum(residual**2)\nprint rss\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quantumlib/Cirq
|
docs/tutorials/google/identifying_hardware_changes.ipynb
|
apache-2.0
|
[
"##### Copyright 2021 The Cirq Developers\n\n#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Identifying Hardware Changes\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/tutorials/google/identifying_hardware_changes\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/Cirq/blob/master/docs/tutorials/google/identifying_hardware_changes.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/Cirq/blob/master/docs/tutorials/google/identifying_hardware_changes.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/Cirq/docs/tutorials/google/identifying_hardware_changes.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>\n\nYou've run your circuit with Google's Quantum Computing Service and you're getting results that unexpectedly differ from those you saw when you ran your experiment last week. What's the cause of this and what can you do about it? \nYour experience may be due to changes in the device that have occurred since the most recent maintenance Calibration. Every few days, the QCS devices are calibrated for the highest performance across all of their available qubits and operations. However, in the hours or days since the most recent maintenance calibration, the performance of the device hardware may have changed significantly, affecting your circuit's results. \nThe rest of this tutorial will describe these hardware changes, demonstrate how to collect error metrics for identifying if changes have occurred, and provide some examples of how you can compare your metric results to select the most performant qubits for your circuit. \nFor more further reading on qubit picking methodology, see the Best Practices guide and Qubit Picking with Loschmidt Echoes tutorial. The method presented in the Loschmidt Echoes tutorial is an alternative way to identify hardware changes.\nHardware Changes\nThe device hardware changes occur in both the qubits themselves and the control electronics used to drive gates and measure the state of the qubits. As analog devices, both the qubits and control electronics are subject to interactions with their environment that manifest as a meaningful change to the qubits gate or readout fidelity.\nQuantum processors based on frequency tunable superconducting qubits use a direct current (DC) bias current to set the frequency of the qubits' $|0\\rangle$ state to $|1\\rangle$ state transition. These DC biases are generated by classical analog control electronics, where resistors and other components can be affected by environmental temperature changes in an interaction called thermal drift. Uncompensated thermal drift results in a change in the qubit's transition frequency, which can cause unintended state transitions in the qubits during circuit execution or incorrect readout of the qubits' state. These manifest as changes to the error rates associated with gate and readout operations.\nAdditionally, the qubits may unexpectedly couple to other local energy systems and exchange energy with or lose energy to them. 
Because a qubit is only able to identify the presence of two levels in the parasitic local system, these interacting states are often referred to as two-level systems (TLS). While the exact physical origin of these states is unknown, defects in the hardware materials are a plausible explanation. It has been observed that interactions with these TLS can result in coherence fluctuations in time and frequency, again causing unintended state transitions or incorrect readouts, affecting error rates.\nFor more information on DC Bias and TLS and how they affect the devices, see arXiv:1809.01043.\nQubit Error Metrics\nThere are many Calibration Metrics available to measure gate and readout error rates and see if they have changed. The Visualizing Calibration Metrics tutorial demonstrates how to collect and visualize each of these available metrics. You can apply the comparison methods presented in this tutorial to any such metric, but the examples below focus on the two following metrics:\n\ntwo_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle: This metric captures the estimated probability for the quantum state on two neighboring qubits to depolarize (as if a Pauli gate was applied to either or both qubits) after applying an $\\sqrt{i\\mathrm{SWAP}}$ gate. This metric includes some coherent error like the error introduced by control hardware. This metric is computed using Cross Entropy Benchmarking (XEB) during maintenance calibration and in this tutorial.\nparallel_p11_error: This metric estimates the probability for a readout register to correctly measure a $|1\\rangle$ state on a qubit that was prepared to be in the $|1\\rangle$ state. The Simultaneous Readout experiment used to collect this metric evaluates all of the qubits in parallel/simultaneously.\n\nNote: The two-qubit metric uses Pauli error, which has two other multiplicatively-related variants: Average error and Incoherent error.\nDisclaimer: The data shown in this tutorial is an example and not representative of the QCS in production.\nData Collection\nSetup\nFirst, install Cirq and import the necessary packages.\nNote: this notebook relies on unreleased Cirq features. If you want to try these features, make sure you install cirq via pip install cirq --pre.",
"try:\n import cirq\nexcept ImportError:\n !pip install --quiet cirq --pre\n\nimport matplotlib.pyplot as plt\nimport networkx as nx\nimport numpy as np\n\nimport cirq\nimport cirq_google as cg",
"Next, authorize to use the Quantum Computing Service with a project_id and processor_id, and get a sampler to run your experiments. Set the number of repetitions you'll use for all experiments.\nNote: You can select a subset of the qubits to shorten the runtime of the experiment.\nNote: You need to input a real QCS project_id and processor_id in the next cell. Otherwise, the code will assume you're running with a simulator, causing issues later.",
"from cirq_google.engine.qcs_notebook import get_qcs_objects_for_notebook\n\n# Set key variables\nproject_id = \"your_project_id_here\" #@param {type:\"string\"}\nprocessor_id = \"your_processor_id_here\" #@param {type:\"string\"}\nrepetitions = 2000 #@param {type:\"integer\"}\n\n# Get device sampler\nqcs_objects = get_qcs_objects_for_notebook(project_id=project_id, processor_id=processor_id)\n\ndevice = qcs_objects.device\nsampler = qcs_objects.sampler\n\n# Get qubit set\nqubits = device.qubit_set()\n\n# Limit device qubits to only those before row/column `device_limit`\ndevice_limit = 10 #@param {type:\"integer\"}\nqubits = {qb for qb in qubits if qb.row<device_limit and qb.col<device_limit}\n\n# Visualize the qubits on a grid by putting them in a throwaway device object used only for this print statement\nprint(cg.devices.XmonDevice(0,0,0,qubits))",
"Maintenance Calibration Data\nQuery for the calibration data with cirq_google.get_engine_calibration, select the two metrics by name from the calibration object, and visualize them with its plot() method.",
"# Retreive maintenance calibration data.\ncalibration = cg.get_engine_calibration(processor_id=processor_id)\n\n# Heatmap the two metrics.\ntwo_qubit_gate_metric = \"two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle\" #@param {type:\"string\"}\nreadout_metric = \"parallel_p11_error\" #@param {type:\"string\"}\n\n# Plot heatmaps with integrated histogram\ncalibration.plot(two_qubit_gate_metric, fig=plt.figure(figsize=(22, 10)))\ncalibration.plot(readout_metric, fig=plt.figure(figsize=(22, 10)))",
"You may have already seen this existing maintenance calibration data when you did qubit selection in the first place. Next, you'll run device characterization experiments to collect the same data metrics from the device, to see if their values have changed since the previous calibration.\nCurrent Two-Qubit Metric Data with XEB\nThis section is a shortened version of the Parallel XEB tutorial, which runs characterization experiments to collect data for the two_qubit_parallel_sqrt_iswap_gate_xeb_pauli_error_per_cycle metric. First, generate a library of two qubit circuits using the $\\sqrt{i\\mathrm{SWAP}}$ gate . These circuits will be run in parallel in larger circuits according to combinations_by_layer.",
"\"\"\"Setup for parallel XEB experiment.\"\"\"\nfrom cirq.experiments import random_quantum_circuit_generation as rqcg\nfrom itertools import combinations\n\nrandom_seed = 52\n\n# Generate library of two-qubit XEB circuits.\ncircuit_library = rqcg.generate_library_of_2q_circuits(\n n_library_circuits=20, \n two_qubit_gate=cirq.SQRT_ISWAP,\n random_state=random_seed,\n)\n\ndevice_graph = nx.Graph((q1,q2) for (q1,q2) in combinations(qubits, 2) if q1.is_adjacent(q2))\n\n# Generate different possible pairs of qubits, and randomly assign circuit (indices) to then, n_combinations times. \ncombinations_by_layer = rqcg.get_random_combinations_for_device(\n n_library_circuits=len(circuit_library),\n n_combinations=10,\n device_graph=device_graph,\n random_state=random_seed,\n)\n# Prepare the circuit depths the circuits will be truncated to. \ncycle_depths = np.arange(3, 100, 20)",
"Then, run the circuits on the device, combining them into larger circuits and truncating the circuits by length, with cirq.experiments.xeb_sampling.sample_2q_xeb_circuits. \nAfterwards, run the same circuits on a perfect simulator, and compare them to the sampled results. Finally, fit the collected data to an exponential decay curve to estimate the error rate per appication of each two-qubit $\\sqrt{i\\mathrm{SWAP}}$ gate.",
"\"\"\"Collect all data by executing circuits.\"\"\"\nfrom cirq.experiments.xeb_sampling import sample_2q_xeb_circuits\nfrom cirq.experiments.xeb_fitting import benchmark_2q_xeb_fidelities, fit_exponential_decays\n\n# Run XEB circuits on the processor.\nsampled_df = sample_2q_xeb_circuits(\n sampler=sampler,\n circuits=circuit_library,\n cycle_depths=cycle_depths,\n combinations_by_layer=combinations_by_layer,\n shuffle=np.random.RandomState(random_seed),\n repetitions=repetitions,\n)\n\n# Run XEB circuits on a simulator and fit exponential decays to get fidelities.\nfidelity_data = benchmark_2q_xeb_fidelities(\n sampled_df=sampled_df,\n circuits=circuit_library,\n cycle_depths=cycle_depths,\n)\nfidelities = fit_exponential_decays(fidelity_data)\n\n#Grab (pair, sqrt_iswap_pauli_error_per_cycle) data for all qubit pairs.\npxeb_results = {\n pair: (1.0 - fidelity) / (4 / 3) #Scalar to get Pauli error\n for (_, _, pair), fidelity in fidelities.layer_fid.items()\n}",
"Note: The parallel XEB errors are scaled in pxeb_results. This is because the collected fidelities are the estimated depolarization fidelities, not the Pauli error metrics available from the calibration data. See the XEB Theory tutorial for an explanation why, and Calibration Metrics for more information on the difference between these values. \nCurrent Readout Metric Data with Simultaneous Readout\nTo evaluate performance changes in the readout registers, collect the Parallel P11 error data for each qubit with the Simultaneous Readout experiment, accessible with estimate_parallel_single_qubit_readout_errors. This function runs the experiment to estimate P00 and P11 errors for each qubit (as opposed to querying for the most recent calibration data). The experiment prepares each qubit in the $|0\\rangle$ and $|1\\rangle$ states, measures them, and evaluates how often the qubits are measured in the expected state.",
"# Run experiment\nsq_result = cirq.estimate_parallel_single_qubit_readout_errors(sampler, qubits=qubits, repetitions=repetitions)\n\n# Use P11 errors\np11_results = sq_result.one_state_errors",
"Heatmap Comparisons\nFor each metric, plot the calibration and collected characterization data side by side, on the same scale. Also plot the difference between the two datasets (on a different scale). \nTwo-Qubit Metric Heatmap Comparison",
"from matplotlib.colors import LogNorm\n\n# Plot options. You may need to change these if you data shows a lot of the same colors.\nvmin = 5e-3\nvmax = 3e-2\noptions = {\"norm\": LogNorm()}\nformat = \"0.3f\"\n\nfig, (ax1,ax2,ax3) = plt.subplots(ncols=3, figsize=(30, 9))\n\n# Calibration two qubit data\ncalibration.heatmap(two_qubit_gate_metric).plot(\n ax=ax1, title=\"Calibration\", vmin=vmin, vmax=vmax, \n collection_options=options, annotation_format=format,\n)\n# Current two qubit data\ncirq.TwoQubitInteractionHeatmap(pxeb_results).plot(\n ax=ax2, title=\"Current\", vmin=vmin, vmax=vmax, \n collection_options=options, annotation_format=format,\n)\n\n# Calculate difference in two-qubit metric\ntwoq_diffs = {}\nfor pair,calibration_err in calibration[two_qubit_gate_metric].items():\n # The order of the qubits in the result dictionary keys is sometimes swapped. Eg: (Q1,Q2):0.04 vs (Q2,Q1):0.06 \n if pair in pxeb_results:\n characterization_err = pxeb_results[pair]\n else:\n characterization_err = pxeb_results[tuple(reversed(pair))]\n twoq_diffs[pair] = characterization_err - calibration_err[0]\n\n# Two qubit difference data\ncirq.TwoQubitInteractionHeatmap(twoq_diffs).plot(\n ax=ax3, title='Difference in Two Qubit Metrics',\n annotation_format=format,\n)\n\n# Add titles\nplt.figtext(0.5,0.97, two_qubit_gate_metric.replace(\"_\",\" \").title(), ha=\"center\", va=\"top\", fontsize=14)",
"The large numbers of zero and below values (green and darker colors) in the difference heatmap indicate that the device's two-qubit $\\sqrt{i\\mathrm{SWAP}}$ gates have improved noticeably across the device. In fact, only a couple qubit pairs towards the bottom of the device have worsened since the previous calibration. \nYou should try to make use of the qubit pairs $(Q(5,2),Q(5,3))$ and $(Q(5,1),Q(6,1))$, which were previously average but have become the most reliable $\\sqrt{i\\mathrm{SWAP}}$ gates in the device. \nQubit pairs $(Q(6,2),Q(7,2))$, $(Q(7,2),Q(7,3))$ and especially $(Q(6,4),Q(7,4))$, were the worst qubit pairs on the device, but have improved so significantly that they are within an acceptable range of $0.010$ to $0.016$ Pauli error. You may not need to avoid them now, if you were previously.\nIt's important to note that, if you have the option to use a consistently high reliablity qubit or qubit pair, instead of one that demonstrates inconsistent performance, you should do so. For example, qubit pairs $(Q(5,1),Q(5,2))$ and $(Q(5,2),Q(6,2))$ have not changed much, are still around $0.010$ Pauli error, and happen to be near the other two good qubit pairs mentioned earlier, making them a good candidates for inclusion. \nReadout Metric Heatmap Comparisons",
"# Plot options, with different vmin and vmax for readout data.\nvmin = 3e-2\nvmax = 1.1e-1\noptions = {\"norm\": LogNorm()}\nformat = \"0.3f\"\n\nfig, (ax1,ax2,ax3) = plt.subplots(ncols=3, figsize=(30, 9))\n\n# Calibration readout data\ncalibration.heatmap(readout_metric).plot(\n ax=ax1, title=\"Calibration\", vmin=vmin, vmax=vmax, \n collection_options=options, annotation_format=format,\n)\n\n# Current readout data\ncirq.Heatmap(p11_results).plot(\n ax=ax2, title=\"Current\", vmin=vmin, vmax=vmax, \n collection_options=options, annotation_format=format,\n)\n\n# Collect difference in readout metrics\nreadout_diffs = {q[0]:p11_results[q[0]] - err[0] for q,err in calibration[readout_metric].items()}\n\n# Readout difference data\ncirq.Heatmap(readout_diffs).plot(\n ax=ax3, title='Difference in Readout Metrics',\n annotation_format=format,\n)\n\n# Add title\nplt.figtext(0.5,0.97, readout_metric.replace(\"_\",\" \").title(), ha=\"center\", va=\"top\", fontsize=14)",
"The readout data demonstrates demonstrates more varying results than the two-qubit data. Many of the qubits have not changed significantly, but a few have, by a large margin. \nQubit $Q(5,0)$ has improved massively, but is still among the least reliable qubits on the device for readouts. Qubit $Q(7,2)$ has also improved, but is still quite high in Pauli error. Qubit $Q(5,4)$ was previously one of the best qubits to perform readout on, but has since deteriorated to be the second worst since the most recent maintenance calibration. Qubits $Q(7,3)$ and $Q(9,4)$ have improved meaningfully, becoming some of the best readout qubits available.\nAgain, it is valuable to find reliable qubits that didn't demonstrate significant change. In this case qubits $Q(4,1)$, $Q(4,2)$, and $Q(5,1)$ have not changed much but remain among the better qubits available for readout. \nNote: If your collected characterization data demonstrates that the device has changed vastly more than shown in these examples, the system may have become too far from the last calibration to be reasonably usable. In this case, please email the quantum engine support team to let them know. \nWhat's Next?\nYou've selected better candidate qubits for your circuit, based on updated information about the device. What else can you do for further improvements? \n\nYou need to map your actual circuit's logical qubits to your selected hardware qubits. This is in general a difficult problem, and the best solution can depend on the specific structure of the circuit to be run. Take a look at the Qubit Picking with Loschmidt Echoes tutorial, which estimates the error rates of gates for your specific circuit. Also, consider Best Practices#Qubit picking for additional advice on this.\nThe Optimization, Alignment, and Spin Echoes tutorial provides resources on how you can improve the reliability of your circuit by: optimizing away redundant or low-impact gates, aligning gates into moments with others of the same type, and preventing decay on idle qubits with by adding spin echoes. \nOther than for qubit picking, you should also use calibration for error compensation. The XEB and Coherent Error, XEB Calibration Example, Parallel XEB and Isolated XEB tutorials demonstrate how to run a classical optimizer on collected two-qubit gate characterization data, identity the true unitary matrix implemented by each gate, and add Virtual Pauli Z gates to compensate for the identified error, improving the reliability of your circuit.\nYou are also free to use the characterization data to improve the performance of large batches of experiment circuits. In this case you'd want to prepare your characterization ahead of running all your circuits, and use the data to compensate each circuit, right before running them. See Calibration FAQ for more information."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AllenDowney/ThinkBayes2
|
examples/august_soln.ipynb
|
mit
|
[
"Think Bayes\nThis notebook presents code and exercises from Think Bayes, second edition.\nCopyright 2018 Allen B. Downey\nMIT License: https://opensource.org/licenses/MIT",
"# Configure Jupyter so figures appear in the notebook\n%matplotlib inline\n\n# Configure Jupyter to display the assigned value after an assignment\n%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'\n\nimport math\nimport numpy as np\nimport pandas as pd\n\nfrom thinkbayes2 import Pmf, Cdf, Suite, Joint\nimport thinkplot",
"The August birthday problem\nThis article:\nAttention Deficit–Hyperactivity Disorder and Month of School Enrollment\nFinds:\n\nThe rate of claims-based ADHD diagnosis among children in states with a September 1 cutoff was 85.1 per 10,000 children (309 cases among 36,319 children; 95% confidence interval [CI], 75.6 to 94.2) among those born in August and 63.6 per 10,000 children (225 cases among 35,353 children; 95% CI, 55.4 to 71.9) among those born in September, an absolute difference of 21.5 per 10,000 children (95% CI, 8.8 to 34.0); the corresponding difference in states without the September 1 cutoff was 8.9 per 10,000 children (95% CI, −14.9 to 20.8). The rate of ADHD treatment was 52.9 per 10,000 children (192 of 36,319 children; 95% CI, 45.4 to 60.3) among those born in August and 40.4 per 10,000 children (143 of 35,353 children; 95% CI, 33.8 to 47.1) among those born in September, an absolute difference of 12.5 per 10,000 children (95% CI, 2.43 to 22.4). These differences were not observed for other month-to-month comparisons, nor were they observed in states with non-September cutoff dates for starting kindergarten. In addition, in states with a September 1 cutoff, no significant differences between August-born and September-born children were observed in rates of asthma, diabetes, or obesity.\n\nIt includes this figure:\n\nHowever, there is an error in this figure, confirmed by personal correspondence:\n\nThe May and June [diagnoses] are reversed. May should be 317 (not 287) and June should be 287 (not 317).\n\nBased on this corrected data, what can we say about the probability of diagnosis as a function of birth month?\nWhat can we say about the rate of misdiagnosis?\nHere's the data from the table.",
"totals = np.array([32690, 31238, 34405, 34565, 34977, 34415, \n 36577, 36319, 35353, 34405, 31285, 31617])\n\ndiagnosed = np.array([265, 280, 307, 312, 317, 287, \n 320, 309, 225, 240, 232, 243])",
"I'll roll the data so September comes first.",
"totals = np.roll(totals, -8)\ndiagnosed = np.roll(diagnosed, -8)",
"Here are the diagnosis rates, which we can check against the rates in the table.",
"rates = diagnosed / totals * 10000\nnp.round(rates, 1)",
"Here's what the rates look like as a function of months after the September cutoff.",
"xs = np.arange(12)\nthinkplot.plot(xs, rates)\nthinkplot.decorate(xlabel='Months after cutoff',\n ylabel='Diagnosis rate per 10,000')",
"For the first 9 months, from September to May, we see what we would expect if at least some of the excess diagnoses are due to behavioral differences due to age. For each month of difference in age, we see an increase in the number of diagnoses.\nThis pattern breaks down for the last three months, June, July, and August. This might be explained by random variation, but it also might be due to parental manipulation; if some parents hold back students born near the deadline, the observations for these month would include a mixture of children who are relatively old for their grade, and therefore less likely to be diagnosed.\nWe could test this hypothesis by checking the actual ages of these students when they started school, rather than just looking at their months of birth.\nI'll use a beta distribution to compute the posterior credible interval for each of these rates.",
"import scipy.stats\n\npcount = 1\nres = []\nfor (x, d, t) in zip(xs, diagnosed, totals):\n a = d + pcount\n b = t-d + pcount\n ci = scipy.stats.beta(a, b).ppf([0.025, 0.975])\n res.append(ci * 10000)",
"By transposing the results, we can get them into two arrays for plotting.",
"low, high = np.transpose(res)\n\nlow\n\nhigh",
"Here's what the plot looks like with error bars.",
"import matplotlib.pyplot as plt\n\ndef errorbar(xs, low, high, **options):\n for x, l, h in zip(xs, low, high):\n plt.vlines(x, l, h, **options)\n\nerrorbar(xs, low, high, color='gray', alpha=0.7)\nthinkplot.plot(xs, rates)\nthinkplot.decorate(xlabel='Months after cutoff',\n ylabel='Diagnosis rate per 10,000')",
"It seems like the lower rates in the last 3 months are unlikely to be due to random variation, so it might be good to investigate the effect of \"red shirting\".\nBut for now I will proceed with a linear logistic model. The following table shows log odds of diagnosis for each month, which I will use to lay out a grid for parameter estimation.",
"from scipy.special import expit, logit\n\nfor (x, d, t) in zip(xs, diagnosed, totals):\n print(x, logit(d/t))",
"Here's a Suite that estimates the parameters of a logistic regression model, b0 and b1.",
"class August(Suite, Joint):\n \n def Likelihood(self, data, hypo):\n x, d, t = data\n b0, b1 = hypo\n \n p = expit(b0 + b1 * x)\n like = scipy.stats.binom.pmf(d, t, p)\n \n return like",
"The prior distributions are uniform over a grid that covers the most likely values.",
"from itertools import product\n\nb0 = np.linspace(-4.75, -5.1, 101)\nb1 = np.linspace(-0.05, 0.05, 101)\nhypos = product(b0, b1)\n\nsuite = August(hypos);",
"Here's the update.",
"for data in zip(xs, diagnosed, totals):\n suite.Update(data)",
"Here's the posterior marginal distribution for b0.",
"pmf0 = suite.Marginal(0)\nb0 = pmf0.Mean()\nprint(b0)\nthinkplot.Pdf(pmf0)\n\nthinkplot.decorate(title='Posterior marginal distribution',\n xlabel='Intercept log odds (b0)',\n ylabel='Pdf')",
"And the posterior marginal distribution for b1.",
"pmf1 = suite.Marginal(1)\nb1 = pmf1.Mean()\nprint(b1)\nthinkplot.Pdf(pmf1)\n\nthinkplot.decorate(title='Posterior marginal distribution',\n xlabel='Slope log odds (b0)',\n ylabel='Pdf')",
"Let's see what the posterior regression lines look like, superimposed on the data.",
"for i in range(100):\n b0, b1 = suite.Random()\n ys = expit(b0 + b1 * xs) * 10000\n thinkplot.plot(xs, ys, color='green', alpha=0.01)\n \nerrorbar(xs, low, high, color='gray', alpha=0.7)\nthinkplot.plot(xs, rates)\n\nthinkplot.decorate(xlabel='Months after cutoff',\n ylabel='Diagnosis rate per 10,000')",
"Most of these regression lines fall within the credible intervals of the observed rates, so in that sense it seems like this model is not ruled out by the data.\nBut it is clear that the lower rates in the last 3 months bring down the estimated slope, so we should probably treat the estimated effect size as a lower bound.\nTo express the results more clearly, we can look at the posterior predictive distribution for the difference between a child born in September and one born in August:",
"def posterior_predictive(x):\n pmf = Pmf()\n\n for (b0, b1), p in suite.Items():\n base = expit(b0 + b1 * x) * 10000\n pmf[base] += p\n \n return pmf",
"Here are posterior predictive CDFs for diagnosis rates.",
"pmf0 = posterior_predictive(0)\nthinkplot.Cdf(pmf0.MakeCdf(), label='September')\n\npmf1 = posterior_predictive(11)\nthinkplot.Cdf(pmf1.MakeCdf(), label='August')\n\nthinkplot.decorate(title='Posterior predictive distribution',\n xlabel='Diagnosis rate per 10,000',\n ylabel='CDF')\n\npmf0.Mean()",
"And we can compute the posterior predictive distribution for the difference.",
"def posterior_predictive_diff():\n pmf = Pmf()\n \n for (b0, b1), p in suite.Items():\n p0 = expit(b0) * 10000\n p1 = expit(b0 + b1 * 11) * 10000\n diff = p1 - p0\n pmf[diff] += p\n \n return pmf\n\npmf_diff = posterior_predictive_diff()\nthinkplot.Cdf(pmf_diff.MakeCdf())\n\nthinkplot.decorate(title='Posterior predictive distribution',\n xlabel='11 month increase in diagnosis rate per 10,000',\n ylabel='CDF')",
"To summarize, we can compute the mean and 95% credible interval for this difference.",
"pmf_diff.Mean()\n\npmf_diff.CredibleInterval(95)",
"A difference of 21 diagnoses, on a base rate of 71 diagnoses, is an increase of 30% (18%, 42%)",
"pmf_diff.Mean() / pmf0.Mean()\n\npmf_diff.CredibleInterval(95) / pmf0.Mean()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
telescopeuser/workshop_blog
|
wechat_tool/lesson_4.ipynb
|
mit
|
[
"from IPython.display import YouTubeVideo\nYouTubeVideo('4m44aPkLY2k')",
"如何使用和开发微信聊天机器人的系列教程\nA workshop to develop & use an intelligent and interactive chat-bot in WeChat\nWeChat is a popular social media app, which has more than 800 million monthly active users.\n<img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style=\"float: right;\">\n<img src='reference/WeChat_SamGu_QR.png' width=10% style=\"float: right;\">\nhttp://www.KudosData.com\nby: Sam.Gu@KudosData.com\nMay 2017 ========== Scan the QR code to become trainer's friend in WeChat ========>>\n第四课:自然语言处理:语义和情感分析\nLesson 4: Natural Language Processing 2\n\n消息文字中名称实体的识别 (Name-Entity detection)\n消息文字中语句的情感分析 (Sentiment analysis, Sentence level)\n整篇消息文字的情感分析 (Sentiment analysis, Document level)\n语句的语法分析 (Syntax / Grammar analysis)\n\nFlag to indicate the environment to run this program:",
"# parm_runtime_env_GCP = True\nparm_runtime_env_GCP = False",
"Using Google Cloud Platform's Machine Learning APIs\nFrom the same API console, choose \"Dashboard\" on the left-hand menu and \"Enable API\".\nEnable the following APIs for your project (search for them) if they are not already enabled:\n<ol>\n<li> Google Translate API </li>\n<li> Google Cloud Vision API </li>\n<li> Google Natural Language API </li>\n<li> Google Cloud Speech API </li>\n</ol>\n\nFinally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)",
"# Copyright 2016 Google Inc.\n# Licensed under the Apache License, Version 2.0 (the \"License\"); \n# import subprocess\n# retcode = subprocess.call(['pip', 'install', '-U', 'google-api-python-client'])\n# retcode = subprocess.call(['pip', 'install', '-U', 'gTTS'])\n\n# Below is for GCP only: install audio conversion tool\n# retcode = subprocess.call(['apt-get', 'update', '-y'])\n# retcode = subprocess.call(['apt-get', 'install', 'libav-tools', '-y'])",
"导入需要用到的一些功能程序库:",
"import io, os, subprocess, sys, re, codecs, time, datetime, requests, itchat\nfrom itchat.content import *\nfrom googleapiclient.discovery import build",
"GCP Machine Learning API Key\nFirst, visit <a href=\"http://console.cloud.google.com/apis\">API console</a>, choose \"Credentials\" on the left-hand menu. Choose \"Create Credentials\" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo.\nCopy-paste your API Key here:",
"# Here I read in my own API_KEY from a file, which is not shared in Github repository:\nwith io.open('../../API_KEY.txt') as fp: \n for line in fp: APIKEY = line\n\n# You need to un-comment below line and replace 'APIKEY' variable with your own GCP API key:\n# APIKEY='AIzaSyCvxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx'\n\n# Below is for Google Speech synthesis: text to voice API\n# from gtts import gTTS\n\n# Below is for Google Speech recognition: voice to text API\n# speech_service = build('speech', 'v1', developerKey=APIKEY)\n\n# Below is for Google Language Tranlation API\n# service = build('translate', 'v2', developerKey=APIKEY)\n\n# Below is for Google Natual Language Processing API\n# nlp_service = build('language', 'v1', developerKey=APIKEY)\nnlp_service = build('language', 'v1beta2', developerKey=APIKEY)",
"多媒体二进制base64码转换 (Define media pre-processing functions)",
"# Import the base64 encoding library.\nimport base64\n# Pass the image data to an encoding function.\ndef encode_image(image_file):\n with io.open(image_file, \"rb\") as image_file:\n image_content = image_file.read()\n# Python 2\n if sys.version_info[0] < 3:\n return base64.b64encode(image_content)\n# Python 3\n else:\n return base64.b64encode(image_content).decode('utf-8')\n\n# Pass the audio data to an encoding function.\ndef encode_audio(audio_file):\n with io.open(audio_file, 'rb') as audio_file:\n audio_content = audio_file.read()\n# Python 2\n if sys.version_info[0] < 3:\n return base64.b64encode(audio_content)\n# Python 3\n else:\n return base64.b64encode(audio_content).decode('utf-8')\n",
"机器智能API接口控制参数 (Define control parameters for API)",
"# API control parameter for Image API:\nparm_image_maxResults = 10 # max objects or faces to be extracted from image analysis\n\n# API control parameter for Language Translation API:\nparm_translation_origin_language = 'zh' # original language in text: to be overwriten by TEXT_DETECTION\nparm_translation_target_language = 'zh' # target language for translation: Chinese\n\n\n# API control parameter for 消息文字转成语音 (Speech synthesis: text to voice)\nparm_speech_synthesis_language = 'zh' # speech synthesis API 'text to voice' language\n# parm_speech_synthesis_language = 'zh-tw' # speech synthesis API 'text to voice' language\n# parm_speech_synthesis_language = 'zh-yue' # speech synthesis API 'text to voice' language\n\n# API control parameter for 语音转换成消息文字 (Speech recognition: voice to text)\n# parm_speech_recognition_language = 'en' # speech API 'voice to text' language\nparm_speech_recognition_language = 'cmn-Hans-CN' # speech API 'voice to text' language\n\n# API control parameter for 自然语言处理:语义和情感分析\nparm_nlp_extractDocumentSentiment = True # 情感分析 (Sentiment analysis)\nparm_nlp_extractEntities = True # 消息文字中名称实体的识别 (Name-Entity detection)\nparm_nlp_extractEntitySentiment = False # Only available in v1beta2. But Chinese language zh is not supported yet.\nparm_nlp_extractSyntax = True # 语句的语法分析 (Syntax / Grammar analysis)",
"定义一个调用自然语言处理接口的小功能",
"# Running Speech API\ndef KudosData_nlp(text, extractDocumentSentiment, extractEntities, extractEntitySentiment, extractSyntax): \n # Python 2\n# if sys.version_info[0] < 3: \n# tts = gTTS(text=text2voice.encode('utf-8'), lang=parm_speech_synthesis_language, slow=False)\n # Python 3\n# else:\n# tts = gTTS(text=text2voice, lang=parm_speech_synthesis_language, slow=False)\n \n request = nlp_service.documents().annotateText(body={\n \"document\":{\n \"type\": \"PLAIN_TEXT\",\n \"content\": text\n },\n \"features\": {\n \"extractDocumentSentiment\": extractDocumentSentiment,\n \"extractEntities\": extractEntities,\n \"extractEntitySentiment\": extractEntitySentiment, # only available in v1beta2\n \"extractSyntax\": extractSyntax,\n },\n \"encodingType\":\"UTF8\"\n })\n responses = request.execute(num_retries=3) \n print('\\nCompeleted: NLP analysis API')\n return responses",
"< Start of interactive demo >",
"text4nlp = 'As a data science consultant and trainer with Kudos Data, Zhan GU (Sam) engages communities and schools ' \\\n 'to help organizations making sense of their data using advanced data science , machine learning and ' \\\n 'cloud computing technologies. Inspire next generation of artificial intelligence lovers and leaders.'\n\ntext4nlp = '作为酷豆数据科学的顾问和培训师,Sam Gu (白黑) 善长联络社群和教育资源。' \\\n '促进各大公司组织使用先进的数据科学、机器学习和云计算技术来获取数据洞见。激励下一代人工智能爱好者和领导者。'\n\nresponses = KudosData_nlp(text4nlp\n , parm_nlp_extractDocumentSentiment\n , parm_nlp_extractEntities\n , parm_nlp_extractEntitySentiment\n , parm_nlp_extractSyntax)\n\n# print(responses)",
"* 消息文字中名称实体的识别 (Name-Entity detection)",
"# print(responses['entities'])\n\nfor i in range(len(responses['entities'])): \n# print(responses['entities'][i])\n print('')\n print(u'[ 实体 {} : {} ]\\n 实体类别 : {}\\n 重要程度 : {}'.format(\n i+1\n , responses['entities'][i]['name']\n , responses['entities'][i]['type']\n , responses['entities'][i]['salience']\n ))\n# print(responses['entities'][i]['name'])\n# print(responses['entities'][i]['type'])\n# print(responses['entities'][i]['salience'])\n if 'sentiment' in responses['entities'][i]:\n print(u' 褒贬程度 : {}\\n 语彩累积 : {}'.format(\n responses['entities'][i]['sentiment']['score']\n , responses['entities'][i]['sentiment']['magnitude']\n ))\n# print(responses['entities'][i]['sentiment'])\n if responses['entities'][i]['metadata'] != {}:\n if 'wikipedia_url' in responses['entities'][i]['metadata']:\n print(' ' + responses['entities'][i]['metadata']['wikipedia_url'])",
"* 消息文字中语句的情感分析 (Sentiment analysis, Sentence level)",
"# print(responses['sentences'])\n\nfor i in range(len(responses['sentences'])):\n print('')\n print(u'[ 语句 {} : {} ]\\n( 褒贬程度 : {} | 语彩累积 : {} )'.format(\n i+1\n , responses['sentences'][i]['text']['content']\n , responses['sentences'][i]['sentiment']['score']\n , responses['sentences'][i]['sentiment']['magnitude']\n ))\n",
"https://cloud.google.com/natural-language/docs/basics\n\nscore 褒贬程度 of the sentiment ranges between -1.0 (negative) and 1.0 (positive) and corresponds to the overall emotional leaning of the text.\nmagnitude 语彩累积 indicates the overall strength of emotion (both positive and negative) within the given text, between 0.0 and +inf. Unlike score, magnitude is not normalized; each expression of emotion within the text (both positive and negative) contributes to the text's magnitude (so longer text blocks may have greater magnitudes).\n\n| Sentiment | Sample Values |\n|:-------------:|:-------------:|\n| 明显褒义 Clearly Positive | \"score 褒贬程度\": 0.8, \"magnitude 语彩累积\": 3.0 |\n| 明显贬义 Clearly Negative | \"score 褒贬程度\": -0.6, \"magnitude 语彩累积\": 4.0 |\n| 中性 Neutral | \"score 褒贬程度\": 0.1, \"magnitude 语彩累积\": 0.0 |\n| 混合 Mixed | \"score 褒贬程度\": 0.0, \"magnitude 语彩累积\": 4.0 |\n* 整篇消息文字的情感分析 (Sentiment analysis, Document level)",
"# print(responses['documentSentiment'])\n\nprint(u'[ 整篇消息 语种 : {} ]\\n( 褒贬程度 : {} | 语彩累积 : {} )'.format(\n responses['language']\n , responses['documentSentiment']['score']\n , responses['documentSentiment']['magnitude']\n ))",
"* 语句的语法分析 (Syntax / Grammar analysis)",
"for i in range(len(responses['tokens'])): \n print('')\n print(responses['tokens'][i]['text']['content'])\n print(responses['tokens'][i]['partOfSpeech'])\n print(responses['tokens'][i]['dependencyEdge'])\n# print(responses['tokens'][i]['text'])\n# print(responses['tokens'][i]['lemma'])\n ",
"< End of interactive demo >\n定义一个输出为NLP分析结果的文本消息的小功能,用于微信回复:",
"def KudosData_nlp_generate_reply(responses):\n nlp_reply = u'[ NLP 自然语言处理结果 ]'\n \n # 1. 整篇消息文字的情感分析 (Sentiment analysis, Document level)\n nlp_reply += '\\n'\n nlp_reply += '\\n' + u'[ 整篇消息 语种 : {} ]\\n( 褒贬程度 : {} | 语彩累积 : {} )'.format(\n responses['language']\n , responses['documentSentiment']['score']\n , responses['documentSentiment']['magnitude']\n )\n\n # 2. 消息文字中语句的情感分析 (Sentiment analysis, Sentence level) \n nlp_reply += '\\n'\n for i in range(len(responses['sentences'])):\n nlp_reply += '\\n' + u'[ 语句 {} : {} ]\\n( 褒贬程度 : {} | 语彩累积 : {} )'.format(\n i+1\n , responses['sentences'][i]['text']['content']\n , responses['sentences'][i]['sentiment']['score']\n , responses['sentences'][i]['sentiment']['magnitude']\n )\n \n # 3. 消息文字中名称实体的识别 (Name-Entity detection)\n nlp_reply += '\\n'\n for i in range(len(responses['entities'])): \n nlp_reply += '\\n' + u'[ 实体 {} : {} ]\\n 实体类别 : {}\\n 重要程度 : {}'.format(\n i+1\n , responses['entities'][i]['name']\n , responses['entities'][i]['type']\n , responses['entities'][i]['salience']\n )\n if 'sentiment' in responses['entities'][i]:\n nlp_reply += '\\n' + u' 褒贬程度 : {}\\n 语彩累积 : {}'.format(\n responses['entities'][i]['sentiment']['score']\n , responses['entities'][i]['sentiment']['magnitude']\n )\n if responses['entities'][i]['metadata'] != {}:\n if 'wikipedia_url' in responses['entities'][i]['metadata']:\n nlp_reply += '\\n ' + responses['entities'][i]['metadata']['wikipedia_url']\n \n # 4. 语句的语法分析 (Syntax / Grammar analysis)\n# nlp_reply += '\\n'\n# for i in range(len(responses['tokens'])): \n# nlp_reply += '\\n' + str(responses['tokens'][i])\n \n return nlp_reply\n\nprint(KudosData_nlp_generate_reply(responses))",
"用微信App扫QR码图片来自动登录",
"itchat.auto_login(hotReload=True) # hotReload=True: 退出程序后暂存登陆状态。即使程序关闭,一定时间内重新开启也可以不用重新扫码。\n\n# Obtain my own Nick Name\nMySelf = itchat.search_friends()\nNickName4RegEx = '@' + MySelf['NickName'] + '\\s*'\n\n# 单聊模式,自动进行自然语言分析,以文本形式返回处理结果:\n@itchat.msg_register([TEXT, MAP, CARD, NOTE, SHARING])\ndef text_reply(msg):\n text4nlp = msg['Content']\n # call NLP API:\n nlp_responses = KudosData_nlp(text4nlp\n , parm_nlp_extractDocumentSentiment\n , parm_nlp_extractEntities\n , parm_nlp_extractEntitySentiment\n , parm_nlp_extractSyntax)\n # Format NLP results:\n nlp_reply = KudosData_nlp_generate_reply(nlp_responses)\n print(nlp_reply)\n return nlp_reply\n\n# 群聊模式,如果收到 @ 自己的文字信息,会自动进行自然语言分析,以文本形式返回处理结果:\n@itchat.msg_register(TEXT, isGroupChat=True)\ndef text_reply(msg):\n if msg['isAt']:\n text4nlp = re.sub(NickName4RegEx, '', msg['Content'])\n # call NLP API:\n nlp_responses = KudosData_nlp(text4nlp\n , parm_nlp_extractDocumentSentiment\n , parm_nlp_extractEntities\n , parm_nlp_extractEntitySentiment\n , parm_nlp_extractSyntax)\n # Format NLP results:\n nlp_reply = KudosData_nlp_generate_reply(nlp_responses)\n print(nlp_reply)\n return nlp_reply\n\nitchat.run()\n\n# interupt kernel, then logout\nitchat.logout() # 安全退出",
"第四课:自然语言处理:语义和情感分析\nLesson 4: Natural Language Processing 2\n\n消息文字中名称实体的识别 (Name-Entity detection)\n消息文字中语句的情感分析 (Sentiment analysis, Sentence level)\n整篇消息文字的情感分析 (Sentiment analysis, Document level)\n语句的语法分析 (Syntax / Grammar analysis)\n\n下一课是:\n第五课:视频识别和处理\nLesson 5: Video Recognition & Processing\n\n识别视频消息中的物体名字 (Recognize objects in video)\n识别视频的场景 (Detect scenery in video)\n直接搜索视频内容 (Search content in video)\n\n<img src='http://www.kudosdata.com/wp-content/uploads/2016/11/cropped-KudosLogo1.png' width=30% style=\"float: right;\">\n<img src='reference/WeChat_SamGu_QR.png' width=10% style=\"float: left;\">"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
abulbasar/machine-learning
|
SparkML - 04 Text_Analysis.ipynb
|
apache-2.0
|
[
"Problem Statement: IMDB Comment Sentiment Classifier\nDataset: For this exercise we will use a dataset hosted at http://ai.stanford.edu/~amaas/data/sentiment/\nProblem Statement: \nThis is a dataset for binary sentiment classification containing substantially more data than previous benchmark datasets. We provide a set of 25,000 highly polar movie reviews for training, and 25,000 for testing. There is additional unlabeled data for use as well. Raw text and already processed bag of words formats are provided. \nLaunch a spark session, verify the spark session UI",
"spark.sparkContext.uiWebUrl",
"IMDB comments dataset has been stored in the following location",
"!wc -l data/imdb-comments.json",
"There are 50000 lines in the file. Let's the first line",
"!du -sh data/imdb-comments.json",
"Total size of the file is 66MB",
"!head -n 1 data/imdb-comments.json",
"Each line is a self contained json doc. Load the dataset using spark reader specifying the file format as json. As we see above size of the file is 66 MB, we should at least 2 partitons, since I am using dual core system, I will repartition the data to 4. Also will cache the data after repartitioning.",
"imdb = spark.read.format(\"json\").load(\"data/imdb-comments.json\").repartition(4).cache()",
"Find total number of records",
"imdb.count()",
"Print Schema and view the field types",
"imdb.printSchema()",
"Take a look at a few sample data",
"imdb.show()",
"label - column indicate whethet the data belong to training or test bucket.\nsentiment - column indicates whether the comment carries positive or negative sentiment. This column has been manually curated.\nFind out for each combination of label and sentimnet how many records are there.",
"from pyspark.sql.functions import *\nfrom pyspark.sql.types import *\n\nimdb.groupBy(\"sentiment\").pivot(\"label\").count().show()",
"Look at a sample comment value",
"content = imdb.sample(False, 0.001, 1).first().content\ncontent",
"Register a UDF function to clean the comment from the html tags. If BeautifulSoup is not installed, you can install it using pip \n(shell command)\n$ pip install BeautifulSoup4",
"from bs4 import BeautifulSoup\nfrom pyspark.sql.types import * \nimport re\ndef remove_html_tags(text):\n text = BeautifulSoup(text, \"html5lib\").text.lower() #removed html tags\n text = re.sub(\"[\\W]+\", \" \", text)\n return text\n\nspark.udf.register(\"remove_html_tags\", remove_html_tags, StringType())",
"Test the remove_html_tags function",
"remove_html_tags(content)",
"Apply the the udf on the imdb dataframe.",
"imdb_clean = imdb.withColumn(\"content\", expr(\"remove_html_tags(content)\")).cache()\nimdb_clean.sample(False, 0.001, 1).first().content",
"Use Tokenizer to split the string into terms. Then use StopWordsRemover to remove stop words like prepositions, apply CountVectorizer to find all distinct terms and found of each term per document.",
"from pyspark.ml.feature import HashingTF, IDF, Tokenizer, CountVectorizer, StopWordsRemover\n\ntokenizer = Tokenizer(inputCol=\"content\", outputCol=\"terms\")\nterms_data = tokenizer.transform(imdb_clean)\n\nprint(terms_data.sample(False, 0.001, 1).first().terms)\n\nremover = StopWordsRemover(inputCol=\"terms\", outputCol=\"filtered\")\nterms_stop_removed = remover.transform(terms_data)\n\nprint(terms_stop_removed.sample(False, 0.001, 1).first().filtered)\n\ncount_vectorizer = CountVectorizer(inputCol=\"filtered\", outputCol=\"count_vectors\")\ncount_vectorizer_model = count_vectorizer.fit(terms_stop_removed)\ncount_vectorized = count_vectorizer_model.transform(terms_stop_removed)\ncount_vectorized.sample(False, 0.001, 1).first().count_vectors",
"count_vectorized Dataframe contains a column count_vectors that is a SparseVector representing which term appears and how many times. The key is the index of all unique terms. You can find list of terms count_vectorizer_model.vocabulary. See below.",
"print(count_vectorizer_model.vocabulary[:100], \"\\n\\nTotal no of terms\", len(count_vectorizer_model.vocabulary))\n\ncount_vectorized.show()",
"SparkVector represents a vector of 103999, that means in the dataset (corpus) there are 103999 unique terms. Per document, only a few will be present. Find density of each count_vectors.",
"vocab_len = len(count_vectorizer_model.vocabulary)\nspark.udf.register(\"density\", lambda r: r.numNonzeros() / vocab_len, DoubleType())\ncount_vectorized.select(expr(\"density(count_vectors) density\")).show()",
"Density report shows, the count_vectors has very low density which illustrate the benefit of the choice of DenseVector for this column. \nNow, calculate tfidf for the document.",
"idf = IDF(inputCol=\"count_vectors\", outputCol=\"features\")\nidf_model = idf.fit(count_vectorized)\nidf_data = idf_model.transform(count_vectorized)\n\nidf_data.sample(False, 0.001, 1).first().features\n\nidf_data.printSchema()",
"Apply StringIndexer to conver the sentiment column from String type to number type - this is prerequisit to apply the LogisticRegression algorithm.",
"from pyspark.ml.feature import StringIndexer\n\nstring_indexer = StringIndexer(inputCol=\"sentiment\", outputCol=\"sentiment_idx\")\nstring_indexer_model = string_indexer.fit(idf_data)\nlabel_encoded = string_indexer_model.transform(idf_data)\n\nlabel_encoded.select(\"sentiment\", \"sentiment_idx\").show()",
"Split the data into traininf and testing groups with 70/30 ratio. Cache the dataframe so that training runs faster.",
"training, testing = label_encoded.randomSplit(weights=[0.7, 0.3], seed=1)\ntraining.cache()\ntesting.cache()",
"Verify that the StringIndex has done the expected job and training and testing data maintain the ratio of positive and negative records as in the whole dataset.",
"training.groupBy(\"sentiment_idx\", \"sentiment\").count().show()\n\ntesting.groupBy(\"sentiment_idx\", \"sentiment\").count().show()",
"Apply LogisticRegression classifier",
"from pyspark.ml.classification import LogisticRegression\n\nlr = LogisticRegression(maxIter=10000, regParam=0.1, elasticNetParam=0.0, \n featuresCol=\"features\", labelCol=\"sentiment_idx\")",
"Show the parameters that the LogisticRegression classifier takes.",
"print(lr.explainParams())\n\nlr_model = lr.fit(training)\n\nlr_model.coefficients[:100]",
"From the training summary find out the cost decay of the model.",
"training_summary = lr_model.summary\n\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\npd.Series(training_summary.objectiveHistory).plot()\nplt.xlabel(\"Iteration\")\nplt.ylabel(\"Cost\")",
"Find area under the curve. Closer to 1 is better",
"training_summary.areaUnderROC\n\npredictions = lr_model.transform(testing).withColumn(\"match\", expr(\"prediction == sentiment_idx\"))\n\npredictions.select(\"prediction\", \"sentiment_idx\", \"sentiment\", \"match\").sample(False, 0.01).show(10)\n\npredictions.groupBy(\"sentiment_idx\").pivot(\"prediction\").count().show()",
"Find the accuracy of the prediction",
"accuracy = predictions.select(expr(\"sum(cast(match as int))\")).first()[0] / predictions.count()\naccuracy"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
chris-jd/udacity
|
intro_to_DS_assignment/Assignment 1 Submission Notebook.ipynb
|
mit
|
[
"Submission Notebook\nChris Madeley\nToC\nReferences\n\nStatistical Test\nLinear Regression (Questions)\nVisualisation\nConclusion\nReflection\n\nChange Log\n<b>Revision 1:</b> Corrections to questions 1.1, 1.4 based on the comments of the first review.\n<b>Revision 2:</b> Corrections to questions 1.1, 1.4, 4.1 based on the comments of the second review.\nOverview\nThese answers to the assignment questions have been prepared in a Jupyter (formally IPython) notebook. This was chosen to allow clarity of working, enable reproducability, and as it should be suitable and useful for the target audience, and can be converted to html easily. In general, the code necessary for each question is included below each question, although some blocks of necessary code fall inbetween questions.",
"# Imports\n# Numeric Packages\nfrom __future__ import division\nimport numpy as np\nimport pandas as pd\nimport scipy.stats as sps\n\n# Plotting packages\nimport matplotlib.pyplot as plt\nfrom matplotlib import ticker\nimport seaborn as sns\n%matplotlib inline\nsns.set_style('whitegrid')\nsns.set_context('talk')\n\n\n# Other\nfrom datetime import datetime, timedelta\nimport statsmodels.api as sm\n\n# Import turnstile data and convert datetime column to datetime python objects\ndf = pd.read_csv('turnstile_weather_v2.csv')\ndf['datetime'] = pd.to_datetime(df['datetime'])",
"0. References\nIn general, only standard package documentation has been used throughout. A couple of one-liners adapted from stackoverflow answers noted in code where used.\n1. Statistical Test\n1.1 Which statistical test did you use to analyze the NYC subway data? Did you use a one-tail or a two-tail P value? What is the null hypothesis? What is your p-critical value?\nThe objective of this project, as described in the project details, is to figure out if more people ride the subway when it is raining versus when it is not raining.\nTo evaluate this question through statistical testing, a hypothesis test is used. To perform such a test two opposing hypotheses are constructed: the null hypothesis and the alternative hypothesis. A hypothesis test considers one sample of data to determine if there is sufficient evidence to reject the null hypothesis for the entire population from which it came; that the difference in the two underlying populations are different with statistical significance. The test is performed to a 'significance level' which determines the probability of Type 1 error occuring, where Type 1 error is the incorrect rejection of the null hypothesis; a false positive.\nThe null hypothesis is constructed to represent the status quo, where the treatment on a population has no effect on the population, chosen this way because the test controls only for Type 1 error. In the context of this assignment, the null hypothesis for this test is on average, no more people ride the subway compared to when it is not; i.e. 'ridership' is the population and 'rain' is the treatment.\ni.e. $H_0: \\alpha_{raining} \\leq \\alpha_{not_raining}$\nwhere $\\alpha$ represents the average ridership of the subway.\nConsequently, the alternative hypothesis is given by:\n$H_1: \\alpha_{raining} > \\alpha_{not_raining}$.\nDue to the way the hypothesis is framed, that we are only questioning whether ridership increases during rain, a single-tailed test is required. This is because we are only looking for a test statistic that shows an increase in ridership in order to reject the null hypothesis.\nA significance value of 0.05 has been chosen to reject the null hypothesis for this test, due to it being the most commonly used value for testing.\n1.2 Why is this statistical test applicable to the dataset? In particular, consider the assumptions that the test is making about the distribution of ridership in the two samples.\nThe Mann-Whitney U test was chosen for the hypothesis testing due as it is agnostic to the underlying distribution. The entry values are definitely not normally distributed, illustrated below both graphically and using the Shapiro-Wilk test.",
"W, p = sps.shapiro(df.ENTRIESn_hourly.tolist())\nprint 'Probability that data is the realisation of a gaussian random variable: {:.3f}'.format(p)\n\nplt.figure(figsize=[8,5])\nsns.distplot(df.ENTRIESn_hourly.tolist(), bins=np.arange(0,10001,500), kde=False)\nplt.xlim(0,10000)\nplt.yticks(np.arange(0,16001,4000))\nplt.title('Histogram of Entry Count')\nplt.show()",
"1.3 What results did you get from this statistical test? These should include the following numerical values: p-values, as well as the means for each of the two samples under test.",
"raindata = np.array(df[df.rain==1].ENTRIESn_hourly.tolist())\nnoraindata = np.array(df[df.rain==0].ENTRIESn_hourly.tolist())\nU, p = sps.mannwhitneyu(raindata, noraindata)\nprint 'Results'\nprint '-------'\nprint 'p-value: {:.2f}'.format(p) # Note that p value calculated by scipy is single-tailed\nprint 'Mean with rain: {:.0f}'.format(raindata.mean())\nprint 'Mean without rain: {:.0f}'.format(noraindata.mean())",
"1.4 What is the significance and interpretation of these results?\nGiven the p-value < 0.05, we can reject the null hypothesis that the average ridership is not greater when it is raining, hence the we can accept the alternative hypothesis the average ridership is greater when it rains.\n2. Linear Regression",
"# Because the hour '0' is actually the entries from 20:00 to 24:00, it makes more sense to label it 24 when plotting data\ndf.datetime -= timedelta(seconds=1)\ndf['day']= df.datetime.apply(lambda x: x.day)\ndf['hour'] = df.datetime.apply(lambda x: x.hour+1)\ndf['weekday'] = df.datetime.apply(lambda x: not bool(x.weekday()//5))\ndf['day_week'] = df.datetime.apply(lambda x: x.weekday())\n# The dataset includes the Memorial Day Public Holiday, which makes more sense to be classify as a weekend.\ndf.loc[df['day']==30,'weekday'] = False",
"2.1 What approach did you use to compute the coefficients theta and produce prediction for ENTRIESn_hourly in your regression model:\nOrdinary Least Squares (OLS) was used for the linear regression for this model.\n2.2 What features (input variables) did you use in your model? Did you use any dummy variables as part of your features?\nThe final fit used in the model includes multiple components, two of which include the custom input stall_num2, described later:\nENTRIESn_hourly ~ 'ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday'\n- stall_num2 - includes the effect off the stall (unit) number;\n- C(hour) - (dummy variable) included using dummy variables, since the the entries across hour vary in a highly nonlinear way;\n- weekday - true/false value for whether it is a weekday;\n- rain:C(hour) - rain is included as the focus of the study, however it has been combined with the time of day;\n- stall_num2 * C(hour) - (dummy variable) interaction between the stall number and time of day; and\n- stall_num2 * weekday - interaction between the stall number and whether it is a weekday.\nAdditionally, an intercept was included in the model, statsmodels appears to automatically create N-1 dummies when this is included.\nThe variable stall_num2 was created as a substitute to using the UNIT column as a dummy variable. It was clear early on that using UNIT has a large impact on the model accuracy, intuitive given the relative popularity of stalls will be important for predicting their entry count. However, with 240 stalls, a lot of dummy variables are created, and it makes interactions between UNIT and other variables impractical. Additionally, so many dummy variables throws away information relating to the similar response between units of similar popularity.\nstall_num2 was constructed by calculating the number of entries that passed through each stall as a proportion of total entries for the entire period of the data. These results were then normalised to have μ=0 and σ=1 (although they're not normally distributed) to make the solution matrix well behaved; keep the condition number within normal bounds.",
"# Create a new column, stall_num2, representing the proportion of entries through a stall across the entire period.\ntotal_patrons = df.ENTRIESn_hourly.sum()\n# Dataframe with the units, and total passing through each unit across the time period\ntotal_by_stall = pd.DataFrame(df.groupby('UNIT').ENTRIESn_hourly.sum()) \n# Create new variable = proportion of total entries\ntotal_by_stall['stall_num2'] = total_by_stall.ENTRIESn_hourly/total_patrons \n# Normalise by mean and standard deviation... fixes orders of magnitude errors in the output\ntotal_stall_mean = total_by_stall.stall_num2.mean()\ntotal_stall_stddev = total_by_stall.stall_num2.std()\ntotal_by_stall.stall_num2 = (\n (total_by_stall.stall_num2 - total_stall_mean)\n / total_stall_stddev\n )\n# Map the new variable back on the original dataframe\ndf['stall_num2'] = df.UNIT.apply(lambda x: total_by_stall.stall_num2[x]) ",
"2.3 Why did you select these features in your model?\nThe first step was to qualitatively assess which parameters may be useful for the model. This begins with looking at a list of the data, and the type of data, which has been captured, illustrated as follows.",
"for i in df.columns.tolist(): print i,",
"Some parameters are going to be clearly important:\n- UNIT/station - ridership will vary between entry points;\n- hour - ridership will definitely be different between peak hour and 4am; and\n- weekday - it is intutive that there will be more entries on weekdays; this is clearly illustrated in the visualisations in section 3.\nAdditionally, rain needed to be included as a feature due to it being the focus of the overall investigation.\nBeyond these parameters, I selected a set of numeric features which may have an impact on the result, and initially computed and plotted the correlations between featires in an effort to screen out some multicollinearity prior linear regression. The results of this correlation matrix indicated a moderately strong correlations between:\n- Entries and exits - hence exits is not really suitable for predicting entries, which is somewhat intuitive\n- Day of the week and weekday - obviously correlated, hence only one should be chosen.\n- Day of the month and temperature are well correlated, and when plotted show a clear warming trend throughout May.\nThere are also a handful of weaker environmental correlations, such as precipitation and fog, rain and precipitation and rain and temperature.",
"plt.figure(figsize=[8,6])\ncorr = df[['ENTRIESn_hourly',\n 'EXITSn_hourly',\n 'day_week', # Day of the week (0-6)\n 'weekday', # Whether it is a weekday or not\n 'day', # Day of the month\n 'hour', # In set [4, 8, 12, 16, 20, 24]\n 'fog',\n 'precipi',\n 'rain',\n 'tempi',\n 'wspdi']].corr()\nsns.heatmap(corr)\nplt.title('Correlation matrix between potential features')\nplt.show()",
"The final selection of variables was determined through trial and error of rational combinations of variables. The station popularity was captured in using the stall_num2 variable, since it appears to create a superior model compared with just using UNIT dummies, and because it allowed the creation of combinations. Combining the station with hour was useful, and is intuitive since stations in the CBD will have the greatest patronage and have greater entries in the evening peak hour. A similar logic applies to combining the station and whether it is a weekday.\nVarious combinations of environmental variables were trialled in the model, but none appeared to improve the model accuracy and were subsequently dicarded. Since rain is the focus of this study it was retained, however it was combined with the time of day. The predictive strenght of the model was not really improved with the inclusion of a rain parameter, however combining it with hour appears to improve it's usefulness for providing insight, as will be discussed in section 4.\n2.4 What are the parameters (also known as \"coefficients\" or \"weights\") of the non-dummy features in your linear regression model?",
"# Construct and fit the model\nmod = sm.OLS.from_formula('ENTRIESn_hourly ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df)\nres = mod.fit_regularized()\ns = res.summary2()",
"Due to the use of several combinations, there are very few non-dummy features, with the coefficients illustrated below. Since stall_num2 is also used in several combinations, it's individual coefficient doesn't prove very useful.",
"s.tables[1].ix[['Intercept', 'stall_num2']]",
"However when looking at all the combinations for stall_num2 provides greater insight. Here we can see that activity is greater on weekdays, and greatest in the 16:00-20:00hrs block. It is lowest in the 00:00-04:00hrs block, not shown as it was removed by the model due to the generic stall_num2 parameter being there; the other combinations are effectively referenced to the 00:00-04:00hrs block.",
"s.tables[1].ix[[i for i in s.tables[1].index if i[:5]=='stall']]",
"Even more interesting are the coefficient for the rain combinations. These appear to indicate that patronage increases in the 08:00-12:00 and 16:00-20:00, corresponding to peak hour. Conversely, subway entries are lower at all other times. Could it be that subway usage increases if it is raining when people are travelling to and from work, but decreases otherwise because people prefer not to travel in the rain at all?",
"s.tables[1].ix[[i for i in s.tables[1].index if i[:4]=='rain']]",
"2.5 What is your model’s R2 (coefficients of determination) value?",
"print 'Model Coefficient of Determination (R-squared): {:.3f}'.format(res.rsquared)",
"The final R-squared value of 0.74 is much greater than earlier models that used UNIT as a dummy variable, which had R-squared values around 0.55.\n2.6 What does this R2 value mean for the goodness of fit for your regression model? Do you think this linear model to predict ridership is appropriate for this dataset, given this R2 value?\nTo evaluate the goodness of fit the residuals of the model have been evaluated in two ways. First, a histogram of the residuals has been plotted below. The distribution of residuals is encouragingly symmetric. However efforts to fit a normal distribution found distributions which underestimated the frequency at the mode and tails. Fitting a fat-tailed distribution, such as the Cauchy distribution below, was far more successful. I'm not sure if there's a good reason why it's worked out this way (but would love to hear ideas as to why).",
"residuals = res.resid\nsns.set_style('whitegrid')\nsns.distplot(residuals,bins=np.arange(-10000,10001,200),\n kde = False, # kde_kws={'kernel':'gau', 'gridsize':4000, 'bw':100},\n fit=sps.cauchy, fit_kws={'gridsize':4000})\nplt.xlim(-5000,5000)\nplt.title('Distribution of Residuals\\nwith fitted cauchy Distribution overlaid')\nplt.show()",
"Secondly, a scatterplot of the residuals against the expected values is plotted. As expected, the largest residuals are associated with cases where the traffic is largest. In general the model appears to underpredict the traffic at the busiest of units. Also clear on this plot is how individual stations form a 'streak' of points on the diagonal. This is because the model essentially makes a prediction for each station per hour per for weekdays and weekends. The natural variation of the actual result in this timeframe creates the run of points.",
"sns.set_style('whitegrid')\nfig = plt.figure(figsize=[6,6])\nplt.xlabel('ENTRIESn_hourly')\nplt.ylabel('Residuals')\nplt.scatter(df.ENTRIESn_hourly, residuals,\n c=(df.stall_num2*total_stall_stddev+total_stall_mean)*100, # denormalise values\n cmap='YlGnBu')\nplt.colorbar(label='UNIT Relative Traffic (%)')\nplt.plot([0,20000],[0,-20000], ls=':', c='0.7', lw=2) # Line to show negative prediction values (i.e. negative entries)\nplt.xlim(xmin=0)\nplt.ylim(-20000,25000)\nplt.xticks(rotation='45')\nplt.title('Model Residuals vs. Expected Value')\nplt.show()",
"Additionally, note that the condition number for the final model is relatively low, hence there don't appear to be any collinearity issues with this model. By comparison, when UNIT was included as a dummy variable instead, the correlation was weaker and the condition number was up around 220.",
"print 'Condition Number: {:.2f}'.format(res.condition_number)",
"In summary, it appears that this linear model has done a reasonable job of predicting ridership in this instance. Clearly some improvements are possible (like fixing the predictions of negative entries!), but given there will always be a degree of random variation, an R-squared value of 0.74 for a linear model seems quite reasonable. To be sure of the model suitability the data should be split into training/test sets. Additionally, more data from extra months could prove beneficial.\n3. Visualisation\n3.1 One visualization should contain two histograms: one of ENTRIESn_hourly for rainy days and one of ENTRIESn_hourly for non-rainy days.",
"sns.set_style('white')\nsns.set_context('talk')\nmydf = df.copy()\nmydf['rain'] = mydf.rain.apply(lambda x: 'Raining' if x else 'Not Raining')\nraindata = df[df.rain==1].ENTRIESn_hourly.tolist()\nnoraindata = df[df.rain==0].ENTRIESn_hourly.tolist()\n\nfig = plt.figure(figsize=[9,6])\nax = fig.add_subplot(111)\nplt.hist([raindata,noraindata],\n normed=True,\n bins=np.arange(0,11500,1000),\n color=['dodgerblue', 'indianred'],\n label=['Raining', 'Not Raining'],\n align='right')\nplt.legend()\nsns.despine(left=True, bottom=True)\n\n\n# http://stackoverflow.com/questions/9767241/setting-a-relative-frequency-in-a-matplotlib-histogram\ndef adjust_y_axis(x, pos):\n return '{:.0%}'.format(x * 1000)\n\nax.yaxis.set_major_formatter(ticker.FuncFormatter(adjust_y_axis))\nplt.title('Histogram of Subway Entries per 4 hour Block per Gate')\nplt.ylabel('Proportion of Total Entries')\nplt.xlim(500,10500)\nplt.xticks(np.arange(1000,10001,1000))\nplt.show()",
"Once both plots are normalised, the difference between subway entries when raining and not raining are almost identical. No useful differentiation can be made between the two datasets here.\n3.2 One visualization can be more freeform. You should feel free to implement something that we discussed in class (e.g., scatter plots, line plots) or attempt to implement something more advanced if you'd like.",
"# Plot to illustrate the average riders per time block for each weekday.\n# First we need to sum up the entries per hour (category) per weekday across all units.\n# This is done for every day, whilst retaining the 'day_week' field for convenience. reset_index puts it back into a standard dataframe\n# For the sake of illustration, memorial day has been excluded since it would incorrectly characterise the Monday ridership\nmydf = df.copy()\nmydf = mydf[mydf.day!=30].pivot_table(values='ENTRIESn_hourly', index=['day','day_week','hour'], aggfunc=np.sum).reset_index()\n# The second pivot takes the daily summed data, and finds the mean for each weekday/hour block.\nmydf = mydf.pivot_table(values='ENTRIESn_hourly', index='hour', columns='day_week', aggfunc=np.mean)\n\n# Generate plout using the seaborn heatplot function.\nfig = plt.figure(figsize=[9,6])\ntimelabels = ['Midnight - 4am','4am - 8am','8am - 12pm','12pm - 4pm','4pm - 8pm','8pm - Midnight']\nweekdays = ['Mon', 'Tue', 'Wed', 'Thu', 'Fri', 'Sat', 'Sun']\nplot = sns.heatmap(mydf, yticklabels=timelabels, xticklabels=weekdays)\nplt.xlabel('') # The axis ticks are descriptive enough to negate the need for axis labels\nplt.ylabel('')\nplot.tick_params(labelsize=14) # Make stuff bigger!\n\n# Make heatmap ticks bigger http://stackoverflow.com/questions/27832054/change-tick-size-on-colorbar-of-seaborn-heatmap\ncax = plt.gcf().axes[-1]\ncax.tick_params(labelsize=14)\n\nplt.title('Daily NYC Subway Ridership\\n(Data from May 2011)', fontsize=20)\nplt.show()",
"This plot illustrates the variation in ridership of the subway across the week. Ridership is very small in the very early morning, and there are two bold stripes where peak hour occurs in at 8am-12pm and 4pm-8pm. The weekend is clearly far less busy, in fact the 4am-8am block is the quietest of the day!\n4. Conclusion\n4.1 From your analysis and interpretation of the data, do more people ride the NYC subway when it is raining or when it is not raining?\nThe statistical test performed in section 1 indicates that on average, more people ride the subway when it is raining. However the relationship between rain and ridership are possibly more complex, the results described in section 2.4 indicate that the impact of rain on ridership may depend on time of day, with more people using the subway when it rains during peak hour, and fewer otherwise.\n4.2 What analyses lead you to this conclusion?\nThe statistical tests do indicate that on average ridership is greater when it rains. However, the difference in mean is small. When linear regression was used, the effect of rain can be considered with other variables controlled for. When rain was considered without considering the interaction with time of day, there was no statistically significant result for the effect of rain. Controlling for time of day indicates the more detailed result described previously. Although the p-values for each of the coefficients described previously are small, indicating statistical significance, the effect of rain is a second-order effect on patronage. In fact, the addition of rain, with time of day interaction, didn't improve the accuracy of the model by an appreciable amount; if it wasn't the focus of this study the variable would have been dropped.\n5. Reflection\n5.1 Please discuss potential shortcomings of the methods of your analysis, including: 1. Dataset, 2. Analysis, such as the linear regression model or statistical test.\nThe dataset was too short in duration, and therefor had too few rainy days, to draw detailed conclusions. Given this is publicly available data, and once the effort to wrangle the data had been made, it would probably be sensible to run the analyses on far larger timespans. Of course, running the analysis over many months would require adding variables to control for seasons/time of year.\nThe linear regression model worked better than expected, finishing with an r-squared value of 0.74. However it has curious features, such as predicting negative entries for some cases, which is clearly impossible. I imagine there are superior approaches to modelling the system, however this model can't be too far off the ultimate achievable accuracy, the natural daily variation can't be captured in any (reasonable model). For instance, when the Yankees play the nearest station will have a great increase in patronage, but that won't be captured in any model which doesn't include the playing schedule of the team, which applies to all other large events around NYC.\nOne aspect of using a correctly constructed linear model is the simplicity of understanding the relative effect of each parameter, since each one is described by a single coefficient. 
More complex models may not provide the same level of simple insight that comparing coefficients can provide.\n5.2 (Optional) Do you have any other insight about the dataset that you would like to share with us?\nThis is more to do with the linear model than the dataset: I tried eliminating the negative predictions by fitting the model square-root of the entries, and then squaring the output predictions. This was successful in eliminating the negative predictions, and only reduced the predictive capactiy from an R-squared value of 0.74 to approx 0.71. Although this is one approach to eliminating the negative values (which didn't sit well with me, and which I wouldn't want to include in any published model as it would be ridiculed by the public), I'm curious to know if there are any better approaches to keeping the predictions positive."
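"A minimal sketch of the square-root approach described above. The formula reused here is the one from section 2.4; the exact formula used in the original experiment was not recorded, so the resulting R-squared may differ slightly from the quoted 0.71.",
"# Illustrative only: fit sqrt(entries), square the predictions, and score on the original scale\ndf['sqrt_entries'] = np.sqrt(df.ENTRIESn_hourly)\nres_sqrt = sm.OLS.from_formula(\n    'sqrt_entries ~ rain:C(hour) + stall_num2*C(hour) + stall_num2*weekday', data=df).fit()\npredictions = res_sqrt.predict(df)**2  # squaring guarantees non-negative predictions\nss_res = ((df.ENTRIESn_hourly - predictions)**2).sum()\nss_tot = ((df.ENTRIESn_hourly - df.ENTRIESn_hourly.mean())**2).sum()\nprint 'R-squared on the original scale: {:.3f}'.format(1 - ss_res/ss_tot)"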
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/applied-machine-learning-intensive
|
content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/google/applied-machine-learning-intensive/blob/master/content/05_deep_learning/01_recurrent_neural_networks/colab.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\nCopyright 2020 Google LLC.",
"# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Recurrent Neural Networks (RNNs)\nRecurrent Neural Networks (RNNs) are an interesting application of deep learning that allow models to predict the future. While regression models attempt to fit an equation to existing data and extend the predictive power of the equation into the future, RNNs fit a model and use sequences of time series data to make step-by-step predictions about the next most likely output of the model.\nIn this colab we will create a recurrent neural network that can predict engine vibrations.\nExploratory Data Analysis\nWe'll use the Engine Vibrations data from Kaggle. This dataset contains artificial engine vibration values we will use to train a model that can predict future values.\nTo load the data, upload your kaggle.json file and run the code block below.",
"! chmod 600 kaggle.json && (ls ~/.kaggle 2>/dev/null || mkdir ~/.kaggle) && mv kaggle.json ~/.kaggle/ && echo 'Done'",
"Next, download the data from Kaggle.",
"!kaggle datasets download joshmcadams/engine-vibrations\n!ls",
"Now load the data into a DataFrame.",
"import pandas as pd\n\ndf = pd.read_csv('engine-vibrations.zip')\ndf.describe()",
"We know the data contains readings of engine vibration over time. Let's see how that looks on a line chart.",
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(24, 8))\nplt.plot(list(range(len(df['mm']))), df['mm'])\nplt.show()",
"That's quite a tough chart to read. Let's sample it.",
"import matplotlib.pyplot as plt\n\nplt.figure(figsize=(24, 8))\nplt.plot(list(range(100)), df['mm'].iloc[:100])\nplt.show()",
"See if any of the data is missing.",
"df.isna().any()",
"Finally, we'll do a box plot to see if the data is evenly distributed, which it is.",
"import seaborn as sns\n\n_ = sns.boxplot(df['mm'])",
"There is not much more EDA we need to do at this point. Let's move on to modeling.\nPreparing the Data\nCurrently we have a series of data that contains a single list of vibration values over time. When training our model and when asking for predictions, we'll want to instead feed the model a subset of our sequence.\nWe first need to determine our subsequence length and then create in-order subsequences of that length.\nWe'll create a list of lists called X that contains subsequences. We'll also create a list called y that contains the next value after each subsequence stored in X.",
"import numpy as np\n\nX = []\ny = []\nsseq_len = 50\nfor i in range(0, len(df['mm']) - sseq_len - 1):\n X.append(df['mm'][i:i+sseq_len])\n y.append(df['mm'][i+sseq_len+1])\n\ny = np.array(y)\nX = np.array(X)\n\nX.shape, y.shape",
"We also need to explicitly set the final dimension of the data in order to have it pass through our model.",
"X = np.expand_dims(X, axis=2)\ny = np.expand_dims(y, axis=1)\n\nX.shape, y.shape",
"We'll also standardize our data for the model. Note that we don't normalize here because we need to be able to reproduce negative values.",
"data_std = df['mm'].std()\ndata_mean = df['mm'].mean()\n\nX = (X - data_mean) / data_std\ny = (y - data_mean) / data_std\n\nX.max(), y.max(), X.min(), y.min()",
"And for final testing after model training, we'll split off 20% of the data.",
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, test_size=0.2, random_state=0)",
"Setting a Baseline\nWe are only training with 50 data points at a time. This is well within the bounds of what a standard deep neural network can handle, so let's first see what a very simple neural network can do.",
"import math\nimport matplotlib.pyplot as plt\nimport tensorflow as tf\nimport tensorflow.keras as keras\n\ntf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.Flatten(input_shape=[sseq_len, 1]),\n keras.layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"We quickly converged and, when we ran the model, we got a baseline quality value of 0.03750885081060467.\nThe Most Basic RNN\nLet's contrast a basic feedforward neural network with a basic RNN. To do this we simply need to use the SimpleRNN layer in our network in place of the Dense layer in our network above. Notice that, in this case, there is no need to flatten the data before we feed it into the model.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.SimpleRNN(1, input_shape=[None, 1])\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"Our model converged a little more slowly, but it got an error of only 0.8974118571865628, which is not an improvement over the baseline model.\nA Deep RNN\nLet's try to build a deep RNN and see if we can get better results.\nIn the model below, we stick together four layers ranging in width from 50 nodes to our final output of 1.\nNotice all of the layers except the output layer have return_sequences=True set. This causes the layer to pass outputs for all timestamps to the next layer. If you don't include this argument, only the output for the last timestamp is passed, and intermediate layers will complain about the wrong shape of input.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.SimpleRNN(50, return_sequences=True, input_shape=[None, 1]),\n keras.layers.SimpleRNN(20, return_sequences=True),\n keras.layers.SimpleRNN(10, return_sequences=True),\n keras.layers.SimpleRNN(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"Woah! What happened? Our MSE during training looked nice: 0.0496. But our final testing didn't perform much better than our simple model. We seem to have overfit!\nWe can try to simplify the model and add dropout layers to reduce overfitting, but even with a very basic model like the one below, we still get very different MSE between the training and test datasets.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.SimpleRNN(2, return_sequences=True, input_shape=[None, 1]),\n keras.layers.Dropout(0.3),\n keras.layers.SimpleRNN(1),\n keras.layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"Even with these measures, we still seem to be overfitting a bit. We could keep tuning, but let's instead look at some other types of neurons found in RNNs.\nLong Short Term Memory\nThe RNN layers we've been using are basic neurons that have a very short memory. They tend to learn patterns that they have recently seen, but they quickly forget older training data.\nThe Long Short Term Memory (LSTM) neuron was built to combat this forgetfulness. The neuron outputs values for the next layer in the network, and it also outputs two other values: one for short-term memory and one for long-term memory. These weights are then fed back into the neuron at the next iteration of the network. This backfeed is similar to that of a SimpleRNN, except the SimpleRNN only has one backfeed.\nWe can replace the SimpleRNN with an LSTM layer, as you can see below.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.LSTM(1, input_shape=[None, 1]),\n])\n\nmodel.compile(\n loss='mse',\n optimizer=tf.keras.optimizers.Adam(),\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=100, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"We got a test RMSE of 0.8989123704842217, which is still not better than our SimpleRNN. And in the more complex model below, we got close to the baseline but still didn't beat it.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.LSTM(20, return_sequences=True, input_shape=[None, 1]),\n keras.layers.Dropout(0.2),\n keras.layers.LSTM(10),\n keras.layers.Dropout(0.2),\n keras.layers.Dense(1)\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"LSTM neurons can be very useful, but as we have seen, they aren't always the best option.\nLet's look at one more neuron commonly found in RNN models, the GRU.\nGated Recurrent Unit\nThe Gated Recurrent Unit (GRU) is another special neuron that often shows up in Recurrent Neural Networks. The GRU is similar to the LSTM in that it feeds output back into itself. The difference is that the GRU feeds a single weight back into itself and then makes long- and short-term state adjustments based on that single backfeed.\nThe GRU tends to train faster than LSTM and has similar performance. Let's see how a network containing one GRU performs.",
"tf.random.set_seed(0)\n\nmodel = keras.models.Sequential([\n keras.layers.GRU(1),\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"We got a RMSE of 0.9668634342193015, which isn't bad, but it still performs worse than our baseline.\nConvolutional Layers\nConvolutional layers are limited to image classification models. They can also be really handy when training RNNs. For training on a sequence of data, we use the Conv1D class as shown below.",
"tf.random.set_seed(0)\n\n\nmodel = keras.models.Sequential([\n keras.layers.Conv1D(filters=20, kernel_size=4, strides=2, padding=\"valid\",\n input_shape=[None, 1]),\n keras.layers.GRU(2, input_shape=[None, 1], activation='relu'),\n keras.layers.Dropout(0.2),\n keras.layers.Dense(1),\n])\n\nmodel.compile(\n loss='mse',\n optimizer='Adam',\n metrics=['mae', 'mse'],\n)\n\nstopping = tf.keras.callbacks.EarlyStopping(\n monitor='loss',\n min_delta=0,\n patience=2)\n\nhistory = model.fit(X_train, y_train, epochs=50, callbacks=[stopping])\n\ny_pred = model.predict(X_test)\nrmse = math.sqrt(np.mean(keras.losses.mean_squared_error(y_test, y_pred)))\nprint(\"RMSE Scaled: {}\\nRMSE Base Units: {}\".format(\n rmse, rmse * data_std + data_mean))\n\nplt.figure(figsize=(10,10))\nplt.plot(list(range(len(history.history['mse']))), history.history['mse'])\nplt.show()",
"Recurrent Neural Networks are a powerful tool for sequence generation and prediction. But they aren't the only mechanism for sequence prediction. If the sequence you are predicting is short enough, then a standard deep neural network might be able to provide the predictions you are looking for.\nAlso note that we created a model that took a series of data and output one value. It is possible to create RNNs that input one or more values and output one or more values. Each use case is different.\nExercises\nExercise 1: Visualization\nCreate a plot containing a series of at least 50 predicted points. Plot that series against the actual.\n\nHint: Pick a sequence of 100 values from the original data. Plot data points 50-100 as the actual line. Then predict 50 single values starting with the features 0-49, 1-50, etc.\n\nStudent Solution",
"# Your code goes here",
"Exercise 2: Stock Price Prediction\nUsing the Stonks! dataset, create a recurrent neural network that can predict the stock price for the 'AAA' ticker. Calculate your RMSE with some holdout data.\nUse as many text and code cells as you need to complete this exercise.\n\nHint: if predicting absolute prices doesn't yield a good model, look into other ways to represent the day-to-day change in data.",
"# Your code goes here",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
milancurcic/lunch-bytes
|
Spring_2019/LB29/GettingData_XRDASK.ipynb
|
cc0-1.0
|
[
"<a name=\"top\"></a>\n<div style=\"width:1000 px\">\n\n<div style=\"float:right; width:98 px; height:98px;\">\n<img src=\"https://cdn.miami.edu/_assets-common/images/system/um-logo-gray-bg.png\" alt=\"Miami Logo\" style=\"height: 98px;\">\n</div>\n\n<h1>Lunch Byte 4/19/2019</h1>\nBy Kayla Besong\n\n<br>\n<br>\n<br>Introduction to Xarray and Dask to upload and process data from NOAA for ProcessData_XR.ipynb\n<br>use to compare to GettingData_XR.ipynb\n\n\n\n<div style=\"clear:both\"></div>\n</div>\n\n<hr style=\"height:2px;\">",
"import xarray as xr\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib as mpl\nimport netCDF4 as nc\nfrom mpl_toolkits.basemap import Basemap\nfrom datetime import datetime\nfrom dask.diagnostics import ProgressBar\n\n\n%matplotlib inline\n\nfrom dask.distributed import Client\nimport xarray as xr",
"Let's Import Some Data through NOAA",
"%%time\n\nheights = [] # empty array to append opened netCDF's to\ntemps = []\ndate_range = np.arange(1995,2001,1) # range of years interested in obtaining, remember python starts counting at 0 so for 10 years we actually need to say through 2005\n\n\nfor i in date_range:\n url_h = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/hgt.%s.nc' % i # string subset --> %.s and % i will insert the i in date_range we are looping through\n url_t = 'https://www.esrl.noaa.gov/psd/thredds/dodsC/Datasets/ncep.reanalysis2/pressure/air.%s.nc' % i\n print(url_h, url_t)\n \n heights.append(url_h) # append \n temps.append(url_t)",
"Turn list of urls into one large, combined (concatenated) dataset based on time",
"%%time \nconcat_h = xr.open_mfdataset(heights) # aligns all the lat, lon, lev, values of all the datasets based on dimesnion of time\n\n\n%%time \nconcat_t = xr.open_mfdataset(temps)",
"Take a peak to ensure everything was read successfully and understand the dataset that you have",
"concat_h, concat_t\n\n\n%%time\nconcat_h = concat_h.sel(lat = slice(90,0), level = 500).resample(time = '24H').mean(dim = 'time')\n\n%%time\nconcat_t = concat_t.sel(lat = slice(90,0), level = 925).resample(time = '24H').mean(dim = 'time')",
"Take another peak",
"concat_h, concat_t",
"Write out data for processing",
"%%time\nconcat_h.to_netcdf('heights_9520.nc')\n\n%%time\nconcat_t.to_netcdf('temps_9520.nc')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
turbomanage/training-data-analyst
|
CPB100/lab4c/mlapis.ipynb
|
apache-2.0
|
[
"<h1> Using Machine Learning APIs </h1>\n\nFirst, visit <a href=\"http://console.cloud.google.com/apis\">API console</a>, choose \"Credentials\" on the left-hand menu. Choose \"Create Credentials\" and generate an API key for your application. You should probably restrict it by IP address to prevent abuse, but for now, just leave that field blank and delete the API key after trying out this demo.\nCopy-paste your API Key here:",
"APIKEY=\"CHANGE-THIS-KEY\" # Replace with your API key",
"<b> Note: Make sure you generate an API Key and replace the value above. The sample key will not work.</b>\nFrom the same API console, choose \"Dashboard\" on the left-hand menu and \"Enable API\".\nEnable the following APIs for your project (search for them) if they are not already enabled:\n<ol>\n<li> Google Translate API </li>\n<li> Google Cloud Vision API </li>\n<li> Google Natural Language API </li>\n<li> Google Cloud Speech API </li>\n</ol>\n\nFinally, because we are calling the APIs from Python (clients in many other languages are available), let's install the Python package (it's not installed by default on Datalab)",
"!pip install --upgrade google-api-python-client",
"<h2> Invoke Translate API </h2>",
"# running Translate API\nfrom googleapiclient.discovery import build\nservice = build('translate', 'v2', developerKey=APIKEY)\n\n# use the service\ninputs = ['is it really this easy?', 'amazing technology', 'wow']\noutputs = service.translations().list(source='en', target='fr', q=inputs).execute()\n# print outputs\nfor input, output in zip(inputs, outputs['translations']):\n print(\"{0} -> {1}\".format(input, output['translatedText']))",
"<h2> Invoke Vision API </h2>\n\nThe Vision API can work off an image in Cloud Storage or embedded directly into a POST message. I'll use Cloud Storage and do OCR on this image: <img src=\"https://storage.googleapis.com/cloud-training-demos/vision/sign2.jpg\" width=\"200\" />. That photograph is from http://www.publicdomainpictures.net/view-image.php?image=15842",
"# Running Vision API\nimport base64\nIMAGE=\"gs://cloud-training-demos/vision/sign2.jpg\"\nvservice = build('vision', 'v1', developerKey=APIKEY)\nrequest = vservice.images().annotate(body={\n 'requests': [{\n 'image': {\n 'source': {\n 'gcs_image_uri': IMAGE\n }\n },\n 'features': [{\n 'type': 'TEXT_DETECTION',\n 'maxResults': 3,\n }]\n }],\n })\nresponses = request.execute(num_retries=3)\nprint(responses)\n\nforeigntext = responses['responses'][0]['textAnnotations'][0]['description']\nforeignlang = responses['responses'][0]['textAnnotations'][0]['locale']\nprint(foreignlang, foreigntext)",
"<h2> Translate sign </h2>",
"inputs=[foreigntext]\noutputs = service.translations().list(source=foreignlang, target='en', q=inputs).execute()\n# print(outputs)\nfor input, output in zip(inputs, outputs['translations']):\n print(\"{0} -> {1}\".format(input, output['translatedText']))",
"<h2> Sentiment analysis with Language API </h2>\n\nLet's evaluate the sentiment of some famous quotes using Google Cloud Natural Language API.",
"lservice = build('language', 'v1beta1', developerKey=APIKEY)\nquotes = [\n 'To succeed, you must have tremendous perseverance, tremendous will.',\n 'It’s not that I’m so smart, it’s just that I stay with problems longer.',\n 'Love is quivering happiness.',\n 'Love is of all passions the strongest, for it attacks simultaneously the head, the heart, and the senses.',\n 'What difference does it make to the dead, the orphans and the homeless, whether the mad destruction is wrought under the name of totalitarianism or in the holy name of liberty or democracy?',\n 'When someone you love dies, and you’re not expecting it, you don’t lose her all at once; you lose her in pieces over a long time — the way the mail stops coming, and her scent fades from the pillows and even from the clothes in her closet and drawers. '\n]\nfor quote in quotes:\n response = lservice.documents().analyzeSentiment(\n body={\n 'document': {\n 'type': 'PLAIN_TEXT',\n 'content': quote\n }\n }).execute()\n polarity = response['documentSentiment']['polarity']\n magnitude = response['documentSentiment']['magnitude']\n print('POLARITY=%s MAGNITUDE=%s for %s' % (polarity, magnitude, quote))",
"<h2> Speech API </h2>\n\nThe Speech API can work on streaming data, audio content encoded and embedded directly into the POST message, or on a file on Cloud Storage. Here I'll pass in this <a href=\"https://storage.googleapis.com/cloud-training-demos/vision/audio.raw\">audio file</a> in Cloud Storage.",
"sservice = build('speech', 'v1', developerKey=APIKEY)\nresponse = sservice.speech().recognize(\n body={\n 'config': {\n 'languageCode' : 'en-US',\n 'encoding': 'LINEAR16',\n 'sampleRateHertz': 16000\n },\n 'audio': {\n 'uri': 'gs://cloud-training-demos/vision/audio.raw'\n }\n }).execute()\nprint(response)\n\nprint(response['results'][0]['alternatives'][0]['transcript'])\nprint('Confidence=%f' % response['results'][0]['alternatives'][0]['confidence'])",
"<h2> Clean up </h2>\n\nRemember to delete the API key by visiting <a href=\"http://console.cloud.google.com/apis\">API console</a>.\nIf necessary, commit all your notebooks to git.\nIf you are running Datalab on a Compute Engine VM or delegating to one, remember to stop or shut it down so that you are not charged.\nChallenge Exercise\nHere are a few portraits from the Metropolitan Museum of Art, New York (they are part of a BigQuery public dataset ):\n\ngs://cloud-training-demos/images/met/APS6880.jpg\ngs://cloud-training-demos/images/met/DP205018.jpg\ngs://cloud-training-demos/images/met/DP290402.jpg\ngs://cloud-training-demos/images/met/DP700302.jpg\n\nUse the Vision API to identify which of these images depict happy people and which ones depict unhappy people.\nHint (highlight to see): <p style=\"color:white\">You will need to look for joyLikelihood and/or sorrowLikelihood from the response.</p>\nCopyright 2018 Google Inc.\nLicensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at\nhttp://www.apache.org/licenses/LICENSE-2.0\nUnless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
NifTK/NiftyNet
|
demos/module_examples/ImageSampler.ipynb
|
apache-2.0
|
[
"This demo presents the usage of NiftyNet's image sampler implemented at niftynet.contrib.dataset_sampler.*\nWhat's a sampler?\nThe sampler takes an image reader as input, produces image windows sampled from each image.\nWhy sampler?\nIn many cases in medical imaging applications, deep neural networks operate on image windows instead of the entire images/datasets, partly due to the GPU memory limitation and training efficiencies. The sampler provides ready-to-use interfaces to generate image windows, works well with tf.data.Dataset.\nBefore the demo...\nFirst make sure the source code is available, and import the module.\nFor NiftyNet installation, please checkout:\nhttp://niftynet.readthedocs.io/en/dev/installation.html\nFor demonstration purpose we download some demo data to ~/niftynet/data/\nFor visualisation we install matplotlib.",
"import sys\nniftynet_path = '/Users/foo/Documents/Niftynet/'\nsys.path.insert(0, niftynet_path)\n\nfrom niftynet.utilities.download import download\ndownload('mr_ct_regression_model_zoo_data')\n\n!{sys.executable} -m pip install matplotlib",
"Image as 'window'\nThe simplest use case is treating each image as an image window. This is implemented by default using ImageWindowDataset class from niftynet.contrib.dataset_sampler. This class also acts as a base class, which can be extended to generate smaller windows using different sampling strategies.",
"from niftynet.io.image_reader import ImageReader\nfrom niftynet.engine.image_window_dataset import ImageWindowDataset\n\n# creating an image reader.\ndata_param = \\\n {'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',\n 'filename_contains': 'nii'}}\nreader = ImageReader().initialise(data_param)\n\n# creating a window sampler dataset from the reader\nsampler = ImageWindowDataset(reader)",
"The sampler can be used as a numpy function, or a tensorflow operation.\nUse the sampler as a numpy function\nDirectly call the instance (this is actually invoking sampler.layer_op):",
"windows = sampler()\nprint(windows.keys(), windows['CT_location'], windows['CT'].shape)\n \nimport matplotlib.pyplot as plt\nplt.imshow(windows['CT'][0,:,:,0,0,0])\nplt.show()",
"Use the sampler as a tensorflow op\nFirst add a iterator node, this is wrapped as pop_batch_op(),\nthen run the op.",
"import tensorflow as tf\n# adding the tensorflow tensors\nnext_window = sampler.pop_batch_op()\n\n# run the tensors\nwith tf.Session() as sess:\n sampler.run_threads(sess) #initialise the iterator\n windows = sess.run(next_window)\n print(windows.keys(), windows['CT_location'], windows['CT'].shape)\n",
"The location array ['MR_location'] represents the spatial coordinates of the window: \n\n[subject_id, x_start, y_start, z_start, x_end, y_end, z_end]\n\nAs a numpy function, the output shape is (1, x, y, z, 1, 1) which represents batch, width, height, depth, time points, channels. As a tensorflow op, the output shape is (batch, x,[ y,][ z,] channels), which means the time dimension (not currently supported) and the spatial axes will be \"squeezed\" if the length along them is one. This simplifies the network definitions based on 2D or 3D outputs of the sampler.\n\nUniform sampler\nGenerating image windows randomly from images.\nThis is implemented by overriding the layer_op of ImageWindowDataset.\nThe following code creates a uniform sampler and draws an image window.\nTwo sections MR and CT are given as the input data parameter,\nthe reader loads these sections by matching the filenames, and\noutputs the sampling windows (from the same spatial coordinates for all input images) as a dictionary.\n\n\nWhen the spatial window sizes are different in MR and CT, concentric windows will be sampled.\n\n\nWhen the spatial window size is (0, 0, 0), size of the first image from the reader will be used as the window size.\n\n\nwindows_per_image parameter specifies the number of image windows from each image returned by the reader.",
"from niftynet.io.image_reader import ImageReader\nfrom niftynet.engine.sampler_uniform_v2 import UniformSampler\n\n# creating an image reader.\n# creating an image reader.\ndata_param = \\\n {'MR': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',\n 'filename_contains': 'nii',\n 'spatial_window_size': (80, 80, 1)},\n 'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/T2_corrected',\n 'filename_contains': 'nii',\n 'spatial_window_size': (80, 80, 1)},\n }\nwindow_sizes = {'MR': (80, 80, 1), 'CT': (80, 80, 1)}\nreader = ImageReader().initialise(data_param)\n\n# uniform sampler returns windows of 32^3-voxels\nuniform_sampler = UniformSampler(\n reader, window_sizes, batch_size=2, windows_per_image=5)\n\n\nimport tensorflow as tf\n# adding the tensorflow tensors\nnext_window = uniform_sampler.pop_batch_op()\n\n# run the tensors\nwith tf.Session() as sess:\n uniform_sampler.run_threads(sess) #initialise the iterator\n windows = sess.run(next_window)\n print(windows['MR_location'], windows['MR'].shape)\n print(windows['CT_location'], windows['CT'].shape)\n\n\nimport matplotlib.pyplot as plt\nplt.figure()\nplt.subplot(1,2,1)\nplt.imshow(windows['MR'][0,:,:,0])\nplt.subplot(1,2,2)\nplt.imshow(windows['CT'][0,:,:,0])\nplt.show()",
"Grid sampler\nGeneratoring image windows from images with a sliding window.\nThis is implemented by overriding the layer_op of ImageWindowDataset.\nThe window_border parameter controls the amount of overlap in between sampling windows.\nWhen the grid sampler is used for fully convolutional inference, the overlapping regions of the windows are cropped, so that the windows can be aggregated as an image at the resolution of the input. \nSee also: http://niftynet.readthedocs.io/en/dev/config_spec.html#border",
"from niftynet.io.image_reader import ImageReader\nfrom niftynet.engine.sampler_grid_v2 import GridSampler\n\n# creating an image reader.\ndata_param = \\\n {'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',\n 'filename_contains': 'nii'}}\nreader = ImageReader().initialise(data_param)\n\n# uniform sampler returns windows of 32^3-voxels\nuniform_sampler = GridSampler(reader, \n window_sizes=(42, 42, 1), \n window_border=(8,8,1), batch_size=1)\n\n\nimport tensorflow as tf\n# adding the tensorflow tensors\nnext_window = uniform_sampler.pop_batch_op()\n\n# run the tensors\nwith tf.Session() as sess:\n uniform_sampler.run_threads(sess) #initialise the iterator\n subject_id = 0\n coords = []\n while True:\n windows = sess.run(next_window)\n if not subject_id == windows['CT_location'][0,0]:\n break;\n #print(windows.keys(), windows['MR_location'], windows['MR'].shape)\n coords.append(windows['CT_location'])",
"Visualisation of the window coordinates (change window_sizes and window_border to see different window allocations):",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nfrom matplotlib.collections import PatchCollection\n\nf, (ax1, ax) = plt.subplots(1,2)\n\n# show image\n_, img, _ = reader(idx=0)\nprint(img['CT'].shape)\nplt.subplot(1,2,1)\nplt.imshow(img['CT'][:,:,0,0,0])\n\n# show sampled windows\nall_patch = []\nfor win in np.concatenate(coords, axis=0):\n patch = patches.Rectangle(\n (win[1], win[2]),\n win[4]-win[1], win[5]-win[2], linewidth=1)\n all_patch.append(patch)\nall_pc = PatchCollection(\n all_patch, alpha=0.1, edgecolor='r', facecolor='r')\nax.add_collection(all_pc)\nax.set_xlim([0, np.max(coords, axis=0)[0,4]])\nax.set_ylim([0, np.max(coords, axis=0)[0,5]])\nax.set_aspect('equal', 'datalim')\nplt.show()",
"Weighted sampler\nGeneratoring image windows from images with a sampling prior of the foreground.\nThis sampler uses a cumulative histogram for fast sampling, works with both continous and discrete maps.\nIt is implemented by overriding the layer_op of ImageWindowDataset.\nWeight map can be specified by an input specification with sampler as the key.\nThe following example uses a binary mask as the weight map.",
"from niftynet.io.image_reader import ImageReader\nfrom niftynet.engine.sampler_weighted_v2 import WeightedSampler\n\n# creating an image reader.\ndata_param = \\\n {'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',\n 'filename_contains': 'PAR.nii.gz'},\n 'sampler': {'path_to_search': '~/niftynet/data/mr_ct_regression/T2_mask',\n 'filename_contains': 'nii'}}\nreader = ImageReader().initialise(data_param)\n\nweighted_sampler = WeightedSampler(\n reader, window_sizes=(12, 12, 1), batch_size=1, windows_per_image=100)\n\nimport tensorflow as tf\n# adding the tensorflow tensors\nnext_window = weighted_sampler.pop_batch_op()\n\n# run the tensors\nwith tf.Session() as sess:\n weighted_sampler.run_threads(sess) #initialise the iterator\n coords = []\n for _ in range(200):\n windows = sess.run(next_window)\n #print(windows.keys(), windows['CT_location'], windows['CT'].shape)\n coords.append(windows['CT_location'])\n\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nfrom matplotlib.collections import PatchCollection\n\nf, (ax1, ax) = plt.subplots(1,2)\n\n# show image\n_, img, _ = reader(idx=0)\nprint(img['CT'].shape)\nplt.subplot(1,2,1)\nplt.imshow(img['CT'][:,:,0,0,0])\n#plt.subplot(1,2,2)\nax.imshow(img['sampler'][:,:,0,0,0])\n\n# show sampled windows\nall_patch = []\nfor win in np.concatenate(coords, axis=0):\n patch = patches.Rectangle(\n (win[2], win[1]),\n win[5]-win[2], win[4]-win[1], linewidth=1)\n all_patch.append(patch)\nall_pc = PatchCollection(\n all_patch, alpha=0.5, edgecolor='r', facecolor='r')\nax.add_collection(all_pc)\nplt.show()",
"Balanced sampler\nGeneratoring image windows from images with a sampling prior of the foreground.\nThis sampler generates image windows from a discrete label map as if every label\nhad the same probability of occurrence.\nIt is implemented by overriding the layer_op of ImageWindowDataset.\nWeight map can be specified by an input specification with sampler as the key.\nThe following example uses a foreground mask as the weight map.",
"from niftynet.io.image_reader import ImageReader\nfrom niftynet.engine.sampler_balanced_v2 import BalancedSampler\n\n# creating an image reader.\ndata_param = \\\n {'CT': {'path_to_search': '~/niftynet/data/mr_ct_regression/CT_zero_mean',\n 'filename_contains': 'PAR.nii.gz'},\n 'sampler': {'path_to_search': '~/niftynet/data/mr_ct_regression/T2_mask',\n 'filename_contains': 'nii'}}\nreader = ImageReader().initialise(data_param)\n\nbalanced_sampler = BalancedSampler(\n reader, window_sizes=(12, 12, 1), windows_per_image=100)\n\nimport tensorflow as tf\n# adding the tensorflow tensors\nnext_window = balanced_sampler.pop_batch_op()\n\n# run the tensors\nwith tf.Session() as sess:\n balanced_sampler.run_threads(sess) #initialise the iterator\n coords = []\n for _ in range(200):\n windows = sess.run(next_window)\n #print(windows.keys(), windows['CT_location'], windows['CT'].shape)\n coords.append(windows['CT_location'])",
"Visualisation of the window coordinates (change data_param see different window allocations):",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib.patches as patches\nfrom matplotlib.collections import PatchCollection\n\nf, (ax1, ax) = plt.subplots(1,2)\n\n# show image\n_, img, _ = reader(idx=0)\nprint(img['CT'].shape)\nplt.subplot(1,2,1)\nplt.imshow(img['CT'][:,:,0,0,0])\n#plt.subplot(1,2,2)\nax.imshow(img['sampler'][:,:,0,0,0])\n\n# show sampled windows\nall_patch = []\nfor win in np.concatenate(coords, axis=0):\n patch = patches.Rectangle(\n (win[2], win[1]),\n win[5]-win[2], win[4]-win[1], linewidth=1)\n all_patch.append(patch)\nall_pc = PatchCollection(\n all_patch, alpha=0.5, edgecolor='r', facecolor='r')\nax.add_collection(all_pc)\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dtamayo/reboundx
|
ipython_examples/YarkovskyEffect.ipynb
|
gpl-3.0
|
[
"Yarkovsky Effect\nThis example shows how to add the Yarkovsky effect in a Rebound simulation. There are two versions, which we call the 'Full Version' and the 'Simple Version.' A special parameter called 'ye_flag' is used to switch between the two. The difference between the versions and what situations they're better suited for is discussed in more detail below. \nFor more information on this effect, please visit: (implementation paper in progress) \nWe'll start with the Full Version.\nFull Version\nThis version of the effect is based off of the equations found in Veras et al. (2015). A link to the paper is provided below. The Full Version can be used to get detailed calculations of the Yarkovsky effect on a particular body. However, it requires a large amount of parameters that may be difficult to find for a particular object. It also takes more computational time due to the large amount of equations that must be calaculated between each time step of the simulation. This version of the effect can be used to get accurate calculations on how a body is perturbed by the Yarkovsky effect.\nLink to paper: https://ui.adsabs.harvard.edu/abs/2015MNRAS.451.2814V/abstract\nBelow is a simple example to show how to add the effect to a simulation. First, we create a Rebound simulation with the Sun and a test particle (which will be considered an asteroid) at .5 AU.",
"import rebound\nimport reboundx\nimport numpy as np\nimport astropy.units as u\nimport astropy.constants as constants\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\n#Simulation begins here\nsim = rebound.Simulation()\n\nsim.units = ('yr', 'AU', 'Msun') #changes simulation and G to units of solar masses, years, and AU \nsim.integrator = \"whfast\" #integrator for sim\nsim.dt = .05 #timestep for sim\n\nsim.add(m=1) #Adds Sun \nsim.add(a=.5, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds test particle \n\n#Moves all particles to center of momentum frame\nsim.move_to_com()\n\n#Gives orbital information before the simulation begins\nprint(\"\\n***INITIAL ORBITS:***\")\nfor orbit in sim.calculate_orbits():\n print(orbit)",
"As with all REBOUNDx effects, the parameters must be inputed with the same units as the simulation (in this case it's AU/Msun/yr). We'll use the astropy units module to help avoid errors",
"density = (3000.0*u.kg/u.m**3).to(u.Msun/u.AU**3)\nc = (constants.c).to(u.AU/u.yr) #speed of light\nlstar = (3.828e26*u.kg*u.m**2/u.s**3).to(u.Msun*u.AU**2/u.yr**3) #luminosity of star\nradius = (1000*u.m).to(u.AU) #radius of object\nalbedo = .017 #albedo of object\nstef_boltz = constants.sigma_sb.to(u.Msun/u.yr**3/u.K**4) #Stefan-Boltzmann constant\nemissivity = .9 #emissivity of object\nk = .25 #constant between\nGamma = (310*u.kg/u.s**(5/2)).to(u.Msun/u.yr**(5/2)) #thermal inertia of object\nrotation_period = (15470.9*u.s).to(u.yr) #rotation period of object",
"We then add the Yarkovsky effect and the required parameters for this version. Importantly, we must set 'ye_flag' to 0 to get the Full Version. Physical constants and the stellar luminosity get added to the effect yark",
"#Loads the effect into Rebound\nrebx = reboundx.Extras(sim)\nyark = rebx.load_force(\"yarkovsky_effect\")\n\n#Sets the parameters for the effect\nyark.params[\"ye_c\"] = c.value #set on the sim and not a particular particle\nyark.params[\"ye_lstar\"] = lstar.value #set on the sim and not a particular particle\nyark.params[\"ye_stef_boltz\"] = stef_boltz.value #set on the sim and not a particular particle",
"Other parameters need to be added to each particle feeling the Yarkovsky effect",
"# Sets parameters for the particle\nps = sim.particles\nps[1].r = radius.value #remember radius is not inputed as a Rebx parameter - it's inputed on the particle in the Rebound sim\nps[1].params[\"ye_flag\"] = 0 #setting this flag to 0 will give us the full version of the effect\nps[1].params[\"ye_body_density\"] = density.value\nps[1].params[\"ye_albedo\"] = albedo\nps[1].params[\"ye_emissivity\"] = emissivity\nps[1].params[\"ye_k\"] = k\nps[1].params[\"ye_thermal_inertia\"] = Gamma.value\nps[1].params[\"ye_rotation_period\"] = rotation_period.value\n\n# For this example we assume the object has a spin axis perpendicular to the orbital plane: unit vector = (0,0,1)\nps[1].params[\"ye_spin_axis_x\"] = 0\nps[1].params[\"ye_spin_axis_y\"] = 0\nps[1].params[\"ye_spin_axis_z\"] = 1\n\nrebx.add_force(yark) #adds the force to the simulation",
"We integrate this system for 100,000 years and print out the difference between the particle's semi-major axis before and after the simulation.",
"%%time\ntmax=100000 # in yrs\nNout = 1000\ntimes = np.linspace(0, tmax, Nout)\na_start = .5 #starting semi-major axis for the asteroid\na = np.zeros(Nout)\nfor i, time in enumerate(times):\n a[i] = ps[1].a\n sim.integrate(time)\na_final = ps[1].a #semi-major axis of asteroid after the sim \n \nprint(\"CHANGE IN SEMI-MAJOR AXIS:\", a_final-a_start, \"AU\\n\") #prints difference between the initial and final semi-major axes of asteroid\n\nfig, ax = plt.subplots()\nax.plot(times, a-a_start, '.')\nax.set_xlabel('Time (yrs)')\nax.set_ylabel('Change in semimajor axis (AU)')",
"Simple Version\nThis version of the effect is based off of equations from Veras et al. (2019). Once again, a link to this paper is provided below. This version simplifies the equations by placing constant values in a rotation matrix that in general is time-dependent. It requires fewer parameters than the full version and takes less computational time. However, it is mostly useful only to get a general idea on how much the effect can push bodies inwards or outwards. This version of the effect is better for simulating large groups of asteroids or trying to see general trends in the behavior of a body. \nLink to paper: https://ui.adsabs.harvard.edu/abs/2019MNRAS.485..708V/abstract\nWe'll use the same setup as before, but we'll also add another asteroid at .75 AU with identical physical properties. Let's start by creating a Rebound simulation again.",
"sim = rebound.Simulation()\n\nsim.units = ('yr', 'AU', 'Msun') #changes simulation and G to units of solar masses, years, and AU \nsim.integrator = \"whfast\" #integrator for sim\nsim.dt = .05 #timestep for sim\n\nsim.add(m=1) #Adds Sun \nsim.add(a=.5, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds test particle \nsim.add(a=.75, f=0, Omega=0, omega=0, e=0, inc=0, m=0) #adds a second test particle\n\n#Moves all particles to center of momentum frame\nsim.move_to_com()\n\n#Gives orbital information before the simulation begins\nprint(\"\\n***INITIAL ORBITS:***\")\nfor orbit in sim.calculate_orbits():\n print(orbit)",
"We then add the Yarkovsky effect from Reboundx and the necesary parameters for this version. This time, we must make sure that 'ye_flag' is set to 1 or -1 to get the Simple Version of the effect. Setting it to 1 will push the asteroid outwards, while setting it to -1 will push it inwards. We'll push out our original asteroid and push in our new one. We use the same physical properties as in the example above:",
"#Loads the effect into Rebound\nrebx = reboundx.Extras(sim)\nyark = rebx.load_force(\"yarkovsky_effect\")\n\n#Sets the parameters for the effect\nyark.params[\"ye_c\"] = c.value\nyark.params[\"ye_lstar\"] = lstar.value\n\nps = sim.particles #simplifies way to access particles parameters \nps[1].params[\"ye_flag\"] = 1 #setting this flag to 1 will give us the outward version of the effect \nps[1].params[\"ye_body_density\"] = density.value\nps[1].params[\"ye_albedo\"] = albedo\nps[1].r = radius.value #remember radius is not inputed as a Rebx parameter - it's inputed on the particle in the Rebound sim\n\nps[2].params[\"ye_flag\"] = -1 #setting this flag to -1 will give us the inward version of the effect \nps[2].params[\"ye_body_density\"] = density.value\nps[2].params[\"ye_albedo\"] = albedo\nps[2].r = radius.value \n\nrebx.add_force(yark) #adds the force to the simulation",
"Now we run the sim for 100,000 years and print out the results for both asteroids. Note the difference in simulation times between the versions. Even with an extra particle, the simple version was faster than the full version.",
"%%time\ntmax=100000 # in yrs\n\na_start_1 = .5 #starting semi-major axis for the 1st asteroid\na_start_2 = .75 #starting semi-major axis for the 2nd asteroid\n\na1, a2 = np.zeros(Nout), np.zeros(Nout)\nfor i, time in enumerate(times):\n a1[i] = ps[1].a\n a2[i] = ps[2].a\n sim.integrate(time)\n\na_final_1 = ps[1].a #semi-major axis of 1st asteroid after the sim\na_final_2 = ps[2].a #semi-major axis of 2nd asteroid after the sim\n \nprint(\"CHANGE IN SEMI-MAJOR AXIS(Asteroid 1):\", a_final_1-a_start_1, \"AU\\n\")\nprint(\"CHANGE IN SEMI-MAJOR AXIS(Asteroid 2):\", a_final_2-a_start_2, \"AU\\n\")\n\nfig, ax = plt.subplots()\nax.plot(times, a1-a_start_1, '.', label='Asteroid 1')\nax.plot(times, a2-a_start_2, '.', label='Asteroid 2')\nax.set_xlabel('Time (yrs)')\nax.set_ylabel('Change in semimajor axis (AU)')\nax.legend()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
AllenDowney/ModSim
|
python/soln/chap02.ipynb
|
gpl-2.0
|
[
"Chapter 2\nModeling and Simulation in Python\nCopyright 2021 Allen Downey\nLicense: Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International",
"# install Pint if necessary\n\ntry:\n import pint\nexcept ImportError:\n !pip install pint\n\n# download modsim.py if necessary\n\nfrom os.path import exists\n\nfilename = 'modsim.py'\nif not exists(filename):\n from urllib.request import urlretrieve\n url = 'https://raw.githubusercontent.com/AllenDowney/ModSim/main/'\n local, _ = urlretrieve(url+filename, filename)\n print('Downloaded ' + local)\n\n# import functions from modsim\n\nfrom modsim import *",
"This chapter presents a simple model of a bike share system and\ndemonstrates the features of Python we'll use to develop simulations of real-world systems.\nAlong the way, we'll make decisions about how to model the system. In\nthe next chapter we'll review these decisions and gradually improve the model.\nModeling a Bike Share System\nImagine a bike share system for students traveling between Olin College and Wellesley College, which are about 3 miles apart in eastern Massachusetts.\nSuppose the system contains 12 bikes and two bike racks, one at Olin and one at Wellesley, each with the capacity to hold 12 bikes.\nAs students arrive, check out a bike, and ride to the other campus, the number of bikes in each location changes. In the simulation, we'll need to keep track of where the bikes are. To do that, we'll use a function called State, which is defined in the ModSim library.",
"bikeshare = State(olin=10, wellesley=2)",
"The expressions in parentheses are keyword arguments.\nThey create two variables, olin and wellesley, and give them values.\nThen we call the State function.\nThe result is a State object, which is a collection of state variables.\nIn this example, the state variables represent the number of\nbikes at each location. The initial values are 10 and 2, indicating that there are 10 bikes at Olin and 2 at Wellesley. \nThe State object is assigned to a new variable named bikeshare.\nWe can read the variables inside a State object using the dot\noperator, like this:",
"bikeshare.olin",
"And this:",
"bikeshare.wellesley",
"Or, to display the state variables and their values, you can just type the name of the object:",
"bikeshare",
"These values make up the state of the system.\nThe ModSim library provides a function called show that displays a State object as a table.",
"show(bikeshare)",
"You don't have to use show, but I think it looks better.\nWe can update the state by assigning new values to the variables. \nFor example, if a student moves a bike from Olin to Wellesley, we can figure out the new values and assign them:",
"bikeshare.olin = 9\nbikeshare.wellesley = 3",
"Or we can use update operators, -= and +=, to subtract 1 from\nolin and add 1 to wellesley:",
"bikeshare.olin -= 1\nbikeshare.wellesley += 1",
"The result is the same either way.\nDefining functions\nSo far we have used functions defined in NumPy and ModSim. Now we're going to define our own function.\nWhen you are developing code in Jupyter, it is often efficient to write a few lines of code, test them to confirm they do what you intend, and then use them to define a new function. For example, these lines move a bike from Olin to Wellesley:",
"bikeshare.olin -= 1\nbikeshare.wellesley += 1",
"Rather than repeat them every time a bike moves, we can define a new\nfunction:",
"def bike_to_wellesley():\n bikeshare.olin -= 1\n bikeshare.wellesley += 1",
"def is a special word in Python that indicates we are defining a new\nfunction. The name of the function is bike_to_wellesley. The empty\nparentheses indicate that this function requires no additional\ninformation when it runs. The colon indicates the beginning of an\nindented code block.\nThe next two lines are the body of the function. They have to be\nindented; by convention, the indentation is 4 spaces.\nWhen you define a function, it has no immediate effect. The body of the\nfunction doesn't run until you call the function. Here's how to call\nthis function:",
"bike_to_wellesley()",
"When you call the function, it runs the statements in the body, which\nupdate the variables of the bikeshare object; you can check by\ndisplaying the new state.",
"bikeshare",
"When you call a function, you have to include the parentheses. If you\nleave them out, you get this:",
"bike_to_wellesley",
"This result indicates that bike_to_wellesley is a function. You don't\nhave to know what __main__ means, but if you see something like this,\nit probably means that you looked up a function but you didn't actually\ncall it. So don't forget the parentheses.\nPrint statements\nAs you write more complicated programs, it is easy to lose track of what\nis going on. One of the most useful tools for debugging is the print\nstatement, which displays text in the Jupyter notebook.\nNormally when Jupyter runs the code in a cell, it displays the value of\nthe last line of code. For example, if you run:",
"bikeshare.olin\nbikeshare.wellesley",
"Jupyter runs both lines, but it only displays the value of the\nsecond. If you want to display more than one value, you can use\nprint statements:",
"print(bikeshare.olin)\nprint(bikeshare.wellesley)",
"When you call the print function, you can put a variable name in\nparentheses, as in the previous example, or you can provide a sequence\nof variables separated by commas, like this:",
"print(bikeshare.olin, bikeshare.wellesley)",
"Python looks up the values of the variables and displays them; in this\nexample, it displays two values on the same line, with a space between\nthem.\nPrint statements are useful for debugging functions. For example, we can\nadd a print statement to move_bike, like this:",
"def bike_to_wellesley():\n print('Moving a bike to Wellesley')\n bikeshare.olin -= 1\n bikeshare.wellesley += 1",
"Each time we call this version of the function, it displays a message,\nwhich can help us keep track of what the program is doing.\nThe message in this example is a string, which is a sequence of\nletters and other symbols in quotes.\nJust like bike_to_wellesley, we can define a function that moves a\nbike from Wellesley to Olin:",
"def bike_to_olin():\n print('Moving a bike to Olin')\n bikeshare.wellesley -= 1\n bikeshare.olin += 1",
"And call it like this:",
"bike_to_olin()",
"One benefit of defining functions is that you avoid repeating chunks of\ncode, which makes programs smaller. Another benefit is that the name you\ngive the function documents what it does, which makes programs more\nreadable.\nIf statements\nThe ModSim library provides a function called flip that generates random \"coin tosses\".\nWhen you call it, you provide a probability between 0 and 1, like this:",
"flip(0.7)",
"The result is one of two values: True with probability 0.7 (in this example) or False\nwith probability 0.3. If you run flip like this 100 times, you should\nget True about 70 times and False about 30 times. But the results\nare random, so they might differ from these expectations.\nTrue and False are special values defined by Python. Note that they\nare not strings. There is a difference between True, which is a\nspecial value, and 'True', which is a string.\nTrue and False are called boolean values because they are\nrelated to Boolean algebra (https://modsimpy.com/boolean).\nWe can use boolean values to control the behavior of the program, using\nan if statement:",
"if flip(0.5):\n print('heads')",
"If the result from flip is True, the program displays the string\n'heads'. Otherwise it does nothing.\nThe syntax for if statements is similar to the syntax for\nfunction definitions: the first line has to end with a colon, and the\nlines inside the if statement have to be indented.\nOptionally, you can add an else clause to indicate what should\nhappen if the result is False:",
"if flip(0.5):\n print('heads')\nelse:\n print('tails') ",
"Now we can use flip to simulate the arrival of students who want to\nborrow a bike. Suppose students arrive at the Olin station every 2\nminutes, on average. In that case, the chance of an arrival during any\none-minute period is 50%, and we can simulate it like this:",
"if flip(0.5):\n bike_to_wellesley()",
"If students arrive at the Wellesley station every 3 minutes, on average,\nthe chance of an arrival during any one-minute period is 33%, and we can\nsimulate it like this:",
"if flip(0.33):\n bike_to_olin()",
"We can combine these snippets into a function that simulates a time\nstep, which is an interval of time, in this case one minute:",
"def step():\n if flip(0.5):\n bike_to_wellesley()\n \n if flip(0.33):\n bike_to_olin()",
"Then we can simulate a time step like this:",
"step()",
"Even though there are no values in parentheses, we have to include them.\nParameters\nThe previous version of step is fine if the arrival probabilities\nnever change, but in reality, these probabilities vary over time.\nSo instead of putting the constant values 0.5 and 0.33 in step we can replace them with parameters. Parameters are variables whose values are set when a function is called.\nHere's a version of step that takes two parameters, p1 and p2:",
"def step(p1, p2):\n if flip(p1):\n bike_to_wellesley()\n \n if flip(p2):\n bike_to_olin()",
"The values of p1 and p2 are not set inside this function; instead,\nthey are provided when the function is called, like this:",
"step(0.5, 0.33)",
"The values you provide when you call the function are called\narguments. The arguments, 0.5 and 0.33 in this example, get\nassigned to the parameters, p1 and p2, in order. So running this\nfunction has the same effect as:",
"p1 = 0.5\np2 = 0.33\n\nif flip(p1):\n bike_to_wellesley()\n \nif flip(p2):\n bike_to_olin()",
"The advantage of using parameters is that you can call the same function many times, providing different arguments each time.\nAdding parameters to a function is called generalization, because it makes the function more general, that is, less specialized.\nFor loops\nAt some point you will get sick of running cells over and over.\nFortunately, there is an easy way to repeat a chunk of code, the for\nloop. Here's an example:",
"for i in range(3):\n print(i)\n bike_to_wellesley()",
"The syntax here should look familiar; the first line ends with a\ncolon, and the lines inside the for loop are indented. The other\nelements of the loop are:\n\n\nThe words for and in are special words we have to use in a for\n loop.\n\n\nrange is a Python function we're using here to control the number of times the loop runs.\n\n\ni is a loop variable that gets created when the for loop runs.\n\n\nWhen this loop runs, it runs the statements inside the loop three times. The first time, the value of i is 0; the second time, it is 1; the third time, it is 2.\nEach time through the loop, it prints the value of i and moves one bike Olin to Wellesley.\nTimeSeries\nWhen we run a simulation, we often want to save the results for later analysis. The ModSim library provides a TimeSeries object for this purpose. A TimeSeries contains a sequence of time stamps and a\ncorresponding sequence of quantities.\nIn this example, the time stamps are integers representing minutes, and the quantities are the number of bikes at one location.\nSince we have moved a number of bikes around, let's start again with a new State object.",
"bikeshare = State(olin=10, wellesley=2)",
"We can create a new, empty TimeSeries like this:",
"results = TimeSeries()",
"And we can add a quantity like this:",
"results[0] = bikeshare.olin",
"The number in brackets is the time stamp, also called a label.\nWe can use a TimeSeries inside a for loop to store the results of the simulation:",
"for i in range(3):\n print(i)\n step(0.6, 0.6)\n results[i+1] = bikeshare.olin",
"Each time through the loop, we print the value of i and call step, which updates bikeshare.\nThen we store the number of bikes at Olin in results. \nWe use the loop variable, i, to compute the time stamp, i+1.\nThe first time through the loop, the value of i is 0, so the time stamp is 1.\nThe last time, the value of i is 2, so the time stamp is 3.\nWhen the loop exits, results contains 4 time stamps, from 0 through\n3, and the number of bikes at Olin at the end of each time step.\nWe can display the TimeSeries like this:",
"results",
"The left column is the time stamps; the right column is the quantities (which might be negative, depending on the state of the system).\nAt the bottom, dtype is the type of the data in the TimeSeries; you can ignore this for now.\nThe show function displays a TimeSeries as a table:",
"show(results)",
"Plotting\nresults provides a function called plot we can use to plot\nthe results, and the ModSim library provides decorate, which we can use to label the axes and give the figure a title:",
"results.plot()\n\ndecorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')",
"Summary\nThis chapter introduces the tools we need to run simulations, record the results, and plot them.\nWe used a State object to represent the state of the system.\nThen we used the flip function and an if statement to simulate a single time step.\nWe used for loop to simulate a series of steps, and a TimeSeries to record the results.\nFinally, we used plot and decorate to plot the results.\nIn the next chapter, we will extend this simulation to make it a little more realistic.\nExercises\nExercise: What happens if you spell the name of a state variable wrong? Edit the following cell, change the spelling of wellesley, and run it.\nThe error message uses the word \"attribute\", which is another name for what we are calling a state variable.",
"bikeshare = State(olin=10, wellesley=2)\n\nbikeshare.wellesley",
"Exercise: Make a State object with a third state variable, called babson, with initial value 0, and display the state of the system.",
"# Solution\n\nbikeshare = State(olin=10, wellesley=2, babson=0)\nshow(bikeshare)",
"Exercise: Wrap the code in the chapter in a function named run_simulation that takes three parameters, named p1, p2, and num_steps.\nIt should:\n\n\nCreate a TimeSeries object to hold the results.\n\n\nUse a for loop to run step the number of times specified by num_steps, passing along the specified values of p1 and p2.\n\n\nAfter each step, it should save the number of bikes at Olin in the TimeSeries.\n\n\nAfter the for loop, it should plot the results and\n\n\nDecorate the axes.\n\n\nTo test your function:\n\n\nCreate a State object with the initial state of the system.\n\n\nCall run_simulation with appropriate parameters.",
"# Solution\n\ndef run_simulation(p1, p2, num_steps):\n results = TimeSeries()\n results[0] = bikeshare.olin\n \n for i in range(num_steps):\n step(p1, p2)\n results[i+1] = bikeshare.olin\n \n results.plot()\n decorate(title='Olin-Wellesley Bikeshare',\n xlabel='Time step (min)', \n ylabel='Number of bikes')\n\n# Solution\n\nbikeshare = State(olin=10, wellesley=2)\nrun_simulation(0.3, 0.2, 60)",
"Under the Hood\nThis section contains additional information about the functions we've used and pointers to their documentation.\nYou don't need to know anything in these sections, so if you are already feeling overwhelmed, you might want to skip them. But if you are curious, read on.\nState and TimeSeries objects are based on the Series object defined by a the Pandas library.\nThe documentation is at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.html.\nshow works by creating another Pandas object, called a DataFrame, which can be displayed as a table.\nWe'll use DataFrame objects in future chapters.\nSeries objects provide their own plot function which is why we call it like this:\nresults.plot()\nInstead of like this:\nplot(results)\nYou can read the documentation of Series.plot at https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.Series.plot.html.\ndecorate is based on Matplotlib, which is a widely-used plotting library for Python. Matplotlib provides separate functions for title, xlabel, and ylabel.\ndecorate makes them a little easier to use.\nFor the list of keyword arguments you can pass to decorate, see https://matplotlib.org/3.2.2/api/axes_api.html?highlight=axes#module-matplotlib.axes.\nThe flip function uses NumPy's random function to generate a random number between 0 and 1, then returns True or False with the given probability.\nYou can get the source code for flip (or any other function) by running the following cell.",
"source_code(flip)",
"You might not understand everything in this function yet, but you will."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yandexdataschool/LHCb-topo-trigger
|
HLT2-TreesPruning.ipynb
|
apache-2.0
|
[
"%run additional.ipynb\n\n%pylab inline\n\npandas.set_option('display.max_colwidth', 120)\n\nPROFILE = 'ssh-ipy'",
"HLT2 nbody classification\ndid preselections:\n\nany sv.n, \nany sv.minpt\nsv.nlt16 < 2\n\nTraining channels (read data)\nWe will use just 11114001, 11296013, 11874042, 12103035, 13246001, 13264021",
"sig_train_modes_names = [11114001, 11296013, 11874042, 12103035, 13246001, 13264021]\nbck_train_mode_name = 30000000\nsig_train_files = ['mod_{}.csv'.format(name) for name in sig_train_modes_names]\nbck_train_files = 'mod_30000000.csv'\nfolder = \"datasets/prepared_hlt_body/\"\n\n# concat all signal data\nif not os.path.exists(folder + 'signal_hlt2.csv'):\n concat_files(folder, sig_train_files, os.path.join(folder , 'signal_hlt2.csv'))\n\nsignal_data = pandas.read_csv(os.path.join(folder , 'signal_hlt2.csv'), sep='\\t')\nbck_data = pandas.read_csv(os.path.join(folder , bck_train_files), sep='\\t')\n\nsignal_data.columns",
"Counting events and svrs,\nthat passed L0 and GoodGenB preselection (this data was generated by skim)",
"print 'Signal', statistic_length(signal_data)\nprint 'Bck', statistic_length(bck_data)\n\ntotal_bck_events = statistic_length(bck_data)['Events'] + empty_events[bck_train_mode_name]\ntotal_signal_events_by_mode = dict()\nfor mode in sig_train_modes_names:\n total_signal_events_by_mode[mode] = statistic_length(signal_data[signal_data['mode'] == mode])['Events'] + empty_events[mode]",
"events distribution by mode",
"print 'Bck:', total_bck_events\n'Signal:', total_signal_events_by_mode",
"Define variables",
"variables = [\"n\", \"mcor\", \"chi2\", \"eta\", \"fdchi2\", \"minpt\", \"nlt16\", \"ipchi2\", \"n1trk\", \"sumpt\"]",
"Counting events and svrs,\nwhich passed pass_nbody (equivalent Mike's preselections for nbody selection)",
"# hlt2 nbody selection\nsignal_data = signal_data[(signal_data['pass_nbody'] == 1) & (signal_data['mcor'] <= 10e3)]\nbck_data = bck_data[(bck_data['pass_nbody'] == 1) & (bck_data['mcor'] <= 10e3)]\n\nprint 'Signal', statistic_length(signal_data)\nprint 'Bck', statistic_length(bck_data)\n\ntotal_signal_events_by_mode_presel = dict()\nfor mode in sig_train_modes_names:\n total_signal_events_by_mode_presel[mode] = statistic_length(signal_data[signal_data['mode'] == mode])['Events']\ntotal_bck_events_presel = statistic_length(bck_data)['Events']",
"events distribution by mode",
"print 'Bck:', total_bck_events_presel\n'Signal:', total_signal_events_by_mode_presel\n\nsignal_data.head()",
"Prepare train/test splitting\nDivide events which passed alll preselections into two equal parts randomly",
"ds_train_signal, ds_train_bck, ds_test_signal, ds_test_bck = prepare_data(signal_data, bck_data, 'unique')",
"train: counting events and svrs",
"print 'Signal', statistic_length(ds_train_signal)\nprint 'Bck', statistic_length(ds_train_bck)\n\ntrain = pandas.concat([ds_train_bck, ds_train_signal])",
"test: counting events and svrs",
"print 'Signal', statistic_length(ds_test_signal)\nprint 'Bck', statistic_length(ds_test_bck)\n\ntest = pandas.concat([ds_test_bck, ds_test_signal])",
"Define all total events in test samples\n(which passed just l0 and goodgenB) using also empty events. Suppose that events which didn't pass pass_nboby also were equal randomly divided into training and test samples",
"total_test_bck_events = (total_bck_events - total_bck_events_presel) // 2 + statistic_length(ds_test_bck)['Events']\ntotal_test_signal_events = dict()\nfor mode in sig_train_modes_names:\n total_not_passed_signal = total_signal_events_by_mode[mode] - total_signal_events_by_mode_presel[mode]\n total_test_signal_events[mode] = total_not_passed_signal // 2 + \\\n statistic_length(ds_test_signal[ds_test_signal['mode'] == mode])['Events']\n\nprint 'Bck total test events:', total_test_bck_events\n'Signal total test events:', total_test_signal_events\n\nimport cPickle\nif os.path.exists('models/prunned.pkl'):\n with open('models/prunned.pkl', 'r') as file_pr:\n estimators = cPickle.load(file_pr)",
"Matrixnet training",
"from rep_ef.estimators import MatrixNetSkyGridClassifier",
"Base model with 5000 trees",
"ef_base = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False)\nef_base.fit(train, train['signal'])",
"Base BBDT model",
"special_b = {\n'n': [2.5, 3.5],\n'mcor': [2000,3000,4000,5000,7500], # I want to remove splits too close the the B mass as I was looking in simulation and this could distort the mass peak (possibly)\n'chi2': [1,2.5,5,7.5,10,100], # I also propose we add a cut to the pre-selection of chi2 < 1000. I don't want to put in splits at too small values here b/c these type of inputs are never modeled quite right in the simulation (they always look a bit more smeared in data).\n'sumpt': [3000,4000,5000,6000,7500,9000,12e3,23e3,50e3], # I am happy with the MN splits here (these are almost \"as is\" from modify-6)\n'eta': [2.5,3,3.75,4.25,4.5], # Close to MN. \n'fdchi2': [33,125,350,780,1800,5000,10000], # I want to make the biggest split 10e3 because in the simulated events there is pretty much only BKGD above 40e3 but we don't want the BDT to learn to kill these as new particles would live here. Otherwise I took the MN splits and modified the first one (the first one is 5sigma now).\n'minpt': [350,500,750,1500,3000,5000], # let's make 500 the 2nd split so that this lines up with the HLT1 SVs.\n'nlt16': [0.5],\n'ipchi2': [8,26,62,150,500,1000], # I also propose we add a cut of IP chi2 < 5000 as it's all background out there. \n'n1trk': [0.5, 1.5, 2.5, 3.5]\n}\n\nef_base_bbdt = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False, intervals=special_b)\nef_base_bbdt.fit(train, train['signal'])",
"BBDT-5, 6",
"ef_base_bbdt5 = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False, intervals=5)\nef_base_bbdt5.fit(train, train['signal'])\n\nef_base_bbdt6 = MatrixNetSkyGridClassifier(train_features=variables, user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False, intervals=6)\nef_base_bbdt6.fit(train, train['signal'])",
"Pruning",
"from rep.data import LabeledDataStorage\nfrom rep.report import ClassificationReport\nreport = ClassificationReport({'base': ef_base}, LabeledDataStorage(test, test['signal']))\n\nreport.roc()",
"Minimize log_loss\nон же BinomialDeviance",
"%run pruning.py",
"Training sample is cut to be aliquot 8",
"new_trainlen = (len(train) // 8) * 8\ntrainX = train[ef_base.features][:new_trainlen].values\ntrainY = train['signal'][:new_trainlen].values \ntrainW = numpy.ones(len(trainY))\ntrainW[trainY == 0] *= sum(trainY) / sum(1 - trainY)\n\nnew_features, new_formula_mx, new_classifier = select_trees(trainX, trainY, sample_weight=trainW,\n initial_classifier=ef_base,\n iterations=100, n_candidates=100, \n learning_rate=0.1, regularization=50.)\n\nprunned = cPickle.loads(cPickle.dumps(ef_base))\nprunned.formula_mx = new_formula_mx\n\ndef mode_scheme_fit(train, base, suf, model_file):\n blending_parts = OrderedDict()\n for n_ch, ch in enumerate(sig_train_modes_names):\n temp = FoldingClassifier(base_estimator=base, random_state=11, features=variables, ipc_profile=PROFILE)\n temp_data = train[(train['mode'] == ch) | (train['mode'] == bck_train_mode_name)]\n temp.fit(temp_data, temp_data['signal'])\n blending_parts['ch' + str(n_ch) + suf] = temp\n import cPickle\n with open(model_file, 'w') as f:\n cPickle.dump(blending_parts, f)\n\ndef mode_scheme_predict(data, suf, model_file, mode='train'):\n with open(model_file, 'r') as f:\n blending_parts = cPickle.load(f)\n for n_ch, ch in enumerate(sig_train_modes_names):\n temp_name = 'ch' + str(n_ch) + suf\n if mode == 'train':\n temp_key = ((data['mode'] == ch) | (data['mode'] == bck_train_mode_name))\n data.ix[temp_key, temp_name] = blending_parts[temp_name].predict_proba(\n data[temp_key])[:, 1]\n data.ix[~temp_key, temp_name] = blending_parts[temp_name].predict_proba(\n data[~temp_key])[:, 1]\n else:\n data[temp_name] = blending_parts[temp_name].predict_proba(data)[:, 1]\n\ndef get_best_svr_by_channel(data, feature_mask, count=1):\n add_events = []\n for id_est, channel in enumerate(sig_train_modes_names):\n train_part = data[(data['mode'] == channel)]\n for num, group in train_part.groupby('unique'):\n index = numpy.argsort(group[feature_mask.format(id_est)].values)[::-1]\n add_events.append(group.iloc[index[:count], :])\n good_events = pandas.concat([data[(data['mode'] == bck_train_mode_name)]] + add_events)\n print len(good_events)\n return good_events\n\nfrom sklearn.ensemble import RandomForestClassifier\nfrom rep.metaml import FoldingClassifier\nbase = RandomForestClassifier(n_estimators=500, min_samples_leaf=50, max_depth=6,\n max_features=7, n_jobs=8)\n\nmode_scheme_fit(train, base, '', 'forest_trick.pkl')\nmode_scheme_predict(train, '', 'forest_trick.pkl')\nmode_scheme_predict(test, '', 'forest_trick.pkl', mode='test')\n\ngood_events = get_best_svr_by_channel(train, 'ch{}', 2)\nforest_mn = MatrixNetSkyGridClassifier(train_features=variables,\n user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False)\nforest_mn.fit(good_events, good_events['signal'])\n\nforest_mn_bbdt = MatrixNetSkyGridClassifier(train_features=variables,\n user_name='antares',\n connection='skygrid',\n iterations=5000, sync=False, intervals=special_b)\nforest_mn_bbdt.fit(good_events, good_events['signal'])\n\nnew_trainlen = (len(good_events) // 8) * 8\ntrainX = good_events[forest_mn.features][:new_trainlen].values\ntrainY = good_events['signal'][:new_trainlen].values \ntrainW = numpy.ones(len(trainY))\ntrainW[trainY == 0] *= sum(trainY) / sum(1 - trainY)\n\nlen(train), len(good_events)\n\nnew_features_f, new_formula_mx_f, new_classifier_f = select_trees(trainX, trainY, sample_weight=trainW,\n initial_classifier=forest_mn,\n iterations=100, n_candidates=100, \n learning_rate=0.1, regularization=50.)\n\nprunned_f = 
cPickle.loads(cPickle.dumps(forest_mn))\nprunned_f.formula_mx = new_formula_mx_f\n\nestimators = {'base MN': ef_base, 'BBDT MN-6': ef_base_bbdt6, 'BBDT MN-5': ef_base_bbdt5,\n 'BBDT MN special': ef_base_bbdt,\n 'Prunned MN': prunned, 'base MN + forest': forest_mn,\n 'BBDT MN special + forest': forest_mn_bbdt, 'Prunned MN + forest': prunned_f}\n\nimport cPickle\nwith open('models/prunned.pkl', 'w') as file_pr:\n cPickle.dump(estimators, file_pr)",
"Calculate thresholds on classifiers",
"thresholds = dict()\ntest_bck = test[test['signal'] == 0]\nRATE = [2500., 4000.]\nevents_pass = dict()\nfor name, cl in estimators.items():\n prob = cl.predict_proba(test_bck)\n thr, result = calculate_thresholds(test_bck, prob, total_test_bck_events, rates=RATE)\n for rate, val in result.items():\n events_pass['{}-{}'.format(rate, name)] = val[1]\n thresholds[name] = thr\n print name, result",
"Final efficiencies for each mode",
"train_modes_eff, statistic = result_statistic(estimators, sig_train_modes_names, \n test[test['signal'] == 1],\n thresholds, RATE, total_test_signal_events)\n\nfrom rep.plotting import BarComparePlot\nxticks_labels = ['$B^0 \\\\to K^*\\mu^+\\mu^-$', \"$B^0 \\\\to D^+D^-$\", \"$B^0 \\\\to D^- \\mu^+ \\\\nu_{\\mu}$\", \n '$B^+ \\\\to \\pi^+ K^-K^+$', '$B^0_s \\\\to \\psi(1S) K^+K^-\\pi^+\\pi^-$', '$B^0_s \\\\to D_s^-\\pi^+$']\nfor r in RATE:\n new_dict = [] \n for key, val in train_modes_eff.iteritems():\n if (key[0] in {'base MN', 'Prunned MN', 'BBDT MN special', \n 'base MN + forest', 'Prunned MN + forest', 'BBDT MN special + forest'}) and r == key[1]:\n new_dict.append((key, val))\n new_dict = dict(new_dict) \n BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)\n xticks(3 + 11 * numpy.arange(6), xticks_labels, rotation=0)\n lgd = legend(bbox_to_anchor=(0.5, 1.3), loc='upper center', ncol=2, fontsize=22)\n# plt.savefig('hlt2-experiments.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')\n\nfrom rep.plotting import BarComparePlot\nfor r in RATE:\n new_dict = [] \n for key, val in train_modes_eff.iteritems():\n if r == key[1]:\n new_dict.append((key, val))\n new_dict = dict(new_dict) \n BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)\n lgd = legend(bbox_to_anchor=(0.5, 1.3), loc='upper center', ncol=2, fontsize=22)\n# plt.savefig('hlt2-experiments.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')",
"Classification report using events",
"plots = OrderedDict()\nfor key, value in estimators.items():\n plots[key] = plot_roc_events(value, test[test['signal'] == 1], test[test['signal'] == 0], key)\n\nbbdt_plots = plots.copy()\nbbdt_plots.pop('Prunned MN')\nbbdt_plots.pop('Prunned MN + forest')\n\nfrom rep.plotting import FunctionsPlot\nFunctionsPlot(bbdt_plots).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.65, 0.82))\nplot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'b--', label='rate: 2.5 kHz')\nplot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'g--', label='rate: 4. kHz')\nlgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.3), ncol=3)\ntitle('ROC for events (training decays)', fontsize=20)\nxlabel('FRP, background events efficiency', fontsize=20)\nylabel('TPR, signal events efficiency', fontsize=20)\n\nfrom rep.plotting import FunctionsPlot\nFunctionsPlot(plots).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.65, 0.82))\nplot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'b--', label='rate: 2.5 kHz')\nplot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'g--', label='rate: 4. kHz')\nlgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.4), ncol=3)\ntitle('ROC for events (training decays)', fontsize=20)\nxlabel('FRP, background events efficiency', fontsize=20)\nylabel('TPR, signal events efficiency', fontsize=20)",
"all channels efficiencies",
"from collections import defaultdict\nall_channels = []\nefficiencies = defaultdict(OrderedDict)\nfor mode in empty_events.keys():\n if mode in set(sig_train_modes_names) or mode == bck_train_mode_name:\n continue\n df = pandas.read_csv(os.path.join(folder , 'mod_{}.csv'.format(mode)), sep='\\t')\n if len(df) <= 0:\n continue\n total_events = statistic_length(df)['Events'] + empty_events[mode]\n df = df[(df['pass_nbody'] == 1) & (df['mcor'] <= 10e3)]\n passed_events = statistic_length(df)['Events']\n all_channels.append(df)\n for name, cl in estimators.items():\n prob = cl.predict_proba(df)\n for rate, thresh in thresholds[name].items():\n eff = final_eff_for_mode(df, prob, total_events, thresh)\n latex_name = '$' + Samples[str(mode)]['root'].replace(\"#\", \"\\\\\") + '$'\n efficiencies[(name, rate)][latex_name] = eff\n\nfor key, val in efficiencies.items():\n for key_2, val_2 in val.items():\n if val_2 <= 0.1:\n efficiencies[key].pop(key_2)\n\nfrom rep.plotting import BarComparePlot\nfor r in RATE:\n new_dict = [] \n for key, val in efficiencies.iteritems():\n if r == key[1]:\n new_dict.append((key, val))\n new_dict = dict(new_dict) \n BarComparePlot(new_dict).plot(new_plot=True, figsize=(24, 8), ylabel='efficiency', fontsize=22)\n lgd = legend(bbox_to_anchor=(0.5, 1.4), loc='upper center', ncol=2, fontsize=22)\n\nplots_all = OrderedDict()\nfor key, value in estimators.items():\n plots_all[key] = plot_roc_events(value, pandas.concat([test[test['signal'] == 1]] + all_channels), \n test[test['signal'] == 0], key)\n\nfrom rep.plotting import FunctionsPlot\nFunctionsPlot(plots_all).plot(new_plot=True, xlim=(0.02, 0.06), ylim=(0.5, 0.66))\nplot([1. * events_pass['2500.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'b--', label='rate: 2.5 kHz')\nplot([1. * events_pass['4000.0-base MN'] / statistic_length(ds_test_bck)['Events']] * 2, \n [0., 1], 'g--', label='rate: 4. kHz')\nlgd = legend(loc='upper center', fontsize=16, bbox_to_anchor=(0.5, 1.3), ncol=4)\ntitle('ROC for events (all decays together)', fontsize=20)\nxlabel('FRP, background events efficiency', fontsize=20)\nylabel('TPR, signal events efficiency', fontsize=20)",
"DIfferent rates",
"thresholds = OrderedDict()\nRATE = [2000., 2500., 3000., 3500., 4000.]\nfor name, cl in estimators.items():\n prob = cl.predict_proba(ds_test_bck)\n thr, result = calculate_thresholds(ds_test_bck, prob, total_test_bck_events, rates=RATE)\n thresholds[name] = thr\n print name, result\n\ntrain_modes_eff, statistic = result_statistic({'base MN': estimators['base MN']}, sig_train_modes_names, \n test[test['signal'] == 1],\n thresholds, RATE, total_test_signal_events)\n\norder_rate = OrderedDict()\nfor j in numpy.argsort([i[1] for i in train_modes_eff.keys()]):\n order_rate[train_modes_eff.keys()[j]] = train_modes_eff.values()[j]\n\nfrom rep.plotting import BarComparePlot\nBarComparePlot(order_rate).plot(new_plot=True, figsize=(18, 6), ylabel='efficiency', fontsize=18)\nlgd = legend(bbox_to_anchor=(0.5, 1.2), loc='upper center', ncol=5, fontsize=18)\n# plt.savefig('rates.pdf' , format='pdf', bbox_extra_artists=(lgd,), bbox_inches='tight')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
dchandan/rebound
|
ipython_examples/FourierSpectrum.ipynb
|
gpl-3.0
|
[
"Fourier analysis & resonances\nA great benefit of being able to call rebound from within python is the ability to directly apply sophisticated analysis tools from scipy and other python libraries. Here we will do a simple Fourier analysis of a reduced Solar System consisting of Jupiter and Saturn. Let's begin by setting our units and adding these planets using JPL's horizons database:",
"import rebound\nimport numpy as np\nsim = rebound.Simulation()\nsim.units = ('AU', 'yr', 'Msun')\nsim.add(\"Sun\")\nsim.add(\"Jupiter\")\nsim.add(\"Saturn\")",
"Now let's set the integrator to whfast, and sacrificing accuracy for speed, set the timestep for the integration to about $10\\%$ of Jupiter's orbital period.",
"sim.integrator = \"whfast\"\nsim.dt = 1. # in years. About 10% of Jupiter's period\nsim.move_to_com()",
"The last line (moving to the center of mass frame) is important to take out the linear drift in positions due to the constant COM motion. Without it we would erase some of the signal at low frequencies.\nNow let's run the integration, storing time series for the two planets' eccentricities (for plotting) and x-positions (for the Fourier analysis). Additionally, we store the mean longitudes and pericenter longitudes (varpi) for reasons that will become clear below. Having some idea of what the secular timescales are in the Solar System, we'll run the integration for $3\\times 10^5$ yrs. We choose to collect $10^5$ outputs in order to resolve the planets' orbital periods ($\\sim 10$ yrs) in the Fourier spectrum.",
"Nout = 100000\ntmax = 3.e5\nNplanets = 2\n\nx = np.zeros((Nplanets,Nout))\necc = np.zeros((Nplanets,Nout))\nlongitude = np.zeros((Nplanets,Nout))\nvarpi = np.zeros((Nplanets,Nout))\n\ntimes = np.linspace(0.,tmax,Nout)\nps = sim.particles\n\nfor i,time in enumerate(times):\n sim.integrate(time)\n os = sim.calculate_orbits()\n for j in range(Nplanets):\n x[j][i] = ps[j+1].x # we use the 0 index in x for Jup and 1 for Sat, but the indices for ps start with the Sun at 0\n ecc[j][i] = os[j].e\n longitude[j][i] = os[j].l\n varpi[j][i] = os[j].Omega + os[j].omega",
"Let's see what the eccentricity evolution looks like with matplotlib:",
"%matplotlib inline\nlabels = [\"Jupiter\", \"Saturn\"]\nimport matplotlib.pyplot as plt\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nplt.plot(times,ecc[0],label=labels[0])\nplt.plot(times,ecc[1],label=labels[1])\nax.set_xlabel(\"Time (yrs)\", fontsize=20)\nax.set_ylabel(\"Eccentricity\", fontsize=20)\nax.tick_params(labelsize=20)\nplt.legend();",
"Now let's try to analyze the periodicities in this signal. Here we have a uniformly spaced time series, so we could run a Fast Fourier Transform, but as an example of the wider array of tools available through scipy, let's run a Lomb-Scargle periodogram (which allows for non-uniform time series). This could also be used when storing outputs at each timestep using the integrator IAS15 (which uses adaptive and therefore nonuniform timesteps).\nLet's check for periodicities with periods logarithmically spaced between 10 and $10^5$ yrs. From the documentation, we find that the lombscargle function requires a list of corresponding angular frequencies (ws), and we obtain the appropriate normalization for the plot. To avoid conversions to orbital elements, we analyze the time series of Jupiter's x-position.",
"from scipy import signal\nNpts = 3000\nlogPmin = np.log10(10.)\nlogPmax = np.log10(1.e5)\nPs = np.logspace(logPmin,logPmax,Npts)\nws = np.asarray([2*np.pi/P for P in Ps])\n\nperiodogram = signal.lombscargle(times,x[0],ws)\n\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(Ps,np.sqrt(4*periodogram/Nout))\nax.set_xscale('log')\nax.set_xlim([10**logPmin,10**logPmax])\nax.set_ylim([0,0.15])\nax.set_xlabel(\"Period (yrs)\", fontsize=20)\nax.set_ylabel(\"Power\", fontsize=20)\nax.tick_params(labelsize=20)",
"We pick out the obvious signal in the eccentricity plot with a period of $\\approx 45000$ yrs, which is due to secular interactions between the two planets. There is quite a bit of power aliased into neighboring frequencies due to the short integration duration, with contributions from the second secular timescale, which is out at $\\sim 2\\times10^5$ yrs and causes a slower, low-amplitude modulation of the eccentricity signal plotted above (we limited the time of integration so that the example runs in a few seconds). \nAdditionally, though it was invisible on the scale of the eccentricity plot above, we clearly see a strong signal at Jupiter's orbital period of about 12 years. \nBut wait! Even on this scale set by the dominant frequencies of the problem, we see an additional blip just below $10^3$ yrs. Such a periodicity is actually visible in the above eccentricity plot if you inspect the thickness of the lines. Let's investigate by narrowing the period range:",
"fig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(Ps,np.sqrt(4*periodogram/Nout))\nax.set_xscale('log')\nax.set_xlim([600,1600])\nax.set_ylim([0,0.003])\nax.set_xlabel(\"Period (yrs)\", fontsize=20)\nax.set_ylabel(\"Power\", fontsize=20)\nax.tick_params(labelsize=20)",
"This is the right timescale to be due to resonant perturbations between giant planets ($\\sim 100$ orbits). In fact, Jupiter and Saturn are close to a 5:2 mean-motion resonance. This is the famous great inequality that Laplace showed was responsible for slight offsets in the predicted positions of the two giant planets. Let's check whether this is in fact responsible for the peak. \nIn this case, we have that the mean longitude of Jupiter $\\lambda_J$ cycles approximately 5 times for every 2 of Saturn's ($\\lambda_S$). The game is to construct a slowly-varying resonant angle, which here could be $\\phi_{5:2} = 5\\lambda_S - 2\\lambda_J - 3\\varpi_J$, where $\\varpi_J$ is Jupiter's longitude of pericenter. This last term is a much smaller contribution to the variation of $\\phi_{5:2}$ than the first two, but ensures that the coefficients in the resonant angle sum to zero and therefore that the physics do not depend on your choice of coordinates.\nTo see a clear trend, we have to shift each value of $\\phi_{5:2}$ into the range $[0,360]$ degrees, so we define a small helper function that does the wrapping and conversion to degrees:",
"def zeroTo360(val):\n while val < 0:\n val += 2*np.pi\n while val > 2*np.pi:\n val -= 2*np.pi\n return val*180/np.pi",
"Now we construct $\\phi_{5:2}$ and plot it over the first 5000 yrs.",
"phi = [zeroTo360(5.*longitude[1][i] - 2.*longitude[0][i] - 3.*varpi[0][i]) for i in range(Nout)]\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(times,phi)\nax.set_xlim([0,5.e3])\nax.set_ylim([0,360.])\nax.set_xlabel(\"time (yrs)\", fontsize=20)\nax.set_ylabel(r\"$\\phi_{5:2}$\", fontsize=20)\nax.tick_params(labelsize=20)",
"We see that the resonant angle $\\phi_{5:2}$ circulates, but with a long period of $\\approx 900$ yrs (compared to the orbital periods of $\\sim 10$ yrs), which precisely matches the blip we saw in the Lomb-Scargle periodogram. This is approximately the same oscillation period observed in the Solar System, despite our simplified setup!\nThis resonant angle is able to have a visible effect because its (small) effects build up coherently over many orbits. As a further illustration, other resonance angles like those at the 2:1 will circulate much faster (because Jupiter and Saturn's period ratio is not close to 2). We can easily plot this. Taking one of the 2:1 resonance angles $\\phi_{2:1} = 2\\lambda_S - \\lambda_J - \\varpi_J$,",
"phi2 = [zeroTo360(2*longitude[1][i] - longitude[0][i] - varpi[0][i]) for i in range(Nout)]\nfig = plt.figure(figsize=(12,5))\nax = plt.subplot(111)\nax.plot(times,phi2)\nax.set_xlim([0,5.e3])\nax.set_ylim([0,360.])\nax.set_xlabel(\"time (yrs)\", fontsize=20)\nax.set_ylabel(r\"$\\phi_{2:1}$\", fontsize=20)\nax.tick_params(labelsize=20)",
"In this case, since we are far from this particular resonance (the 2:1), the corresponding resonance angles vary on fast (orbital) timescales, and their effects simply average out."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
brettavedisian/phys202-2015-work
|
assignments/assignment05/InteractEx03.ipynb
|
mit
|
[
"Interact Exercise 3\nImports",
"%matplotlib inline\nfrom matplotlib import pyplot as plt\nimport numpy as np\n\nfrom IPython.html.widgets import interact, interactive, fixed\nfrom IPython.display import display",
"Using interact for animation with data\nA soliton is a constant velocity wave that maintains its shape as it propagates. They arise from non-linear wave equations, such has the Korteweg–de Vries equation, which has the following analytical solution:\n$$\n\\phi(x,t) = \\frac{1}{2} c \\mathrm{sech}^2 \\left[ \\frac{\\sqrt{c}}{2} \\left(x - ct - a \\right) \\right]\n$$\nThe constant c is the velocity and the constant a is the initial location of the soliton.\nDefine soliton(x, t, c, a) function that computes the value of the soliton wave for the given arguments. Your function should work when the postion x or t are NumPy arrays, in which case it should return a NumPy array itself.",
"def soliton(x, t, c, a):\n \"\"\"Return phi(x, t) for a soliton wave with constants c and a.\"\"\"\n return (0.5)*c*((1/np.cosh(((np.sqrt(c)*0.5)*(x-c*t-a)))**2))\n\nassert np.allclose(soliton(np.array([0]),0.0,1.0,0.0), np.array([0.5]))",
"To create an animation of a soliton propagating in time, we are going to precompute the soliton data and store it in a 2d array. To set this up, we create the following variables and arrays:",
"tmin = 0.0\ntmax = 10.0\ntpoints = 100\nt = np.linspace(tmin, tmax, tpoints)\n\nxmin = 0.0\nxmax = 10.0\nxpoints = 200\nx = np.linspace(xmin, xmax, xpoints)\n\nc = 1.0\na = 0.0",
"Compute a 2d NumPy array called phi:\n\nIt should have a dtype of float.\nIt should have a shape of (xpoints, tpoints).\nphi[i,j] should contain the value $\\phi(x[i],t[j])$.",
"phi=np.zeros((xpoints,tpoints)) #worked with Hunter Thomas\nfor i in x:\n for j in t:\n phi[i,j]=soliton(x[i],t[j],c,a)\n\nassert phi.shape==(xpoints, tpoints)\nassert phi.ndim==2\nassert phi.dtype==np.dtype(float)\nassert phi[0,0]==soliton(x[0],t[0],c,a)",
"Write a plot_soliton_data(i) function that plots the soliton wave $\\phi(x, t[i])$. Customize your plot to make it effective and beautiful.",
"def plot_soliton_data(i=0):\n \"\"\"Plot the soliton data at t[i] versus x.\"\"\"\n plt.plot(soliton(x,t[i],c,a))\n plt.xlabel('Time')\n plt.ylabel('Phi')\n plt.title('Solition wave vs. Time')\n plt.tick_params(axis='x', top='off', direction='out')\n plt.tick_params(axis='y', right='off', direction='out')\n\nplot_soliton_data(0)\n\nassert True # leave this for grading the plot_soliton_data function",
"Use interact to animate the plot_soliton_data function versus time.",
"interact(plot_soliton_data, i=(0.0,50.0,0.1));\n\nassert True # leave this for grading the interact with plot_soliton_data cell"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/ja/hub/tutorials/tf2_image_retraining.ipynb
|
apache-2.0
|
[
"Copyright 2021 The TensorFlow Hub Authors.\nLicensed under the Apache License, Version 2.0 (the \"License\");",
"# Copyright 2021 The TensorFlow Hub Authors. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n# ==============================================================================",
"画像分類器を再トレーニングする\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td><a target=\"_blank\" href=\"https://www.tensorflow.org/hub/tutorials/tf2_image_retraining\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\"> TensorFlow.orgで表示</a></td>\n <td> <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_image_retraining.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\">Google Colab で実行</a>\n</td>\n <td> <a target=\"_blank\" href=\"https://github.com/tensorflow/docs-l10n/blob/master/site/ja/hub/tutorials/tf2_image_retraining.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\"> GitHub でソースを表示</a> </td>\n <td> <a href=\"https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/ja/hub/tutorials/tf2_image_retraining.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\">ノートブックをダウンロード</a> </td>\n <td> <a href=\"https://tfhub.dev/google/collections/image/1\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\">TF Hub モデルを参照</a> </td>\n</table>\n\nはじめに\n画像分類モデルには数百個のパラメータがあります。モデルをゼロからトレーニングするには、ラベル付きの多数のトレーニングデータと膨大なトレーニング性能が必要となります。転移学習とは、関連するタスクでトレーニングされたモデルの一部を取り出して新しいモデルで再利用することで、学習の大部分を省略するテクニックを指します。\nこの Colab では、より大規模で一般的な ImageNet データセットでトレーニングされた、TensorFlow Hub のトレーニング済み TF2 SavedModel を使用して画像特徴量を抽出することで、5 種類の花を分類する Keras モデルの構築方法を実演します。オプションとして、特徴量抽出器を新たに追加される分類器とともにトレーニング(「ファインチューニング」)することができます。\n代替ツールをお探しですか?\nこれは、TensorFlow のコーディングチュートリアルです。TensorFlow または TF Lite モデルを構築するだけのツールをお探しの方は、PIP パッケージ tensorflow-hub[make_image_classifier] によってインストールされる make_image_classifier コマンドラインツール、またはこちらの TF Lite Colab をご覧ください。\nセットアップ",
"import itertools\nimport os\n\nimport matplotlib.pylab as plt\nimport numpy as np\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\nprint(\"TF version:\", tf.__version__)\nprint(\"Hub version:\", hub.__version__)\nprint(\"GPU is\", \"available\" if tf.config.list_physical_devices('GPU') else \"NOT AVAILABLE\")",
"使用する TF2 SavedModel モジュールを選択する\n手始めに、https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4 を使用します。同じ URL を、SavedModel を識別するコードに使用できます。またブラウザで使用すれば、そのドキュメントを表示することができます。(ここでは TF1 Hub 形式のモデルは機能しないことに注意してください。)\n画像特徴量ベクトルを生成するその他の TF2 モデルは、こちらをご覧ください。\n試すことのできるモデルはたくさんあります。下のセルから別のモデルを選択し、ノートブックの指示に従ってください。",
"model_name = \"efficientnetv2-xl-21k\" # @param ['efficientnetv2-s', 'efficientnetv2-m', 'efficientnetv2-l', 'efficientnetv2-s-21k', 'efficientnetv2-m-21k', 'efficientnetv2-l-21k', 'efficientnetv2-xl-21k', 'efficientnetv2-b0-21k', 'efficientnetv2-b1-21k', 'efficientnetv2-b2-21k', 'efficientnetv2-b3-21k', 'efficientnetv2-s-21k-ft1k', 'efficientnetv2-m-21k-ft1k', 'efficientnetv2-l-21k-ft1k', 'efficientnetv2-xl-21k-ft1k', 'efficientnetv2-b0-21k-ft1k', 'efficientnetv2-b1-21k-ft1k', 'efficientnetv2-b2-21k-ft1k', 'efficientnetv2-b3-21k-ft1k', 'efficientnetv2-b0', 'efficientnetv2-b1', 'efficientnetv2-b2', 'efficientnetv2-b3', 'efficientnet_b0', 'efficientnet_b1', 'efficientnet_b2', 'efficientnet_b3', 'efficientnet_b4', 'efficientnet_b5', 'efficientnet_b6', 'efficientnet_b7', 'bit_s-r50x1', 'inception_v3', 'inception_resnet_v2', 'resnet_v1_50', 'resnet_v1_101', 'resnet_v1_152', 'resnet_v2_50', 'resnet_v2_101', 'resnet_v2_152', 'nasnet_large', 'nasnet_mobile', 'pnasnet_large', 'mobilenet_v2_100_224', 'mobilenet_v2_130_224', 'mobilenet_v2_140_224', 'mobilenet_v3_small_100_224', 'mobilenet_v3_small_075_224', 'mobilenet_v3_large_100_224', 'mobilenet_v3_large_075_224']\n\nmodel_handle_map = {\n \"efficientnetv2-s\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_s/feature_vector/2\",\n \"efficientnetv2-m\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_m/feature_vector/2\",\n \"efficientnetv2-l\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_l/feature_vector/2\",\n \"efficientnetv2-s-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_s/feature_vector/2\",\n \"efficientnetv2-m-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_m/feature_vector/2\",\n \"efficientnetv2-l-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_l/feature_vector/2\",\n \"efficientnetv2-xl-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_xl/feature_vector/2\",\n \"efficientnetv2-b0-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b0/feature_vector/2\",\n \"efficientnetv2-b1-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b1/feature_vector/2\",\n \"efficientnetv2-b2-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b2/feature_vector/2\",\n \"efficientnetv2-b3-21k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_b3/feature_vector/2\",\n \"efficientnetv2-s-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_s/feature_vector/2\",\n \"efficientnetv2-m-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_m/feature_vector/2\",\n \"efficientnetv2-l-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_l/feature_vector/2\",\n \"efficientnetv2-xl-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_xl/feature_vector/2\",\n \"efficientnetv2-b0-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b0/feature_vector/2\",\n \"efficientnetv2-b1-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b1/feature_vector/2\",\n \"efficientnetv2-b2-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b2/feature_vector/2\",\n \"efficientnetv2-b3-21k-ft1k\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet21k_ft1k_b3/feature_vector/2\",\n \"efficientnetv2-b0\": 
\"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b0/feature_vector/2\",\n \"efficientnetv2-b1\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b1/feature_vector/2\",\n \"efficientnetv2-b2\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b2/feature_vector/2\",\n \"efficientnetv2-b3\": \"https://tfhub.dev/google/imagenet/efficientnet_v2_imagenet1k_b3/feature_vector/2\",\n \"efficientnet_b0\": \"https://tfhub.dev/tensorflow/efficientnet/b0/feature-vector/1\",\n \"efficientnet_b1\": \"https://tfhub.dev/tensorflow/efficientnet/b1/feature-vector/1\",\n \"efficientnet_b2\": \"https://tfhub.dev/tensorflow/efficientnet/b2/feature-vector/1\",\n \"efficientnet_b3\": \"https://tfhub.dev/tensorflow/efficientnet/b3/feature-vector/1\",\n \"efficientnet_b4\": \"https://tfhub.dev/tensorflow/efficientnet/b4/feature-vector/1\",\n \"efficientnet_b5\": \"https://tfhub.dev/tensorflow/efficientnet/b5/feature-vector/1\",\n \"efficientnet_b6\": \"https://tfhub.dev/tensorflow/efficientnet/b6/feature-vector/1\",\n \"efficientnet_b7\": \"https://tfhub.dev/tensorflow/efficientnet/b7/feature-vector/1\",\n \"bit_s-r50x1\": \"https://tfhub.dev/google/bit/s-r50x1/1\",\n \"inception_v3\": \"https://tfhub.dev/google/imagenet/inception_v3/feature-vector/4\",\n \"inception_resnet_v2\": \"https://tfhub.dev/google/imagenet/inception_resnet_v2/feature-vector/4\",\n \"resnet_v1_50\": \"https://tfhub.dev/google/imagenet/resnet_v1_50/feature-vector/4\",\n \"resnet_v1_101\": \"https://tfhub.dev/google/imagenet/resnet_v1_101/feature-vector/4\",\n \"resnet_v1_152\": \"https://tfhub.dev/google/imagenet/resnet_v1_152/feature-vector/4\",\n \"resnet_v2_50\": \"https://tfhub.dev/google/imagenet/resnet_v2_50/feature-vector/4\",\n \"resnet_v2_101\": \"https://tfhub.dev/google/imagenet/resnet_v2_101/feature-vector/4\",\n \"resnet_v2_152\": \"https://tfhub.dev/google/imagenet/resnet_v2_152/feature-vector/4\",\n \"nasnet_large\": \"https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/4\",\n \"nasnet_mobile\": \"https://tfhub.dev/google/imagenet/nasnet_mobile/feature_vector/4\",\n \"pnasnet_large\": \"https://tfhub.dev/google/imagenet/pnasnet_large/feature_vector/4\",\n \"mobilenet_v2_100_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/4\",\n \"mobilenet_v2_130_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v2_130_224/feature_vector/4\",\n \"mobilenet_v2_140_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v2_140_224/feature_vector/4\",\n \"mobilenet_v3_small_100_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v3_small_100_224/feature_vector/5\",\n \"mobilenet_v3_small_075_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v3_small_075_224/feature_vector/5\",\n \"mobilenet_v3_large_100_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v3_large_100_224/feature_vector/5\",\n \"mobilenet_v3_large_075_224\": \"https://tfhub.dev/google/imagenet/mobilenet_v3_large_075_224/feature_vector/5\",\n}\n\nmodel_image_size_map = {\n \"efficientnetv2-s\": 384,\n \"efficientnetv2-m\": 480,\n \"efficientnetv2-l\": 480,\n \"efficientnetv2-b0\": 224,\n \"efficientnetv2-b1\": 240,\n \"efficientnetv2-b2\": 260,\n \"efficientnetv2-b3\": 300,\n \"efficientnetv2-s-21k\": 384,\n \"efficientnetv2-m-21k\": 480,\n \"efficientnetv2-l-21k\": 480,\n \"efficientnetv2-xl-21k\": 512,\n \"efficientnetv2-b0-21k\": 224,\n \"efficientnetv2-b1-21k\": 240,\n \"efficientnetv2-b2-21k\": 260,\n \"efficientnetv2-b3-21k\": 300,\n \"efficientnetv2-s-21k-ft1k\": 384,\n 
\"efficientnetv2-m-21k-ft1k\": 480,\n \"efficientnetv2-l-21k-ft1k\": 480,\n \"efficientnetv2-xl-21k-ft1k\": 512,\n \"efficientnetv2-b0-21k-ft1k\": 224,\n \"efficientnetv2-b1-21k-ft1k\": 240,\n \"efficientnetv2-b2-21k-ft1k\": 260,\n \"efficientnetv2-b3-21k-ft1k\": 300, \n \"efficientnet_b0\": 224,\n \"efficientnet_b1\": 240,\n \"efficientnet_b2\": 260,\n \"efficientnet_b3\": 300,\n \"efficientnet_b4\": 380,\n \"efficientnet_b5\": 456,\n \"efficientnet_b6\": 528,\n \"efficientnet_b7\": 600,\n \"inception_v3\": 299,\n \"inception_resnet_v2\": 299,\n \"nasnet_large\": 331,\n \"pnasnet_large\": 331,\n}\n\nmodel_handle = model_handle_map.get(model_name)\npixels = model_image_size_map.get(model_name, 224)\n\nprint(f\"Selected model: {model_name} : {model_handle}\")\n\nIMAGE_SIZE = (pixels, pixels)\nprint(f\"Input size {IMAGE_SIZE}\")\n\nBATCH_SIZE = 16#@param {type:\"integer\"}",
"Flowers データセットをセットアップする\n入力は、選択されたモジュールに合わせてサイズ変更されます。データセットを拡張することで(読み取られるたびに画像をランダムに歪みを加える)、特にファインチューニング時のトレーニングが改善されます。",
"data_dir = tf.keras.utils.get_file(\n 'flower_photos',\n 'https://storage.googleapis.com/download.tensorflow.org/example_images/flower_photos.tgz',\n untar=True)\n\ndef build_dataset(subset):\n return tf.keras.preprocessing.image_dataset_from_directory(\n data_dir,\n validation_split=.20,\n subset=subset,\n label_mode=\"categorical\",\n # Seed needs to provided when using validation_split and shuffle = True.\n # A fixed seed is used so that the validation set is stable across runs.\n seed=123,\n image_size=IMAGE_SIZE,\n batch_size=1)\n\ntrain_ds = build_dataset(\"training\")\nclass_names = tuple(train_ds.class_names)\ntrain_size = train_ds.cardinality().numpy()\ntrain_ds = train_ds.unbatch().batch(BATCH_SIZE)\ntrain_ds = train_ds.repeat()\n\nnormalization_layer = tf.keras.layers.Rescaling(1. / 255)\npreprocessing_model = tf.keras.Sequential([normalization_layer])\ndo_data_augmentation = False #@param {type:\"boolean\"}\nif do_data_augmentation:\n preprocessing_model.add(\n tf.keras.layers.RandomRotation(40))\n preprocessing_model.add(\n tf.keras.layers.RandomTranslation(0, 0.2))\n preprocessing_model.add(\n tf.keras.layers.RandomTranslation(0.2, 0))\n # Like the old tf.keras.preprocessing.image.ImageDataGenerator(),\n # image sizes are fixed when reading, and then a random zoom is applied.\n # If all training inputs are larger than image_size, one could also use\n # RandomCrop with a batch size of 1 and rebatch later.\n preprocessing_model.add(\n tf.keras.layers.RandomZoom(0.2, 0.2))\n preprocessing_model.add(\n tf.keras.layers.RandomFlip(mode=\"horizontal\"))\ntrain_ds = train_ds.map(lambda images, labels:\n (preprocessing_model(images), labels))\n\nval_ds = build_dataset(\"validation\")\nvalid_size = val_ds.cardinality().numpy()\nval_ds = val_ds.unbatch().batch(BATCH_SIZE)\nval_ds = val_ds.map(lambda images, labels:\n (normalization_layer(images), labels))",
"モデルを定義する\nHub モジュールを使用して、線形分類器を feature_extractor_layer の上に配置するだけで定義できます。\n高速化するため、トレーニング不可能な feature_extractor_layer から始めますが、ファインチューニングを実施して精度を高めることもできます。",
"do_fine_tuning = False #@param {type:\"boolean\"}\n\nprint(\"Building model with\", model_handle)\nmodel = tf.keras.Sequential([\n # Explicitly define the input shape so the model can be properly\n # loaded by the TFLiteConverter\n tf.keras.layers.InputLayer(input_shape=IMAGE_SIZE + (3,)),\n hub.KerasLayer(model_handle, trainable=do_fine_tuning),\n tf.keras.layers.Dropout(rate=0.2),\n tf.keras.layers.Dense(len(class_names),\n kernel_regularizer=tf.keras.regularizers.l2(0.0001))\n])\nmodel.build((None,)+IMAGE_SIZE+(3,))\nmodel.summary()",
"モデルをトレーニングする",
"model.compile(\n optimizer=tf.keras.optimizers.SGD(learning_rate=0.005, momentum=0.9), \n loss=tf.keras.losses.CategoricalCrossentropy(from_logits=True, label_smoothing=0.1),\n metrics=['accuracy'])\n\nsteps_per_epoch = train_size // BATCH_SIZE\nvalidation_steps = valid_size // BATCH_SIZE\nhist = model.fit(\n train_ds,\n epochs=5, steps_per_epoch=steps_per_epoch,\n validation_data=val_ds,\n validation_steps=validation_steps).history\n\nplt.figure()\nplt.ylabel(\"Loss (training and validation)\")\nplt.xlabel(\"Training Steps\")\nplt.ylim([0,2])\nplt.plot(hist[\"loss\"])\nplt.plot(hist[\"val_loss\"])\n\nplt.figure()\nplt.ylabel(\"Accuracy (training and validation)\")\nplt.xlabel(\"Training Steps\")\nplt.ylim([0,1])\nplt.plot(hist[\"accuracy\"])\nplt.plot(hist[\"val_accuracy\"])",
"検証データの画像でモデルが機能するか試してみましょう。",
"x, y = next(iter(val_ds))\nimage = x[0, :, :, :]\ntrue_index = np.argmax(y[0])\nplt.imshow(image)\nplt.axis('off')\nplt.show()\n\n# Expand the validation image to (1, 224, 224, 3) before predicting the label\nprediction_scores = model.predict(np.expand_dims(image, axis=0))\npredicted_index = np.argmax(prediction_scores)\nprint(\"True label: \" + class_names[true_index])\nprint(\"Predicted label: \" + class_names[predicted_index])",
"最後に次のようにして、トレーニングされたモデルを、TF Serving または TF Lite(モバイル)用に保存することができます。",
"saved_model_path = f\"/tmp/saved_flowers_model_{model_name}\"\ntf.saved_model.save(model, saved_model_path)",
"オプション: TensorFlow Lite にデプロイする\nTensorFlow Lite では、TensorFlow モデルをモバイルおよび IoT デバイスにデプロイすることができます。以下のコードには、トレーニングされたモデルを TF Lite に変換して、TensorFlow Model Optimization Toolkit のポストトレーニングツールを適用する方法が示されています。最後に、結果の質を調べるために、変換したモデルを TF Lite Interpreter で実行しています。\n\n最適化せずに変換すると、前と同じ結果が得られます(丸め誤差まで)。\nデータなしで最適化して変換すると、モデルの重みを 8 ビットに量子化しますが、それでもニューラルネットワークアクティベーションの推論では浮動小数点数計算が使用されます。これにより、モデルのサイズが約 4 倍に縮小されるため、モバイルデバイスの CPU レイテンシが改善されます。\n最上部の、ニューラルネットワークアクティベーションの計算は、量子化の範囲を調整するために小規模な参照データセットが提供される場合、8 ビット整数に量子化されます。モバイルデバイスでは、これにより推論がさらに高速化されるため、EdgeTPU などのアクセラレータで実行することが可能となります。",
"#@title Optimization settings\noptimize_lite_model = False #@param {type:\"boolean\"}\n#@markdown Setting a value greater than zero enables quantization of neural network activations. A few dozen is already a useful amount.\nnum_calibration_examples = 60 #@param {type:\"slider\", min:0, max:1000, step:1}\nrepresentative_dataset = None\nif optimize_lite_model and num_calibration_examples:\n # Use a bounded number of training examples without labels for calibration.\n # TFLiteConverter expects a list of input tensors, each with batch size 1.\n representative_dataset = lambda: itertools.islice(\n ([image[None, ...]] for batch, _ in train_ds for image in batch),\n num_calibration_examples)\n\nconverter = tf.lite.TFLiteConverter.from_saved_model(saved_model_path)\nif optimize_lite_model:\n converter.optimizations = [tf.lite.Optimize.DEFAULT]\n if representative_dataset: # This is optional, see above.\n converter.representative_dataset = representative_dataset\nlite_model_content = converter.convert()\n\nwith open(f\"/tmp/lite_flowers_model_{model_name}.tflite\", \"wb\") as f:\n f.write(lite_model_content)\nprint(\"Wrote %sTFLite model of %d bytes.\" %\n (\"optimized \" if optimize_lite_model else \"\", len(lite_model_content)))\n\ninterpreter = tf.lite.Interpreter(model_content=lite_model_content)\n# This little helper wraps the TFLite Interpreter as a numpy-to-numpy function.\ndef lite_model(images):\n interpreter.allocate_tensors()\n interpreter.set_tensor(interpreter.get_input_details()[0]['index'], images)\n interpreter.invoke()\n return interpreter.get_tensor(interpreter.get_output_details()[0]['index'])\n\n#@markdown For rapid experimentation, start with a moderate number of examples.\nnum_eval_examples = 50 #@param {type:\"slider\", min:0, max:700}\neval_dataset = ((image, label) # TFLite expects batch size 1.\n for batch in train_ds\n for (image, label) in zip(*batch))\ncount = 0\ncount_lite_tf_agree = 0\ncount_lite_correct = 0\nfor image, label in eval_dataset:\n probs_lite = lite_model(image[None, ...])[0]\n probs_tf = model(image[None, ...]).numpy()[0]\n y_lite = np.argmax(probs_lite)\n y_tf = np.argmax(probs_tf)\n y_true = np.argmax(label)\n count +=1\n if y_lite == y_tf: count_lite_tf_agree += 1\n if y_lite == y_true: count_lite_correct += 1\n if count >= num_eval_examples: break\nprint(\"TFLite model agrees with original model on %d of %d examples (%g%%).\" %\n (count_lite_tf_agree, count, 100.0 * count_lite_tf_agree / count))\nprint(\"TFLite model is accurate on %d of %d examples (%g%%).\" %\n (count_lite_correct, count, 100.0 * count_lite_correct / count))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eecs445-f16/umich-eecs445-f16
|
handsOn_lecture19_baum-welch-pgm-inference/handsOn_lecture19_baum-welch-pgm-inference-template.ipynb
|
mit
|
[
"$$ \\LaTeX \\text{ command declarations here.}\n\\newcommand{\\R}{\\mathbb{R}}\n\\renewcommand{\\vec}[1]{\\mathbf{#1}}\n\\newcommand{\\X}{\\mathcal{X}}\n\\newcommand{\\D}{\\mathcal{D}}\n\\newcommand{\\G}{\\mathcal{G}}\n\\newcommand{\\L}{\\mathcal{L}}\n\\newcommand{\\X}{\\mathcal{X}}\n\\newcommand{\\Parents}{\\mathrm{Parents}}\n\\newcommand{\\NonDesc}{\\mathrm{NonDesc}}\n\\newcommand{\\I}{\\mathcal{I}}\n\\newcommand{\\dsep}{\\text{d-sep}}\n\\newcommand{\\Cat}{\\mathrm{Categorical}}\n\\newcommand{\\Bin}{\\mathrm{Binomial}}\n$$\nHMMs and the Baum-Welch Algorithm\nAs covered in lecture, the Baum-Welch Algorithm is a derivation of the EM algorithm for HMMs where we learn the paramaters A, B and $\\pi$ given a set of observations.\nIn this hands-on exercise we will build upon the forward and backward algorithms from last exercise, which can be used for the E-step, and implement Baum-Welch ourselves!\nLike last time, we'll work with an example where we observe a sequence of words backed by a latent part of speech variable.\n$X$: discrete distribution over bag of words\n$Z$: discrete distribution over parts of speech\n$A$: the probability of a part of speech given a previous part of speech, e.g, what do we expect to see after a noun? \n$B$: the distribution of words given a particular part of speech, e.g, what words are we likely to see if we know it is a verb?\n$x_{i}s$ a sequence of observed words (a sentence). Note: in for both variables we have a special \"end\" outcome that signals the end of a sentence. This makes sense as a part of speech tagger would like to have a sense of sentence boundaries.",
"import numpy as np\nnp.set_printoptions(suppress=True)\n\nparts_of_speech = DETERMINER, NOUN, VERB, END = 0, 1, 2, 3\nwords = THE, DOG, CAT, WALKED, RAN, IN, PARK, END = 0, 1, 2, 3, 4, 5, 6, 7\n\n# transition probabilities\nA = np.array([\n # D N V E\n [0.1, 0.8, 0.1, 0.0], # D: determiner most likely to go to noun\n [0.1, 0.1, 0.6, 0.2], # N: noun most likely to go to verb\n [0.4, 0.3, 0.2, 0.1], # V \n [0.0, 0.0, 0.0, 1.0]]) # E: end always goes to end\n\n# distribution of parts of speech for the first word of a sentence\npi = np.array([0.4, 0.3, 0.3, 0.0])\n\n# emission probabilities\nB = np.array([\n # D N V E\n [ 0.8, 0.1, 0.1, 0. ], # the\n [ 0.1, 0.8, 0.1, 0. ], # dog\n [ 0.1, 0.8, 0.1, 0. ], # cat\n [ 0. , 0. , 1. , 0. ], # walked\n [ 0. , 0.2 , 0.8 , 0. ], # ran\n [ 1. , 0. , 0. , 0. ], # in\n [ 0. , 0.1, 0.9, 0. ], # park\n [ 0. , 0. , 0. , 1. ]]) # end\n\nB = B / np.sum(B, axis=0)\n\n\n# utilties for printing out parameters of HMM\n\nimport pandas as pd\n\npos_labels = [\"D\", \"N\", \"V\", \"E\"]\nword_labels = [\"the\", \"dog\", \"cat\", \"walked\", \"ran\", \"in\", \"park\", \"end\"]\n\ndef print_B(B):\n print(pd.DataFrame(B, columns=pos_labels, index=word_labels))\n \ndef print_A(A):\n print(pd.DataFrame(A, columns=pos_labels, index=pos_labels))\n \nprint_A(A)\nprint_B(B)",
"Review: Forward / Backward\nHere are solutions to last hands-on lecture's coding problems along with example uses with a pre-defined A and B matrices.\n$\\alpha_t(z_t) = B_{z_t,x_t} \\sum_{z_{t-1}} \\alpha_{t-1}(z_{t-1}) A_{z_{t-1}, z_t} $\n$\\beta(z_t) = \\sum_{z_{t+1}} A_{z_t, z_{t+1}} B_{z_{t+1}, x_{t+1}} \\beta_{t+1}(z_{t+1})$",
"def forward(params, observations):\n pi, A, B = params\n N = len(observations)\n S = pi.shape[0]\n \n alpha = np.zeros((N, S))\n \n # base case\n alpha[0, :] = pi * B[observations[0], :]\n \n # recursive case\n for i in range(1, N):\n for s2 in range(S):\n for s1 in range(S):\n alpha[i, s2] += alpha[i-1, s1] * A[s1, s2] * B[observations[i], s2] \n \n return (alpha, np.sum(alpha[N-1,:]))\n\ndef print_forward(params, observations):\n alpha, za = forward(params, observations)\n print(pd.DataFrame(\n alpha, \n columns=pos_labels, \n index=[word_labels[i] for i in observations]))\n\nprint_forward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])\nprint_forward((pi, A, B), [THE, CAT, RAN, IN, THE, PARK, END])\n\ndef backward(params, observations):\n pi, A, B = params\n N = len(observations)\n S = pi.shape[0]\n \n beta = np.zeros((N, S))\n \n # base case\n beta[N-1, :] = 1\n \n # recursive case\n for i in range(N-2, -1, -1):\n for s1 in range(S):\n for s2 in range(S):\n beta[i, s1] += beta[i+1, s2] * A[s1, s2] * B[observations[i+1], s2]\n \n return (beta, np.sum(pi * B[observations[0], :] * beta[0,:]))\n\nbackward((pi, A, B), [THE, DOG, WALKED, IN, THE, PARK, END])",
"Implementing Baum-welch\nWith the forward and backward algorithm implementions ready, let's use them to implement baum-welch, EM for HMMs.\nIn the M step, here's the parameters are updated:\n$ p(z_{t-1}, z_t | \\X, \\theta) = \\frac{\\alpha_{t-1}(z_{t-1}) \\beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\\sum_k \\alpha_t(k)\\beta_t(k)} $\nFirst, let's look at an implementation of this below and see how it works when applied to some training data.",
"# Some utitlities for tracing our implementation below\n\ndef left_pad(i, s):\n return \"\\n\".join([\"{}{}\".format(' '*i, l) for l in s.split(\"\\n\")])\n\ndef pad_print(i, s):\n print(left_pad(i, s))\n \ndef pad_print_args(i, **kwargs):\n pad_print(i, \"\\n\".join([\"{}:\\n{}\".format(k, kwargs[k]) for k in sorted(kwargs.keys())])) \n\n\ndef baum_welch(training, pi, A, B, iterations, trace=False):\n pi, A, B = np.copy(pi), np.copy(A), np.copy(B) # take copies, as we modify them\n S = pi.shape[0]\n\n # iterations of EM\n for it in range(iterations):\n if trace:\n pad_print(0, \"for it={} in range(iterations)\".format(it))\n pad_print_args(2, A=A, B=B, pi=pi, S=S)\n pi1 = np.zeros_like(pi)\n A1 = np.zeros_like(A)\n B1 = np.zeros_like(B)\n\n for observations in training:\n if trace:\n pad_print(2, \"for observations={} in training\".format(observations))\n\n # \n # E-Step: compute forward-backward matrices\n # \n \n alpha, za = forward((pi, A, B), observations)\n beta, zb = backward((pi, A, B), observations)\n if trace:\n pad_print(4, \"\"\"alpha, za = forward((pi, A, B), observations)\\nbeta, zb = backward((pi, A, B), observations)\"\"\")\n pad_print_args(4, alpha=alpha, beta=beta, za=za, zb=zb)\n\n assert abs(za - zb) < 1e-6, \"it's badness 10000 if the marginals don't agree ({} vs {})\".format(za, zb)\n\n #\n # M-step: calculating the frequency of starting state, transitions and (state, obs) pairs\n #\n \n # Update PI: \n pi1 += alpha[0, :] * beta[0, :] / za\n\n if trace:\n pad_print(4, \"pi1 += alpha[0, :] * beta[0, :] / za\")\n pad_print_args(4, pi1=pi1)\n pad_print(4, \"for i in range(0, len(observations)):\")\n \n # Update B (transition) matrix\n for i in range(0, len(observations)):\n # Hint: B1 can be updated similarly to PI for each row 1 \n if trace:\n pad_print_args(4, B1=B1)\n pad_print(4, \"for i in range(1, len(observations)):\")\n \n # Update A (emission) matrix\n for i in range(1, len(observations)):\n if trace: \n pad_print(6, \"for s1 in range(S={})\".format(S))\n for s1 in range(S):\n if trace: pad_print(8, \"for s2 in range(S={})\".format(S))\n for s2 in range(S):\n if trace: pad_print_args(4, A1=A1)\n\n # normalise pi1, A1, B1\n \n return pi, A, B\n",
"Training with examples\nLet's try producing updated parameters to our HMM using a few examples. How did the A and B matrixes get updated with data? Was any confidence gained in the emission probabilities of nouns? Verbs?",
"pi2, A2, B2 = baum_welch([\n [THE, DOG, WALKED, IN, THE, PARK, END, END], # END -> END needs at least one transition example\n [THE, DOG, RAN, IN, THE, PARK, END],\n [THE, CAT, WALKED, IN, THE, PARK, END],\n [THE, DOG, RAN, IN, THE, PARK, END]], pi, A, B, 10, trace=False)\n\nprint(\"original A\")\nprint_A(A)\n\nprint(\"updated A\")\nprint_A(A2)\n\nprint(\"\\noriginal B\")\nprint_B(B)\n\nprint(\"updated B\")\nprint_B(B2)\n\nprint(\"\\nForward probabilities of sample using updated params:\")\nprint_forward((pi2, A2, B2), [THE, DOG, WALKED, IN, THE, PARK, END])",
"Tracing through the implementation\nLet's look at a trace of one iteration. Study the steps carefully and make sure you understand how we are updating the parameters, corresponding to these updates:\n$ p(z_{t-1}, z_t | \\X, \\theta) = \\frac{\\alpha_{t-1}(z_{t-1}) \\beta_t(z_t) A_{z_{t-1}, z_t} B_{z_t, x_t}}{\\sum_k \\alpha_t(k)\\beta_t(k)} $",
"pi3, A3, B3 = baum_welch([\n [THE, DOG, WALKED, IN, THE, PARK, END, END], \n [THE, CAT, RAN, IN, THE, PARK, END, END]], pi, A, B, 1, trace=True)\n\nprint(\"\\n\\n\")\n\nprint_A(A3)\n\nprint_B(B3)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
michaelneuder/image_quality_analysis
|
bin/calculations/human_data/old/saving.ipynb
|
mit
|
[
"saving\n\nchecking out how to save current weight matrices and load them back in.",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"first just checking that the flattening and reshaping works as expected",
"test = np.random.randn(11,11,4,100)\ntest.shape\n\ntest_flat = test.flatten()\ntest_flat.shape\n\nnp.savetxt('test.txt', test_flat)\ntest_back = np.loadtxt('test.txt').reshape((11,11,4,100))\ntest_back.shape\n\nnp.mean(test - test_back) ",
"looks good."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Mynti207/cs207project
|
docs/persistence_demo.ipynb
|
mit
|
[
"import sys # for gioia to load aiohttp\nsys.path.append('/Users/maggiori/anaconda/envs/py35/lib/python3.5/site-packages')\n\n# to import modules locally without having installed the entire package\n# http://stackoverflow.com/questions/714063/importing-modules-from-parent-folder\nimport os, sys, inspect\ncurrentdir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\nparentdir = os.path.dirname(currentdir)\nsys.path.insert(0, parentdir) \n\nimport signal\nimport time\nimport subprocess\nimport numpy as np\nfrom scipy.stats import norm\nimport matplotlib.pyplot as plt\n%matplotlib inline\nimport seaborn as sns\nsns.set_style('white')\nsns.set_context('notebook')",
"Time Series Database\nThis notebook demonstrates the persistent behavior of the database.\nInitialization\n\nClear the file system for demonstration purposes.",
"# database parameters\nts_length = 100\ndata_dir = '../db_files'\ndb_name = 'default'\ndir_path = data_dir + '/' + db_name + '/'\n\n# clear file system for testing\nif not os.path.exists(dir_path):\n os.makedirs(dir_path)\nfilelist = [dir_path + f for f in os.listdir(dir_path)]\nfor f in filelist:\n os.remove(f)",
"Load the database server.",
"# when running from the terminal\n# python go_server_persistent.py --ts_length 100 --db_name 'demo'\n\n# here we load the server as a subprocess for demonstration purposes\nserver = subprocess.Popen(['python', '../go_server_persistent.py',\n '--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])\ntime.sleep(5) # make sure it loads completely",
"Load the database webserver.",
"# when running from the terminal\n# python go_webserver.py\n\n# here we load the server as a subprocess for demonstration purposes\nwebserver = subprocess.Popen(['python', '../go_webserver.py'])\ntime.sleep(5) # make sure it loads completely",
"Import the web interface and initialize it.",
"from webserver import *\n\nweb_interface = WebInterface()",
"Generate Data\nLet's create some dummy data to aid in our demonstration. You will need to import the timeseries package to work with the TimeSeries format.\nNote: the database is persistent, so can store data between sessions, but we will start with an empty database here for demonstration purposes.",
"from timeseries import *\n\ndef tsmaker(m, s, j):\n '''\n Helper function: randomly generates a time series for testing.\n\n Parameters\n ----------\n m : float\n Mean value for generating time series data\n s : float\n Standard deviation value for generating time series data\n j : float\n Quantifies the \"jitter\" to add to the time series data\n\n Returns\n -------\n A time series and associated meta data.\n '''\n\n # generate metadata\n meta = {}\n meta['order'] = int(np.random.choice(\n [-5, -4, -3, -2, -1, 0, 1, 2, 3, 4, 5]))\n meta['blarg'] = int(np.random.choice([1, 2]))\n\n # generate time series data\n t = np.arange(0.0, 1.0, 0.01)\n v = norm.pdf(t, m, s) + j * np.random.randn(ts_length)\n\n # return time series and metadata\n return meta, TimeSeries(t, v)\n\n# generate sample time series\nnum_ts = 50\nmus = np.random.uniform(low=0.0, high=1.0, size=num_ts)\nsigs = np.random.uniform(low=0.05, high=0.4, size=num_ts)\njits = np.random.uniform(low=0.05, high=0.2, size=num_ts)\n\n# initialize dictionaries for time series and their metadata\nprimary_keys = []\ntsdict = {}\nmetadict = {}\n\n# fill dictionaries with randomly generated entries for database\nfor i, m, s, j in zip(range(num_ts), mus, sigs, jits):\n meta, tsrs = tsmaker(m, s, j) # generate data\n pk = \"ts-{}\".format(i) # generate primary key\n primary_keys.append(pk) # keep track of all primary keys\n tsdict[pk] = tsrs # store time series data\n metadict[pk] = meta # store metadata\n \n# to assist with later testing\nts_keys = sorted(tsdict.keys())\n \n# randomly choose time series as vantage points\nnum_vps = 5\nvpkeys = list(np.random.choice(ts_keys, size=num_vps, replace=False))\nvpdist = ['d_vp_{}'.format(i) for i in vpkeys]",
"Insert Data\nLet's start by loading the data into the database, using the REST API web interface.",
"# check that the database is empty\nweb_interface.select()\n\n# add stats trigger\nweb_interface.add_trigger('stats', 'insert_ts', ['mean', 'std'], None)\n\n# insert the time series\nfor k in tsdict:\n web_interface.insert_ts(k, tsdict[k])\n\n# upsert the metadata\nfor k in tsdict:\n web_interface.upsert_meta(k, metadict[k])\n\n# add the vantage points\nfor i in range(num_vps):\n web_interface.insert_vp(vpkeys[i])",
"Inspect Data\nLet's inspect the data, to make sure that all the previous operations were successful.",
"# select all database entries; all metadata fields\nresults = web_interface.select(fields=[])\n\n# we have the right number of database entries\nassert len(results) == num_ts\n\n# we have all the right primary keys\nassert sorted(results.keys()) == ts_keys\n\n# check that all the time series and metadata matches\nfor k in tsdict:\n results = web_interface.select(fields=['ts'], md={'pk': k})\n assert results[k]['ts'] == tsdict[k]\n results = web_interface.select(fields=[], md={'pk': k})\n for field in metadict[k]:\n assert metadict[k][field] == results[k][field]\n\n# check that the vantage points match\nprint('Vantage points selected:', vpkeys)\nprint('Vantage points in database:',\n web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys())\n\n# check that the vantage point distance fields have been created\nprint('Vantage point distance fields:', vpdist)\nweb_interface.select(fields=vpdist, additional={'sort_by': '+pk', 'limit': 1})\n\n# check that the trigger has executed as expected (allowing for rounding errors)\nfor k in tsdict:\n results = web_interface.select(fields=['mean', 'std'], md={'pk': k})\n assert np.round(results[k]['mean'], 4) == np.round(tsdict[k].mean(), 4)\n assert np.round(results[k]['std'], 4) == np.round(tsdict[k].std(), 4)",
"Let's generate an additional time series for similarity searches. We'll store the time series and the results of the similarity searches, so that we can compare against them after reloading the database.",
"_, query = tsmaker(np.random.uniform(low=0.0, high=1.0),\n np.random.uniform(low=0.05, high=0.4),\n np.random.uniform(low=0.05, high=0.2))\n\nresults_vp = web_interface.vp_similarity_search(query, 1)\nresults_vp\n\nresults_isax = web_interface.isax_similarity_search(query)\nresults_isax",
"Finally, let's store our iSAX tree representation.",
"results_tree = web_interface.isax_tree()\nprint(results_tree)",
"Terminate and Reload Database\nNow that we know that everything is loaded, let's close the database and re-open it.",
"os.kill(server.pid, signal.SIGINT)\ntime.sleep(5) # give it time to terminate\nos.kill(webserver.pid, signal.SIGINT)\ntime.sleep(5) # give it time to terminate\nweb_interface = None\n\nserver = subprocess.Popen(['python', '../go_server_persistent.py',\n '--ts_length', str(ts_length), '--data_dir', data_dir, '--db_name', db_name])\ntime.sleep(5) # give it time to load fully\nwebserver = subprocess.Popen(['python', '../go_webserver.py'])\ntime.sleep(5) # give it time to load fully\nweb_interface = WebInterface()",
"Inspect Data\nLet's repeat the previous tests to check whether our persistence architecture worked.",
"# select all database entries; all metadata fields\nresults = web_interface.select(fields=[])\n\n# we have the right number of database entries\nassert len(results) == num_ts\n\n# we have all the right primary keys\nassert sorted(results.keys()) == ts_keys\n\n# check that all the time series and metadata matches\nfor k in tsdict:\n results = web_interface.select(fields=['ts'], md={'pk': k})\n assert results[k]['ts'] == tsdict[k]\n results = web_interface.select(fields=[], md={'pk': k})\n for field in metadict[k]:\n assert metadict[k][field] == results[k][field]\n\n# check that the vantage points match\nprint('Vantage points selected:', vpkeys)\nprint('Vantage points in database:',\n web_interface.select(fields=None, md={'vp': True}, additional={'sort_by': '+pk'}).keys())\n\n# check that isax tree has fully reloaded\nprint(web_interface.isax_tree())\n\n# compare vantage point search results\nresults_vp == web_interface.vp_similarity_search(query, 1)\n\n# compare isax search results\nresults_isax == web_interface.isax_similarity_search(query)\n\n# check that the trigger is still there by loading new data\n\n# create test time series\n_, test = tsmaker(np.random.uniform(low=0.0, high=1.0),\n np.random.uniform(low=0.05, high=0.4),\n np.random.uniform(low=0.05, high=0.2))\n\n# insert test time series\nweb_interface.insert_ts('test', test)\n\n# check that mean and standard deviation have been calculated\nprint(web_interface.select(fields=['mean', 'std'], md={'pk': 'test'}))\n\n# remove test time series\nweb_interface.delete_ts('test');",
"We have successfully reloaded all of the database components from disk!",
"# terminate processes before exiting\nos.kill(server.pid, signal.SIGINT)\ntime.sleep(5) # give it time to terminate\nweb_interface = None\nwebserver.terminate()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mathnathan/notebooks
|
Voronoi The Astrocytes.ipynb
|
mit
|
[
"<h1 align=\"center\">\nResearch Proposal <br>\n</h1>\n\n<h5 align=\"center\">\n*Nathan Crock* <br>\nDecember 22$^{nd}$, 2015 <br>\n</h5>\n\n<p align=\"center\" style=\"padding: 0px 100px 0px 100px;\">\nI am proposing a research direction to address the problem posed by Monica during our recent discussions. Here is the problem statement as I understand it so far. Given a 3D scan of cortical astrocytes, can we determine the particular orientation of one astrocyte based on the location and orientation of its neighboring astrocytes? I propose we construct a voronoi mesh around the target astrocyte. I will show below, that this voronoi mesh will be a convex hull around the target astrocyte and that this hull will be a discrete, piece-wise linear approximation to the space that is \"optimally far\" from all neighboring astrocytes and \"optimally close\" to the target astrocyte.\n</p>\n\n<h3>Introduction</h3>\n\nDuring recent discussions, Monica shared a hypothesis regarding the formation and relative positioning of neighboring astrocytes. In addition she expressed an interest in devising a quantifiable test to help validate its veracity. I will attempt to summarize the hypothesis in my own words here and then describe an experiment that I believe will accurately test the hypothesis. Astrocytes express some form of chemical or molecular messengers which act as a deterrent to other astrocytes. As a result, astrocytes will try to align themselves maximally far away from one another. To test this hypothesis we will need two things. Firstly we will need a formal definition of what it means to be \"optimally far\" from neighboring astrocytes and \"optimally close\" to the target astrocyte. Let us simply call this region the \"optimal region\". And secondly, we will need a method to test this hypothesis on the datasets provided by Monica and James. I have an idea for both.\n<h3>Methodology</h3>\n\n<h5>Optimal Region</h5>\n\nI propose that the \"optimal region\" or space that the target astrocyte should occupy given its neighbor's positions should be the region enclosed around the target astrocyte by the voronoi mesh constructed using the astrocyte's centroids. Given a set of unordered points, a voronoi mesh is created by connecting the midpoints of all of the lines between one point and its neighbors. I will demonstrate with a 2 dimensional example below.\nFirst we create 5 points in the region $[-5,5]\\times[-5,5]$. These will be the centroids of the neighboring astrocytes.",
"import numpy as np\nxpoints = np.array([-4.,-2.,3.,2.,0.])\nypoints = np.array([0.,3.,4.,-1.,-3.])\n\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.scatter(xpoints, ypoints)\nplt.xlim((-5,5)); plt.ylim((-5,5))",
"Above, we see 5 points. These indicate the centroids of astrocytes. We will use the origin, i.e. $(0,0)$, as the target astrocyte. Thus we will construct the voronoi region around the origin.",
"plt.scatter((0),(0), color=\"red\")\nplt.scatter(xpoints, ypoints)\nfor pt in zip(xpoints,ypoints):\n plt.plot((0,pt[0]),(0,pt[1]),color=\"black\")\nplt.xlim((-5,5)); plt.ylim((-5,5))",
"The target astrocyte is drawn in red. To help us visualize how the voronoi mesh is constructed we drew a line connecting the target astrocyte to each of its neighbors. Next we will mark the midpoint of each line with a green dot. These will form the vertices of the voronoi mesh.",
"plt.scatter((0),(0), color=\"red\")\nplt.scatter(xpoints, ypoints)\nvpts = []\nfor pt in zip(xpoints,ypoints):\n plt.plot((0,pt[0]),(0,pt[1]),color=\"black\")\n plt.scatter((pt[0]/2),(pt[1]/2),color=\"green\")\nplt.xlim((-5,5)); plt.ylim((-5,5))",
"Lastly, by connecting all of these vertices with green lines we will have the voronoi region around the origin marked off by the perimeter of green lines.",
"plt.scatter((0),(0), color=\"red\")\nplt.scatter(xpoints, ypoints)\nfor i,pt in enumerate(zip(xpoints,ypoints)):\n plt.plot((0,pt[0]),(0,pt[1]),color=\"black\")\n plt.scatter((pt[0]/2),(pt[1]/2),color=\"green\")\n plt.plot((pt[0]/2,xpoints[i-1]/2),(pt[1]/2,ypoints[i-1]/2),color=\"green\")\nplt.xlim((-5,5)); plt.ylim((-5,5))",
"In the figure above the red point represents the target astrocyte whose \"optimal region\" we would like to determine. The blue points are neighboring astrocytes. The region enclosed in green is the voronoi region around the target astrocyte which I am proposing we call the \"optimal region\" for the target astrocyte.\n<h5>How to Test the Dataset</h5>\n\nIf we use the above definition as the \"optimal region\" for an astrocyte to occupy then we can test this against the datasets obtained by James and Monica by seeing how much of each astrocyte is within these regions. We can determine this using the following method.\n\n\nUse a density based clustering algorithm on a thresholded version of the dataset to determine which points belong to which astrocyte. The threshold will be set to determine which points are astrocytes and which are background. Gordon has already developed one of these in the past during our earlier research. It is written in Python and is super fast. We can apply this to the 3D dataset and specify that we want $n$ clusters, where $n$ is equal to the number of astrocytes that we have predetermined to be within the data set.\n\n\nOnce we have identified which points in the dataset belong to which astrocyte we next need to determine how many of each of its points are within its \"optimal region\". We can do this by taking each point one at a time and testing whether it is within the boundary circumscribed by the voronoi region. The exact details of implementation can be determined later, but essentially the algorithm will go as follows.\n -For each point belonging to the target astrocyte do the following:\n -Find the 3 closest voronoi vertices.\n -Construct a plane in 3D using the 3 vertices. \n -Evaluate the plane at the point in question. \n -The sign of the result will tell us whether the point is inside or outside of the voronoi region.\n\nFinally we can do some statistics on the results to describe more accurately what is going on. Most simply we would say something like (num pts inside)/(total pts) to get a percentage describing how much the astrocyte is within the \"optimal region\". We could also examine the distance of all the points outside the region to get some sort of error measure. I'm sure there are many other interesting metrics we can devise using the above data.\n\n<h3>Timeline</h3>\n\nI have personally implemented or have access to implementations of all the compuational ideas outlined in the above proposal. Therefore I believe that this would be a straight forward test to implement and run. After looking at a sample dataset provided by Monica and outlining the code myself, I believe the proposed idea could be implemented and run within a month. I welcome any ideas and criticims you may have."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
italoPontes/Machine-learning
|
Tarefas/Regressão-Linear-Simples-do-Zero/Task 01.ipynb
|
lgpl-3.0
|
[
"Atividade de Regressão Linear\nCódigo-fonte disponível em: link",
"%matplotlib notebook\n#!/usr/bin/env python\n# -*- coding: utf-8 -*-\n#Federal University of Campina Grande (UFCG)\n#Author: Ítalo de Pontes Oliveira\n#Adapted from: Siraj Raval\n#Available at: https://github.com/llSourcell/linear_regression_live\n\n#The optimal values of m and b can be actually calculated with way less effort than doing a linear regression. \n#this is just to demonstrate gradient descent\n\n\"\"\"This project will calculate linear regression\n\"\"\"\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nimport numpy\nfrom numpy import *\nimport sys\n\n# y = mx + b\n# m is slope, b is y-intercept\n## Compute the errors for a given line\n# @param b Is the linear coefficient\n# @param m Is the angular coefficient\n# @param x Domain points\n# @param y Domain points\ndef compute_error_for_line_given_points(w0, w1, x, y):\n\ttotalError = sum((y - (w1 * x + w0)) ** 2)\n\ttotalError /= float(len(x))\n\treturn totalError\n\n## Calculate a new linear and angular coefficient step by a learning rate. \n# @param w0_current Current linear coefficient\n# @param w1_current Current linear coefficient\n# @param x Domain points\n# @param y Image points\n# @param learningRate The rate in which the gradient will be changed in one step\ndef step_gradient(w0_current, w1_current, x, y, learningRate):\n\tw0_gradient = 0\n\tw1_gradient = 0\n\tnorma = 0\n\tN = float(len(x))\n\t\n\tw0_gradient = -2 * sum( y - ( w0_current + ( w1_current * x ) ) ) / N\n\tw1_gradient = -2 * sum( ( y - ( w0_current + ( w1_current * x ) ) ) * x ) / N\n\n\tnorma = numpy.linalg.norm(w0_gradient - w1_gradient)\n\t\n\tnew_w0 = w0_current - (learningRate * w0_gradient)\n\tnew_w1 = w1_current - (learningRate * w1_gradient)\n\t\n\treturn [new_w0, new_w1, norma]\n\n## Run the descending gradient\n# @param x Domain points\n# @param y Image points\n# @param starting_w0 Linear coefficient initial\n# @param starting_w1 Angular coefficient initial\n# @param learning_rate The rate in which the gradient will be changed in one step\n# @param num_iterations Interactions number that the slope line will approximate before a stop.\ndef gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, num_iterations):\n\tw0 = starting_w0\n\tw1 = starting_w1\n\trss_by_step = 0\n\trss_total = []\n\tnorma = learning_rate\n\titeration_number = 0\n\t\n\tcondiction = True\n\tif num_iterations < 1:\n\t\tcondiction = False\n \n\twhile (norma > 0.001 and not condiction) or ( iteration_number < num_iterations and condiction):\n\t\trss_by_step = compute_error_for_line_given_points(w0, w1, x, y)\n\t\trss_total.append(rss_by_step)\n\t\tw0, w1, norma = step_gradient(w0, w1, x, y, learning_rate)\n\t\titeration_number += 1\n\t\n\treturn [w0, w1, iteration_number, rss_total]",
"Questões\n1. Rode o mesmo programa nos dados contendo anos de escolaridade (primeira coluna) versus salário (segunda coluna). Baixe os dados aqui. Esse exemplo foi trabalhado em sala de aula em várias ocasiões. Os itens a seguir devem ser respondidos usando esses dados.\nRESOLUÇÃO: Arquivo baixado, encontra-se no diretório atual com o nome \"income.csv\".\n2. Modifique o código original para imprimir o RSS a cada iteração do gradiente descendente.\nRESOLUÇÃO: Foi preferível adicionar uma nova funcionalidade ao código. Ao final da execução é salvo um gráfico com o RSS para todas as iterações.",
"## Show figure \n# @param data Data to show in the graphic.\n# @param xlabel Text to be shown in abscissa axis.\n# @param ylabel Text to be shown in ordinate axis.\ndef show_figure(data, xlabel, ylabel):\n\tplt.plot(data)\n\tplt.xlabel(xlabel)\n\tplt.ylabel(ylabel)",
"3. O que acontece com o RSS ao longo das iterações (aumenta ou diminui) se você usar 1000 iterações e um learning_rate (tamanho do passo do gradiente) de 0.001? Por que você acha que isso acontece?",
"points = genfromtxt(\"income.csv\", delimiter=\",\")\nx = points[:,0] \ny = points[:,1]\n\nstarting_w0 = 0\nstarting_w1 = 0\n\nlearning_rate = 0.001\niterations_number = 50\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\nshow_figure(rss_total, \"Iteraction\", \"RSS\") \nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])\n\nlearning_rate = 0.0001\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\nshow_figure(rss_total, \"Iteraction\", \"RSS\") \nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])",
"Com esse gráfico é possível observar que:\nQuanto maior o Learning Rate, maior o número de iterações necessárias para se atingir um mesmo erro.",
"learning_rate = 0.001\niterations_number = 1000\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])\n\niterations_number = 10000\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])",
"Ao observar os valores de RSS calculados quando o número de iterações aumenta, é possível observar que o RSS obtido diminui cada vez mais.\n4. Teste valores diferentes do número de iterações e learning_rate até que w0 e w1 sejam aproximadamente iguais a -39 e 5 respectivamente. Reporte os valores do número de iterações e learning_rate usados para atingir esses valores.\nForam testados diferentes valores para o número de iterações, e diferentes frações do Learning Rate até que com a seguinte configuração, obteve o valor desejado:",
"learning_rate = 0.0025\niterations_number = 20000\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\n\nprint(\"W0: %.2f\" % w0)\nprint(\"W1: %.2f\" % w1)\nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])",
"5. O algoritmo do vídeo usa o número de iterações como critério de parada. Mude o algoritmo para considerar um critério de tolerância que é comparado ao tamanho do gradiente (como no algoritmo dos slides apresentados em sala).\nA metodologia aplicada foi a seguinte: quando não se fornece o número de iterações por parâmetro, o algoritmo irá iterar até que a norma do gradiente descendente seja igual a 0,001. Ou:\n$\\vert\\vert (W_{0}^{grad}, W_{1}^{grad} ) \\vert\\vert < 0,001 $",
"learning_rate = 0.0025\niterations_number = 0\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\n\nprint(\"W0: %.2f\" % w0)\nprint(\"W1: %.2f\" % w1)\nprint(\"RSS na última iteração: %.2f\" % rss_total[-1])",
"6. Ache um valor de tolerância que se aproxime dos valores dos parâmetros do item 4 acima. Que valor foi esse?\nO valor utilizado, conforme descrito na questão anterior, foi 0,001. Ou seja, quando o tamanho do gradiente for menor que 0,001, então, o algoritmo entenderá que a aproximação convergiu e terminará o processamento.\n7. Implemente a forma fechada (equações normais) de calcular os coeficientes de regressão (vide algoritmo nos slides). Compare o tempo de processamento com o gradiente descendente considerando sua solução do item 6.\nFoi implementada a função considerando a forma fechada. Dessa maneira, foi observado que o tempo de processamento descrito na questão 6 foi, aproximadamente, cinco vezes maior. Mesmo considerando o código implementado já na versão vetorizada.",
"import time\n\nstart_time = time.time()\n[w0, w1, iter_number, rss_total] = gradient_descent_runner(x, y, starting_w0, starting_w1, learning_rate, iterations_number)\ngradient_time = float(time.time()-start_time)\nprint(\"Tempo para calcular os coeficientes pelo gradiente descendente: %.2f s.\" % gradient_time)\n\nstart_time = time.time()\n\n## Compute the W0 and W1 by derivative\n# @param x Domain points\n# @param y Image points\ndef compute_normal_equation(x, y):\n\tx_mean = numpy.mean(x)\n\ty_mean = numpy.mean(y)\n\tw1 = sum((x - x_mean)*(y - y_mean))/sum((x - x_mean)**2)\n\tw0 = y_mean-(w1*x_mean)\n\treturn [w0, w1]\n\nderivative_time = float(time.time()-start_time)\nprint(\"Tempo para calcular os coeficientes de maneira fechada: %.4f s.\" % derivative_time)\n\nratio = float(gradient_time/derivative_time)\nprint(\"Ou seja, calcular os coeficientes por meio da forma fechada é %.0f vezes mais rápido que via gradiente.\" % (ratio))\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
manparvesh/manparvesh.github.io
|
oldsitejekyll/markdown_generator/publications.ipynb
|
mit
|
[
"Publications markdown generator for academicpages\nTakes a TSV of publications with metadata and converts them for use with academicpages.github.io. This is an interactive Jupyter notebook (see more info here). The core python code is also in publications.py. Run either from the markdown_generator folder after replacing publications.tsv with one containing your data.\nTODO: Make this work with BibTex and other databases of citations, rather than Stuart's non-standard TSV format and citation style.\nData format\nThe TSV needs to have the following columns: pub_date, title, venue, excerpt, citation, site_url, and paper_url, with a header at the top. \n\nexcerpt and paper_url can be blank, but the others must have values. \npub_date must be formatted as YYYY-MM-DD.\nurl_slug will be the descriptive part of the .md file and the permalink URL for the page about the paper. The .md file will be YYYY-MM-DD-[url_slug].md and the permalink will be https://[yourdomain]/publications/YYYY-MM-DD-[url_slug]\n\nThis is how the raw file looks (it doesn't look pretty, use a spreadsheet or other program to edit and create).",
"!cat publications.tsv",
"Import pandas\nWe are using the very handy pandas library for dataframes.",
"import pandas as pd",
"Import TSV\nPandas makes this easy with the read_csv function. We are using a TSV, so we specify the separator as a tab, or \\t.\nI found it important to put this data in a tab-separated values format, because there are a lot of commas in this kind of data and comma-separated values can get messed up. However, you can modify the import statement, as pandas also has read_excel(), read_json(), and others.",
"publications = pd.read_csv(\"publications.tsv\", sep=\"\\t\", header=0)\npublications\n",
"Escape special characters\nYAML is very picky about how it takes a valid string, so we are replacing single and double quotes (and ampersands) with their HTML encoded equivilents. This makes them look not so readable in raw format, but they are parsed and rendered nicely.",
"html_escape_table = {\n \"&\": \"&\",\n '\"': \""\",\n \"'\": \"'\"\n }\n\ndef html_escape(text):\n \"\"\"Produce entities within text.\"\"\"\n return \"\".join(html_escape_table.get(c,c) for c in text)",
"Creating the markdown files\nThis is where the heavy lifting is done. This loops through all the rows in the TSV dataframe, then starts to concatentate a big string (md) that contains the markdown for each type. It does the YAML metadata first, then does the description for the individual page.",
"import os\nfor row, item in publications.iterrows():\n \n md_filename = str(item.pub_date) + \"-\" + item.url_slug + \".md\"\n html_filename = str(item.pub_date) + \"-\" + item.url_slug\n year = item.pub_date[:4]\n \n ## YAML variables\n \n md = \"---\\ntitle: \\\"\" + item.title + '\"\\n'\n \n md += \"\"\"collection: publications\"\"\"\n \n md += \"\"\"\\npermalink: /publication/\"\"\" + html_filename\n \n if len(str(item.excerpt)) > 5:\n md += \"\\nexcerpt: '\" + html_escape(item.excerpt) + \"'\"\n \n md += \"\\ndate: \" + str(item.pub_date) \n \n md += \"\\nvenue: '\" + html_escape(item.venue) + \"'\"\n \n if len(str(item.paper_url)) > 5:\n md += \"\\npaperurl: '\" + item.paper_url + \"'\"\n \n md += \"\\ncitation: '\" + html_escape(item.citation) + \"'\"\n \n md += \"\\n---\"\n \n ## Markdown description for individual page\n \n if len(str(item.excerpt)) > 5:\n md += \"\\n\" + html_escape(item.excerpt) + \"\\n\"\n \n if len(str(item.paper_url)) > 5:\n md += \"\\n[Download paper here](\" + item.paper_url + \")\\n\" \n \n md += \"\\nRecommended citation: \" + item.citation\n \n md_filename = os.path.basename(md_filename)\n \n with open(\"../_publications/\" + md_filename, 'w') as f:\n f.write(md)",
"These files are in the publications directory, one directory below where we're working from.",
"!ls ../_publications/\n\n!cat ../_publications/2009-10-01-paper-title-number-1.md"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
EvoML/EvoML
|
EvoML - Example Usage.ipynb
|
gpl-3.0
|
[
"import pandas as pd\n",
"Aim\nMotive of the notebook is to give a brief overview as to how to use the evolutionary sampling powered ensemble models as part of the EvoML research project. \nWill make the notebook more verbose if time permits. Priority will be to showcase the flexible API of the new estimators which encourage research and tinkering. \nContents\n\nSubsampling\nSubspacing\n\n1. Subsampling - Sampling in the example space - rows will be mutated and evolved.",
"from evoml.subsampling import BasicSegmenter_FEMPO, BasicSegmenter_FEGT, BasicSegmenter_FEMPT\n\ndf = pd.read_csv('datasets/ozone.csv')\n\ndf.head(2)\n\nX, y = df.iloc[:,:-1], df['output']\n\nprint(BasicSegmenter_FEGT.__doc__)\n\nfrom sklearn.tree import DecisionTreeRegressor\nclf_dt = DecisionTreeRegressor(max_depth=3)\nclf = BasicSegmenter_FEGT(base_estimator=clf_dt, statistics=True)\n\nclf.fit(X, y)\n\nclf.score(X, y)\n\nEGs = clf.segments_\n\nlen(EGs)\n\nsampled_datasets = [eg.get_data() for eg in EGs]\n\n[sd.shape for sd in sampled_datasets]",
"2. Subspacing - sampling in the domain of features - evolving and mutating columns",
"from evoml.subspacing import FeatureStackerFEGT, FeatureStackerFEMPO\n\nprint(FeatureStackerFEGT.__doc__)\n\nclf = FeatureStackerFEGT(ngen=30)\n\nclf.fit(X, y)\n\nclf.score(X, y)\n\n## Get the Hall of Fame individual\nhof = clf.segment[0]\n\nsampled_datasets = [eg.get_data() for eg in hof]\n\n[data.columns.tolist() for data in sampled_datasets]\n\n## Original X columns\nX.columns"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
jrg365/gpytorch
|
examples/03_Multitask_Exact_GPs/Hadamard_Multitask_GP_Regression.ipynb
|
mit
|
[
"Hadamard Multitask GP Regression\nIntroduction\nThis notebook demonstrates how to perform \"Hadamard\" multitask regression. \nThis differs from the multitask gp regression example notebook in one key way:\n\nHere, we assume that we have observations for one task per input. For each input, we specify the task of the input that we observe. (The kernel that we learn is expressed as a Hadamard product of an input kernel and a task kernel)\nIn the other notebook, we assume that we observe all tasks per input. (The kernel in that notebook is the Kronecker product of an input kernel and a task kernel).\n\nMultitask regression, first introduced in this paper learns similarities in the outputs simultaneously. It's useful when you are performing regression on multiple functions that share the same inputs, especially if they have similarities (such as being sinusodial).\nGiven inputs $x$ and $x'$, and tasks $i$ and $j$, the covariance between two datapoints and two tasks is given by\n$$ k([x, i], [x', j]) = k_\\text{inputs}(x, x') * k_\\text{tasks}(i, j)\n$$\nwhere $k_\\text{inputs}$ is a standard kernel (e.g. RBF) that operates on the inputs.\n$k_\\text{task}$ is a special kernel - the IndexKernel - which is a lookup table containing inter-task covariance.",
"import math\nimport torch\nimport gpytorch\nfrom matplotlib import pyplot as plt\n\n%matplotlib inline\n%load_ext autoreload\n%autoreload 2",
"Set up training data\nIn the next cell, we set up the training data for this example. For each task we'll be using 50 random points on [0,1), which we evaluate the function on and add Gaussian noise to get the training labels. Note that different inputs are used for each task.\nWe'll have two functions - a sine function (y1) and a cosine function (y2).",
"train_x1 = torch.rand(50)\ntrain_x2 = torch.rand(50)\n\ntrain_y1 = torch.sin(train_x1 * (2 * math.pi)) + torch.randn(train_x1.size()) * 0.2\ntrain_y2 = torch.cos(train_x2 * (2 * math.pi)) + torch.randn(train_x2.size()) * 0.2",
"Set up a Hadamard multitask model\nThe model should be somewhat similar to the ExactGP model in the simple regression example.\nThe differences:\n\nThe model takes two input: the inputs (x) and indices. The indices indicate which task the observation is for.\nRather than just using a RBFKernel, we're using that in conjunction with a IndexKernel.\nWe don't use a ScaleKernel, since the IndexKernel will do some scaling for us. (This way we're not overparameterizing the kernel.)",
"class MultitaskGPModel(gpytorch.models.ExactGP):\n def __init__(self, train_x, train_y, likelihood):\n super(MultitaskGPModel, self).__init__(train_x, train_y, likelihood)\n self.mean_module = gpytorch.means.ConstantMean()\n self.covar_module = gpytorch.kernels.RBFKernel()\n \n # We learn an IndexKernel for 2 tasks\n # (so we'll actually learn 2x2=4 tasks with correlations)\n self.task_covar_module = gpytorch.kernels.IndexKernel(num_tasks=2, rank=1)\n\n def forward(self,x,i):\n mean_x = self.mean_module(x)\n \n # Get input-input covariance\n covar_x = self.covar_module(x)\n # Get task-task covariance\n covar_i = self.task_covar_module(i)\n # Multiply the two together to get the covariance we want\n covar = covar_x.mul(covar_i)\n \n return gpytorch.distributions.MultivariateNormal(mean_x, covar)\n\nlikelihood = gpytorch.likelihoods.GaussianLikelihood()\n\ntrain_i_task1 = torch.full_like(train_x1, dtype=torch.long, fill_value=0)\ntrain_i_task2 = torch.full_like(train_x2, dtype=torch.long, fill_value=1)\n\nfull_train_x = torch.cat([train_x1, train_x2])\nfull_train_i = torch.cat([train_i_task1, train_i_task2])\nfull_train_y = torch.cat([train_y1, train_y2])\n\n# Here we have two iterms that we're passing in as train_inputs\nmodel = MultitaskGPModel((full_train_x, full_train_i), full_train_y, likelihood)",
"Training the model\nIn the next cell, we handle using Type-II MLE to train the hyperparameters of the Gaussian process.\nSee the simple regression example for more info on this step.",
"# this is for running the notebook in our testing framework\nimport os\nsmoke_test = ('CI' in os.environ)\ntraining_iterations = 2 if smoke_test else 50\n\n\n# Find optimal model hyperparameters\nmodel.train()\nlikelihood.train()\n\n# Use the adam optimizer\noptimizer = torch.optim.Adam(model.parameters(), lr=0.1) # Includes GaussianLikelihood parameters\n\n# \"Loss\" for GPs - the marginal log likelihood\nmll = gpytorch.mlls.ExactMarginalLogLikelihood(likelihood, model)\n\nfor i in range(training_iterations):\n optimizer.zero_grad()\n output = model(full_train_x, full_train_i)\n loss = -mll(output, full_train_y)\n loss.backward()\n print('Iter %d/50 - Loss: %.3f' % (i + 1, loss.item()))\n optimizer.step()",
"Make predictions with the model",
"# Set into eval mode\nmodel.eval()\nlikelihood.eval()\n\n# Initialize plots\nf, (y1_ax, y2_ax) = plt.subplots(1, 2, figsize=(8, 3))\n\n# Test points every 0.02 in [0,1]\ntest_x = torch.linspace(0, 1, 51)\ntast_i_task1 = torch.full_like(test_x, dtype=torch.long, fill_value=0)\ntest_i_task2 = torch.full_like(test_x, dtype=torch.long, fill_value=1)\n\n# Make predictions - one task at a time\n# We control the task we cae about using the indices\n\n# The gpytorch.settings.fast_pred_var flag activates LOVE (for fast variances)\n# See https://arxiv.org/abs/1803.06058\nwith torch.no_grad(), gpytorch.settings.fast_pred_var():\n observed_pred_y1 = likelihood(model(test_x, tast_i_task1))\n observed_pred_y2 = likelihood(model(test_x, test_i_task2))\n\n# Define plotting function\ndef ax_plot(ax, train_y, train_x, rand_var, title):\n # Get lower and upper confidence bounds\n lower, upper = rand_var.confidence_region()\n # Plot training data as black stars\n ax.plot(train_x.detach().numpy(), train_y.detach().numpy(), 'k*')\n # Predictive mean as blue line\n ax.plot(test_x.detach().numpy(), rand_var.mean.detach().numpy(), 'b')\n # Shade in confidence \n ax.fill_between(test_x.detach().numpy(), lower.detach().numpy(), upper.detach().numpy(), alpha=0.5)\n ax.set_ylim([-3, 3])\n ax.legend(['Observed Data', 'Mean', 'Confidence'])\n ax.set_title(title)\n\n# Plot both tasks\nax_plot(y1_ax, train_y1, train_x1, observed_pred_y1, 'Observed Values (Likelihood)')\nax_plot(y2_ax, train_y2, train_x2, observed_pred_y2, 'Observed Values (Likelihood)')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rgarcia-herrera/sistemas-dinamicos
|
Glass_Pasternack.ipynb
|
gpl-3.0
|
[
"import numpy as np\n\n# importamos bibliotecas para plotear\nimport matplotlib\nimport matplotlib.pyplot as plt\n\n# para desplegar los plots en el notebook\n%matplotlib inline\n\n# para cómputo simbólico\nfrom sympy import *\n\n# configuramos los símbolos alfa, beta, x y f para su uso en sympy\nalpha, beta, x = symbols('alpha beta x')\nf = symbols('f', cls=Function)\ninit_printing()\n\n# importamos slider para experimentos interactivos\nfrom __future__ import print_function\nfrom ipywidgets import interact, interactive, fixed\nimport ipywidgets as widgets\n\n\ndef cobweb(f, x, y):\n \"\"\"\n Dibuja un diagrama de telaraña para una función.\n \"\"\"\n plt.axhline(linewidth=1.0, color=\"black\")\n plt.axvline(linewidth=1.0, color=\"black\")\n plt.ylim((y.min(),y.max()))\n indep = np.linspace(x.min(), x.max(), len(x))\n \n # grafica la funcion \n plt.plot(indep,f(indep),'blue')\n \n # grafica la diagonal\n plt.plot(indep, indep, 'black')\n\n # grafica la telaraña\n y0 = f(x[0])\n x0 = x[0]\n for i in range(len(x)):\n plt.hlines(y0, x0, y0,'r')\n x0 = y0\n y0 = f(x0)\n plt.vlines(x0, x0, y0,'r')",
"La función\nEsta ecuación por Glass y Pasternack (1978) sirve para modelar redes neuronales y de interacción génica.\n$$x_{t+1}=\\frac{\\alpha x_{t}}{1+\\beta x_{t}}$$\nDonde $\\alpha$ y $\\beta$ son números positivos y $x_{t}\\geq0$.",
"def g(x, alpha, beta):\n assert alpha >= 0 and beta >= 0\n return (alpha*x)/(1 + (beta * x))\n\ndef plot_cobg(x, alpha, beta): \n y = np.linspace(x[0],x[1],300)\n g_y = g(y, alpha, beta)\n cobweb(lambda x: g(x, alpha, beta), y, g_y)\n\n# configura gráfica interactiva\ninteract(plot_cobg,\n x=widgets.FloatRangeSlider(min=0.01, max=3, step=0.01,\n value=[0.02, 3],\n continuous_update=False),\n alpha=widgets.FloatSlider(min=0.001, max=30,step=0.01,\n value=12, continuous_update=False),\n beta=widgets.FloatSlider(min=0.001, max=30,step=0.01,\n value=7, continuous_update=False))",
"Búsqueda algebráica de puntos fijos\nA continuación sustituiremos f(x) en x reiteradamente hasta obtener la cuarta iterada de f.",
"# primera iterada\nf0 = (alpha*x)/(1+beta*x)\nEq(f(x),f0)\n\n# segunda iterada\n\n# subs-tituye f0 en la x de f0 para generar f1\nf1 = simplify(f0.subs(x, f0))\nEq(f(f(x)), f1)\n\n# tercera iterada\nf2 = simplify(f1.subs(x, f1))\nEq(f(f(f(x))), f2)\n\n# cuarta iterada\nf3 = simplify(f2.subs(x, f2))\nEq(f(f(f(f(x)))), f3)\n\n# puntos fijos resolviendo la primera iterada\nsolveset(Eq(f1,x),x)\n\n(alpha-1)/beta",
"Punto fijo oscilatorio\nAl configurar $$\\alpha, \\beta$$ de modo que haya un punto fijo la serie de tiempo revela una oscilación entre cero y el punto fijo.",
"def solve_g(a, b):\n y = list(np.linspace(0,float(list(solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)).pop()),2))\n\n for t in range(30):\n y.append(g(y[t], a, b))\n zoom = plt.plot(y)\n\n print(\"ultimos 15 de la serie:\")\n pprint(y[-15:])\n print(\"\\npuntos fijos:\")\n return solveset(Eq(f1.subs(alpha, a).subs(beta, b), x), x)\n \n# gráfica interactiva\ninteract(solve_g,\n a=widgets.IntSlider(min=0, max=30, step=1,\n value=11, continuous_update=False,\n description='alpha'),\n b=widgets.IntSlider(min=0, max=30, step=1,\n value=5, continuous_update=False,\n description='beta'))",
"¿Qué pasará con infinitas iteraciones?\nTodo parece indicar que la función converge a 1 si $\\alpha=1$ y $\\beta=1$.\nSi no, converge a $\\frac{\\alpha}{\\beta}$",
"# con alfa=1 y beta=1\nEq(collect(f3, x), x/(x+1))\n\n\n\ndef plot_g(x, alpha, beta):\n pprint(x)\n y = np.linspace(x[0],x[1],300)\n g_y = g(y, alpha, beta)\n fig1 = plt.plot(y, g_y)\n fig1 = plt.plot(y, y, color='red')\n plt.axis('equal')\n\ninteract(plot_g,\n x=widgets.FloatRangeSlider(min=0, max=30, step=0.01, value=[0,1], continuous_update=False),\n alpha=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False),\n beta=widgets.IntSlider(min=0,max=30,step=1,value=1, continuous_update=False))\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
NREL/bifacial_radiance
|
docs/tutorials/16 - AgriPV - 3-up and 4-up collector optimization.ipynb
|
bsd-3-clause
|
[
"16 - AgriPV - 3-up and 4-up collector optimization\nThis journal helps the exploration of varying collector widths and xgaps in the ground underneath as well as on the rear irradiance for bifacial AgriPV. The optimization varies the numpanels combinations with xgaps for having 3-up and 4-up collectors with varying space along the row (xgap). The actual raytracing is not performed in the jupyter journal but rather on the HPC, but the geometry is the same as presented here.\nThe steps on this journal:\n<ol>\n <li> <a href='#step1'> Making Collectors for each number panel and xgap case </a></li> \n <li> <a href='#step2'> Builds the Scene so it can be viewed with rvu </a></li> \n\n\nAn area of 40m x 20 m area is sampled on the HPC, and is highlighted in the visualizations below with an appended terrain of 'litesoil'. The image below shows the two extremes of the variables optimized and the raytrace results, including the worst-case shading experienced under the array ( 100 - min_irradiance *100 / GHI).\n\n\n\n",
"import os\nfrom pathlib import Path\n\ntestfolder = Path().resolve().parent.parent / 'bifacial_radiance' / 'TEMP' / 'Tutorial_16'\nif not os.path.exists(testfolder):\n os.makedirs(testfolder)\n\nprint (\"Your simulation will be stored in %s\" % testfolder)\n\n\nimport bifacial_radiance\nimport numpy as np\n\nrad_obj = bifacial_radiance.RadianceObj('tutorial_16', str(testfolder)) \n",
"<a id='step1'></a>\n1. Making Collectors for each number panel and xgap case",
"x = 2\ny = 1\nygap = 0.1524 # m = 6 in\nzgap = 0.002 # m, veyr little gap to torquetube.\n\ntubeParams = {'diameter':0.15,\n 'tubetype':'square',\n 'material':'Metal_Grey',\n 'axisofrotation':True,\n 'visible': True}\n\nft2m = 0.3048\nxgaps = [3, 4, 6, 9, 12, 15, 18, 21]\nnumpanelss = [3, 4]\n\n\n# Loops\nfor ii in range(0, len(numpanelss)):\n numpanels = numpanelss[ii]\n for jj in range(0, len(xgaps)):\n xgap = xgaps[jj]*ft2m\n\n moduletype = 'test-module_'+str(numpanels)+'up_'+str(round(xgap,1))+'xgap'\n rad_obj.makeModule(moduletype, \n x=x, y=y, \n xgap=xgap, zgap=zgap, ygap = ygap, numpanels=numpanels, \n tubeParams=tubeParams)\n\n",
"<a id='step2'></a>\n2. Build the Scene so it can be viewed with rvu",
"xgaps = np.round(np.array([3, 4, 6, 9, 12, 15, 18, 21]) * ft2m,1)\nnumpanelss = [3, 4]\nsensorsxs = np.array(list(range(0, 201))) \n\n# Select CASE:\nxgap = np.round(xgaps[-1],1)\nnumpanels = 4\n\n# All the rest\n\nft2m = 0.3048\nhub_height = 8.0 * ft2m\ny = 1\npitch = 0.001 # If I recall, it doesn't like when pitch is 0 even if it's a single row, but any value works here. \nygap = 0.15\ntilt = 18\n\nsim_name = ('Coffee_'+str(numpanels)+'up_'+\n str(round(xgap,1))+'_xgap')\n\nalbedo = 0.35 # Grass value from Torres Molina, \"Measuring UHI in Puerto Rico\" 18th LACCEI \n # International Multi-Conference for Engineering, Education, and Technology\n\nazimuth = 180\nif numpanels == 3:\n nMods = 9\nif numpanels == 4:\n nMods = 7\nnRows = 1\n\nmoduletype = 'test-module_'+str(numpanels)+'up_'+str(round(xgap,1))+'xgap'\n\nrad_obj.setGround(albedo)\nlat = 18.202142\nlon = -66.759187\nmetfile = rad_obj.getEPW(lat,lon)\nrad_obj.readWeatherFile(metfile)\n\nsceneDict = {'tilt':tilt,'pitch':pitch,'hub_height':hub_height,'azimuth':azimuth, 'nMods': nMods, 'nRows': nRows} \nscene = rad_obj.makeScene(module=moduletype,sceneDict=sceneDict, radname = sim_name)\n\nrad_obj.gendaylit(4020)\n\n\noctfile = rad_obj.makeOct(filelist = rad_obj.getfilelist(), octname = rad_obj.basename) \n\nname='SampleArea'\ntext='! genbox litesoil cuteBox 40 20 0.01 | xform -t -20 -10 0.01'\ncustomObject =rad_obj.makeCustomObject(name,text)\nrad_obj.appendtoScene(scene.radfiles, customObject, '!xform -rz 0')\n\noctfile = rad_obj.makeOct(rad_obj.getfilelist()) \n",
"To View the generated Scene, you can navigate to the testfolder on a terminal and use:\n<b>front view:<b>\n\nrvu -vf views\\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 makemod.oct\n\n<b> top view: </b>\n\nrvu -vf views\\front.vp -e .0265652 -vp 5 0 70 -vd 0 0.0001 -1 makemod.oct\n\nOr run it directly from Jupyter by removing the comment from the following cell:",
"\n## Comment the ! line below to run rvu from the Jupyter notebook instead of your terminal.\n## Simulation will stop until you close the rvu window\n\n#!rvu -vf views\\front.vp -e .0265652 -vp 2 -21 2.5 -vd 0 1 0 makemod.oct\n#!rvu -vf views\\front.vp -e .0265652 -vp 5 0 70 -vd 0 0.0001 -1 makemod.oct\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hayatoy/cloudml-magic
|
examples/Keras_Fine_Tuning.ipynb
|
mit
|
[
"Keras Fine Tuning on Cloud Machine Learning Engine\nThis notebook shows how to create transfer learning model using Keras, and train on ML Engine then serve on ML Engine's Online Prediction. \nInstall cloudmlmagic ahead of running.",
"# !pip install cloudmlmagic",
"Load cloudmlmagic extension",
"%load_ext cloudmlmagic",
"Initialize and setup ML Engine parameters.\n<font color=\"red\">Change PROJECTID and BUCKET</font> \nFollowing dict will be written in setup.py of your package,\nso list up neccesary packages of your code.",
"%%ml_init -projectId PROJECTID -bucket BUCKET -scaleTier BASIC_GPU -region asia-east1 -runtimeVersion 1.2\n{'install_requires': ['keras', 'h5py', 'Pillow']}",
"Load InceptionV3 model",
"%%ml_code\n\nfrom keras.applications.inception_v3 import InceptionV3\n\nmodel = InceptionV3(weights='imagenet')",
"Load dataset",
"%%ml_code\n\nfrom keras.preprocessing import image\nfrom keras.applications.inception_v3 import preprocess_input, decode_predictions\nfrom io import BytesIO\nimport numpy as np\nimport pandas as pd\nimport requests\n\nurl = 'https://github.com/hayatoy/deep-learning-datasets/releases/download/v0.1/tl_opera_capitol.npz'\nresponse = requests.get(url)\ndataset = np.load(BytesIO(response.content))\n\nX_dataset = dataset['features']\ny_dataset = dataset['labels']",
"Split dataset for train and test",
"%%ml_code\n\nfrom keras.utils import np_utils\nfrom sklearn.model_selection import train_test_split\n\nX_dataset = preprocess_input(X_dataset)\ny_dataset = np_utils.to_categorical(y_dataset)\nX_train, X_test, y_train, y_test = train_test_split(\n X_dataset, y_dataset, test_size=0.2, random_state=42)",
"The code cell above won't be included in the package being deployed on ML Engine.\nJust to clarify that normal InceptionV3 model cannot predict correctly with the Opera/Capitol dataset.",
"x = X_dataset[0]\nx = np.expand_dims(x, axis=0)\n\npreds = model.predict(x)\nprint('Predicted:')\nfor p in decode_predictions(preds, top=5)[0]:\n print(\"Score {}, Label {}\".format(p[2], p[1]))",
"Visualize last layers of InceptionV3",
"pd.DataFrame(model.layers).tail()\n\n%ml_code\n\nfrom keras.models import Model\n\n# Intermediate layer\nintermediate_layer_model = Model(inputs=model.input, outputs=model.layers[311].output)",
"Extract intermediate features",
"x = np.expand_dims(X_dataset[0], axis=0)\nfeature = intermediate_layer_model.predict(x)\npd.DataFrame(feature.reshape(-1,1)).plot(figsize=(12, 3))",
"Append dense layer at the last",
"%%ml_code\n\nfrom keras.layers import Dense\n\n# Append dense layer\nx = intermediate_layer_model.output\nx = Dense(1024, activation='relu')(x)\npredictions = Dense(2, activation='softmax')(x)\n\n# Transfer learning model, all layers are trainable at this moment\ntransfer_model = Model(inputs=intermediate_layer_model.input, outputs=predictions)\n\nprint(pd.DataFrame(transfer_model.layers).tail())\n\n# Freeze all layers\nfor layer in transfer_model.layers:\n layer.trainable = False\n\n# Unfreeze the last layers, so that only these layers are trainable.\ntransfer_model.layers[312].trainable = True\ntransfer_model.layers[313].trainable = True\n\ntransfer_model.compile(loss='categorical_crossentropy',\n optimizer='adam',\n metrics=['accuracy'])\n\n%%ml_run cloud\n\nimport tensorflow as tf\nfrom keras import backend as K\n\ntransfer_model.fit(X_train, y_train, epochs=20,\n verbose=2,\n validation_data=(X_test, y_test))\nloss, acc = transfer_model.evaluate(X_test, y_test)\nprint('Loss {}, Accuracy {}'.format(loss, acc))\n\nK.set_learning_phase(0) # test\nsess = K.get_session()\n\nfrom tensorflow.python.framework import graph_util\n\n# Make GraphDef of Transfer Model\ng_trans = sess.graph\ng_trans_def = graph_util.convert_variables_to_constants(sess,\n g_trans.as_graph_def(),\n [transfer_model.output.name.replace(':0','')])\n\n# Image Converter Model\nwith tf.Graph().as_default() as g_input:\n input_b64 = tf.placeholder(shape=(1,), dtype=tf.string, name='input')\n input_bytes = tf.decode_base64(input_b64[0])\n image = tf.image.decode_image(input_bytes)\n image_f = tf.image.convert_image_dtype(image, dtype=tf.float32)\n input_image = tf.expand_dims(image_f, 0)\n output = tf.identity(input_image, name='input_image')\n\ng_input_def = g_input.as_graph_def()\n\n\n\nwith tf.Graph().as_default() as g_combined:\n x = tf.placeholder(tf.string, name=\"input_b64\")\n\n im, = tf.import_graph_def(g_input_def,\n input_map={'input:0': x},\n return_elements=[\"input_image:0\"])\n\n pred, = tf.import_graph_def(g_trans_def,\n input_map={transfer_model.input.name: im,\n 'batch_normalization_1/keras_learning_phase:0': False},\n return_elements=[transfer_model.output.name])\n\n with tf.Session() as sess2:\n inputs = {\"inputs\": tf.saved_model.utils.build_tensor_info(x)}\n outputs = {\"outputs\": tf.saved_model.utils.build_tensor_info(pred)}\n signature = tf.saved_model.signature_def_utils.build_signature_def(\n inputs=inputs,\n outputs=outputs,\n method_name=tf.saved_model.signature_constants.PREDICT_METHOD_NAME\n )\n\n # save as SavedModel\n b = tf.saved_model.builder.SavedModelBuilder('gs://BUCKET/keras-mlengine/savedmodel')\n b.add_meta_graph_and_variables(sess2,\n [tf.saved_model.tag_constants.SERVING],\n signature_def_map={'serving_default': signature})\n b.save()\n\n# This cell is to prevent \"runAll\".\n# you must wait until ML Engine job finishes\nraise Exception('wait until ml engine job finishes..')",
"Create Model and Version for Online Prediction",
"# !gcloud ml-engine models create OperaCapitol\n!gcloud ml-engine versions create v1 --model OperaCapitol --runtime-version 1.2 --origin gs://BUCKET/keras-mlengine/savedmodel",
"Let's classify this image! This must be class 0..\n<img src=\"opera.jpg\">",
"from oauth2client.client import GoogleCredentials\nfrom googleapiclient import discovery\nfrom googleapiclient import errors\n\nPROJECTID = 'PROJECTID'\nprojectID = 'projects/{}'.format(PROJECTID)\nmodelName = 'OperaCapitol'\nmodelID = '{}/models/{}'.format(projectID, modelName)\n\ncredentials = GoogleCredentials.get_application_default()\nml = discovery.build('ml', 'v1', credentials=credentials)\n\nwith open('opera.jpg', 'rb') as f:\n b64_x = f.read()\nimport base64\nimport json\n\nb64_x = base64.urlsafe_b64encode(b64_x)\ninput_instance = dict(inputs=b64_x)\ninput_instance = json.loads(json.dumps(input_instance))\nrequest_body = {\"instances\": [input_instance]}\n\nrequest = ml.projects().predict(name=modelID, body=request_body)\ntry:\n response = request.execute()\nexcept errors.HttpError as err:\n # Something went wrong with the HTTP transaction.\n # To use logging, you need to 'import logging'.\n print('There was an HTTP error during the request:')\n print(err._get_reason())\nresponse"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
gschivley/ERCOT_power
|
Raw Data/Merging EPA and EIA.ipynb
|
mit
|
[
"TODO:\n\n\nVerify that the merge result is as expected. I noticed that when I checked the length of the EPA for a 6-month period, the length was longer than the post-merge 6-month period. This implies that the EIA data doesn't have some facilities that are listed in EPA. I haven't verified this though.\n\n\nNaN values also need to be replaced with 0's.\n\n\nNumbers are being stored in scientific notation",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport os\nimport glob\nimport re\nimport cPickle as pickle\nimport gzip\nimport seaborn as sns",
"Loading the EIA Data, the path may need to be updated...\nThis will take a few minutes to run.",
"#Iterate through the directory to find all the files to import\n#Modified so that it also works on macs\npath = os.path.join('EIA Data', '923-No_Header')\nfull_path = os.path.join(path, '*.*')\n\n\neiaNames = os.listdir(path)\n\n#Rename the keys for easier merging later\nfileNameMap = {'EIA923 SCHEDULES 2_3_4_5 Final 2010.xls':2010,\n 'EIA923 SCHEDULES 2_3_4_5 M Final 2009 REVISED 05252011.XLS':2009,\n 'eia923December2008.xls':2008,\n 'EIA923_Schedules_2_3_4_5_2011_Final_Revision.xlsx':2011,\n 'EIA923_Schedules_2_3_4_5_2012_Final_Release_12.04.2013.xlsx':2012,\n 'EIA923_Schedules_2_3_4_5_2013_Final_Revision.xlsx':2013,\n 'EIA923_Schedules_2_3_4_5_M_12_2014_Final_Revision.xlsx':2014,\n 'EIA923_Schedules_2_3_4_5_M_12_2015_Final.xlsx':2015,\n 'f906920_2007.xls':2007}\n\n#Load the files into data frames, one df per file\neiaDict = {fileNameMap[fn]:pd.read_excel(os.path.join(path, fn)) for fn in eiaNames}\neiaDict = {key:val[val[\"NERC Region\"] == \"TRE\"] for key, val in eiaDict.iteritems()}",
"The excel documents have different column names so we need to standardize them all",
"#Dict of values to replace to standardize column names across all dataframes\nmonthDict = {\"JANUARY\":\"JAN\",\n \"FEBRUARY\":\"FEB\",\n \"MARCH\":\"MAR\",\n \"APRIL\":\"APR\",\n \"MAY\":\"MAY\",\n \"JUNE\":\"JUN\",\n \"JULY\":\"JUL\",\n \"AUGUST\":\"AUG\",\n \"SEPTEMBER\":\"SEP\",\n \"OCTOBER\":\"OCT\",\n \"NOVEMBER\":\"NOV\",\n \"DECEMBER\":\"DEC\"}\n \nreplaceDict = {\"ELECTRIC\":\"ELEC\",\n \"&\":\"AND\",\n \"I.D.\":\"ID\",\n \"MMBTUPER\":\"MMBTU_PER\"}\n \n#Add \"MMBTUMON\" : \"MMBTU_MON\" to be replaced\nfor month in monthDict.values():\n replaceDict[\"MMBTU\"+month] = \"MMBTU_\" + month\n\n#Replace the column name\ndef rename(col):\n for old, new in monthDict.iteritems():\n col = col.replace(old, new)\n \n for old, new in replaceDict.iteritems():\n col = col.replace(old, new)\n \n col = col.replace(\"MMBTUS\", \"MMBTU\")\n return col\n \n#Iterate through each column name of each dataframe to standardize\nfor key, df in eiaDict.iteritems():\n colNames = [name.replace(\"\\n\", \"_\").replace(\" \", \"_\").strip().upper() for name in df.columns]\n colNames = [rename(col) for col in colNames]\n eiaDict[key].columns = colNames",
"Define which columns we need to sum, and which columns don't need to be summed, but we still need to keep.\nNote: If we don't care about monthly stuff we can delete the second block of code.",
"#Define the columns that are necessary but are not summable\nallCols = eiaDict[fileNameMap.values()[0]].columns\nnonSumCols = [\"PLANT_ID\", \"PLANT_NAME\", \"YEAR\"]\n\n#Define the columns that contain the year's totals (Used to calc fuel type %)\nyearCols = [\"TOTAL_FUEL_CONSUMPTION_QUANTITY\", \"ELEC_FUEL_CONSUMPTION_QUANTITY\",\n \"TOTAL_FUEL_CONSUMPTION_MMBTU\", \"ELEC_FUEL_CONSUMPTION_MMBTU\",\n \"NET_GENERATION_(MEGAWATTHOURS)\"]\n\n\n#Define the columns that are necessary and summable\nsumCols = []\nsumCols.extend(yearCols)\n# regex = re.compile(r\"^ELEC_QUANTITY_.*\")\n# sumCols.extend([col for col in allCols if regex.search(col)])\nregex = re.compile(r\"^MMBTU_PER_UNIT_.*\")\nsumCols.extend([col for col in allCols if regex.search(col)])\nregex = re.compile(r\"^TOT_MMBTU_.*\")\nsumCols.extend([col for col in allCols if regex.search(col)])\nregex = re.compile(r\"^ELEC_MMBTUS_.*\")\nsumCols.extend([col for col in allCols if regex.search(col)])\nregex = re.compile(r\"^NETGEN_.*\")\nsumCols.extend([col for col in allCols if regex.search(col)])",
"Get a list of all the different fuel type codes. If we don't care about all of them, then just hardcode the list",
"fuelTypes = []\nfuelTypes.extend([fuelType for df in eiaDict.values() for fuelType in df[\"REPORTED_FUEL_TYPE_CODE\"].tolist()])\nfuelTypes = set(fuelTypes)\n\nfuelTypes",
"3 parts to aggregate by facility, and to calculate the % of each type of fuel. This will take a few minutes to run.\nThe end result is aggEIADict.",
"#Actually calculate the % type for each facility grouping\ndef calcPerc(group, aggGroup, fuelType, col):\n #Check to see if the facility has a record for the fuel type, and if the total column > 0\n if len(group[group[\"REPORTED_FUEL_TYPE_CODE\"] == fuelType]) > 0 and aggGroup[col] > 0:\n #summing fuel type because a facility may have multiple plants with the same fuel type \n return float((group[group[\"REPORTED_FUEL_TYPE_CODE\"] == fuelType][col]).sum())/aggGroup[col] \n else:\n return 0\n\n#Perform the aggregation on facility level\ndef aggAndCalcPerc(group):\n aggGroup = group.iloc[0][nonSumCols] #Get the non-agg columns\n aggGroup = aggGroup.append(group[sumCols].sum()) #Aggregate the agg columns and append to non-agg\n percCols = {col + \" %\" + fuelType:calcPerc(group, aggGroup, fuelType, col) for col in yearCols for fuelType in fuelTypes}\n aggGroup = aggGroup.append(pd.Series(percCols))\n return aggGroup \n\n#Iterate through each dataframe to perform aggregation by facility\naggEIADict = dict()\nfor key, df in eiaDict.iteritems():\n gb = df.groupby(by=\"PLANT_ID\")\n #aggGroup will be a list of panda series, each series representing a facility\n aggGroup = [aggAndCalcPerc(gb.get_group(group)) for group in gb.groups]\n aggEIADict[key] = pd.DataFrame(aggGroup)",
"Column order doesn't match in all years",
"aggEIADict[2007].head()\n\naggEIADict[2015].head()",
"Export the EIA 923 data as pickle\nJust sending the dictionary to a pickle file for now. At least doing this will save several min of time loading and processing the data in the future.",
"filename = 'EIA 923.pkl'\npath = '../Clean Data'\nfullpath = os.path.join(path, filename)\n\npickle.dump(aggEIADict, open(fullpath, 'wb'))",
"Combine all df's from the dict into one df\nConcat all dataframes, reset the index, determine the primary fuel type for each facility, filter to only include fossil power plants, and export as a csv",
"all923 = pd.concat(aggEIADict)\n\nall923.head()\n\nall923.reset_index(drop=True, inplace=True)\n\n# Check column numbers to use in the function below\nall923.iloc[1,1:27]\n\ndef top_fuel(row):\n #Fraction of largest fuel for electric heat input \n try:\n fuel = row.iloc[1:27].idxmax()[29:]\n except:\n return None\n return fuel\n\nall923['FUEL'] = all923.apply(top_fuel, axis=1)\n\nall923.head()\n\nfossil923 = all923.loc[all923['FUEL'].isin(['DFO', 'LIG', 'NG', 'PC', 'SUB'])]",
"Export the EIA 923 data dataframe as csv\nExport the dataframe with primary fuel and filtered to only include fossil plants",
"filename = 'Fossil EIA 923.csv'\npath = '../Clean Data'\nfullpath = os.path.join(path, filename)\nfossil923.to_csv(fullpath)\n",
"Loading the EPA Data, the path may need to be updated...",
"#Read the EPA files into a dataframe\npath2 = os.path.join('EPA air markets')\nepaNames = os.listdir(path2)\nfilePaths = {dn:os.path.join(path2, dn, \"*.txt\") for dn in epaNames}\nfilePaths = {dn:glob.glob(val) for dn, val in filePaths.iteritems()}\nepaDict = {key:pd.read_csv(fp, index_col = False) for key, val in filePaths.iteritems() for fp in val}",
"First rename the column name so we can merge on that column, then change the datatype of date to a datetime object",
"#Rename the column names to remove the leading space.\nfor key, df in epaDict.iteritems():\n colNames = [name.upper().strip() for name in df.columns]\n colNames[colNames.index(\"FACILITY ID (ORISPL)\")] = \"PLANT_ID\"\n epaDict[key].columns = colNames\n \n#Convert DATE to datetime object\n#Add new column DATETIME with both date and hour\nfor key, df in epaDict.iteritems():\n epaDict[key][\"DATE\"] = pd.to_datetime(df[\"DATE\"])\n epaDict[key]['DATETIME'] = df['DATE'] + pd.to_timedelta(df['HOUR'], unit='h')",
"The DataFrames in epaDict contain all power plants in Texas. We can filter on NERC REGION so that it only includes ERCOT.",
"set(epaDict['2015 July-Dec'].loc[:,'NERC REGION'])\n\n#Boolean filter to only keep ERCOT plants\nfor key, df in epaDict.iteritems():\n epaDict[key] = df[df[\"NERC REGION\"] == \"ERCOT\"].reset_index(drop = True)\n \n\nset(epaDict['2015 July-Dec'].loc[:,'NERC REGION'])\n\nepaDict['2015 July-Dec'].head()",
"Export EPA data as a series of dataframes\nThe whole dictionary is too big as a pickle file",
"# pickle with gzip, from http://stackoverflow.com/questions/18474791/decreasing-the-size-of-cpickle-objects\ndef save_zipped_pickle(obj, filename, protocol=-1):\n with gzip.open(filename, 'wb') as f:\n pickle.dump(obj, f, protocol)\n\nfilename = 'EPA hourly dictionary.pgz'\npath = '../Clean Data'\nfullpath = os.path.join(path, filename)\n\nsave_zipped_pickle(epaDict, fullpath)\n\ndf = epaDict['2015 July-Dec']\n\ndf.head()\n\nset(df['PLANT_ID'])\n\ndf_temp = df[df['PLANT_ID'].isin([127, 298, 3439])].fillna(0)\n\ndf_temp.head()\n\ng = sns.FacetGrid(df_temp, col='PLANT_ID')\ng.map(plt.plot, 'datetime', 'GROSS LOAD (MW)')\ng.set_xticklabels(rotation=30)\n\npath = os.path.join('..', 'Exploratory visualization', 'Midterm figures', 'Sample hourly load.svg')\nplt.savefig(path)",
"Finally join the two data sources\nSwitch to an inner join?\nNo need to join. Can keep them as separate databases, since one is hourly data and the other is annual/monthly Create a clustering dataframe with index of all plant IDs (from the EPA hourly data), add columns with variables. Calculate the inputs in separate dataframes - example is to calculate ramp rate values in the EPA hourly data, then put the results in the clustering dataframe.",
"#Join the two data sources on PLANT_ID\nfullData = {key:df.merge(aggEIADict[df[\"YEAR\"][0]], on=\"PLANT_ID\") for key, df in epaDict.iteritems()}\n\nfullData[fullData.keys()[0]].head()",
"BIT, SUB, LIG, NG, DFO, RFO",
"[x for x in fullData[fullData.keys()[0]].columns]",
"Loading EIA 860 Data",
"# Iterate through the directory to find all the files to import\npath = os.path.join('EIA Data', '860-No_Header')\nfull_path = os.path.join(path, '*.*')\n\neia860Names = os.listdir(path)\n\n# Rename the keys for easier merging later\nfileName860Map = { 'GenY07.xls':2007,\n 'GenY08.xls':2008,\n 'GeneratorY09.xls':2009,\n 'GeneratorsY2010.xls':2010,\n 'GeneratorY2011.xlsx':2011,\n 'GeneratorY2012.xlsx':2012,\n '3_1_Generator_Y2013.xlsx':2013,\n '3_1_Generator_Y2014.xlsx':2014,\n '3_1_Generator_Y2015.xlsx':2015}\n\n#Load the files into data frames, one df per file\neia860Dict = {fileName860Map[fn]:pd.read_excel(os.path.join(path, fn)) for fn in eia860Names} \n\n#Dict of values to replace to standardize column names across all dataframes\nrenameDict = { \"PLNTCODE\":\"PLANT_ID\",\n \"PLANT_CODE\":\"PLANT_ID\",\n \"Plant Code\":\"PLANT_ID\",\n \"NAMEPLATE\":\"NAMEPLATE_CAPACITY(MW)\",\n \"Nameplate Capacity (MW)\":\"NAMEPLATE_CAPACITY(MW)\"}\n\n#Replace the column name\ndef rename860(col):\n for old, new in renameDict.iteritems():\n col = col.replace(old, new)\n return col\n\n#Iterate through each column name of each dataframe to standardize and select columns 'PLANT_ID', 'NAMEPLATE_CAPACITY(MW)'\nfor key, df in eia860Dict.iteritems():\n colNames = [rename860(col) for col in df.columns]\n eia860Dict[key].columns = colNames\n eia860Dict[key] = eia860Dict[key][[\"PLANT_ID\", \"NAMEPLATE_CAPACITY(MW)\"]]\n\n# verify the tables\nfor key, df in eia860Dict.iteritems():\n print key, df.columns, len(df)\n\n# Iterate through each dataframe to perform aggregation by PLANT_ID\nfor key, df in eia860Dict.iteritems():\n gb = df.groupby(by='PLANT_ID').apply(lambda x: x['NAMEPLATE_CAPACITY(MW)'].sum())\n eia860Dict[key]['NAMEPLATE_CAPACITY(MW)'] = eia860Dict[key].PLANT_ID.apply(gb.get_value)\n eia860Dict[key] = eia860Dict[key].drop_duplicates(subset=['PLANT_ID', 'NAMEPLATE_CAPACITY(MW)'])\n eia860Dict[key] = eia860Dict[key].sort_values(by='PLANT_ID').reset_index(drop=True)",
"Export EIA 860 data",
"filename = 'EIA 860.pkl'\npath = '../Clean Data'\nfullpath = os.path.join(path, filename)\n\npickle.dump(eia860Dict, open(fullpath, 'wb'))",
"Creating Final DataFrame for Clustering Algorithm:\nclusterDict {year : cluster_DF}\n\nFor each PLANT_ID in aggEIADict, fetch the corresponding aggregated NAMEPLATE_CAPACITY(MW)",
"clusterDict = dict()\nfor key, df in eia860Dict.iteritems():\n clusterDict[key] = pd.merge(aggEIADict[key], eia860Dict[key], how='left', on='PLANT_ID')[['PLANT_ID', 'NAMEPLATE_CAPACITY(MW)']]\n clusterDict[key].rename(columns={'NAMEPLATE_CAPACITY(MW)': 'capacity', 'PLANT_ID': 'plant_id'}, inplace=True)\n\n# verify for no loss of data\nfor key, df in eia860Dict.iteritems():\n print key, len(clusterDict[key]), len(aggEIADict[key])\n\nclusterDict[2015].head()",
"Function to get fuel type",
"fuel_cols = [col for col in aggEIADict[2008].columns if 'ELEC_FUEL_CONSUMPTION_MMBTU %' in col]\n\ndef top_fuel(row):\n #Fraction of largest fuel for electric heat input \n try:\n fuel = row.idxmax()[29:]\n except:\n return None\n return fuel\n\n# clusterDict[2008]['fuel'] = aggEIADict[2008][fuel_cols].apply(top_fuel, axis=1)",
"Calculate Capacity factor, Efficiency, Fuel type",
"for key, df in clusterDict.iteritems():\n clusterDict[key]['year'] = key\n clusterDict[key]['capacity_factor'] = aggEIADict[key]['NET_GENERATION_(MEGAWATTHOURS)'] / (8670*clusterDict[key]['capacity'])\n clusterDict[key]['efficiency'] = (aggEIADict[key]['NET_GENERATION_(MEGAWATTHOURS)']*3.412)/(1.0*aggEIADict[key]['ELEC_FUEL_CONSUMPTION_MMBTU'])\n clusterDict[key]['fuel_type'] = aggEIADict[key][fuel_cols].apply(top_fuel, axis=1)\n clusterDict[key] = clusterDict[key][clusterDict[key]['fuel_type'].isin(['SUB', \n 'LIG', \n 'DFO',\n 'NG', \n 'PC'])]",
"Merge all epa files in one df",
"columns = ['PLANT_ID', 'YEAR', 'DATE', 'HOUR', 'GROSS LOAD (MW)']\ncounter = 0\nfor key, df in epaDict.iteritems():\n if counter == 0:\n result = epaDict[key][columns]\n counter = 1\n else:\n result = result.append(epaDict[key][columns], ignore_index=True)\n \n# Change nan to 0\nresult.fillna(0, inplace=True)\n\nresult.describe()",
"Function to calculate the ramp rate for every hour",
"def plant_gen_delta(df):\n \"\"\"\n For every plant in the input df, calculate the change in gross load (MW)\n from the previous hour.\n \n input:\n df: dataframe of EPA clean air markets data\n return:\n df: concatanated list of dataframes\n \"\"\"\n df_list = []\n for plant in df['PLANT_ID'].unique():\n temp = df.loc[df['PLANT_ID'] == plant,:]\n gen_change = temp.loc[:,'GROSS LOAD (MW)'].values - temp.loc[:,'GROSS LOAD (MW)'].shift(1).values\n temp.loc[:,'Gen Change'] = gen_change\n df_list.append(temp)\n return pd.concat(df_list)\n\nramp_df = plant_gen_delta(result)\n\nramp_df.describe()",
"Get the max ramp rate for every plant for each year",
"cols = ['PLANT_ID', 'YEAR', 'Gen Change']\n\nramp_rate_list = []\nfor year in ramp_df['YEAR'].unique():\n for plant in ramp_df.loc[ramp_df['YEAR']==year,'PLANT_ID'].unique():\n # 95th percentile ramp rate per plant per year\n ramp_95 = ramp_df.loc[(ramp_df['PLANT_ID']== plant) & \n (ramp_df['YEAR']==year),'Gen Change'].quantile(0.95, interpolation='nearest')\n ramp_rate_list.append([plant, year, ramp_95])\n\nramp_rate_df = pd.DataFrame(ramp_rate_list, columns=['plant_id', 'year', 'ramp_rate'])\n\nramp_rate_df.describe()\n\nfor key, df in clusterDict.iteritems():\n clusterDict[key] = pd.merge(clusterDict[key], ramp_rate_df, how='left', on=['plant_id', 'year'])\n\nclusterDict[2010].head()\n\n# Check plants larger than 25MW, which is the lower limit for EPA\nclusterDict[2010][clusterDict[2010].capacity >=25].describe()\n\nfor key in clusterDict.keys():\n print key, clusterDict[key].plant_id.count(), clusterDict[key].ramp_rate.count()",
"Save dict to csv",
"# re-arrange column order\ncolumns = ['year', 'plant_id', 'capacity', 'capacity_factor', 'efficiency', 'ramp_rate', 'fuel_type']\n\nfilename = 'Cluster_Data_2.csv'\npath = '../Clean Data'\nfullpath = os.path.join(path, filename)\n\ncounter = 0\nfor key, df in clusterDict.iteritems():\n # create the csv file\n if counter == 0:\n df[columns].sort_values(by='plant_id').to_csv(fullpath, sep=',', index = False)\n counter += 1\n # append to existing csv file\n else:\n df[columns].sort_values(by='plant_id').to_csv(fullpath, sep=',', index = False, header=False, mode = 'a')",
"Assumptions\n\nPlant capacity changes at the start of the year and is constant for the entire year\nSame for ramp rate - no changes over the course of the year"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Olsthoorn/TransientGroundwaterFlow
|
exercises_notebooks/chap5_4_waves.ipynb
|
gpl-3.0
|
[
"<figure>\n <IMG SRC=\"../logo/logo.png\" WIDTH=250 ALIGN=\"right\">\n</figure>\n\nChap5_3, sine wave in a 1-D aquifer of half-infinite extent ($x>0$)\nThese are the exercises we did in class on Friday 2019-12-20.\nThey were done after the class downloaded the Anaconda Python distribution from here.\nThe groundwater theory can be found in the syllabus that you have; the newest syllabus can be downloaded here\nAll the stuff for this course can be found here. If you don't have it yet try using the clone option at the right top of the page of this website. It will put everything on you computer, but the bonus is that you will be automatically informed if there are updates.\nTutorials on using Python for engineers\nFor more help on using Python and jupyter notebooks see \nthe notebook Techniques.ipynb in directory \\TransiendGroundwaterFlow/Assignment and work some tutorial notebooks on exploratory computing by Mark Bakker on github.\nGroundwater\nThe solution for the head change due to a sinusoidal variation of the water level at $x=$ is given by\n$$s(x,t) = A e^{-a x} \\sin (\\omega t - a x)$$\nThe discharge then is\n$$ Q(x, t) = kD\\, A\\, e^{-a x} \\left[sin(\\omega t - a x) + cos(\\omega t - ax)\\right]$$\nor, equivalently\n$$ Q(x, t) = kD \\, A\\, e^{-a x} \\sqrt{2}\\,\\, sin(\\omega t - a x + \\frac \\pi 4)$$\nalso from the theory we have\n$$a = \\sqrt{\\frac {\\omega S}{2 kD}}$$\nWe further have from the theory that the wave length equals\n$$\\Delta x_{full\\,wave} = \\frac {2 \\pi}{a}$$\nAnd the wave velocity\n$$ v = \\frac \\omega a$$\nIt is always a good exercise and an essential check to verify the dimensions of all the variables and parmeters.",
"# Always load these modules for arrays, math and visualization\nimport numpy as np\nimport matplotlib.pyplot as plt",
"First example",
"import numpy as np\nimport matplotlib.pyplot as plt",
"We'll show the wave of head in the subsurface due to a sinusoidal tide at $x=0$. So we choose the parameters that we considere known ($kD$, $S$, $\\omega$ and the amplitude $A$). Then we choose values of $x$ for which to compute the head change $s(x, t)$ and some times for each of which we compute $s(x,t)$. This gives a number of waves, equal to the length of the list or array of chosen $t$ values.",
"times = np.arange(0, 24) / 24 # hourly values\nx = np.linspace(0, 400, num=401) # the x values packed in an numpy array\n\n# Aquifer\nkD = 600 # [m2/d], transmissivty\nS = 0.1 # [-], storage coefficient\n\n# Wave\nA = 1.25 # [m] the amplitude\nT = 0.5 # [d] cycle time\nomega = 2 * np.pi / T # [radians/d] angle velocity\n\n# Combined property a\na = np.sqrt(omega * S / (2 * kD)) # [1/m] damping factor\n\n# Set up a figure\nplt.title('Groundwater head due to sinusoidal fluctuations of head at x=0\\n' +\n f'kD = {kD:.0f} m2/d, S={S:4g}, A={A:.1f} m, \\omega={omega:.2f} [1/d]')\nplt.xlabel('x [m]')\nplt.ylabel('drawdown s(x, t) [m]')\nplt.plot(x, +A * np.exp(-a * x), label='Upper envelope')\nplt.plot(x, -A * np.exp(-a * x), label='Lower envelope')\nplt.grid()\n\nfor t in times:\n y = A * np.exp(-a * x) * np.sin(omega * t - a * x)\n plt.plot(x, y, label='t = {:4g}'.format(t))\n\nplt.legend()\nprint(\"we're done\")",
"This picture shows what we wanted to, but it is not nice. There are too many times in the legend. And the picture is also quite small.\nFurther, it may become boring to have to type all these individual plot instructions all the time. There is a better way. We could define a function that does all setup work for us and also sets the size to a default value, which we can change at will when we call the function.",
"def newfig(title='forgot title?', xlabel='forgot xlabel?', ylabel='forgot ylabel?',\n xlim=None, ylim=None, xscale='linear', yscale='linear', size_inches=(12,7)):\n fig, ax = plt.subplots()\n fig.set_size_inches(size_inches)\n ax.set_title(title)\n ax.set_xlabel(xlabel)\n ax.set_ylabel(ylabel)\n ax.set_xscale(xscale)\n ax.set_yscale(yscale)\n if xlim is not None:\n ax.set_xlim(xlim)\n if ylim is not None:\n ax.set_ylim(ylim)\n ax.grid()\n return ax\n\n# try it\n#newfig(xlim=(1.0e-3, 1.0e2), xscale='log')\n\n# Same example but now with newfig\n\n# Aquifer\nkD = 600 # [m2/d], transmissivty\nS = 0.1 # [-], storage coefficient\n\n# Wave\nA = 1.25 # [m] the amplitude\nT = 0.5 # [d] cycle time\nomega = 2 * np.pi / T # [radians/d] angle velocity\n\n# Combined property a\na = np.sqrt(omega * S / (2 * kD)) # [1/m] damping factor\n\ntitle = 'Head change due to sine-tide at x = 0\\n'\nsubtitle = f'kD = {kD:.0f} m2/d, S={S:.4g}, A={A:.1f} m, ' +\\\n f'T={T:.2f} d, omega={omega:3g}/d, a={a:3g}/m'\n\nax = newfig(title=title + subtitle, xlabel='x [m]', ylabel='s(x,t) [m]')\n\n\nfor t in times:\n y = A * np.exp(-a * x) * np.sin(omega * t - a * x)\n plt.plot(x, y, label='t = {:4g} h'.format(t * 24))\n\nplt.legend()\n",
"This looks good, both the more concise code as the picture in size and legend. The title now contains all information pertaining to the problem we want to solve. So this can never be forgotton. I used a string for the title and the subtitle (second line of the title). Adding two strings like str1 + str2 glues the two strings together, which is what was donce in the call of the newfig function with title=title + subtitle. The subtitle was formatted with the values of the parameters we wanted to show. Read about formatting strings on the internet by searcheing for Python fstrings. It's easy. Then in the loop when plotting each wave, I formatted the label, not with the value in days but in hours by multiplying the value of t by 24 and adding h to the string to denote hours. See the code and the resulting legend. This reads much better than the numbers in the legend of the previous figure. Finally, the size of the figure is bigger. This is because newfig() used the default value for size_inches, which was set at (12, 7) in newfig, meaning 12 inches wide and 7 inchdes high. If you prefer cm or mm, you have to recalc yourself, i.e. call the function with mm and in the function convert to inches, because matplotlib always treats the specified numbers as inches.\nYou can now experiment with the parameter values. Simply chane one or more and run all the cells in which you made changes again.\nSecond example not one but several waves (superposition)\nWe still have only one value for kD and S since we word with the same aquifer. But we may want to consider several waves, each with its own omega, amplitude\nWe call the list of wave cycle times Times the omega values omeagas,\nthe list of amplitudes amplitudes and the list of damping factors dampings.\nTo make sure that each wave gets its own specific values we bundle these listes in a zip and then loop over them\nfor omgea, A, a in zip(omegas, amplitudes, dampings):\n etc\nAnd this loops loops over the loop over the times (note that times is different from Times. times were just real times in days and Times are the list of cycle times for the waves!",
"kD = 600 # [m2/d], transmissivty\nS = 0.1 # [-], storage coefficient\n\nthetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi # initial angles\namplitudes = [1.25, 0.6, 1.7, 0.8, 1.1] # [m] the 5 amplitudes\nTimes = np.array([0.1, 0.25, 0.33, 0.67, 1.0]) # [d] cycle times\nomegas = 2 * np.pi / Times # [radians/d] angle velocity\ndampings = np.sqrt(omegas * S / (2 * kD)) # [1/m] damping factor\n\nx = np.linspace(0, 200, num=201)\ntimes = np.arange(0, 24, 6) / 24\n\n\ntitle = 'Head change due to several sinusoidal tides at x = 0 '\nsubtitle = f'kD = {kD:.0f} m2/d, S={S:.4g}'\n\nax = newfig(title=title + subtitle, xlabel='x [m]', ylabel='s(x,t) [m]')\n\nt = times[3]\ns = np.zeros_like(x)\nfor omega, A, a, theta in zip(omegas, amplitudes, dampings, thetas):\n ds = A * np.exp(-a * x) * np.sin(omega * t - a * x + theta)\n s = s + ds\n ax.plot(x, ds, label=\n 't={:.2g}h theta= {:.2g} rad, A={:.2g}m, T={:.2g}h, a={:.2g}/m'.\n format(t/24, theta, A, 2 * np.pi/omega * 24, a))\n\nax.plot(x, s, lw=3, label='total', alpha=0.3)\nax.legend()",
"Next we'll only show the total effect, i.e. the sum of all individual waves, but we'll do that for a sequence of times",
"kD = 600 # [m2/d], transmissivty\nS = 0.1 # [-], storage coefficient\n\nthetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi # initial angles\namplitudes = [1.25, 0.6, 1.7, 0.8, 1.1] # [m] the 5 amplitudes\nTimes = np.array([0.1, 0.25, 0.33, 0.67, 1.0]) # [d] cycle times\nomegas = 2 * np.pi / Times # [radians/d] angle velocity\ndampings = np.sqrt(omegas * S / (2 * kD)) # [1/m] damping factor\n\nx = np.linspace(0, 200, num=201)\ntimes = np.arange(0, 1, 0.1)\n\n\ntitle = 'Head change due to several sinusoidal tides at x = 0 '\nsubtitle = f'kD = {kD:.0f} m2/d, S={S:.4g}'\n\nax = newfig(title=title + subtitle, xlabel='x [m]', ylabel='s(x,t) [m]', ylim=(-5, 5))\n\nt = times[3]\nfor t in times:\n s = np.zeros_like(x)\n for omega, A, a, theta in zip(omegas, amplitudes, dampings, thetas):\n ds = A * np.exp(-a * x) * np.sin(omega * t - a * x + theta)\n s = s + ds\n #ax.plot(x, ds, label=\n # 't={:.2g}h theta= {:.2g} rad, A={:.2g}m, T={:.2g}h, a={:.2g}/m'.\n # format(t/24, theta, A, 2 * np.pi/omega * 24, a))\n\n ax.plot(x, s, lw=3, label=f'total, t={t * 24:.2g}h', alpha=0.3)\nax.legend(loc='best')\n",
"Finally, we'll animate the wave. You may read more about making animations using matplotlib here. Once the idea of animation is understood, animating functions become straightforward.\nThe computation and saving of the animation may require some time. But once the the computions are finished the animation well be shown by video-software that you already have on your PC. If not go to the directory where the video was saved as a .gis file and click it on. That shoul launch your video, Some times looking at the file with the file-browser is enough to have the browser animate the saved .gis file.",
"from matplotlib.animation import FuncAnimation\n#import matplotlib.animation\n\n# We need dit to make sure the video is launched\n%matplotlib\n\nkD = 600 # [m2/d], transmissivty\nS = 0.1 # [-], storage coefficient\n\nthetas = np.array([0.3, 0.7, 0.19, 0.6, 0.81]) * 2 * np.pi # initial angles\namplitudes = [1.25, 0.6, 1.7, 0.8, 1.1] # [m] the 5 amplitudes\nTimes = np.array([0.1, 0.25, 0.33, 0.67, 1.0]) # [d] cycle times\nomegas = 2 * np.pi / Times # [radians/d] angle velocity\ndampings = np.sqrt(omegas * S / (2 * kD)) # [1/m] damping factor\n\nx = np.linspace(0, 200, num=201)\ntimes = np.arange(0, 1, 0.01)\n\n\ntitle = 'Several sines superimposed, '\nsubtitle = f'kD = {kD:.0f} m2/d, S={S:.4g}'\n\nax = newfig(title=title + subtitle, xlabel='x [m]', ylabel='s(x,t) [m]',\n ylim=(-5, 5), xlim=(0, x[-1]), size_inches=(6, 4))\nfig = plt.gcf()\n\n# initial\nline, = ax.plot(x,np.zeros_like(x), lw=3, alpha=0.3) # the comme picks line from list [line]\n\ndef init():\n pass # don't need to do anything, we already have the initial line\n return line, # the comma packs line int a tuple\n\ndef animate(i): # this does the work after each cycle.\n t = times[i]\n s = np.zeros_like(x)\n for omega, A, a, theta in zip(omegas, amplitudes, dampings, thetas):\n s += A * np.exp(-a * x) * np.sin(omega * t - a * x + theta)\n line.set_data(x, s) # replaces the line s\n return line, # comma packs line in to a tuple like (line), required\n\n# animate\nanim = FuncAnimation(fig, animate, init_func=init, fargs=None,\n frames=len(times), interval=50, blit=True)\n\n# write your animation to disk\nanim.save('compound_sine_wave.gif', writer='imagemagick')\n\n# show reaults\nplt.show()\n"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
molgor/spystats
|
notebooks/.ipynb_checkpoints/predictions_and_residuals_of_FIA_logbiomasS_logsppn_with_spautocor-checkpoint.ipynb
|
bsd-2-clause
|
[
"# Load Biospytial modules and etc.\n%matplotlib inline\nimport sys\nsys.path.append('/apps')\nimport django\ndjango.setup()\nimport pandas as pd\nimport numpy as np\nimport matplotlib.pyplot as plt\n## Use the ggplot style\nplt.style.use('ggplot')",
"Predictions and residuals of the FIA model\nRemember that the model was: $$log(Biomass) \\sim log(SppN) + Z(x) + \\epsilon$$\nWhere Z(x) is a gaussian random field with mean 0 and $\\Sigma^{+} = \\rho(x,x^{'})$ \nWe have done that analysis in former notebooks. This notebook considers that the file: /RawDataCSV/predictions_residuals.csv exists.",
"path = \"/RawDataCSV/predictions_residuals.csv\"\ndata = pd.read_csv(path,index_col='Unnamed: 0')\ndata.shape\n\ndata.shape\ndata.columns =['Y','logBiomass','logSppN','modres']\n\ndata = data.dropna()\ndata.loc[:5]\n\nplt.scatter(np.exp(data.logSppN),np.exp(data.Y))\n\nfrom statsmodels.api import OLS\n\nmod_lin = OLS.from_formula('logBiomass ~ logSppN',data)\n\nres = mod_lin.fit()\n\nlnY = res.predict()\n\nplt.scatter(data.logSppN,data.logBiomass,label=\"Observations\")\nplt.scatter(data.logSppN,data.Y,c='Red',label=\"Predicted (GLS)\")\nplt.scatter(data.logSppN,lnY,c='Green',label=\"Prediction (OLS)\")\nplt.title(\"Comparison of observed and predicted values (log(Biomass) ~ log(SppN))\")\nplt.legend(loc='lower right')\n\nplt.scatter(np.exp(data.logSppN),np.exp(data.logBiomass),label=\"Observations\")\nplt.scatter(np.exp(data.logSppN),np.exp(data.Y),c='Red',label=\"Predicted (GLS)\")\nplt.scatter(np.exp(data.logSppN),np.exp(lnY),c='Green',label=\"Prediction (OLS)\")\nplt.title(\"Comparison of observed and predicted values Biomass ~ SppN\")\nplt.legend(loc='upper right')\n\nplt.scatter(data.logBiomass,data.modres,label=\"Observations\")\n#plt.scatter(np.exp(data.logSppN),np.exp(data.Y),c='Red',label=\"Predicted (GLS)\")\n#plt.scatter(np.exp(data.logSppN),np.exp(lnY),c='Green',label=\"Prediction (OLS)\")\n#plt.title(\"Comparison of observed and predicted values Biomass ~ SppN\")\n#plt.legend(loc='upper right')",
"Standard Error\nUsing the White’s (1980) heteroskedasticity robust standard errors.\nI used the others: MacKinnon and White’s (1985) alternative heteroskedasticity robust standard error \nThe values were the same.\nStandard Errors:\n* Intercept 0.011947\n* logSppN 0.006440",
"path = \"/RawDataCSV/predictions2.csv\"\ndata2 = pd.read_csv(path,index_col='Unnamed: 0')\n#data2 = data2.dropna()\n\n#plt.scatter(data.logSppN,data.logBiomass,label=\"Observations\")\nplt.scatter(data.logSppN,data2.mean,c='Red',label=\"Predicted (GLS)\")\n#plt.scatter(data.logSppN,lnY,c='Green',label=\"Prediction (OLS)\")\n#plt.title(\"Comparison of observed and predicted values (log(Biomass) ~ log(SppN))\")\n#plt.legend(loc='lower right')\n\ndata2.shape\n\ndata.logSppN.shape\n\nd = data.dropna()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
CCBatIIT/AlGDock
|
Example/test_GB.ipynb
|
mit
|
[
"import AlGDock.BindingPMF_plots\nfrom AlGDock.BindingPMF_plots import *\nimport os, shutil, glob\n\nphases = ['NAMD_Gas', 'NAMD_OBC']\n\nself = AlGDock.BindingPMF_plots.BPMF_plots(\\\n dir_dock='dock', dir_cool='cool',\\\n ligand_database='prmtopcrd/ligand.db', \\\n forcefield='prmtopcrd/gaff2.dat', \\\n ligand_prmtop='prmtopcrd/ligand.prmtop', \\\n ligand_inpcrd='prmtopcrd/ligand.trans.inpcrd', \\\n ligand_mol2='prmtopcrd/ligand.mol2', \\\n ligand_rb='prmtopcrd/ligand.rb', \\\n receptor_prmtop='prmtopcrd/receptor.prmtop', \\\n receptor_inpcrd='prmtopcrd/receptor.trans.inpcrd', \\\n receptor_fixed_atoms='prmtopcrd/receptor.pdb', \\\n complex_prmtop='prmtopcrd/complex.prmtop', \\\n complex_inpcrd='prmtopcrd/complex.trans.inpcrd', \\\n complex_fixed_atoms='prmtopcrd/complex.pdb', \\\n score = 'prmtopcrd/anchor_and_grow_scored.mol2', \\\n pose=-1, \\\n rmsd=True, \\\n dir_grid='grids', \\\n protocol='Adaptive', cool_therm_speed=5.0, dock_therm_speed=0.5, \\\n T_HIGH=600.0, T_SIMMIN=300.0, T_TARGET=300.0, \\\n sampler='HMC', \\\n MCMC_moves=1, \\\n sampling_importance_resampling = True, \\\n solvation = 'Fractional', \\\n seeds_per_state=10, steps_per_seed=200, darts_per_seed=0, \\\n sweeps_per_cycle=25, attempts_per_sweep=100, \\\n steps_per_sweep=100, darts_per_sweep=0, \\\n cool_repX_cycles=3, dock_repX_cycles=4, \\\n site='Sphere', site_center=[1.7416, 1.7416, 1.7416], \\\n site_max_R=0.6, \\\n site_density=10., \\\n phases=phases, \\\n cores=-1, \\\n random_seed=-1, \\\n max_time=240, \\\n keep_intermediate=True)\n\nself.universe.setConfiguration(Configuration(self.universe,self.confs['ligand']))\nself._set_universe_evaluator(self._lambda(1.0, 'dock'))\nself.universe.energyTerms()\n# Before the unit fix, OBC_desolv is -358.92191041097993\n\nprint self._forceFields['OBC'].desolvationGridFN\n\n# First goal: Get GB radii for ligand.\n# Second goal: Load receptor\n# Third goal: Get GB radii for ligand with receptor present\n\n# This is to test the gradients, which do not work\n# from AlGDock.ForceFields.OBC.OBC import OBCForceField\n# FF_desolv = OBCForceField(desolvationGridFN=self._FNs['grids']['desolv'])\n# self.universe.setForceField(FF_desolv)\n\n# from MMTK.ForceFields.ForceFieldTest import gradientTest\n#\n# gradientTest(self.universe)\n\n# This is the receptor without the ligand\nself.receptor_R = MMTK.Molecule('receptor.db')\n\nself.universe_R = MMTK.Universe.InfiniteUniverse()\nself.universe_R.addObject(self.receptor_R)\n\n# Getting the AMBER energy appears to work\n# from MMTK.ForceFields import Amber12SBForceField\n# self._forceFields['AMBER'] = \\\n# Amber12SBForceField(\\\n# mod_files=['/Users/dminh/Installers/AlGDock-0.0.1/Data/frcmod.ff14SB',\n# '/Users/dminh/Installers/AlGDock-0.0.1/Example/prmtopcrd/R2773.frcmod',\n# '/Users/dminh/Installers/AlGDock-0.0.1/Example/prmtopcrd/R2777.frcmod'])\n# Specifying the main force field file seems to make the energy calculation crash\n# '/Users/dminh/Installers/AlGDock-0.0.1/Data/parm10_gaff2.dat', \n# self.universe_R.setForceField(self._forceFields['AMBER'])\n# print self.universe_R.energyTerms()\n\nfrom AlGDock.ForceFields.OBC.OBC import OBCForceField\nself.universe_R.setForceField(OBCForceField())\nprint self.universe_R.energyTerms()\n\n# This is the protein alone\n# import MMTK\n# universe = MMTK.Universe.InfiniteUniverse()\n\n# from MMTK.Proteins import Protein\n# protein = Protein('/Users/dminh/Installers/AlGDock-0.0.1/Example/prmtopcrd/receptor.pdb')\n# universe.addObject(protein)\n\n# from MMTK.ForceFields import 
Amber12SBForceField\n# forcefield = Amber12SBForceField(mod_files=['/Users/dminh/Installers/AlGDock-0.0.1/Data/frcmod.ff14SB'])\n# # '/Users/dminh/Installers/AlGDock-0.0.1/Data/parm10.dat', \n# # mod_files=['/Users/dminh/Installers/AlGDock-0.0.1/Data/frcmod.ff14SB'])\n# universe.setForceField(forcefield)\n# universe.energyTerms()\n\nself.receptor_RL = MMTK.Molecule('receptor.db')\nself.molecule_RL = MMTK.Molecule(os.path.basename(self._FNs['ligand_database']))\nfor (atom,pos) in zip(self.molecule_RL.atomList(),self.confs['ligand']):\n atom.setPosition(pos)\n\nself.universe_RL = MMTK.Universe.InfiniteUniverse()\nself.universe_RL.addObject(self.receptor_RL)\nself.universe_RL.addObject(self.molecule_RL)\nself.universe_RL.configuration().array[-len(self.universe.configuration().array):,:] = self.universe.configuration().array\n\nfrom AlGDock.ForceFields.OBC.OBC import OBCForceField\nself.universe_RL.setForceField(OBCForceField())\nprint self.universe_RL.energyTerms()\n\n# The columns are before and after fractional desolvation and in the complex\nI = np.array(\n[( 1.55022, 1.66261, 2.90035 ),\n ( 1.56983, 1.68106, 2.82747 ),\n ( 1.41972, 1.53815, 2.8298 ),\n ( 1.45936, 1.58425, 2.84062 ),\n ( 2.05316, 2.15476, 3.24919 ),\n ( 1.5354, 1.65705, 2.93556 ),\n ( 1.43417, 1.56071, 3.04796 ),\n ( 1.85508, 1.9636 , 3.13083 ),\n ( 2.06909, 2.17661, 3.35871 ),\n ( 2.4237, 2.57502 , 4.41055 ),\n ( 1.9603, 2.16599 , 4.35363 ),\n ( 2.18017, 2.32447, 3.89703 ),\n ( 2.19774, 2.32646, 3.68217 ),\n ( 2.02152, 2.16213, 3.76695 ),\n ( 2.05662, 2.21947, 3.8166 ),\n ( 2.65659, 2.79067, 4.25442 ),\n ( 2.81839, 2.9631 , 4.4784 ),\n ( 2.90653, 3.02582, 4.34538 ),\n ( 2.37779, 2.5372 , 4.38744 ),\n ( 2.17795, 2.32426, 4.23382 ),\n ( 1.77652, 1.89423, 3.35642 ),\n ( 1.22359, 1.38423, 3.22575 ),\n ( 1.2336, 1.40284 , 3.49822 ),\n ( 1.21771, 1.37135, 3.07989 )])\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# This is the different between the quantity added by fractional desolvation and by the full complex\ndiff_frac = I[:,1]-I[:,0]\ndiff_real = I[:,2]-I[:,0]\nplt.plot(diff_frac, diff_real,'.')\n\nA = np.vstack([diff_frac, np.ones(len(diff_frac))]).T\nm,c = np.linalg.lstsq(A, diff_real)[0]\n\nplt.plot(diff_frac, m*diff_frac + c, 'r')\n\nprint 'The correlation is', np.corrcoef(diff_frac,diff_real)[0,1]\nprint 'The linear least squares regression slope is', m, 'and intercept is', c",
"Igrid[atomI] appears to be off by a factor of 12.6!",
"# This is probably due to a unit conversion in a multiplicative prefactor\n\n# This multiplicative prefactor is based on nanometers\nr_min = 0.14\nr_max = 1.0\nprint (1/r_min - 1/r_max)\n\n# This multiplicative prefactor is based on angstroms\nr_min = 1.4\nr_max = 10.0\nprint (1/r_min - 1/r_max)",
"Switching from nanometers to angstroms makes the multiplicative prefactor smaller, which is opposite of the desired effect!",
"4*np.pi",
"Igrid[atomI] appears to be off by a factor of 4*pi!",
"# This is after multiplication by 4*pi\n# Sum for atom 0: 1.55022, 2.96246\n# Sum for atom 1: 1.56983, 2.96756\n# Sum for atom 2: 1.41972, 2.90796\n# Sum for atom 3: 1.45936, 3.02879\n# Sum for atom 4: 2.05316, 3.32989\n# Sum for atom 5: 1.5354, 3.06405\n# Sum for atom 6: 1.43417, 3.02438\n# Sum for atom 7: 1.85508, 3.21875\n# Sum for atom 8: 2.06909, 3.42013\n# Sum for atom 9: 2.4237, 4.32524\n# Sum for atom 10: 1.9603, 4.54512\n# Sum for atom 11: 2.18017, 3.99349\n# Sum for atom 12: 2.19774, 3.8152\n# Sum for atom 13: 2.02152, 3.7884\n# Sum for atom 14: 2.05662, 4.10305\n# Sum for atom 15: 2.65659, 4.34157\n# Sum for atom 16: 2.81839, 4.63688\n# Sum for atom 17: 2.90653, 4.40561\n# Sum for atom 18: 2.37779, 4.38092\n# Sum for atom 19: 2.17795, 4.01659\n# Sum for atom 20: 1.77652, 3.25573\n# Sum for atom 21: 1.22359, 3.24223\n# Sum for atom 22: 1.2336, 3.3604\n# Sum for atom 23: 1.21771, 3.1483"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
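The AlGDock notebook above suspects that Igrid is off by a multiplicative factor related to unit conversion and to 4*pi. The standalone sketch below only reproduces that prefactor arithmetic with the r_min/r_max values quoted in the cells; it is illustrative and not part of the AlGDock code.

```python
# Compare the (1/r_min - 1/r_max) prefactor in the two unit systems discussed
# above, and the 4*pi factor suspected as the source of the ~12.6 discrepancy.
import numpy as np

# Prefactor with distances in nanometers
r_min_nm, r_max_nm = 0.14, 1.0
prefactor_nm = 1.0 / r_min_nm - 1.0 / r_max_nm   # ~6.14 nm^-1

# Prefactor with distances in angstroms
r_min_A, r_max_A = 1.4, 10.0
prefactor_A = 1.0 / r_min_A - 1.0 / r_max_A      # ~0.614 A^-1

print(prefactor_nm, prefactor_A, prefactor_nm / prefactor_A)  # ratio of 10 (nm -> A length conversion)
print(4 * np.pi)                                              # ~12.566, close to the observed factor of ~12.6
```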
vzg100/Post-Translational-Modification-Prediction
|
old/Tyrosine Phosphorylation Example.ipynb
|
mit
|
[
"Example of using ptm_pred to prototype phosphorylation classifiers \nHistadine Phosphorylation is a quick place to start, not much data though. However, that means the code runs much faster.\nPredictor is the class which handles reading the data, sequence vector is a function which vectorizes a protien sequence into a feature array representing amino acids as integer values between 0-20. 0 represents empty space to average out vector length. It can also include hydrophobicity as a feature.",
"from pred import Predictor\nfrom pred import sequence_vector",
"Next we are going to load our data and generate random negative data aka gibberish data. The clean data files has negatives created from the data sets pulled from phosphoELM and dbptm. \nIn generate_random_data the amino acid parameter represents the amino acid being modified aka the target amino acid modification, the float being passed through is multiplier. For example we use .5 here, that means that .5 * number of data points = random negatives generated.",
"y = Predictor()\ny.load_data(file=\"Data/Training/clean_Y.csv\")",
"Next we vectorize the sequences, we are going to use the sequence vector. \nNow we can apply a data balancing function, here we are using adasyn which generates synthetic examples of the minority (in this case positive) class. By setting random data to 1",
"y.process_data(vector_function=\"sequence\", amino_acid=\"Y\", imbalance_function=\"ADASYN\", random_data=1)",
"Now we can apply a data balancing function, here we are using adasyn which generates synthetic examples of the minority (in this case positive) class. \nThe array outputed contains the precision, recall, fscore, and total numbers correctly estimated.",
"y.supervised_training(\"mlp_adam\")",
"Next we can check against the benchmarks pulled from dbptm.",
"y.benchmark(\"Data/Benchmarks/phos.csv\", \"Y\")",
"Want to explore the data some more, easily generate PCA and TSNE diagrams of the training set.",
"y.generate_pca()\n\ny.generate_tsne()",
"There you have it, you have prototype a Tyrosine phosphorylation classifier."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
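The notebook above describes sequence_vector as mapping amino acids to integers between 0 and 20, with 0 standing in for empty space. The sketch below illustrates that idea in plain Python; it is a hypothetical stand-in, since the actual encoding used by pred.sequence_vector (residue ordering, window length, hydrophobicity feature) is not shown here.

```python
# Hypothetical sketch of an integer "sequence vector" encoding as described above;
# the real sequence_vector() in pred.py may differ.
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"                         # 20 standard residues
AA_TO_INT = {aa: i + 1 for i, aa in enumerate(AMINO_ACIDS)}  # 1..20, 0 reserved for padding

def toy_sequence_vector(window, length=13):
    """Encode a peptide window as a fixed-length integer vector, padding with 0."""
    vec = [AA_TO_INT.get(aa, 0) for aa in window.upper()[:length]]
    return vec + [0] * (length - len(vec))

print(toy_sequence_vector("PEPTIDEY"))   # [13, 4, 13, 17, 8, 3, 4, 20, 0, 0, 0, 0, 0]
```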
tpin3694/tpin3694.github.io
|
python/applying_functions_to_list_items.ipynb
|
mit
|
[
"Title: Applying Functions To List Items\nSlug: applying_functions_to_list_items\nSummary: Applying Functions To List Items\nDate: 2016-05-01 12:00\nCategory: Python\nTags: Basics\nAuthors: Chris Albon \nCreate a list of regiment names",
"regimentNames = ['Night Riflemen', 'Jungle Scouts', 'The Dragoons', 'Midnight Revengence', 'Wily Warriors']",
"Using A For Loop\nCreate a for loop goes through the list and capitalizes each",
"# create a variable for the for loop results\nregimentNamesCapitalized_f = []\n\n# for every item in regimentNames\nfor i in regimentNames:\n # capitalize the item and add it to regimentNamesCapitalized_f\n regimentNamesCapitalized_f.append(i.upper())\n \n# View the outcome\nregimentNamesCapitalized_f",
"Using Map()\nCreate a lambda function that capitalizes x",
"capitalizer = lambda x: x.upper()",
"Map the capitalizer function to regimentNames, convert the map into a list, and view the variable",
"regimentNamesCapitalized_m = list(map(capitalizer, regimentNames)); regimentNamesCapitalized_m",
"Using List Comprehension\nApply the expression x.upper to each item in the list called regiment names. Then view the output",
"regimentNamesCapitalized_l = [x.upper() for x in regimentNames]; regimentNamesCapitalized_l"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
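Since the three approaches shown above (for loop, map(), list comprehension) are meant to be interchangeable, a short sanity check can confirm that they produce identical output:

```python
# Quick check that the for-loop, map(), and list-comprehension approaches agree.
regimentNames = ['Night Riflemen', 'Jungle Scouts', 'The Dragoons',
                 'Midnight Revengence', 'Wily Warriors']

loop_result = []
for name in regimentNames:
    loop_result.append(name.upper())

map_result = list(map(lambda name: name.upper(), regimentNames))
comp_result = [name.upper() for name in regimentNames]

assert loop_result == map_result == comp_result
print(comp_result)
```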
sdss/marvin
|
docs/sphinx/jupyter/dap_spaxel_queries.ipynb
|
bsd-3-clause
|
[
"DAP Zonal Queries (or Spaxel Queries)\nMarvin allows you to perform queries on individual spaxels within and across the MaNGA dataset.",
"from marvin import config\nfrom marvin.tools.query import Query\nconfig.mode='remote'",
"Let's grab all spaxels with an Ha-flux > 25 from MPL-5.",
"config.setRelease('MPL-5')\nf = 'emline_gflux_ha_6564 > 25'\nq = Query(search_filter=f)\nprint(q)\n\n# let's run the query\nr = q.run()\n\nr.totalcount\nr.results",
"Spaxel queries are queries on individual spaxels, and thus will always return a spaxel x and y satisfying your input condition. There is the potential of returning a large number of results that span only a few actual galaxies. Let's see how many..",
"# get a list of the plate-ifus\nplateifu = r.getListOf('plateifu')\n# look at the unique values with Python set\nprint('unique galaxies', set(plateifu), len(set(plateifu)))",
"Optimize your query\nUnless specified, spaxel queries will query across all bintypes and stellar templates. If you only want to search over a certain binning mode, this must be specified. If your query is taking too long, or returning too many results, consider filtering on a specific bintype and template.",
"f = 'emline_gflux_ha_6564 > 25 and bintype.name == SPX'\nq = Query(search_filter=f, return_params=['template.name'])\nprint(q)\n\n# run it\nr = q.run()\n\nr.results",
"Global+Local Queries\nTo combine global and local searches, simply combine them together in one filter condition. Let's look for all spaxels that have an H-alpha EW > 3 in galaxies with NSA redshift < 0.1 and a log sersic_mass > 9.5",
"f = 'nsa.sersic_logmass > 9.5 and nsa.z < 0.1 and emline_sew_ha_6564 > 3'\nq = Query(search_filter=f)\nprint(q)\n\nr = q.run()\n\n# Let's see how many spaxels we returned from how many galaxies\nplateifu = r.getListOf('plateifu')\nprint('spaxels returned', r.totalcount)\nprint('from galaxies', len(set(plateifu)))\n\nr.results[0:5]",
"Query Functions\nMarvin also contains more advanced queries in the form of predefined functions. \nFor example, let's say you want to ask Marvin\n\"Give me all galaxies that have an H-alpha flux > 25 in more than 20% of their good spaxels\"\nyou can do so using the query function npergood. npergood accepts as input a standard filter expression condition. E.g., the syntax for the above query would be input as \nnpergood(emline_gflux_ha_6564 > 25) >= 20\nThe syntax is \nFUNCTION(Conditional Expression) Operator Value\nLet's try it...",
"config.mode='remote'\nconfig.setRelease('MPL-4')\nf = 'npergood(emline_gflux_ha_6564 > 5) >= 20'\nq = Query(search_filter=f)\nr = q.run()\n\nr.results"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
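To follow up on the point above that spaxel queries can return many rows from only a few galaxies, the per-galaxy spaxel counts can be tallied from the plateifu list. This is a small post-processing sketch, assuming r is the Marvin Results object returned by q.run() as in the notebook:

```python
# Count how many returned spaxels belong to each galaxy (plateifu),
# assuming `r` is the Marvin Results object from the query above.
from collections import Counter

plateifu = r.getListOf('plateifu')        # one entry per returned spaxel
spaxels_per_galaxy = Counter(plateifu)

print('unique galaxies:', len(spaxels_per_galaxy))
for pifu, nspax in spaxels_per_galaxy.most_common(5):
    print(pifu, nspax)
```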
kazzz24/deep-learning
|
tensorboard/.ipynb_checkpoints/Anna KaRNNa Name Scoped-checkpoint.ipynb
|
mit
|
[
"Anna KaRNNa\nIn this notebook, I'll build a character-wise RNN trained on Anna Karenina, one of my all-time favorite books. It'll be able to generate new text based on the text from the book.\nThis network is based off of Andrej Karpathy's post on RNNs and implementation in Torch. Also, some information here at r2rt and from Sherjil Ozair on GitHub. Below is the general architecture of the character-wise RNN.\n<img src=\"assets/charseq.jpeg\" width=\"500\">",
"import time\nfrom collections import namedtuple\n\nimport numpy as np\nimport tensorflow as tf",
"First we'll load the text file and convert it into integers for our network to use.",
"with open('anna.txt', 'r') as f:\n text=f.read()\nvocab = set(text)\nvocab_to_int = {c: i for i, c in enumerate(vocab)}\nint_to_vocab = dict(enumerate(vocab))\nchars = np.array([vocab_to_int[c] for c in text], dtype=np.int32)\n\ntext[:100]\n\nchars[:100]",
"Now I need to split up the data into batches, and into training and validation sets. I should be making a test set here, but I'm not going to worry about that. My test will be if the network can generate new text.\nHere I'll make both input and target arrays. The targets are the same as the inputs, except shifted one character over. I'll also drop the last bit of data so that I'll only have completely full batches.\nThe idea here is to make a 2D matrix where the number of rows is equal to the number of batches. Each row will be one long concatenated string from the character data. We'll split this data into a training set and validation set using the split_frac keyword. This will keep 90% of the batches in the training set, the other 10% in the validation set.",
"def split_data(chars, batch_size, num_steps, split_frac=0.9):\n \"\"\" \n Split character data into training and validation sets, inputs and targets for each set.\n \n Arguments\n ---------\n chars: character array\n batch_size: Size of examples in each of batch\n num_steps: Number of sequence steps to keep in the input and pass to the network\n split_frac: Fraction of batches to keep in the training set\n \n \n Returns train_x, train_y, val_x, val_y\n \"\"\"\n \n \n slice_size = batch_size * num_steps\n n_batches = int(len(chars) / slice_size)\n \n # Drop the last few characters to make only full batches\n x = chars[: n_batches*slice_size]\n y = chars[1: n_batches*slice_size + 1]\n \n # Split the data into batch_size slices, then stack them into a 2D matrix \n x = np.stack(np.split(x, batch_size))\n y = np.stack(np.split(y, batch_size))\n \n # Now x and y are arrays with dimensions batch_size x n_batches*num_steps\n \n # Split into training and validation sets, keep the virst split_frac batches for training\n split_idx = int(n_batches*split_frac)\n train_x, train_y= x[:, :split_idx*num_steps], y[:, :split_idx*num_steps]\n val_x, val_y = x[:, split_idx*num_steps:], y[:, split_idx*num_steps:]\n \n return train_x, train_y, val_x, val_y\n\ntrain_x, train_y, val_x, val_y = split_data(chars, 10, 200)\n\ntrain_x.shape\n\ntrain_x[:,:10]",
"I'll write another function to grab batches out of the arrays made by split data. Here each batch will be a sliding window on these arrays with size batch_size X num_steps. For example, if we want our network to train on a sequence of 100 characters, num_steps = 100. For the next batch, we'll shift this window the next sequence of num_steps characters. In this way we can feed batches to the network and the cell states will continue through on each batch.",
"def get_batch(arrs, num_steps):\n batch_size, slice_size = arrs[0].shape\n \n n_batches = int(slice_size/num_steps)\n for b in range(n_batches):\n yield [x[:, b*num_steps: (b+1)*num_steps] for x in arrs]\n\ndef build_rnn(num_classes, batch_size=50, num_steps=50, lstm_size=128, num_layers=2,\n learning_rate=0.001, grad_clip=5, sampling=False):\n \n if sampling == True:\n batch_size, num_steps = 1, 1\n\n tf.reset_default_graph()\n \n # Declare placeholders we'll feed into the graph\n with tf.name_scope('inputs'):\n inputs = tf.placeholder(tf.int32, [batch_size, num_steps], name='inputs')\n x_one_hot = tf.one_hot(inputs, num_classes, name='x_one_hot')\n \n with tf.name_scope('targets'):\n targets = tf.placeholder(tf.int32, [batch_size, num_steps], name='targets')\n y_one_hot = tf.one_hot(targets, num_classes, name='y_one_hot')\n y_reshaped = tf.reshape(y_one_hot, [-1, num_classes])\n \n keep_prob = tf.placeholder(tf.float32, name='keep_prob')\n \n # Build the RNN layers\n with tf.name_scope(\"RNN_layers\"):\n #lstm = tf.contrib.rnn.BasicLSTMCell(lstm_size)\n #drop = tf.contrib.rnn.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n #cell = tf.contrib.rnn.MultiRNNCell([drop] * num_layers)\n lstm = tf.nn.rnn_cell.BasicLSTMCell(lstm_size)\n drop = tf.nn.rnn_cell.DropoutWrapper(lstm, output_keep_prob=keep_prob)\n cell = tf.nn.rnn_cell.MultiRNNCell([drop] * num_layers)\n \n with tf.name_scope(\"RNN_init_state\"):\n initial_state = cell.zero_state(batch_size, tf.float32)\n\n # Run the data through the RNN layers\n with tf.name_scope(\"RNN_forward\"):\n rnn_inputs = [tf.squeeze(i, squeeze_dims=[1]) for i in tf.split(x_one_hot, num_steps, 1)]\n #outputs, state = tf.contrib.rnn.static_rnn(cell, rnn_inputs, initial_state=initial_state)\n outputs, state = tf.nn.rnn(cell, rnn_inputs, initial_state=initial_state)\n \n final_state = state\n \n # Reshape output so it's a bunch of rows, one row for each cell output\n with tf.name_scope('sequence_reshape'):\n #seq_output = tf.concat(outputs, axis=1,name='seq_output')\n seq_output = tf.concat(1,outputs,name='seq_output')\n output = tf.reshape(seq_output, [-1, lstm_size], name='graph_output')\n \n # Now connect the RNN putputs to a softmax layer and calculate the cost\n with tf.name_scope('logits'):\n softmax_w = tf.Variable(tf.truncated_normal((lstm_size, num_classes), stddev=0.1),\n name='softmax_w')\n softmax_b = tf.Variable(tf.zeros(num_classes), name='softmax_b')\n logits = tf.matmul(output, softmax_w) + softmax_b\n\n with tf.name_scope('predictions'):\n preds = tf.nn.softmax(logits, name='predictions')\n \n \n with tf.name_scope('cost'):\n loss = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y_reshaped, name='loss')\n cost = tf.reduce_mean(loss, name='cost')\n\n # Optimizer for training, using gradient clipping to control exploding gradients\n with tf.name_scope('train'):\n tvars = tf.trainable_variables()\n grads, _ = tf.clip_by_global_norm(tf.gradients(cost, tvars), grad_clip)\n train_op = tf.train.AdamOptimizer(learning_rate)\n optimizer = train_op.apply_gradients(zip(grads, tvars))\n \n # Export the nodes \n export_nodes = ['inputs', 'targets', 'initial_state', 'final_state',\n 'keep_prob', 'cost', 'preds', 'optimizer']\n Graph = namedtuple('Graph', export_nodes)\n local_dict = locals()\n graph = Graph(*[local_dict[each] for each in export_nodes])\n \n return graph",
"Hyperparameters\nHere I'm defining the hyperparameters for the network. The two you probably haven't seen before are lstm_size and num_layers. These set the number of hidden units in the LSTM layers and the number of LSTM layers, respectively. Of course, making these bigger will improve the network's performance but you'll have to watch out for overfitting. If your validation loss is much larger than the training loss, you're probably overfitting. Decrease the size of the network or decrease the dropout keep probability.",
"batch_size = 100\nnum_steps = 100\nlstm_size = 512\nnum_layers = 2\nlearning_rate = 0.001",
"Write out the graph for TensorBoard",
"model = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n file_writer = tf.summary.FileWriter('./logs/3', sess.graph)",
"Training\nTime for training which is is pretty straightforward. Here I pass in some data, and get an LSTM state back. Then I pass that state back in to the network so the next batch can continue the state from the previous batch. And every so often (set by save_every_n) I calculate the validation loss and save a checkpoint.",
"!mkdir -p checkpoints/anna\n\nepochs = 10\nsave_every_n = 200\ntrain_x, train_y, val_x, val_y = split_data(chars, batch_size, num_steps)\n\nmodel = build_rnn(len(vocab), \n batch_size=batch_size,\n num_steps=num_steps,\n learning_rate=learning_rate,\n lstm_size=lstm_size,\n num_layers=num_layers)\n\nsaver = tf.train.Saver(max_to_keep=100)\n\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n \n # Use the line below to load a checkpoint and resume training\n #saver.restore(sess, 'checkpoints/anna20.ckpt')\n \n n_batches = int(train_x.shape[1]/num_steps)\n iterations = n_batches * epochs\n for e in range(epochs):\n \n # Train network\n new_state = sess.run(model.initial_state)\n loss = 0\n for b, (x, y) in enumerate(get_batch([train_x, train_y], num_steps), 1):\n iteration = e*n_batches + b\n start = time.time()\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 0.5,\n model.initial_state: new_state}\n batch_loss, new_state, _ = sess.run([model.cost, model.final_state, model.optimizer], \n feed_dict=feed)\n loss += batch_loss\n end = time.time()\n print('Epoch {}/{} '.format(e+1, epochs),\n 'Iteration {}/{}'.format(iteration, iterations),\n 'Training loss: {:.4f}'.format(loss/b),\n '{:.4f} sec/batch'.format((end-start)))\n \n \n if (iteration%save_every_n == 0) or (iteration == iterations):\n # Check performance, notice dropout has been set to 1\n val_loss = []\n new_state = sess.run(model.initial_state)\n for x, y in get_batch([val_x, val_y], num_steps):\n feed = {model.inputs: x,\n model.targets: y,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n batch_loss, new_state = sess.run([model.cost, model.final_state], feed_dict=feed)\n val_loss.append(batch_loss)\n\n print('Validation loss:', np.mean(val_loss),\n 'Saving checkpoint!')\n saver.save(sess, \"checkpoints/anna/i{}_l{}_{:.3f}.ckpt\".format(iteration, lstm_size, np.mean(val_loss)))\n\ntf.train.get_checkpoint_state('checkpoints/anna')",
"Sampling\nNow that the network is trained, we'll can use it to generate new text. The idea is that we pass in a character, then the network will predict the next character. We can use the new one, to predict the next one. And we keep doing this to generate all new text. I also included some functionality to prime the network with some text by passing in a string and building up a state from that.\nThe network gives us predictions for each character. To reduce noise and make things a little less random, I'm going to only choose a new character from the top N most likely characters.",
"def pick_top_n(preds, vocab_size, top_n=5):\n p = np.squeeze(preds)\n p[np.argsort(p)[:-top_n]] = 0\n p = p / np.sum(p)\n c = np.random.choice(vocab_size, 1, p=p)[0]\n return c\n\ndef sample(checkpoint, n_samples, lstm_size, vocab_size, prime=\"The \"):\n prime = \"Far\"\n samples = [c for c in prime]\n model = build_rnn(vocab_size, lstm_size=lstm_size, sampling=True)\n saver = tf.train.Saver()\n with tf.Session() as sess:\n saver.restore(sess, checkpoint)\n new_state = sess.run(model.initial_state)\n for c in prime:\n x = np.zeros((1, 1))\n x[0,0] = vocab_to_int[c]\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n\n for i in range(n_samples):\n x[0,0] = c\n feed = {model.inputs: x,\n model.keep_prob: 1.,\n model.initial_state: new_state}\n preds, new_state = sess.run([model.preds, model.final_state], \n feed_dict=feed)\n\n c = pick_top_n(preds, len(vocab))\n samples.append(int_to_vocab[c])\n \n return ''.join(samples)\n\ncheckpoint = \"checkpoints/anna/i3560_l512_1.122.ckpt\"\nsamp = sample(checkpoint, 2000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i200_l512_2.432.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i600_l512_1.750.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)\n\ncheckpoint = \"checkpoints/anna/i1000_l512_1.484.ckpt\"\nsamp = sample(checkpoint, 1000, lstm_size, len(vocab), prime=\"Far\")\nprint(samp)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
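The split_data and get_batch logic described in the notebook above (targets shifted by one character, data reshaped into batch_size rows, then sliced into num_steps-wide windows) can be illustrated on a toy array. This is only a didactic sketch with made-up data, not the notebook's code:

```python
# Toy illustration of the input/target shift and num_steps windowing described above.
import numpy as np

chars = np.arange(23)                    # stand-in for the encoded character array
batch_size, num_steps = 2, 5
slice_size = batch_size * num_steps
n_slices = len(chars) // slice_size      # drop the ragged tail -> 2 full slices

x = chars[:n_slices * slice_size]
y = chars[1:n_slices * slice_size + 1]   # targets are the inputs shifted by one character

x = np.stack(np.split(x, batch_size))    # shape (batch_size, n_slices * num_steps)
y = np.stack(np.split(y, batch_size))

# Sliding windows of num_steps columns, as get_batch() yields them
for b in range(x.shape[1] // num_steps):
    print(x[:, b * num_steps:(b + 1) * num_steps])
    print(y[:, b * num_steps:(b + 1) * num_steps])
```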
aukintux/business_binomial_analysis
|
business_analysis.ipynb
|
mit
|
[
"Business Feasibility Overview\nThe purpose of this notebook is to analyze the feasibility of a business based on its intrinsic probabilities of loss/gain and return on investment in the cases of loss/gain. \nThis type of analysis refers to a very specific type of bussiness in which you have defined iterations. As far as we can think in a first approach there are 2 types of bussinessess:\n\nOne starts with a principal P, bussiness has a defined madurity time T, and at the end of such maturity time the capital becomes O, in which, O = P + G, where G corresponds to the gain which can be positive or negative, each possible value of the range of G has a certain specific probability.\nOne starts with a principal P, which is composed of a \"sunken capital\" S and a \"working capital\" W bussiness should in principle go on forever, however if bussiness does not adapt correctly to market conditions it will have an expiration date, which usually occurs, be it 100 years or 10 years, there is also a probability of initial kickstart success or failure Pk, this type of bussiness gives periodically a profit or loss G in periods of time T which are smaller than the expiration date, which is uncertain. The sunken part of the principal S devaluates (due to devaluation of assets) or valuates in time (due to brand awareness). With regard to the expiration date it is uncertain but one could assume a range in which it could take values with increasing probability of expiration as the time increases, asymptotically reaching 1 (this is the assumption that no bussiness lives forever, think universe imploding).\n\nThe questions to solve in this Notebook refer to the first type of bussiness.\n Questions to solve: \nGiven the parameters of the business, namely: \n\nThe return on investment when a gain event occurs ROI_G.\nThe return on investment when a loss event occurs ROI_L.\nThe probability that a gain event occurs P_G.\n\nWhere we have made simplifying assumptions given that the ROI_G, ROI_L are continuous variable P_G(ROI_G) is actually a single continuous real function. Also, we have made the simplifying assumption that the madurity time T is always the same. Which is also not absolutely true.\n\n\nStarting with a principal P, after N iterations, what is the probability to see that capital become O for each possible O that is allowed by the binomial process.\n\n\nOn would also like to see how the capital P evolves through the Bernoulli process. However since at iteration N regardless of the specific Bernoulli process what matters is where this process falls in the Binomial distribution. Each Bernoulli process has equal probability of ocurring as another which has the same amount of YES/NO Bernoulli trials in it. A graph of different timelines for each possible Bernoulli trial would be inadequate at best. Instead it would be interesting to see how the probability spreads out over the possible range of values of the Binomial process once the number of iterations increases. One would require a color plot. (Something similar to a Choropleth). This would be the time evolution of the projection to the x axis of the figure obtained in question 1.\n\n\nObtain a single parameter that indicates whether a business is feasible in this sense or not. The definition of feasibility to use is to have X percent of the mass of the pmf above a certain ROI after n iterations. e.g. having 80% of the mass of the pmf above a factor of 2 or 200% ROI (profit) after 10 iterations. i.e. 
to have a 80% probability of earning a 200% profit after 10 iterations. According to this criteria one would determine if a business is feasible or not. To define it after n=1 iterations would just result in the original parameters. This is a special case in which the answer of the questions is simplified and does not require numerical computations.\n\n\nGet probability of seeing a capital decline of X percent over the next n iterations. It does not matter the nominal value of capital you start at. Produce a plot where each curve represents the decline probability vs iterations for each cutoff percentage.\n\n\nBased on the results of question 4 obtain the probability of bankruptcy in n iterations. The probability of bankruptcy should be defined as seeing the capital decline over X percent i.e. it would be the probability attained by performing a sum over all curves that see a capital decline bigger than the cutoff value.\n\n\nImport Modules",
"# Numpy\nimport numpy as np\n# Scipy\nfrom scipy import stats\nfrom scipy import linspace\n# Plotly\nfrom plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot\nimport plotly.graph_objs as go\ninit_notebook_mode(connected=True) # Offline plotting",
"Define Common Parameters",
"# Probabilities\nP_G = 0.8\n# Return on investment rates\nROI_G = 1\nROI_L = -0.2\n# Principal (initial capital)\nP = 1",
"Question 1.\nStarting with a principal P, after N iterations, what is the probability to see that capital become O for each possible O that is allowed by the binomial process.\nDefine the functions that will evolve the principal capital P a Binomial process.",
"# Takes the principal P and performs the evolution of the capital using \n# the result x of the random binomial variable after n trials\ndef evolve_with_binomial(P, x, n):\n return P * ((1 + ROI_G) ** x) * ((1 + ROI_L) ** (n - x))",
"Run the simulation using the Binomial process which is equivalent to performing a very large (~1000's) Bernoulli processes and grouping their results. Since the order in which 1's and 0's occur in the sequence does not affect the final result.",
"# Number of iterations\nyears = 5\niterations_per_year = 2\nn = iterations_per_year * (years)\n\n# Sorted array of unique values ocurring in instance of Binomial process \nx_binomial = linspace(0,n,n+1)\n\n# Arrays of data to plot\ndata_dict = { 'x': [], 'y': []}\ndata_dict['x'] = [evolve_with_binomial(P, x, max(x_binomial)) for x in x_binomial]\ndata_dict['y'] = stats.binom.pmf(x_binomial,max(x_binomial),P_G)\n\n# Plot data variable. It contains the trace objects\nfig_data = [\n go.Bar( \n x=data_dict['x'], \n y=data_dict['y'], \n name=\"Probabilities\" \n ),\n go.Scatter( \n x=data_dict['x'], \n y=data_dict['y'], \n mode='lines+markers', \n name=\"Fitting\",\n line=dict(\n shape='spline'\n )\n )\n ]\n\n# Set layout for figure\nlayout = go.Layout(\n title='Binomial Distribution of Capital at N Iterations',\n font=dict(\n family='Arial, sans-serif;',\n size=12,\n color='#000'\n ),\n xaxis = dict(title='Capital Multiplier'),\n yaxis = dict(title='Event Probability'),\n orientation=0,\n autosize=True,\n annotations=[\n dict(\n x=max(data_dict['x'])/2,\n y=max(data_dict['y']),\n text='N: {0} | P_G: {1}'.format(n, P_G),\n showarrow=False\n )\n ]\n)\n\n# Plot figure\n#iplot({\"data\": fig_data, \"layout\": layout})",
"Question 2.\nPlot the time evolution of the principal P through the Binomial process. Where a more intense color means a higher probability and a less intense color means a lower probability.",
"# Number of iterations\nyears = 5\niterations_per_year = 2\nn = iterations_per_year * (years)\n\n# Arrays of data to plot\ndata_dict = { 'values': [], 'probs': np.array([]), 'iterations': [], 'mean': [], 'most_prob': [], 'uniq_iterations': []}\n\n\n# For each iteration less than the maximun number of iterations\ni = 1\nwhile i <= n:\n x_i = linspace(0,i,i+1) # Possible values of success event in \"i\" trials\n values = [evolve_with_binomial(P, x, max(x_i)) for x in x_i] # Capital evolution according to Binomial process\n probs = stats.binom.pmf(x_i,max(x_i),P_G) # Probabilities of Binomial process\n # Set values in dictionary\n data_dict['values'] = data_dict['values'] + values\n data_dict['mean'].append(np.mean(values))\n data_dict['most_prob'].append(values[np.argmax(probs)])\n data_dict['uniq_iterations'].append(i)\n data_dict['probs'] = np.concatenate((data_dict['probs'], probs), axis=0)\n data_dict['iterations'] = data_dict['iterations'] + [i]*len(x_i)\n i += 1\n\n# Plot data variable. It contains the trace objects\nfig_data = [\n go.Scatter( \n x=data_dict['iterations'], \n y=data_dict['values'], \n mode='markers',\n name=\"Evolution\",\n marker=dict(\n cmin = 0,\n cmax = 1,\n color = data_dict['probs'],\n size = 16\n )\n ),\n go.Scatter( \n x=data_dict['uniq_iterations'], \n y=data_dict['mean'], \n mode='lines+markers', \n name=\"Mean\",\n line=dict(\n shape='spline'\n )\n ),\n go.Scatter( \n x=data_dict['uniq_iterations'], \n y=data_dict['most_prob'], \n mode='lines+markers', \n name=\"Most Probable\",\n line=dict(\n shape='spline'\n )\n )\n ]\n\n# Set layout for figure\nlayout = go.Layout(\n title='Evolution of Capital Through Binomial Process',\n font=dict(\n family='Arial, sans-serif;',\n size=12,\n color='#000'\n ),\n xaxis = dict(title='Iteration Number'),\n yaxis = dict(title='Capital Multiplier'),\n orientation=0,\n autosize=True,\n annotations=[\n dict(\n x=n/2,\n y=max(data_dict['values']),\n text='P_G: {0}'.format(P_G),\n showarrow=False\n )\n ]\n)\n\n# Plot figure\n#iplot({\"data\": fig_data, \"layout\": layout})",
"The previous plot shows the evolution of the capital throughout the Binomial process, alongside we show the mean and the most probable value of the possible outcomes. As one increases the number of iterations the mean surpassess the most probable value for good while maintaining a very close gap.\nQuestion 4.\nWe want to see how likely it is to have a capital decline of \"X\" percent over the next \"n\" iterations.\nThe plot we want is obtained by selecting a subset of the evolution curve. The subset of the values correspond to those where the multiplying factors are less than 1. After such values are selected one applies the transformation:\n$$ y = 1-x$$\nIn this new scale the y value represents the capital decline.",
"# Calculate the possible capital declines and their respective probabilities \ndata_dict[\"decline_values\"] = []\ndata_dict[\"decline_probs\"] = []\ndata_dict[\"decline_iterations\"] = []\nfor index, val in enumerate(data_dict[\"values\"]):\n if val < 1:\n data_dict[\"decline_values\"].append((1-val)*100)\n data_dict[\"decline_probs\"].append(100*data_dict[\"probs\"][index])\n data_dict[\"decline_iterations\"].append(data_dict[\"iterations\"][index])\n \n# Plot data variable. It contains the trace objects\nfig_data = [\n go.Scatter( \n x=data_dict['decline_iterations'], \n y=data_dict['decline_values'], \n mode='markers',\n name=\"Evolution\",\n marker=dict(\n cmin = 0,\n cmax = 1,\n color = data_dict['decline_probs']\n )\n )\n ]\n\nfig_data[0].text = [\"Probability: {0:.2f}%\".format(prob) for prob in data_dict[\"decline_probs\"]]\n\n# Set layout for figure\nlayout = go.Layout(\n title='Possible Capital Decline Through Binomial Process',\n font=dict(\n family='Arial, sans-serif;',\n size=12,\n color='#000'\n ),\n xaxis = dict(title='Iteration Number'),\n yaxis = dict(title='Percentage Decline [%]'),\n orientation=0,\n autosize=True,\n annotations=[\n dict(\n x=max(data_dict[\"decline_iterations\"])/2,\n y=max(data_dict['decline_values']),\n text='P_G: {0}'.format(P_G),\n showarrow=False\n )\n ]\n)\n\n# Plot figure\n#iplot({\"data\": fig_data, \"layout\": layout})",
"Question 5.\n Obtain the probability of bankrupcty after N iterations, bankruptcy is defined for the purposes of this notebook as the event in which the principal perceives a capital decline bigger than or equal to X percent",
"# Capital percentage decline of bankruptcy\nCP_br = 20\n\n# Variable to store the plot data\ndata_dict[\"bankruptcy_probs\"] = []\ndata_dict[\"bankruptcy_iterations\"] = []\n\n# Calculate for each iteration the probability of bankruptcy\niter_counter = 0\nfor i, iteration in enumerate(data_dict[\"decline_iterations\"]):\n if data_dict[\"decline_values\"][i] >= CP_br:\n if iteration > iter_counter:\n data_dict[\"bankruptcy_probs\"].append(data_dict[\"decline_probs\"][i])\n data_dict[\"bankruptcy_iterations\"].append(iteration)\n else:\n data_dict[\"bankruptcy_probs\"][-1] = data_dict[\"bankruptcy_probs\"][-1] + data_dict[\"decline_probs\"][i]\n iter_counter = iteration\n\n# Plot data variable. It contains the trace objects\nfig_data = [\n go.Scatter( \n x=data_dict['bankruptcy_iterations'], \n y=data_dict['bankruptcy_probs'], \n mode='lines+markers', \n name=\"Mean\",\n line=dict(\n shape='spline'\n )\n )\n ]\n\n# Set layout for figure\nlayout = go.Layout(\n title='Probability of Bankruptcy Through Binomial Process',\n font=dict(\n family='Arial, sans-serif;',\n size=12,\n color='#000'\n ),\n xaxis = dict(title='Iteration Number'),\n yaxis = dict(title='Event Probability [%]'),\n orientation=0,\n autosize=True,\n annotations=[\n dict(\n x=max(data_dict['bankruptcy_iterations'])/2,\n y=max(data_dict['bankruptcy_probs']),\n text='P_G: {0} | CP_br: {1}%'.format(P_G, CP_br),\n showarrow=False\n )\n ]\n)\n\n# Plot figure\n#iplot({\"data\": fig_data, \"layout\": layout})",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
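Question 3 (a single feasibility parameter) is stated in the notebook above but not computed in its cells. A minimal sketch of one way to compute it, assuming the same P_G, ROI_G, ROI_L values and the same capital-evolution rule as evolve_with_binomial, is:

```python
# Sketch for Question 3: fraction of the binomial pmf mass whose capital
# multiplier is at or above a target ROI after n iterations, using the same
# parameters (P_G, ROI_G, ROI_L) and evolution rule as in the notebook above.
import numpy as np
from scipy import stats

P_G, ROI_G, ROI_L = 0.8, 1.0, -0.2
n = 10
target_multiplier = 2.0                 # i.e. a 200% ROI (profit)

x = np.arange(n + 1)                    # number of gain events in n iterations
multipliers = (1 + ROI_G) ** x * (1 + ROI_L) ** (n - x)
pmf = stats.binom.pmf(x, n, P_G)

mass_above = pmf[multipliers >= target_multiplier].sum()
print('P(multiplier >= {:.1f} after {} iterations) = {:.3f}'.format(
    target_multiplier, n, mass_above))
```

If this mass exceeds the chosen threshold (e.g. 0.8), the business is feasible under the criterion described in the notebook.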
sdss/marvin
|
docs/sphinx/tutorials/notebooks/marvin_queries.ipynb
|
bsd-3-clause
|
[
"import warnings\nwarnings.simplefilter('ignore')",
"Marvin Queries\nThis tutorial goes through a few basics of how to perform queries on the MaNGA dataset using the Marvin Query tool. Please see the Marvin Query page for more details on how to use Queries. This tutorial covers the basics of:\n\nquerying on metadata information from the NSA catalog\nhow to combine multiple filter and return additional parameters\nhow to perform radial cone searches with Marvin\nquerying on information from the MaNGA DAPall summary file\nquerying using quality and target flags \n\nFirst let's import some basics",
"# we should be using DR15 MaNGA data\nfrom marvin import config\nconfig.release\n\n# import the Query tool\nfrom marvin.tools.query import Query",
"Query Basics\nQuerying on Metadata\nLet's go through some Query basics of how to do a query on metadata. The two main keyword arguments to Query are search_filter and return_params. search_filter is a string representing the SQL where condition you'd like to filter on. This tutorial assumes a basic familiarity with the SQL boolean syntax needed to construct Marvin Queries. Please see the tutorial on SQL Boolean syntax to learn more. return_params is a list of parameters you want to return in the query in addition to those used in the SQL filter condition. \nLet's search for all galaxies with a redshift less than 0.1. To specify our search parameter, redshift, we must know the database table and name of the parameter. In this case, MaNGA uses the NASA-Sloan Atlas (NSA) for redshift information. In the NSA catalog, the redshift is the z parameter of the nsa table, so our search parameter will be nsa.z. Generically, all search parameters will take the form table.parameter.",
"# filter for galaxies with a redshift < 0.1\nmy_filter = 'nsa.z < 0.1'\n\n# construct the query\nq = Query(search_filter=my_filter)\nq",
"The Query tool works with a local or remote manga database. Without a database, the Query tool submits your inputs to the remote server using the API, rather than doing anything locally. We run the query with the run method. Your inputs are sent to the server where the query is built dynamically and run. The results of the query are returned via the API, parsed and converted into a Marvin Results object.",
"# run the query\nr = q.run()\n\n# print some results information\nprint(r)\nprint('number of results:', r.totalcount)",
"After constructing queries, we can run them with q.run(). This returns a Marvin Results object. Let's take a look. This query returned 4275 objects. For queries with large results, the results are automatically paginated in sets of 100 objects. Default parameters returned in queries always include the mangaid and plateifu. Marvin Queries will also return any parameters used in the definition of your filter condition. Since we filtered on redshift, the redshift is automatically included.",
"# look at the current page of results (subset of 10)\nprint('number in current set:', len(r.results))\nprint(r.results[0:10])",
"Finding Available Parameters\nWe can use the Query datamodel to look up all the available parameters one can use in the search_filter or in the return_params keyword arguments to Query. The Query datamodel contains many parameters, which we bundle into groups for easier navigation. We currently offer four groups of parameters, with remaining parametrs captured in an Other group. See the Query Datamodel in the docs for more detailed info.",
"# look up the available query datamodel groups\nq.datamodel.groups",
"Each group contains a list of QueryParameters. A QueryParameter contains the designation and syntax used for specifying parameters in your query. The full attribute indicates the table.parameter name needed as input into the search_filter or return_params keyword arguments to the Marvin Query. The redshift is a parameter from the NSA catalog. There are 158 parameters in the NSA Catalog group.",
"# select and print the NSA parameter group\nnsa = q.datamodel.groups['nsa']\nnsa.parameters",
"Multiple Search Criteria and Returning Additional Parameters\nWe can easily combine query filter conditions by constructing a boolean string using AND. Let's search for galaxies with a redshift < 0.1 and log M$_\\star$ < 10. The NSA catalog contains the Sersic profile determination for stellar mass, which is the sersic_mass or sersic_logmass parameter of the nsa table, so our search parameter will be nsa.sersic_logmass. \nLet's also return the object RA and Dec as well using the return_params keyword. This accepts a list of string parameters. Object RA and Dec are included in the cube table so the parameter names are cube.ra and cube.dec.",
"my_filter = 'nsa.z < 0.1 and nsa.sersic_logmass < 10'\nq = Query(search_filter=my_filter, return_params=['cube.ra', 'cube.dec'])\nr = q.run()\nprint(r)\nprint('Number of objects:', r.totalcount)",
"This query return 1932 objects and now includes the RA, Dec, redshift and log Sersic stellar mass parameters.",
"# print the first 10 rows\nr.results[0:10]",
"Radial Queries in Marvin\nCone searches can be performed with Marvin Queries using a special functional syntax in your SQL string. Cone searches can be performed using the special radial string function. The syntax for a cone search query is radial(RA, Dec, radius). Let's search for all galaxies within 0.5 degrees centered on RA, Dec = 232.5447, 48.6902. The RA and Dec must be in decimal degrees and the radius is in units of degrees.",
"# build the radial filter condition\nmy_filter = 'radial(232.5447, 48.6902, 0.5)'\nq = Query(search_filter=my_filter)\nr = q.run()\nprint(r)\nprint(r.results)",
"Queries using DAPall parameters.\nMaNGA provides derived analysis properties in its dapall summary file. Marvin allows for queries on any of the parameters in the file. The table name for these parameters is dapall. Let's find all galaxies that have a total measure star-formation rate > 5 M$_\\odot$/year. The total SFR parameter in the DAPall table is sfr_tot.",
"my_filter = 'dapall.sfr_tot > 5'\nq = Query(search_filter=my_filter)\nr = q.run()\nprint(r)\nprint(r.results)",
"The query returns 6 results, but looking at the plateifu, we see there are only 3 unique targets. This is because the DAPall file provides measurements for multiple bintypes and by default will return entries for all bintypes. We can select those out using the bintype.name parameter. Let's filter on only the HYB10 bintype.",
"my_filter = 'dapall.sfr_tot > 5 and bintype.name==HYB10'\nq = Query(search_filter=my_filter)\nr = q.run()\nprint(r)\nprint(r.results)",
"Query on Quality and Target Flags\nMarvin includes the ability to perform queries using quality or target flag information. These work using the special quality and targets keyword arguments. These keywords accept a list of flag maskbit labels provided by the Maskbit Datamodel. These keywords are inclusive, meaning they will only filter on objects satisfying those labels. \nSearching by Target Flags\nLet's find all galaxies that are in the MaNGA MAIN target selection sample. Targets in the MAIN sample are a part of the PRIMARY, SECONDARY and COLOR-ENHANCED samples. These are the primary, secondary, and color-enhanced flag labels. The targets keywords accepts all labels from the MANGA_TARGET1, MANGA_TARGET2, or MANGA_TARGET3 maskbit schema.",
"# create the targets list of labels\ntargets = ['primary', 'secondary', 'color-enhanced']\nq = Query(targets=targets)\nr = q.run()\nprint(r)\nprint('There are {0} galaxies in the main sample'.format(r.totalcount))\nprint(r.results[0:5])",
"The targets keyword is equivalent to the cube.manga_targetX search parameter, where X is 1, 2, or 3. The bits for the primary, secondary, and color-enhanced samples are 10, 11, and 12, respectively. These combine into the value 7168. The above query is equivalent to the filter condition cube.manga_target1 & 7168",
"value = 1<<10 | 1<<11 | 1<<12\nmy_filter = 'cube.manga_target1 & {0}'.format(value)\nq = Query(search_filter=my_filter)\nr = q.run()\nprint(r)",
"Let's search only for galaxies that are Milky Way Analogs or Dwarfs ancillary targets.",
"targets = ['mwa', 'dwarf']\nq = Query(targets=targets)\nr = q.run()\nprint(r)\nprint('There are {0} galaxies from the Milky Way Analogs and Dwarfs ancillary target catalogs'.format(r.totalcount))\nprint(r.results)",
"Searching by Quality Flags\nThe quality accepts all labels from the MANGA_DRPQUAL and MANGA_DAPQUAL maskbit schema. Let's find all galaxies that suffered from bad flux calibration. This is the flag BADFLUX (bit 8) from the MANGA_DRPQUAL maskbit schema.",
"quality = ['BADFLUX']\nq = Query(quality=quality)\nr = q.run()\nprint(r)\nprint('There are {0} galaxies with bad flux calibration'.format(r.totalcount))\nprint(r.results[0:10])",
"The quality keyword is equivalent to the search parameters cube.quality for DRP flags or the file.quality for DAP flags. The above query is equivalent to cube.quality & 256. You can also perform a NOT bitmask selection using the ~ symbol. To perform a NOT selection we can only use the cube.quality parameter. Let's select all galaxies that do not have bad flux calibration.",
"# the above query as a filter condition\nq = Query(search_filter='cube.quality & 256')\nr = q.run()\nprint('Objects with bad flux calibration:', r.totalcount)\n\n# objects with bad quality other than bad flux calibration\nq = Query(search_filter='cube.quality & ~256')\nr = q.run()\nprint('Bad objects with no bad flux calibration:', r.totalcount)",
"To find exactly objects with good quality and no bad flags set, use cube.quality == 0.",
"q = Query(search_filter='cube.quality == 0')\nr = q.run()\nprint(r)\nprint('Objects with good quality:', r.totalcount)",
"Useful Resources\nCheck out these pages on the Marvin Docs site for more information querying with Marvin.\n\nQuery\nQuery Datamodel\nResults\nSQL Boolean Syntax Tutorial"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
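The bitmask values quoted in the flag-query discussion above (7168 from the main-sample bits 10, 11, and 12, and 256 from BADFLUX bit 8) come from simple bit arithmetic, which the standalone sketch below reproduces; the example manga_target1 value at the end is made up for illustration.

```python
# Maskbit arithmetic used in the flag queries above.
PRIMARY, SECONDARY, COLOR_ENHANCED = 10, 11, 12   # MANGA_TARGET1 bits
BADFLUX = 8                                       # MANGA_DRPQUAL bit

main_sample_value = (1 << PRIMARY) | (1 << SECONDARY) | (1 << COLOR_ENHANCED)
print(main_sample_value)          # 7168, as used in 'cube.manga_target1 & 7168'

badflux_value = 1 << BADFLUX
print(badflux_value)              # 256, as used in 'cube.quality & 256'

# Checking a flag value the same way the SQL filter does
manga_target1 = 1024              # hypothetical example: only the PRIMARY bit set
print(bool(manga_target1 & main_sample_value))   # True -> in the main sample
```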
ChristinaB/Observatory
|
tutorials/Observatory_usecase7_xmapLandlab.ipynb
|
mit
|
[
"Retrieve NetCDF and model gridded climate time-series for a watershed\nCase study: the Sauk-Suiattle Watershed\n<img src=\"http://www.sauk-suiattle.com/images/Elliott.jpg\" \nstyle=\"float:right;width:150px;padding:20px\">\nUse this Jupyter Notebook to:\n1. HydroShare setup and preparation\n2. Re-establish the paths to the mapping file\n3. Compute daily, monthly, and annual temperature and precipitation statistics\n4. Visualize precipitation results relative to the forcing data\n5. Visualize the time-series trends\n6. Save results back into HydroShare\n\n<br/><br/><br/>\n<img src=\"https://www.washington.edu/brand/files/2014/09/W-Logo_Purple_Hex.png\"\nstyle=\"float:right;width:150px;padding:20px\">\n<br/><br/>\nThis data is compiled to digitally observe the watersheds, powered by HydroShare. <br/>Provided by the Watershed Dynamics Group, Dept. of Civil and Environmental Engineering, University of Washington\n1. Prepare HydroShare Setup and Preparation\nTo run this notebook, we must import several libaries. These are listed in order of 1) Python standard libraries, 2) hs_utils library provides functions for interacting with HydroShare, including resource querying, dowloading and creation, and 3) the observatory_gridded_hydromet library that is downloaded with this notebook.",
"#!conda install -c conda-forge ogh libgdal gdal pygraphviz ncurses matplotlib=2.2.3 --yes\n\n# silencing warning\nimport warnings\nwarnings.filterwarnings(\"ignore\")\n\n# data processing\nimport os\nimport pandas as pd, numpy as np, dask\n\n# data migration library\nimport ogh\nimport ogh_xarray_landlab as oxl\nfrom utilities import hydroshare\nfrom ecohydrology_model_functions import run_ecohydrology_model, plot_results\nfrom landlab import imshow_grid, CLOSED_BOUNDARY\n\n# modeling input params\nInputFile = os.path.join(os.getcwd(),'ecohyd_inputs.yaml')\n\n# plotting and shape libraries\nimport matplotlib.pyplot as plt\n%matplotlib inline\n\nInputFile\n\n# initialize ogh_meta\nmeta_file = dict(ogh.ogh_meta())\nsorted(meta_file.keys())",
"Establish a secure connection with HydroShare by instantiating the hydroshare class that is defined within hs_utils. In addition to connecting with HydroShare, this command also sets and prints environment variables for several parameters that will be useful for saving work back to HydroShare.",
"notebookdir = os.getcwd()\n\nhs=hydroshare.hydroshare()\nhomedir = hs.getContentPath(os.environ[\"HS_RES_ID\"])\nos.chdir(homedir)",
"If you are curious about where the data is being downloaded, click on the Jupyter Notebook dashboard icon to return to the File System view. The homedir directory location printed above is where you can find the data and contents you will download to a HydroShare JupyterHub server. At the end of this work session, you can migrate this data to the HydroShare iRods server as a Generic Resource. \n2. Get list of gridded climate points for the watershed\nFor visualization purposes, we will also remap the study site shapefile, which is stored in HydroShare at the following url: https://www.hydroshare.org/resource/c532e0578e974201a0bc40a37ef2d284/. Since the shapefile was previously migrated, we can select 'N' for no overwriting.\nIn the usecase1 notebook, the treatgeoself function identified the gridded cell centroid coordinates that overlap with our study site. These coordinates were documented within the mapping file, which will be remapped here. In the usecase2 notebook, the downloaded files were cataloged within the mapping file, so we will use the mappingfileSummary function to characterize the files available for Sauk-Suiattle for each gridded data product.",
"\"\"\"\n1/16-degree Gridded cell centroids\n\"\"\"\n# List of available data\nhs.getResourceFromHydroShare('ef2d82bf960144b4bfb1bae6242bcc7f')\nNAmer = hs.content['NAmer_dem_list.shp']\n\n\n\"\"\"\nSauk\n\"\"\"\n# Watershed extent\nhs.getResourceFromHydroShare('c532e0578e974201a0bc40a37ef2d284')\nsauk = hs.content['wbdhub12_17110006_WGS84_Basin.shp']\n\n# reproject the shapefile into WGS84\nogh.reprojShapefile(sourcepath=sauk)",
"Summarize the file availability from each watershed mapping file",
"%%time\n\n# map the mappingfiles from usecase1\nmappingfile1=ogh.treatgeoself(shapefile=sauk, NAmer=NAmer, buffer_distance=0.06,\n mappingfile=os.path.join(homedir,'Sauk_mappingfile.csv'))",
"3. Compare Hydrometeorology\nThis section performs computations and generates plots of the Livneh 2013 and Salathe 2014 mean temperature and mean total monthly precipitation in order to compare them with each other. The generated plots are automatically downloaded and saved as .png files within the \"homedir\" directory.\nLet's compare the Livneh 2013 and Salathe 2014 using the period of overlapping history.",
"help(ogh.getDailyMET_livneh2013)\n\nhelp(oxl.get_x_dailymet_Livneh2013_raw)",
"NetCDF retrieval and clipping to a spatial extent\nThe function get_x_dailywrf_salathe2014 retrieves and clips NetCDF files archived within the UW Rocinante NNRP repository. This archive contains daily data from January 1970 through December 1979 (10 years). Each netcdf file is comprised of meteorologic and VIC hydrologic outputs for a calendar month. The expected number of files would be 360 files (12 months for 30 years). \nIn the code chunk below, 40 parallel workers will be initialized to distribute file retrieval and spatial clipping tasks. For each worker, they will wget the requested file, clip the netcdf file to gridded cell centroids within the the provided bounding box, then return the location of the spatially clipped output files.\nProvide the home and subdirectory where the cropped NetCDF files will be stored. Also provide the spatial bounds (in WGS84) to crop the NetCDF files upon download. Finally, provide the number of workers to carry out the download tasks, and the start and end date of the files of interest.",
"maptable, nstations = ogh.mappingfileToDF(mappingfile1)\nspatialbounds = {'minx':maptable.LONG_.min(), 'maxx':maptable.LONG_.max(),\n 'miny':maptable.LAT.min(), 'maxy':maptable.LAT.max()}\n\noutputfiles = oxl.get_x_dailymet_Livneh2013_raw(homedir=homedir,\n subdir='livneh2013/Daily_MET_1970_1970/raw_netcdf',\n spatialbounds=spatialbounds,\n nworkers=6,\n start_date='1970-01-01', end_date='1970-12-31')",
"Convert collection of NetCDF files into a collection of ASCII files\nProvide the home and subdirectory where the ASCII files will be stored, the source_directory of netCDF files, and the mapping file to which the resulting ASCII files will be cataloged. Also, provide the Pandas Datetime code for the frequency of the time steps. Finally, provide the catalog label that will be used for the mapping file catalog and the metadata file label.",
"%%time\n# convert the netCDF files into daily ascii time-series files for each gridded location\noutfilelist = oxl.netcdf_to_ascii(homedir=homedir, \n subdir='livneh2013/Daily_MET_1970_1970/raw_ascii', \n source_directory=os.path.join(homedir, 'livneh2013/Daily_MET_1970_1970/raw_netcdf'),\n mappingfile=mappingfile1,\n temporal_resolution='D',\n meta_file=meta_file,\n catalog_label='sp_dailymet_livneh_1970_1970')\n\nt1 = ogh.mappingfileSummary(listofmappingfiles = [mappingfile1], \n listofwatershednames = ['Sauk-Suiattle river'],\n meta_file=meta_file)\n\nt1\n\n# Save the metadata\nogh.saveDictOfDf(dictionaryObject=meta_file, outfilepath='test.json')",
"Create a dictionary of climate variables for the long-term mean (ltm).\nINPUT: gridded meteorology ASCII files located from the Sauk-Suiattle Mapping file. The inputs to gridclim_dict() include the folder location and name of the hydrometeorology data, the file start and end, the analysis start and end, and the elevation band to be included in the analsyis (max and min elevation). <br/>OUTPUT: dictionary of dataframes where rows are temporal summaries and columns are spatial summaries",
"meta_file['sp_dailymet_livneh_1970_1970']['variable_list']\n\n%%time\n\nltm = ogh.gridclim_dict(mappingfile=mappingfile1,\n metadata=meta_file,\n dataset='sp_dailymet_livneh_1970_1970',\n variable_list=['Prec','Tmax','Tmin'])\n\nsorted(ltm.keys())",
"Compute the total monthly and yearly precipitation, as well as the mean values across time and across stations\nINPUT: daily precipitation for each station from the long-term mean dictionary (ltm) <br/>OUTPUT: Append the computed dataframes and values into the ltm dictionary",
"# extract metadata\ndr = meta_file['sp_dailymet_livneh_1970_1970']\n\n# compute sums and mean monthly an yearly sums\nltm = ogh.aggregate_space_time_sum(df_dict=ltm,\n suffix='Prec_sp_dailymet_livneh_1970_1970',\n start_date=dr['start_date'],\n end_date=dr['end_date'])\n\n# print the name of the analytical dataframes and values within ltm\nsorted(ltm.keys())\n\n# initialize list of outputs\nfiles=[]\n\n# create the destination path for the dictionary of dataframes\nltm_sauk=os.path.join(homedir, 'ltm_1970_1970_sauk.json')\nogh.saveDictOfDf(dictionaryObject=ltm, outfilepath=ltm_sauk)\nfiles.append(ltm_sauk)\n\n# append the mapping file for Sauk-Suiattle gridded cell centroids\nfiles.append(mappingfile1)",
"Visualize the \"average monthly total precipitations\"\nINPUT: dataframe with each month as a row and each station as a column. <br/>OUTPUT: A png file that represents the distribution across stations (in Wateryear order)",
"# # two lowest elevation locations\nlowE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=164)\n\n# one highest elevation location\nhighE_ref = ogh.findCentroidCode(mappingfile=mappingfile1, colvar='ELEV', colvalue=2216)\n\n# combine references together\nreference_lines = highE_ref + lowE_ref\nreference_lines\n\n\nogh.renderValueInBoxplot(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],\n outfilepath='totalMonthlyRainfall.png', \n plottitle='Total monthly rainfall',\n time_steps='month',\n wateryear=True,\n reference_lines=reference_lines,\n ref_legend=True,\n value_name='Total daily precipitation (mm)',\n cmap='seismic_r',\n figsize=(6,6))\n\nogh.renderValuesInPoints(ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'], \n vardf_dateindex=12, \n shapefile=sauk, \n cmap='seismic_r',\n outfilepath='test.png', \n plottitle='December total rainfall',\n colorbar_label='Total monthly rainfall (mm)', \n figsize=(1.5,1.5))\n\nminx2, miny2, maxx2, maxy2 = oxl.calculateUTMbounds(mappingfile=mappingfile1,\n mappingfile_crs={'init':'epsg:4326'},\n spatial_resolution=0.06250)\n\nprint(minx2, miny2, maxx2, maxy2)",
"generate a raster",
"help(oxl.rasterDimensions)\n\n# generate a raster\nraster, row_list, col_list = oxl.rasterDimensions(minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)\nraster.shape",
"Higher resolution children of gridded cells\nget data from Lower resolution parent grid cells to the children",
"help(oxl.mappingfileToRaster)\n\n%%time\n\n# landlab raster node crossmap to gridded cell id\nnodeXmap, raster, m = oxl.mappingfileToRaster(mappingfile=mappingfile1, spatial_resolution=0.06250, \n minx=minx2, miny=miny2, maxx=maxx2, maxy=maxy2, dx=1000, dy=1000)\n\n# print the raster dimensions\nraster.shape\n\n%%time\nnodeXmap.plot(column='ELEV', figsize=(10,10), legend=True)\n\n# generate vector array of December monthly precipitation\nprec_vector = ogh.rasterVector(vardf=ltm['meanbymonthsum_Prec_sp_dailymet_livneh_1970_1970'],\n vardf_dateindex=12,\n crossmap=nodeXmap,\n nodata=-9999)\n\n# close-off areas without data\nraster.status_at_node[prec_vector==-9999] = CLOSED_BOUNDARY\n\nfig =plt.figure(figsize=(10,10))\nimshow_grid(raster, \n prec_vector,\n var_name='Monthly precipitation',\n var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Prec'].attrs['units'],\n color_for_closed='black', \n cmap='seismic_r')\n\ntmax_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmax_sp_dailymet_livneh_1970_1970'],\n vardf_dateindex=12,\n crossmap=nodeXmap,\n nodata=-9999)\n\nfig = plt.figure(figsize=(10,10))\nimshow_grid(raster, \n tmax_vector,\n var_name='Daily maximum temperature',\n var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmax'].attrs['units'],\n color_for_closed='black', symmetric_cbar=False, cmap='magma')\n\ntmin_vector = ogh.rasterVector(vardf=ltm['meanbymonth_Tmin_sp_dailymet_livneh_1970_1970'],\n vardf_dateindex=12,\n crossmap=nodeXmap,\n nodata=-9999)\n\nfig = plt.figure(figsize=(10,10))\nimshow_grid(raster, \n tmin_vector,\n var_name='Daily minimum temperature',\n var_units=meta_file['sp_dailymet_livneh_1970_1970']['variable_info']['Tmin'].attrs['units'],\n color_for_closed='black', symmetric_cbar=False, cmap='magma')\n\n# convert a raster vector back to geospatial presentation\nt4, t5 = oxl.rasterVectorToWGS(prec_vector, nodeXmap=nodeXmap, UTM_transformer=m)\n\nt4.plot(column='value', figsize=(10,10), legend=True)\n\n# this is one decade\ninputvectors = {'precip_met': np.tile(ltm['meandaily_Prec_sp_dailymet_livneh_1970_1970'], 15000),\n 'Tmax_met': np.tile(ltm['meandaily_Tmax_sp_dailymet_livneh_1970_1970'], 15000),\n 'Tmin_met': np.tile(ltm['meandaily_Tmin_sp_dailymet_livneh_1970_1970'], 15000)}\n\n%%time\n(VegType_low, yrs_low, debug_low) = run_ecohydrology_model(raster,\n input_data=inputvectors,\n input_file=InputFile,\n synthetic_storms=False,\n number_of_storms=100000,\n pet_method='PriestleyTaylor')\n\nplot_results(raster, VegType_low, yrs_low, yr_step=yrs_low-1)\nplt.show()\nplt.savefig('grid_low.png')",
"Visualize the \"average monthly total precipitation\"\n5. Save the results back into HydroShare\n<a name=\"creation\"></a>\nUsing the hs_utils library, the results of the Geoprocessing steps above can be saved back into HydroShare. First, define all of the required metadata for resource creation, i.e. title, abstract, keywords, content files. In addition, we must define the type of resource that will be created, in this case genericresource. \nNote: Make sure you save the notebook at this point, so that all notebook changes will be saved into the new HydroShare resource.\nTotal files and image to migrate",
"len(files)\n\n# for each file downloaded onto the server folder, move to a new HydroShare Generic Resource\ntitle = 'Computed spatial-temporal summaries of two gridded data product data sets for Sauk-Suiattle'\nabstract = 'This resource contains the computed summaries for the Meteorology data from Livneh et al. 2013 and the WRF data from Salathe et al. 2014.'\nkeywords = ['Sauk-Suiattle', 'Livneh 2013', 'Salathe 2014','climate','hydromet','watershed', 'visualizations and summaries'] \nrtype = 'genericresource'\n\n# create the new resource\nresource_id = hs.createHydroShareResource(abstract, \n title,\n keywords=keywords, \n resource_type=rtype, \n content_files=files, \n public=False)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
spencer2211/deep-learning
|
dcgan-svhn/DCGAN_Exercises.ipynb
|
mit
|
[
"Deep Convolutional GANs\nIn this notebook, you'll build a GAN using convolutional layers in the generator and discriminator. This is called a Deep Convolutional GAN, or DCGAN for short. The DCGAN architecture was first explored last year and has seen impressive results in generating new images, you can read the original paper here.\nYou'll be training DCGAN on the Street View House Numbers (SVHN) dataset. These are color images of house numbers collected from Google street view. SVHN images are in color and much more variable than MNIST. \n\nSo, we'll need a deeper and more powerful network. This is accomplished through using convolutional layers in the discriminator and generator. It's also necessary to use batch normalization to get the convolutional networks to train. The only real changes compared to what you saw previously are in the generator and discriminator, otherwise the rest of the implementation is the same.",
"%matplotlib inline\n\nimport pickle as pkl\n\nimport matplotlib.pyplot as plt\nimport numpy as np\nfrom scipy.io import loadmat\nimport tensorflow as tf\n\n!mkdir data",
"Getting the data\nHere you can download the SVHN dataset. Run the cell above and it'll download to your machine.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\ndata_dir = 'data/'\n\nif not isdir(data_dir):\n raise Exception(\"Data directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(data_dir + \"train_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Training Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/train_32x32.mat',\n data_dir + 'train_32x32.mat',\n pbar.hook)\n\nif not isfile(data_dir + \"test_32x32.mat\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='SVHN Testing Set') as pbar:\n urlretrieve(\n 'http://ufldl.stanford.edu/housenumbers/test_32x32.mat',\n data_dir + 'test_32x32.mat',\n pbar.hook)",
"These SVHN files are .mat files typically used with Matlab. However, we can load them in with scipy.io.loadmat which we imported above.",
"trainset = loadmat(data_dir + 'train_32x32.mat')\ntestset = loadmat(data_dir + 'test_32x32.mat')",
"Here I'm showing a small sample of the images. Each of these is 32x32 with 3 color channels (RGB). These are the real images we'll pass to the discriminator and what the generator will eventually fake.",
"idx = np.random.randint(0, trainset['X'].shape[3], size=36)\nfig, axes = plt.subplots(6, 6, sharex=True, sharey=True, figsize=(5,5),)\nfor ii, ax in zip(idx, axes.flatten()):\n ax.imshow(trainset['X'][:,:,:,ii], aspect='equal')\n ax.xaxis.set_visible(False)\n ax.yaxis.set_visible(False)\nplt.subplots_adjust(wspace=0, hspace=0)",
"Here we need to do a bit of preprocessing and getting the images into a form where we can pass batches to the network. First off, we need to rescale the images to a range of -1 to 1, since the output of our generator is also in that range. We also have a set of test and validation images which could be used if we're trying to identify the numbers in the images.",
"def scale(x, feature_range=(-1, 1)):\n # scale to (0, 1)\n x = ((x - x.min())/(255 - x.min()))\n \n # scale to feature_range\n min, max = feature_range\n x = x * (max - min) + min\n return x\n\nclass Dataset:\n def __init__(self, train, test, val_frac=0.5, shuffle=False, scale_func=None):\n split_idx = int(len(test['y'])*(1 - val_frac))\n self.test_x, self.valid_x = test['X'][:,:,:,:split_idx], test['X'][:,:,:,split_idx:]\n self.test_y, self.valid_y = test['y'][:split_idx], test['y'][split_idx:]\n self.train_x, self.train_y = train['X'], train['y']\n \n self.train_x = np.rollaxis(self.train_x, 3)\n self.valid_x = np.rollaxis(self.valid_x, 3)\n self.test_x = np.rollaxis(self.test_x, 3)\n \n if scale_func is None:\n self.scaler = scale\n else:\n self.scaler = scale_func\n self.shuffle = shuffle\n \n def batches(self, batch_size):\n if self.shuffle:\n idx = np.arange(len(dataset.train_x))\n np.random.shuffle(idx)\n self.train_x = self.train_x[idx]\n self.train_y = self.train_y[idx]\n \n n_batches = len(self.train_y)//batch_size\n for ii in range(0, len(self.train_y), batch_size):\n x = self.train_x[ii:ii+batch_size]\n y = self.train_y[ii:ii+batch_size]\n \n yield self.scaler(x), y",
"Network Inputs\nHere, just creating some placeholders like normal.",
"def model_inputs(real_dim, z_dim):\n inputs_real = tf.placeholder(tf.float32, (None, *real_dim), name='input_real')\n inputs_z = tf.placeholder(tf.float32, (None, z_dim), name='input_z')\n \n return inputs_real, inputs_z",
"Generator\nHere you'll build the generator network. The input will be our noise vector z as before. Also as before, the output will be a $tanh$ output, but this time with size 32x32 which is the size of our SVHN images.\nWhat's new here is we'll use convolutional layers to create our new images. The first layer is a fully connected layer which is reshaped into a deep and narrow layer, something like 4x4x1024 as in the original DCGAN paper. Then we use batch normalization and a leaky ReLU activation. Next is a transposed convolution where typically you'd halve the depth and double the width and height of the previous layer. Again, we use batch normalization and leaky ReLU. For each of these layers, the general scheme is convolution > batch norm > leaky ReLU.\nYou keep stacking layers up like this until you get the final transposed convolution layer with shape 32x32x3. Below is the archicture used in the original DCGAN paper:\n\nNote that the final layer here is 64x64x3, while for our SVHN dataset, we only want it to be 32x32x3. \n\nExercise: Build the transposed convolutional network for the generator in the function below. Be sure to use leaky ReLUs on all the layers except for the last tanh layer, as well as batch normalization on all the transposed convolutional layers except the last one.",
"def generator(z, output_dim, reuse=False, alpha=0.2, training=True):\n with tf.variable_scope('generator', reuse=reuse):\n # First fully connected layer\n x1 = tf.layers.dense(z, 4*4*512)\n # transpose\n x1 = tf.reshape(x1, (-1, 4, 4, 512))\n x1 = tf.layers.batch_normalization(x1, training=training)\n x1 = tf.maximum(alpha * x1, x1)\n # Now 4x4x512\n \n x2 = tf.layers.conv2d_transpose(x1, 256, 5, strides=2, padding='same')\n x2 = tf.layers.batch_normalization(x2, training=training)\n x2 = tf.maximum(alpha * x2, x2)\n # Now 8x8x256\n \n x3 = tf.layers.conv2d_transpose(x2, 128, 5, strides=2, padding='same')\n x3 = tf.layers.batch_normalization(x3, training=training)\n x3 = tf.maximum(alpha * x3, x3)\n # Now 16x16x128\n \n # Output layer, 32x32x3\n logits = tf.layers.conv2d_transpose(x3, output_dim, 5, strides=2, padding='same')\n # Now 32x32x3\n \n out = tf.tanh(logits)\n \n return out",
"Discriminator\nHere you'll build the discriminator. This is basically just a convolutional classifier like you've built before. The input to the discriminator are 32x32x3 tensors/images. You'll want a few convolutional layers, then a fully connected layer for the output. As before, we want a sigmoid output, and you'll need to return the logits as well. For the depths of the convolutional layers I suggest starting with 16, 32, 64 filters in the first layer, then double the depth as you add layers. Note that in the DCGAN paper, they did all the downsampling using only strided convolutional layers with no maxpool layers.\nYou'll also want to use batch normalization with tf.layers.batch_normalization on each layer except the first convolutional and output layers. Again, each layer should look something like convolution > batch norm > leaky ReLU.\nNote: in this project, your batch normalization layers will always use batch statistics. (That is, always set training to True.) That's because we are only interested in using the discriminator to help train the generator. However, if you wanted to use the discriminator for inference later, then you would need to set the training parameter appropriately.\n\nExercise: Build the convolutional network for the discriminator. The input is a 32x32x3 images, the output is a sigmoid plus the logits. Again, use Leaky ReLU activations and batch normalization on all the layers except the first.",
"def discriminator(x, reuse=False, alpha=0.2):\n with tf.variable_scope('discriminator', reuse=reuse):\n # Input layer is 32x32x3\n x1 = tf.layers.conv2d(x, 64, 5, strides=2, padding='same')\n relu1 = tf.maximum(alpha * x1, x1)\n # Now 16x16x64\n \n x2 = tf.layers.conv2d(relu1, 128, 5, strides=2, padding='same')\n bn2 = tf.layers.batch_normalization(x2, training=True)\n relu2 = tf.maximum(alpha * bn2, bn2)\n # 8*8*128\n \n x3 = tf.layers.conv2d(relu2, 256, 5, strides=2, padding='same')\n bn3 = tf.layers.batch_normalization(x3, training=True)\n relu3 = tf.maximum(alpha * bn3, bn3)\n # 4x4x256\n \n # Flatten it\n flat = tf.reshape(relu3, (-1, 4*4*256))\n logits = tf.layers.dense(flat, 1)\n out = tf.sigmoid(logits)\n \n return out, logits",
"Model Loss\nCalculating the loss like before, nothing new here.",
"def model_loss(input_real, input_z, output_dim, alpha=0.2):\n \"\"\"\n Get the loss for the discriminator and generator\n :param input_real: Images from the real dataset\n :param input_z: Z input\n :param out_channel_dim: The number of channels in the output image\n :return: A tuple of (discriminator loss, generator loss)\n \"\"\"\n g_model = generator(input_z, output_dim, alpha=alpha)\n d_model_real, d_logits_real = discriminator(input_real, alpha=alpha)\n d_model_fake, d_logits_fake = discriminator(g_model, reuse=True, alpha=alpha)\n\n d_loss_real = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_real, labels=tf.ones_like(d_model_real)))\n d_loss_fake = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.zeros_like(d_model_fake)))\n g_loss = tf.reduce_mean(\n tf.nn.sigmoid_cross_entropy_with_logits(logits=d_logits_fake, labels=tf.ones_like(d_model_fake)))\n\n d_loss = d_loss_real + d_loss_fake\n\n return d_loss, g_loss",
"Optimizers\nNot much new here, but notice how the train operations are wrapped in a with tf.control_dependencies block so the batch normalization layers can update their population statistics.",
"def model_opt(d_loss, g_loss, learning_rate, beta1):\n \"\"\"\n Get optimization operations\n :param d_loss: Discriminator loss Tensor\n :param g_loss: Generator loss Tensor\n :param learning_rate: Learning Rate Placeholder\n :param beta1: The exponential decay rate for the 1st moment in the optimizer\n :return: A tuple of (discriminator training operation, generator training operation)\n \"\"\"\n # Get weights and bias to update\n t_vars = tf.trainable_variables()\n d_vars = [var for var in t_vars if var.name.startswith('discriminator')]\n g_vars = [var for var in t_vars if var.name.startswith('generator')]\n\n # Optimize\n with tf.control_dependencies(tf.get_collection(tf.GraphKeys.UPDATE_OPS)):\n d_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(d_loss, var_list=d_vars)\n g_train_opt = tf.train.AdamOptimizer(learning_rate, beta1=beta1).minimize(g_loss, var_list=g_vars)\n\n return d_train_opt, g_train_opt",
"Building the model\nHere we can use the functions we defined about to build the model as a class. This will make it easier to move the network around in our code since the nodes and operations in the graph are packaged in one object.",
"class GAN:\n def __init__(self, real_size, z_size, learning_rate, alpha=0.2, beta1=0.5):\n tf.reset_default_graph()\n \n self.input_real, self.input_z = model_inputs(real_size, z_size)\n \n self.d_loss, self.g_loss = model_loss(self.input_real, self.input_z,\n real_size[2], alpha=0.2)\n \n self.d_opt, self.g_opt = model_opt(self.d_loss, self.g_loss, learning_rate, beta1)",
"Here is a function for displaying generated images.",
"def view_samples(epoch, samples, nrows, ncols, figsize=(5,5)):\n fig, axes = plt.subplots(figsize=figsize, nrows=nrows, ncols=ncols, \n sharey=True, sharex=True)\n for ax, img in zip(axes.flatten(), samples[epoch]):\n ax.axis('off')\n img = ((img - img.min())*255 / (img.max() - img.min())).astype(np.uint8)\n ax.set_adjustable('box-forced')\n im = ax.imshow(img, aspect='equal')\n \n plt.subplots_adjust(wspace=0, hspace=0)\n return fig, axes",
"And another function we can use to train our network. Notice when we call generator to create the samples to display, we set training to False. That's so the batch normalization layers will use the population statistics rather than the batch statistics. Also notice that we set the net.input_real placeholder when we run the generator's optimizer. The generator doesn't actually use it, but we'd get an error without it because of the tf.control_dependencies block we created in model_opt.",
"def train(net, dataset, epochs, batch_size, print_every=10, show_every=100, figsize=(5,5)):\n saver = tf.train.Saver()\n sample_z = np.random.uniform(-1, 1, size=(72, z_size))\n\n samples, losses = [], []\n steps = 0\n\n with tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in dataset.batches(batch_size):\n steps += 1\n\n # Sample random noise for G\n batch_z = np.random.uniform(-1, 1, size=(batch_size, z_size))\n\n # Run optimizers\n _ = sess.run(net.d_opt, feed_dict={net.input_real: x, net.input_z: batch_z})\n _ = sess.run(net.g_opt, feed_dict={net.input_z: batch_z, net.input_real: x})\n\n if steps % print_every == 0:\n # At the end of each epoch, get the losses and print them out\n train_loss_d = net.d_loss.eval({net.input_z: batch_z, net.input_real: x})\n train_loss_g = net.g_loss.eval({net.input_z: batch_z})\n\n print(\"Epoch {}/{}...\".format(e+1, epochs),\n \"Discriminator Loss: {:.4f}...\".format(train_loss_d),\n \"Generator Loss: {:.4f}\".format(train_loss_g))\n # Save losses to view after training\n losses.append((train_loss_d, train_loss_g))\n\n if steps % show_every == 0:\n gen_samples = sess.run(\n generator(net.input_z, 3, reuse=True, training=False),\n feed_dict={net.input_z: sample_z})\n samples.append(gen_samples)\n _ = view_samples(-1, samples, 6, 12, figsize=figsize)\n plt.show()\n\n saver.save(sess, './checkpoints/generator.ckpt')\n\n with open('samples.pkl', 'wb') as f:\n pkl.dump(samples, f)\n \n return losses, samples",
"Hyperparameters\nGANs are very sensitive to hyperparameters. A lot of experimentation goes into finding the best hyperparameters such that the generator and discriminator don't overpower each other. Try out your own hyperparameters or read the DCGAN paper to see what worked for them.\n\nExercise: Find hyperparameters to train this GAN. The values found in the DCGAN paper work well, or you can experiment on your own. In general, you want the discriminator loss to be around 0.3, this means it is correctly classifying images as fake or real about 50% of the time.",
"real_size = (32,32,3)\nz_size = 100\nlearning_rate = 0.0002\nbatch_size = 128\nepochs = 25\nalpha = 0.2\nbeta1 = 0.5\n\n# Create the network\nnet = GAN(real_size, z_size, learning_rate, alpha=alpha, beta1=beta1)\n\n# Load the data and train the network here\ndataset = Dataset(trainset, testset)\nlosses, samples = train(net, dataset, epochs, batch_size, figsize=(10,5))\n\nfig, ax = plt.subplots()\nlosses = np.array(losses)\nplt.plot(losses.T[0], label='Discriminator', alpha=0.5)\nplt.plot(losses.T[1], label='Generator', alpha=0.5)\nplt.title(\"Training Losses\")\nplt.legend()\n\n_ = view_samples(-1, samples, 6, 12, figsize=(10,5))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
dev/_downloads/23237b92405a4b223d89222e217ffffd/morph_volume_stc.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Morph volumetric source estimate\nThis example demonstrates how to morph an individual subject's\n:class:mne.VolSourceEstimate to a common reference space. We achieve this\nusing :class:mne.SourceMorph. Data will be morphed based on\nan affine transformation and a nonlinear registration method\nknown as Symmetric Diffeomorphic Registration (SDR) by\n:footcite:AvantsEtAl2008.\nTransformation is estimated from the subject's anatomical T1 weighted MRI\n(brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain)_.\nAfterwards the transformation will be applied to the volumetric source\nestimate. The result will be plotted, showing the fsaverage T1 weighted\nanatomical MRI, overlaid with the morphed volumetric source estimate.",
"# Author: Tommy Clausner <tommy.clausner@gmail.com>\n#\n# License: BSD-3-Clause\n\nimport os\n\nimport nibabel as nib\nimport mne\nfrom mne.datasets import sample, fetch_fsaverage\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\nfrom nilearn.plotting import plot_glass_brain\n\nprint(__doc__)",
"Setup paths",
"sample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')\nfname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')\n\nfname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',\n 'brain.mgz')\nfetch_fsaverage(subjects_dir) # ensure fsaverage src exists\nfname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif'",
"Compute example data. For reference see ex-inverse-volume.\nLoad data:",
"evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))\ninverse_operator = read_inverse_operator(fname_inv)\n\n# Apply inverse operator\nstc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, \"dSPM\")\n\n# To save time\nstc.crop(0.09, 0.09)",
"Get a SourceMorph object for VolSourceEstimate\nsubject_from can typically be inferred from\n:class:src <mne.SourceSpaces>,\nand subject_to is set to 'fsaverage' by default. subjects_dir can be\nNone when set in the environment. In that case SourceMorph can be initialized\ntaking src as only argument. See :class:mne.SourceMorph for more\ndetails.\nThe default parameter setting for zooms will cause the reference volumes\nto be resliced before computing the transform. A value of '5' would cause\nthe function to reslice to an isotropic voxel size of 5 mm. The higher this\nvalue the less accurate but faster the computation will be.\nThe recommended way to use this is to morph to a specific destination source\nspace so that different subject_from morphs will go to the same space.`\nA standard usage for volumetric data reads:",
"src_fs = mne.read_source_spaces(fname_src_fsaverage)\nmorph = mne.compute_source_morph(\n inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir,\n niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed\n src_to=src_fs, verbose=True)",
"Apply morph to VolSourceEstimate\nThe morph can be applied to the source estimate data, by giving it as the\nfirst argument to the :meth:morph.apply() <mne.SourceMorph.apply> method.\n<div class=\"alert alert-info\"><h4>Note</h4><p>Volumetric morphing is much slower than surface morphing because the\n volume for each time point is individually resampled and SDR morphed.\n The :meth:`mne.SourceMorph.compute_vol_morph_mat` method can be used\n to compute an equivalent sparse matrix representation by computing the\n transformation for each source point individually. This generally takes\n a few minutes to compute, but can be\n :meth:`saved <mne.SourceMorph.save>` to disk and be reused. The\n resulting sparse matrix operation is very fast (about 400× faster) to\n :meth:`apply <mne.SourceMorph.apply>`. This approach is more efficient\n when the number of time points to be morphed exceeds the number of\n source space points, which is generally in the thousands. This can\n easily occur when morphing many time points and multiple conditions.</p></div>",
"stc_fsaverage = morph.apply(stc)",
"Convert morphed VolSourceEstimate into NIfTI\nWe can convert our morphed source estimate into a NIfTI volume using\n:meth:morph.apply(..., output='nifti1') <mne.SourceMorph.apply>.",
"# Create mri-resolution volume of results\nimg_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')",
"Plot results",
"# Load fsaverage anatomical image\nt1_fsaverage = nib.load(fname_t1_fsaverage)\n\n# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)\ndisplay = plot_glass_brain(t1_fsaverage,\n title='subject results to fsaverage',\n draw_cross=False,\n annotate=True)\n\n# Add functional data as overlay\ndisplay.add_overlay(img_fsaverage, alpha=0.75)",
"Reading and writing SourceMorph from and to disk\nAn instance of SourceMorph can be saved, by calling\n:meth:morph.save <mne.SourceMorph.save>.\nThis methods allows for specification of a filename under which the morph\nwill be save in \".h5\" format. If no file extension is provided, \"-morph.h5\"\nwill be appended to the respective defined filename::\n>>> morph.save('my-file-name')\n\nReading a saved source morph can be achieved by using\n:func:mne.read_source_morph::\n>>> morph = mne.read_source_morph('my-file-name-morph.h5')\n\nOnce the environment is set up correctly, no information such as\nsubject_from or subjects_dir must be provided, since it can be\ninferred from the data and used morph to 'fsaverage' by default, e.g.::\n>>> morph.apply(stc)\n\nReferences\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
EnergyID/opengrid
|
scripts/Demo_WaterGasElekVisualisation.ipynb
|
gpl-2.0
|
[
"This script shows how to use the existing code in opengrid\nto create (a) a timeseries plot and (b) a load curve of gas, water or elektricity usage.\nTodo:\nChange numeric \"chosen_type\" to a textual choice, with lookupvalue of UtilityType in Utilitytypes.",
"import os, sys\nimport inspect\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom matplotlib.dates import HourLocator, DateFormatter, AutoDateLocator\nimport datetime as dt\nimport pytz\nimport pandas as pd\nimport pdb\n\nscript_dir = os.path.dirname(os.path.abspath(inspect.getfile(inspect.currentframe())))\n# add the path to opengrid to sys.path\nsys.path.append(os.path.join(script_dir, os.pardir, os.pardir))\nfrom opengrid.library import config\n\nc = config.Config()\n\nsys.path.append(c.get('tmpo', 'folder'))\nimport tmpo\ntry:\n if os.path.exists(c.get('tmpo', 'data')):\n path_to_tmpo_data = c.get('tmpo', 'data')\nexcept:\n path_to_tmpo_data = None\n\nfrom opengrid.library.houseprint import houseprint\n\n%matplotlib inline\nplt.rcParams['figure.figsize']=14,8",
"Script settings",
"hp = houseprint.load_houseprint_from_file('new_houseprint.pkl')\nhp.init_tmpo(path_to_tmpo_data=path_to_tmpo_data)",
"Fill in here (chosen type [0-2]) what type of data you'd like to plot:",
"chosen_type = 0\n# 0 =water, 1 = gas, 2 = electricity\n\nUtilityTypes = ['water', 'gas','electricity'] # {'water','gas','electricity'} \nutility = UtilityTypes[chosen_type] # here 'electricity'\n\n#default values:\nFL_units = ['l/day', 'm^3/day ~ 10 kWh/day','Ws/day'] #TODO, to be checked!!\nBase_Units = ['l/min', 'kW','kW']\nBase_Corr = [1/24.0/60.0, 1/100.0/24.0/3.600 , 3.600/1000.0/24 ] #TODO,check validity of conversions!! # water => (l/day) to (l/hr), gas: (l/day) to (kW), elektr Ws/d to kW\n\ntInt_Units = ['l', 'kWh','kWh'] #units after integration\ntInt_Corr = [1/60, 3600/60, 3600/60] #TODO, to be checked!! # water => (l/hr) to (l_cumul/min), gas: kW to (kWh/min)\n\n# units for this utility type\nbUnit = Base_Units[chosen_type]\nbCorr = Base_Corr[chosen_type]\nfl_unit = FL_units[chosen_type]\ntiUnit = tInt_Units[chosen_type]\ntiCorr = tInt_Corr[chosen_type]\n",
"Available data is loaded in one big dataframe, the columns are the sensors of chosen type.\nalso, it is rescaled to more \"managable\" units (to be verified!)",
"#load data\nprint 'Loading', utility ,'-data and converting from ',fl_unit ,' to ',bUnit,':'\ndf = hp.get_data(sensortype=utility)\ndf = df.diff() #data is cumulative, we need to take the derivative\ndf = df[df>0] #filter out negative values\n\n# conversion dependent on type of utility (to be checked!!) \ndf = df*bCorr\n\n# plot timeseries and load duration for each retained sensor\n\nfor sensor in df.columns:\n FL = hp.find_sensor(sensor).device.key\n plt.figure()\n ax1=plt.subplot(121)\n plt.plot_date(df.index, df[sensor], '-', label=\"{}\".format(FL))\n plt.ylabel(\"{}-usage [{}]\".format(utility,bUnit) )\n plt.legend()\n \n ax2=plt.subplot(122)\n plt.plot(np.sort(df[sensor])[::-1], label=sensor)\n plt.ylabel(\"{}-load curve [{}]\".format(utility,bUnit) )\n plt.legend()\n\n#Date/Time library\nfrom arrow import Arrow\n\n#Prepare NVD3.js dependencies\nfrom IPython import display as d\nimport nvd3\nnvd3.ipynb.initialize_javascript(use_remote=True)\n\n#Filter sensors and period\nsensorlist = ['b28509eb97137e723995838c393d49df', '2923b75daf93e539e37ce5177c0008c5', 'a926bc966f178fc5d507a569a5bfc3d7']\ndf_water= df[sensorlist][Arrow(2015, 4, 1).datetime:Arrow(2015, 4, 2).datetime].dropna()\n\n#Prepare chart name and timescale in epoch\nchart_name = \"{}-usage [{}]\".format(utility,bUnit)\ndf_water[\"epoch\"] = [(Arrow.fromdatetime(o) - Arrow(1970, 1, 1)).total_seconds()*1000 for o in df_water.index]\n\n#Create NVD3 chart\nwater_chart = nvd3.lineChart(x_is_date=True,name=chart_name,height=450,width=800)\nfor sensor in sensorlist: # df.columns:\n series_name = name=\"{}\".format(hp.find_sensor(sensor).device.key)\n water_chart.add_serie(name=series_name, x=list(df_water[\"epoch\"]), y=list(df_water[sensor]))\n\nwater_chart",
"Tests with the tmpo-based approach",
"start = pd.Timestamp('20150201')\nend = pd.Timestamp('20150301')\n\ndfcum = hp.get_data(sensortype='electricity', head= start, tail = end)\n\ndfcum.shape\n\ndfcum.columns\n\ndfcum.tail()\n\ndfi = dfcum.resample(rule='900s', how='max')\ndfi = dfi.interpolate(method='time')\ndfi=dfi.diff()*3600/900\ndfi.plot()\n#dfi.ix['20150701'].plot()\n\n# This works, but is a bad idea if you have multiple sensors for a FLM: you obtain identical column names.\n# df.rename(columns = hp.get_flukso_from_sensor, inplace=True)\n\n# Getting a single sensor\ndfi['1a1dac9c2ac155f95c58bf1d4f4b7d01'].plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
simmy88/UdacityMLND
|
Customer Segments/customer_segments.ipynb
|
mit
|
[
"Machine Learning Engineer Nanodegree\nUnsupervised Learning\nProject: Creating Customer Segments\nWelcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in monetary units) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.\nThe dataset for this project can be found on the UCI Machine Learning Repository. For the purposes of this project, the features 'Channel' and 'Region' will be excluded in the analysis — with focus instead on the six product categories recorded for customers.\nRun the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom IPython.display import display # Allows the use of display() for DataFrames\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Pretty display for notebooks\n%matplotlib inline\n\n# Load the wholesale customers dataset\ntry:\n data = pd.read_csv(\"customers.csv\")\n data.drop(['Region', 'Channel'], axis = 1, inplace = True)\n print \"Wholesale customers dataset has {} samples with {} features each.\".format(*data.shape)\nexcept:\n print \"Dataset could not be loaded. Is the dataset missing?\"",
"Data Exploration\nIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.\nRun the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: 'Fresh', 'Milk', 'Grocery', 'Frozen', 'Detergents_Paper', and 'Delicatessen'. Consider what each category represents in terms of products you could purchase.",
"# Display a description of the dataset\ndisplay(data.describe())",
"Implementation: Selecting Samples\nTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add three indices of your choice to the indices list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.",
"# TODO: Select three indices of your choice you wish to sample from the dataset\nindices = [100,200,300]\n\n# Create a DataFrame of the chosen samples\nsamples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True)\nprint \"Chosen samples of wholesale customers dataset:\"\ndisplay(samples)",
"Question 1\nConsider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers.\nWhat kind of establishment (customer) could each of the three samples you've chosen represent?\nHint: Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying \"McDonalds\" when describing a sample customer as a restaurant.\nAnswer: Sample 0 has a values which are significantly higher than the mean in the categories of Grocery and Detergents_Paper but values which are much closer to the mean in the other categories. This sample could represent supermarket which sells freshly prepared food. Sample 1 has a much higher than mean value for Milk and Grocery and significantly below mean value for Fresh and Delicatessen. This sample could represent a supermarket - selling mostly Groceries, Milk. Sample 2 has very high values, compared to the mean, for Fresh and very low values for Frozen. This sample could represent a Cafe or Restaurant which sells a lot of fresh food but little frozen food.\nImplementation: Feature Relevance\nOne interesting thought to consider is if one (or more) of the six product categories is actually relevant for understanding customer purchasing. That is to say, is it possible to determine whether customers purchasing some amount of one category of products will necessarily purchase some proportional amount of another category of products? We can make this determination quite easily by training a supervised regression learner on a subset of the data with one feature removed, and then score how well that model can predict the removed feature.\nIn the code block below, you will need to implement the following:\n - Assign new_data a copy of the data by removing a feature of your choice using the DataFrame.drop function.\n - Use sklearn.cross_validation.train_test_split to split the dataset into training and testing sets.\n - Use the removed feature as your target label. Set a test_size of 0.25 and set a random_state.\n - Import a decision tree regressor, set a random_state, and fit the learner to the training data.\n - Report the prediction score of the testing set using the regressor's score function.",
"from sklearn.model_selection import train_test_split\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.metrics import r2_score\n\n# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature\nnew_data = data.drop(['Milk'], axis = 1)\n\n# TODO: Split the data into training and testing sets using the given feature as the target\nX_train, X_test, y_train, y_test = train_test_split(new_data, data['Milk'], test_size=0.25, random_state=23)\n\n# TODO: Create a decision tree regressor and fit it to the training set\nregressor = DecisionTreeRegressor()\nregressor = regressor.fit(X_train, y_train)\n\n# TODO: Report the score of the prediction using the testing set\ny_pred = regressor.predict(X_test)\nscore = r2_score(y_test, y_pred)\nprint \"R^2 score for prediction using testing set:\" \nprint score",
"Question 2\nWhich feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?\nHint: The coefficient of determination, R^2, is scored between 0 and 1, with 1 being a perfect fit. A negative R^2 implies the model fails to fit the data.\nAnswer: I tried to predict the Fresh feature from the data and the score was -1.56. This implies that the remaining data was unable to model the Fresh feature. This implies that the feature is necessary for identifying customer spending habits. By contrast the Milk feature was predicted with a score of 0.47. This implies that the feature can be modelled using the other features to a reasonable extent and might not be necessary to predict customer spending habits.\nVisualize Feature Distributions\nTo get a better understanding of the dataset, we can construct a scatter matrix of each of the six product features present in the data. If you found that the feature you attempted to predict above is relevant for identifying a specific customer, then the scatter matrix below may not show any correlation between that feature and the others. Conversely, if you believe that feature is not relevant for identifying a specific customer, the scatter matrix might show a correlation between that feature and another feature in the data. Run the code block below to produce a scatter matrix.",
"# Produce a scatter matrix for each pair of features in the data\npd.plotting.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');",
"Question 3\nAre there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?\nHint: Is the data normally distributed? Where do most of the data points lie? \nAnswer: There seems to be a correlation between Grocery and Milk as well as between Detergents_Paper and Grocery. The correlation between Milk and Grocery seems to be weaker than between Detergents_Paper and Grocery. This confirms why the Milk feature was hard to predict. The data is not normally distributed, with a significant number of points lying at the lower end of the scale.\nData Preprocessing\nIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful.\nImplementation: Feature Scaling\nIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most often appropriate to apply a non-linear scaling — particularly for financial data. One way to achieve this scaling is by using a Box-Cox test, which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.\nIn the code block below, you will need to implement the following:\n - Assign a copy of the data to log_data after applying logarithmic scaling. Use the np.log function for this.\n - Assign a copy of the sample data to log_samples after applying logarithmic scaling. Again, use np.log.",
"# TODO: Scale the data using the natural logarithm\nlog_data = np.log(data)\n\n# TODO: Scale the sample data using the natural logarithm\nlog_samples = np.log(samples)\n\n# Produce a scatter matrix for each pair of newly-transformed features\npd.plotting.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde');",
"Observation\nAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).\nRun the code below to see how the sample data has changed after having the natural logarithm applied to it.",
"# Display the log-transformed sample data\ndisplay(log_samples)",
"Implementation: Outlier Detection\nDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many \"rules of thumb\" for what constitutes an outlier in a dataset. Here, we will use Tukey's Method for identfying outliers: An outlier step is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.\nIn the code block below, you will need to implement the following:\n - Assign the value of the 25th percentile for the given feature to Q1. Use np.percentile for this.\n - Assign the value of the 75th percentile for the given feature to Q3. Again, use np.percentile.\n - Assign the calculation of an outlier step for the given feature to step.\n - Optionally remove data points from the dataset by adding indices to the outliers list.\nNOTE: If you choose to remove any outliers, ensure that the sample data does not contain any of these points!\nOnce you have performed this implementation, the dataset will be stored in the variable good_data.",
"# For each feature find the data points with extreme high or low values\nfor feature in log_data.keys():\n \n # TODO: Calculate Q1 (25th percentile of the data) for the given feature\n Q1 = np.percentile(log_data[feature], 25)\n \n # TODO: Calculate Q3 (75th percentile of the data) for the given feature\n Q3 = np.percentile(log_data[feature],75)\n \n # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range)\n step = (Q3-Q1)*1.5\n \n # Display the outliers\n print \"Data points considered outliers for the feature '{}':\".format(feature)\n display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))])\n \n# OPTIONAL: Select the indices for data points you wish to remove\noutliers = [65, 66, 75, 128, 154]\n\n# Remove the outliers, if any were specified\ngood_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)",
"Question 4\nAre there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the outliers list to be removed, explain why. \nAnswer: There were three points which were considered outliers for more than one feature. These were points 65, 66, 75, 128, 154. These points were removed as they might skew the correlations and affect the location of the cluster boundaries between these features. I decided not to remove any of the other outliers as they would affect the location of the cluster centers but shouldn't affect the boundaries between the clusters. \nFeature Transformation\nIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers.\nImplementation: PCA\nNow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the good_data to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the explained variance ratio of each dimension — how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new \"feature\" of the space, however it is a composition of the original features present in the data.\nIn the code block below, you will need to implement the following:\n - Import sklearn.decomposition.PCA and assign the results of fitting PCA in six dimensions with good_data to pca.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.",
"from sklearn.decomposition import PCA\n# TODO: Apply PCA by fitting the good data with the same number of dimensions as features\npca = PCA(n_components=6).fit(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Generate PCA results plot\npca_results = vs.pca_results(good_data, pca)",
"Question 5\nHow much variance in the data is explained in total by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.\nHint: A positive increase in a specific dimension corresponds with an increase of the positive-weighted features and a decrease of the negative-weighted features. The rate of increase or decrease is based on the individual feature weights.\nAnswer: The first two components account for approximately 71.6% of the variance. When this is extended to the first four components they account for approximately 93.4% of the variance. The first four dimensions probably represent different types of customer spending, i.e. shop types. The first dimension has large weights between Detergents_Paper, Milk, and Grocery, which are negatively correlated to Fresh and Frozen. This could indicate purchasing of household items. Dimension 2 has strong weights in Fresh, Frozen, and Delicatessen. This could indicate purchasing of food items which are generally bought together. Dimension 3 indicates there is a negative relationship between Fresh and Delicatessen, which could be a measure which shows people who buy a lot of Delicatessen items are unlikely to spend a lot on Fresh food. Dimension 4 shows strong negative correlations between Frozen and Delicatessen. Indicating that people who spend a large amount of Frozen spend less on Delicatessen, and vice versa.\nObservation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.",
"# Display sample log-data after having a PCA transformation applied\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))",
"Implementation: Dimensionality Reduction\nWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data — in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the cumulative explained variance ratio is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a signifiant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.\nIn the code block below, you will need to implement the following:\n - Assign the results of fitting PCA in two dimensions with good_data to pca.\n - Apply a PCA transformation of good_data using pca.transform, and assign the results to reduced_data.\n - Apply a PCA transformation of log_samples using pca.transform, and assign the results to pca_samples.",
"# TODO: Apply PCA by fitting the good data with only two dimensions\npca = PCA(n_components=2).fit(good_data)\n\n# TODO: Transform the good data using the PCA fit above\nreduced_data = pca.transform(good_data)\n\n# TODO: Transform log_samples using the PCA fit above\npca_samples = pca.transform(log_samples)\n\n# Create a DataFrame for the reduced data\nreduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2'])",
"Observation\nRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remains unchanged when compared to a PCA transformation in six dimensions.",
"# Display sample log-data after applying PCA transformation in two dimensions\ndisplay(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))",
"Visualizing a Biplot\nA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case Dimension 1 and Dimension 2). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.\nRun the code cell below to produce a biplot of the reduced-dimension data.",
"# Create a biplot\nvs.biplot(good_data, reduced_data, pca)",
"Observation\nOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point the lower right corner of the figure will likely correspond to a customer that spends a lot on 'Milk', 'Grocery' and 'Detergents_Paper', but not so much on the other product categories. \nFrom the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier?\nClustering\nIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. \nQuestion 6\nWhat are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?\nAnswer: K-Means clustering uses a hard assignment of points where as the Gaussian Mixture Model uses soft assignment. Given that the customer data seems to not have clear and distinct boundaries I will use the Gaussian Mixture Model as it assigns points to clusters probablistically rather than using hard assignments.\nImplementation: Creating Clusters\nDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known a priori, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data — if any. However, we can quantify the \"goodness\" of a clustering by calculating each data point's silhouette coefficient. The silhouette coefficient for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the mean silhouette coefficient provides for a simple scoring method of a given clustering.\nIn the code block below, you will need to implement the following:\n - Fit a clustering algorithm to the reduced_data and assign it to clusterer.\n - Predict the cluster for each data point in reduced_data using clusterer.predict and assign them to preds.\n - Find the cluster centers using the algorithm's respective attribute and assign them to centers.\n - Predict the cluster for each sample data point in pca_samples and assign them sample_preds.\n - Import sklearn.metrics.silhouette_score and calculate the silhouette score of reduced_data against preds.\n - Assign the silhouette score to score and print the result.",
"from sklearn.mixture import GaussianMixture\nfrom sklearn.metrics import silhouette_score\n# TODO: Apply your clustering algorithm of choice to the reduced data \nGM = GaussianMixture(n_components = 2)\nclusterer = GM.fit(reduced_data)\n\n# TODO: Predict the cluster for each data point\npreds = clusterer.predict(reduced_data)\n\n# TODO: Find the cluster centers\ncenters = clusterer.means_\n\n# TODO: Predict the cluster for each transformed sample data point\nsample_preds = clusterer.predict(pca_samples)\n\n# TODO: Calculate the mean silhouette coefficient for the number of clusters chosen\nscore = silhouette_score(reduced_data, preds)\nprint score",
"Question 7\nReport the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score? \nAnswer: 2 clusters has the best silhouette score of 0.422 which is slightly better than 3 clusters, which had a score of 0.403. 4 clusters had a significantly smaller score of 0.333\nCluster Visualization\nOnce you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.",
"# Display the results of the clustering from implementation\nvs.cluster_results(reduced_data, preds, centers, pca_samples)",
"Implementation: Data Recovery\nEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the averages of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to the average customer of that segment. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.\nIn the code block below, you will need to implement the following:\n - Apply the inverse transform to centers using pca.inverse_transform and assign the new centers to log_centers.\n - Apply the inverse function of np.log to log_centers using np.exp and assign the true centers to true_centers.",
"# TODO: Inverse transform the centers\nlog_centers = pca.inverse_transform(centers)\n\n# TODO: Exponentiate the centers\ntrue_centers = np.exp(log_centers)\n\n# Display the true centers\nsegments = ['Segment {}'.format(i) for i in range(0,len(centers))]\ntrue_centers = pd.DataFrame(np.round(true_centers), columns = data.keys())\ntrue_centers.index = segments\ndisplay(true_centers)",
"Question 8\nConsider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. What set of establishments could each of the customer segments represent?\nHint: A customer who is assigned to 'Cluster X' should best identify with the establishments represented by the feature set of 'Segment X'.\nAnswer: Segment 1 has high values for Fresh and Frozen, but lower values for Milk, Grocery, and Detergents_Paper, when compared to Segment 0. This indicates that Segment 1 could represent Food Markets where shoppers spend mostly on Fresh and Frozen items. Segment 0, with higher values of Milk, Grocery, and Detergents_Paper, could indicate Supermarkets where people are buying household staples.\nQuestion 9\nFor each sample point, which customer segment from Question 8 best represents it? Are the predictions for each sample point consistent with this?\nRun the code block below to find which cluster each sample point is predicted to be.",
"# Display the predictions\nfor i, pred in enumerate(sample_preds):\n print \"Sample point\", i, \"predicted to be in Cluster\", pred",
"Answer: The points are all predicted to be in Cluster 0. Analysing the scatter plot, one of them is quite close to the boundary with Cluster 1 and this is likely to be Sample point 2, which has a smaller Grocery value and significant higher Fresh value than the other two points. When compared to the cluster centers, the two points which are firmly in Cluster 0 have values which generally correlate well with Cluster 0's center, particularly values of Milk and Grocery for point 0 and point 1 follows the correlation between the two features. The point which is closer to the boundary has high values of Fresh, middling values of Milk and Grocery, but a low value of Frozen. This combination of features results in it not clearly representing either Cluster particularly well. \nConclusion\nIn this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the customer segments, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which segment that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the customer segments to a hidden variable present in the data, to see whether the clustering identified certain relationships.\nQuestion 10\nCompanies will often run A/B tests when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?\nHint: Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most?\nAnswer: The wholesale distributor can test the change in delivery service on both groups and see which, if any, reacts positively to the change in delivery service.\nQuestion 11\nAdditional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a customer segment it best identifies with (depending on the clustering algorithm applied), we can consider 'customer segment' as an engineered feature for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a customer segment to determine the most appropriate delivery service.\nHow can the wholesale distributor label the new customers using only their estimated product spending and the customer segment data?\nHint: A supervised learner could be used to train on the original customers. What would be the target variable?\nAnswer: Using a supervised learner to associate the points with the customer segment label that had been arributed using the unsupervised learner. The trained supervised learner can then be used to label the new points.\nVisualizing Underlying Distributions\nAt the beginning of this project, it was discussed that the 'Channel' and 'Region' features would be excluded from the dataset so that the customer product categories were emphasized in the analysis. 
By reintroducing the 'Channel' feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset.\nRun the code block below to see how each data point is labeled either 'HoReCa' (Hotel/Restaurant/Cafe) or 'Retail' in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.",
"# Display the clustering results based on 'Channel' data\nvs.channel_results(reduced_data, outliers, pca_samples)",
"Question 12\nHow well does the clustering algorithm and number of clusters you've chosen compare to this underlying distribution of Hotel/Restaurant/Cafe customers to Retailer customers? Are there customer segments that would be classified as purely 'Retailers' or 'Hotels/Restaurants/Cafes' by this distribution? Would you consider these classifications as consistent with your previous definition of the customer segments?\nAnswer: The clustering algorithm I chose, with 2 clusters, matched the underlying distribution in the data reasonably well despite having one additional cluster. I would consider the classification in the data to be consistent with my pervious definitions.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/tensorrt
|
tftrt/examples/presentations/GTC-April2021-Dynamic-shape-BERT.ipynb
|
apache-2.0
|
[
"# Copyright 2021 NVIDIA Corporation. All Rights Reserved.\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Accelerate BERT encoder with TF-TRT\nIntroduction\nThe NVIDIA TensorRT is a C++ library that facilitates high performance inference on NVIDIA graphics processing units (GPUs). TensorFlow™ integration with TensorRT™ (TF-TRT) optimizes TensorRT compatible parts of your computation graph, allowing TensorFlow to execute the remaining graph. While you can use TensorFlow's wide and flexible feature set, TensorRT will produce a highly optimized runtime engine for the TensorRT compatible subgraphs of your network.\nIn this notebook, we demonstrate accelerating BERT inference using TF-TRT. We focus on the encoder.\nRequirements\nThis notebook requires at least TF 2.5 and TRT 7.1.3.\n1. Download the model\nWe will download a bert base model from TF-Hub.",
"!pip install -q tf-models-official\n\nimport tensorflow as tf\nimport tensorflow_hub as hub\n\ntfhub_handle_encoder = 'https://tfhub.dev/tensorflow/bert_en_uncased_L-12_H-768_A-12/3'\nbert_saved_model_path = 'bert_base'\n\nbert_model = hub.load(tfhub_handle_encoder)\ntf.saved_model.save(bert_model, bert_saved_model_path)",
"2. Inference\nIn this section we will convert the model using TF-TRT and run inference.",
"import matplotlib.pyplot as plt\nimport numpy as np\n\nfrom tensorflow.python.saved_model import signature_constants\nfrom tensorflow.python.saved_model import tag_constants\nfrom tensorflow.python.compiler.tensorrt import trt_convert as trt\nfrom timeit import default_timer as timer\n\ntf.get_logger().setLevel('ERROR')",
"2.1 Helper functions",
"def get_func_from_saved_model(saved_model_dir):\n saved_model_loaded = tf.saved_model.load(\n saved_model_dir, tags=[tag_constants.SERVING])\n graph_func = saved_model_loaded.signatures[\n signature_constants.DEFAULT_SERVING_SIGNATURE_DEF_KEY]\n return graph_func, saved_model_loaded\n\ndef predict_and_benchmark_throughput(input_dict, model, N_warmup_run=50, N_run=500,\n result_key='predictions', batch_size=None):\n elapsed_time = []\n \n for val in input_dict.values():\n input_batch_size = val.shape[0]\n break\n if batch_size is None or batch_size > input_batch_size:\n batch_size = input_batch_size\n \n print('Benchmarking with batch size', batch_size)\n \n elapsed_time = np.zeros(N_run)\n for i in range(N_warmup_run): \n preds = model(**input_dict)\n \n # Force device synchronization with .numpy()\n tmp = preds[result_key][0].numpy() \n \n for i in range(N_run):\n start_time = timer()\n preds = model(**input_dict)\n # Synchronize\n tmp += preds[result_key][0].numpy() \n end_time = timer()\n elapsed_time[i] = end_time - start_time\n\n if i>=50 and i % 50 == 0:\n print('Steps {}-{} average: {:4.1f}ms'.format(i-50, i, (elapsed_time[i-50:i].mean()) * 1000))\n\n latency = elapsed_time.mean() * 1000\n print('Latency: {:5.2f}+/-{:4.2f}ms'.format(latency, elapsed_time.std() * 1000))\n print('Throughput: {:.0f} samples/s'.format(N_run * batch_size / elapsed_time.sum()))\n return latency\n\ndef trt_convert(input_path, output_path, input_shapes, explicit_batch=False,\n dtype=np.float32, precision='FP32', prof_strategy='Optimal'):\n conv_params=trt.TrtConversionParams(\n precision_mode=precision, minimum_segment_size=50,\n max_workspace_size_bytes=12*1<<30, maximum_cached_engines=1)\n converter = trt.TrtGraphConverterV2(\n input_saved_model_dir=input_path, conversion_params=conv_params,\n use_dynamic_shape=explicit_batch,\n dynamic_shape_profile_strategy=prof_strategy)\n\n converter.convert()\n\n def input_fn():\n for shapes in input_shapes:\n # return a list of input tensors\n yield [np.ones(shape=x).astype(dtype) for x in shapes]\n\n converter.build(input_fn)\n converter.save(output_path)\n \n\ndef random_input(batch_size, seq_length):\n # Generate random input data\n mask = tf.convert_to_tensor(np.ones((batch_size, seq_length), dtype=np.int32))\n type_id = tf.convert_to_tensor(np.zeros((batch_size, seq_length), dtype=np.int32))\n word_id = tf.convert_to_tensor(np.random.randint(0, 1000, size=[batch_size, seq_length], dtype=np.int32))\n return {'input_mask':mask, 'input_type_ids': type_id, 'input_word_ids':word_id}",
"2.2 Convert the model with TF-TRT",
"bert_trt_path = bert_saved_model_path + '_trt'\ninput_shapes = [[(1, 128), (1, 128), (1, 128)]] \ntrt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True, np.int32, precision='FP16')",
"2.3 Run inference with converted model",
"trt_func, _ = get_func_from_saved_model(bert_trt_path)\n\ninput_dict = random_input(1, 128)\nresult_key = 'bert_encoder_1' # 'classifier'\nres = predict_and_benchmark_throughput(input_dict, trt_func, result_key=result_key)",
"Compare to the original function",
"func, model = get_func_from_saved_model(bert_saved_model_path)\nres = predict_and_benchmark_throughput(input_dict, func, result_key=result_key)",
"3. Dynamic sequence length\nThe sequence length for the encoder is dynamic, we can use different input sequence lengths. Here we call the original model for two sequences.",
"seq1 = random_input(1, 128)\nres1 = func(**seq1)\n\nseq2 = random_input(1, 180)\nres2 = func(**seq2)",
"The converted model is optimized for a sequnce length of 128 (and batch size 8). If we infer the converted model using a different sequence length, then two things can happen:\n1. If TrtConversionParams.allow_build_at_runtime == False: native TF model is inferred\n2. if TrtConversionParams.allow_build_at_runtime == True a new TRT engine is created which is optimized for the new sequence length. \nThe first option do not provide TRT accelaration while the second one creates a large overhead while the new engine is constructed. In the next section we convert the model to handle multiple sequence lengths.\n3.1 TRT Conversion with dynamic sequence length",
"bert_trt_path = bert_saved_model_path + '_trt2'\ninput_shapes = [[(1, 128), (1, 128), (1, 128)], [(1, 180), (1, 180), (1, 180)]] \ntrt_convert(bert_saved_model_path, bert_trt_path, input_shapes, True, np.int32, precision='FP16',\n prof_strategy='Range')\n\ntrt_func_dynamic, _ = get_func_from_saved_model(bert_trt_path)\n\ntrt_res = trt_func_dynamic(**seq1)\n\nresult_key = 'bert_encoder_1' # 'classifier'\nres = predict_and_benchmark_throughput(seq1, trt_func_dynamic, result_key=result_key)\n\nres = predict_and_benchmark_throughput(seq2, trt_func_dynamic, result_key=result_key)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
astro4dev/OAD-Data-Science-Toolkit
|
Teaching Materials/Programming/Python/PythonISYA2018/04.Astropy/01_constants_units.ipynb
|
gpl-3.0
|
[
"Units and constants\nDespite being conceptually simple, changing units can become a tedious task. Specially when complex algebraic expressions appear. Fortunately, astropy has the units submodule to perform such tasks.\nThis is allowed by a new class called quantity that inclues both a numerical value and a physical unit.",
"from astropy import units as u # This imports the new class \nimport numpy as np\n\nd = 4.0 * u.parsec \n\nprint(d)\n\ntype(d)",
"This structure can also be initialized with lists and numpy arrays",
"d = np.array([3,6,12]) * u.parsec\n\nprint(d)\n\nd.value # value is one of the attributes of this class\n\nd.unit # the unit is another attribute",
"Now we can quickly change the units of this quantity using the method to()",
"d.to(u.km)",
"The real power of the units submodule comes at the moment of computing quantities with mixed units",
"x = 4.0 * u.parsec # 4 parsec\nt = 6.0 * u.year # 6 years\nv = x/t\n\nprint(v)",
"Let's change the units to $km/s$",
"v.to(u.km/u.s)",
"Physical constants are also available",
"from astropy import constants as c\n\nc.G # The gravitational constant\n\nc.c # The speed of light",
"Exercise 1.1\nThe free fall time is an useful quantity in gravitational studies. \nIt represents the typical time-scale for a system of density $\\rho$ evolving under its own gravity.\nIts functional form is:\n$$\nt_{ff} = \\sqrt{\\frac{1}{G\\rho}}\n$$\nCompute the free fall time in units of years for the following cases\n\nThe Earth-Moon system.\nThe Sun-Earth system. \nThe Milky Way.\n\nExercise 1.2\nThe Alfven velocity (in cgs units) of a plasma with ion number density $n$ and ion mass $m$ in a magnetic field $B$ can be defined as:\n$$\nv = \\frac{B}{\\sqrt{4\\pi n m}}\n$$\nEstimate the Alfven velocity in $km/s$ for the solar wind assuming that you are close to the Solar corona."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
claudiuskerth/PhDthesis
|
Data_analysis/SNP-indel-calling/ANGSD/SnpStat/MAF_by_pval.ipynb
|
mit
|
[
"Table of Contents\n<p>",
"import numpy as np\nimport matplotlib.pyplot as plt\n\n%matplotlib inline\n\nplt.rcParams['figure.figsize'] = [12, 10]\nplt.rcParams['font.size'] = 12\n\n%ll\n\npval, MAF, numSNP = [], [], []\n\nwith open(\"MAF_by_pval_ery\") as f:\n f.readline() # read the first line, but discard (header)\n for line in f:\n one, two, three = line.strip().split(\"\\t\")\n pval.append(float(one))\n MAF.append(float(two))\n numSNP.append(int(three))\n \nnumSNP = np.array(numSNP)\n\nplt.scatter?\n\nplt.scatter(pval, MAF, s=75, c=numSNP, cmap='jet')\nplt.colorbar()\nplt.grid()\nplt.xlabel('p-value')\nplt.ylabel('average MAF')\n\nplt.semilogx?\n\nax = plt.gca()\nax.semilogx()\nplt.scatter(pval, MAF, s=80, c=np.log10(numSNP), cmap='jet')\ncb = plt.colorbar(ticks=[2, 3, 4]) # shrink=0.9, ticks=[1, 2, 3, 4, 5]\ncb.set_label(r\"$log_{10}$\" + \" #SNP\")\nplt.grid()\nplt.xlabel('p-value')\nplt.ylabel('average MAF')\nplt.title('Dependence of HWE p-value on MAF')\n#ax.text(20, 0.445, r\"$log_{10}$\" + \" #SNP\")\nplt.savefig(\"MAF_by_pval_ery.png\")\n\nplt.colorbar?\n\nnp.log10?\n\nnp.log10(numSNP)\n\nplt.annotate?\n\ncb.set_label?",
"I am still looking for a way to set tick labels on the colorbar.\nNow do the same for the SNP's in the PAR population.",
"pval, MAF, numSNP = [], [], []\n\nwith open(\"MAF_by_pval_par\") as f:\n f.readline() # read the first line, but discard (header)\n for line in f:\n one, two, three = line.strip().split(\"\\t\")\n pval.append(float(one))\n MAF.append(float(two))\n numSNP.append(int(three))\n \nnumSNP = np.array(numSNP)\n\nax = plt.gca()\nax.semilogx()\nplt.scatter(pval, MAF, s=80, c=np.log10(numSNP), cmap='jet')\ncb = plt.colorbar(ticks=[2, 3, 4]) # shrink=0.9, ticks=[1, 2, 3, 4, 5]\ncb.set_label(r\"$log_{10}$\" + \" #SNP\")\nplt.grid()\nplt.xlabel('p-value')\nplt.ylabel('average MAF')\nplt.title('Dependence of HWE p-value on MAF')\n#ax.text(20, 0.445, r\"$log_{10}$\" + \" #SNP\")\nplt.savefig(\"MAF_by_pval_par.png\")",
"I have also determined the MAF for SNP with negative F value for different p-value cutoffs.",
"pval, MAF, numSNP = [], [], []\n\nwith open(\"MAF_by_pval_negFis_par\") as f:\n f.readline() # read the first line, but discard (header)\n for line in f:\n one, two, three = line.strip().split(\"\\t\")\n pval.append(float(one))\n MAF.append(float(two))\n numSNP.append(int(three))\n \nnumSNP = np.array(numSNP)\n\nax = plt.gca()\nax.semilogx()\nplt.scatter(pval, MAF, s=80, c=np.log10(numSNP), cmap='jet')\ncb = plt.colorbar(ticks=[2, 3, 4]) # shrink=0.9, ticks=[1, 2, 3, 4, 5]\ncb.set_label(r\"$log_{10}$\" + \" #SNP\")\nplt.grid()\nplt.xlabel('p-value')\nplt.ylabel('average MAF')\nplt.title('Dependence of HWE p-value on MAF')\n#ax.text(20, 0.445, r\"$log_{10}$\" + \" #SNP\")\nplt.savefig(\"MAF_by_pval_par.png\")\n\npval, MAF, numSNP = [], [], []\n\nwith open(\"MAF_by_pval_negFis_ery\") as f:\n f.readline() # read the first line, but discard (header)\n for line in f:\n one, two, three = line.strip().split(\"\\t\")\n pval.append(float(one))\n MAF.append(float(two))\n numSNP.append(int(three))\n \nnumSNP = np.array(numSNP)\n\nax = plt.gca()\nax.semilogx()\nplt.scatter(pval, MAF, s=80, c=np.log10(numSNP), cmap='jet')\ncb = plt.colorbar(ticks=[2, 3, 4]) # shrink=0.9, ticks=[1, 2, 3, 4, 5]\ncb.set_label(r\"$log_{10}$\" + \" #SNP\")\nplt.grid()\nplt.xlabel('p-value')\nplt.ylabel('average MAF')\nplt.title('Dependence of HWE p-value on MAF')\n#ax.text(20, 0.445, r\"$log_{10}$\" + \" #SNP\")\nplt.savefig(\"MAF_by_pval_par.png\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.24/_downloads/d2352ab4b72ce7d1dc05c76bda6ef71d/55_setting_eeg_reference.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Setting the EEG reference\nThis tutorial describes how to set or change the EEG reference in MNE-Python.\nAs usual we'll start by importing the modules we need, loading some\nexample data <sample-dataset>, and cropping it to save memory. Since\nthis tutorial deals specifically with EEG, we'll also restrict the dataset to\njust a few EEG channels so the plots are easier to see:",
"import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False)\nraw.crop(tmax=60).load_data()\nraw.pick(['EEG 0{:02}'.format(n) for n in range(41, 60)])",
"Background\nEEG measures a voltage (difference in electric potential) between each\nelectrode and a reference electrode. This means that whatever signal is\npresent at the reference electrode is effectively subtracted from all the\nmeasurement electrodes. Therefore, an ideal reference signal is one that\ncaptures none of the brain-specific fluctuations in electric potential,\nwhile capturing all of the environmental noise/interference that is being\npicked up by the measurement electrodes.\nIn practice, this means that the reference electrode is often placed in a\nlocation on the subject's body and close to their head (so that any\nenvironmental interference affects the reference and measurement electrodes\nsimilarly) but as far away from the neural sources as possible (so that the\nreference signal doesn't pick up brain-based fluctuations). Typical reference\nlocations are the subject's earlobe, nose, mastoid process, or collarbone.\nEach of these has advantages and disadvantages regarding how much brain\nsignal it picks up (e.g., the mastoids pick up a fair amount compared to the\nothers), and regarding the environmental noise it picks up (e.g., earlobe\nelectrodes may shift easily, and have signals more similar to electrodes on\nthe same side of the head).\nEven in cases where no electrode is specifically designated as the reference,\nEEG recording hardware will still treat one of the scalp electrodes as the\nreference, and the recording software may or may not display it to you (it\nmight appear as a completely flat channel, or the software might subtract out\nthe average of all signals before displaying, making it look like there is\nno reference).\nSetting or changing the reference channel\nIf you want to recompute your data with a different reference than was used\nwhen the raw data were recorded and/or saved, MNE-Python provides the\n:meth:~mne.io.Raw.set_eeg_reference method on :class:~mne.io.Raw objects\nas well as the :func:mne.add_reference_channels function. To use an\nexisting channel as the new reference, use the\n:meth:~mne.io.Raw.set_eeg_reference method; you can also designate multiple\nexisting electrodes as reference channels, as is sometimes done with mastoid\nreferences:",
"# code lines below are commented out because the sample data doesn't have\n# earlobe or mastoid channels, so this is just for demonstration purposes:\n\n# use a single channel reference (left earlobe)\n# raw.set_eeg_reference(ref_channels=['A1'])\n\n# use average of mastoid channels as reference\n# raw.set_eeg_reference(ref_channels=['M1', 'M2'])\n\n# use a bipolar reference (contralateral)\n# raw.set_bipolar_reference(anode='[F3'], cathode=['F4'])",
"If a scalp electrode was used as reference but was not saved alongside the\nraw data (reference channels often aren't), you may wish to add it back to\nthe dataset before re-referencing. For example, if your EEG system recorded\nwith channel Fp1 as the reference but did not include Fp1 in the data\nfile, using :meth:~mne.io.Raw.set_eeg_reference to set (say) Cz as the\nnew reference will then subtract out the signal at Cz without restoring\nthe signal at Fp1. In this situation, you can add back Fp1 as a flat\nchannel prior to re-referencing using :func:~mne.add_reference_channels.\n(Since our example data doesn't use the 10-20 electrode naming system_, the\nexample below adds EEG 999 as the missing reference, then sets the\nreference to EEG 050.) Here's how the data looks in its original state:",
"raw.plot()",
"By default, :func:~mne.add_reference_channels returns a copy, so we can go\nback to our original raw object later. If you wanted to alter the\nexisting :class:~mne.io.Raw object in-place you could specify\ncopy=False.",
"# add new reference channel (all zero)\nraw_new_ref = mne.add_reference_channels(raw, ref_channels=['EEG 999'])\nraw_new_ref.plot()",
".. KEEP THESE BLOCKS SEPARATE SO FIGURES ARE BIG ENOUGH TO READ",
"# set reference to `EEG 050`\nraw_new_ref.set_eeg_reference(ref_channels=['EEG 050'])\nraw_new_ref.plot()",
"Notice that the new reference (EEG 050) is now flat, while the original\nreference channel that we added back to the data (EEG 999) has a non-zero\nsignal. Notice also that EEG 053 (which is marked as \"bad\" in\nraw.info['bads']) is not affected by the re-referencing.\nSetting average reference\nTo set a \"virtual reference\" that is the average of all channels, you can use\n:meth:~mne.io.Raw.set_eeg_reference with ref_channels='average'. Just\nas above, this will not affect any channels marked as \"bad\", nor will it\ninclude bad channels when computing the average. However, it does modify the\n:class:~mne.io.Raw object in-place, so we'll make a copy first so we can\nstill go back to the unmodified :class:~mne.io.Raw object later:",
"# use the average of all channels as reference\nraw_avg_ref = raw.copy().set_eeg_reference(ref_channels='average')\nraw_avg_ref.plot()",
"Creating the average reference as a projector\nIf using an average reference, it is possible to create the reference as a\n:term:projector rather than subtracting the reference from the data\nimmediately by specifying projection=True:",
"raw.set_eeg_reference('average', projection=True)\nprint(raw.info['projs'])",
"Creating the average reference as a projector has a few advantages:\n\n\nIt is possible to turn projectors on or off when plotting, so it is easy\n to visualize the effect that the average reference has on the data.\n\n\nIf additional channels are marked as \"bad\" or if a subset of channels are\n later selected, the projector will be re-computed to take these changes\n into account (thus guaranteeing that the signal is zero-mean).\n\n\nIf there are other unapplied projectors affecting the EEG channels (such\n as SSP projectors for removing heartbeat or blink artifacts), EEG\n re-referencing cannot be performed until those projectors are either\n applied or removed; adding the EEG reference as a projector is not subject\n to that constraint. (The reason this wasn't a problem when we applied the\n non-projector average reference to raw_avg_ref above is that the\n empty-room projectors included in the sample data :file:.fif file were\n only computed for the magnetometers.)",
"for title, proj in zip(['Original', 'Average'], [False, True]):\n fig = raw.plot(proj=proj, n_channels=len(raw))\n # make room for title\n fig.subplots_adjust(top=0.9)\n fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')",
"Using an infinite reference (REST)\nTo use the \"point at infinity\" reference technique described in\n:footcite:Yao2001 requires a forward model, which we can create in a few\nsteps. Here we use a fairly large spacing of vertices (pos = 15 mm) to\nreduce computation time; a 5 mm spacing is more typical for real data\nanalysis:",
"raw.del_proj() # remove our average reference projector first\nsphere = mne.make_sphere_model('auto', 'auto', raw.info)\nsrc = mne.setup_volume_source_space(sphere=sphere, exclude=30., pos=15.)\nforward = mne.make_forward_solution(raw.info, trans=None, src=src, bem=sphere)\nraw_rest = raw.copy().set_eeg_reference('REST', forward=forward)\n\nfor title, _raw in zip(['Original', 'REST (∞)'], [raw, raw_rest]):\n fig = _raw.plot(n_channels=len(raw), scalings=dict(eeg=5e-5))\n # make room for title\n fig.subplots_adjust(top=0.9)\n fig.suptitle('{} reference'.format(title), size='xx-large', weight='bold')",
"Using a bipolar reference\nTo create a bipolar reference, you can use :meth:~mne.set_bipolar_reference\nalong with the respective channel names for anode and cathode which\ncreates a new virtual channel that takes the difference between two\nspecified channels (anode and cathode) and drops the original channels by\ndefault. The new virtual channel will be annotated with the channel info of\nthe anode with location set to (0, 0, 0) and coil type set to\nEEG_BIPOLAR by default. Here we use a contralateral/transverse bipolar\nreference between channels EEG 054 and EEG 055 as described in\n:footcite:YaoEtAl2019 which creates a new virtual channel\nnamed EEG 054-EEG 055.",
"raw_bip_ref = mne.set_bipolar_reference(raw, anode=['EEG 054'],\n cathode=['EEG 055'])\nraw_bip_ref.plot()",
"EEG reference and source modeling\nIf you plan to perform source modeling (either with EEG or combined EEG/MEG\ndata), it is strongly recommended to use the\naverage-reference-as-projection approach. It is important to use an average\nreference because using a specific\nreference sensor (or even an average of a few sensors) spreads the forward\nmodel error from the reference sensor(s) into all sensors, effectively\namplifying the importance of the reference sensor(s) when computing source\nestimates. In contrast, using the average of all EEG channels as reference\nspreads the forward modeling error evenly across channels, so no one channel\nis weighted more strongly during source estimation. See also this FieldTrip\nFAQ on average referencing_ for more information.\nThe main reason for specifying the average reference as a projector was\nmentioned in the previous section: an average reference projector adapts if\nchannels are dropped, ensuring that the signal will always be zero-mean when\nthe source modeling is performed. In contrast, applying an average reference\nby the traditional subtraction method offers no such guarantee.\nFor these reasons, when performing inverse imaging, MNE-Python will raise\na ValueError if there are EEG channels present and something other than\nan average reference strategy has been specified.\n.. LINKS\nhttp://www.fieldtriptoolbox.org/faq/why_should_i_use_an_average_reference_for_eeg_source_reconstruction/\n https://en.wikipedia.org/wiki/10%E2%80%9320_system_(EEG)\nReferences\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
msadegh97/machine-learning-course
|
appendix-02-Numpy_Pandas.ipynb
|
gpl-3.0
|
[
"NumPy\nNumPy is a Linear Algebra Library for Python.\nNumPy’s main object is the homogeneous multidimensional array. It is a table of elements (usually numbers), all of the same type, indexed by a tuple of positive integers. In NumPy dimensions are called axes. The number of axes is rank.\nFor example, the coordinates of a point in 3D space [1, 2, 1] is an array of rank 1, because it has one axis. That axis has a length of 3. In the example pictured below, the array has rank 2 (it is 2-dimensional).\nNumpy is also incredibly fast, as it has bindings to C libraries.\nFor easy installing Numpy:\nbash \nsudo pip3 install numpy\nNumPy array",
"import numpy as np \n\na = [1,2,3]\n\na\n\nb = np.array(a)\nb\n\nnp.arange(1, 10)\n\nnp.arange(1, 10, 2)",
"zeros , ones and eye\nnp.zeros\nReturn a new array of given shape and type, filled with zeros.",
"np.zeros(2, dtype=float)\n\nnp.zeros((2,3))",
"ones\nReturn a new array of given shape and type, filled with ones.",
"np.ones(3, )",
"eye\nReturn a 2-D array with ones on the diagonal and zeros elsewhere.",
"np.eye(3)",
"linspace\nReturns num evenly spaced samples, calculated over the interval [start, stop].",
"np.linspace(1, 11, 3)",
"Random number and matrix\nrand\nRandom values in a given shape.",
"np.random.rand(2)\n\nnp.random.rand(2,3,4)",
"randn\nReturn a sample (or samples) from the \"standard normal\" distribution.\n\nandom.standard_normal Similar, but takes a tuple as its argument.",
"np.random.randn(2,3)",
"random\nReturn random floats in the half-open interval [0.0, 1.0).",
"np.random.random()",
"randint\nReturn n random integers (by default one integer) from low (inclusive) to high (exclusive).",
"np.random.randint(1,50,10)\n\nnp.random.randint(1,40)",
"Shape and Reshape\nshape return the shape of data and reshape returns an array containing the same data with a new shape",
"zero = np.zeros([3,4])\nprint(zero , ' ' ,'shape of a :' , zero.shape)\nzero = zero.reshape([2,6])\nprint()\nprint(zero)\n",
"Basic Operation\nElement wise product and matrix product",
"number = np.array([[1,2,],\n [3,4]])\nnumber2 = np.array([[1,3],[2,1]])\n\nprint('element wise product :\\n',number * number2 )\nprint('matrix product :\\n',number.dot(number2)) ## also can use : np.dot(number, number2)",
"min max argmin argmax mean",
"numbers = np.random.randint(1,100, 10)\nprint(numbers)\nprint('max is :', numbers.max())\nprint('index of max :', numbers.argmax())\nprint('min is :', numbers.min())\nprint('index of min :', numbers.argmin())\nprint('mean :', numbers.mean())",
"Universal function\nnumpy also has some funtion for mathmatical operation like exp, log, sqrt, abs and etc .\nfor find more function click here",
"number = np.arange(1,10).reshape(3,3)\nprint(number)\nprint()\nprint('exp:\\n', np.exp(number))\nprint()\nprint('sqrt:\\n',np.sqrt(number))",
"dtype",
"numbers.dtype",
"No copy & Shallow copy & Deep copy\n\n\nNo copy\n ###### Simple assignments make no copy of array objects or of their data.",
"number = np.arange(0,20)\nnumber2 = number \nprint (number is number2 , id(number), id(number2))\nprint(number)\nnumber2.shape = (4,5)\nprint(number)",
"Shallow copy\n\n\nDifferent array objects can share the same data. The view method creates a new array object that looks at the same data.",
"number = np.arange(0,20)\nnumber2 = number.view()\nprint (number is number2 , id(number), id(number2))\n\nnumber2.shape = (5,4)\nprint('number2 shape:', number2.shape,'\\nnumber shape:', number.shape)\n\nprint('befor:', number)\nnumber2[0][0] = 2222\nprint()\nprint('after:', number)",
"Deep copy\n\n\nThe copy method makes a complete copy of the array and its data.",
"number = np.arange(0,20)\nnumber2 = number.copy()\nprint (number is number2 , id(number), id(number2))\n\n\nprint('befor:', number)\nnumber2[0] = 10\nprint()\nprint('after:', number)\nprint()\nprint('number2:',number2)",
"Broadcating\n###### One of important concept to understand numpy is Broadcasting \n It's very useful for performancing mathmaica operation beetween arrays of different shape.",
"number = np.arange(1,11)\nnum = 2 \nprint(' number =', number)\nprint('\\n number .* num =',number * num)\n\nnumber = np.arange(1,10).reshape(3,3)\nnumber2 = np.arange(1,4).reshape(1,3)\nnumber * number2\n\nnumber = np.array([1,2,3])\nprint('number =', number)\nprint('\\nnumber =', number + 100)\n\nnumber = np.arange(1,10).reshape(3,3)\nnumber2 = np.arange(1,4)\nprint('number: \\n', number)\nadd = number + number2 \nprint()\nprint('number2: \\n ', number2)\nprint()\nprint('add: \\n', add)",
"If you still doubt Why we use Python and NumPy see it. 😉",
"from time import time\na = np.random.rand(8000000, 1)\nc = 0\ntic = time()\nfor i in range(len(a)):\n c +=(a[i][0] * a[i][0])\n \nprint ('output1:', c)\ntak = time()\n\nprint('multiply 2 matrix with loop: ', tak - tic)\n\ntic = time()\nprint('output2:', np.dot(a.T, a))\ntak = time()\n\n\nprint('multiply 2 matrix with numpy func: ', tak - tic)",
"I tried to write essential things in numpy that you can start to code and enjoy it but there are many function that i don't write in this book if you neet more informatino click here\nPandas\npandas is an open source library providing high-performance, easy-to-use data structures and data analysis tools for the Python programming language.\nFor easy installing Pandas\nbash \nsudo pip3 install pandas",
"import pandas as pd ",
"Series",
"labels = ['a','b','c']\nmy_list = [10,20,30]\narr = np.array([10,20,30])\nd = {'a':10,'b':20,'c':30}\n\npd.Series(data=my_list)\n\npd.Series(data=my_list,index=labels)\n\npd.Series(d)",
"Dataframe\nTwo-dimensional size-mutable, potentially heterogeneous tabular data structure with labeled axes (rows and columns). Arithmetic operations align on both row and column labels. Can be thought of as a dict-like container for Series objects. The primary pandas data structure",
"dataframe = pd.DataFrame(np.random.randn(5,4),columns=['A','B','V','D'])\n\ndataframe.head()",
"Selection",
"dataframe['A']\n\ndataframe[['A', 'D']]",
"creating new column",
"dataframe['E'] = dataframe['A'] + dataframe['B']\n\ndataframe",
"removing a column",
"dataframe.drop('E', axis=1)\n\ndataframe\n\ndataframe.drop('E', axis=1, inplace=True)\ndataframe",
"Selcting row",
"dataframe.loc[0]\n\ndataframe.iloc[0]\n\ndataframe.loc[0 , 'A']\n\ndataframe.loc[[0,2],['A', 'C']]",
"Conditional Selection",
"dataframe > 0.3\n\ndataframe[dataframe > 0.3 ]\n\ndataframe[dataframe['A']>0.3]\n\ndataframe[dataframe['A']>0.3]['B']\n\ndataframe[(dataframe['A']>0.5) & (dataframe['C'] > 0)]",
"Multi-Index and Index Hierarchy",
"layer1 = ['g1','g1','g1','g2','g2','g2']\nlayer2 = [1,2,3,1,2,3]\nhier_index = list(zip(layer1,layer2))\nhier_index = pd.MultiIndex.from_tuples(hier_index)\n\nhier_index\n\ndataframe2 = pd.DataFrame(np.random.randn(6,2),index=hier_index,columns=['A','B'])\n\ndataframe2\n\ndataframe2.loc['g1']\n\ndataframe2.loc['g1'].loc[1]",
"Input and output",
"titanic = pd.read_csv('Datasets/titanic.csv')\n\npd.read\n\ntitanic.head()\n\ntitanic.drop('Name', axis=1 , inplace = True)\n\ntitanic.head()\n\ntitanic.to_csv('Datasets/titanic_drop_names.csv')",
"csv is one of the most important format but Pandas compatible with many other format like html table , sql, json and etc.\nMising data (NaN)",
"titanic.head()\n\ntitanic.dropna()\n\ntitanic.dropna(axis=1)\n\ntitanic.fillna('Fill NaN').head()",
"Concating merging and ...",
"df1 = pd.DataFrame({'A': ['A0', 'A1', 'A2', 'A3'],\n 'B': ['B0', 'B1', 'B2', 'B3'],\n 'C': ['C0', 'C1', 'C2', 'C3'],\n 'D': ['D0', 'D1', 'D2', 'D3']},\n index=[0, 1, 2, 3])\n\ndf2 = pd.DataFrame({'A': ['A4', 'A5', 'A6', 'A7'],\n 'B': ['B4', 'B5', 'B6', 'B7'],\n 'C': ['C4', 'C5', 'C6', 'C7'],\n 'D': ['D4', 'D5', 'D6', 'D7']},\n index=[4, 5, 6, 7]) \n\ndf3 = pd.DataFrame({'A': ['A8', 'A9', 'A10', 'A11'],\n 'B': ['B8', 'B9', 'B10', 'B11'],\n 'C': ['C8', 'C9', 'C10', 'C11'],\n 'D': ['D8', 'D9', 'D10', 'D11']},\n index=[8, 9, 10, 11])\n\ndf1\n\ndf2\n\ndf3",
"Concatenation",
"frames = [df1, df2, df3 ]\n\npd.concat(frames)\n#pd.concat(frames, ignore_index=True)\n\npd.concat(frames, axis=1)\n\ndf1.append(df2)",
"Mergeing",
"left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],\n 'A': ['A0', 'A1', 'A2', 'A3'],\n 'B': ['B0', 'B1', 'B2', 'B3']})\n \nright = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],\n 'C': ['C0', 'C1', 'C2', 'C3'],\n 'D': ['D0', 'D1', 'D2', 'D3']}) \n\nleft\n\nright\n\npd.merge(left, right, on= 'key')\n\nleft = pd.DataFrame({'key1': ['K0', 'K0', 'K1', 'K2'],\n 'key2': ['K0', 'K1', 'K0', 'K1'],\n 'A': ['A0', 'A1', 'A2', 'A3'],\n 'B': ['B0', 'B1', 'B2', 'B3']})\n \nright = pd.DataFrame({'key1': ['K0', 'K1', 'K1', 'K2'],\n 'key2': ['K0', 'K0', 'K0', 'K0'],\n 'C': ['C0', 'C1', 'C2', 'C3'],\n 'D': ['D0', 'D1', 'D2', 'D3']})\n\npd.merge(left, right, on=['key1', 'key2'])\n\npd.merge(left, right, how='outer', on=['key1', 'key2'])\n\npd.merge(left, right, how='left', on=['key1', 'key2'])\n\npd.merge(left, right, how='right', on=['key1', 'key2'])",
"Joining",
"left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],\n 'B': ['B0', 'B1', 'B2']},\n index=['K0', 'K1', 'K2']) \n\nright = pd.DataFrame({'C': ['C0', 'C2', 'C3'],\n 'D': ['D0', 'D2', 'D3']},\n index=['K0', 'K2', 'K3'])\n\nleft\n\nright\n\nleft.join(right)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JanetMatsen/Machine_Learning_CSE_546
|
HW2/notebooks/Q-1-1-3_Multiclass_Ridge.ipynb
|
mit
|
[
"Question_1-1-3_Multiclass_Ridge\nJanet Matsen\nCode notes:\n* Indivudal regressions are done by instinces of RidgeRegression, defined in rige_regression.py.\n * RidgeRegression gets some methods from ClassificationBase, defined in classification_base.py.\n* The class HyperparameterExplorer in hyperparameter_explorer is used to tune hyperparameters on training data.",
"import numpy as np\nimport matplotlib as mpl\n%matplotlib inline\nimport time\n\nimport pandas as pd\nimport seaborn as sns\n\nfrom mnist import MNIST # public package for making arrays out of MINST data.\n\nimport sys\nsys.path.append('../code/')\n\nfrom ridge_regression import RidgeMulti\nfrom hyperparameter_explorer import HyperparameterExplorer\n\nfrom mnist_helpers import mnist_training, mnist_testing\n\nimport matplotlib.pyplot as plt\nfrom pylab import rcParams\nrcParams['figure.figsize'] = 4, 3",
"Prepare MNIST training data",
"train_X, train_y = mnist_training()\ntest_X, test_y = mnist_testing()",
"Explore hyperparameters before training model on all of the training data.",
"hyper_explorer = HyperparameterExplorer(X=train_X, y=train_y, \n model=RidgeMulti, \n validation_split=0.1, score_name = 'training RMSE', \n use_prev_best_weights=False,\n test_X=test_X, test_y=test_y)\n\nhyper_explorer.train_model(lam=1e10, verbose=False)\n\nhyper_explorer.train_model(lam=1e+08, verbose=False)\nhyper_explorer.train_model(lam=1e+07, verbose=False)\n\nhyper_explorer.train_model(lam=1e+06, verbose=False)\n\nhyper_explorer.train_model(lam=1e5, verbose=False)\nhyper_explorer.train_model(lam=1e4, verbose=False)\nhyper_explorer.train_model(lam=1e03, verbose=False)\nhyper_explorer.train_model(lam=1e2, verbose=False)\n\nhyper_explorer.train_model(lam=1e1, verbose=False)\n\nhyper_explorer.train_model(lam=1e0, verbose=False)\nhyper_explorer.train_model(lam=1e-1, verbose=False)\nhyper_explorer.train_model(lam=1e-2, verbose=False)\nhyper_explorer.train_model(lam=1e-3, verbose=False)\nhyper_explorer.train_model(lam=1e-4, verbose=False)\nhyper_explorer.train_model(lam=1e-5, verbose=False)\n\nhyper_explorer.summary\n\nhyper_explorer.plot_fits()\n\nt = time.localtime(time.time())\n\nhyper_explorer.plot_fits(filename = \"Q-1-1-3_val_and_train_RMSE_{}-{}\".format(t.tm_mon, t.tm_mday))\n\nhyper_explorer.plot_fits(ylim=(.6,.7),\n filename = \"Q-1-1-3_val_and_train_RMSE_zoomed_in{}-{}\".format(t.tm_mon, t.tm_mday))\n\nhyper_explorer.best('score')\n\nhyper_explorer.best('summary')\n\nhyper_explorer.best('best score')\n\nhyper_explorer.train_on_whole_training_set(lam=1e7)\n\nhyper_explorer.final_model.results_row()\n\nhyper_explorer.evaluate_test_data()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/awi/cmip6/models/awi-cm-1-0-lr/atmos.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Atmos\nMIP Era: CMIP6\nInstitute: AWI\nSource ID: AWI-CM-1-0-LR\nTopic: Atmos\nSub-Topics: Dynamical Core, Radiation, Turbulence Convection, Microphysics Precipitation, Cloud Scheme, Observation Simulation, Gravity Waves, Solar, Volcanos. \nProperties: 156 (127 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:37\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'awi', 'awi-cm-1-0-lr', 'atmos')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Overview\n2. Key Properties --> Resolution\n3. Key Properties --> Timestepping\n4. Key Properties --> Orography\n5. Grid --> Discretisation\n6. Grid --> Discretisation --> Horizontal\n7. Grid --> Discretisation --> Vertical\n8. Dynamical Core\n9. Dynamical Core --> Top Boundary\n10. Dynamical Core --> Lateral Boundary\n11. Dynamical Core --> Diffusion Horizontal\n12. Dynamical Core --> Advection Tracers\n13. Dynamical Core --> Advection Momentum\n14. Radiation\n15. Radiation --> Shortwave Radiation\n16. Radiation --> Shortwave GHG\n17. Radiation --> Shortwave Cloud Ice\n18. Radiation --> Shortwave Cloud Liquid\n19. Radiation --> Shortwave Cloud Inhomogeneity\n20. Radiation --> Shortwave Aerosols\n21. Radiation --> Shortwave Gases\n22. Radiation --> Longwave Radiation\n23. Radiation --> Longwave GHG\n24. Radiation --> Longwave Cloud Ice\n25. Radiation --> Longwave Cloud Liquid\n26. Radiation --> Longwave Cloud Inhomogeneity\n27. Radiation --> Longwave Aerosols\n28. Radiation --> Longwave Gases\n29. Turbulence Convection\n30. Turbulence Convection --> Boundary Layer Turbulence\n31. Turbulence Convection --> Deep Convection\n32. Turbulence Convection --> Shallow Convection\n33. Microphysics Precipitation\n34. Microphysics Precipitation --> Large Scale Precipitation\n35. Microphysics Precipitation --> Large Scale Cloud Microphysics\n36. Cloud Scheme\n37. Cloud Scheme --> Optical Cloud Properties\n38. Cloud Scheme --> Sub Grid Scale Water Distribution\n39. Cloud Scheme --> Sub Grid Scale Ice Distribution\n40. Observation Simulation\n41. Observation Simulation --> Isscp Attributes\n42. Observation Simulation --> Cosp Attributes\n43. Observation Simulation --> Radar Inputs\n44. Observation Simulation --> Lidar Inputs\n45. Gravity Waves\n46. Gravity Waves --> Orographic Gravity Waves\n47. Gravity Waves --> Non Orographic Gravity Waves\n48. Solar\n49. Solar --> Solar Pathways\n50. Solar --> Solar Constant\n51. Solar --> Orbital Parameters\n52. Solar --> Insolation Ozone\n53. Volcanos\n54. Volcanos --> Volcanoes Treatment \n1. Key Properties --> Overview\nTop level key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of atmosphere model code (CAM 4.0, ARPEGE 3.2,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of atmospheric model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"AGCM\" \n# \"ARCM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.overview.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"primitive equations\" \n# \"non-hydrostatic\" \n# \"anelastic\" \n# \"Boussinesq\" \n# \"hydrostatic\" \n# \"quasi-hydrostatic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Resolution\nCharacteristics of the model resolution\n2.1. Horizontal Resolution Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of the model grid, e.g. T42, N48.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.horizontal_resolution_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, e.g. 2.5 x 3.75 degrees lat-lon.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 1 deg (Equator) - 0.5 deg",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.4. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on the computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.5. High Top\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the atmosphere have a high-top? High-Top atmospheres have a fully resolved stratosphere with a model top above the stratopause.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.resolution.high_top') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping\nCharacteristics of the atmosphere model time stepping\n3.1. Timestep Dynamics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTimestep for the dynamics, e.g. 30 min.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. Timestep Shortwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the shortwave radiative transfer, e.g. 1.5 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_shortwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. Timestep Longwave Radiative Transfer\nIs Required: FALSE Type: STRING Cardinality: 0.1\nTimestep for the longwave radiative transfer, e.g. 3 hours.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.timestepping.timestep_longwave_radiative_transfer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Orography\nCharacteristics of the model orography\n4.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the orography.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"modified\" \n# TODO - please enter value(s)\n",
"4.2. Changes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nIf the orography type is modified describe the time adaptation changes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.key_properties.orography.changes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"related to ice sheets\" \n# \"related to tectonics\" \n# \"modified mean\" \n# \"modified variance if taken into account in model (cf gravity waves)\" \n# TODO - please enter value(s)\n",
"5. Grid --> Discretisation\nAtmosphere grid discretisation\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of grid discretisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Discretisation --> Horizontal\nAtmosphere discretisation in the horizontal\n6.1. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spectral\" \n# \"fixed grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"finite elements\" \n# \"finite volumes\" \n# \"finite difference\" \n# \"centered finite difference\" \n# TODO - please enter value(s)\n",
"6.3. Scheme Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation function order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.scheme_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"second\" \n# \"third\" \n# \"fourth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.4. Horizontal Pole\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal discretisation pole singularity treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.horizontal_pole') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"filter\" \n# \"pole rotation\" \n# \"artificial island\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.5. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gaussian\" \n# \"Latitude-Longitude\" \n# \"Cubed-Sphere\" \n# \"Icosahedral\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"7. Grid --> Discretisation --> Vertical\nAtmosphere discretisation in the vertical\n7.1. Coordinate Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nType of vertical coordinate system",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.grid.discretisation.vertical.coordinate_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"isobaric\" \n# \"sigma\" \n# \"hybrid sigma-pressure\" \n# \"hybrid pressure\" \n# \"vertically lagrangian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8. Dynamical Core\nCharacteristics of the dynamical core\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere dynamical core",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the dynamical core of the model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Timestepping Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTimestepping framework type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.timestepping_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Adams-Bashforth\" \n# \"explicit\" \n# \"implicit\" \n# \"semi-implicit\" \n# \"leap frog\" \n# \"multi-step\" \n# \"Runge Kutta fifth order\" \n# \"Runge Kutta second order\" \n# \"Runge Kutta third order\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of the model prognostic variables",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface pressure\" \n# \"wind components\" \n# \"divergence/curl\" \n# \"temperature\" \n# \"potential temperature\" \n# \"total water\" \n# \"water vapour\" \n# \"water liquid\" \n# \"water ice\" \n# \"total water moments\" \n# \"clouds\" \n# \"radiation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9. Dynamical Core --> Top Boundary\nType of boundary layer at the top of the model\n9.1. Top Boundary Condition\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTop boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_boundary_condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Top Heat\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary heat treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Top Wind\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop boundary wind treatment",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.top_boundary.top_wind') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Dynamical Core --> Lateral Boundary\nType of lateral boundary condition (if the model is a regional model)\n10.1. Condition\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nType of lateral boundary condition",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.lateral_boundary.condition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sponge layer\" \n# \"radiation boundary condition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11. Dynamical Core --> Diffusion Horizontal\nHorizontal diffusion scheme\n11.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nHorizontal diffusion scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal diffusion scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.diffusion_horizontal.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"iterated Laplacian\" \n# \"bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Dynamical Core --> Advection Tracers\nTracer advection scheme\n12.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nTracer advection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heun\" \n# \"Roe and VanLeer\" \n# \"Roe and Superbee\" \n# \"Prather\" \n# \"UTOPIA\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Eulerian\" \n# \"modified Euler\" \n# \"Lagrangian\" \n# \"semi-Lagrangian\" \n# \"cubic semi-Lagrangian\" \n# \"quintic semi-Lagrangian\" \n# \"mass-conserving\" \n# \"finite volume\" \n# \"flux-corrected\" \n# \"linear\" \n# \"quadratic\" \n# \"quartic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.3. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTracer advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"dry mass\" \n# \"tracer mass\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.4. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracer advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_tracers.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Priestley algorithm\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Dynamical Core --> Advection Momentum\nMomentum advection scheme\n13.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMomentum advection schemes name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"VanLeer\" \n# \"Janjic\" \n# \"SUPG (Streamline Upwind Petrov-Galerkin)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Scheme Characteristics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_characteristics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"2nd order\" \n# \"4th order\" \n# \"cell-centred\" \n# \"staggered grid\" \n# \"semi-staggered grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Scheme Staggering Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.scheme_staggering_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa D-grid\" \n# \"Arakawa E-grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Conserved Quantities\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMomentum advection scheme conserved quantities",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conserved_quantities') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Angular momentum\" \n# \"Horizontal momentum\" \n# \"Enstrophy\" \n# \"Mass\" \n# \"Total energy\" \n# \"Vorticity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Conservation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMomentum advection scheme conservation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.dynamical_core.advection_momentum.conservation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"conservation fixer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Radiation\nCharacteristics of the atmosphere radiation process\n14.1. Aerosols\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nAerosols whose radiative effect is taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.aerosols') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"sulphate\" \n# \"nitrate\" \n# \"sea salt\" \n# \"dust\" \n# \"ice\" \n# \"organic\" \n# \"BC (black carbon / soot)\" \n# \"SOA (secondary organic aerosols)\" \n# \"POM (particulate organic matter)\" \n# \"polar stratospheric ice\" \n# \"NAT (nitric acid trihydrate)\" \n# \"NAD (nitric acid dihydrate)\" \n# \"STS (supercooled ternary solution aerosol particle)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Radiation --> Shortwave Radiation\nProperties of the shortwave radiation scheme\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of shortwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nShortwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nShortwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nShortwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Radiation --> Shortwave GHG\nRepresentation of greenhouse gases in the shortwave radiation scheme\n16.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose shortwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose shortwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Radiation --> Shortwave Cloud Ice\nShortwave radiative properties of ice crystals in clouds\n17.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18. Radiation --> Shortwave Cloud Liquid\nShortwave radiative properties of liquid droplets in clouds\n18.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Radiation --> Shortwave Cloud Inhomogeneity\nCloud inhomogeneity in the shortwave radiation scheme\n19.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Radiation --> Shortwave Aerosols\nShortwave radiative properties of aerosols\n20.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the shortwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21. Radiation --> Shortwave Gases\nShortwave radiative properties of gases\n21.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral shortwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.shortwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Radiation --> Longwave Radiation\nProperties of the longwave radiation scheme\n22.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of longwave radiation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the longwave radiation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Spectral Integration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLongwave radiation scheme spectral integration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_integration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"wide-band model\" \n# \"correlated-k\" \n# \"exponential sum fitting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.4. Transport Calculation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLongwave radiation transport calculation methods",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.transport_calculation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"two-stream\" \n# \"layer interaction\" \n# \"bulk\" \n# \"adaptive\" \n# \"multi-stream\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.5. Spectral Intervals\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nLongwave radiation scheme number of spectral intervals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_radiation.spectral_intervals') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23. Radiation --> Longwave GHG\nRepresentation of greenhouse gases in the longwave radiation scheme\n23.1. Greenhouse Gas Complexity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nComplexity of greenhouse gases whose longwave radiative effects are taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.greenhouse_gas_complexity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CO2\" \n# \"CH4\" \n# \"N2O\" \n# \"CFC-11 eq\" \n# \"CFC-12 eq\" \n# \"HFC-134a eq\" \n# \"Explicit ODSs\" \n# \"Explicit other fluorinated gases\" \n# \"O3\" \n# \"H2O\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. ODS\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOzone depleting substances whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.ODS') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CFC-12\" \n# \"CFC-11\" \n# \"CFC-113\" \n# \"CFC-114\" \n# \"CFC-115\" \n# \"HCFC-22\" \n# \"HCFC-141b\" \n# \"HCFC-142b\" \n# \"Halon-1211\" \n# \"Halon-1301\" \n# \"Halon-2402\" \n# \"methyl chloroform\" \n# \"carbon tetrachloride\" \n# \"methyl chloride\" \n# \"methylene chloride\" \n# \"chloroform\" \n# \"methyl bromide\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Other Flourinated Gases\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nOther flourinated gases whose longwave radiative effects are explicitly taken into account in the atmosphere model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_GHG.other_flourinated_gases') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"HFC-134a\" \n# \"HFC-23\" \n# \"HFC-32\" \n# \"HFC-125\" \n# \"HFC-143a\" \n# \"HFC-152a\" \n# \"HFC-227ea\" \n# \"HFC-236fa\" \n# \"HFC-245fa\" \n# \"HFC-365mfc\" \n# \"HFC-43-10mee\" \n# \"CF4\" \n# \"C2F6\" \n# \"C3F8\" \n# \"C4F10\" \n# \"C5F12\" \n# \"C6F14\" \n# \"C7F16\" \n# \"C8F18\" \n# \"c-C4F8\" \n# \"NF3\" \n# \"SF6\" \n# \"SO2F2\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Radiation --> Longwave Cloud Ice\nLongwave radiative properties of ice crystals in clouds\n24.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud ice crystals",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Physical Reprenstation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.physical_reprenstation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bi-modal size distribution\" \n# \"ensemble of ice crystals\" \n# \"mean projected area\" \n# \"ice water path\" \n# \"crystal asymmetry\" \n# \"crystal aspect ratio\" \n# \"effective crystal radius\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud ice crystals in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_ice.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25. Radiation --> Longwave Cloud Liquid\nLongwave radiative properties of liquid droplets in clouds\n25.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with cloud liquid droplets",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud droplet number concentration\" \n# \"effective cloud droplet radii\" \n# \"droplet size distribution\" \n# \"liquid water path\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to cloud liquid droplets in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_liquid.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"geometric optics\" \n# \"Mie theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Radiation --> Longwave Cloud Inhomogeneity\nCloud inhomogeneity in the longwave radiation scheme\n26.1. Cloud Inhomogeneity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod for taking into account horizontal cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_cloud_inhomogeneity.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Monte Carlo Independent Column Approximation\" \n# \"Triplecloud\" \n# \"analytic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27. Radiation --> Longwave Aerosols\nLongwave radiative properties of aerosols\n27.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with aerosols",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Physical Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical representation of aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.physical_representation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"number concentration\" \n# \"effective radii\" \n# \"size distribution\" \n# \"asymmetry\" \n# \"aspect ratio\" \n# \"mixing state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.3. Optical Methods\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOptical methods applicable to aerosols in the longwave radiation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_aerosols.optical_methods') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"T-matrix\" \n# \"geometric optics\" \n# \"finite difference time domain (FDTD)\" \n# \"Mie theory\" \n# \"anomalous diffraction approximation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28. Radiation --> Longwave Gases\nLongwave radiative properties of gases\n28.1. General Interactions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nGeneral longwave radiative interactions with gases",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.radiation.longwave_gases.general_interactions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"scattering\" \n# \"emission/absorption\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29. Turbulence Convection\nAtmosphere Convective Turbulence and Clouds\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of atmosphere convection and turbulence",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. Turbulence Convection --> Boundary Layer Turbulence\nProperties of the boundary layer turbulence scheme\n30.1. Scheme Name\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nBoundary layer turbulence scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Mellor-Yamada\" \n# \"Holtslag-Boville\" \n# \"EDMF\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBoundary layer turbulence scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TKE prognostic\" \n# \"TKE diagnostic\" \n# \"TKE coupled with water\" \n# \"vertical profile of Kz\" \n# \"non-local diffusion\" \n# \"Monin-Obukhov similarity\" \n# \"Coastal Buddy Scheme\" \n# \"Coupled with convection\" \n# \"Coupled with gravity waves\" \n# \"Depth capped at cloud base\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.3. Closure Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nBoundary layer turbulence scheme closure order",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Counter Gradient\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUses boundary layer turbulence scheme counter gradient",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.boundary_layer_turbulence.counter_gradient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"31. Turbulence Convection --> Deep Convection\nProperties of the deep convection scheme\n31.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDeep convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"adjustment\" \n# \"plume ensemble\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDeep convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.scheme_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"CAPE\" \n# \"bulk\" \n# \"ensemble\" \n# \"CAPE/WFN based\" \n# \"TKE/CIN based\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of deep convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vertical momentum transport\" \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"updrafts\" \n# \"downdrafts\" \n# \"radiative effect of anvils\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for deep convection. Microphysical processes directly control the amount of detrainment of cloud hydrometeor and water vapor from updrafts",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.deep_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Turbulence Convection --> Shallow Convection\nProperties of the shallow convection scheme\n32.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nShallow convection scheme name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Scheme Type\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nshallow convection scheme type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_type') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mass-flux\" \n# \"cumulus-capped boundary layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.3. Scheme Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nshallow convection scheme method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.scheme_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"same as deep (unified)\" \n# \"included in boundary layer turbulence\" \n# \"separate diagnosis\" \n# TODO - please enter value(s)\n",
"32.4. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPhysical processes taken into account in the parameterisation of shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convective momentum transport\" \n# \"entrainment\" \n# \"detrainment\" \n# \"penetrative convection\" \n# \"re-evaporation of convective precipitation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Microphysics\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nMicrophysics scheme for shallow convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.turbulence_convection.shallow_convection.microphysics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"tuning parameter based\" \n# \"single moment\" \n# \"two moment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33. Microphysics Precipitation\nLarge Scale Cloud Microphysics and Precipitation\n33.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of large scale cloud microphysics and precipitation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Microphysics Precipitation --> Large Scale Precipitation\nProperties of the large scale precipitation scheme\n34.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the large scale precipitation parameterisation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.2. Hydrometeors\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPrecipitating hydrometeors taken into account in the large scale precipitation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_precipitation.hydrometeors') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"liquid rain\" \n# \"snow\" \n# \"hail\" \n# \"graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35. Microphysics Precipitation --> Large Scale Cloud Microphysics\nProperties of the large scale cloud microphysics scheme\n35.1. Scheme Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name of the microphysics parameterisation scheme used for large scale clouds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nLarge scale cloud microphysics processes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.microphysics_precipitation.large_scale_cloud_microphysics.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"mixed phase\" \n# \"cloud droplets\" \n# \"cloud ice\" \n# \"ice nucleation\" \n# \"water vapour deposition\" \n# \"effect of raindrops\" \n# \"effect of snow\" \n# \"effect of graupel\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36. Cloud Scheme\nCharacteristics of the cloud scheme\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the atmosphere cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.3. Atmos Coupling\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nAtmosphere components that are linked to the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.atmos_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"atmosphere_radiation\" \n# \"atmosphere_microphysics_precipitation\" \n# \"atmosphere_turbulence_convection\" \n# \"atmosphere_gravity_waves\" \n# \"atmosphere_solar\" \n# \"atmosphere_volcano\" \n# \"atmosphere_cloud_simulator\" \n# TODO - please enter value(s)\n",
"36.4. Uses Separate Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDifferent cloud schemes for the different types of clouds (convective, stratiform and boundary layer)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.uses_separate_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProcesses included in the cloud scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"entrainment\" \n# \"detrainment\" \n# \"bulk cloud\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.6. Prognostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a prognostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.7. Diagnostic Scheme\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the cloud scheme a diagnostic scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.diagnostic_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36.8. Prognostic Variables\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList the prognostic variables used by the cloud scheme, if applicable.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"cloud amount\" \n# \"liquid\" \n# \"ice\" \n# \"rain\" \n# \"snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37. Cloud Scheme --> Optical Cloud Properties\nOptical cloud properties\n37.1. Cloud Overlap Method\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nMethod for taking into account overlapping of cloud layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_overlap_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"random\" \n# \"maximum\" \n# \"maximum-random\" \n# \"exponential\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"37.2. Cloud Inhomogeneity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nMethod for taking into account cloud inhomogeneity",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.optical_cloud_properties.cloud_inhomogeneity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Cloud Scheme --> Sub Grid Scale Water Distribution\nSub-grid scale water distribution\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale water distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"38.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale water distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale water distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"38.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale water distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_water_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"39. Cloud Scheme --> Sub Grid Scale Ice Distribution\nSub-grid scale ice distribution\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSub-grid scale ice distribution type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# TODO - please enter value(s)\n",
"39.2. Function Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nSub-grid scale ice distribution function name",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"39.3. Function Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nSub-grid scale ice distribution function type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.function_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"39.4. Convection Coupling\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSub-grid scale ice distribution coupling with convection",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.cloud_scheme.sub_grid_scale_ice_distribution.convection_coupling') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"coupled with deep\" \n# \"coupled with shallow\" \n# \"not coupled with convection\" \n# TODO - please enter value(s)\n",
"40. Observation Simulation\nCharacteristics of observation simulation\n40.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of observation simulator characteristics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Observation Simulation --> Isscp Attributes\nISSCP Characteristics\n41.1. Top Height Estimation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator ISSCP top height estimation methodUo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_estimation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"no adjustment\" \n# \"IR brightness\" \n# \"visible optical depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. Top Height Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator ISSCP top height direction",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.isscp_attributes.top_height_direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"lowest altitude level\" \n# \"highest altitude level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42. Observation Simulation --> Cosp Attributes\nCFMIP Observational Simulator Package attributes\n42.1. Run Configuration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator COSP run configuration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.run_configuration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Inline\" \n# \"Offline\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"42.2. Number Of Grid Points\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of grid points",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_grid_points') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.3. Number Of Sub Columns\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of sub-cloumns used to simulate sub-grid variability",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_sub_columns') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"42.4. Number Of Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nCloud simulator COSP number of levels",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.cosp_attributes.number_of_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43. Observation Simulation --> Radar Inputs\nCharacteristics of the cloud radar simulator\n43.1. Frequency\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nCloud simulator radar frequency (Hz)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.frequency') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"43.2. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator radar type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"surface\" \n# \"space borne\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"43.3. Gas Absorption\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses gas absorption",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.gas_absorption') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"43.4. Effective Radius\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nCloud simulator radar uses effective radius",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.radar_inputs.effective_radius') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"44. Observation Simulation --> Lidar Inputs\nCharacteristics of the cloud lidar simulator\n44.1. Ice Types\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nCloud simulator lidar ice type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.ice_types') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ice spheres\" \n# \"ice non-spherical\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"44.2. Overlap\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nCloud simulator lidar overlap",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.observation_simulation.lidar_inputs.overlap') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"max\" \n# \"random\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45. Gravity Waves\nCharacteristics of the parameterised gravity waves in the atmosphere, whether from orography or other sources.\n45.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of gravity wave parameterisation in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"45.2. Sponge Layer\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSponge layer in the upper levels in order to avoid gravity wave reflection at the top.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.sponge_layer') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rayleigh friction\" \n# \"Diffusive sponge layer\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.3. Background\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBackground wave distribution",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"continuous spectrum\" \n# \"discrete spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"45.4. Subgrid Scale Orography\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSubgrid scale orography effects taken into account.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.subgrid_scale_orography') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"effect on drag\" \n# \"effect on lifting\" \n# \"enhanced topography\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46. Gravity Waves --> Orographic Gravity Waves\nGravity waves generated due to the presence of orography\n46.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"46.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear mountain waves\" \n# \"hydraulic jump\" \n# \"envelope orography\" \n# \"low level flow blocking\" \n# \"statistical sub-grid scale variance\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nOrographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"non-linear calculation\" \n# \"more than two cardinal directions\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"includes boundary layer ducting\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"46.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47. Gravity Waves --> Non Orographic Gravity Waves\nGravity waves generated by non-orographic processes.\n47.1. Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCommonly used name for the non-orographic gravity wave scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"47.2. Source Mechanisms\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave source mechanisms",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.source_mechanisms') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"convection\" \n# \"precipitation\" \n# \"background spectrum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.3. Calculation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nNon-orographic gravity wave calculation method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.calculation_method') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"spatially dependent\" \n# \"temporally dependent\" \n# TODO - please enter value(s)\n",
"47.4. Propagation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave propogation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.propagation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"linear theory\" \n# \"non-linear theory\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"47.5. Dissipation Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nNon-orographic gravity wave dissipation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.gravity_waves.non_orographic_gravity_waves.dissipation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"total wave\" \n# \"single wave\" \n# \"spectral\" \n# \"linear\" \n# \"wave saturation vs Richardson number\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"48. Solar\nTop of atmosphere solar insolation characteristics\n48.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of solar insolation of the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"49. Solar --> Solar Pathways\nPathways for solar forcing of the atmosphere\n49.1. Pathways\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nPathways for the solar forcing of the atmosphere model domain",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_pathways.pathways') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"SW radiation\" \n# \"precipitating energetic particles\" \n# \"cosmic rays\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"50. Solar --> Solar Constant\nSolar constant and top of atmosphere insolation characteristics\n50.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of the solar constant.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"50.2. Fixed Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf the solar constant is fixed, enter the value of the solar constant (W m-2).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.fixed_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"50.3. Transient Characteristics\nIs Required: TRUE Type: STRING Cardinality: 1.1\nsolar constant transient characteristics (W m-2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.solar_constant.transient_characteristics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51. Solar --> Orbital Parameters\nOrbital parameters and top of atmosphere insolation characteristics\n51.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime adaptation of orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"transient\" \n# TODO - please enter value(s)\n",
"51.2. Fixed Reference Date\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nReference date for fixed orbital parameters (yyyy)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.fixed_reference_date') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"51.3. Transient Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescription of transient orbital parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.transient_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"51.4. Computation Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used for computing orbital parameters.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.orbital_parameters.computation_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Berger 1978\" \n# \"Laskar 2004\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"52. Solar --> Insolation Ozone\nImpact of solar insolation on stratospheric ozone\n52.1. Solar Ozone Impact\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes top of atmosphere insolation impact on stratospheric ozone?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.solar.insolation_ozone.solar_ozone_impact') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"53. Volcanos\nCharacteristics of the implementation of volcanoes\n53.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview description of the implementation of volcanic effects in the atmosphere",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"54. Volcanos --> Volcanoes Treatment\nTreatment of volcanoes in the atmosphere\n54.1. Volcanoes Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow volcanic effects are modeled in the atmosphere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.atmos.volcanos.volcanoes_treatment.volcanoes_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"high frequency solar constant anomaly\" \n# \"stratospheric aerosols optical thickness\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
jbwhit/OSCON-2015
|
develop/2015-07-16-jw-example-notebook-setup.ipynb
|
mit
|
[
"Example of how to set up your lab notebook\nAnalysis in this notebook\n\n[Dead end] Does year predict production?\nDoes \"hours worked\" correlate with production?\n\nTip\nStandard imports at the top\nImports should be grouped in the following order:\n\nmagics\nAlphabetical order \nstandard library imports\nrelated third party imports\nlocal application/library specific imports",
"# Magics first (server issues)\n%matplotlib inline \n# Do below if you want interactive matplotlib plot ()\n# %matplotlib notebook \n\n# https://ipython.org/ipython-doc/dev/config/extensions/autoreload.html\n%load_ext autoreload\n%autoreload 2\n\n# %install_ext http://raw.github.com/jrjohansson/version_information/master/version_information.py\n%load_ext version_information\n%version_information numpy, scipy, matplotlib, pandas\n\n# Standard library\nimport os\nimport sys\nsys.path.append(\"../src/\")\n\n# Third party imports\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n# Local imports\nfrom simpleexample import example_func\n\n# Customizations\nsns.set() # matplotlib defaults\n\n# Any tweaks that normally go in .matplotlibrc, etc., should explicitly go here\nplt.rcParams['figure.figsize'] = (12, 12)\n\n# Find the notebook the saved figures came from\nfig_prefix = \"../figures/2015-07-16-jw-\"\n\nexample_func()",
"Importing cleaned data\nSee ../deliver/coal_data_cleanup.ipynb for how the raw data was cleaned.",
"from IPython.display import FileLink\n\nFileLink(\"../deliver/coal_data_cleanup.ipynb\")\n\ndframe = pd.read_csv(\"../data/coal_prod_cleaned.csv\")",
"[Dead end] Does year predict production?",
"plt.scatter(dframe['Year'], dframe['Production_short_tons'])",
"Does Hours worked correlate with output?",
"df2 = dframe.groupby('Mine_State').sum() \n\nsns.jointplot('Labor_Hours', 'Production_short_tons', data=df2, kind=\"reg\", ) \nplt.xlabel(\"Labor Hours Worked\")\nplt.ylabel(\"Total Amount Produced\") \nplt.tight_layout()\n# plt.savefig(fig_prefix + \"production-vs-hours-worked.png\", dpi=350) \n\n%load_ext autoreload\n%autoreload 2\n\nimport sys\nsys.path.append(\"../src/\")\n\nfrom simpleexample import example_func\nexample_func()\n\nexample_func()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ajhenrikson/phys202-2015-work
|
assignments/assignment10/ODEsEx03.ipynb
|
mit
|
[
"Ordinary Differential Equations Exercise 3\nImports",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np\nimport seaborn as sns\nfrom scipy.integrate import odeint\nfrom IPython.html.widgets import interact, fixed",
"Damped, driven nonlinear pendulum\nThe equations of motion for a simple pendulum of mass $m$, length $l$ are:\n$$\n\\frac{d^2\\theta}{dt^2} = \\frac{-g}{\\ell}\\sin\\theta\n$$\nWhen a damping and periodic driving force are added the resulting system has much richer and interesting dynamics:\n$$\n\\frac{d^2\\theta}{dt^2} = \\frac{-g}{\\ell}\\sin\\theta - a \\omega - b \\sin(\\omega_0 t)\n$$\nIn this equation:\n\n$a$ governs the strength of the damping.\n$b$ governs the strength of the driving force.\n$\\omega_0$ is the angular frequency of the driving force.\n\nWhen $a=0$ and $b=0$, the energy/mass is conserved:\n$$E/m =g\\ell(1-\\cos(\\theta)) + \\frac{1}{2}\\ell^2\\omega^2$$\nBasic setup\nHere are the basic parameters we are going to use for this exercise:",
"g = 9.81 # m/s^2\nl = 0.5 # length of pendulum, in meters\ntmax = 50. # seconds\nt = np.linspace(0, tmax, int(100*tmax))",
"Write a function derivs for usage with scipy.integrate.odeint that computes the derivatives for the damped, driven harmonic oscillator. The solution vector at each time will be $\\vec{y}(t) = (\\theta(t),\\omega(t))$.",
"def derivs(y, t, a, b, omega0):\n \"\"\"Compute the derivatives of the damped, driven pendulum.\n \n Parameters\n ----------\n y : ndarray\n The solution vector at the current time t[i]: [theta[i],omega[i]].\n t : float\n The current time t[i].\n a, b, omega0: float\n The parameters in the differential equation.\n \n Returns\n -------\n dy : ndarray\n The vector of derviatives at t[i]: [dtheta[i],domega[i]].\n \"\"\"\n theta=y[0]\n omega=y[1]\n domega=-g*np.sin(theta)/1-a*omega-b*np.sin(omega0*t)\n dtheta=omega\n return np.array([dtheta,domega])\n \"\"\"remember that omega is litterally the derivative of the angle formula\n like velocity is the derivative of position(it took me a while to figure \n this out)\"\"\"\n\nassert np.allclose(derivs(np.array([np.pi,1.0]), 0, 1.0, 1.0, 1.0), [1.,-1.])\n\ndef energy(y):\n \"\"\"Compute the energy for the state array y.\n \n The state array y can have two forms:\n \n 1. It could be an ndim=1 array of np.array([theta,omega]) at a single time.\n 2. It could be an ndim=2 array where each row is the [theta,omega] at single\n time.\n \n Parameters\n ----------\n y : ndarray, list, tuple\n A solution vector\n \n Returns\n -------\n E/m : float (ndim=1) or ndarray (ndim=2)\n The energy per mass.\n \"\"\"\n if y.ndim==1:\n theta=y[0]\n omega=y[1]\n elif y.ndim==2:\n theta=y[:,0]\n omega=y[:,1]\n return g*l*(1-np.cos(theta)) + 0.5*(l**2)*omega**2 \n\nassert np.allclose(energy(np.array([np.pi,0])),g)\nassert np.allclose(energy(np.ones((10,2))), np.ones(10)*energy(np.array([1,1])))",
"Simple pendulum\nUse the above functions to integrate the simple pendulum for the case where it starts at rest pointing vertically upwards. In this case, it should remain at rest with constant energy.\n\nIntegrate the equations of motion.\nPlot $E/m$ versus time.\nPlot $\\theta(t)$ and $\\omega(t)$ versus time.\nTune the atol and rtol arguments of odeint until $E/m$, $\\theta(t)$ and $\\omega(t)$ are constant.\n\nAnytime you have a differential equation with a a conserved quantity, it is critical to make sure the numerical solutions conserve that quantity as well. This also gives you an opportunity to find other bugs in your code. The default error tolerances (atol and rtol) used by odeint are not sufficiently small for this problem. Start by trying atol=1e-3, rtol=1e-2 and then decrease each by an order of magnitude until your solutions are stable.",
"thetai=np.pi\nomegai=0\nic=np.array([thetai,omegai])\ny=odeint(derivs,ic,t,args=(0.0,0.0,0.0),atol=1e-6,rtol=1e-5)\n\nplt.plot(t,energy(y))\nplt.xlabel('$t$')\nplt.ylabel('$E/m$')\nplt.title('Energy/Mass v. Time');\n\nplt.plot(t, y[:,0], label='$\\\\theta(t)$')\nplt.plot(t, y[:,1], label='$\\omega(t)$')\nplt.xlabel('$t$')\nplt.ylabel('Solution')\nplt.title('State vars v. time')\nplt.legend(loc='best');\n\nassert True # leave this to grade the two plots and their tuning of atol, rtol.",
"Damped pendulum\nWrite a plot_pendulum function that integrates the damped, driven pendulum differential equation for a particular set of parameters $[a,b,\\omega_0]$.\n\nUse the initial conditions $\\theta(0)=-\\pi + 0.1$ and $\\omega=0$.\nDecrease your atol and rtol even futher and make sure your solutions have converged.\nMake a parametric plot of $[\\theta(t),\\omega(t)]$ versus time.\nUse the plot limits $\\theta \\in [-2 \\pi,2 \\pi]$ and $\\theta \\in [-10,10]$\nLabel your axes and customize your plot to make it beautiful and effective.",
"def plot_pendulum(a=0.0, b=0.0, omega0=0.0):\n \"\"\"Integrate the damped, driven pendulum and make a phase plot of the solution.\"\"\"\n theta1=-np.pi+0.1\n omega1=0.0\n ic = np.array([theta1,omega1])\n y=odeint(derivs,ic,t,args=(a,b,omega0),atol=1e-10,rtol=1e-9)\n plt.plot(y[:0],y[:,1])\n ",
"Here is an example of the output of your plot_pendulum function that should show a decaying spiral.",
"plot_pendulum(0.5, 0.0, 0.0)",
"Use interact to explore the plot_pendulum function with:\n\na: a float slider over the interval $[0.0,1.0]$ with steps of $0.1$.\nb: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.\nomega0: a float slider over the interval $[0.0,10.0]$ with steps of $0.1$.",
"# YOUR CODE HERE\nraise NotImplementedError()",
"Use your interactive plot to explore the behavior of the damped, driven pendulum by varying the values of $a$, $b$ and $\\omega_0$.\n\nFirst start by increasing $a$ with $b=0$ and $\\omega_0=0$.\nThen fix $a$ at a non-zero value and start to increase $b$ and $\\omega_0$.\n\nDescribe the different classes of behaviors you observe below.\nYOUR ANSWER HERE"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
google/uncertainty-baselines
|
experimental/language_structure/psl/colabs/gradient_based_constraint_learning_demo.ipynb
|
apache-2.0
|
[
"Gradient Based Constraint Learning Demo\nLicensed under the Apache License, Version 2.0.\nThis colab explores joint learning neural networks with soft constraints.",
"import numpy as np\nimport pandas as pd\nimport random\nimport tensorflow as tf\n\nfrom tensorflow import keras",
"Dataset and Task\nWe test and validate our system over a common fairness dataset and task: Adult Census Income dataset. This data was extracted from the 1994 Census bureau database by Ronny Kohavi and Barry Becker. Our analysis aims at learning a model that does not bias predictions towards men over 50K through soft constraints.",
"# ========================================================================\n# Constants\n# ========================================================================\n_TRAIN_PATH = ''\n_TEST_PATH = ''\n\n_COLUMNS = [\"age\", \"workclass\", \"fnlwgt\", \"education\", \"education_num\",\n \"marital_status\", \"occupation\", \"relationship\", \"race\", \"gender\",\n \"capital_gain\", \"capital_loss\", \"hours_per_week\", \"native_country\",\n \"income_bracket\"]\n\n# ========================================================================\n# Seed Data\n# ========================================================================\nSEED = random.randint(-10000000, 10000000)\nprint(\"Seed: %d\" % SEED)\ntf.random.set_seed(SEED)\n\n# ========================================================================\n# Load Data\n# ========================================================================\nwith tf.io.gfile.GFile(_TRAIN_PATH, 'r') as csv_file:\n train_df = pd.read_csv(csv_file, names=_COLUMNS, sep=r'\\s*,\\s*', na_values=\"?\").dropna(how=\"any\", axis=0)\n\nwith tf.io.gfile.GFile(_TEST_PATH, 'r') as csv_file:\n test_df = pd.read_csv(csv_file, names=_COLUMNS, skiprows=[0], sep=r'\\s*,\\s*', na_values=\"?\").dropna(how=\"any\", axis=0)",
"Feature Columns\nThe following code was taken from intro_to_fairness. In short, Tensorflow requires a mapping of data and so every column is specified.",
"#@title Prepare Dataset\n# ========================================================================\n# Categorical Feature Columns\n# ========================================================================\n# Unknown length\noccupation = tf.feature_column.categorical_column_with_hash_bucket(\n \"occupation\", hash_bucket_size=1000)\nnative_country = tf.feature_column.categorical_column_with_hash_bucket(\n \"native_country\", hash_bucket_size=1000)\n\n# Known length\ngender = tf.feature_column.categorical_column_with_vocabulary_list(\n \"gender\", [\"Female\", \"Male\"])\nrace = tf.feature_column.categorical_column_with_vocabulary_list(\n \"race\", [\n \"White\", \"Asian-Pac-Islander\", \"Amer-Indian-Eskimo\", \"Other\", \"Black\"\n ])\neducation = tf.feature_column.categorical_column_with_vocabulary_list(\n \"education\", [\n \"Bachelors\", \"HS-grad\", \"11th\", \"Masters\", \"9th\",\n \"Some-college\", \"Assoc-acdm\", \"Assoc-voc\", \"7th-8th\",\n \"Doctorate\", \"Prof-school\", \"5th-6th\", \"10th\", \"1st-4th\",\n \"Preschool\", \"12th\"\n ])\nmarital_status = tf.feature_column.categorical_column_with_vocabulary_list(\n \"marital_status\", [\n \"Married-civ-spouse\", \"Divorced\", \"Married-spouse-absent\",\n \"Never-married\", \"Separated\", \"Married-AF-spouse\", \"Widowed\"\n ])\nrelationship = tf.feature_column.categorical_column_with_vocabulary_list(\n \"relationship\", [\n \"Husband\", \"Not-in-family\", \"Wife\", \"Own-child\", \"Unmarried\",\n \"Other-relative\"\n ])\nworkclass = tf.feature_column.categorical_column_with_vocabulary_list(\n \"workclass\", [\n \"Self-emp-not-inc\", \"Private\", \"State-gov\", \"Federal-gov\",\n \"Local-gov\", \"?\", \"Self-emp-inc\", \"Without-pay\", \"Never-worked\"\n ])\n\n# ========================================================================\n# Numeric Feature Columns\n# ========================================================================\nage = tf.feature_column.numeric_column(\"age\")\nage_buckets = tf.feature_column.bucketized_column(age, boundaries=[18, 25, 30, 35, 40, 45, 50, 55, 60, 65])\nfnlwgt = tf.feature_column.numeric_column(\"fnlwgt\")\neducation_num = tf.feature_column.numeric_column(\"education_num\")\ncapital_gain = tf.feature_column.numeric_column(\"capital_gain\")\ncapital_loss = tf.feature_column.numeric_column(\"capital_loss\")\nhours_per_week = tf.feature_column.numeric_column(\"hours_per_week\")\n\n# ========================================================================\n# Specify Features\n# ========================================================================\ndeep_columns = [\n tf.feature_column.indicator_column(workclass),\n tf.feature_column.indicator_column(education),\n tf.feature_column.indicator_column(age_buckets),\n tf.feature_column.indicator_column(gender),\n tf.feature_column.indicator_column(relationship),\n tf.feature_column.embedding_column(native_country, dimension=8),\n tf.feature_column.embedding_column(occupation, dimension=8),\n]\n\nfeatures = {\n 'age': tf.keras.Input(shape=(1,), name='age'),\n 'education': tf.keras.Input(shape=(1,), name='education', dtype=tf.string),\n 'gender': tf.keras.Input(shape=(1,), name='gender', dtype=tf.string),\n 'native_country': tf.keras.Input(shape=(1,), name='native_country', dtype=tf.string),\n 'occupation': tf.keras.Input(shape=(1,), name='occupation', dtype=tf.string),\n 'relationship': tf.keras.Input(shape=(1,), name='relationship', dtype=tf.string),\n 'workclass': tf.keras.Input(shape=(1,), name='workclass', dtype=tf.string),\n}\n\n# 
========================================================================\n# Create Dataset\n# ========================================================================\ndef df_to_dataset(dataframe, shuffle=True, batch_size=512):\n dataframe = dataframe.copy()\n labels = dataframe.pop('income_bracket').apply(lambda x: \">50K\" in x).astype(int)\n ds = tf.data.Dataset.from_tensor_slices((dict(dataframe), labels))\n if shuffle:\n ds = ds.shuffle(buffer_size=len(dataframe))\n ds = ds.batch(batch_size)\n return ds\n\n#@title Helper Functions\ndef confusion_matrix(predictions, labels, threshold=0.5):\n tp = 0\n tn = 0\n fp = 0\n fn = 0\n for prediction, label in zip(predictions, labels):\n if prediction > threshold:\n if label == 1:\n tp += 1\n else:\n fp += 1\n else:\n if label == 0:\n tn += 1\n else:\n fn += 1\n return (tp, tn, fp, fn)\n\ndef remove_group(dataframe, predictions, group):\n dataframe = dataframe.copy()\n dataframe['predictions'] = predictions\n dataframe = dataframe[dataframe.gender != group]\n\n group_predictions = dataframe.pop('predictions')\n\n return dataframe, group_predictions\n\ndef print_accuracy(dataframe, predictions, threshold=0.5):\n dataframe = dataframe.copy()\n labels = dataframe.pop('income_bracket').apply(lambda x: \">50K\" in x).astype(int)\n\n tp, tn, fp, fn = confusion_matrix(predictions, labels, threshold=threshold)\n print(\"True Positives: %d True Negatives: %d False Positives %d False Negatives: %d\" % (tp, tn, fp, fn))\n print(\"Accuracy: %0.5f\" % ((tp+tn) / (tp + tn + fp + fn)))\n print(\"Positive Accuracy: %0.5f\" % (tp / (tp + fp)))\n print(\"Negative Accuracy: %0.5f\" % (tn / (tn + fn)))\n print(\"Percentage Predicted over >50K: %0.5f\" % (((tp + fp) / (tp + tn + fp + fn)) * 100))\n\n return (tp, tn, fp, fn)\n\ndef parity(m_tp, m_fp, m_tn, m_fn, f_tp, f_fp, f_tn, f_fn):\n return ((m_tp + m_fp) / (m_tp + m_tn + m_fp + m_fn)) - ((f_tp + f_fp) / (f_tp + f_tn + f_fp + f_fn))\n\ndef print_title(title, print_length=50):\n print(('-' * print_length) + '\\n' + title + '\\n' + ('-' * print_length))\n\ndef print_analysis(train_df, train_predictions, test_df, test_predictions):\n print_title(\"Train Accuracy\")\n print_accuracy(train_df, train_predictions)\n\n print_title(\"Full Test Accuracy\")\n print_accuracy(test_df, test_predictions)\n\n print_title(\"Male Test Accuracy\")\n male_df, male_pred = remove_group(test_df, test_predictions, \"Female\")\n m_tp, m_tn, m_fp, m_fn = print_accuracy(male_df, male_pred)\n\n print_title(\"Female Test Accuracy\")\n female_df, female_pred = remove_group(test_df, test_predictions, \"Male\")\n f_tp, f_tn, f_fp, f_fn = print_accuracy(female_df, female_pred)\n\n print_title(\"Parity\")\n print(parity(m_tp, m_fp, m_tn, m_fn, f_tp, f_fp, f_tn, f_fn))",
"Create and Run Non-Constrained Neural Model\nDefining our neural model that will be used as a comparison. Note: this model was purposfully designed to be simplistic, as it is trying to highlight the benifit to learning with soft constraints.",
"def build_model(feature_columns, features):\n feature_layer = tf.keras.layers.DenseFeatures(feature_columns)\n hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features))\n hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1)\n output = tf.keras.layers.Dense(1, activation='sigmoid')(hidden_layer_2)\n\n model = tf.keras.Model([v for v in features.values()], output)\n\n model.compile(optimizer='adam',\n loss='mse',\n metrics=['accuracy'])\n\n return model\n\nbaseline_model = build_model(deep_columns, features)\nbaseline_model.fit(df_to_dataset(train_df), epochs=50)\n\ntest_predictions = baseline_model.predict(df_to_dataset(test_df, shuffle=False))\nbaseline_model.evaluate(df_to_dataset(test_df))\n\ntrain_predictions = baseline_model.predict(df_to_dataset(train_df, shuffle=False))\nbaseline_model.evaluate(df_to_dataset(train_df))",
"Analyze Non-Constrained Results\nFor this example we look at the fairness constraint that the protected group (gender) should have no predictive difference between classes. In this situation this means that the ratio of positive predictions should be the same between male and female.\nNote: this is by no means the only fairness constraint needed to have a fair model, and in fact can result in some doubious results (as seen in the follwoing section).\nThe results do clearly show a skew in ratios as males are have a higher ratio of >50k predictions.",
"print_analysis(train_df, train_predictions, test_df, test_predictions)",
"Define Constraints\nThis requires a constrained loss function and a custom train step within the keras model class.",
"def constrained_loss(data, logits, threshold=0.5, weight=3):\n \"\"\"Linear constrained loss for equal ratio prediction for the protected group.\n\n The constraint: (#Female >50k / #Total Female) - (#Male >50k / #Total Male)\n This constraint penalizes predictions between the protected group (gender),\n such that the ratio between all classes must be the same.\n An important note: to maintian differentability we do not use #Female >50k\n (which requires a round operation), instead we set values below the threshold\n to zero, and sum the logits.\n\n Args:\n data: Input features.\n logits: Predictions made in the logit.\n threshold: Binary threshold for predicting positive and negative labels.\n weight: Weight of the constrained loss.\n\n Returns:\n A scalar loss of the constraint violations.\n \"\"\"\n gender_label, gender_idx, gender_count = tf.unique_with_counts(data['gender'], out_idx=tf.int32, name=None)\n cut_logits = tf.reshape(tf.cast(logits > threshold, logits.dtype) * logits, [-1])\n\n def f1():\n return gender_idx\n def f2():\n return tf.cast(tf.math.logical_not(tf.cast(gender_idx, tf.bool)), tf.int32)\n\n # Load male indexes as ones and female indexes to zeros.\n male_index = tf.cond(tf.reduce_all(tf.equal(gender_label, tf.constant([\"Male\", \"Female\"]))), f1, f2)\n # Cast the integers to float32 to do a multiplication with the logits.\n male_index = tf.cast(male_index, tf.float32)\n # (#Male > 50k / #Total Male)\n male_prob = tf.divide(tf.reduce_sum(tf.multiply(cut_logits, male_index)), tf.reduce_sum(male_index))\n\n # Flip all female indexes to one and male indexes to zeros.\n female_index = tf.math.logical_not(tf.cast(male_index, tf.bool))\n # Cast the integers to float32 to do a multiplication with the logits.\n female_index = tf.cast(female_index, tf.float32)\n # (#Female > 50k / #Total Female)\n female_prob = tf.divide(tf.reduce_sum(tf.multiply(cut_logits, female_index)), tf.reduce_sum(female_index))\n\n # Since tf.math.abs is not differentable, separate the loss into two hinges.\n loss = tf.add(tf.maximum(male_prob - female_prob, 0.0), tf.maximum(female_prob - male_prob, 0.0))\n return tf.multiply(loss, weight)\n\nclass StructureModel(keras.Model):\n def train_step(self, data):\n features, labels = data\n\n with tf.GradientTape() as tape:\n logits = self(features, training=True)\n standard_loss = self.compiled_loss(labels, logits, regularization_losses=self.losses)\n constraint_loss = constrained_loss(features, logits)\n loss = standard_loss + constraint_loss\n\n trainable_vars = self.trainable_variables\n gradients = tape.gradient(loss, trainable_vars)\n\n self.optimizer.apply_gradients(zip(gradients, trainable_vars))\n self.compiled_metrics.update_state(labels, logits)\n\n return {m.name: m.result() for m in self.metrics}",
"Build and Run Constrained Neural Model",
"def build_constrained_model(feature_columns, features):\n feature_layer = tf.keras.layers.DenseFeatures(feature_columns)\n hidden_layer_1 = tf.keras.layers.Dense(1024, activation='relu')(feature_layer(features))\n hidden_layer_2 = tf.keras.layers.Dense(512, activation='relu')(hidden_layer_1)\n output = tf.keras.layers.Dense(1, activation='sigmoid')(hidden_layer_2)\n\n model = StructureModel([v for v in features.values()], output)\n\n model.compile(optimizer='adam',\n loss='mse',\n metrics=['accuracy'])\n\n return model\n\nconstrained_model = build_constrained_model(deep_columns, features)\nconstrained_model.fit(df_to_dataset(train_df), epochs=50)\n\ntest_predictions = constrained_model.predict(df_to_dataset(test_df, shuffle=False))\nconstrained_model.evaluate(df_to_dataset(test_df))\n\ntrain_predictions = constrained_model.predict(df_to_dataset(train_df, shuffle=False))\nconstrained_model.evaluate(df_to_dataset(train_df))",
"Analyze Constrained Results\nIdeally this constraint should correct the ratio imbalance between the protected groups (gender). This means our parity should be very close to zero.\nNote: This constraint does not mean the neural classifier is guaranteed to generalize and make better predictions. It is more likely to attempt to balance the class prediction ratio in the simplest fashion (resulting in a worse accuracy).",
"print_analysis(train_df, train_predictions, test_df, test_predictions)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
esa-as/2016-ml-contest
|
LA_Team/Facies_classification_LA_TEAM_05.ipynb
|
apache-2.0
|
[
"Facies classification using Machine Learning\nLA Team Submission 5 ##\nLukas Mosser, Alfredo De la Fuente\nIn this approach for solving the facies classfication problem ( https://github.com/seg/2016-ml-contest. ) we will explore the following statregies:\n- Features Exploration: based on Paolo Bestagini's work, we will consider imputation, normalization and augmentation routines for the initial features.\n- Model tuning: \nLibraries\nWe will need to install the following libraries and packages.",
"%%sh\npip install pandas\npip install scikit-learn\npip install tpot\n\nfrom __future__ import print_function\nimport numpy as np\n%matplotlib inline\nimport pandas as pd\nimport matplotlib.pyplot as plt\n\nfrom sklearn.model_selection import cross_val_score\nfrom sklearn.model_selection import KFold , StratifiedKFold\nfrom classification_utilities import display_cm, display_adj_cm\nfrom sklearn.metrics import confusion_matrix, f1_score\nfrom sklearn import preprocessing\nfrom sklearn.model_selection import LeavePGroupsOut\nfrom sklearn.multiclass import OneVsOneClassifier\nfrom sklearn.ensemble import RandomForestClassifier\nfrom scipy.signal import medfilt",
"Data Preprocessing",
"#Load Data\ndata = pd.read_csv('../facies_vectors.csv')\n\n# Parameters\nfeature_names = ['GR', 'ILD_log10', 'DeltaPHI', 'PHIND', 'PE', 'NM_M', 'RELPOS']\nfacies_names = ['SS', 'CSiS', 'FSiS', 'SiSh', 'MS', 'WS', 'D', 'PS', 'BS']\nfacies_colors = ['#F4D03F', '#F5B041','#DC7633','#6E2C00', '#1B4F72','#2E86C1', '#AED6F1', '#A569BD', '#196F3D']\n\n# Store features and labels\nX = data[feature_names].values \ny = data['Facies'].values \n\n# Store well labels and depths\nwell = data['Well Name'].values\ndepth = data['Depth'].values\n\n# Fill 'PE' missing values with mean\nimp = preprocessing.Imputer(missing_values='NaN', strategy='mean', axis=0)\nimp.fit(X)\nX = imp.transform(X)",
"We procceed to run Paolo Bestagini's routine to include a small window of values to acount for the spatial component in the log analysis, as well as the gradient information with respect to depth. This will be our prepared training dataset.",
"# Feature windows concatenation function\ndef augment_features_window(X, N_neig):\n \n # Parameters\n N_row = X.shape[0]\n N_feat = X.shape[1]\n\n # Zero padding\n X = np.vstack((np.zeros((N_neig, N_feat)), X, (np.zeros((N_neig, N_feat)))))\n\n # Loop over windows\n X_aug = np.zeros((N_row, N_feat*(2*N_neig+1)))\n for r in np.arange(N_row)+N_neig:\n this_row = []\n for c in np.arange(-N_neig,N_neig+1):\n this_row = np.hstack((this_row, X[r+c]))\n X_aug[r-N_neig] = this_row\n\n return X_aug\n\n\n# Feature gradient computation function\ndef augment_features_gradient(X, depth):\n \n # Compute features gradient\n d_diff = np.diff(depth).reshape((-1, 1))\n d_diff[d_diff==0] = 0.001\n X_diff = np.diff(X, axis=0)\n X_grad = X_diff / d_diff\n \n # Compensate for last missing value\n X_grad = np.concatenate((X_grad, np.zeros((1, X_grad.shape[1]))))\n \n return X_grad\n\n\n# Feature augmentation function\ndef augment_features(X, well, depth, N_neig=1):\n \n # Augment features\n X_aug = np.zeros((X.shape[0], X.shape[1]*(N_neig*2+2)))\n for w in np.unique(well):\n w_idx = np.where(well == w)[0]\n X_aug_win = augment_features_window(X[w_idx, :], N_neig)\n X_aug_grad = augment_features_gradient(X[w_idx, :], depth[w_idx])\n X_aug[w_idx, :] = np.concatenate((X_aug_win, X_aug_grad), axis=1)\n \n # Find padded rows\n padded_rows = np.unique(np.where(X_aug[:, 0:7] == np.zeros((1, 7)))[0])\n \n return X_aug, padded_rows\n\nX_aug, padded_rows = augment_features(X, well, depth)\n\n# Initialize model selection methods\nlpgo = LeavePGroupsOut(2)\n\n# Generate splits\nsplit_list = []\nfor train, val in lpgo.split(X, y, groups=data['Well Name']):\n hist_tr = np.histogram(y[train], bins=np.arange(len(facies_names)+1)+.5)\n hist_val = np.histogram(y[val], bins=np.arange(len(facies_names)+1)+.5)\n if np.all(hist_tr[0] != 0) & np.all(hist_val[0] != 0):\n split_list.append({'train':train, 'val':val})\n \n \ndef preprocess():\n \n # Preprocess data to use in model\n X_train_aux = []\n X_test_aux = []\n y_train_aux = []\n y_test_aux = []\n \n # For each data split\n split = split_list[5]\n \n # Remove padded rows\n split_train_no_pad = np.setdiff1d(split['train'], padded_rows)\n\n # Select training and validation data from current split\n X_tr = X_aug[split_train_no_pad, :]\n X_v = X_aug[split['val'], :]\n y_tr = y[split_train_no_pad]\n y_v = y[split['val']]\n\n # Select well labels for validation data\n well_v = well[split['val']]\n\n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n X_train_aux.append( X_tr )\n X_test_aux.append( X_v )\n y_train_aux.append( y_tr )\n y_test_aux.append ( y_v )\n \n X_train = np.concatenate( X_train_aux )\n X_test = np.concatenate ( X_test_aux )\n y_train = np.concatenate ( y_train_aux )\n y_test = np.concatenate ( y_test_aux )\n \n return X_train , X_test , y_train , y_test ",
"Data Analysis\nIn this section we will run a Cross Validation routine",
"from tpot import TPOTClassifier\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = preprocess()\n\ntpot = TPOTClassifier(generations=5, population_size=20, \n verbosity=2,max_eval_time_mins=20,\n max_time_mins=100,scoring='f1_micro',\n random_state = 17)\ntpot.fit(X_train, y_train)\nprint(tpot.score(X_test, y_test))\ntpot.export('FinalPipeline.py')\n\nfrom sklearn.ensemble import RandomForestClassifier, VotingClassifier\nfrom sklearn.model_selection import train_test_split\nfrom sklearn.naive_bayes import BernoulliNB\nfrom sklearn.pipeline import make_pipeline, make_union\nfrom sklearn.preprocessing import FunctionTransformer\n\n# Train and test a classifier\ndef train_and_test(X_tr, y_tr, X_v, well_v):\n \n # Feature normalization\n scaler = preprocessing.RobustScaler(quantile_range=(25.0, 75.0)).fit(X_tr)\n X_tr = scaler.transform(X_tr)\n X_v = scaler.transform(X_v)\n \n # Train classifier\n #clf = make_pipeline(make_union(VotingClassifier([(\"est\", ExtraTreesClassifier(criterion=\"gini\", max_features=1.0, n_estimators=500))]), FunctionTransformer(lambda X: X)), XGBClassifier(learning_rate=0.73, max_depth=10, min_child_weight=10, n_estimators=500, subsample=0.27))\n #clf = make_pipeline( KNeighborsClassifier(n_neighbors=5, weights=\"distance\") ) \n #clf = make_pipeline(MaxAbsScaler(),make_union(VotingClassifier([(\"est\", RandomForestClassifier(n_estimators=500))]), FunctionTransformer(lambda X: X)),ExtraTreesClassifier(criterion=\"entropy\", max_features=0.0001, n_estimators=500))\n clf = make_pipeline( make_union(VotingClassifier([(\"est\", BernoulliNB(alpha=60.0, binarize=0.26, fit_prior=True))]), FunctionTransformer(lambda X: X)),RandomForestClassifier(n_estimators=500))\n clf.fit(X_tr, y_tr)\n \n # Test classifier\n y_v_hat = clf.predict(X_v)\n \n # Clean isolated facies for each well\n for w in np.unique(well_v):\n y_v_hat[well_v==w] = medfilt(y_v_hat[well_v==w], kernel_size=5)\n \n return y_v_hat",
"Prediction",
"#Load testing data\ntest_data = pd.read_csv('../validation_data_nofacies.csv')\n\n# Prepare training data\nX_tr = X\ny_tr = y\n\n# Augment features\nX_tr, padded_rows = augment_features(X_tr, well, depth)\n\n# Removed padded rows\nX_tr = np.delete(X_tr, padded_rows, axis=0)\ny_tr = np.delete(y_tr, padded_rows, axis=0) \n\n# Prepare test data\nwell_ts = test_data['Well Name'].values\ndepth_ts = test_data['Depth'].values\nX_ts = test_data[feature_names].values\n\n# Augment features\nX_ts, padded_rows = augment_features(X_ts, well_ts, depth_ts)\n\n# Predict test labels\ny_ts_hat = train_and_test(X_tr, y_tr, X_ts, well_ts)\n\n# Save predicted labels\ntest_data['Facies'] = y_ts_hat\ntest_data.to_csv('Prediction_X_Final.csv')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
esumitra/minecraft-programming
|
notebooks/Adventure3.ipynb
|
mit
|
[
"Superpowers for Steve\n\nWith our newly learned programming skills let's give Steve some superpowers! Usually when Steve steps off a ledge he falls down and gets hurt and when he jumps into water, he sinks unless he starts swimming. Lets write a program that lets Steve automatically build a glass bridge whenever he steps off the ground into water or air. With this new program Steve will never fall or sink. An awesome superpower!\nAlong the way, we will learn about functions and arguments and use conditions and loops from the previous adventures.\nLets get started ...",
"import sys\n# sys.path.append('/Users/esumitra/workspaces/mc/mcpipy')\n# Run this once before starting your tasks\nimport mcpi.minecraft as minecraft\nimport mcpi.block as block\nimport time\nmc = minecraft.Minecraft.create()",
"Task 1: What are you standing on Steve?\nWe first need to detect if Steve is falling or sinking. How do we do that? As you know, we use blocks for building things in Minecraft. Most of you have built houses and crafted weapons. The secret is that in Minecraft every square that is visible is a block. The ground is built from ground blocks and water is built from water blocks. Even air is built from air blocks! That means Steve is going to fall if he is standing on an air block and he will sink if he is on a water block.\nThe function getBlock(x,y,z) gives us the block at the Minecraft coordinates (x,y,z). Each kind of block has a unique identifier. An air block has an identifier block.AIR.id and a water block has an identifier block.WATER_STATIONARY.id and block.WATER_FLOWING.id. To use these block ids we need to import the block library which has the ids of all the blocks used in Minecraft. \nYour first task is to write a program that prints the block Steve is standing on when run. Complete the program below and verify that the program prints the correct block",
"# Task 1\npos = mc.player.getTilePos() # Steve's current position\n# b = mc.getBlock(?,?,?)\n",
"Hmm ... The program is printing numbers. It turns out that the block identifier is a number. Often numbers are used as identifiers. In order to print a useful message try the following\n```python\nif b == block.AIR.id:\n print \"I am on air\"\nif b == block.WATER_FLOWING.id:\n print \"I am on flowing water\"\nif b == block.WATER:\n print \"I am on water\"\nif b == block.WATER_STATIONARY:\n print \"I am on water\"\n```\nUpdate your program for task 1 above and verify that you are able to correctly detect air and water.\nNice job\nTask 2: Some fun\nThe previous task had a lot of statements to figure out if Steve was standing on air or water. We need to use these statements everytime Steve moves to a new position. Wouldn't it be nice if Minecraft had a function just like postToChat that would tell us if Steve was on a air or water block? Unfortunately, Minecraft does not have such a function but you have some superpowers - you can define your own functions! And when you define your own functions, you can use these functions in your programs as you need. \nFirst, lets take a look at the function postToChat like the example below\npython\nmc.postToChat(\"Minecraft rocks!\")\nThe name of the function is postToChat. The message you give the function to post is called the function's parameter. A function may have no parameters like in minecraft.Minecraft.create() or it may have one parameter like mc.postToChat(\"something useful\") or it could have many parameters like max(1,5,2). Some functions return values when you call the function. For example, the function max(1,5,2) returns the maximum value of the numbers 1,5 and 2. Try it out.\nNow lets write our own fun function :) Remember every function has a name, can have zero or more parameters and can have a return value. Lets define a function named myCoolFunction with one parameter named n that will post the message \"Minecraft rocks times ...\" when you call the function. In Pytho, functiosn are defined using def as shown below.\n```python\ndef myCoolFunction(n):\n mc.postToChat(\"Minecraft rocks times \" + str(n))\n```\nGo ahead and type the code below to define your function",
"# Task 2 code:",
"Now that you have defined your own cool function lets call the function with different arguments like\n```python\nmyCoolFunction(1)\nmyCoolFunction(3)\nmyCoolFunction(5)\n```\nTry calling your function below. That was something fun with functions!",
"# call your cool function",
"Task 3: Is Steve Safe?\nFor this task we will write a function named isSafe that will take a parameter position and return a value False if the input parameter position is above air or water and will return the value True otherwise. We will use the statements from Task 1 to write this function. Use return to return a value from this function. The outline of the function you need to write is below. Try to complete the function in the code block below.\n```python\ndef isSafe(position):\n b = mc.getBlock(position.x,position.y-1,position.z)\n if b == ?:\n return False\n if b == ?:\n return False\n return True\n```",
"# Task 3",
"Task 4: Steve's Safety Status\nLets test the function isSafe that you wrote in Task 3 by posting a message evertime Steve is not safe i.e., lets post a message \"You are not safe\" evertime Steve is over air or water. Type and modify the code below to call your function to show the message.\npython\nwhile True:\n pos = mc.player.getTilePos() # Steve's current position\n time.sleep(0.1)\n # add your code here\nHint: You can call your function in an if statment like\npython \nif not isSafe(pos):\n doSomething\n Nice job!",
"# Task 4",
"Task 5: Set a Block\nNow for the fun part of giving Steve superpowers. All we need to do to make Steve build a bridge everytime he is not on a safe block is to set the block under him to a GLASS block or any other block we want to use for the bridge. To set a block at a position in Minecraft use the setBlock function. E.g., if Steve is at coordinates pos, to set a glass block under him use\npython\nmc.setBlock(pos.x, pos.y-1, pos.z, block.GLASS.id)\nLets use the safe function you wrote earlier and modify the program in Task 4 to set a glass block evertime Steve is not on a safe block.\nHint: Change the mc.postToChat function in the Task 4 program to mc.setBlock",
"# Task 5",
"Other blocks that you can try for your bridge are STONE, GRASS and OBSIDIAN. Try building the bridge with different blocks and have fun exploring the world!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GSimas/EEL7045
|
Aula 14 - Circuito RLC paralelo.ipynb
|
mit
|
[
"Circuito RLC paralelo sem fonte\nJupyter Notebook desenvolvido por Gustavo S.S.\nCircuitos RLC em paralelo têm diversas aplicações, como em projetos de filtros\ne redes de comunicação. Suponha que a corrente inicial I0 no indutor e a tensão inicial V0 no capacitor sejam:\n\\begin{align}\n{\\Large i(0) = I_0 = \\frac{1}{L} \\int_{-\\infty}^{0} v(t) dt}\n\\end{align}\n\\begin{align}\n{\\Large v(0) = V_0}\n\\end{align}\n\nPortanto, aplicando a LKC ao nó superior fornece:\n\\begin{align}\n{\\Large \\frac{v}{R} + \\frac{1}{L} \\int_{-\\infty}^{t} v(\\tau) d\\tau + C \\frac{dv}{dt} = 0}\n\\end{align}\nExtraindo a derivada em relação a t e dividindo por C resulta em:\n\\begin{align}\n{\\Large \\frac{d^2v}{dt^2} + \\frac{1}{RC} \\frac{dv}{dt} + \\frac{1}{LC} v = 0}\n\\end{align}\nObtemos a equação característica substituindo a primeira derivada por s e a segunda\npor s^2:\n\\begin{align}\n{\\Large s^2 + \\frac{1}{RC} s + \\frac{1}{LC} = 0}\n\\end{align}\nAssim, as raízes da equação característica são:\n\\begin{align}\n{\\Large s_{1,2} = -\\alpha \\pm \\sqrt{\\alpha^2 - \\omega_0^2}}\n\\end{align}\nonde:\n\\begin{align}\n{\\Large \\alpha = \\frac{1}{2RC}, \\space \\space \\space \\omega_0 = \\frac{1}{\\sqrt{LC}} }\n\\end{align}\nAmortecimento Supercrítico / Superamortecimento (α > ω0)\nQuando α > ω0, as raízes da equação característica são reais e negativas. A resposta é:\n\\begin{align}\n{\\Large v(t) = A_1 e^{s_1 t} + A_2 e^{s_2 t} }\n\\end{align}\nAmortecimento Crítico (α = ω0)\nQuando α = ω0 as raízes da equação característica são reais e iguais de modo que a resposta seja:\n\\begin{align}\n{\\Large v(t) = (A_1 + A_2t)e^{-\\alpha t}}\n\\end{align}\nSubamortecimento (α < ω0)\nQuando α < ω0, nesse caso, as raízes são complexas e podem ser expressas como segue:\n\\begin{align}\n{\\Large s_{1,2} = -\\alpha \\pm j\\omega_d}\n\\\n\\{\\Large \\omega_d = \\sqrt{\\omega_0^2 - \\alpha^2}}\n\\end{align}\n\\begin{align}\n{\\Large v(t) = e^{-\\alpha t}(A_1 cos(\\omega_d t) + A_2 sen(\\omega_d t))}\n\\end{align}\n\nAs constantes A1 e A2 em cada caso podem ser determinadas a partir das\ncondições iniciais. Precisamos de v(0) e dv(0)/dt.\nExemplo 8.5\nNo circuito paralelo da Figura 8.13, determine v(t) para t > 0, supondo que v(0) = 5 V,\ni(0) = 0, L = 1 H e C = 10 mF. Considere os seguintes casos: \nR = 1,923 Ω, \nR = 5 Ω e\nR = 6,25 Ω .",
"print(\"Exemplo 8.5\")\n\nfrom sympy import *\n\nm = 10**(-3) #definicao de mili\nL = 1\nC = 10*m\nv0 = 5\ni0 = 0\n\nA1 = symbols('A1')\nA2 = symbols('A2')\nt = symbols('t')\n\ndef sqrt(x, root = 2): #definir funcao para raiz\n y = x**(1/root)\n return y\n\nprint(\"\\n--------------\\n\")\n\n## PARA R = 1.923\nR = 1.923\n\nprint(\"Para R = \", R)\n\ndef resolve_rlc(R,L,C):\n\n alpha = 1/(2*R*C)\n omega = 1/sqrt(L*C)\n print(\"Alpha:\",alpha)\n print(\"Omega:\",omega)\n \n s1 = -alpha + sqrt(alpha**2 - omega**2)\n s2 = -alpha - sqrt(alpha**2 - omega**2)\n\n def rlc(alpha,omega): #funcao para verificar tipo de amortecimento\n resposta = \"\"\n if alpha > omega:\n resposta = \"superamortecimento\"\n v = A1*exp(s1*t) + A2*exp(s2*t)\n elif alpha == omega:\n resposta = \"amortecimento critico\"\n v = (A1 + A2*t)*exp(-alpha*t)\n else:\n resposta = \"subamortecimento\"\n v = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t))\n return resposta,v\n\n resposta,v = rlc(alpha,omega)\n print(\"Tipo de resposta:\",resposta)\n print(\"Resposta v(t):\",v)\n print(\"v(0):\",v.subs(t,0))\n print(\"dv(0)/dt:\",v.diff(t).subs(t,0))\n \n return alpha,omega,s1,s2,resposta,v\n\nalpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)\n\n#v(0) = 5 = A1 + A2 -> A2 = 5 - A1\n#dv(0)/dt = -2A1 - 50A2\n#C*dv(0)/dt + i(0) + v(0)/R = 0\n #0.01*(-2A1 - 50A2) + 0 + 5/1.923 = 0\n #(-2A1 -50(5 - A1)) = -5/(1.923*0.01)\n #48A1 = 250 - 5/(1.923*0.01)\nA1 = (250 - 5/(1.923*0.01))/48\nprint(\"Constante A1:\",A1)\nA2 = 5 - A1\nprint(\"Constante A2:\",A2)\nv = A1*exp(s1*t) + A2*exp(s2*t)\nprint(\"Resposta v(t):\",v,\"V\")\n\nprint(\"\\n--------------\\n\")\n\n## PARA R = 5\nR = 5\n\nA1 = symbols('A1')\nA2 = symbols('A2')\n\nprint(\"Para R = \", R)\n\nalpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)\n\n#v(t) = (A1 + A2t)e^(-alpha*t)\n\n#v(0) = A1 = 5\nA1 = 5\n#C*dv(0)/dt + i(0) + v(0)/R = 0\n #0.01(-10A1 + A2) + 0 + 5/5 = 0\n #0.01A2 = -1 + 0.5\nA2 = (-1 + 0.5)/0.01\n\nprint(\"Constante A1:\",A1)\nprint(\"Constante A2:\",A2)\nv = (A1 + A2*t)*exp(-alpha*t)\nprint(\"Resposta v(t):\",v,\"V\")\n\nprint(\"\\n--------------\\n\")\n\n\n## PARA R = 6.25\nR = 6.25\n\nA1 = symbols('A1')\nA2 = symbols('A2')\n\nprint(\"Para R = \", R)\n\nomega_d = sqrt(omega**2 - alpha**2)\nalpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)\n\n#v(t) = e^-(alpha*t)*(A1cos(wd*t) + A2sen(wd*t))\n\n#v(0) = A1 = 5\nA1 = 5\n#C*dv(0)/dt + i(0) + v(0)/R = 0\n #0.01*(-8A1 + 6A2) + 0 + 5/6.25 = 0\n #-0.4 + 0.06A2 = -5/6.25\nA2 = (-5/6.25 + 0.4)/0.06\n\nprint(\"Constante A1:\",A1)\nprint(\"Constante A2:\",A2)\nv = exp(-alpha*t)*(A1*cos(omega_d*t) + A2*sin(omega_d*t))\nprint(\"Resposta v(t):\",v,\"V\")",
"Problema Prático 8.5\nNa Figura 8.13, seja R = 2 Ω, L = 0,4 H, C = 25 mF, v(0) = 0, e i(0) = 50 mA. Determine v(t) para t > 0.",
"print(\"Problema Prático 8.5\")\n\nR = 2\nL = 0.4\nC = 25*m\nv0 = 0\ni0 = 50*m\n\nA1 = symbols('A1')\nA2 = symbols('A2')\n\nalpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)\n\n#C*dv(0)/dt + i(0) + v(0)/R = 0\n #C*(-10A1 + A2) + i0 + v(0)/2 = 0\n #v(0) = 0 = A1\n #C*A2 = -i0\nA2 = -i0/C\nA1 = 0\n\nprint(\"Constante A1:\",A1)\nprint(\"Constante A2:\",A2)\nv = (A1 + A2*t)*exp(-10.0*t)\nprint(\"Resposta v(t):\",v,\"V\")",
"Exemplo 8.6\nDetermine v(t) para t > 0 no circuito RLC da Figura 8.15.",
"print(\"Exemplo 8.6\")\n\nu = 10**(-6) #definicao de micro\nVs = 40\nL = 0.4\nC = 20*u\n\nA1 = symbols('A1')\nA2 = symbols('A2')\n\n#Para t < 0\nv0 = Vs*50/(50 + 30)\ni0 = -Vs/(50 + 30)\n\nprint(\"V0:\",v0,\"V\")\nprint(\"i0:\",i0,\"A\")\n\n#Para t > 0\n#C*dv(0)/dt + i(0) + v(0)/50 = 0\n #20u*dv(0)/dt - 0.5 + 0.5 = 0\n #dv(0)/dt = 0\n\nR = 50\nalpha,omega,s1,s2,resposta,v = resolve_rlc(R,L,C)\n\n#v(0) = 25 = A1 + A2\n #A1 = 25 - A2\n#dv(0)/dt = -146A1 - 854A2 = 0\n #-146(25 - A2) - 854A2 = 0\n #146A2 - 854A2 = 3650\n #-708A2 = 3650\nA2 = -3650/708\nA1 = 25 - A2\n\nprint(\"Constante A1:\",A1)\nprint(\"Constante A2:\",A2)\n\nv = A1*exp(s1*t) + A2*exp(s2*t)\nprint(\"Resposta v(t):\",v,\"V\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
suriyan/ethnicolr
|
ethnicolr/examples/ethnicolr_app_contrib2010.ipynb
|
mit
|
[
"Application: Illustrating the use of the package by imputing the race of the campaign contributors recorded by FEC for the years 2000 and 2010\na) what proportion of contributors were black, whites, hispanics, asian etc.\nb) and proportion of total donation given by blacks, hispanics, whites, and asians. \nc) get amount contributed by people of each race and divide it by total amount contributed.",
"import pandas as pd\n\ndf = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip', nrows=100)\ndf.columns",
"amount, date, contributor_name, contributor_lname, contributor_fname, contributor_type == 'I'",
"\ndf = pd.read_csv('/opt/names/fec_contrib/contribDB_2010.csv.zip', usecols=['date', 'amount', 'contributor_type', 'contributor_lname', 'contributor_fname', 'contributor_name'])\ndf\n\n#sdf = df[df.contributor_type=='I'].sample(1000)\nsdf = df[df.contributor_type=='I'].copy()\nsdf\n\nfrom clean_names import clean_name\n\ndef do_clean_name(n):\n n = str(n)\n return clean_name(n)\n\n#sdf['clean_name'] = sdf['contributor_name'].apply(lambda c: do_clean_name(c))\n#sdf\n\nfrom ethnicolr import census_ln, pred_census_ln\n\nrdf = pred_census_ln(sdf, 'contributor_lname', 2010)\nrdf\n\n#rdf.to_csv('output-pred-contrib2010-ln.csv', index_label='idx')",
"a) what proportion of contributors were black, whites, hispanics, asian etc.",
"adf = rdf.groupby(['race']).agg({'contributor_lname': 'count'})\nadf *100 / adf.sum()",
"b) and proportion of total donation given by blacks, hispanics, whites, and asians.",
"bdf = rdf.groupby(['race']).agg({'amount': 'sum'})\nbdf * 100 / bdf.sum()",
"c) get amount contributed by people of each race and divide it by total amount contributed.",
"contrib_white = sum(rdf.amount * rdf.white)\ncontrib_black = sum(rdf.amount * rdf.black)\ncontrib_api = sum(rdf.amount * rdf.api)\ncontrib_hispanic = sum(rdf.amount * rdf.hispanic)\n\ncontrib_amount = [{'race': 'white', 'amount': contrib_white},\n {'race': 'black', 'amount': contrib_black},\n {'race': 'api', 'amount': contrib_api},\n {'race': 'hispanic', 'amount': contrib_hispanic}]\ncontrib_df = pd.DataFrame(contrib_amount, columns=['race', 'amount'])\ncontrib_df.amount /= 10e6\ncontrib_df.columns = ['race', 'amount($1M)']\ncontrib_df\n\ncontrib_df.set_index('race', inplace=True, drop=True)\ncontrib_df.columns = ['% amount']\ncontrib_df * 100 / contrib_df.sum()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/fio-ronm/cmip6/models/sandbox-1/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: FIO-RONM\nSource ID: SANDBOX-1\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:01\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'fio-ronm', 'sandbox-1', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
machine-learning/logistic_regression_on_very_large_data.ipynb
|
mit
|
[
"Title: Logistic Regression On Very Large Data\nSlug: logistic_regression_on_very_large_data\nSummary: How to train a logistic regression on very large data in scikit-learn.\nDate: 2017-09-21 12:00\nCategory: Machine Learning\nTags: Logistic Regression\nAuthors: Chris Albon \nscikit-learn's LogisticRegression offers a number of techniques for training a logistic regression, called solvers. Most of the time scikit-learn will select the best solver automatically for us or warn us that you cannot do some thing with that solver. However, there is one particular case we should be aware of. \nWhile an exact explanation is beyond the bounds of this book, stochastic average gradient descent allows us to train a model much faster than other solvers when our data is very large. However, it is also very sensitive to feature scaling to standardizing our features is particularly important. We can set our learning algorithm to use this solver by setting solver='sag'.\nPreliminaries",
"# Load libraries\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn import datasets\nfrom sklearn.preprocessing import StandardScaler",
"Load Iris Flower Data",
"# Load data\niris = datasets.load_iris()\nX = iris.data\ny = iris.target",
"Standardize Features",
"# Standarize features\nscaler = StandardScaler()\nX_std = scaler.fit_transform(X)",
"Train Logistic Regression Using SAG solver",
"# Create logistic regression object using sag solver\nclf = LogisticRegression(random_state=0, solver='sag')\n\n# Train model\nmodel = clf.fit(X_std, y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nimish-jose/dlnd
|
DLND-your-first-network/dlnd-your-first-neural-network.ipynb
|
gpl-3.0
|
[
"Your first neural network\nIn this project, you'll build your first neural network and use it to predict daily bike rental ridership. We've provided some of the code, but left the implementation of the neural network up to you (for the most part). After you've submitted this project, feel free to explore the data and the model more.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt",
"Load and prepare the data\nA critical step in working with neural networks is preparing the data correctly. Variables on different scales make it difficult for the network to efficiently learn the correct weights. Below, we've written the code to load and prepare the data. You'll learn more about this soon!",
"data_path = 'Bike-Sharing-Dataset/hour.csv'\n\nrides = pd.read_csv(data_path)\n\nrides.head()",
"Checking out the data\nThis dataset has the number of riders for each hour of each day from January 1 2011 to December 31 2012. The number of riders is split between casual and registered, summed up in the cnt column. You can see the first few rows of the data above.\nBelow is a plot showing the number of bike riders over the first 10 days in the data set. You can see the hourly rentals here. This data is pretty complicated! The weekends have lower over all ridership and there are spikes when people are biking to and from work during the week. Looking at the data above, we also have information about temperature, humidity, and windspeed, all of these likely affecting the number of riders. You'll be trying to capture all this with your model.",
"rides[:24*10].plot(x='dteday', y='cnt')",
"Dummy variables\nHere we have some categorical variables like season, weather, month. To include these in our model, we'll need to make binary dummy variables. This is simple to do with Pandas thanks to get_dummies().",
"dummy_fields = ['season', 'weathersit', 'mnth', 'hr', 'weekday']\nfor each in dummy_fields:\n dummies = pd.get_dummies(rides[each], prefix=each, drop_first=False)\n rides = pd.concat([rides, dummies], axis=1)\n\nfields_to_drop = ['instant', 'dteday', 'season', 'weathersit', \n 'weekday', 'atemp', 'mnth', 'workingday', 'hr']\ndata = rides.drop(fields_to_drop, axis=1)\ndata.head()",
"Scaling target variables\nTo make training the network easier, we'll standardize each of the continuous variables. That is, we'll shift and scale the variables such that they have zero mean and a standard deviation of 1.\nThe scaling factors are saved so we can go backwards when we use the network for predictions.",
"quant_features = ['casual', 'registered', 'cnt', 'temp', 'hum', 'windspeed']\n# Store scalings in a dictionary so we can convert back later\nscaled_features = {}\nfor each in quant_features:\n mean, std = data[each].mean(), data[each].std()\n scaled_features[each] = [mean, std]\n data.loc[:, each] = (data[each] - mean)/std",
"Splitting the data into training, testing, and validation sets\nWe'll save the last 21 days of the data to use as a test set after we've trained the network. We'll use this set to make predictions and compare them with the actual number of riders.",
"# Save the last 21 days \ntest_data = data[-21*24:]\ndata = data[:-21*24]\n\n# Separate the data into features and targets\ntarget_fields = ['cnt', 'casual', 'registered']\nfeatures, targets = data.drop(target_fields, axis=1), data[target_fields]\ntest_features, test_targets = test_data.drop(target_fields, axis=1), test_data[target_fields]",
"We'll split the data into two sets, one for training and one for validating as the network is being trained. Since this is time series data, we'll train on historical data, then try to predict on future data (the validation set).",
"# Hold out the last 60 days of the remaining data as a validation set\ntrain_features, train_targets = features[:-60*24], targets[:-60*24]\nval_features, val_targets = features[-60*24:], targets[-60*24:]",
"Time to build the network\nBelow you'll build your network. We've built out the structure and the backwards pass. You'll implement the forward pass through the network. You'll also set the hyperparameters: the learning rate, the number of hidden units, and the number of training passes.\nThe network has two layers, a hidden layer and an output layer. The hidden layer will use the sigmoid function for activations. The output layer has only one node and is used for the regression, the output of the node is the same as the input of the node. That is, the activation function is $f(x)=x$. A function that takes the input signal and generates an output signal, but takes into account the threshold, is called an activation function. We work through each layer of our network calculating the outputs for each neuron. All of the outputs from one layer become inputs to the neurons on the next layer. This process is called forward propagation.\nWe use the weights to propagate signals forward from the input to the output layers in a neural network. We use the weights to also propagate error backwards from the output back into the network to update our weights. This is called backpropagation.\n\nHint: You'll need the derivative of the output activation function ($f(x) = x$) for the backpropagation implementation. If you aren't familiar with calculus, this function is equivalent to the equation $y = x$. What is the slope of that equation? That is the derivative of $f(x)$.\n\nBelow, you have these tasks:\n1. Implement the sigmoid function to use as the activation function. Set self.activation_function in __init__ to your sigmoid function.\n2. Implement the forward pass in the train method.\n3. Implement the backpropagation algorithm in the train method, including calculating the output error.\n4. Implement the forward pass in the run method.",
"def sigmoid(x):\n return 1/(1 + np.exp(-1.0 * x))\n\nclass NeuralNetwork(object):\n def __init__(self, input_nodes, hidden_nodes, output_nodes, learning_rate):\n # Set number of nodes in input, hidden and output layers.\n self.input_nodes = input_nodes\n self.hidden_nodes = hidden_nodes\n self.output_nodes = output_nodes\n\n # Initialize weights\n self.weights_input_to_hidden = np.random.normal(0.0, self.hidden_nodes**-0.5, \n (self.hidden_nodes, self.input_nodes))\n\n self.weights_hidden_to_output = np.random.normal(0.0, self.output_nodes**-0.5, \n (self.output_nodes, self.hidden_nodes))\n self.lr = learning_rate\n \n #### Set this to your implemented sigmoid function ####\n # Activation function is the sigmoid function\n self.activation_function = sigmoid\n \n def train(self, inputs_list, targets_list):\n # Convert inputs list to 2d array\n inputs = np.array(inputs_list, ndmin=2).T\n targets = np.array(targets_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n ### Forward pass ###\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs\n \n #### Implement the backward pass here ####\n ### Backward pass ###\n \n # TODO: Output error\n output_errors = targets - final_outputs\n \n # TODO: Backpropagated error\n hidden_errors = output_errors*self.weights_hidden_to_output\n hidden_grad = hidden_outputs*(1 - hidden_outputs)*hidden_errors.T\n \n # TODO: Update the weights\n self.weights_hidden_to_output += self.lr*output_errors*hidden_outputs.T\n self.weights_input_to_hidden += self.lr*hidden_grad*inputs.T\n \n \n def run(self, inputs_list):\n # Run a forward pass through the network\n inputs = np.array(inputs_list, ndmin=2).T\n \n #### Implement the forward pass here ####\n # TODO: Hidden layer\n hidden_inputs = np.dot(self.weights_input_to_hidden, inputs)\n hidden_outputs = self.activation_function(hidden_inputs)\n \n # TODO: Output layer\n final_inputs = np.dot(self.weights_hidden_to_output, hidden_outputs)\n final_outputs = final_inputs \n \n return final_outputs\n\ndef MSE(y, Y):\n return np.mean((y-Y)**2)",
"Training the network\nHere you'll set the hyperparameters for the network. The strategy here is to find hyperparameters such that the error on the training set is low, but you're not overfitting to the data. If you train the network too long or have too many hidden nodes, it can become overly specific to the training set and will fail to generalize to the validation set. That is, the loss on the validation set will start increasing as the training set loss drops.\nYou'll also be using a method know as Stochastic Gradient Descent (SGD) to train the network. The idea is that for each training pass, you grab a random sample of the data instead of using the whole data set. You use many more training passes than with normal gradient descent, but each pass is much faster. This ends up training the network more efficiently. You'll learn more about SGD later.\nChoose the number of epochs\nThis is the number of times the dataset will pass through the network, each time updating the weights. As the number of epochs increases, the network becomes better and better at predicting the targets in the training set. You'll need to choose enough epochs to train the network well but not too many or you'll be overfitting.\nChoose the learning rate\nThis scales the size of weight updates. If this is too big, the weights tend to explode and the network fails to fit the data. A good choice to start at is 0.1. If the network has problems fitting the data, try reducing the learning rate. Note that the lower the learning rate, the smaller the steps are in the weight updates and the longer it takes for the neural network to converge.\nChoose the number of hidden nodes\nThe more hidden nodes you have, the more accurate predictions the model will make. Try a few different numbers and see how it affects the performance. You can look at the losses dictionary for a metric of the network performance. If the number of hidden units is too low, then the model won't have enough space to learn and if it is too high there are too many options for the direction that the learning can take. The trick here is to find the right balance in number of hidden units you choose.",
"import sys\n\n### Set the hyperparameters here ###\nepochs = 2500\nlearning_rate = 0.01\nhidden_nodes = 15\noutput_nodes = 1\n\nN_i = train_features.shape[1]\nnetwork = NeuralNetwork(N_i, hidden_nodes, output_nodes, learning_rate)\n\nlosses = {'train':[], 'validation':[]}\nfor e in range(epochs):\n # Go through a random batch of 128 records from the training data set\n batch = np.random.choice(train_features.index, size=128)\n for record, target in zip(train_features.ix[batch].values, \n train_targets.ix[batch]['cnt']):\n network.train(record, target)\n \n # Printing out the training progress\n train_loss = MSE(network.run(train_features), train_targets['cnt'].values)\n val_loss = MSE(network.run(val_features), val_targets['cnt'].values)\n sys.stdout.write(\"\\rProgress: \" + str(100 * e/float(epochs))[:4] \\\n + \"% ... Training loss: \" + str(train_loss)[:5] \\\n + \" ... Validation loss: \" + str(val_loss)[:5])\n \n losses['train'].append(train_loss)\n losses['validation'].append(val_loss)\n\nplt.plot(losses['train'], label='Training loss')\nplt.plot(losses['validation'], label='Validation loss')\nplt.legend()\nplt.ylim(ymax=0.5)",
"Check out your predictions\nHere, use the test data to view how well your network is modeling the data. If something is completely wrong here, make sure each step in your network is implemented correctly.",
"test_loss = MSE(network.run(test_features), test_targets['cnt'].values)\ntest_loss\n\nfig, ax = plt.subplots(figsize=(8,4))\n\nmean, std = scaled_features['cnt']\npredictions = network.run(test_features)*std + mean\nax.plot(predictions[0], label='Prediction')\nax.plot((test_targets['cnt']*std + mean).values, label='Data')\nax.set_xlim(right=len(predictions))\nax.legend()\n\ndates = pd.to_datetime(rides.ix[test_data.index]['dteday'])\ndates = dates.apply(lambda d: d.strftime('%b %d'))\nax.set_xticks(np.arange(len(dates))[12::24])\n_ = ax.set_xticklabels(dates[12::24], rotation=45)",
"Thinking about your results\nAnswer these questions about your results. How well does the model predict the data? Where does it fail? Why does it fail where it does?\n\nNote: You can edit the text in this cell by double clicking on it. When you want to render the text, press control + enter\n\nYour answer below\nThe model has a 76% accuracy when it comes to predicting the test dataset. As can be seen in the above graph it predicts high volatility periods very well like between Dec 11 - 21. However, it fails when the volatility is reduced and the spikes are not significant. In those cases, it overfits a spiky prediction. This is eveident in the period Dec 22 - 28. This failure is most likely due to it overfitting to frequent spikes in the training dataset.\nUnit tests\nRun these unit tests to check the correctness of your network implementation. These tests must all be successful to pass the project.",
"import unittest\n\ninputs = [0.5, -0.2, 0.1]\ntargets = [0.4]\ntest_w_i_h = np.array([[0.1, 0.4, -0.3], \n [-0.2, 0.5, 0.2]])\ntest_w_h_o = np.array([[0.3, -0.1]])\n\nclass TestMethods(unittest.TestCase):\n \n ##########\n # Unit tests for data loading\n ##########\n \n def test_data_path(self):\n # Test that file path to dataset has been unaltered\n self.assertTrue(data_path.lower() == 'bike-sharing-dataset/hour.csv')\n \n def test_data_loaded(self):\n # Test that data frame loaded\n self.assertTrue(isinstance(rides, pd.DataFrame))\n \n ##########\n # Unit tests for network functionality\n ##########\n\n def test_activation(self):\n network = NeuralNetwork(3, 2, 1, 0.5)\n # Test that the activation function is a sigmoid\n self.assertTrue(np.all(network.activation_function(0.5) == 1/(1+np.exp(-0.5))))\n\n def test_train(self):\n # Test that weights are updated correctly on training\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n \n network.train(inputs, targets)\n self.assertTrue(np.allclose(network.weights_hidden_to_output, \n np.array([[ 0.37275328, -0.03172939]])))\n self.assertTrue(np.allclose(network.weights_input_to_hidden,\n np.array([[ 0.10562014, 0.39775194, -0.29887597],\n [-0.20185996, 0.50074398, 0.19962801]])))\n\n def test_run(self):\n # Test correctness of run method\n network = NeuralNetwork(3, 2, 1, 0.5)\n network.weights_input_to_hidden = test_w_i_h.copy()\n network.weights_hidden_to_output = test_w_h_o.copy()\n\n self.assertTrue(np.allclose(network.run(inputs), 0.09998924))\n\nsuite = unittest.TestLoader().loadTestsFromModule(TestMethods())\nunittest.TextTestRunner().run(suite)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
karenlmasters/ComputationalPhysicsUnit
|
StochasticMethods/RandomNumbersLecture1.ipynb
|
apache-2.0
|
[
"Random Processes in Computational Physics\nThe contents of this Jupyter Notebook lecture notes are: \n\nIntroduction to Random Numbers in Physics\nRandom Number Generation\nPython Packages for Random Numbers\nCoding for Probability (atomic decay example)\nNon-uniform random numbers\n\nAs usual I recommend you follow along by typing the code snippets into your own file. Don't forget to call the packages etc. at the start of each code file.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Random Processes in Physics\nExamples of physical processes that are/can be modelled as random include: \n\nRadioactive decay - we know the probability of decay per unit time from quantum physics, but the exact time of the decay is random. \n\n\n\nBrownian motion - if we could track the motion of all atomic particles, this would not actually be random, but appears random as we cannot. \n Youtube Video of Brownian Motion: https://www.youtube.com/watch?v=cDcprgWiQEY\n\n\n\n\nChaotic systems - again not truely random in the sense of radioactive decay, but can be modelled as random. \n\n\nHuman or animal behaviour can also be modelled as random in some circumstances.\n\n\nRandom Number Generation\nThere are many different ways to generate uniform random numbers over a specified range (such as 0-1). Physically, we can for example: \n\n\nspin a roulette wheel\n\n\ndraw balls from a lottery\n\n\nthrow darts at a board\n\n\nthow dice \n\n\nHowever, when we wish to use the numbers in a computer, we need a way to generate the numbers algorithmically. \nNumerically/arithmetically - use a sequential method where each new number is a deterministic function of the previous numbers. \nBut: this destroys their true randomness and makes them at best, \"pseudo-random\". \nHowever, in most cases, it is sufficient if the numbers “look” uniformly distributed and have no correlation between them. i.e. they pass statistical tests and obey the central limit theorem.\nFor example consider the function: \n$x' = (ax + c) \\mod m$\nwhere $a$, $c$ and $m$ are integer constants, and $x$ is an integer variable. Recall that \"$n \\mod m$\" means you calculate the remainder when $n$ is divided by $m$.\nNow we can use this to generate a sequence of numbers by putting the outcome of this equation ($x'$) back in as the new starting value ($x$). These will act like random numbers. Try it.....\nClass Exercise\nStarting from $x = 1$ write a short programme which generates 100 values in this sequence and plots them on a graph. Please use the following inputs: \na = 1664525\nc = 1013904223\nm = 4294967296\nTip 1: python syntax for \"mod m\" is: \n\n%m\n\nSo your base code will look like: \n\nxp = (a*x+c)%m\n\nExtension problem: this won't work for all values of a, c and m. Can you find some which don't generate pseudo-random numbers? \nThis is an example of a simple pseudo-random number generator (PRNG). Technically it's a \"linear congruential random number generator\". Things to note: \n\nIt's not really random\nIt can only generate numbers betwen 0 and m-1. \nThe choices of a, c and m matter. \nThe choice of x also matters. Do you get the same values for x=2? \n\nFor many codes this is sufficient, but you can do better. Fortunately python (Numpy) comes with a number of better versions as in built packages, so we can benefit from the expertise of others in our computational physics codes. \nGood Pseudo-Random Number Generators\nAll pseudo-random number generators (PRNG) should possess a few key properties. 
Namely, they should \n\n\nbe fast and not memory intensive\n\n\nbe able to reproduce a given stream of random numbers (for debugging/verification of computer programs or so we can use identical numbers to compare different systems)\n\n\nbe able to produce several different independent “streams” of random numbers\n\n\nhave a long periodicity, so that they do not wrap around and produce the same numbers again within a reasonably long window.\n\n\nTo obtain a sequence of pseudo-random numbers: \n\ninitilize the state of the generator with a truly random \"seed\" value\ngenerator uses that seed to create an initial \"state\", then produces a pseudo-random sequence of numbers from that state. \n\nBut note: \n* The sequence will eventually repeat when the generator's state returns to that initial one.\n The length of the sequence of non-repeating numbers is the period* of the PRNG. \nIt is relatively easy to build PRNGs with periods long enough for many practical applications, but one must be cautious in applying PRNG's to problems that require very large quantities of random numbers.\nAlmost all languages and simulation packages have good built-in generators. In Python, we can use the NumPy random library, which is based on the Mersenne-Twister algorithm developed in 1997.\nPython Random Number Library",
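"A minimal sketch of one possible answer to the class exercise above, before switching to NumPy's own generators (this is only one way to do it; the variable names are not prescribed). It iterates the recurrence 100 times starting from x = 1 and plots the sequence, which should look uniformly scattered between 0 and m-1.",
"# One possible solution sketch for the class exercise (not the only approach)\n# Linear congruential generator: x' = (a*x + c) mod m\na = 1664525\nc = 1013904223\nm = 4294967296\n\nx = 1\nvalues = []\nfor i in range(100):\n    x = (a*x + c) % m\n    values.append(x)\n\n# Plot the 100 pseudo-random values in sequence\nplt.plot(values, 'o')\nplt.xlabel('iteration')\nplt.ylabel('x')\nplt.show()",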
"#Review the documentation for NumPy's random module:\nnp.random?",
"Some basic functions to point out (we'll get to others in a bit): \n\nrandom() - Uniformly distributed floats over [0, 1]. Will include zero, but not one. If you inclue a number, n in the bracket you get n random floats. \nrandint(n,m) - A single random integer from n to m-1",
"#print 5 uniformly distributed numbers between 0 and 1\nprint(np.random.random(5))\n\n#print another 5 - should be different\nprint(np.random.random(5))\n\n#print 5 uniformly distributed integers between 1 and 10\nprint(np.random.randint(1,11,5))\n\n#print another 5 - should be different\nprint(np.random.randint(1,11,5))",
"Notice you have to use 1-11 for the range. Why?",
"#If you want to save a random number for future use: \n\nz=np.random.random()\n\nprint(\"The number is \",z)\n#Rerun random\nprint(np.random.random())\n\nprint(\"The number is still\",z)\n",
"In Class Exercise - Rolling Dice\n\n\nWrite a programme that generates and prints out two random numbers between 1 and 6. This simulates the rolling of two dice.\n\n\nNow modify the programme to simulate making 2 million rolls of two dice. What fraction of the time do you get double six? \n\n\nExtension: Plot a histogram of the frequency of the total of the two dice over the 2 million rolls.\n\n\nSeeded Random Numbers\nSometimes in computational physics we want to generate the same series of pseudo-random numbers many times. This can be done with 'seeds'.",
"np.random.seed(42)\nfor i in range(4):\n print(np.random.random())\n\nnp.random.seed(42)\nfor i in range(4):\n print(np.random.random())\n\nnp.random.seed(39)\nfor i in range(4):\n print(np.random.random())",
"You might want to do this for: \n\nDebugging\nCode repeatability (i.e. when you hand in code for marking!). \n\nCoding For Probability\nIn some circumstances you will want to write code which simulates various events, each of which happen with a probability, $p$. \nThis can be coded with random numbers. You generate a random number between zero and 1, and allow the event to occur if that number is greater than $p$. \nFor example, consider a biased coin, which returns a head 20% of the time:",
"for i in range(10):\n if np.random.random()<0.2: \n print(\"Heads\")\n else:\n print(\"Tails\")",
"In Class Exercise: Radioactive Decay\nSimulate the decay of 1000 thallium atoms over time, using random number generators to mimick the random process of atomic decay. \nThallium-208 decays to stable lead (208) with a half life of 3.053 minutes.\nThe standard equation of radioactive decay (for the number of atoms in the sample as a function of time) is: \n$$N(t) = N(0) 2^{-t/\\tau}$$\nwhere tau is the half life, N(0) is the number of atoms at t=0. Notice that both t and tau must be in the same units. \nThe fraction of atoms which have not yet decayed at any time t, is then: \n$$\\frac{N(t)}{N(0)} = 2^{-t/\\tau}$$\nSo then the probability that any given atom has decayed by time t (which is the same as the fraction of atoms that have decayed by that time) is: \n$$p(t) = 1 - 2^{-t/\\tau}$$\nUse time steps of 1 second and make a plot of the number of thallium and lead atoms as a function of time until 20 minutes have passed. Overplot the half-life of thallium. \nNon-Uniform Random Numbers\nThe radioactive decay problem we did in class is an example of a problem which can be coded more efficiently if we could draw random numbers from a distribution other than uniform. \nIn radioactive decay, the probability of decay in a time step $dt$ is\n$$dp = 1 - 2^{-dt/\\tau}$$\nThis can be expressed as\n$$dp = 1 - \\exp(-\\frac{dt}{\\tau} \\ln2)$$\nand the second term can be expanded using it's Taylor expansion, to give a first order approximation of \n$$dp = \\frac{\\ln2}{\\tau} dt$$ \nIf we want to know the probability of decay between time t and dt this is then: \n$$P(t) dt = 2^{-t/\\tau} \\frac{\\ln2}{\\tau} dt$$ \nwhich is a non-uniform probability (it's larger for small $t$).\nWe could more quickly calculate the decay of a sample of N atoms by drawing random numbers from the above distribution to give decay times for each atom. \nGenerating Non-Uniform Random Numbers\nThere are at least two methods to generate non-uniform random numbers from a function like the Numpy random function which generates uniform random numbers: \n\nRejection Method\nTransformation Method\n\nIn the rejection method, you generate more random numbers than are needed, and reject them if they are above the value of the probability function you wish to sample from. \nThis is no faster than the probability method - it's basically equivalent.\nThe transformation method is faster. In the transformation method, you need to find a function $f(z)$ which converts unfirmly distribtued random numbers $z$ into random numbers with the desired non-uniform sampling. \nMost of the Numpy packages for non-uniform random numbers make use of the transformation method. For most physical processes you can make use of a pre-written package which generates random numbers with the desired shape, but in the interests of education, we will work through a couple of examples. \nTransformation Method\nSuppose you have a uniformly distributed random number, $z$, drawn from 0 to 1 (and zero elsewhere). You want a number, $x$ with a probability distribution $p(x)$, and there is going to be some function which relates $x$ and $z$. \nIt must be the case that \n$$p(x) dx = q(z) dz $$\n(where $q(z)$ is 1 in the interval zero to 1, and zero elsewhere). \nIf we integrate this equation we get\n$$ \\int_{-\\infty}^{x(z)} p(x) dx = \\int_0^z dz $$\nso then \n$$ z = \\int_{-\\infty}^{x(z)} p(x) dx $$\nwill tell us the function needed to transform the uniform random numbers to random numbers from the distribution p(x). 
\nGenerating Numbers with the Exponential distribution\nThe exponential probability distribution is\n$$p(x) = \\mu e^{-\\mu x}$$\nwhere the $\\mu$ is a normalization factor (so that the probability integrates to one from zero to infinity). \nThis is the radioactive decay probability in the case that $\\mu = \\ln2/\\tau$. \nPut this into the transformation equation above we get\n$$ z = \\int_{-\\infty}^{x(z)} \\mu e^{-\\mu x} dx = 1 - e^{-\\mu x} $$\nwhich can be used to generate numbers from the exponential distribution. \nHomework Exercise\nRedo the radioactive decay problem, but now make use of non-uniform random numbers, to draw N random numbers representing the decay time of N atoms. \nYou can make the plot much more quickly, with a bit of help from the sort function in numpy.\nExtension: Also try the rejection method. Notice how similar the code is to the first way we did the problem. \nExtension Exercise/Homework\nModelling Brownian Motion\nSimulate the Brownian motion of a particle in two dimension, and make an animation of the output. \nSet up the particle to be confined to a square grid of size $L \\times L$ (where $L$ is an odd number), and represent it's position using a two-dimensional array of integers. \nThe particles starts in the middle of the grid. At each time step it moves randomly one lattice point in any direction. This is called a 'random walk'. The particle will \"bounce\" off the walls of the grid (ie. at the borders it cannot move in the direction which would take it outside of the grid). \nMake an animation of the path of the particle over 1 million time steps."
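"A minimal sketch of the homework using the transformation method (one possible implementation, not the official solution). Inverting $z = 1 - e^{-\\mu x}$ gives $x = -\\ln(1-z)/\\mu$, so a decay time can be drawn directly for each of the 1000 atoms; sorting the times then gives the number of atoms remaining as a function of time.",
"# Sketch: radioactive decay using the transformation method (homework above)\nN = 1000                  # number of thallium atoms\ntau = 3.053*60            # half-life in seconds\nmu = np.log(2)/tau        # decay constant of the exponential distribution\n\n# Transformation method: z uniform in [0, 1)  ->  x = -ln(1 - z)/mu\nz = np.random.random(N)\ndecay_times = -np.log(1.0 - z)/mu\n\n# Sort the decay times; after the i-th time, i atoms have decayed\ndecay_times.sort()\nn_decayed = np.arange(1, N + 1)\n\nplt.plot(decay_times/60, N - n_decayed, label='Thallium')\nplt.plot(decay_times/60, n_decayed, label='Lead')\nplt.axvline(3.053, linestyle='--', label='half-life')\nplt.xlabel('time (minutes)')\nplt.ylabel('number of atoms')\nplt.legend()\nplt.show()"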
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rochefort-lab/fissa
|
docs/examples/SIMA example.ipynb
|
gpl-3.0
|
[
"Using FISSA with SIMA\nSIMA is a toolbox for motion correction and cell detection.\nHere we illustrate how to create a workflow which uses SIMA to detect cells and FISSA to extract decontaminated signals from those cells.\nReference:\nKaifosh, P., Zaremba, J. D., Danielson, N. B., Losonczy, A. SIMA: Python software for analysis of dynamic fluorescence imaging data. Frontiers in neuroinformatics, 8(80), 2014. doi: 10.3389/fninf.2014.00080.\nPlease note that SIMA only supports Python 3.6 and below.\nImport packages",
"# FISSA toolbox\nimport fissa\n\n# SIMA toolbox\nimport sima\nimport sima.segment\n\n# File operations\nimport glob\n\n# For plotting our results, use numpy and matplotlib\nimport matplotlib.pyplot as plt\nimport numpy as np",
"Detecting cells with SIMA\nSetup data",
"# Define folder where tiffs are present\ntiff_folder = \"exampleData/20150529/\"\n\n# Find tiffs in folder\ntiffs = sorted(glob.glob(tiff_folder + \"/*.tif*\"))\n\n# define motion correction method\nmc_approach = sima.motion.DiscreteFourier2D()\n\n# Define SIMA dataset\nsequences = [sima.Sequence.create(\"TIFF\", tiff) for tiff in tiffs[:1]]\ntry:\n dataset = sima.ImagingDataset(sequences, \"example.sima\")\nexcept BaseException:\n dataset = sima.ImagingDataset.load(\"example.sima\")",
"Run SIMA segmentation algorithm",
"stica_approach = sima.segment.STICA(components=2)\nstica_approach.append(sima.segment.SparseROIsFromMasks())\nstica_approach.append(sima.segment.SmoothROIBoundaries())\nstica_approach.append(sima.segment.MergeOverlapping(threshold=0.5))\nrois = dataset.segment(stica_approach, \"auto_ROIs\")",
"Plot detected cells",
"# Plotting lines surrounding each of the ROIs\nplt.figure(figsize=(7, 6))\n\nfor roi in rois:\n # Plot border around cell\n plt.plot(roi.coords[0][:, 0], roi.coords[0][:, 1])\n\n# Invert the y-axis because image co-ordinates are labelled from top-left\nplt.gca().invert_yaxis()\nplt.show()",
"Extract decontaminated signals with FISSA\nFISSA needs either ImageJ ROIs or numpy arrays as inputs for the ROIs. \nSIMA outputs ROIs as numpy arrays, and can be directly read into FISSA.\nA given roi is given as\npython\nrois[i].coords[0][:, :2]\nFISSA expects rois to be provided as a list of lists\npython\n[[roiA1, roiA2, roiA3, ...]]\nSo some formatting will need to be done first.",
"rois_fissa = [roi.coords[0][:, :2] for roi in rois]\n\nrois[0].coords[0][:, :2].shape",
"We can then run FISSA on the data using the ROIs supplied by SIMA having converted them to a FISSA-compatible format, rois_fissa.",
"output_folder = \"fissa_sima_example\"\nexperiment = fissa.Experiment(tiff_folder, [rois_fissa], output_folder)\nexperiment.separate()",
"Plotting the results",
"# Fetch the colormap object for Cynthia Brewer's Paired color scheme\ncmap = plt.get_cmap(\"Paired\")\n\n# Select which trial (TIFF index) to plot\ntrial = 0\n\n# Plot the mean image and ROIs from the FISSA experiment\nplt.figure(figsize=(7, 7))\nplt.imshow(experiment.means[trial], cmap=\"gray\")\n\nfor i_roi in range(len(experiment.roi_polys)):\n # Plot border around ROI\n for contour in experiment.roi_polys[i_roi, trial][0]:\n plt.plot(\n contour[:, 1],\n contour[:, 0],\n color=cmap((i_roi * 2 + 1) % cmap.N),\n )\n\nplt.show()\n\n# Plot all ROIs and trials\n\n# Get the number of ROIs and trials\nn_roi = experiment.result.shape[0]\nn_trial = experiment.result.shape[1]\n\n# Find the maximum signal intensities for each ROI\nroi_max_raw = [\n np.max([np.max(experiment.raw[i_roi, i_trial][0]) for i_trial in range(n_trial)])\n for i_roi in range(n_roi)\n]\nroi_max_result = [\n np.max([np.max(experiment.result[i_roi, i_trial][0]) for i_trial in range(n_trial)])\n for i_roi in range(n_roi)\n]\nroi_max = np.maximum(roi_max_raw, roi_max_result)\n\n# Plot our figure using subplot panels\nplt.figure(figsize=(16, 10))\nfor i_roi in range(n_roi):\n for i_trial in range(n_trial):\n # Make subplot axes\n i_subplot = 1 + i_trial * n_roi + i_roi\n plt.subplot(n_trial, n_roi, i_subplot)\n # Plot the data\n plt.plot(\n experiment.raw[i_roi][i_trial][0, :],\n label=\"Raw (SIMA)\",\n color=cmap((i_roi * 2) % cmap.N),\n )\n plt.plot(\n experiment.result[i_roi][i_trial][0, :],\n label=\"FISSA\",\n color=cmap((i_roi * 2 + 1) % cmap.N),\n )\n # Labels and boiler plate\n plt.ylim([-0.05 * roi_max[i_roi], roi_max[i_roi] * 1.05])\n if i_roi == 0:\n plt.ylabel(\n \"Trial {}\\n\\nSignal intensity\\n(candela per unit area)\".format(\n i_trial + 1\n )\n )\n if i_trial == 0:\n plt.legend()\n plt.title(\"ROI {}\".format(i_roi))\n if i_trial == n_trial - 1:\n plt.xlabel(\"Time (frame number)\")\n\nplt.show()",
"The figure shows the raw signal from the ROI identified by SIMA (pale), and after decontaminating with FISSA (dark).\nThe hues match the ROI locations drawn above.\nEach column shows the results from one of the ROIs detected by SIMA.\nEach row shows the results from one of the three trials."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kunbud1989/scraping-google-news-indonesia
|
2_Scraping_Content_Publisher_News_Indonesia.ipynb
|
mit
|
[
"Scraping Content Publsiher News Indonesia\nDalam tahapan ini, kita akan melakukan scraping isi dari sebuah berita.\nSilakan melakukan step pertama (1_Scraping_Google_News_Indonesia.pynb) untuk menghasilkan list_links_google_news_indonesia.txt yang selanjutnya akan menjadi acuhan kita mengambil content tersebut.\nRequirement\n\nGoose\n\nInstallation\nGoose\nsh\n$ git clone https://github.com/kunbud1989/python-goose.git\n$ cd python-goose\n$ pip install -r requirements.txt\n$ python setup.py install\nKode Program",
"from goose import Goose\nfrom pprint import pprint\nimport string\nimport datetime\n\nclass scrap_news(object):\n def __init__(self, url):\n self.url = url\n def scrap_publisher_news(self):\n g = Goose(\n {\n# 'browser_user_agent': 'Opera/9.80 (Android; Opera Mini/8.0.1807/36.1609; U; en) Presto/2.12.423 Version/12.16',\n 'use_meta_language': False, \n 'target_language':'id',\n 'enable_image_fetching': False,\n 'http_timeout': 2,\n }\n )\n article = g.extract(url=self.url)\n\n content = article.cleaned_text\n printable = set(string.printable)\n content = filter(lambda x: x in printable, content)\n\n title = article.title\n title = filter(lambda x: x in printable, title)\n if len(content) < 2 :\n article = g.extract(article.amphtml)\n content = article.cleaned_text\n content = filter(lambda x: x in printable, content)\n else:\n article = article \n\n if len(content) > 0 :\n title = title\n content = content.replace('\\n','')\n \n return (title, content)",
"Result\nDetik.com",
"url = '''https://news.detik.com/berita/3494173/polisi-jl-jend-sudirman-macet-karena-salju-palsu-dari-busa-air-got'''\nsn = scrap_news(url)\nresult = sn.scrap_publisher_news()\n\nprint('URL : %s' % url)\nprint('Title : %s' % result[0])\nprint('Content : %s' % result[1])",
"Kumparan",
"url = '''https://kumparan.com/kita-setara/menyingkirkan-stigma-buruk-hiv-aids'''\nsn = scrap_news(url)\nresult = sn.scrap_publisher_news()\n\nprint('URL : %s' % url)\nprint('Title : %s' % result[0])\nprint('Content : %s' % result[1])",
"Metro TV News",
"url = '''http://celebrity.okezone.com/read/2017/05/06/33/1684964/el-rumi-rayakan-kelulusan-di-puncak-gunung-penanggungan'''\nsn = scrap_news(url)\nresult = sn.scrap_publisher_news()\n\nprint('URL : %s' % url)\nprint('Title : %s' % result[0])\nprint('Content : %s' % result[1])\n\nf = open('list_links_google_news_indonesia.txt','r')\nlist_google_news = f.read().replace('[','').replace(']','').replace(\"u'\",\"\").replace(\"'\",\"\").split(',')\nset(list_google_news)\n\ncheckType = type(list_google_news)\npprint(checkType)\n\ntotal_link = len(list_google_news)\npprint(total_link)\n\nfor link in list_google_news[:5]:\n print(link)\n\nimport os\ndef generate_and_save_to_file(data):\n if(len(data[1]) > 0):\n fname = os.path.join('google_news',data[0]+'.txt')\n f = open(fname,'w')\n f.write(data[1])\n f.close()\n else:\n fname = 'CONTENT NOT VALID'\n return fname\n\nindex_link = 1\nfor link in list_google_news:\n try:\n url = '''%s''' % link\n sn = scrap_news(url)\n result = sn.scrap_publisher_news()\n fname = generate_and_save_to_file(result)\n print('%d / %d : %s' % (index_link,total_link,fname))\n except:\n print('%d / %d : %s' % (index_link,total_link,'ERROR'))\n pass\n index_link = index_link + 1\n\nos.listdir('google_news')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/ensae_teaching_cs
|
_doc/notebooks/td1a/td1a_pyramide_bigarree.ipynb
|
mit
|
[
"1A.1 - Tracer une pyramide bigarrée\nCet exercice est inspirée de l'article 2015-04-07 Motif, optimisation, biodiversité. Il s'agit de dessiner un motif.",
"%matplotlib inline\n\nfrom jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Problème\nIl faut dessiner la pyramide suivante à l'aide de matplotlib.",
"from IPython.display import Image\nImage(\"http://www.xavierdupre.fr/app/code_beatrix/helpsphinx/_images/biodiversite_tri2.png\")",
"Idée de la solution\nOn sépare le problème en deux plus petits : \n\nTrouver la position des boules dans un repère cartésien.\nChoisir la bonne couleur.\n\nLe repère est hexagonal. L'image suivante est tirée de la page wikipédia empilement compact.",
"from pyquickhelper.helpgen import NbImage\nNbImage(\"data/hexa.png\")",
"A vous."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Mithrillion/pokemon-go-simulator-solver
|
pokemon_location_simulator.ipynb
|
mit
|
[
"footprints:\nin game- 0: <10m, 1: 10-25m, 2: 25-100m, 3: 100-1000m\nfor simplicity, now assume pokemons do not disappear from radar (i.e. 3 footprints represent 50-inf)\nAlso assume there are always 9 pokemons on display initially, and their initial locations are random in a 100*100 square\nDetection: when a pokemon enters a 10m radius around the player, it is detected (tracking game success)\nThe game will report to the player the rough distance of each pokemon and their relative distance ranking\nThe goal of the tracking game is to find a particular pokemon",
"import numpy as np\nimport matplotlib.pyplot as plt\nimport random\nimport matplotlib.patches as patches\nfrom scipy.stats import gennorm\nfrom scipy.stats import gamma\n%matplotlib inline\n\ndef generate_initial_coordinates(side_length=2000, n_pokemon=9):\n pokemons = {}\n for i in range(n_pokemon):\n pokemons[i] = (random.uniform(-side_length/2, side_length/2), random.uniform(-side_length/2, side_length/2))\n return pokemons\n\npokemons = generate_initial_coordinates()\npokemons",
"Now we can visualise the relationship between player location, various detection radii and initial pokemon locations",
"plt.figure(figsize=(15,15))\n# non-target pokemons\nplt.scatter([x for x, y in [coord for coord in pokemons.values()]][1:], \n [y for x, y in [coord for coord in pokemons.values()]][1:])\n# target pokemon\nplt.scatter([x for x, y in [coord for coord in pokemons.values()]][0], \n [y for x, y in [coord for coord in pokemons.values()]][0],\n marker=\"*\", color='red', s=15)\nplt.axes().set_aspect(1)\nplt.axes().set_xlim((-1100, 1100))\nplt.axes().set_ylim((-1100, 1100))\n# player\nplt.scatter(0, 0, color='purple', s=15)\n# detection radii\ndists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}\nfor r in dists:\n plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))\nplt.show()\n\ndef distance(coord1, coord2):\n return np.sqrt((coord1[0] - coord2[0])**2 + (coord1[1] - coord2[1])**2)\n\n# this is not visible to players\ndef pokemon_distances(player_coord, pokemons):\n return {i: distance(player_coord, coord) for i, coord in pokemons.items()}\n\npokemon_distances((0, 0), pokemons)\n\ndef rank(input):\n output = [0] * len(input)\n for i, x in enumerate(sorted(range(len(input)), key=lambda y: input[y])):\n output[x] = i\n return output\n\n# player will be able to see this\ndef pokemon_rankings(player_coord, pokemons):\n dists = pokemon_distances(player_coord, pokemons)\n rankings = {}\n for i, x in enumerate(sorted(range(len(dists)), key=lambda y: dists[y])):\n rankings[x] = i\n return rankings\n\npokemon_rankings((0, 0), pokemons)\n\ndef plot_pokemon(player_coord, pokemons):\n plt.figure(figsize=(15,15))\n # non-target pokemons\n plt.scatter([x - player_coord[0] for x, y in [coord for coord in pokemons.values()]][1:], \n [y - player_coord[1] for x, y in [coord for coord in pokemons.values()]][1:])\n # target pokemon\n plt.scatter([x - player_coord[0] for x, y in [coord for coord in pokemons.values()]][0], \n [y - player_coord[1] for x, y in [coord for coord in pokemons.values()]][0],\n marker=\"*\", color='red', s=15)\n plt.axes().set_aspect(1)\n plt.axes().set_xlim((-1100, 1100))\n plt.axes().set_ylim((-1100, 1100))\n # player\n plt.scatter(0, 0 , color='purple', s=15)\n # detection radii\n dists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}\n for r in dists:\n plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))\n plt.show()\n\nplot_pokemon((0, 600), pokemons)\n\ndef footprint(distance):\n if distance < 10:\n return 0\n elif distance < 25:\n return 1\n elif distance < 100:\n return 2\n elif distance < 1000:\n return 3\n else:\n return np.nan\n\ndef footprint_counts(player_coord, pokemons):\n dists = pokemon_distances(player_coord, pokemons)\n return {i: footprint(v) for i,v in dists.items()}\n\nfootprint_counts((0, 0), pokemons)",
"movesets:\nmove up/down/left/right x (default 5) m\nrewards:\n\ntarget estimated distance increased: moderate penalty\ntarget estimated distance decreased: moderate reward\ntarget ranking increased: slight reward\ntarget ranking decreased: slight penalty\ntarget within catch distance: huge reward (game won)\n(optional) target lost (outside of detection range): large penalty\n\ncurrently new pokemon spawns / pokemons outside of inital detection range are omitted\nknown information:\n\ndistance ranking and distance estimate for all pokemons within detection range\ncurrent player location relative to starting point\nplayer moves so far (potentially equivalent to 2)\n\nlearning objective:\nefficient algorithm to reach the target pokemon",
"fig, ax = plt.subplots(4, 1)\nfig.set_figwidth(10)\nfig.set_figheight(15)\n\nbeta = 3\nx = np.linspace(-25, 25, 100)\nax[0].plot(x, gennorm.pdf(x / 10, beta), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax[0].set_title(\"no footprints\")\n\nbeta0 = 3\nx = np.linspace(-25, 50, 100)\nax[1].plot(x, gennorm.pdf((x - 17.5) / 7.5, beta0), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax[1].set_title(\"one footprint\")\n\nbeta1 = 4\n# x = np.linspace(gennorm.ppf(0.01, beta), gennorm.ppf(0.99, beta), 100)\nx = np.linspace(0, 150, 100)\nax[2].plot(x, gennorm.pdf((x - 62.5) / 37.5, beta1), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax[2].set_title(\"two footprints\")\n\nbeta2 = 6\nx = np.linspace(-250, 1500, 100)\nax[3].plot(x, gennorm.pdf((x - 550) / 430, beta2), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax[3].set_title(\"three footprints\")\n\nplt.show()",
"Assuming no knowledge of player movement history, the above graphs give us a rough probability distribution of actual distance of a pokemon given the estimated distance.\nWe may establish a relationship between player location plus n_footprints of a pokemon and the probable locations of the pokemon. Combine this with previous estimates of the location, we can improve our estimation step by step. This can be done with particle filter algorithm.\nHowever, the footprints only offer us very limited information (especially because the \"three footprints\" range is significantly longer than the other two). We must be able to infer information from the pokemon distance rankings.\nSuppose we have pokemon A and B. Let \">\" denote \"ranked before\". If A>B, we know that pokemon A is closer to the player than pokemon B. Suppose after some player movement, A and B swapped rankings. Now B>A and B is closer to the player than A. We may infer that:\n\nat the moment of swap, A and B are roughly the same distance from the player\n(following 1) in the joint A and B distribution, values where abs(distance(A) - distance(B)) is small should be more probable than values where the difference is large\n\nNow consider pokemon C and D. Both have three footprints but C>D. It is reasonable to believe that C is more likely to be in the inner range of the three-footprint radius and D is more likely to be in the outer range. If there are multiple pokemons in the three-footprint range, we may use a skewed probability distribution to estimate the locations of the highest and lowest ranking pokemons.",
"fig, ax = plt.subplots(3, 1)\nfig.set_figwidth(10)\nfig.set_figheight(15)\n\na = 2.5\nx = np.linspace(0, 1500, 100)\nax[0].plot(x, gamma.pdf((x - 75) / (450/3), a), 'r-', lw=5, alpha=0.6, label='gamma pdf')\nax[0].set_title(\"inner\")\n\nbeta2 = 6\nx = np.linspace(-250, 1500, 100)\nax[1].plot(x, gennorm.pdf((x - 550) / 450, beta2), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax[1].set_title(\"middle\")\n\na0 = 2.5\nx = np.linspace(0, 1500, 100)\nax[2].plot(x, gamma.pdf((-x + 1100) / (450/3), a0), 'r-', lw=5, alpha=0.6, label='gamma pdf')\nax[2].set_title(\"outer\")\n\nplt.show()",
"Now for pokemons with three footprints, we apply these skewed distributions to estimate their distance if they are ranked first or last (or first k / last k, k adjustable).\nThe other question remains: how do we exploit the information from rank changes?\nSuppose we have m particles (estimated locations) for pokemon A and B respectively. The total number of combinations is m*m. For each of the combinations, we calculate the distance difference of A and B. The combinations we select from this population should follow a distribution that is highest at zero and decays as the variable increases.",
"fig, ax = plt.subplots(1, 1)\nfig.set_figwidth(10)\nfig.set_figheight(10)\n\na = 1\nx = np.linspace(0, 50, 100)\nax.plot(x, gamma.pdf(x / 10, a), 'r-', lw=5, alpha=0.6, label='gennorm pdf')\nax.set_title(\"distribution of distance difference\")",
"Also we might have to consider situations where a pokemon pops in / disappears from radar. This means they are almost certainly at that point on the edge of the detection radius. Their distance should follow a more skewed distribution.",
"fig, ax = plt.subplots(2, 1)\nfig.set_figwidth(10)\nfig.set_figheight(10)\n\na0 = 1.5\nx = np.linspace(0, 1500, 100)\nax[0].plot(x, gamma.pdf((-x + 1100) / (450/6), a0), 'r-', lw=5, alpha=0.6, label='gamma pdf')\nax[0].set_title(\"appearing in radar\")\n\na = 1.5\nx = np.linspace(800, 2000, 100)\nax[1].plot(x, gamma.pdf((x - 900) / (450/6), a), 'r-', lw=5, alpha=0.6, label='gamma pdf')\nax[1].set_title(\"disappearing from radar\")\n\nplt.show()",
"Situations where we need to re-estimate the distance:\n\ninitially, when we first receive the footprint counts and rankings\nwhen the footprint count of a pokemon changes\nwhen a swap in ranking happens (with multiple swaps at the same time, treat it as pairwise swaps)\nwhen the highest / lowest ranking pokemon changes\nwhen a pokemon enters / exits radar radius\n\nIn order to help learning an optimal policy, we need a reward / fitness function to estimate how close we are to locating / reaching the target pokemon.\nbase_fitness = weighed average distance from estimated locations of the target pokemon\nsmall bonus rewards could be given to triggering new weight-updating events as that offers us more information\nreward of a move = fitness change + bonus rewards for extra information\nA \"step\" or \"move\" can be restricted to traveling up/down/left/right 5m for simplicity.\nWe can know the base fitness change of a step before taking the move, but we do not know the bonus information until after we take the move.",
"def random_particle_generation(side_length=2000, n=1000):\n particles = [0] * n\n for i in range(n):\n particles[i] = (random.uniform(-side_length/2, side_length/2), random.uniform(-side_length/2, side_length/2))\n return particles\n\ndef plot_particles(player_coord, particles):\n plt.figure(figsize=(15,15))\n plt.scatter([p[0] - player_coord[0] for p in particles], \n [p[1] - player_coord[1] for p in particles])\n plt.axes().set_aspect(1)\n plt.axes().set_xlim((-1100, 1100))\n plt.axes().set_ylim((-1100, 1100))\n # player\n plt.scatter(0, 0 , color='purple', s=15)\n # detection radii\n dists = {10:'green', 25:'blue', 100:'yellow', 1000:'red'}\n for r in dists:\n plt.axes().add_patch(plt.Circle((0,0), r, fill=False, color=dists[r]))\n plt.show()\n\nparticles = random_particle_generation(n=3000)\nplot_particles((0, 0), particles)\n\n# sample according to distance distribution\nparticle_dists = list(map(lambda c: distance(c, (0, 0)), particles))\nplt.hist(particle_dists)\n\ndef three_middle(x):\n beta = 6\n return gennorm.pdf((x - 550) / 450, beta)\nparticle_probs = list(map(three_middle, particle_dists))\ndef two_fp(x):\n beta = 4\n return gennorm.pdf((x - 62.5) / 37.5, beta)\nparticle_probs = list(map(three_middle, particle_dists))\nplt.hist(particle_probs)\n\nnew_particles = [particles[np.random.choice(range(len(particles)), p=particle_probs / sum(particle_probs))] \n for i in range(len(particles))]\nplot_particles((0, 0), new_particles)\n\n# now suppose player moved to (200, 200) and the footprint count reduced to 2\nplayer_coord = (200, 200)\nparticle_dists = list(map(lambda c: distance(c, player_coord), particles))\nparticle_probs = list(map(two_fp, particle_dists))\nnew_particles = [particles[np.random.choice(range(len(particles)), p=particle_probs / sum(particle_probs))] \n for i in range(len(particles))]\nplot_particles(player_coord, new_particles)",
"Particle Filter Algorithm\n\n(sim) initialise pokemon positions\nget initial pokemon distance level and rankings\ncreate random particles for each pokemon on radar\nassign selection probabilities to each particle based on the respective distance levels and rankings of that pokemon\nresample the particle population of each pokemon based on the selection probabilities obtained in step 4\ncalculate the weighed average location of target pokemon\nstart moving in the direction of the weighed average location of target pokemon\nlisten for distance level or ranking change events. On event, halt moving and:\nre-calculate particle selection probabilities of affected pokemon(s)\nresample the particle population of affected pokemon(s)\nre-calculate weighed average location of target pokemon\nstart moving in the direction obtained in the previous step and resume listening for events\n\n\nprogram terminates when the target pokemon is within zero-footprint distance\n\n To be continued in particle_filter_algorithm"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ianhamilton117/deep-learning
|
transfer-learning/Transfer_Learning.ipynb
|
mit
|
[
"Transfer Learning\nMost of the time you won't want to train a whole convolutional network yourself. Modern ConvNets training on huge datasets like ImageNet take weeks on multiple GPUs. Instead, most people use a pretrained network either as a fixed feature extractor, or as an initial network to fine tune. In this notebook, you'll be using VGGNet trained on the ImageNet dataset as a feature extractor. Below is a diagram of the VGGNet architecture.\n<img src=\"assets/cnnarchitecture.jpg\" width=700px>\nVGGNet is great because it's simple and has great performance, coming in second in the ImageNet competition. The idea here is that we keep all the convolutional layers, but replace the final fully connected layers with our own classifier. This way we can use VGGNet as a feature extractor for our images then easily train a simple classifier on top of that. What we'll do is take the first fully connected layer with 4096 units, including thresholding with ReLUs. We can use those values as a code for each image, then build a classifier on top of those codes.\nYou can read more about transfer learning from the CS231n course notes.\nPretrained VGGNet\nWe'll be using a pretrained network from https://github.com/machrisaa/tensorflow-vgg. Make sure to clone this repository to the directory you're working from. You'll also want to rename it so it has an underscore instead of a dash.\ngit clone https://github.com/machrisaa/tensorflow-vgg.git tensorflow_vgg\nThis is a really nice implementation of VGGNet, quite easy to work with. The network has already been trained and the parameters are available from this link. You'll need to clone the repo into the folder containing this notebook. Then download the parameter file using the next cell.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\n\nvgg_dir = 'tensorflow_vgg/'\n# Make sure vgg exists\nif not isdir(vgg_dir):\n raise Exception(\"VGG directory doesn't exist!\")\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(vgg_dir + \"vgg16.npy\"):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='VGG16 Parameters') as pbar:\n urlretrieve(\n 'https://s3.amazonaws.com/content.udacity-data.com/nd101/vgg16.npy',\n vgg_dir + 'vgg16.npy',\n pbar.hook)\nelse:\n print(\"Parameter file already exists!\")",
"Flower power\nHere we'll be using VGGNet to classify images of flowers. To get the flower dataset, run the cell below. This dataset comes from the TensorFlow inception tutorial.",
"import tarfile\n\ndataset_folder_path = 'flower_photos'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile('flower_photos.tar.gz'):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='Flowers Dataset') as pbar:\n urlretrieve(\n 'http://download.tensorflow.org/example_images/flower_photos.tgz',\n 'flower_photos.tar.gz',\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with tarfile.open('flower_photos.tar.gz') as tar:\n tar.extractall()\n tar.close()",
"ConvNet Codes\nBelow, we'll run through all the images in our dataset and get codes for each of them. That is, we'll run the images through the VGGNet convolutional layers and record the values of the first fully connected layer. We can then write these to a file for later when we build our own classifier.\nHere we're using the vgg16 module from tensorflow_vgg. The network takes images of size $224 \\times 224 \\times 3$ as input. Then it has 5 sets of convolutional layers. The network implemented here has this structure (copied from the source code):\n```\nself.conv1_1 = self.conv_layer(bgr, \"conv1_1\")\nself.conv1_2 = self.conv_layer(self.conv1_1, \"conv1_2\")\nself.pool1 = self.max_pool(self.conv1_2, 'pool1')\nself.conv2_1 = self.conv_layer(self.pool1, \"conv2_1\")\nself.conv2_2 = self.conv_layer(self.conv2_1, \"conv2_2\")\nself.pool2 = self.max_pool(self.conv2_2, 'pool2')\nself.conv3_1 = self.conv_layer(self.pool2, \"conv3_1\")\nself.conv3_2 = self.conv_layer(self.conv3_1, \"conv3_2\")\nself.conv3_3 = self.conv_layer(self.conv3_2, \"conv3_3\")\nself.pool3 = self.max_pool(self.conv3_3, 'pool3')\nself.conv4_1 = self.conv_layer(self.pool3, \"conv4_1\")\nself.conv4_2 = self.conv_layer(self.conv4_1, \"conv4_2\")\nself.conv4_3 = self.conv_layer(self.conv4_2, \"conv4_3\")\nself.pool4 = self.max_pool(self.conv4_3, 'pool4')\nself.conv5_1 = self.conv_layer(self.pool4, \"conv5_1\")\nself.conv5_2 = self.conv_layer(self.conv5_1, \"conv5_2\")\nself.conv5_3 = self.conv_layer(self.conv5_2, \"conv5_3\")\nself.pool5 = self.max_pool(self.conv5_3, 'pool5')\nself.fc6 = self.fc_layer(self.pool5, \"fc6\")\nself.relu6 = tf.nn.relu(self.fc6)\n```\nSo what we want are the values of the first fully connected layer, after being ReLUd (self.relu6). To build the network, we use\nwith tf.Session() as sess:\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\nThis creates the vgg object, then builds the graph with vgg.build(input_). Then to get the values from the layer,\nfeed_dict = {input_: images}\ncodes = sess.run(vgg.relu6, feed_dict=feed_dict)",
"import os\n\nimport numpy as np\nimport tensorflow as tf\n\nfrom tensorflow_vgg import vgg16\nfrom tensorflow_vgg import utils\n\ndata_dir = 'flower_photos/'\ncontents = os.listdir(data_dir)\nclasses = [each for each in contents if os.path.isdir(data_dir + each)]",
"Below I'm running images through the VGG network in batches.\n\nExercise: Below, build the VGG network. Also get the codes from the first fully connected layer (make sure you get the ReLUd values).",
"# Set the batch size higher if you can fit in in your GPU memory\nbatch_size = 10\ncodes_list = []\nlabels = []\nbatch = []\n\ncodes = None\n\nwith tf.Session() as sess:\n \n # TODO: Build the vgg network here\n vgg = vgg16.Vgg16()\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n with tf.name_scope(\"content_vgg\"):\n vgg.build(input_)\n \n for each in classes:\n print(\"Starting {} images\".format(each))\n class_path = data_dir + each\n files = os.listdir(class_path)\n for ii, file in enumerate(files, 1):\n # Add images to the current batch\n # utils.load_image crops the input images for us, from the center\n img = utils.load_image(os.path.join(class_path, file))\n batch.append(img.reshape((1, 224, 224, 3)))\n labels.append(each)\n \n # Running the batch through the network to get the codes\n if ii % batch_size == 0 or ii == len(files):\n \n # Image batch to pass to VGG network\n images = np.concatenate(batch)\n \n # TODO: Get the values from the relu6 layer of the VGG network\n feed_dict = {input_: images}\n codes_batch = sess.run(vgg.relu6, feed_dict=feed_dict)\n \n # Here I'm building an array of the codes\n if codes is None:\n codes = codes_batch\n else:\n codes = np.concatenate((codes, codes_batch))\n \n # Reset to start building the next batch\n batch = []\n print('{} images processed'.format(ii))\n\n# write codes to file\nwith open('codes', 'w') as f:\n codes.tofile(f)\n \n# write labels to file\nimport csv\nwith open('labels', 'w') as f:\n writer = csv.writer(f, delimiter='\\n')\n writer.writerow(labels)",
"Building the Classifier\nNow that we have codes for all the images, we can build a simple classifier on top of them. The codes behave just like normal input into a simple neural network. Below I'm going to have you do most of the work.",
"# read codes and labels from file\nimport csv\n\nwith open('labels') as f:\n reader = csv.reader(f, delimiter='\\n')\n labels = np.array([each for each in reader if len(each) > 0]).squeeze()\nwith open('codes') as f:\n codes = np.fromfile(f, dtype=np.float32)\n codes = codes.reshape((len(labels), -1))",
"Data prep\nAs usual, now we need to one-hot encode our labels and create validation/test sets. First up, creating our labels!\n\nExercise: From scikit-learn, use LabelBinarizer to create one-hot encoded vectors from the labels.",
"from sklearn.preprocessing import LabelBinarizer\nlb = LabelBinarizer()\nlabels_vecs = lb.fit_transform(labels)",
"Now you'll want to create your training, validation, and test sets. An important thing to note here is that our labels and data aren't randomized yet. We'll want to shuffle our data so the validation and test sets contain data from all classes. Otherwise, you could end up with testing sets that are all one class. Typically, you'll also want to make sure that each smaller set has the same the distribution of classes as it is for the whole data set. The easiest way to accomplish both these goals is to use StratifiedShuffleSplit from scikit-learn.\nYou can create the splitter like so:\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\nThen split the data with \nsplitter = ss.split(x, y)\nss.split returns a generator of indices. You can pass the indices into the arrays to get the split sets. The fact that it's a generator means you either need to iterate over it, or use next(splitter) to get the indices. Be sure to read the documentation and the user guide.\n\nExercise: Use StratifiedShuffleSplit to split the codes and labels into training, validation, and test sets.",
"from sklearn.model_selection import StratifiedShuffleSplit\n\nss = StratifiedShuffleSplit(n_splits=1, test_size=0.2)\n\ntrain_idx, val_idx = next(ss.split(codes, labels))\n\nhalf_val_len = int(len(val_idx)/2)\nval_idx, test_idx = val_idx[:half_val_len], val_idx[half_val_len:]\n\ntrain_x, train_y = codes[train_idx], labels_vecs[train_idx]\nval_x, val_y = codes[val_idx], labels_vecs[val_idx]\ntest_x, test_y = codes[test_idx], labels_vecs[test_idx]\n\nprint(\"Train shapes (x, y):\", train_x.shape, train_y.shape)\nprint(\"Validation shapes (x, y):\", val_x.shape, val_y.shape)\nprint(\"Test shapes (x, y):\", test_x.shape, test_y.shape)",
"If you did it right, you should see these sizes for the training sets:\nTrain shapes (x, y): (2936, 4096) (2936, 5)\nValidation shapes (x, y): (367, 4096) (367, 5)\nTest shapes (x, y): (367, 4096) (367, 5)\nClassifier layers\nOnce you have the convolutional codes, you just need to build a classfier from some fully connected layers. You use the codes as the inputs and the image labels as targets. Otherwise the classifier is a typical neural network.\n\nExercise: With the codes and labels loaded, build the classifier. Consider the codes as your inputs, each of them are 4096D vectors. You'll want to use a hidden layer and an output layer as your classifier. Remember that the output layer needs to have one unit for each class and a softmax activation function. Use the cross entropy to calculate the cost.",
"inputs_ = tf.placeholder(tf.float32, shape=[None, codes.shape[1]])\nlabels_ = tf.placeholder(tf.int64, shape=[None, labels_vecs.shape[1]])\n\n# TODO: Classifier layers and operations\n\nfc = tf.contrib.layers.fully_connected(inputs_, 256)\n \nlogits = tf.contrib.layers.fully_connected(fc, labels_vecs.shape[1], activation_fn=None)\ncross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=labels_, logits=logits)\ncost = tf.reduce_mean(cross_entropy)\n\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Operations for validation/test accuracy\npredicted = tf.nn.softmax(logits)\ncorrect_pred = tf.equal(tf.argmax(predicted, 1), tf.argmax(labels_, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))",
"Batches!\nHere is just a simple way to do batches. I've written it so that it includes all the data. Sometimes you'll throw out some data at the end to make sure you have full batches. Here I just extend the last batch to include the remaining data.",
"def get_batches(x, y, n_batches=10):\n \"\"\" Return a generator that yields batches from arrays x and y. \"\"\"\n batch_size = len(x)//n_batches\n \n for ii in range(0, n_batches*batch_size, batch_size):\n # If we're not on the last batch, grab data with size batch_size\n if ii != (n_batches-1)*batch_size:\n X, Y = x[ii: ii+batch_size], y[ii: ii+batch_size] \n # On the last batch, grab the rest of the data\n else:\n X, Y = x[ii:], y[ii:]\n # I love generators\n yield X, Y",
"Training\nHere, we'll train the network.\n\nExercise: So far we've been providing the training code for you. Here, I'm going to give you a bit more of a challenge and have you write the code to train the network. Of course, you'll be able to see my solution if you need help. Use the get_batches function I wrote before to get your batches like for x, y in get_batches(train_x, train_y). Or write your own!",
"epochs = 10\niteration = 0\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n \n sess.run(tf.global_variables_initializer())\n for e in range(epochs):\n for x, y in get_batches(train_x, train_y):\n feed = {inputs_: x,\n labels_: y}\n loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e+1, epochs),\n \"Iteration: {}\".format(iteration),\n \"Training loss: {:.5f}\".format(loss))\n iteration += 1\n \n if iteration % 5 == 0:\n feed = {inputs_: val_x,\n labels_: val_y}\n val_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Epoch: {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Validation Acc: {:.4f}\".format(val_acc))\n saver.save(sess, \"checkpoints/flowers.ckpt\")",
"Testing\nBelow you see the test accuracy. You can also see the predictions returned for images.",
"with tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: test_x,\n labels_: test_y}\n test_acc = sess.run(accuracy, feed_dict=feed)\n print(\"Test accuracy: {:.4f}\".format(test_acc))\n\n%matplotlib inline\n\nimport matplotlib.pyplot as plt\nfrom scipy.ndimage import imread",
"Below, feel free to choose images and see how the trained classifier predicts the flowers in them.",
"test_img_path = 'flower_photos/roses/10894627425_ec76bbc757_n.jpg'\ntest_img = imread(test_img_path)\nplt.imshow(test_img)\n\n# Run this cell if you don't have a vgg graph built\nif 'vgg' in globals():\n print('\"vgg\" object already exists. Will not create again.')\nelse:\n #create vgg\n with tf.Session() as sess:\n input_ = tf.placeholder(tf.float32, [None, 224, 224, 3])\n vgg = vgg16.Vgg16()\n vgg.build(input_)\n\nwith tf.Session() as sess:\n img = utils.load_image(test_img_path)\n img = img.reshape((1, 224, 224, 3))\n\n feed_dict = {input_: img}\n code = sess.run(vgg.relu6, feed_dict=feed_dict)\n \nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n \n feed = {inputs_: code}\n prediction = sess.run(predicted, feed_dict=feed).squeeze()\n\nplt.imshow(test_img)\n\nplt.barh(np.arange(5), prediction)\n_ = plt.yticks(np.arange(5), lb.classes_)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mangeshjoshi819/ml-learn-python3
|
week2/Week+2.ipynb
|
mit
|
[
"You are currently looking at version 1.0 of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the Jupyter Notebook FAQ course resource.\n\nThe Series Data Structure",
"import pandas as pd\npd.Series?\n\nanimals = ['Tiger', 'Bear', 'Moose']\npd.Series(animals)\n\nnumbers = [1, 2, 3]\npd.Series(numbers)\n\nanimals = ['Tiger', 'Bear', None]\npd.Series(animals)\n\nnumbers = [1, 2, None]\npd.Series(numbers)\n\nimport numpy as np\nnp.nan == None\n\nnp.nan == np.nan\n\nnp.isnan(np.nan)\n\nsports = {'Archery': 'Bhutan',\n 'Golf': 'Scotland',\n 'Sumo': 'Japan',\n 'Taekwondo': 'South Korea'}\ns = pd.Series(sports)\ns\n\ns.index\n\ns = pd.Series(['Tiger', 'Bear', 'Moose'], index=['India', 'America', 'Canada'])\ns\n\nsports = {'Archery': 'Bhutan',\n 'Golf': 'Scotland',\n 'Sumo': 'Japan',\n 'Taekwondo': 'South Korea'}\ns = pd.Series(sports, index=['Golf', 'Sumo', 'Hockey'])\ns",
"Querying a Series",
"sports = {'Archery': 'Bhutan',\n 'Golf': 'Scotland',\n 'Sumo': 'Japan',\n 'Taekwondo': 'South Korea'}\ns = pd.Series(sports)\ns\n\ns.iloc[3]\n\ns.loc['Golf']\n\ns[3]\n\ns['Golf']\n\nsports = {99: 'Bhutan',\n 100: 'Scotland',\n 101: 'Japan',\n 102: 'South Korea'}\ns = pd.Series(sports)\n\ns[0] #This won't call s.iloc[0] as one might expect, it generates an error instead\n\ns = pd.Series([100.00, 120.00, 101.00, 3.00])\ns\n\ntotal = 0\nfor item in s:\n total+=item\nprint(total)\n\nimport numpy as np\n\ntotal = np.sum(s)\nprint(total)\n\n#this creates a big series of random numbers\ns = pd.Series(np.random.randint(0,1000,10000))\ns.head()\n\nlen(s)\n\n%%timeit -n 100\nsummary = 0\nfor item in s:\n summary+=item\n\n%%timeit -n 100\nsummary = np.sum(s)\n\ns+=2 #adds two to each item in s using broadcasting\ns.head()\n\nfor label, value in s.iteritems():\n s.set_value(label, value+2)\ns.head()\n\n%%timeit -n 10\ns = pd.Series(np.random.randint(0,1000,10000))\nfor label, value in s.iteritems():\n s.loc[label]= value+2\n\n%%timeit -n 10\ns = pd.Series(np.random.randint(0,1000,10000))\ns+=2\n\n\ns = pd.Series([1, 2, 3])\ns.loc['Animal'] = 'Bears'\ns\n\noriginal_sports = pd.Series({'Archery': 'Bhutan',\n 'Golf': 'Scotland',\n 'Sumo': 'Japan',\n 'Taekwondo': 'South Korea'})\ncricket_loving_countries = pd.Series(['Australia',\n 'Barbados',\n 'Pakistan',\n 'England'], \n index=['Cricket',\n 'Cricket',\n 'Cricket',\n 'Cricket'])\nall_countries = original_sports.append(cricket_loving_countries)\n\noriginal_sports\n\ncricket_loving_countries\n\nall_countries\n\nall_countries.loc['Cricket']",
"The DataFrame Data Structure",
"import pandas as pd\npurchase_1 = pd.Series({'Name': 'Chris',\n 'Item Purchased': 'Dog Food',\n 'Cost': 22.50})\npurchase_2 = pd.Series({'Name': 'Kevyn',\n 'Item Purchased': 'Kitty Litter',\n 'Cost': 2.50})\npurchase_3 = pd.Series({'Name': 'Vinod',\n 'Item Purchased': 'Bird Seed',\n 'Cost': 5.00})\ndf = pd.DataFrame([purchase_1, purchase_2, purchase_3], index=['Store 1', 'Store 1', 'Store 2'])\ndf.head()\n\ndf.loc['Store 2']\n\ntype(df.loc['Store 2'])\n\ndf.loc['Store 1']\n\ndf.loc['Store 1', 'Cost']\n\ndf.T\n\ndf.T.loc['Cost']\n\ndf['Cost']\n\ndf.loc['Store 1']['Cost']\n\ndf.loc[:,['Name', 'Cost']]\n\ndf.drop('Store 1')\n\ndf\n\ncopy_df = df.copy()\ncopy_df = copy_df.drop('Store 1')\ncopy_df\n\ncopy_df.drop?\n\ndel copy_df['Name']\ncopy_df\n\ndf['Location'] = None\ndf",
"Dataframe Indexing and Loading",
"costs = df['Cost']\ncosts\n\ncosts+=2\ncosts\n\ndf\n\n!cat olympics.csv\n\ndf = pd.read_csv('olympics.csv')\ndf.head()\n\ndf = pd.read_csv('olympics.csv', index_col = 0, skiprows=1)\ndf.head()\n\ndf.columns\n\nfor col in df.columns:\n if col[:2]=='01':\n df.rename(columns={col:'Gold' + col[4:]}, inplace=True)\n if col[:2]=='02':\n df.rename(columns={col:'Silver' + col[4:]}, inplace=True)\n if col[:2]=='03':\n df.rename(columns={col:'Bronze' + col[4:]}, inplace=True)\n if col[:1]=='№':\n df.rename(columns={col:'#' + col[1:]}, inplace=True) \n\ndf.head()",
"Querying a DataFrame",
"df['Gold'] > 0\n\nonly_gold = df.where(df['Gold'] > 0)\nonly_gold.head()\n\nonly_gold['Gold'].count()\n\ndf['Gold'].count()\n\nonly_gold = only_gold.dropna()\nonly_gold.head()\n\nonly_gold = df[df['Gold'] > 0]\nonly_gold.head()\n\nlen(df[(df['Gold'] > 0) | (df['Gold.1'] > 0)])\n\ndf[(df['Gold.1'] > 0) & (df['Gold'] == 0)]",
"Indexing Dataframes",
"df.head()\n\ndf['country'] = df.index\ndf = df.set_index('Gold')\ndf.head()\n\ndf = df.reset_index()\ndf.head()\n\ndf = pd.read_csv('census.csv')\ndf.head()\n\ndf['SUMLEV'].unique()\n\ndf=df[df['SUMLEV'] == 50]\ndf.head()\n\ncolumns_to_keep = ['STNAME',\n 'CTYNAME',\n 'BIRTHS2010',\n 'BIRTHS2011',\n 'BIRTHS2012',\n 'BIRTHS2013',\n 'BIRTHS2014',\n 'BIRTHS2015',\n 'POPESTIMATE2010',\n 'POPESTIMATE2011',\n 'POPESTIMATE2012',\n 'POPESTIMATE2013',\n 'POPESTIMATE2014',\n 'POPESTIMATE2015']\ndf = df[columns_to_keep]\ndf.head()\n\ndf = df.set_index(['STNAME', 'CTYNAME'])\ndf.head()\n\ndf.loc['Michigan', 'Washtenaw County']\n\ndf.loc[ [('Michigan', 'Washtenaw County'),\n ('Michigan', 'Wayne County')] ]",
"Missing values",
"df = pd.read_csv('log.csv')\ndf\n\ndf.fillna?\n\ndf = df.set_index('time')\ndf = df.sort_index()\ndf\n\ndf = df.reset_index()\ndf = df.set_index(['time', 'user'])\ndf\n\ndf = df.fillna(method='ffill')\ndf.head()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
transcranial/keras-js
|
notebooks/layers/convolutional/SeparableConv2D.ipynb
|
mit
|
[
"import numpy as np\nfrom keras.models import Model\nfrom keras.layers import Input\nfrom keras.layers.convolutional import SeparableConv2D\nfrom keras import backend as K\nimport json\nfrom collections import OrderedDict\n\ndef format_decimal(arr, places=6):\n return [round(x * 10**places) / 10**places for x in arr]\n\nDATA = OrderedDict()",
"SeparableConv2D\n[convolutional.SeparableConv2D.0] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=1, activation='linear', use_bias=True",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(1,1),\n padding='valid', data_format='channels_last',\n depth_multiplier=1, activation='linear', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(160)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.0'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.1] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=True",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(1,1),\n padding='valid', data_format='channels_last',\n depth_multiplier=2, activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(161)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.1'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.2] 16 3x3 filters on 5x5x4 input, strides=(1,1), padding='valid', data_format='channels_last', depth_multiplier=3, activation='relu', use_bias=True",
"data_in_shape = (5, 5, 4)\nconv = SeparableConv2D(16, (3,3), strides=(1,1),\n padding='valid', data_format='channels_last',\n depth_multiplier=3, activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(162)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.2'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.3] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='valid', data_format='channels_last', depth_multiplier=1, activation='relu', use_bias=True",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(2,2),\n padding='valid', data_format='channels_last',\n depth_multiplier=1, activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(163)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.3'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.4] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', depth_multiplier=1, activation='relu', use_bias=True",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(1,1),\n padding='same', data_format='channels_last',\n depth_multiplier=1, activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(164)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.4'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.5] 4 3x3 filters on 5x5x2 input, strides=(1,1), padding='same', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=False",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(1,1),\n padding='same', data_format='channels_last',\n depth_multiplier=2, activation='relu', use_bias=False)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(165)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\n# print('b shape:', weights[2].shape)\n# print('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.5'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"[convolutional.SeparableConv2D.6] 4 3x3 filters on 5x5x2 input, strides=(2,2), padding='same', data_format='channels_last', depth_multiplier=2, activation='relu', use_bias=True",
"data_in_shape = (5, 5, 2)\nconv = SeparableConv2D(4, (3,3), strides=(2,2),\n padding='same', data_format='channels_last',\n depth_multiplier=2, activation='relu', use_bias=True)\n\nlayer_0 = Input(shape=data_in_shape)\nlayer_1 = conv(layer_0)\nmodel = Model(inputs=layer_0, outputs=layer_1)\n\n# set weights to random (use seed for reproducibility)\nweights = []\nfor w in model.get_weights():\n np.random.seed(166)\n weights.append(2 * np.random.random(w.shape) - 1)\nmodel.set_weights(weights)\nprint('depthwise_kernel shape:', weights[0].shape)\nprint('depthwise_kernel:', format_decimal(weights[0].ravel().tolist()))\nprint('pointwise_kernel shape:', weights[1].shape)\nprint('pointwise_kernel:', format_decimal(weights[1].ravel().tolist()))\nprint('b shape:', weights[2].shape)\nprint('b:', format_decimal(weights[2].ravel().tolist()))\n\ndata_in = 2 * np.random.random(data_in_shape) - 1\nresult = model.predict(np.array([data_in]))\ndata_out_shape = result[0].shape\ndata_in_formatted = format_decimal(data_in.ravel().tolist())\ndata_out_formatted = format_decimal(result[0].ravel().tolist())\nprint('')\nprint('in shape:', data_in_shape)\nprint('in:', data_in_formatted)\nprint('out shape:', data_out_shape)\nprint('out:', data_out_formatted)\n\nDATA['convolutional.SeparableConv2D.6'] = {\n 'input': {'data': data_in_formatted, 'shape': data_in_shape},\n 'weights': [{'data': format_decimal(w.ravel().tolist()), 'shape': w.shape} for w in weights],\n 'expected': {'data': data_out_formatted, 'shape': data_out_shape}\n}",
"export for Keras.js tests",
"import os\n\nfilename = '../../../test/data/layers/convolutional/SeparableConv2D.json'\nif not os.path.exists(os.path.dirname(filename)):\n os.makedirs(os.path.dirname(filename))\nwith open(filename, 'w') as f:\n json.dump(DATA, f)\n\nprint(json.dumps(DATA))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
jastarex/DeepLearningCourseCodes
|
01_TF_basics_and_linear_regression/linear_regression_mx.ipynb
|
apache-2.0
|
[
"1 Environment, 环境 \n2 Hyper Parameters, 超参数 \n3 Training Data, 训练数据 \n4 Prepare for Training, 训练准备 \n 4.1 mx Graph Input, mxnet图输入 \n 4.2 Construct a linear model, 构造线性模型 \n 4.3 Mean squared error, 损失函数:均方差 \n5 Start training, 开始训练 \n6 Regression result, 回归结果\n\nEnvironment, 环境",
"from __future__ import print_function\nimport mxnet as mx\nfrom mxnet import nd, autograd\nimport numpy\nimport matplotlib.pyplot as plt\n\nmx.random.seed(1)",
"Hyper Parameters, 超参数",
"learning_rate = 0.01\ntraining_epochs = 1000\nsmoothing_constant = 0.01\ndisplay_step = 50\nctx = mx.cpu()",
"Training Data, 训练数据",
"train_X = numpy.asarray([3.3, 4.4, 5.5, 6.71, 6.93, 4.168, 9.779, 6.182, 7.59, 2.167,\n 7.042, 10.791, 5.313, 7.997, 5.654, 9.27,3.1])\ntrain_Y = numpy.asarray([1.7, 2.76, 2.09, 3.19, 1.694, 1.573, 3.366, 2.596, 2.53, 1.221,\n 2.827, 3.465, 1.65, 2.904, 2.42, 2.94, 1.3])\nn_samples = train_X.shape[0]",
"Prepare for Training, 训练准备\nmx Graph Input, mxnet图输入",
"# Set model weights,初始化网络模型的权重\nW = nd.random_normal(shape=1)\nb = nd.random_normal(shape=1)\n\nparams = [W, b]\nfor param in params:\n param.attach_grad()",
"Construct a linear model, 构造线性模型",
"def net(X):\n return X*W + b",
"Mean squared error, 损失函数:均方差",
"# Mean squared error,损失函数:均方差\ndef square_loss(yhat, y):\n return nd.mean((yhat - y) ** 2)\n\n# Gradient descent, 优化方式:梯度下降\ndef SGD(params, lr):\n for param in params:\n param[:] = param - lr * param.grad",
"Start training, 开始训练",
"# Fit training data\ndata = nd.array(train_X)\nlabel = nd.array(train_Y)\nlosses = []\nmoving_loss = 0\nniter = 0\n\nfor e in range(training_epochs):\n with autograd.record():\n output = net(data)\n loss = square_loss(output, label)\n loss.backward()\n SGD(params, learning_rate)\n\n ##########################\n # Keep a moving average of the losses\n ##########################\n niter +=1\n curr_loss = nd.mean(loss).asscalar()\n moving_loss = (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss\n\n # correct the bias from the moving averages\n est_loss = moving_loss/(1-(1-smoothing_constant)**niter)\n\n losses.append(est_loss)\n if (e + 1) % display_step == 0:\n print(\"Epoch:\", '%04d' % (e), \"cost=\", \"{:.9f}\".format(curr_loss), \"W=\", W.asnumpy()[0], \"b=\", b.asnumpy()[0])",
"Regression result, 回归结果",
"def plot(losses, X, Y, n_samples=10):\n xs = list(range(len(losses)))\n f, (fg1, fg2) = plt.subplots(1, 2)\n fg1.set_title('Loss during training')\n fg1.plot(xs, losses, '-r')\n fg2.set_title('Estimated vs real function')\n fg2.plot(X.asnumpy(), net(X).asnumpy(), 'or', label='Estimated')\n fg2.plot(X.asnumpy(), Y.asnumpy(), '*g', label='Real')\n fg2.legend()\n plt.show()\n \nplot(losses, data, label)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
diego0020/va_course_2015
|
AstroML/notebooks/07_classification_example.ipynb
|
mit
|
[
"Classification Example\nYou'll need to modify the DATA_HOME variable to the location of the datasets. \nIn this tutorial we'll use the colors of over 700,000 stars and quasars from the \nSloan Digital Sky Survey. 500,000 of them are training data, spectroscopically \nidentified as stars or quasars. The remaining 200,000\nhave been classified based on their photometric colors.",
"import os\nDATA_HOME = os.path.abspath('C:/temp/AstroML/data/sdss_colors/')",
"Here we will use a Naive Bayes estimator to classify the objects.\nFirst, we will construct our training data and test data arrays:",
"import numpy as np\n\ntrain_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class_train.npy'))\ntest_data = np.load(os.path.join(DATA_HOME, 'sdssdr6_colors_class.200000.npy'))",
"The data is stored as a record array, which is a convenient format for\ncollections of labeled data:",
"print(train_data.dtype.names)\n\nprint(train_data['u-g'].shape)",
"Now we must put these into arrays of shape (n_samples, n_features)\nin order to pass them to routines in scikit-learn. Training samples\nwith zero-redshift are stars, while samples with positive redshift are quasars:",
"X_train = np.vstack([train_data['u-g'],\n train_data['g-r'],\n train_data['r-i'],\n train_data['i-z']]).T\ny_train = (train_data['redshift'] > 0).astype(int)\n\nX_test = np.vstack([test_data['u-g'],\n test_data['g-r'],\n test_data['r-i'],\n test_data['i-z']]).T\ny_test = (test_data['label'] == 0).astype(int)\n\nprint(\"training data: \") \nprint(X_train.shape)\nprint(\"test data: \") \nprint(X_test.shape)",
"Notice that we’ve set this up so that quasars have y = 1,\nand stars have y = 0. Now we’ll set up a Naive Bayes classifier.\nThis will fit a four-dimensional uncorrelated gaussian to each\ndistribution, and from these gaussians quickly predict the label\nfor a test point:",
"from sklearn import naive_bayes\ngnb = naive_bayes.GaussianNB()\ngnb.fit(X_train, y_train)\ny_pred = gnb.predict(X_test)",
"Let’s check our accuracy. This is the fraction of labels that are correct:",
"accuracy = float(np.sum(y_test == y_pred)) / len(y_test)\nprint(accuracy)",
"We have 61% accuracy. Not very good. But we must be careful here:\nthe accuracy does not always tell the whole story. In our data,\nthere are many more stars than quasars",
"print(np.sum(y_test == 0))\n\nprint(np.sum(y_test == 1))",
"Stars outnumber Quasars by a factor of 14 to 1. In cases like this,\nit is much more useful to evaluate the fit based on precision and\nrecall. Because there are many fewer quasars than stars, we’ll call\na quasar a positive label and a star a negative label. The precision\nasks what fraction of positively labeled points are correctly labeled:\n$\\mathrm{precision = \\frac{True\\ Positives}{True\\ Positives + False\\ Positives}}$\nThe recall asks what fraction of positive samples are correctly identified:\n$\\mathrm{recall = \\frac{True\\ Positives}{True\\ Positives + False\\ Negatives}}$\nWe can calculate this for our results as follows:",
"TP = np.sum((y_pred == 1) & (y_test == 1)) # true positives\nFP = np.sum((y_pred == 1) & (y_test == 0)) # false positives\nFN = np.sum((y_pred == 0) & (y_test == 1)) # false negatives\nprint(\"precision:\") \nprint(TP / float(TP + FP))\nprint(\"recall: \") \nprint(TP / float(TP + FN))",
"For convenience, these can be computed using the tools in the metrics sub-package of scikit-learn:",
"from sklearn import metrics\nprint(\"precision:\") \nprint(metrics.precision_score(y_test, y_pred))\nprint(\"recall: \") \nprint(metrics.recall_score(y_test, y_pred))",
"Precision and Recall tell different stories about the performance of the classifier. Ideally one would try to create a classifier with a high precision and high recall but this is not always possible, and sometimes raising the precision will decrease the recall or viceversa (why?).\nThink about situations when you'll want a high precision classifier even if the recall is poor, and viceversa.\nAnother useful metric is the F1 score, which gives a single score based on the precision and recall for the class:\n$\\mathrm{F1 = 2\\frac{precision * recall}{precision + recall}}$\nIn a perfect classification, the precision, recall, and F1 score are all equal to 1.",
"print(\"F1 score:\") \nprint(metrics.f1_score(y_test, y_pred))",
"For convenience, sklearn.metrics provides a function that computes all\nof these scores, and returns a nicely formatted string. For example:",
"print(metrics.classification_report(y_test, y_pred, target_names=['Stars', 'QSOs']))",
"We see that for Gaussian Naive Bayes, our QSO recall is fairly good:\nwe are correctly identifying 95% of all quasars. The precision, on the\nother hand, is much worse. Of the points we label quasars, only 14% of\nthem are correctly labeled. This low precision leads to an F1-score of\nonly 0.25. This is not an optimal classification of our data.\nApparently Naive Bayes is a bit too naive for this problem, so lets check \nwith some other classifiers. Remember that the documentation is in http://scikit-learn.org/stable/supervised_learning.html\nReplace the classifier with a DecisionTreeClassifier and compare the results with the Naive Bayes. Which metric would you choose for doing the comparison? \nNow lets try with a Random Forest, that is a ensemble of a number of Decision Trees. Replace the classifier with a RandomForestClassifier and compare the results. \nWhich parameters can be adjusted in this classifier? Experiment with the most important parameters and try to create a better classifier"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.16/_downloads/plot_compute_mne_inverse_epochs_in_label.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Compute MNE-dSPM inverse solution on single epochs\nCompute dSPM inverse solution on single trial epochs restricted\nto a brain label.",
"# Author: Alexandre Gramfort <alexandre.gramfort@telecom-paristech.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne.datasets import sample\nfrom mne.minimum_norm import apply_inverse_epochs, read_inverse_operator\nfrom mne.minimum_norm import apply_inverse\n\nprint(__doc__)\n\ndata_path = sample.data_path()\nfname_inv = data_path + '/MEG/sample/sample_audvis-meg-oct-6-meg-inv.fif'\nfname_raw = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nfname_event = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nlabel_name = 'Aud-lh'\nfname_label = data_path + '/MEG/sample/labels/%s.label' % label_name\n\nevent_id, tmin, tmax = 1, -0.2, 0.5\n\n# Using the same inverse operator when inspecting single trials Vs. evoked\nsnr = 3.0 # Standard assumption for average data but using it for single trial\nlambda2 = 1.0 / snr ** 2\n\nmethod = \"dSPM\" # use dSPM method (could also be MNE or sLORETA)\n\n# Load data\ninverse_operator = read_inverse_operator(fname_inv)\nlabel = mne.read_label(fname_label)\nraw = mne.io.read_raw_fif(fname_raw)\nevents = mne.read_events(fname_event)\n\n# Set up pick list\ninclude = []\n\n# Add a bad channel\nraw.info['bads'] += ['EEG 053'] # bads + 1 more\n\n# pick MEG channels\npicks = mne.pick_types(raw.info, meg=True, eeg=False, stim=False, eog=True,\n include=include, exclude='bads')\n# Read epochs\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, picks=picks,\n baseline=(None, 0), reject=dict(mag=4e-12, grad=4000e-13,\n eog=150e-6))\n\n# Get evoked data (averaging across trials in sensor space)\nevoked = epochs.average()\n\n# Compute inverse solution and stcs for each epoch\n# Use the same inverse operator as with evoked data (i.e., set nave)\n# If you use a different nave, dSPM just scales by a factor sqrt(nave)\nstcs = apply_inverse_epochs(epochs, inverse_operator, lambda2, method, label,\n pick_ori=\"normal\", nave=evoked.nave)\n\nstc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,\n pick_ori=\"normal\")\n\nstc_evoked_label = stc_evoked.in_label(label)\n\n# Mean across trials but not across vertices in label\nmean_stc = sum(stcs) / len(stcs)\n\n# compute sign flip to avoid signal cancellation when averaging signed values\nflip = mne.label_sign_flip(label, inverse_operator['src'])\n\nlabel_mean = np.mean(mean_stc.data, axis=0)\nlabel_mean_flip = np.mean(flip[:, np.newaxis] * mean_stc.data, axis=0)\n\n# Get inverse solution by inverting evoked data\nstc_evoked = apply_inverse(evoked, inverse_operator, lambda2, method,\n pick_ori=\"normal\")\n\n# apply_inverse() does whole brain, so sub-select label of interest\nstc_evoked_label = stc_evoked.in_label(label)\n\n# Average over label (not caring to align polarities here)\nlabel_mean_evoked = np.mean(stc_evoked_label.data, axis=0)",
"View activation time-series to illustrate the benefit of aligning/flipping",
"times = 1e3 * stcs[0].times # times in ms\n\nplt.figure()\nh0 = plt.plot(times, mean_stc.data.T, 'k')\nh1, = plt.plot(times, label_mean, 'r', linewidth=3)\nh2, = plt.plot(times, label_mean_flip, 'g', linewidth=3)\nplt.legend((h0[0], h1, h2), ('all dipoles in label', 'mean',\n 'mean with sign flip'))\nplt.xlabel('time (ms)')\nplt.ylabel('dSPM value')\nplt.show()",
"Viewing single trial dSPM and average dSPM for unflipped pooling over label\nCompare to (1) Inverse (dSPM) then average, (2) Evoked then dSPM",
"# Single trial\nplt.figure()\nfor k, stc_trial in enumerate(stcs):\n plt.plot(times, np.mean(stc_trial.data, axis=0).T, 'k--',\n label='Single Trials' if k == 0 else '_nolegend_',\n alpha=0.5)\n\n# Single trial inverse then average.. making linewidth large to not be masked\nplt.plot(times, label_mean, 'b', linewidth=6,\n label='dSPM first, then average')\n\n# Evoked and then inverse\nplt.plot(times, label_mean_evoked, 'r', linewidth=2,\n label='Average first, then dSPM')\n\nplt.xlabel('time (ms)')\nplt.ylabel('dSPM value')\nplt.legend()\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
martinjrobins/hobo
|
examples/plotting/customise-pints-plots.ipynb
|
bsd-3-clause
|
[
"An example of customising Pints plots\nThis example builds on adaptive covariance MCMC and pairwise scatterplots, and shows you how to plot the parameter distributions with examples of customising the plots.\nSetting up an MCMC routine\nSee the adaptive covariance MCMC example for details.",
"import pints\nimport pints.toy as toy\nimport numpy as np\nimport matplotlib.pyplot as plt\n\n# Load a forward model\nmodel = toy.LogisticModel()\n\n# Create some toy data\nreal_parameters = [0.015, 500]\ntimes = np.linspace(0, 1000, 100)\norg_values = model.simulate(real_parameters, times)\n\n# Add noise\nnoise = 50\nvalues = org_values + np.random.normal(0, noise, org_values.shape)\nreal_parameters = np.array(real_parameters + [noise])\n\n# Get properties of the noise sample\nnoise_sample_mean = np.mean(values - org_values)\nnoise_sample_std = np.std(values - org_values)\n\n# Create an object with links to the model and time series\nproblem = pints.SingleOutputProblem(model, times, values)\n\n# Create a log-likelihood function (adds an extra parameter!)\nlog_likelihood = pints.GaussianLogLikelihood(problem)\n\n# Create a uniform prior over both the parameters and the new noise variable\nlog_prior = pints.UniformLogPrior(\n [0.01, 400, noise*0.1],\n [0.02, 600, noise*100]\n )\n\n# Create a posterior log-likelihood (log(likelihood * prior))\nlog_posterior = pints.LogPosterior(log_likelihood, log_prior)\n\n# Perform sampling using MCMC, with a single chain\nx0 = real_parameters * 1.1\nmcmc = pints.MCMCController(log_posterior, 1, [x0])\nmcmc.set_max_iterations(6000)\nmcmc.set_log_to_screen(False)",
"Plotting Pints' standard 1d histograms\nWe can now run the MCMC routine and plot the histograms of the inferred parameters.",
"print('Running...')\nchains = mcmc.run()\nprint('Done!')\n\n# Select chain 0 and discard warm-up\nchain = chains[0]\nchain = chain[3000:]\n\nimport pints.plot\n\n# Plot the 1d histogram of each parameter\npints.plot.histogram([chain])\nplt.show()",
"Customise the plots\nFor example, here our toy model is a logistic model of population growth\n$$f(t) = \\frac{k}{1 + (k/p_0 - 1)\\exp(-rt)},$$\nwhere $r$ is the growth rate, $k$ is the carrying capacity, and $p_0$ is the initial population (fixed constant). In this example, we have model parameters $r$ and $k$, together with a noise parameter $\\sigma$, to be inferred.\nWe will do the following update to the plots to make it prettier:\n1. We may want to change the labels to the names of parameters;\n2. We will add lines for (i) the mean and (ii) the 95% credible interval;\n3. We will make the figure size a bit bigger.",
"# Plot the 1d histogram of each parameter\nfig, axes = pints.plot.histogram([chain])\n\n# Customise the plots\nparameter_names = [r'$r$', r'$k$', r'$\\sigma$']\nfor i, ax in enumerate(axes):\n # (1) Add parameter name\n ax.set_xlabel(parameter_names[i])\n # (2i) Add mean\n ax.axvline(np.mean(chain[:, i]), color='k', label='Mean')\n # (2ii) Add 95% credible interval\n ax.axvline(np.percentile(chain[:, i], 2.5), color='C1', label='95% credible interval')\n ax.axvline(np.percentile(chain[:, i], 97.5), color='C1')\naxes[0].legend()\n# (3) Update figure size\nfig.set_size_inches(14, 9)\n\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
d-k-b/udacity-deep-learning
|
image-classification/dlnd_image_classification.ipynb
|
mit
|
[
"Image Classification\nIn this project, you'll classify images from the CIFAR-10 dataset. The dataset consists of airplanes, dogs, cats, and other objects. You'll preprocess the images, then train a convolutional neural network on all the samples. The images need to be normalized and the labels need to be one-hot encoded. You'll get to apply what you learned and build a convolutional, max pooling, dropout, and fully connected layers. At the end, you'll get to see your neural network's predictions on the sample images.\nGet the Data\nRun the following cell to download the CIFAR-10 dataset for python.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nfrom urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport problem_unittests as tests\nimport tarfile\n\ncifar10_dataset_folder_path = 'cifar-10-batches-py'\n\n# Use Floyd's cifar-10 dataset if present\nfloyd_cifar10_location = '/input/cifar-10/python.tar.gz'\nif isfile(floyd_cifar10_location):\n tar_gz_path = floyd_cifar10_location\nelse:\n tar_gz_path = 'cifar-10-python.tar.gz'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(tar_gz_path):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc='CIFAR-10 Dataset') as pbar:\n urlretrieve(\n 'https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz',\n tar_gz_path,\n pbar.hook)\n\nif not isdir(cifar10_dataset_folder_path):\n with tarfile.open(tar_gz_path) as tar:\n tar.extractall()\n tar.close()\n\n\ntests.test_folder_path(cifar10_dataset_folder_path)",
"Explore the Data\nThe dataset is broken into batches to prevent your machine from running out of memory. The CIFAR-10 dataset consists of 5 batches, named data_batch_1, data_batch_2, etc.. Each batch contains the labels and images that are one of the following:\n* airplane\n* automobile\n* bird\n* cat\n* deer\n* dog\n* frog\n* horse\n* ship\n* truck\nUnderstanding a dataset is part of making predictions on the data. Play around with the code cell below by changing the batch_id and sample_id. The batch_id is the id for a batch (1-5). The sample_id is the id for a image and label pair in the batch.\nAsk yourself \"What are all possible labels?\", \"What is the range of values for the image data?\", \"Are the labels in order or random?\". Answers to questions like these will help you preprocess the data and end up with better predictions.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport helper\nimport numpy as np\n\n# Explore the dataset\nbatch_id = 1\nsample_id = 5\nhelper.display_stats(cifar10_dataset_folder_path, batch_id, sample_id)",
"Implement Preprocess Functions\nNormalize\nIn the cell below, implement the normalize function to take in image data, x, and return it as a normalized Numpy array. The values should be in the range of 0 to 1, inclusive. The return object should be the same shape as x.",
"def normalize(x):\n \"\"\"\n Normalize a list of sample image data in the range of 0 to 1\n : x: List of image data. The image shape is (32, 32, 3)\n : return: Numpy array of normalize data\n \"\"\"\n _x = np.array(x)\n return _x / 256\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_normalize(normalize)",
"One-hot encode\nJust like the previous code cell, you'll be implementing a function for preprocessing. This time, you'll implement the one_hot_encode function. The input, x, are a list of labels. Implement the function to return the list of labels as One-Hot encoded Numpy array. The possible values for labels are 0 to 9. The one-hot encoding function should return the same encoding for each value between each call to one_hot_encode. Make sure to save the map of encodings outside the function.\nHint: Don't reinvent the wheel.",
"import pandas as pd\n\ncategory_list = ['airplane','automobile','bird','cat','deer','dog','frog','horse','ship','truck']\ncategory_indicies = list(range(len(category_list)))\ncategory_encodings = pd.Series(category_indicies)\ncategory_encodings = pd.get_dummies(category_encodings)\n\ndef one_hot_encode(x):\n \"\"\"\n One hot encode a list of sample labels. Return a one-hot encoded vector for each label.\n : x: List of sample Labels\n : return: Numpy array of one-hot encoded labels\n \"\"\"\n return np.array([np.array(category_encodings[label]) for label in x])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_one_hot_encode(one_hot_encode)",
"Randomize Data\nAs you saw from exploring the data above, the order of the samples are randomized. It doesn't hurt to randomize it again, but you don't need to for this dataset.\nPreprocess all the data and save it\nRunning the code cell below will preprocess all the CIFAR-10 data and save it to file. The code below also uses 10% of the training data for validation.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(cifar10_dataset_folder_path, normalize, one_hot_encode)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport pickle\nimport problem_unittests as tests\nimport helper\nimport numpy as np\n\n# Load the Preprocessed Validation data\nvalid_features, valid_labels = pickle.load(open('preprocess_validation.p', mode='rb'))",
"Build the network\nFor the neural network, you'll build each layer into a function. Most of the code you've seen has been outside of functions. To test your code more thoroughly, we require that you put each layer in a function. This allows us to give you better feedback and test for simple mistakes using our unittests before you submit your project.\n\nNote: If you're finding it hard to dedicate enough time for this course each week, we've provided a small shortcut to this part of the project. In the next couple of problems, you'll have the option to use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages to build each layer, except the layers you build in the \"Convolutional and Max Pooling Layer\" section. TF Layers is similar to Keras's and TFLearn's abstraction to layers, so it's easy to pickup.\nHowever, if you would like to get the most out of this course, try to solve all the problems without using anything from the TF Layers packages. You can still use classes from other packages that happen to have the same name as ones you find in TF Layers! For example, instead of using the TF Layers version of the conv2d class, tf.layers.conv2d, you would want to use the TF Neural Network version of conv2d, tf.nn.conv2d. \n\nLet's begin!\nInput\nThe neural network needs to read the image data, one-hot encoded labels, and dropout keep probability. Implement the following functions\n* Implement neural_net_image_input\n * Return a TF Placeholder\n * Set the shape using image_shape with batch size set to None.\n * Name the TensorFlow placeholder \"x\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_label_input\n * Return a TF Placeholder\n * Set the shape using n_classes with batch size set to None.\n * Name the TensorFlow placeholder \"y\" using the TensorFlow name parameter in the TF Placeholder.\n* Implement neural_net_keep_prob_input\n * Return a TF Placeholder for dropout keep probability.\n * Name the TensorFlow placeholder \"keep_prob\" using the TensorFlow name parameter in the TF Placeholder.\nThese names will be used at the end of the project to load your saved model.\nNote: None for shapes in TensorFlow allow for a dynamic size.",
"import tensorflow as tf\n\ndef neural_net_image_input(image_shape):\n \"\"\"\n Return a Tensor for a batch of image input\n : image_shape: Shape of the images\n : return: Tensor for image input.\n \"\"\"\n return tf.placeholder(tf.float32, [None, image_shape[0], image_shape[1], image_shape[2]], name = 'x')\n\n\ndef neural_net_label_input(n_classes):\n \"\"\"\n Return a Tensor for a batch of label input\n : n_classes: Number of classes\n : return: Tensor for label input.\n \"\"\"\n return tf.placeholder(tf.float32, [None, n_classes], name = 'y')\n\n\ndef neural_net_keep_prob_input():\n \"\"\"\n Return a Tensor for keep probability\n : return: Tensor for keep probability.\n \"\"\"\n return tf.placeholder(tf.float32, name = 'keep_prob')\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntf.reset_default_graph()\ntests.test_nn_image_inputs(neural_net_image_input)\ntests.test_nn_label_inputs(neural_net_label_input)\ntests.test_nn_keep_prob_inputs(neural_net_keep_prob_input)",
"Convolution and Max Pooling Layer\nConvolution layers have a lot of success with images. For this code cell, you should implement the function conv2d_maxpool to apply convolution then max pooling:\n* Create the weight and bias using conv_ksize, conv_num_outputs and the shape of x_tensor.\n* Apply a convolution to x_tensor using weight and conv_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\n* Add bias\n* Add a nonlinear activation to the convolution.\n* Apply Max Pooling using pool_ksize and pool_strides.\n * We recommend you use same padding, but you're welcome to use any padding.\nNote: You can't use TensorFlow Layers or TensorFlow Layers (contrib) for this layer, but you can still use TensorFlow's Neural Network package. You may still use the shortcut option for all the other layers.",
"def conv2d_maxpool(x_tensor, conv_num_outputs, conv_ksize, conv_strides, pool_ksize, pool_strides): # , dropout = None):\n \"\"\"\n Apply convolution then max pooling to x_tensor\n :param x_tensor: TensorFlow Tensor\n :param conv_num_outputs: Number of outputs for the convolutional layer\n :param conv_ksize: kernal size 2-D Tuple for the convolutional layer\n :param conv_strides: Stride 2-D Tuple for convolution\n :param pool_ksize: kernal size 2-D Tuple for pool\n :param pool_strides: Stride 2-D Tuple for pool\n : return: A tensor that represents convolution and max pooling of x_tensor\n \"\"\"\n\n weights = tf.Variable(tf.truncated_normal(\n [conv_ksize[0], conv_ksize[1], int(x_tensor.shape[3]), conv_num_outputs], \n stddev = 0.05,\n seed = 1234.56))\n biass = tf.Variable(tf.zeros(conv_num_outputs))\n \n flow = tf.nn.conv2d(x_tensor, weights, [1, conv_strides[0], conv_strides[1], 1], padding = 'SAME')\n flow = tf.nn.bias_add(flow, biass)\n flow = tf.nn.relu(flow)\n flow = tf.nn.max_pool(flow, [1, pool_ksize[0], pool_ksize[1], 1], [1, pool_strides[0], pool_strides[1], 1], padding = 'SAME')\n # flow = tf.layers.dropout(flow, rate = dropout) if dropout != None else flow\n return flow\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_con_pool(conv2d_maxpool)",
"Flatten Layer\nImplement the flatten function to change the dimension of x_tensor from a 4-D tensor to a 2-D tensor. The output should be the shape (Batch Size, Flattened Image Size). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def flatten(x_tensor):\n \"\"\"\n Flatten x_tensor to (Batch Size, Flattened Image Size)\n : x_tensor: A tensor of size (Batch Size, ...), where ... are the image dimensions.\n : return: A tensor of size (Batch Size, Flattened Image Size).\n \"\"\"\n batch = -1 # How to extract this???\n flattened_image = int(np.product([x_tensor.shape[1], x_tensor.shape[2], x_tensor.shape[3]]))\n return tf.reshape(x_tensor, [batch, flattened_image])\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_flatten(flatten)",
"Fully-Connected Layer\nImplement the fully_conn function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.",
"def fully_conn(x_tensor, num_outputs): # , dropout = None):\n \"\"\"\n Apply a fully connected layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n\n flow = tf.layers.dense(x_tensor, units = num_outputs, activation = tf.nn.relu)\n # flow = tf.layers.dropout(flow, rate = dropout) if dropout != None else flow\n return flow\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_fully_conn(fully_conn)",
"Output Layer\nImplement the output function to apply a fully connected layer to x_tensor with the shape (Batch Size, num_outputs). Shortcut option: you can use classes from the TensorFlow Layers or TensorFlow Layers (contrib) packages for this layer. For more of a challenge, only use other TensorFlow packages.\nNote: Activation, softmax, or cross entropy should not be applied to this.",
"def output(x_tensor, num_outputs):\n \"\"\"\n Apply a output layer to x_tensor using weight and bias\n : x_tensor: A 2-D tensor where the first dimension is batch size.\n : num_outputs: The number of output that the new tensor should be.\n : return: A 2-D tensor where the second dimension is num_outputs.\n \"\"\"\n\n return tf.layers.dense(x_tensor, units = num_outputs, activation = None)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_output(output)",
"Create Convolutional Model\nImplement the function conv_net to create a convolutional neural network model. The function takes in a batch of images, x, and outputs logits. Use the layers you created above to create this model:\n\nApply 1, 2, or 3 Convolution and Max Pool layers\nApply a Flatten Layer\nApply 1, 2, or 3 Fully Connected Layers\nApply an Output Layer\nReturn the output\nApply TensorFlow's Dropout to one or more layers in the model using keep_prob.",
"def conv_net(x, keep_prob):\n \"\"\"\n Create a convolutional neural network model\n : x: Placeholder tensor that holds image data.\n : keep_prob: Placeholder tensor that hold dropout keep probability.\n : return: Tensor that represents logits\n \"\"\"\n \n dropout_rate = 1 - keep_prob\n \n flow = conv2d_maxpool(x, 32, [3, 3], [1, 1], [2, 2], [2, 2])\n flow = tf.layers.dropout(flow, rate = dropout_rate)\n flow = conv2d_maxpool(flow, 64, [3, 3], [1, 1], [2, 2], [2, 2]) # , dropout = 1 - keep_prob)\n flow = tf.layers.dropout(flow, rate = dropout_rate)\n flow = conv2d_maxpool(flow, 128, [3, 3], [1, 1], [1, 1], [1, 1]) # , dropout = 1 - keep_prob)\n flow = tf.layers.dropout(flow, rate = dropout_rate)\n # flow = conv2d_maxpool(flow, 256, [1, 1], [1, 1], [1, 1], [1, 1])\n flow = flatten(flow)\n flow = fully_conn(flow, 128) # , dropout = 1 - keep_prob)\n flow = tf.layers.dropout(flow, rate = dropout_rate)\n flow = fully_conn(flow, 64) # , dropout = 1 - keep_prob)\n flow = tf.layers.dropout(flow, rate = dropout_rate)\n # flow = fully_conn(flow, 32)\n flow = output(flow, 10)\n return flow\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\n\n##############################\n## Build the Neural Network ##\n##############################\n\n# Remove previous weights, bias, inputs, etc..\ntf.reset_default_graph()\n\n# Inputs\nx = neural_net_image_input((32, 32, 3))\ny = neural_net_label_input(10)\nkeep_prob = neural_net_keep_prob_input()\n\n# Model\nlogits = conv_net(x, keep_prob)\n\n# Name logits Tensor, so that is can be loaded from disk after training\nlogits = tf.identity(logits, name='logits')\n\n# Loss and Optimizer\ncost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=y))\noptimizer = tf.train.AdamOptimizer().minimize(cost)\n\n# Accuracy\ncorrect_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(y, 1))\naccuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32), name='accuracy')\n\ntests.test_conv_net(conv_net)",
"Train the Neural Network\nSingle Optimization\nImplement the function train_neural_network to do a single optimization. The optimization should use optimizer to optimize in session with a feed_dict of the following:\n* x for image input\n* y for labels\n* keep_prob for keep probability for dropout\nThis function will be called for each batch, so tf.global_variables_initializer() has already been called.\nNote: Nothing needs to be returned. This function is only optimizing the neural network.",
"def train_neural_network(session, optimizer, keep_probability, feature_batch, label_batch):\n \"\"\"\n Optimize the session on a batch of images and labels\n : session: Current TensorFlow session\n : optimizer: TensorFlow optimizer function\n : keep_probability: keep probability\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n \"\"\"\n \n session.run(\n optimizer, \n feed_dict = \\\n {\n x: feature_batch, \n y: label_batch, \n keep_prob: keep_probability\n })\n \n return\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_train_nn(train_neural_network)",
"Show Stats\nImplement the function print_stats to print loss and validation accuracy. Use the global variables valid_features and valid_labels to calculate validation accuracy. Use a keep probability of 1.0 to calculate the loss and validation accuracy.",
"def print_stats(session, feature_batch, label_batch, cost, accuracy):\n \"\"\"\n Print information about loss and validation accuracy\n : session: Current TensorFlow session\n : feature_batch: Batch of Numpy image data\n : label_batch: Batch of Numpy label data\n : cost: TensorFlow cost function\n : accuracy: TensorFlow accuracy function\n \"\"\"\n \n _cost = session.run(cost, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n _acc = session.run(accuracy, feed_dict={x: feature_batch, y: label_batch, keep_prob: 1.0})\n _valid_acc = session.run(accuracy, feed_dict={x: valid_features, y: valid_labels, keep_prob: 1.0})\n\n print('Cost: %s, Accuracy: %s, Validation Accuracy: %s' % (_cost, _acc, _valid_acc))\n \n return\n",
"Hyperparameters\nTune the following parameters:\n* Set epochs to the number of iterations until the network stops learning or start overfitting\n* Set batch_size to the highest number that your machine has memory for. Most people set them to common sizes of memory:\n * 64\n * 128\n * 256\n * ...\n* Set keep_probability to the probability of keeping a node using dropout",
"# TODO: Tune Parameters\nepochs = 12\n# epochs = 10 -- hadn't stopped getting more accurate on validation\n# epochs = 32 -- generally stopped getting more accurate on validation set after 12-15\n# epochs = 128 -- no higher than after 12-15 epochs\nbatch_size = 256\nkeep_probability = .75",
"Train on a Single CIFAR-10 Batch\nInstead of training the neural network on all the CIFAR-10 batches of data, let's use a single batch. This should save time while you iterate on the model to get a better accuracy. Once the final validation accuracy is 50% or greater, run the model on all the data in the next section.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nprint('Checking the Training on a Single Batch...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n batch_i = 1\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)",
"Fully Train the Model\nNow that you got a good accuracy with a single CIFAR-10 batch, try it with all five batches.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_model_path = './image_classification'\n\nprint('Training...')\nwith tf.Session() as sess:\n # Initializing the variables\n sess.run(tf.global_variables_initializer())\n \n # Training cycle\n for epoch in range(epochs):\n # Loop over all batches\n n_batches = 5\n for batch_i in range(1, n_batches + 1):\n for batch_features, batch_labels in helper.load_preprocess_training_batch(batch_i, batch_size):\n train_neural_network(sess, optimizer, keep_probability, batch_features, batch_labels)\n print('Epoch {:>2}, CIFAR-10 Batch {}: '.format(epoch + 1, batch_i), end='')\n print_stats(sess, batch_features, batch_labels, cost, accuracy)\n \n # Save Model\n saver = tf.train.Saver()\n save_path = saver.save(sess, save_model_path)",
"Checkpoint\nThe model has been saved to disk.\nTest Model\nTest your model against the test dataset. This will be your final accuracy. You should have an accuracy greater than 50%. If you don't, keep tweaking the model architecture and parameters.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport tensorflow as tf\nimport pickle\nimport helper\nimport random\n\n# Set batch size if not already set\ntry:\n if batch_size:\n pass\nexcept NameError:\n batch_size = 64\n\nsave_model_path = './image_classification'\nn_samples = 4\ntop_n_predictions = 3\n\ndef test_model():\n \"\"\"\n Test the saved model against the test dataset\n \"\"\"\n\n test_features, test_labels = pickle.load(open('preprocess_test.p', mode='rb'))\n loaded_graph = tf.Graph()\n\n with tf.Session(graph=loaded_graph) as sess:\n # Load model\n loader = tf.train.import_meta_graph(save_model_path + '.meta')\n loader.restore(sess, save_model_path)\n\n # Get Tensors from loaded model\n loaded_x = loaded_graph.get_tensor_by_name('x:0')\n loaded_y = loaded_graph.get_tensor_by_name('y:0')\n loaded_keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n loaded_logits = loaded_graph.get_tensor_by_name('logits:0')\n loaded_acc = loaded_graph.get_tensor_by_name('accuracy:0')\n \n # Get accuracy in batches for memory limitations\n test_batch_acc_total = 0\n test_batch_count = 0\n \n for test_feature_batch, test_label_batch in helper.batch_features_labels(test_features, test_labels, batch_size):\n test_batch_acc_total += sess.run(\n loaded_acc,\n feed_dict={loaded_x: test_feature_batch, loaded_y: test_label_batch, loaded_keep_prob: 1.0})\n test_batch_count += 1\n\n print('Testing Accuracy: {}\\n'.format(test_batch_acc_total/test_batch_count))\n\n # Print Random Samples\n random_test_features, random_test_labels = tuple(zip(*random.sample(list(zip(test_features, test_labels)), n_samples)))\n random_test_predictions = sess.run(\n tf.nn.top_k(tf.nn.softmax(loaded_logits), top_n_predictions),\n feed_dict={loaded_x: random_test_features, loaded_y: random_test_labels, loaded_keep_prob: 1.0})\n helper.display_image_predictions(random_test_features, random_test_labels, random_test_predictions)\n\n\ntest_model()",
"Why 50-80% Accuracy?\nYou might be wondering why you can't get an accuracy any higher. First things first, 50% isn't bad for a simple CNN. Pure guessing would get you 10% accuracy. However, you might notice people are getting scores well above 80%. That's because we haven't taught you all there is to know about neural networks. We still need to cover a few more techniques.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_image_classification.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
JackDi/phys202-2015-work
|
assignments/assignment08/InterpolationEx02.ipynb
|
mit
|
[
"Interpolation Exercise 2",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\nsns.set_style('white')\n\nfrom scipy.interpolate import griddata",
"Sparse 2d interpolation\nIn this example the values of a scalar field $f(x,y)$ are known at a very limited set of points in a square domain:\n\nThe square domain covers the region $x\\in[-5,5]$ and $y\\in[-5,5]$.\nThe values of $f(x,y)$ are zero on the boundary of the square at integer spaced points.\nThe value of $f$ is known at a single interior point: $f(0,0)=1.0$.\nThe function $f$ is not known at any other points.\n\nCreate arrays x, y, f:\n\nx should be a 1d array of the x coordinates on the boundary and the 1 interior point.\ny should be a 1d array of the y coordinates on the boundary and the 1 interior point.\nf should be a 1d array of the values of f at the corresponding x and y coordinates.\n\nYou might find that np.hstack is helpful.",
"# YOUR CODE HERE\n\n\nfive_1=np.ones(11)*-5\nfour_1=np.ones(2)*-4\nthree_1=np.ones(2)*-3\ntwo_1=np.ones(2)*-2\none_1=np.ones(2)*-1\nzero=np.ones(3)*0\nfive=np.ones(11)*5\nfour=np.ones(2)*4\nthree=np.ones(2)*3\ntwo=np.ones(2)*2\none=np.ones(2)*1\ny=np.linspace(-5,5,11)\n\nnorm=np.array((-5,5))\nmid=np.array((-5,0,5))\n\nx=np.hstack((five_1,four_1,three_1,two_1,one_1,zero,one,two,three,four,five))\ny=np.hstack((y,norm,norm,norm,norm,mid,norm,norm,norm,norm,y))\n\n\ndef func(x,y):\n t=np.zeros(len(x))\n t[(len(x)/2)]=1\n return t\nf=func(x,y)\nf\n\n\n\n#The following plot should show the points on the boundary and the single point in the interior:\n\nfig=plt.figure()\nplt.scatter(x, y);\nplt.grid()\n\nassert x.shape==(41,)\nassert y.shape==(41,)\nassert f.shape==(41,)\nassert np.count_nonzero(f)==1",
"Use meshgrid and griddata to interpolate the function $f(x,y)$ on the entire square domain:\n\nxnew and ynew should be 1d arrays with 100 points between $[-5,5]$.\nXnew and Ynew should be 2d versions of xnew and ynew created by meshgrid.\nFnew should be a 2d array with the interpolated values of $f(x,y)$ at the points (Xnew,Ynew).\nUse cubic spline interpolation.",
"# YOUR CODE HERE\nfrom scipy.interpolate import interp2d \n\nxnew=np.linspace(-5,5,100)\nynew=np.linspace(-5,5,100)\n\nXnew, Ynew = np.meshgrid(xnew,ynew)\n\nFnew=griddata((x,y),f,(Xnew,Ynew),method='cubic')\n\n\n\nassert xnew.shape==(100,)\nassert ynew.shape==(100,)\nassert Xnew.shape==(100,100)\nassert Ynew.shape==(100,100)\nassert Fnew.shape==(100,100)",
"Plot the values of the interpolated scalar field using a contour plot. Customize your plot to make it effective and beautiful.",
"# YOUR CODE HERE\nplt.figure(figsize=(6,6))\ncont=plt.contour(Xnew,Ynew,Fnew, colors=('k','k'))\nplt.title(\"Contour Map of F(x)\")\nplt.ylabel(\"Y-Axis\")\nplt.xlabel('X-Axis')\n# plt.colorbar()\nplt.clabel(cont, inline=1, fontsize=10)\nplt.xlim(-5.5,5.5);\nplt.ylim(-5.5,5.5);\n# plt.grid()\n\nassert True # leave this to grade the plot"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
thehackerwithin/berkeley
|
code_examples/python_parallel/Classification_mpi4py.ipynb
|
bsd-3-clause
|
[
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Introduction to MPI and mpi4py\nMPI stands for Message Passing Interface. It is a library that allows to:\n- spawn several processes \n- adress them individually\n- have them communicate between them\nMPI can be used in many languages (C, C++, Fortran), and is extensively used in High-Performance Computing.\nmpi4py is the Python interface to MPI.\nInstallation\nconda install -c conda-forge mpi4py\n(The standard Anaconda channel for mpi4py is broken. It is necessary to use the conda-forge channel instead.)\nExample\nLet us try to get a feeling on how mpi4py works by looking at the example below:",
"%%file example.py\n\nfrom mpi4py.MPI import COMM_WORLD as communicator\nimport random\n\n# Draw one random integer between 0 and 100\ni = random.randint(0, 100)\nprint('Rank %d' %communicator.rank + ' drew a random integer: %d' %i )\n\n# Gather the results\ninteger_list = communicator.gather( i, root=0 )\nif communicator.rank == 0:\n print('\\nRank 0 gathered the results:')\n print(integer_list)\n\n! mpirun -np 3 python example.py",
"What happened?\n\n\n\"mpirun -np 3\" spawns 3 processes.\n\n\nAll processes execute the same code. (In this case, they all execute the same Python script: example.py.)\n\n\nEach process gets a unique identification number (communicator.rank).\n\n\nBased on this identifier and e.g. based on if statements, the different processes can be addressed individually, and perform different work.\n\n\nMPI provides functions (like communicator.gather) that allow processes to communicate data (even between different nodes).\n\n\nNB: There are many other communication functions, e.g.:\n- one-to-one communication (send, receive, isend, ireceive)\n- all-to-one communication (gather, reduce)\n- one-to-all communication (scatter, broadcast)\n- all-to-all (allgather, allreduce)\nSee the mpi4py documentation for more information.\n\n\nDigit classification with mpi4py\nLet us now apply mpi4py to our problem: digit classification. \nAs mentioned earlier, the parallelization, in this case, is conceptually trivial: the process should split the test data among themselves and each process should perform the prediction only its share of the data.\nOn two processes\nLet start with only with only two processes. In the script below, the data test_images is split into a smaller array small_test_images, which is different for each process.",
"%%file parallel_script.py\n\nfrom classification import nearest_neighbor_prediction\nimport numpy as np\nfrom mpi4py.MPI import COMM_WORLD as communicator\n\n# Load data\ntrain_images = np.load('./data/train_images.npy')\ntrain_labels = np.load('./data/train_labels.npy')\ntest_images = np.load('./data/test_images.npy')\n\n# Use only the data that this rank needs\nN_test = len(test_images)\nif communicator.rank == 0:\n i_start = 0\n i_end = N_test/2\nelif communicator.rank == 1:\n i_start = N_test/2\n i_end = N_test \nsmall_test_images = test_images[i_start:i_end]\n\n# Predict the results\nsmall_test_labels = nearest_neighbor_prediction(small_test_images, train_images, train_labels)\n\n# Assignement: gather the labels on one process and have it write it to a file\n# Hint: you can use np.hstack to merge a list of arrays into a single array, \n# and np.save to save an array to a file.\n\n%%time\n! mpirun -np 2 python parallel_script.py",
"The code executes faster than the serial example, because each process has a smaller amount of work, and the two processes execute this work in parallel.\nHowever, at the end of the script, each process has the corresponding label array small_test_labels. But these arrays still need to be concatenated together and written to a single file.\nAssignement: based on the previous script (example.py), use the functionalities of mpi4py to gather the labels on one rank, and have this rank write the data to a single file data/test_labels_parallel.npy.\nOn more processes\nThe above code works for two processes, but does not generalize easily to an arbitrary number of processes.\nIn order to split the initial array test_images into an arbitrary number of arrays (one per process), we can use the function np.array_split, which splits an array and returns a list of smaller arrays.\nNote: Below, the number 784 corresponds to 28x28, i.e. the number of pixels for each image.",
"# Load and split the set of test images\ntest_images = np.load('data/test_images.npy')\nsplit_arrays_list = np.array_split( test_images, 4 )\n\n# Print the corresponding shape\nprint( 'Shape of the original array:' )\nprint( test_images.shape )\nprint('Shape of the splitted arrays:')\nfor array in split_arrays_list:\n print( array.shape )",
"Assignement: in the code below, use the function array_split to split test_images between an arbitrary number of processes, and have each process pick their own small array.\nNote: Within the script, communicator.size gives the number of processes that have been spawn by mpirun.",
"%%file parallel_script.py\n\nfrom classification import nearest_neighbor_prediction\nimport numpy as np\nfrom mpi4py.MPI import COMM_WORLD as communicator\n\n# Load data\ntrain_images = np.load('./data/train_images.npy')\ntrain_labels = np.load('./data/train_labels.npy')\ntest_images = np.load('./data/test_images.npy')\n\n# Assignement: use the function np.array_split the data `test_images` among the processes\n# Have each process select their own small array.\nsmall_test_images = #.....\n\n# Predict the results and gather it on rank 0\nsmall_test_labels = nearest_neighbor_prediction(small_test_images, train_images, train_labels)\n\n# Assignement: gather the labels on one process and have it write it to a file\n# Hint: you can use np.hstack to merge a list of arrays into a single array, \n# and np.save to save an array to a file.\n\n%%time\n! mpirun -np 4 python parallel_script.py",
"Check the results\nFinally we can check that the results are valid.",
"# Load the data from the file\ntest_images = np.load('data/test_images.npy')\ntest_labels_parallel = np.load('data/test_labels_parallel.npy')\n\n# Define function to have a look at the data\ndef show_random_digit( images, labels=None ):\n \"\"\"\"Show a random image out of `images`, \n with the corresponding label if available\"\"\"\n i = np.random.randint(len(images))\n image = images[i].reshape((28, 28))\n plt.imshow( image, cmap='Greys' )\n if labels is not None:\n plt.title('Label: %d' %labels[i])\n\nshow_random_digit( test_images, test_labels_parallel )",
"Next tutorial\nLet us now see how to perform the same tasks with concurrent.futures."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
josephcslater/array_to_latex
|
Examples.ipynb
|
mit
|
[
"Some brief examples on using array_to_latex to get nicely formatted latex versions of your arrays.",
"import pandas as pd\nimport numpy as np\nimport array_to_latex as a2l\n%load_ext autoreload\n%autoreload 2",
"Let's create an array and output it as $\\LaTeX$. We are going to use Python 3.0 string formatting \nThe following shows a float style output with 2 decimal places.",
"A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]])\na2l.to_ltx(A, frmt = '{:.2f}', arraytype = 'array', mathform = True)\n",
"Design is to print results to the screen with no output being available. However, new usages have highlighted the need to enable outputs and hide printing. Thus the addition of the print_out boolean to turn off printing but instead return an output.",
"A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]])\nlatex_code = a2l.to_ltx(A, frmt = '{:.2f}', arraytype = 'array', mathform = True, print_out=False)",
"We can still print the returned formatted latex code:",
"print(latex_code)",
"One can use a number before the decimal place. This defines the minimum width to use for the number, padding with spaces at the beginning. \nSince the largest number needs 6 characters (3 before the decimal, the decimal, and 2 after), putting a 6 in this location makes everything line up nicely. This would also be a nice default to code up.",
"A = np.array([[1.23456, 23.45678],[456.23+1j, 8.239521]])\na2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array', mathform = True)",
"Let's put it in exponential form.",
"a2l.to_ltx(A, frmt = '{:.2e}', arraytype = 'array', mathform=False)",
"That's not how humans/textbooks write exponential form. Let's use mathform=True (which is the default).",
"a2l.to_ltx(A, frmt = '{:6.2e}', arraytype = 'array', mathform=True)",
"It's easier to make these columns line up than when using f format styling- so I believe it is working. \nOf course, the typeset $\\LaTeX$ will look better than the raw $\\LaTeX$.\nOne can also capture the string in the output. \nIt will also do column and row-vectors. It's the array is 1-D, the default is a row.",
"A = np.array([1.23456, 23.45678, 456.23, 8.239521])\na2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')\n\nA = np.array([[1.23456, 23.45678, 456.23, 8.239521]])\na2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')\n\nA = np.array([[1.23456, 23.45678, 456.23, 8.239521]]).T\na2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')",
"We can use the lambda function method to create a function with personalized defaults. This makes for a much more compact call, and one that can be adjusted for an entire session.",
"to_tex = lambda A : a2l.to_ltx(A, frmt = '{:6.2e}', arraytype = 'array', mathform=True)\nto_tex(A)\n\nto_tex = lambda A : a2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array', mathform=True)\nto_tex(A)",
"Panda DataFrames\nYou can also produce tables or math arrays from Panda DataFrames.",
"df = pd.DataFrame(np.random.randint(low=0, high=10, size=(5, 5)),\n... columns=['a', 'b', 'c', 'd', 'e'])\n\ndf\n\nnp.array(df)\n\na2l.to_ltx(df, arraytype='bmatrix')\n\na2l.to_ltx(df, arraytype='tabular')\n\ndf2 = pd.DataFrame(['cat', 'dog', 'bird', 'snake', 'honey badger'], columns=['pets'])\ndf2\n\ndf_mixed = df.join(df2)\ndf_mixed\n\na2l.to_ltx(df_mixed, arraytype='tabular')\n\nA = np.array([[1.23456, 23.45678],[456.23, 8.239521]])\na2l.to_ltx(A, frmt = '{:6.2f}', arraytype = 'array')\n\nA = np.array([[1.23456, 23.45678],[456.72+392.71j, 8.239521]])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
vravishankar/Jupyter-Books
|
Iterators and Generators.ipynb
|
mit
|
[
"Iterators\nIn Python most container objects can be looped using a for statement.\nFor example we can use for statement for looping over a list.",
"for i in [1,2,3]:\n print(i)",
"If we use it with a string, it loops over its characters.",
"for ch in 'test':\n print(ch)",
"If use it with a dictionary, it loops over its keys",
"for k in {1:'test1',2:'test'}:\n print(k)",
"So there are many types of objects which can be used with a for loop. These are called iterable objects.\nThere are many functions which consume these iterables.",
"\",\".join([\"a\",\"b\",\"c\"])\n\n\",\".join(('this','is','a','test'))\n\n\",\".join({'key1':'value','key2':'value2'})",
"Iteration Protocol",
"x = iter([1,2,3])\nprint(x)\nprint(next(x))\nprint(next(x))\nprint(next(x))\nprint(next(x)) # <-- will create an error",
"Having seen the mechanics behind the iterator protocol, it is easy to add iterator behavior to your classes. Define an _iter_() method which returns an object with a _next_() method. If the class defines _next_(), then _iter_() can just return self:",
"class Reverse:\n \"\"\"Iterator for looping over a sequence backwards.\"\"\"\n def __init__(self, data):\n self.data = data\n self.index = len(data)\n\n def __iter__(self):\n return self\n\n def __next__(self):\n if self.index == 0:\n raise StopIteration\n self.index = self.index - 1\n return self.data[self.index]\n\nrev = Reverse('spam')\nfor i in rev:\n print(i)",
"Generators\nGenerators are a simple and powerful tool for creating iterators. \nThey are written like regular functions but use the yield statement whenever they want to return data. Each time next() is called on it, the generator resumes where it left off (it remembers all the data values and which statement was last executed). An example shows that generators can be trivially easy to create:",
"def reverse(data):\n for index in range(len(data)-1, -1, -1):\n yield data[index]\n\nfor ch in reverse('shallow'):\n print(ch)",
"Anything that can be done with generators can also be done with class-based iterators as described in the previous section. What makes generators so compact is that the iter() and next() methods are created automatically.\nAnother key feature is that the local variables and execution state are automatically saved between calls. This made the function easier to write and much more clear than an approach using instance variables like self.index and self.data.\nIn addition to automatic method creation and saving program state, when generators terminate, they automatically raise StopIteration. In combination, these features make it easy to create iterators with no more effort than writing a regular function.\nThe following examples shows how generators work.",
"def samplegen():\n print(\"begin\")\n for i in range(3):\n print(\"before yield\", i)\n yield i\n print(\"after yield\", i)\n print(\"end\")\n \nf = samplegen()\nprint(next(f))\nprint(next(f))\nprint(next(f))\nprint(next(f))",
"Generator Expressions\nGenerator Expressions are generator version of list comprehensions. They look like list comprehensions, but returns a generator back instead of a list.",
"a = (x * x for x in range(10))\nsum(a)",
"Generator expressions are more compact but less versatile than full generator definitions and tend to be more memory friendly than equivalent list comprehensions.",
"xvec = [5,16,7]\nyvec = [4,12,18]\n\nsum(x * y for x,y in zip(xvec,yvec))\n\ndata = 'golf'\nlist(data[i] for i in range(len(data)-1, -1, -1))\n\n# unique_words = set(word for line in page for word in line.split())\n\n# valedictorian = max((student.gpa, student.name) for student in graduates)",
"Note that generators provide another way to deal with infinity, for example:",
"from time import gmtime, strftime\ndef myGen():\n while True:\n yield strftime(\"%a, %d %b %Y %H:%M:%S +0000\", gmtime()) \n\nmyGeneratorInstance = myGen()\nnext(myGeneratorInstance)\n\nnext(myGeneratorInstance)",
"Use of Generators\n1. Easy to Implement\nGenerators can be implemented in a clear and concise way as compared to their iterator class counterpart.",
"# Iterator Class\nclass PowTwo:\n def __init__(self, max = 0):\n self.max = max\n def __iter__(self):\n self.n = 0\n return self\n def __next__(self):\n if self.n > self.max:\n raise StopIteration\n \n result = 2 ** self.n\n self.n += 1\n return result",
"This was lengthy. Now lets do the same using a generator function.",
"def PowTwoGen(max = 0):\n n = 0\n while n < max:\n yield 2 ** n\n n += 1",
"Since generators keep track of details automatically, it was concise and much cleaner in implementation.\n2. Memory Efficient\nA normal function to return a sequence will create the entire sequence in memory before returning the result. This is an overkill if the number of items in the sequence is very large.\n3. Represent Infinite Stream\nGenerators are excellent medium to represent an infinite stream of data. Infinite streams cannot be stored in memory and since generators produce only one item at a time, it can represent infinite stream of data.",
"def all_event():\n n = 0\n while True:\n yield n\n n += 2",
"4. Pipelining Generators\nGenerators can be used to pipeline a series of operations.\nIf we are analysing a log file and if the log file has a 3rd column that keeps track of the ips every hour and we want to sum it to find unique ips in last 5 months.",
"with open('sells.log') as file:\n ip_col = (line[3] for line in file)\n per_hr = (int(x) for x in ip_col if x != 'N/A')\n print(\"IPs =\", sum(per_hr))",
"Using Itertools",
"import itertools\nhorses = [1,2,3,4]\nraces = itertools.permutations(horses)\nprint(list(races))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
joommf/tutorial
|
workshops/Durham/tutorial3_dynamics.ipynb
|
bsd-3-clause
|
[
"Tutorial 3 - Dynamics\nThe dynamics of magnetisation field $\\mathbf{m}$ is governed by the Landau-Lifshitz-Gilbert (LLG) equation\n$$\\frac{d\\mathbf{m}}{dt} = \\underbrace{-\\gamma_{0}(\\mathbf{m} \\times \\mathbf{H}\\text{eff})}\\text{precession} + \\underbrace{\\alpha\\left(\\mathbf{m} \\times \\frac{d\\mathbf{m}}{dt}\\right)}_\\text{damping},$$\nwhere $\\gamma_{0}$ is the gyromagnetic ratio, $\\alpha$ is the Gilbert damping, and $\\mathbf{H}_\\text{eff}$ is the effective field. It consists of two terms: precession and damping. In this exercise, we will explore some basic properties of this equation to understand how to define it in simulations.\nWe will study the simplest \"zero-dimensional\" case - macrospin. In the first step, after we import necessary modules (oommfc and discretisedfield), we create the mesh which consists of a single finite difference cell.",
"import oommfc as oc\nimport discretisedfield as df\n%matplotlib inline\n\n# Define macro spin mesh (i.e. one discretisation cell).\np1 = (0, 0, 0) # first point of the mesh domain (m)\np2 = (1e-9, 1e-9, 1e-9) # second point of the mesh domain (m)\ncell = (1e-9, 1e-9, 1e-9) # discretisation cell size (m)\nmesh = oc.Mesh(p1=p1, p2=p2, cell=cell)",
"Now, we can create a micromagnetic system object.",
"system = oc.System(name=\"macrospin\")",
"Let us assume we have a simple Hamiltonian which consists of only Zeeman energy term\n$$\\mathcal{H} = -\\mu_{0}M_\\text{s}\\mathbf{m}\\cdot\\mathbf{H},$$\nwhere $M_\\text{s}$ is the saturation magnetisation, $\\mu_{0}$ is the magnetic constant, and $\\mathbf{H}$ is the external magnetic field. For more information on defining micromagnetic Hamiltonians, please refer to the Hamiltonian tutorial. We apply the external magnetic field with magnitude $H = 2 \\times 10^{6} \\,\\text{A}\\,\\text{m}^{-1}$ in the positive $z$ direction.",
"H = (0, 0, 2e6) # external magnetic field (A/m)\nsystem.hamiltonian = oc.Zeeman(H=H)",
"In the next step we can define the system's dynamics. Let us assume we have $\\gamma_{0} = 2.211 \\times 10^{5} \\,\\text{m}\\,\\text{A}^{-1}\\,\\text{s}^{-1}$ and $\\alpha=0.1$.",
"gamma = 2.211e5 # gyromagnetic ratio (m/As)\nalpha = 0.1 # Gilbert damping\n\nsystem.dynamics = oc.Precession(gamma=gamma) + oc.Damping(alpha=alpha)",
"To check what is our dynamics equation:",
"system.dynamics",
"Before we start running time evolution simulations, we need to initialise the magnetisation. In this case, our magnetisation is pointing in the positive $x$ direction with $M_\\text{s} = 8 \\times 10^{6} \\,\\text{A}\\,\\text{m}^{-1}$. The magnetisation is defined using Field class from the discretisedfield package we imported earlier.",
"initial_m = (1, 0, 0) # vector in x direction\nMs = 8e6 # magnetisation saturation (A/m)\n\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)",
"Now, we can run the time evolution using TimeDriver for $t=0.1 \\,\\text{ns}$ and save the magnetisation configuration in $n=200$ steps.",
"td = oc.TimeDriver()\n\ntd.drive(system, t=0.1e-9, n=200)",
"How different system parameters vary with time, we can inspect by showing the system's datatable.",
"system.dt",
"However, in our case it is much more informative if we plot the time evolution of magnetisation $z$ component $m_{z}(t)$.",
"system.dt.plot(\"t\", \"mz\");",
"Similarly, we can plot all three magnetisation components",
"system.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"We can see that after some time the macrospin aligns parallel to the external magnetic field in the $z$ direction. We can explore the effect of Gilbert damping $\\alpha = 0.2$ on the magnetisation dynamics.",
"system.dynamics.damping.alpha = 0.2\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 1\nBy looking at the previous example, explore the magnetisation dynamics for $\\alpha=0.005$ in the following code cell.",
"# insert missing code here.\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 2\nRepeat the simulation with $\\alpha=0.1$ and H = (0, 0, -2e6).",
"# insert missing code here.\nsystem.m = df.Field(mesh, value=initial_m, norm=Ms)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);",
"Exercise 3\nKeep using $\\alpha=0.1$. \nChange the field from H = (0, 0, -2e6) to H = (0, -1.41e6, -1.41e6), and plot\n$m_x(t)$, $m_y(t)$ and $m_z(t)$ as above. Can you explain the (initially non-intuitive) output?",
"system.hamiltonian.zeeman.H = (0, -1.41e6, -1.41e6)\n\ntd.drive(system, t=0.1e-9, n=200)\n\nsystem.dt.plot(\"t\", [\"mx\", \"my\", \"mz\"]);"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
JAmarel/Phys202
|
Interpolation/.ipynb_checkpoints/InterpolationEx01-checkpoint.ipynb
|
mit
|
[
"Interpolation Exercise 1",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport numpy as np\n\nfrom scipy.interpolate import interp1d",
"2D trajectory interpolation\nThe file trajectory.npz contains 3 Numpy arrays that describe a 2d trajectory of a particle as a function of time:\n\nt which has discrete values of time t[i].\nx which has values of the x position at those times: x[i] = x(t[i]).\nx which has values of the y position at those times: y[i] = y(t[i]).\n\nLoad those arrays into this notebook and save them as variables x, y and t:",
"dictionary = np.load('trajectory.npz')\n\ny = dictionary.items()[0][1]\nt = dictionary.items()[1][1]\nx = dictionary.items()[2][1]\n\nassert isinstance(x, np.ndarray) and len(x)==40\nassert isinstance(y, np.ndarray) and len(y)==40\nassert isinstance(t, np.ndarray) and len(t)==40",
"Use these arrays to create interpolated functions $x(t)$ and $y(t)$. Then use those functions to create the following arrays:\n\nnewt which has 200 points between ${t_{min},t_{max}}$.\nnewx which has the interpolated values of $x(t)$ at those times.\nnewy which has the interpolated values of $y(t)$ at those times.",
"x_approx = interp1d(t, x, kind='cubic')\ny_approx = interp1d(t, y, kind='cubic')\n\nnewt = np.linspace(0,4,200)\nnewx = x_approx(newt)\nnewy = y_approx(newt)\n\nassert newt[0]==t.min()\nassert newt[-1]==t.max()\nassert len(newt)==200\nassert len(newx)==200\nassert len(newy)==200",
"Make a parametric plot of ${x(t),y(t)}$ that shows the interpolated values and the original points:\n\nFor the interpolated points, use a solid line.\nFor the original points, use circles of a different color and no line.\nCustomize you plot to make it effective and beautiful.",
"plt.figure(figsize=(12,8));\nplt.plot(x, y, marker='o', linestyle='', label='Original Data')\nplt.plot(newx, newy, label='Interpolated Curve');\nplt.legend();\nplt.xlabel('X(t)');\nplt.ylabel('Y(t)');\nplt.title('Position as a Function of Time');\n\nassert True # leave this to grade the trajectory plot"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SciTools/courses
|
course_content/iris_course/4.Joining_Cubes_Together.ipynb
|
gpl-3.0
|
[
"%matplotlib inline",
"Iris introduction course\n4. Joining Cubes Together\nLearning outcome: by the end of this section, you will be able to apply Iris functionality to combine multiple Iris cubes into a new larger cube.\nDuration: 30 minutes\nOverview:<br>\n4.1 Merge<br>\n4.2 Concatenate<br>\n4.3 Exercise<br>\n4.4 Summary of the Section\nSetup",
"import iris\nimport numpy as np",
"4.1 Merge<a id='merge'></a>\nWhen Iris loads data it tries to reduce the number of cubes returned by collecting together multiple fields with\nshared metadata into a single multidimensional cube. In Iris, this is known as merging.\nIn order to merge two cubes, they must be identical in everything but a scalar dimension, which goes on to become a new data dimension.\nThe diagram below shows how three 2D cubes, which have the same x and y coordinates but different z coordinates, are merged together to create a single 3D cube.\n\nThe iris.load_raw function can be used as a diagnostic tool to load the individual \"fields\" that Iris identifies in a given set of filenames before any merge takes place.\nLet's compare the behaviour of iris.load_raw and the behaviour of the general purpose loading function, iris.load\nFirst, we load in a file using iris.load:",
"fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')\ncubes = iris.load(fname)\n\nprint(cubes)",
"As you can see iris.load returns a CubeList containing a single 3D cube.\nNow let's try loading in the file using iris.load_raw:",
"fname = iris.sample_data_path('GloSea4', 'ensemble_008.pp')\nraw_cubes = iris.load_raw(fname)\n\nprint(raw_cubes)",
"This time, iris has returned six 2D cubes. \nPP files usually contain multiple 2D fields. iris.load_raw has returned a 2D cube for each of these fields, whereas iris.load has merged the cubes together then returned the resulting 3D cube.\nWhen we look in detail at the raw 2D cubes, we find that they are identical in every coordinate except for the scalar forecast_period and time coordinates:",
"print(raw_cubes[0])\nprint('--' * 40)\nprint(raw_cubes[1])",
"To merge a CubeList, we can use the merge or merge_cube methods. \nThe merge method will try to merge together the cubes in the CubeList in order to return a CubeList of as few cubes as possible.\nThe merge_cube method will do the same as merge but will return a single Cube. If the initial CubeList cannot be merged into a single Cube, merge_cube will raise an error, giving a helpful message explaining why the cubes cannot be merged.\nLet's merge the raw 2D cubes we previously loaded in:",
"merged_cubelist = raw_cubes.merge()\nprint(merged_cubelist)",
"merge has returned a cubelist of a single 3D cube.",
"merged_cube = merged_cubelist[0]\nprint(merged_cube)",
"<div class=\"alert alert-block alert-warning\">\n <b><font color='brown'>Exercise: </font></b>\n <p>Try merging <b><font face=\"courier\" color=\"black\">raw_cubes</font></b> using the <b><font face=\"courier\" color=\"black\">merge_cube</font></b> method.</p>\n</div>",
"#\n# edit space for user code ...\n#",
"When we look in more detail at our merged cube, we can see that the time coordinate has become a new dimension, as well as gaining another forecast_period auxiliary coordinate:",
"print(merged_cube.coord('time'))\nprint(merged_cube.coord('forecast_period'))",
"Identifying Merge Problems\nIn order to avoid the Iris merge functionality making inappropriate assumptions about the data, merge is strict with regards to the uniformity of the incoming cubes.\nFor example, if we load the fields from two ensemble members from the GloSea4 model sample data, we see we have 12 fields before any merge takes place:",
"fname = iris.sample_data_path('GloSea4', 'ensemble_00[34].pp')\ncubes = iris.load_raw(fname, 'surface_temperature')\nprint(len(cubes))",
"If we try to merge these 12 cubes we get 2 cubes rather than one:",
"incomplete_cubes = cubes.merge()\nprint(incomplete_cubes)",
"When we look in more detail at these two cubes, what is different between the two? (Hint: One value changes, another is completely missing)",
"print(incomplete_cubes[0])\nprint('--' * 40)\nprint(incomplete_cubes[1])",
"As mentioned earlier, if merge_cube cannot merge the given CubeList to return a single Cube, it will raise a helpful error message identifying the cause of the failiure.\n<div class=\"alert alert-block alert-warning\">\n <b><font color=\"brown\">Exercise: </font></b><p>Try merging the loaded <b><font face=\"courier\" color=\"black\">cubes</font></b> using <b><font face=\"courier\" color=\"black\">merge_cube</font></b> rather than <b><font face=\"courier\" color=\"black\">merge</font></b>.</p>\n</div>",
"#\n# edit space for user code ...\n#",
"By inspecting the cubes themselves or using the error message raised when using merge_cube we can see that some cubes are missing the realization coordinate.\nBy adding the missing coordinate, we can trigger a merge of the 12 cubes into a single cube, as expected:",
"for cube in cubes:\n if not cube.coords('realization'):\n cube.add_aux_coord(iris.coords.DimCoord(np.int32(3),\n 'realization'))\n\nmerged_cube = cubes.merge_cube()\nprint(merged_cube)",
"4.2 Concatenate<a id='concatenate'></a>\nWe have seen that merge combines a list of cubes with a common scalar coordinate to produce a single cube with a new dimension created from these scalar values.\nBut what happens if you try to combine cubes along a common dimension.\nLet's create a CubeList with two cubes that have been indexed along the time dimension of the original cube.",
"fname = iris.sample_data_path('A1B_north_america.nc')\ncube = iris.load_cube(fname)\n\ncube_1 = cube[:10]\ncube_2 = cube[10:20]\ncubes = iris.cube.CubeList([cube_1, cube_2])\nprint(cubes)",
"These cubes should be able to be joined together; after all, they have both come from the same original cube!\nHowever, merge returns two cubes, suggesting that these two cubes cannot be merged:",
"print(cubes.merge())",
"Merge cannot be used to combine common non-scalar coordinates. Instead we must use concatenate.\nConcatenate joins together (\"concatenates\") common non-scalar coordinates to produce a single cube with the common dimension extended.\nIn the below diagram, we see how three 3D cubes are concatenated together to produce a 3D cube with an extended t dimension.\n\nTo concatenate a CubeList, we can use the concatenate or concatenate_cube methods. \nSimilar to merging, concatenate will return a CubeList of as few cubes as possible, whereas concatenate_cube will attempt to return a cube, raising an error with a helpful message where this is not possible.\nIf we apply concatenate to our cubelist, we will see that it returns a CubeList with a single Cube:",
"print(cubes.concatenate())",
"<div class=\"alert alert-block alert-warning\">\n <b><font color='brown'>Exercise: </font></b>\n <p>Try concatenating <b><font face=\"courier\" color=\"black\">cubes</font></b> using the <b><font face=\"courier\" color=\"black\">concatenate_cube</font></b> method.\n</div>",
"#\n# edit space for user code ...\n#",
"4.3 Section Review Exercise<a id='exercise'></a>\nThe following exercise is designed to give you experience of solving issues that prevent a merge or concatenate from taking place.\nPart 1\nIdentify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.1.*.nc files from being joined together into a single cube.\na) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.1.*.nc'. Store the cubes in a variable called raw_cubes.\nHint: Constraints can be given to the load_raw function as you would with the other load functions.",
"# EDIT for user code ...\n\n# SAMPLE SOLUTION : Un-comment and execute the following to see a possible solution ...\n# %load solutions/iris_exercise_4.3.1a",
"b) Try merging the loaded cubes into a single cube. Why does this raise an error?",
"# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_4.3.1b",
"c) Fix the cubes such that they can be merged into a single cube. \nHint: You can use del to remove an item from a dictionary.",
"# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_4.3.1c",
"Part 2\nIdentify and resolve the issue preventing the air_potential_temperature cubes from the resources/merge_exercise.5.*.nc files from being joined together into a single cube.\na) Use iris.load_raw to load in the air_potential_temperature cubes from the files 'resources/merge_exercise.5.*.nc'. Store the cubes in a variable called raw_cubes.",
"# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_4.3.2a",
"b) Join the cubes together into a single cube. Should these cubes be merged or concatenated?",
"# user code ...\n\n# SAMPLE SOLUTION\n# %load solutions/iris_exercise_4.3.2b",
"4.4 Section Summary: Joining Cubes Together<a id='summary'></a>\nIn this section we learnt:\n* Merging and Concatenating can be used to join cubes into a larger combined dataset\n* Merging combines cubes along a dimension to produce a cube with an extra data dimension\n* Concatenating produces a cube with the same dimensionality as the input cubes"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
quantumlib/ReCirq
|
docs/benchmarks/rabi_oscillations.ipynb
|
apache-2.0
|
[
"Copyright 2020 The Cirq Developers",
"# @title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Rabi oscillation experiment\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://quantumai.google/cirq/experiments/benchmarks/rabi_oscillations.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/quantumai_logo_1x.png\" />View on QuantumAI</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/quantumlib/ReCirq/blob/master/docs/benchmarks/rabi_oscillations.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/colab_logo_1x.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/quantumlib/ReCirq/blob/master/docs/benchmarks/rabi_oscillations.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/github_logo_1x.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/ReCirq/docs/benchmarks/rabi_oscillations.ipynb\"><img src=\"https://quantumai.google/site-assets/images/buttons/download_icon_1x.png\" />Download notebook</a>\n </td>\n</table>",
"try:\n import cirq\n import recirq\nexcept ImportError:\n !pip install -U pip\n !pip install --quiet cirq\n !pip install --quiet git+https://github.com/quantumlib/ReCirq\n import cirq\n import recirq\n\nimport numpy as np\nimport cirq_google",
"In this experiment, you are going to use Cirq to check that rotating a qubit by an increasing angle, and then measuring the qubit, produces Rabi oscillations. This requires you to do the following things:\n\nPrepare the $|0\\rangle$ state.\nRotate by an angle $\\theta$ around the $X$ axis.\nMeasure to see if the result is a 1 or a 0.\nRepeat steps 1-3 $k$ times.\nReport the fraction of $\\frac{\\text{Number of 1's}}{k}$\nfound in step 3.\n\n1. Getting to know Cirq\nCirq emphasizes the details of implementing quantum algorithms on near term devices.\nFor example, when you work on a qubit in Cirq you don't operate on an unspecified qubit that will later be mapped onto a device by a hidden step.\nInstead, you are always operating on specific qubits at specific locations that you specify.\nSuppose you are working with a 54 qubit Sycamore chip.\nThis device is included in Cirq by default.\nIt is called cirq_google.Sycamore, and you can see its layout by printing it.",
"working_device = cirq_google.Sycamore\nprint(working_device)",
"For this experiment you only need one qubit and you can just pick whichever one you like.",
"my_qubit = cirq.GridQubit(5, 6)",
"Once you've chosen your qubit you can build circuits that use it.",
"from cirq.contrib.svg import SVGCircuit\n\n# Create a circuit with X, Ry(pi/2) and H.\nmy_circuit = cirq.Circuit(\n # Rotate the qubit pi/2 radians around the X axis.\n cirq.rx(np.pi / 2).on(my_qubit),\n # Measure the qubit.\n cirq.measure(my_qubit, key=\"out\"),\n)\nSVGCircuit(my_circuit)",
"Now you can simulate sampling from your circuit using cirq.Simulator.",
"sim = cirq.Simulator()\nsamples = sim.sample(my_circuit, repetitions=10)",
"You can also get properties of the circuit, such as the density matrix of the circuit's output or the state vector just before the terminal measurement.",
"state_vector_before_measurement = sim.simulate(my_circuit[:-1])\nsampled_state_vector_after_measurement = sim.simulate(my_circuit)\n\nprint(f\"State before measurement:\")\nprint(state_vector_before_measurement)\nprint(f\"State after measurement:\")\nprint(sampled_state_vector_after_measurement)",
"You can also examine the outputs from a noisy environment.\nFor example, an environment where 10% depolarization is applied to each qubit after each operation in the circuit:",
"noisy_sim = cirq.DensityMatrixSimulator(noise=cirq.depolarize(0.1))\nnoisy_post_measurement_state = noisy_sim.simulate(my_circuit)\nnoisy_pre_measurement_state = noisy_sim.simulate(my_circuit[:-1])\n\nprint(\"Noisy state after measurement:\" + str(noisy_post_measurement_state))\nprint(\"Noisy state before measurement:\" + str(noisy_pre_measurement_state))",
"2. Parameterized Circuits and Sweeps\nNow that you have some of the basics end to end, you can create a parameterized circuit that rotates by an angle $\\theta$:",
"import sympy\n\ntheta = sympy.Symbol(\"theta\")\n\nparameterized_circuit = cirq.Circuit(\n cirq.rx(theta).on(my_qubit), cirq.measure(my_qubit, key=\"out\")\n)\nSVGCircuit(parameterized_circuit)",
"In the above block you saw that there is a sympy.Symbol that you placed in the circuit. Cirq supports symbolic computation involving circuits. What this means is that when you construct cirq.Circuit objects you can put placeholders in many of the classical control parameters of the circuit which you can fill with values later on.\nNow if you wanted to use cirq.simulate or cirq.sample with the parameterized circuit you would also need to specify a value for theta.",
"sim.sample(parameterized_circuit, params={theta: 2}, repetitions=10)",
"You can also specify multiple values of theta, and get samples back for each value.",
"sim.sample(parameterized_circuit, params=[{theta: 0.5}, {theta: np.pi}], repetitions=10)",
"Cirq has shorthand notation you can use to sweep theta over a range of values.",
"sim.sample(\n parameterized_circuit,\n params=cirq.Linspace(theta, start=0, stop=np.pi, length=5),\n repetitions=5,\n)",
"The result value being returned by sim.sample is a pandas.DataFrame object.\nPandas is a common library for working with table data in python.\nYou can use standard pandas methods to analyze and summarize your results.",
"import pandas\n\nbig_results = sim.sample(\n parameterized_circuit,\n params=cirq.Linspace(theta, start=0, stop=np.pi, length=20),\n repetitions=10_000,\n)\n\n# big_results is too big to look at. Plot cross tabulated data instead.\npandas.crosstab(big_results.theta, big_results.out).plot()",
"3. The ReCirq experiment\nReCirq comes with a pre-written Rabi oscillation experiment recirq.benchmarks.rabi_oscillations, which performs the steps outlined at the start of this tutorial to create a circuit that exhibits Rabi Oscillations or Rabi Cycles. \nThis method takes a cirq.Sampler, which could be a simulator or a network connection to real hardware, as well as a qubit to test and two iteration parameters, num_points and repetitions. It then runs repetitions many experiments on the provided sampler, where each experiment is a circuit that rotates the chosen qubit by some $\\theta$ Rabi angle around the $X$ axis (by applying an exponentiated $X$ gate). The result is a sequence of the expected probabilities of the chosen qubit at each of the Rabi angles.",
"import datetime\nfrom recirq.benchmarks import rabi_oscillations\n\nresult = rabi_oscillations(\n sampler=noisy_sim, qubit=my_qubit, num_points=50, repetitions=10000\n)\nresult.plot()",
"Notice that you can tell from the plot that you used the noisy simulator you defined earlier.\nYou can also tell that the amount of depolarization is roughly 10%.\n4. Exercise: Find the best qubit\nAs you have seen, you can use Cirq to perform a Rabi oscillation experiment.\nYou can either make the experiment yourself out of the basic pieces made available by Cirq, or use the prebuilt experiment method.\nNow you're going to put this knowledge to the test.\nThere is some amount of depolarizing noise on each qubit.\nYour goal is to characterize every qubit from the Sycamore chip using a Rabi oscillation experiment, and find the qubit with the lowest noise according to the secret noise model.",
"import hashlib\n\n\nclass SecretNoiseModel(cirq.NoiseModel):\n def noisy_operation(self, op):\n # Hey! No peeking!\n q = op.qubits[0]\n v = hashlib.sha256(str(q).encode()).digest()[0] / 256\n yield cirq.depolarize(v).on(q)\n yield op\n\n\nsecret_noise_sampler = cirq.DensityMatrixSimulator(noise=SecretNoiseModel())\n\nq = cirq_google.Sycamore.qubits[3]\nprint(\"qubit\", repr(q))\nrabi_oscillations(sampler=secret_noise_sampler, qubit=q).plot()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zhenxinlei/SpringSecurity1
|
ex2/ex2.ipynb
|
epl-1.0
|
[
"Exercise 2 - Logistic Regression\nbuild a logistic regression model to predict whether a student gets admitted into a university",
"import csv\nimport pandas as pd\nimport numpy as np\nfrom numpy import genfromtxt\n\ndata = pd.read_csv('./ex2data1.csv', delimiter=',',\n names=['exam1','exam2', 'amitted'])\ndata.head()",
"raw plot data",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\namitted = data[data['amitted']==1]\nrejected = data[data['amitted']==0]\n\nfig, ax = plt.subplots(figsize=(12,8))\nax.scatter(amitted['exam1'], amitted['exam2'], s=50, c='b', marker='o', label='Admitted')\nax.scatter(rejected['exam1'], rejected['exam2'], s=50, c='r', marker='x', label='Not Admitted')\nax.legend()\nax.set_xlabel('Exam 1 Score')\nax.set_ylabel('Exam 2 Score')",
"Sigmoid function\nhypothesis function \n$$ h_{\\theta}(x)= g(\\theta^Tx)$$\n$$ g(z) =\\frac{1}{1+e^{-z}}$$",
"def sigmoid(z):\n return 1/(1+np.exp(-z))\n\n# sanity check \nnums = np.arange(-10,10,step =1 )\n\nfig, ax = plt.subplots()\nax.plot(nums, sigmoid(nums))",
"cost function\n$$J(\\theta) = \\frac{1}{m} \\sum_{i=1}^{m}[-y^{(i)} log(h_{\\theta}(x^{(i)})) -(1-y^{(i)})log(1-h_{\\theta}(x^{(i)}))]$$\nProof:\nprobability when y =1 or 0\n $$P(y=1 \\mid x, \\theta)= h_{\\theta}(x)$$ \n $$P(y=0 \\mid x, \\theta)= 1- h_{\\theta}(x)$$\ncompact above two \n $$P(y \\mid x, \\theta)= h_{\\theta}(x)^y ( 1- h_{\\theta}(x))^{1-y}$$\nLikelihood of $\\theta$ is \n$$L(\\theta)=P(\\vec{y}\\mid x, \\theta ) = \\prod_{i}P(y^{(i)}\\mid x^{(i)},\\theta)$$\n $$=\\prod_{i} (h_{\\theta}(x^{(i)})^{y^{(i)}} ( 1- h_{\\theta}(x^{(i)}))^{1-y^{(i)}})$$\nLog Likelihood \n$$l(\\theta) = log (L(\\theta) )$$\n$$=\\sum_{i=1}^m y^{(i)}log(h_{\\theta}(x^{(i)})) + (1-y^{(i)})log(1-h_{\\theta}(x^{(i)}))$$\nTo max log likelihood, we use \nUpdate $\\theta$ with step $\\alpha$ (learning rate ) to max $l(\\theta)$\n$$\\theta_j := \\theta_j + \\alpha \\triangledown_\\theta l(\\theta) $$\nand\n$$\\frac{\\partial }{\\partial \\theta_j}l(\\theta) = \\sum_{i=1}^m (y^{(i)}-h_{\\theta}(x^{(i)}))x_j^{(i)}$$",
"def cost(theta,x,y):\n theta = np.matrix(theta)\n x= np.matrix(x)\n y = np.matrix(y)\n \n first = np.multiply(-y, np.log(sigmoid(x*theta.T)))\n second = np.multiply( (1-y), np.log(1-sigmoid(x*theta.T)))\n \n return np.sum(first-second)/len(x)\n\n# add a ones column - this makes the matrix multiplication work out easier\ndata.insert(0, 'Ones', 1)\n\n# set X (training data) and y (target variable)\ncols = data.shape[1]\nX = data.iloc[:,0:cols-1]\ny = data.iloc[:,cols-1:cols]\n\n# convert to numpy arrays and initalize the parameter array theta\nX = np.array(X.values)\ny = np.array(y.values)\ntheta = np.zeros(3)\n\nX.shape, theta.shape, y.shape\n\ncost(theta, X, y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
eford/rebound
|
ipython_examples/Checkpoints.ipynb
|
gpl-3.0
|
[
"Checkpoints\nYou can easily save and load a REBOUND simulation to a binary file. The binary file includes all information about the particles (mass, position, velocity, etc), as well as the current simulation settings such as time, integrator choise, etc.\nLet's add three particles to REBOUND and save them to a file.",
"import rebound\nsim = rebound.Simulation()\nsim.add(m=1.)\nsim.add(m=1e-6, a=1.)\nsim.add(a=2.)\nsim.integrator = \"whfast\"\nsim.save(\"checkpoint.bin\")\nsim.status()",
"The binary files are small in size and store every floating point number exactly, so you don't have to worry about efficiency or losing precision. You can make lots of checkpoints if you want!\nLet's delete the old REBOUND simulation (that frees up the memory from that simulation) and then read the binary file we just saved.",
"del sim\nsim = rebound.Simulation.from_file(\"checkpoint.bin\")\nsim.status()",
"Note that you will have to re-set any function pointers manually (if you're using them)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ianctse/pvlib-python
|
docs/tutorials/solarposition.ipynb
|
bsd-3-clause
|
[
"solarposition.py tutorial\nThis tutorial needs your help to make it better!\nTable of contents:\n1. Setup\n2. SPA output\n2. Speed tests\nThis tutorial has been tested against the following package versions:\n* pvlib 0.3.0\n* Python 3.5.1\n* IPython 4.1\n* Pandas 0.18.0\nIt should work with other Python and Pandas versions. It requires pvlib > 0.3.0 and IPython > 3.0.\nAuthors:\n* Will Holmgren (@wholmgren), University of Arizona. July 2014, July 2015, March 2016\nSetup",
"import datetime\n\n# scientific python add-ons\nimport numpy as np\nimport pandas as pd\n\n# plotting stuff\n# first line makes the plots appear in the notebook\n%matplotlib inline \nimport matplotlib.pyplot as plt\n# seaborn makes your plots look better\ntry:\n import seaborn as sns\n sns.set(rc={\"figure.figsize\": (12, 6)})\nexcept ImportError:\n print('We suggest you install seaborn using conda or pip and rerun this cell')\n\n# finally, we import the pvlib library\nimport pvlib\n\nimport pvlib\nfrom pvlib.location import Location",
"SPA output",
"tus = Location(32.2, -111, 'US/Arizona', 700, 'Tucson')\nprint(tus)\ngolden = Location(39.742476, -105.1786, 'America/Denver', 1830, 'Golden')\nprint(golden)\ngolden_mst = Location(39.742476, -105.1786, 'MST', 1830, 'Golden MST')\nprint(golden_mst)\nberlin = Location(52.5167, 13.3833, 'Europe/Berlin', 34, 'Berlin')\nprint(berlin)\n\ntimes = pd.date_range(start=datetime.datetime(2014,6,23), end=datetime.datetime(2014,6,24), freq='1Min')\ntimes_loc = times.tz_localize(tus.pytz)\n\ntimes\n\npyephemout = pvlib.solarposition.pyephem(times_loc, tus.latitude, tus.longitude)\nspaout = pvlib.solarposition.spa_python(times_loc, tus.latitude, tus.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nspaout['elevation'].plot(label='spa')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('spa')\nprint(spaout.head())\n\nplt.figure()\npyephemout['elevation'].plot(label='pyephem')\nspaout['elevation'].plot(label='spa')\n(pyephemout['elevation'] - spaout['elevation']).plot(label='diff')\nplt.legend(ncol=3)\nplt.title('elevation')\n\nplt.figure()\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nspaout['elevation'].plot(label='spa')\n(pyephemout['apparent_elevation'] - spaout['elevation']).plot(label='diff')\nplt.legend(ncol=3)\nplt.title('elevation')\n\nplt.figure()\npyephemout['apparent_zenith'].plot(label='pyephem apparent')\nspaout['zenith'].plot(label='spa')\n(pyephemout['apparent_zenith'] - spaout['zenith']).plot(label='diff')\nplt.legend(ncol=3)\nplt.title('zenith')\n\nplt.figure()\npyephemout['apparent_azimuth'].plot(label='pyephem apparent')\nspaout['azimuth'].plot(label='spa')\n(pyephemout['apparent_azimuth'] - spaout['azimuth']).plot(label='diff')\nplt.legend(ncol=3)\nplt.title('azimuth')\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)\nspaout = pvlib.solarposition.spa_python(times.tz_localize(golden.tz), golden.latitude, golden.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nspaout['elevation'].plot(label='spa')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('spa')\nprint(spaout.head())\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(golden.tz), golden.latitude, golden.longitude)\nephemout = pvlib.solarposition.ephemeris(times.tz_localize(golden.tz), golden.latitude, golden.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nephemout['elevation'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('ephem')\nprint(ephemout.head())\n\nloc = berlin\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\nephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nephemout['elevation'].plot(label='ephem')\nephemout['apparent_elevation'].plot(label='ephem apparent')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('ephem')\nprint(ephemout.head())\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem 
apparent')\nephemout['elevation'].plot(label='ephem')\nephemout['apparent_elevation'].plot(label='ephem apparent')\nplt.legend(ncol=2)\nplt.title('elevation')\nplt.xlim(pd.Timestamp('2015-06-28 03:00:00+02:00'), pd.Timestamp('2015-06-28 06:00:00+02:00'))\nplt.ylim(-10,10)\n\nloc = berlin\ntimes = pd.DatetimeIndex(start=datetime.date(2015,3,28), end=datetime.date(2015,3,29), freq='5min')\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\nephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nephemout['elevation'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nplt.figure()\npyephemout['azimuth'].plot(label='pyephem')\nephemout['azimuth'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('azimuth')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('ephem')\nprint(ephemout.head())\n\nloc = berlin\ntimes = pd.DatetimeIndex(start=datetime.date(2015,3,30), end=datetime.date(2015,3,31), freq='5min')\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\nephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nephemout['elevation'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nplt.figure()\npyephemout['azimuth'].plot(label='pyephem')\nephemout['azimuth'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('azimuth')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('ephem')\nprint(ephemout.head())\n\nloc = berlin\ntimes = pd.DatetimeIndex(start=datetime.date(2015,6,28), end=datetime.date(2015,6,29), freq='5min')\n\npyephemout = pvlib.solarposition.pyephem(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\nephemout = pvlib.solarposition.ephemeris(times.tz_localize(loc.tz), loc.latitude, loc.longitude)\n\npyephemout['elevation'].plot(label='pyephem')\npyephemout['apparent_elevation'].plot(label='pyephem apparent')\nephemout['elevation'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('elevation')\n\nplt.figure()\npyephemout['azimuth'].plot(label='pyephem')\nephemout['azimuth'].plot(label='ephem')\nplt.legend(ncol=2)\nplt.title('azimuth')\n\nprint('pyephem')\nprint(pyephemout.head())\nprint('ephem')\nprint(ephemout.head())",
"Speed tests",
"times_loc = times.tz_localize(loc.tz)\n\n%%timeit\n\npyephemout = pvlib.solarposition.pyephem(times_loc, loc.latitude, loc.longitude)\n#ephemout = pvlib.solarposition.ephemeris(times, loc)\n\n%%timeit\n\n#pyephemout = pvlib.solarposition.pyephem(times, loc)\nephemout = pvlib.solarposition.ephemeris(times_loc, loc.latitude, loc.longitude)\n\n%%timeit\n\n#pyephemout = pvlib.solarposition.pyephem(times, loc)\nephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,\n method='nrel_numpy')",
"This numba test will only work properly if you have installed numba.",
"%%timeit\n\n#pyephemout = pvlib.solarposition.pyephem(times, loc)\nephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,\n method='nrel_numba')",
"The numba calculation takes a long time the first time that it's run because it uses LLVM to compile the Python code to machine code. After that it's about 4-10 times faster depending on your machine. You can pass a numthreads argument to this function. The optimum numthreads depends on your machine and is equal to 4 by default.",
"%%timeit\n\n#pyephemout = pvlib.solarposition.pyephem(times, loc)\nephemout = pvlib.solarposition.get_solarposition(times_loc, loc.latitude, loc.longitude,\n method='nrel_numba', numthreads=16)\n\n%%timeit\n\nephemout = pvlib.solarposition.spa_python(times_loc, loc.latitude, loc.longitude,\n how='numba', numthreads=16)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
PySCeS/PyscesToolbox
|
example_notebooks/Symca.ipynb
|
bsd-3-clause
|
[
"notebook_dir = %pwd \nimport pysces \nimport psctb \nimport numpy \nfrom os import path \nfrom IPython.display import display, Image \nfrom sys import platform \n%matplotlib inline ",
"Symca\nSymca is used to perform symbolic metabolic control analysis [3,4] on metabolic pathway models in order to dissect the control properties of these pathways in terms of the different chains of local effects (or control patterns) that make up the total control coefficient values. Symbolic/algebraic expressions are generated for each control coefficient in a pathway which can be subjected to further analysis.\nFeatures\n\nGenerates symbolic expressions for each control coefficient of a metabolic pathway model. \nSplits control coefficients into control patterns that indicate the contribution of different chains of local effects.\nControl coefficient and control pattern expressions can be manipulated using standard SymPy functionality. \nValues of control coefficient and control pattern values are determined automatically and updated automatically following the calculation of standard (non-symbolic) control coefficient values subsequent to a parameter alteration.\nAnalysis sessions (raw expression data) can be saved to disk for later use. \nThe effect of parameter scans on control coefficient and control patters can be generated and displayed using ScanFig.\nVisualisation of control patterns by using ModelGraph functionality.\nSaving/loading of Symca sessions.\nSaving of control pattern results.\n\nUsage and feature walkthrough\nWorkflow\nPerforming symbolic control analysis with Symca usually requires the following steps:\n\nInstantiation of a Symca object using a PySCeS model object.\nGeneration of symbolic control coefficient expressions.\nAccess generated control coefficient expression results via cc_results and the corresponding control coefficient name (see Basic Usage)\nInspection of control coefficient values.\nInspection of control pattern values and their contributions towards the total control coefficient values. \nInspection of the effect of parameter changes (parameter scans) on the values of control coefficients and control patterns and the contribution of control patterns towards control coefficients.\nSession/result saving if required\nFurther analysis.\n\nObject instantiation\nInstantiation of a Symca analysis object requires PySCeS model object (PysMod) as an argument. Using the included lin4_fb.psc model a Symca session is instantiated as follows:",
"mod = pysces.model('lin4_fb')\nsc = psctb.Symca(mod)",
"Additionally Symca has the following arguments:\n\ninternal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False)\nauto_load: If True Symca will try to load a previously saved session. Saved data is unaffected by the internal_fixed argument above (default: False).\n\n.. note:: For the case where an internal metabolite is fixed see Fixed internal metabolites below.\nGenerating symbolic control coefficient expressions\nControl coefficient expressions can be generated as soon as a Symca object has been instantiated using the do_symca method. This process can potentially take quite some time to complete, therefore we recommend saving the generated expressions for later loading (see Saving/Loading Sessions below). In the case of lin4_fb.psc expressions should be generated within a few seconds.",
"sc.do_symca()",
"do_symca has the following arguments:\n\ninternal_fixed: This must be set to True in the case where an internal metabolite has a fixed concentration (default: False)\nauto_save_load: If set to True Symca will attempt to load a previously saved session and only generate new expressions in case of a failure. After generation of new results, these results will be saved instead. Setting internal_fixed to True does not affect previously saved results that were generated with this argument set to False (default: False).\n\nAccessing control coefficient expressions\nGenerated results may be accessed via a dictionary-like cc_results object (see Basic Usage - Tables). Inspecting this cc_results object in a IPython/Jupyter notebook yields a table of control coefficient values:",
"sc.cc_results",
"Inspecting an individual control coefficient yields a symbolic expression together with a value:",
"sc.cc_results.ccJR1_R4",
"In the above example, the expression of the control coefficient consists of two numerator terms and a common denominator shared by all the control coefficient expression signified by $\\Sigma$.\nVarious properties of this control coefficient can be accessed such as the:\n* Expression (as a SymPy expression)",
"sc.cc_results.ccJR1_R4.expression",
"Numerator expression (as a SymPy expression)",
"sc.cc_results.ccJR1_R4.numerator",
"Denominator expression (as a SymPy expression)",
"sc.cc_results.ccJR1_R4.denominator",
"Value (as a float64)",
"sc.cc_results.ccJR1_R4.value",
"Additional, less pertinent, attributes are abs_value, latex_expression, latex_expression_full, latex_numerator, latex_name, name and denominator_object.\nThe individual control coefficient numerator terms, otherwise known as control patterns, may also be accessed as follows:",
"sc.cc_results.ccJR1_R4.CP001\n\nsc.cc_results.ccJR1_R4.CP002",
"Each control pattern is numbered arbitrarily starting from 001 and has similar properties as the control coefficient object (i.e., their expression, numerator, value etc. can also be accessed).\nControl pattern percentage contribution\nAdditionally control patterns have a percentage field which indicates the degree to which a particular control pattern contributes towards the overall control coefficient value:",
"sc.cc_results.ccJR1_R4.CP001.percentage\n\nsc.cc_results.ccJR1_R4.CP002.percentage",
"Unlike conventional percentages, however, these values are calculated as percentage contribution towards the sum of the absolute values of all the control coefficients (rather than as the percentage of the total control coefficient value). This is done to account for situations where control pattern values have different signs.\nA particularly problematic example of where the above method is necessary, is a hypothetical control coefficient with a value of zero, but with two control patterns with equal value but opposite signs. In this case a conventional percentage calculation would lead to an undefined (NaN) result, whereas our methodology would indicate that each control pattern is equally ($50\\%$) responsible for the observed control coefficient value.\nDynamic value updating\nThe values of the control coefficients and their control patterns are automatically updated when new steady-state\nelasticity coefficients are calculated for the model. Thus changing a parameter of lin4_hill, such as the $V_{f}$ value of reaction 4, will lead to new control coefficient and control pattern values:",
"mod.reLoad()\n# mod.Vf_4 has a default value of 50\nmod.Vf_4 = 0.1\n# calculating new steady state\nmod.doMca()\n\n# now ccJR1_R4 and its two control patterns should have new values\nsc.cc_results.ccJR1_R4\n\n# original value was 0.000\nsc.cc_results.ccJR1_R4.CP001\n\n# original value was 0.964\nsc.cc_results.ccJR1_R4.CP002\n\n# resetting to default Vf_4 value and recalculating\nmod.reLoad()\nmod.doMca()",
"Control pattern graphs\nAs described under Basic Usage, Symca has the functionality to display the chains of local effects represented by control patterns on a scheme of a metabolic model. This functionality can be accessed via the highlight_patterns method:",
"# This path leads to the provided layout file \npath_to_layout = '~/Pysces/psc/lin4_fb.dict'\n\n# Correct path depending on platform - necessary for platform independent scripts\nif platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):\n path_to_layout = psctb.utils.misc.unix_to_windows_path(path_to_layout)\nelse:\n path_to_layout = path.expanduser(path_to_layout)\n\nsc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)",
"highlight_patterns has the following optional arguments:\n\nwidth: Sets the width of the graph (default: 900).\nheight:Sets the height of the graph (default: 500).\nshow_dummy_sinks: If True reactants with the \"dummy\" or \"sink\" will not be displayed (default: False).\nshow_external_modifier_links: If True edges representing the interaction of external effectors with reactions will be shown (default: False).\n\nClicking either of the two buttons representing the control patterns highlights these patterns according according to their percentage contribution (as discussed above) towards the total control coefficient.",
"# clicking on CP002 shows that this control pattern representing \n# the chain of effects passing through the feedback loop\n# is totally responsible for the observed control coefficient value.\nsc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)\n\n# clicking on CP001 shows that this control pattern representing \n# the chain of effects of the main pathway does not contribute\n# at all to the control coefficient value.\nsc.cc_results.ccJR1_R4.highlight_patterns(height = 350, pos_dic=path_to_layout)",
"Parameter scans\nParameter scans can be performed in order to determine the effect of a parameter change on either the control coefficient and control pattern values or of the effect of a parameter change on the contribution of the control patterns towards the control coefficient (as discussed above). The procedures for both the \"value\" and \"percentage\" scans are very much the same and rely on the same principles as described in the Basic Usage and RateChar sections.\nTo perform a parameter scan the do_par_scan method is called. This method has the following arguments:\n\nparameter: A String representing the parameter which should be varied.\nscan_range: Any iterable representing the range of values over which to vary the parameter (typically a NumPy ndarray generated by numpy.linspace or numpy.logspace).\nscan_type: Either \"percentage\" or \"value\" as described above (default: \"percentage\").\ninit_return: If True the parameter value will be reset to its initial value after performing the parameter scan (default: True).\npar_scan: If True, the parameter scan will be performed by multiple parallel processes rather than a single process, thus speeding performance (default: False).\npar_engine: Specifies the engine to be used for the parallel scanning processes. Can either be \"multiproc\" or \"ipcluster\". A discussion of the differences between these methods are beyond the scope of this document, see here for a brief overview of Multiprocessing in Python. (default: \"multiproc\").\nforce_legacy: If True do_par_scan will use a older and slower algorithm for performing the parameter scan. This is mostly used for debugging purposes. (default: False)\n\nBelow we will perform a percentage scan of $V_{f4}$ for 200 points between 0.01 and 1000 in log space:",
"percentage_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',\n scan_range=numpy.logspace(-1,3,200),\n scan_type='percentage')",
"As previously described, these data can be displayed using ScanFig by calling the plot method of percentage_scan_data. Furthermore, lines can be enabled/disabled using the toggle_category method of ScanFig or by clicking on the appropriate buttons:",
"percentage_scan_plot = percentage_scan_data.plot()\n\n# set the x-axis to a log scale\npercentage_scan_plot.ax.semilogx()\n\n# enable all the lines\npercentage_scan_plot.toggle_category('Control Patterns', True)\npercentage_scan_plot.toggle_category('CP001', True)\npercentage_scan_plot.toggle_category('CP002', True)\n\n# display the plot\npercentage_scan_plot.interact()\n",
"A value plot can similarly be generated and displayed. In this case, however, an additional line indicating $C^{J}_{4}$ will also be present:",
"value_scan_data = sc.cc_results.ccJR1_R4.do_par_scan(parameter='Vf_4',\n scan_range=numpy.logspace(-1,3,200),\n scan_type='value')\n\nvalue_scan_plot = value_scan_data.plot()\n\n# set the x-axis to a log scale\nvalue_scan_plot.ax.semilogx()\n\n# enable all the lines\nvalue_scan_plot.toggle_category('Control Coefficients', True)\nvalue_scan_plot.toggle_category('ccJR1_R4', True)\n\nvalue_scan_plot.toggle_category('Control Patterns', True)\nvalue_scan_plot.toggle_category('CP001', True)\nvalue_scan_plot.toggle_category('CP002', True)\n\n# display the plot\nvalue_scan_plot.interact()\n",
"Fixed internal metabolites\nIn the case where the concentration of an internal intermediate is fixed (such as in the case of a GSDA) the internal_fixed argument must be set to True in either the do_symca method, or when instantiating the Symca object. This will typically result in the creation of a cc_results_N object for each separate reaction block, where N is a number starting at 0. Results can then be accessed via these objects as with normal free internal intermediate models.\nThus for a variant of the lin4_fb model where the intermediateS3 is fixed at its steady-state value the procedure is as follows:",
"# Create a variant of mod with 'C' fixed at its steady-state value\nmod_fixed_S3 = psctb.modeltools.fix_metabolite_ss(mod, 'S3')\n\n# Instantiate Symca object the 'internal_fixed' argument set to 'True'\nsc_fixed_S3 = psctb.Symca(mod_fixed_S3,internal_fixed=True)\n\n# Run the 'do_symca' method (internal_fixed can also be set to 'True' here)\nsc_fixed_S3.do_symca() ",
"The normal sc_fixed_S3.cc_results object is still generated, but will be invalid for the fixed model. Each additional cc_results_N contains control coefficient expressions that have the same common denominator and corresponds to a specific reaction block. These cc_results_N objects are numbered arbitrarily, but consistantly accross different sessions. Each results object accessed and utilised in the same way as the normal cc_results object. \nFor the mod_fixed_c model two additional results objects (cc_results_0 and cc_results_1) are generated:\n\ncc_results_1 contains the control coefficients describing the sensitivity of flux and concentrations within the supply block of S3 towards reactions within the supply block.",
"sc_fixed_S3.cc_results_1",
"cc_results_0 contains the control coefficients describing the sensitivity of flux and concentrations of either reaction block towards reactions in the other reaction block (i.e., all control coefficients here should be zero). Due to the fact that the S3 demand block consists of a single reaction, this object also contains the control coefficient of R4 on J_R4, which is equal to one. This results object is useful confirming that the results were generated as expected.",
"sc_fixed_S3.cc_results_0",
"If the demand block of S3 in this pathway consisted of multiple reactions, rather than a single reaction, there would have been an additional cc_results_N object containing the control coefficients of that reaction block. \nSaving results\nIn addition to being able to save parameter scan results (as previously described), a summary of the control coefficient and control pattern results can be saved using the save_results method. This saves a csv file (by default) to disk to any specified location. If no location is specified, a file named cc_summary_N is saved to the ~/Pysces/$modelname/symca/ directory, where N is a number starting at 0:",
"sc.save_results()",
"save_results has the following optional arguments:\n\nfile_name: Specifies a path to save the results to. If None, the path defaults as described above.\nseparator: The separator between fields (default: \",\")\n\nThe contents of the saved data file is as follows:",
"# the following code requires `pandas` to run\nimport pandas as pd\n# load csv file at default path\nresults_path = '~/Pysces/lin4_fb/symca/cc_summary_0.csv'\n\n# Correct path depending on platform - necessary for platform independent scripts\nif platform == 'win32' and pysces.version.current_version_tuple() < (0,9,8):\n results_path = psctb.utils.misc.unix_to_windows_path(results_path)\nelse:\n results_path = path.expanduser(results_path)\n\nsaved_results = pd.read_csv(results_path)\n# show first 20 lines\nsaved_results.head(n=20) ",
"Saving/loading sessions\nSaving and loading Symca sessions is very simple and works similar to RateChar. Saving a session takes place with the save_session method, whereas the load_session method loads the saved expressions. As with the save_results method and most other saving and loading functionality, if no file_name argument is provided, files will be saved to the default directory (see also Basic Usage). As previously described, expressions can also automatically be loaded/saved by do_symca by using the auto_save_load argument which saves and loads using the default path. Models with internal fixed metabolites are handled automatically.",
"# saving session\nsc.save_session()\n\n# create new Symca object and load saved results\nnew_sc = psctb.Symca(mod)\nnew_sc.load_session()\n\n# display saved results\nnew_sc.cc_results"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
qutip/qutip-notebooks
|
examples/nonmarkov-transfer-tensor-method.ipynb
|
lgpl-3.0
|
[
"QuTiP Example: The Transfer Tensor Method for Non-Markovian Open Quantum Systems\nArne L. Grimsmo <br>\nUniversité de Sherbrooke <br>\narne.grimsmo@gmail.com\n$\\newcommand{\\ket}[1]{\\left|#1\\right\\rangle}$\n$\\newcommand{\\bra}[1]{\\left\\langle#1\\right|}$\nIntroduction\nThe \"Transfer Tensor Method\" was introduced by Cerrillo and Cao in Phys. Rev. Lett 112, 110401 (2014) (arXiv link), as a general method for evolving non-Markovian open quantum systems.\nThe method takes as input a set of dynamical maps $\\mathcal{E}_k$, such that\n$$\n\\rho(t_k) = \\mathcal{E}_k \\rho(0)\n$$\nfor an intial set of times $t_k$. This set of dynamical maps could be the result of experimental process tomography of they could be precomputed through some other (typically costly) method. The idea is that based on knowledge of these maps, one can try to exptrapolate the, in general non-Markovian, time-evolution to larger times, $t_n > t_k$. The method assumes that there is no explicit time-dependence in the total system-bath Hamiltonian.\nPreamble\nImports",
"import numpy as np\n\nimport qutip as qt\nfrom qutip.ipynbtools import version_table\n\nimport qutip.nonmarkov.transfertensor as ttm",
"Plotting Support",
"%matplotlib inline\nimport matplotlib.pyplot as plt",
"Jaynes-Cummings model, with the cavity as a non-Markovian bath\nAs a simple example, we consider the Jaynes-Cummings mode, and the non-Markovian dynamics of the qubit when the cavity is traced out. In this example, the dynamical maps $\\mathcal{E}_k$ are the reduced time-propagators for the qubit, after evolving and tracing out the cavity, i.e.\n$$\n\\mathcal{E}k \\rho = {\\rm tr}{\\rm cav} \\left[ {\\rm e}^{\\mathcal{L} t_k} \\rho \\otimes \\rho_{0,{\\rm cav}} \\right],\n$$\nwhere $\\mathcal{L}$ is the Lindbladian for the dissipative JC model (defined below) and $\\rho_{0,{\\rm cav}}$ is the initial state of the cavity.\nProblem setup",
"kappa = 1.0 # cavity decay rate\nwc = 0.0*kappa # cavity frequency\nwa = 0.0*kappa # qubit frequency\ng = 10.0*kappa # coupling strength\nN = 3 # size of cavity basis\n\n# intial state\npsi0c = qt.basis(N,0)\nrho0c = qt.ket2dm(psi0c)\nrho0a = qt.ket2dm(qt.basis(2,0))\nrho0 = qt.tensor(rho0a,rho0c)\nrho0avec = qt.operator_to_vector(rho0a)\n\n# identity superoperator\nId = qt.tensor(qt.qeye(2),qt.qeye(N))\nE0 = qt.sprepost(Id,Id)\n\n# partial trace over the cavity, reprsented as a superoperator\nptracesuper = qt.tensor_contract(E0,(1,3))\n\n# intial state of the cavity, represented as a superoperator\nsuperrho0cav = qt.sprepost(qt.tensor(qt.qeye(2),psi0c),\n qt.tensor(qt.qeye(2),psi0c.dag()))\n\n# operators\na = qt.tensor(qt.qeye(2), qt.destroy(N))\nsm = qt.tensor(qt.sigmam(), qt.qeye(N))\nsz = qt.tensor(qt.sigmaz(), qt.qeye(N))\n\n# Hamiltonian\nH = wc * a.dag() * a + wa * sm.dag() * sm + g * (a.dag() * sm + a * sm.dag())\nc_ops = [np.sqrt(kappa)*a]",
"Exact timepropagators to learn from\nThe function dynmap generates an exact timepropagator for the qubit $\\mathcal{E}_{k}$ for a time $t_k$. <br>",
"def dynmap(t):\n # reduced dynamical map for the qubit at time t\n Et = qt.mesolve(H, E0, [0.,t], c_ops, []).states[-1]\n return ptracesuper*(Et*superrho0cav)",
"Exact time evolution using standard mesolve method",
"exacttimes = np.arange(0,5,0.01)\nexactsol = qt.mesolve(H, rho0, exacttimes, c_ops, [])",
"Approximate solution using the Transfer Tensor Method for different learning times",
"times = np.arange(0,5,0.1) # total extrapolation time\nttmsols = []\nmaxlearningtimes = [0.5, 2.0] # maximal learning times\nfor T in maxlearningtimes:\n learningtimes = np.arange(0,T,0.1)\n learningmaps = [dynmap(t) for t in learningtimes] # generate exact dynamical maps to learn from\n ttmsols.append(ttm.ttmsolve(learningmaps, rho0a, times)) # extrapolate using TTM",
"Visualize results",
"fig, ax = plt.subplots(figsize=(10,7))\nax.plot(exactsol.times, qt.expect(sz, exactsol.states),'-b',linewidth=3.0)\nstyle = ['og','or']\nfor i,ttmsol in enumerate(ttmsols):\n ax.plot(ttmsol.times, qt.expect(qt.sigmaz(), ttmsol.states),style[i],linewidth=1.5,)\nax.legend(['exact',str(maxlearningtimes[0]),str(maxlearningtimes[1])])\nax.set_xlabel(r'$\\kappa t$', fontsize=20)\nax.set_ylabel(r'$\\sigma_z$', fontsize=20)",
"Discussion\nThe figure above illustrates how the transfer tensor method needs a sufficiently long set of learning times to get good results. The green dots show results for learning times $t_k=0,0.1,\\dots,0.5$, which is clearly not sufficient. The red dots show results for $t_k=0,0.1,\\dots,2.0$, which gives results that are in very good agreement with the exact solution.\nEpilouge",
"version_table()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
relopezbriega/mi-python-blog
|
content/notebooks/MachineLearningPractica.ipynb
|
gpl-2.0
|
[
"Ejemplo de Machine Learning con Python - Preprocesamiento y exploración\nEsta notebook fue creada originalmente como un blog post por Raúl E. López Briega en Matemáticas, Analisis de datos y Python. El contenido esta bajo la licencia BSD.\n<img alt=\"Machine Learning\" title=\"Machine Learning\" src=\"https://relopezbriega.github.io/images/machine-learning.jpg\">\nIntroducción\nEn mi artículo Machine Learning con Python, hice una breve introducción a los principales conceptos que debemos conocer de Machine Learning. En este artículo, la idea es profundizar un poco más en ellos y presentar algunos conceptos nuevos con la ayuda de un ejemplo práctico.\nDescripción del ejemplo\nEn el ejemplo que vamos a utilizar, vamos a imaginarnos que una organización sin fines de lucro soporta su\noperación mediante la organización periódica de una campaña para recaudar fondos por correo. Esta organización ha creado una base de datos con más de 40 mil personas que por lo menos una vez en el pasado ha sido donante.\nLa campaña de recaudación de fondos se realiza mediante el envío a una lista de correo (o un subconjunto de ella) de un regalo simbólico y la solicitud de una donación.\nUna vez que se planifica la campaña, el costo total de la misma se conoce de forma automática:\n[número de potenciales donantes a contactar] x ([costo de regalo] + [costo de correo])\nSin embargo, el resultado de la recaudación de fondos depende tanto del número de donantes que responde a la campaña, como del importe medio de dinero que es donado.\nLa idea es que, utilizando las técnicas de Machine Learning sobre la base de datos de esta organización, podamos ayudarla a maximizar los beneficios de la campaña de recaudación, esto es, lograr el máximo importe posible de dinero recaudado, minimizando lo más que se pueda el costo total de la campaña.\nDebemos tener en cuenta que un miembro de la organización le enviará el correo a un potencial donante, siempre que el rendimiento esperado del pedido excede el costo del correo con la solicitud de donación. Para nuestro ejemplo, el costo por donante de la campaña va a ser igual al [costo de regalo] + [costo de correo], y esto va a ser igual a \\$ 0.75 por correo enviado. Los ingresos netos de la campaña se calculan como la suma (importe de donación real - \\$ 0.75) sobre todos los donantes a los que se ha enviado el correo. Nuestro objetivo es ayudar a esta organización sin fines de lucro a seleccionar de su lista de correo los donantes a los que debe abordar a los efectos de maximizar los beneficios de la campaña de recaudación.\nEl Dataset\nEl dataset que vamos a utilizar, consiste en la base de datos de la organización sin fines de lucro con la lista de correo de los donantes de sus campañas anteriores. El mismo, ya lo hemos dividido en un dataset de aprendizaje que se pueden descargar del siguiente enlace; y un dataset que vamos a utilizar para realizar las predicciones, el cual se lo pueden descargar desde este otro enlace.\nAlgunos otros datos a tener en cuenta, son los siguientes:\n\nEl dataset de aprendizaje contiene 47720 registros y 481 columnas. La primera fila / cabecera del mismo contiene los nombres de cada campo.\nEl dataset de validación contiene 47692 registros y 479 columnas. 
Al igual que en el caso anterior, la primera fila contiene los nombres de cada campo.\nLos registros del dataset de validación son idénticos a los registros del dataset de aprendizaje, excepto que los valores para nuestros campos objetivo que necesitamos para el aprendizaje, no existen(es decir, las columnas DONOR_FLAG y DONOR_AMOUNT no están incluidas en el dataset de validación).\nLos espacios en blanco en los campos de tipo texto y los puntos en los campos de tipo numérico corresponden a valores faltantes o perdidos.\nCada registro tiene un identificador único de registro o índice (campo IDX). Para cada registro, hay dos variables objetivo (campos DONOR_FLAG y DONOR_AMOUNT). DONOR_FLAG es una variable binaria que indica si ese registro fue donante o no; mientras que DONOR_AMOUNT contiene el importe de la donación para los casos que fueron donantes.\nAlgunos de los valores en el dataset pueden contener errores de formato o de ingreso. Por lo que se deberían corregir o limpiar.\nUna descripción detallada del significado de cada columna del dataset, la pueden encontrar en el siguiente enlace.\n\nAnálisis exploratorio y preprocesamiento\nEl primer paso que deberíamos emprender, es realizar un pequeño análisis exploratorio de nuestro dataset; es decir, valernos de algunos herramientas de la estadística, junto con algunas visualizaciones para entender un poco más los datos de los que disponemos. Veamos como podemos hacer esto.",
"# <!-- collapse=True -->\n# Importando las librerías que vamos a utilizar\nimport pandas as pd\nimport numpy as np \nimport matplotlib.pyplot as plt \nimport seaborn as sns \nfrom sklearn.preprocessing import LabelEncoder\n\n# graficos incrustados\n%matplotlib inline\n\n# parametros esteticos de seaborn\nsns.set_palette(\"deep\", desat=.6)\nsns.set_context(rc={\"figure.figsize\": (8, 4)})\n\n# importando el dataset a un Dataframe de Pandas\nONG_data = pd.read_csv('LEARNING.csv', header=0)\n\n# Examinando las primeras 10 filas y 10 columnas del dataset\nONG_data.ix[:10, :10]\n\n# Controlando la cantidad de registros\nONG_data['DONOR_AMOUNT'].count()",
"Como podemos ver, utilizando simples expresiones de Python, podemos cargar la base de datos de la ONG en un Dataframe de Pandas; lo que nos va a permitir manipular los datos con suma facilidad. Comenzemos a explorar un poco más en detalle este dataset!\nEn primer lugar, lo que deberíamos hacer es controlar si existen valores faltantes o nulos; esto lo podemos realizar utilizando el método isnull() del siguiente modo:",
"# Controlando valores nulos\nONG_data.isnull().any().any()",
"Como podemos ver, el método nos devuelve el valor \"True\", lo que indica que existen valores nulos en nuestro dataset. Estos valores pueden tener una influencia significativa en nuestro modelo predictivo, por lo que siempre es una decisión importante determinar la forma en que los vamos a manejar. Las alternativas que tenemos son:\n\nDejarlos como están, lo que a la larga nos va a traer bastantes dolores de cabeza ya que en general los algoritmos no los suelen procesar correctamente y provocan errores.\nEliminarlos, lo que es una alternativa viable aunque, dependiendo la cantidad de valores nulos, puede afectar significativamente el resultado final de nuestro modelo predictivo.\nInferir su valor. En este caso, lo que podemos hacer es tratar de inferir el valor faltante y reemplazarlo por el valor inferido. Esta suele ser generalmente la mejor alternativa a seguir.\n\nEn este ejemplo, yo voy a utilizar la última alternativa. Vamos a inferir los valores faltantes utilizando la media aritmética para los datos cuantitativos y la <a href=\"https://es.wikipedia.org/wiki/Moda_(estad%C3%ADstica)\">moda</a> para los datos categóricos.\nComo vamos a utilizar dos métodos distintos para reemplazar a los valores faltantes, dependiendo de si son numéricos o categóricos, el primer paso que debemos realizar es tratar de identificar que columnas de nuestro dataset corresponde a cada tipo de datos; para realizar esto vamos a utilizar el atributo dtypes del Dataframe de Pandas.",
"# Agrupando columnas por tipo de datos\ntipos = ONG_data.columns.to_series().groupby(ONG_data.dtypes).groups\n\n# Armando lista de columnas categóricas\nctext = tipos[np.dtype('object')]\nlen(ctext) # cantidad de columnas con datos categóricos. \n\n# Armando lista de columnas numéricas\ncolumnas = ONG_data.columns # lista de todas las columnas\ncnum = list(set(columnas) - set(ctext))\nlen(cnum)",
"Ahora ya logramos separar a las 481 columnas que tiene nuestro dataset. 68 columnas contienen datos categóricos y 413 contienen datos cuantitativos. Procedamos a inferir los valores faltantes.",
"# Completando valores faltantas datos cuantititavos\nfor c in cnum:\n mean = ONG_data[c].mean()\n ONG_data[c] = ONG_data[c].fillna(mean)\n\n# Completando valores faltantas datos categóricos\nfor c in ctext:\n mode = ONG_data[c].mode()[0]\n ONG_data[c] = ONG_data[c].fillna(mode)\n\n# Controlando que no hayan valores faltantes\nONG_data.isnull().any().any()\n\n# Guardando el dataset preprocesado\n# Save transform datasets\nONG_data.to_csv(\"LEARNING_procesado.csv\", index=False)",
"Perfecto! Ahora tenemos un dataset limpio de valores faltantes. Ya estamos listos para comenzar a explorar los datos, comencemos por determinar el porcentaje de personas que alguna vez fue donante de la ONG y están incluidos en la base de datos con la que estamos trabajando.",
"# Calculando el porcentaje de donantes sobre toda la base de datos\nporcent_donantes = (ONG_data[ONG_data.DONOR_AMOUNT \n > 0]['DONOR_AMOUNT'].count() * 1.0\n / ONG_data['DONOR_AMOUNT'].count()) * 100.0\nprint(\"El procentaje de donantes de la base de datos es {0:.2f}%\"\n .format(porcent_donantes))\n\n# Grafico de totas del porcentaje de donantes\n# Agrupando por DONOR_FLAG\ndonantes = ONG_data.groupby('DONOR_FLAG').IDX.count() \n# Creando las leyendas del grafico.\nlabels = [ 'Donante\\n' + str(round(x * 1.0 / donantes.sum() * \n 100.0, 2)) + '%' for x in donantes ]\nlabels[0] = 'No ' + labels[0]\n\nplt.pie(donantes, labels=labels)\nplt.title('Porcion de donantes')\nplt.show()\n\n# Creando subset con solo los donates\nONG_donantes = ONG_data[ONG_data.DONOR_AMOUNT > 0]\n\n# cantidad de donantes\nlen(ONG_donantes)",
"Aquí podemos ver que el porcentaje de personas que fueron donantes en el pasado es realmente muy bajo, solo un 5 % del total de la base de datos (2423 personas). Este es un dato importante a tener en cuenta ya que al existir tanta diferencia entre las clases a clasificar, esto puede afectar considerablemente a nuestro algoritmo de aprendizaje.\nExploremos también un poco más en detalle a este grupo pequeño de personas que fueron donantes; veamos por ejemplo como se dividen de acuerdo a la cantidad de dinero donado.",
"# Analizando el importe de donanciones\n# Creando un segmentos de importes\nimp_segm = pd.cut(ONG_donantes['DONOR_AMOUNT'], \n [0, 10, 20, 30, 40, 50, 60, 100, 200])\n# Creando el grafico de barras desde pandas\nplot = pd.value_counts(imp_segm).plot(kind='bar',\n title='Importes de donacion')\nplot.set_ylabel('Cant de donantes')\nplot.set_xlabel('Rango de importes')\nplt.show()\n\n# Agrupación por segmento segun importe donado.\npd.value_counts(imp_segm)\n\n# importe de donación promedio\nONG_donantes['DONOR_AMOUNT'].mean()\n\n# Gráfico de cajas del importe de donación\nsns.boxplot(list(ONG_donantes['DONOR_AMOUNT']))\nplt.title('importe de donación')\nplt.show()",
"Este análisis nos muestra que la mayor cantidad de donaciones caen en un rango de importes entre 0 y 30, siendo la donación promedio 15.60. También podemos ver que donaciones que superen un importe de 50 son casos realmente poco frecuentes, por lo que constituyen valores atípicos y sería prudente eliminar estos casos al entrenar nuestro modelo para que no distorsionen los resultados.\nOtra exploración interesante que podríamos realizar sobre nuestro dataset relacionado con los donantes, es ver como se divide este grupo en términos de género y edad. Comencemos con el género!",
"# Grafico del género de los donantes\nONG_donantes.groupby('GENDER').size().plot(kind='bar')\nplt.title('Distribución por género')\nplt.show()\n\n# Donaciones segun el género\nONG_donantes[(ONG_donantes.DONOR_AMOUNT <= 50)\n & (ONG_donantes.GENDER.isin(['F', 'M'])\n )][['DONOR_AMOUNT', 'GENDER']].boxplot(by='GENDER')\nplt.title('Donantes segun sexo')\nplt.show()\n\n# Media de impote donado por mujeres\nONG_donantes[ONG_donantes.GENDER == 'F'][['DONOR_AMOUNT']].mean()\n\n# Media de impote donado por hombres\nONG_donantes[ONG_donantes.GENDER == 'M'][['DONOR_AMOUNT']].mean()",
"Aquí vemos que las mujeres suelen estar más propensas a donar, aunque donan un importe promedio menor (14.61) al que donan los hombres (16.82). Veamos ahora como se comportan las donaciones respecto a la edad.",
"# Distribución de la edad de los donantes\nONG_donantes['AGE'].hist().set_title('Distribución de donantes segun edad')\nplt.show()\n\n# Agrupando la edad por rango de a 10\nAGE2 = pd.cut(ONG_donantes['AGE'], range(0, 100, 10))\nONG_donantes['AGE2'] = AGE2\n\n# Gráfico de barras de donaciones por edad\npd.value_counts(AGE2).plot(kind='bar', title='Donaciones por edad')\nplt.show()\n\n# Importes de donación por grango de edad\nONG_donantes[ONG_donantes.DONOR_AMOUNT <= 50][['DONOR_AMOUNT', \n 'AGE2']].boxplot(by='AGE2')\nplt.title('Importe de donación por edad')\nplt.show()",
"En este último análisis podemos ver que la mayor cantidad de los donantes son personas de entre 60 y 70 años, aunque la media de importe donado más alta la tienen las personas que van desde los 30 a los 60 años. \nCon esto concluyo este análisis; en próximos artículos voy a continuar con el ejemplo completando los restantes pasos que incluye un proyecto de Machine Learning hasta concluir el modelo y poder utilizarlo para realizar predicciones (selección de atributos - armado de modelo - entrenamiento - evaluación - métricas - predicción). Espero lo hayan disfrutado tanto como yo disfrute al escribirlo!\nSaludos!\nEste post fue escrito utilizando IPython notebook. Pueden descargar este notebook o ver su version estática en nbviewer."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ysh329/Homework
|
Python for Genomic Data Science/Lecture 8 Quiz.ipynb
|
mit
|
[
"Lecture 8 Quiz Please Note: No Grace Period\nQuestion 1\nWhat module can we use to run BLAST over the internet in Biopython:\n\nBio.Blast.NCBIXML\nNCBIXML\nBio.Blast.NCBIWWW\nWWW",
"from Bio.Blast import NCBIWWW\nfasta_string = \"GATGCTACGGTGCTAAAAGCATTACGCCCTATAGTGATTTTCGACATACTGTGTTTTTAAATATAGTATTGCC\"\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", fasta_string)\n\nprint \"type(result_handle):\", type(result_handle)\nprint \"len(str(result_handle)):\", len(str(result_handle))\nprint \"len(str(result_handle)[0]):\", len(str(result_handle)[0])\nprint \"str(result_handle):\", str(result_handle)\n\nfrom Bio.Blast import NCBIXML\nblast_record = NCBIXML.read(result_handle)\nprint \"blast_record:\", blast_record\nprint \"type(blast_record):\", type(blast_record)\nprint \"len(blast_record.alignments):\", len(blast_record.alignments)\n\nE_VALUE_THRESH = 0.01\nfor alignment in blast_record.alignments:\n for hsp in alignment.hsps:\n if hsp.expect < E_VALUE_THRESH:\n print('****Alignment****')\n print('sequence:', alignment.title)\n print('length:', alignment.length)\n print('e value:', hsp.expect)\n print(hsp.query)\n print(hsp.match)\n print(hsp.sbjct)\n\nhelp(NCBIWWW.qblast)",
"Question 2\nWhich one of the following modules is not part of the Bio.Blast package in Biopython:\n\nParseBlastTable\nNCBIXML\nFastaIO\nApplications",
"import Bio.Blast\nhelp(Bio.Blast)",
"Question 3\nUsing Biopython find out what species the following unknown DNA sequence comes from:\nTGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTAC\nAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCAC\nCTACGGTAGAG\nHint. Identify the alignment with the lowest E value.\n* Nicotiana tabacum\n* Salvia miltiorrhiza\n* Capsicum annuum\n* Hyoscyamus niger",
"from Bio.Blast import NCBIWWW\nfasta_string = \"TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTACAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCACCTACGGTAGAG\"\nresult_handle = NCBIWWW.qblast(\"blastn\", \"nt\", fasta_string)\n\nfrom Bio.Blast import NCBIXML\nblast_record = NCBIXML.read(result_handle)\nprint \"blast_record:\", blast_record\nprint \"type(blast_record):\", type(blast_record)\nprint \"len(blast_record.alignments):\", len(blast_record.alignments)\n\nE_VALUE_THRESH = 0.01\nfor alignment in blast_record.alignments:\n for hsp in alignment.hsps:\n if hsp.expect < E_VALUE_THRESH:\n print('****Alignment****')\n print('sequence:', alignment.title)\n print('length:', alignment.length)\n print('e value:', hsp.expect)\n print(hsp.query)\n print(hsp.match)\n print(hsp.sbjct)",
"Question 4\nSeq is a sequence object that can be imported from Biopython using the following statement:\n'''\nfrom Bio.Seq import Seq \n'''\nIf my_seq is a Seq object, what is the correct Biopython code to print the reverse complement of my_seq?\nHint. Use the built-in function help you find out the methods of the Seq object. \n\nprint('reverse complement is %s' % complement(my_seq)) \nprint('reverse complement is %s' % my_seq.reverse_complement())\nprint('reverse complement is %s' % my_seq.reverse())\nprint('reverse complement is %s' % reverse(my_seq.complement()))",
"from Bio.Seq import Seq\nfrom Bio.Alphabet import generic_protein\n\nhelp(Seq)\nmy_seq = Seq(\"MELKI\", generic_protein) + \"LV\"\n\nhelp(my_seq)\n\nfrom Bio.Seq import Seq\nfrom Bio.Alphabet import IUPAC\nmy_dna = Seq(\"CCCCCGATAG\", IUPAC.unambiguous_dna)\nmy_dna\nmy_dna.complement()\n\nfrom Bio.Seq import Seq\nfrom Bio.Alphabet import IUPAC\nmy_dna = Seq(\"CCCCCGATAGNR\", IUPAC.ambiguous_dna)\nmy_dna\nmy_dna.reverse_complement()\n\nif isinstance(my_seq, Seq):\n print('reverse complement is %s' % my_seq.reverse_complement())",
"Question 5\nCreate a Biopython Seq object that represents the following sequence:\nTGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTAC\nAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCAC\nCTACGGTAGAG\nIts protein translation is: \n\nILASYLSYIPCSYGGAMFYVNPRSKDIIPKSYN*DLDMVLLFIVSEVMFFFALFRASSHSSLAPTV\nNFGLIFILYTMFVWWRDVLRQSTFEGHHTKVVQLGPRYGFIVYRIGGYVLFCSFSGFFSFFFGTYG\nFWPHIYPIYHVRMVARCSTSIHVRRTSYQSRTIRTSIWFYCLSYRRLCSFLLFFGLLLILLWHLR \nWASYLSYIPCSYGGAMFYVNPRSKDIIPKSYN*DLDMVLFCLSYRRLCSFLLFFGLLLILLWHLR",
"from Bio.Blast import NCBIWWW\nmy_seq = Seq(\"TGGGCCTCATATTTATCCTATATACCATGTTCGTATGGTGGCGCGATGTTCTACGTGAATCCACGTTCGAAGGACATCATACCAAAGTCGTACAATTAGGACCTCGATATGGTTTTATTCTGTTTATCGTATCGGAGGTTATGTTCTTTTTTGCTCTTTTTCGGGCTTCTTCTCATTCTTCTTTGGCACCTACGGTAGAG\")\nprint my_seq.translate()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
DJCordhose/ai
|
notebooks/ml/1-classic-code.ipynb
|
mit
|
[
"Classic Approach",
"import warnings\nwarnings.filterwarnings('ignore')\n\n%matplotlib inline\n%pylab inline\n\nimport pandas as pd\nprint(pd.__version__)",
"First Step: Load Data and disassemble for our purposes",
"df = pd.read_csv('./insurance-customers-300.csv', sep=';')\n\ny=df['group']\n\ndf.drop('group', axis='columns', inplace=True)\n\nX = df.as_matrix()\n\ndf.describe()",
"Second Step: Visualizing Prediction",
"# ignore this, it is just technical code\n# should come from a lib, consider it to appear magically \n# http://scikit-learn.org/stable/auto_examples/neighbors/plot_classification.html\n\nimport matplotlib.pyplot as plt\nfrom matplotlib.colors import ListedColormap\n\ncmap_print = ListedColormap(['#AA8888', '#004000', '#FFFFDD'])\ncmap_bold = ListedColormap(['#AA4444', '#006000', '#AAAA00'])\ncmap_light = ListedColormap(['#FFAAAA', '#AAFFAA', '#FFFFDD'])\nfont_size=25\n\ndef meshGrid(x_data, y_data):\n h = 1 # step size in the mesh\n x_min, x_max = x_data.min() - 1, x_data.max() + 1\n y_min, y_max = y_data.min() - 1, y_data.max() + 1\n xx, yy = np.meshgrid(np.arange(x_min, x_max, h),\n np.arange(y_min, y_max, h))\n return (xx,yy)\n \ndef plotPrediction(clf, x_data, y_data, x_label, y_label, colors, title=\"\", mesh=True, fname=None):\n xx,yy = meshGrid(x_data, y_data)\n plt.figure(figsize=(20,10))\n\n if clf and mesh:\n Z = clf.predict(np.c_[yy.ravel(), xx.ravel()])\n # Put the result into a color plot\n Z = Z.reshape(xx.shape)\n plt.pcolormesh(xx, yy, Z, cmap=cmap_light)\n \n plt.xlim(xx.min(), xx.max())\n plt.ylim(yy.min(), yy.max())\n if fname:\n plt.scatter(x_data, y_data, c=colors, cmap=cmap_print, s=200, marker='o', edgecolors='k')\n else:\n plt.scatter(x_data, y_data, c=colors, cmap=cmap_bold, s=80, marker='o', edgecolors='k')\n plt.xlabel(x_label, fontsize=font_size)\n plt.ylabel(y_label, fontsize=font_size)\n plt.title(title, fontsize=font_size)\n if fname:\n plt.savefig(fname)\n\n# 0: red\n# 1: green\n# 2: yellow\n\nclass ClassifierBase:\n def predict(self, X):\n return np.array([ self.predict_single(x) for x in X])\n def score(self, X, y):\n n = len(y)\n correct = 0\n predictions = self.predict(X)\n for prediction, ground_truth in zip(predictions, y):\n if prediction == ground_truth:\n correct = correct + 1\n return correct / n\n\nfrom random import randrange\n\nclass RandomClassifier(ClassifierBase):\n def predict_single(self, x):\n return randrange(3)\n\nrandom_clf = RandomClassifier()\n\nplotPrediction(random_clf, X[:, 1], X[:, 0], \n 'Age', 'Max Speed', y,\n title=\"Max Speed vs Age (Random)\")",
"By just randomly guessing, we get approx. 1/3 right, which is what we expect",
"random_clf.score(X, y)",
"Third Step: Creating a Base Line\nCreating a naive classifier manually, how much better is it?",
"class BaseLineClassifier(ClassifierBase):\n def predict_single(self, x):\n try:\n speed, age, km_per_year = x\n except:\n speed, age = x\n km_per_year = 0\n if age < 25:\n if speed > 180:\n return 0\n else:\n return 2\n if age > 75:\n return 0\n if km_per_year > 50:\n return 0\n if km_per_year > 35:\n return 2\n return 1\n\nbase_clf = BaseLineClassifier()\n\nplotPrediction(base_clf, X[:, 1], X[:, 0], \n 'Age', 'Max Speed', y,\n title=\"Max Speed vs Age with Classification\")",
"This is the baseline we have to beat",
"base_clf.score(X, y)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Tsiems/machine-learning-projects
|
In_Class/ICA2_MachineLearning_PartA.ipynb
|
mit
|
[
"# Ebnable HTML/CSS \nfrom IPython.core.display import HTML\nHTML(\"<link href='https://fonts.googleapis.com/css?family=Passion+One' rel='stylesheet' type='text/css'><style>div.attn { font-family: 'Helvetica Neue'; font-size: 30px; line-height: 40px; color: #FFFFFF; text-align: center; margin: 30px 0; border-width: 10px 0; border-style: solid; border-color: #5AAAAA; padding: 30px 0; background-color: #DDDDFF; }hr { border: 0; background-color: #ffffff; border-top: 1px solid black; }hr.major { border-top: 10px solid #5AAA5A; }hr.minor { border: none; background-color: #ffffff; border-top: 5px dotted #CC3333; }div.bubble { width: 65%; padding: 20px; background: #DDDDDD; border-radius: 15px; margin: 0 auto; font-style: italic; color: #f00; }em { color: #AAA; }div.c1{visibility:hidden;margin:0;height:0;}div.note{color:red;}</style>\")",
"Enter Team Member Names here (double click to edit):\n\nName 1:\nName 2:\nName 3:\n\n\nIn Class Assignment Two\nIn the following assignment you will be asked to fill in python code and derivations for a number of different problems. Please read all instructions carefully and turn in the rendered notebook (or HTML of the rendered notebook) before the end of class (or right after class). The initial portion of this notebook is given before class and the remainder is given during class. Please answer the initial questions before class, to the best of your ability. Once class has started you may rework your answers as a team for the initial part of the assignment. \n<a id=\"top\"></a>\nContents\n\n<a href=\"#Loading\">Loading the Data</a>\n<a href=\"#svm\">Linear SVMs</a>\n\nAvailable only during the in class assignment:\n* <a href=\"#svm_using\">Using Linear SVMs</a>\n* <a href=\"#nonlinear\">Non-linear SVMs</a>\n\n<a id=\"Loading\"></a>\n<a href=\"#top\">Back to Top</a>\nLoading the Data\nPlease run the following code to read in the \"olivetti faces\" dataset from sklearn's data loading module. \nThis will load the data into the variable ds. ds is a bunch object with fields like ds.data and ds.target. The field ds.data is a numpy matrix of the continuous features in the dataset. The object is not a pandas dataframe. It is a numpy matrix. Each row is a set of observed instances, each column is a different feature. It also has a field called ds.target that is an integer value we are trying to predict (i.e., a specific integer represents a specific person). Each entry in ds.target is a label for each row of the ds.data matrix.",
"# fetch the images for the dataset\n# this will take a long time the first run because it needs to download\n# after the first time, the dataset will be save to your disk (in sklearn package somewhere) \n# if this does not run, you may need additional libraries installed on your system (install at your own risk!!)\nfrom sklearn.datasets import fetch_lfw_people\n\nlfw_people = fetch_lfw_people(min_faces_per_person=20, resize=None)\n\n# get some of the specifics of the dataset\nX = lfw_people.data\ny = lfw_people.target\nnames = lfw_people.target_names\n\nn_samples, n_features = X.shape\n_, h, w = lfw_people.images.shape\nn_classes = len(names)\n\nprint(\"n_samples: {}\".format(n_samples))\nprint(\"n_features: {}\".format(n_features))\nprint(\"n_classes: {}\".format(n_classes))\nprint(\"Original Image Sizes {} by {}\".format(h,w))\nprint (125*94) # the size of the images are the size of the feature vectors",
"Question 1: For the faces dataset, describe what the data represents? That is, what is each column? What is each row? What do the unique class values represent?\nEvery column is a pixel location in a 125x94 photograph.\nEach row is a single image of someone's face.\nThe unique class values are the names of the people in the photographs.\n<a id=\"svm\"></a>\n<a href=\"#top\">Back to Top</a>\nLinear Support Vector Machines\nQuestion 2: If we were to train a linear Support Vector Machine (SVM) upon the faces data, how many parameters would need to be optimized in the model? That is, how many coefficients would need to be calculated?\n11750",
"# Enter any scratchwork or calculations here\n\n",
"Question 3: \n- Part A: Given the number of parameters calculated above, would you expect the model to train quickly using batch optimization techniques? Why or why not? \n- Part B: Is there a way to reduce training time?\n- Part C: If we transformed the X data using principle components analysis (PCA) with 100 components, how many parameters would we need to find for a linear Support Vector Machine (SVM)?\nEnter you answer here (double click)\nA. No. Parallelizing wouldn't scale well to 11,750 threads.\nB. Train on a subsample\nC. 100",
"# Enter any scratchwork or calculations here\n\nprint('Part C. With 100 features: ', '100')",
"Remaining questions will be available during class for the actual assignment. Good luck!"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
lcharleux/numerical_analysis
|
doc/Traitement_signal/signal_processing.ipynb
|
gpl-2.0
|
[
"# Setup\n%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport matplotlib\nparams = {'font.size' : 14,\n 'figure.figsize':(12.0, 8.0),\n 'lines.linewidth': 2.,\n 'lines.markersize': 8,}\nmatplotlib.rcParams.update(params)",
"Signal Processing\nScope\nSignal processing today:\n\nCommunication,\nSensors,\nImages,\nVideo,\n...\n\nAnalog vs. Digital signals\nAnalog signal\nAnalog signal is a continuous function of time:\n$$\nx : t \\mapsto x(t) \n$$\nDigital signal\nDigital signal is a discrete function of time:\n$$\nt_n = t_0 + n \\times \\delta t\n$$\n$$\nx_n = x(t_n) \n$$\nSampling rate $f_s = 1/\\delta t$. \nExample",
"\nimport numpy as np\nimport matplotlib.pyplot as plt\n\ndef signal(t, f = 1.): \n return np.cos(2. * np.pi * f * t)\n\n\nD = 3.2 # duration\nt = np.linspace(0., D, 1000)\nx = signal(t)\n\nfs = 10. # sampling rate\ntn = np.linspace(0., D, fs * D) \nxn = signal(tn)\n\nplt.plot(t, x, \"k-\", label = \"Analog\")\nplt.plot(tn, xn, \"bo--\", label = \"Digital\")\nplt.grid()\nplt.xlabel(\"Time, $t$\")\nplt.ylabel(\"Amplitude, $x$\")\nplt.legend()\nplt.show()",
"Digital signals\nEffect of the sampling rate",
"t = np.linspace(0., D, 1000)\nx = signal(t)\n\nFs = [1., 2., 10.] # sampling rate\n\nplt.plot(t, x, \"k-\", label = \"Analog\")\nfor fs in Fs:\n tn = np.linspace(0., D, fs * D) \n xn = signal(tn)\n plt.plot(tn, xn, \"o--\", label = \"fs = {0}\".format(fs))\nplt.grid()\nplt.xlabel(\"Time, $t$\")\nplt.ylabel(\"Amplitude, $x$\")\nplt.legend()\nplt.show()",
"Higher sampling rate means better signal description,\nLower sampling rate means loss of information,\n\nAliasing",
"\n\n\nD = 1.5 # duration\nt = np.linspace(0., D, 1000)\nfs = 2.5 # sampling rate\ntn = np.linspace(0., D, fs * D) \nxn = signal(tn)\n\nF = .5 + np.arange(3) * fs # sampling rate\n\n\ntn = np.linspace(0., D, fs * D) \nxn = signal(tn, f = F[0])\nplt.plot(tn, xn, \"ok\", label = \"Samples\") \n\n\nfor f in F:\n x = signal(t, f = f) \n plt.plot(t, x, \"-\")\n \nplt.grid()\nplt.xlabel(\"Time, $t$\")\nplt.ylabel(\"Amplitude, $x$\")\nplt.legend()\nplt.show()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
tensorflow/docs-l10n
|
site/en-snapshot/lite/models/modify/model_maker/question_answer.ipynb
|
apache-2.0
|
[
"Copyright 2020 The TensorFlow Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"BERT Question Answer with TensorFlow Lite Model Maker\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/lite/models/modify/model_maker/question_answer\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/question_answer.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/tensorflow/blob/master/tensorflow/lite/g3doc/models/modify/model_maker/question_answer.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/tensorflow/tensorflow/lite/g3doc/models/modify/model_maker/question_answer.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nThe TensorFlow Lite Model Maker library simplifies the process of adapting and converting a TensorFlow model to particular input data when deploying this model for on-device ML applications.\nThis notebook shows an end-to-end example that utilizes the Model Maker library to illustrate the adaptation and conversion of a commonly-used question answer model for question answer task.\nIntroduction to BERT Question Answer Task\nThe supported task in this library is extractive question answer task, which means given a passage and a question, the answer is the span in the passage. The image below shows an example for question answer.\n<p align=\"center\"><img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_squad_showcase.png\" width=\"500\"></p>\n\n<p align=\"center\">\n <em>Answers are spans in the passage (image credit: <a href=\"https://rajpurkar.github.io/mlx/qa-and-squad/\">SQuAD blog</a>) </em>\n</p>\n\nAs for the model of question answer task, the inputs should be the passage and question pair that are already preprocessed, the outputs should be the start logits and end logits for each token in the passage.\nThe size of input could be set and adjusted according to the length of passage and question.\nEnd-to-End Overview\nThe following code snippet demonstrates how to get the model within a few lines of code. The overall process includes 5 steps: (1) choose a model, (2) load data, (3) retrain the model, (4) evaluate, and (5) export it to TensorFlow Lite format.\n```python\nChooses a model specification that represents the model.\nspec = model_spec.get('mobilebert_qa')\nGets the training data and validation data.\ntrain_data = DataLoader.from_squad(train_data_path, spec, is_training=True)\nvalidation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)\nFine-tunes the model.\nmodel = question_answer.create(train_data, model_spec=spec)\nGets the evaluation result.\nmetric = model.evaluate(validation_data)\nExports the model to the TensorFlow Lite format with metadata in the export directory.\nmodel.export(export_dir)\n```\nThe following sections explain the code in more detail.\nPrerequisites\nTo run this example, install the required packages, including the Model Maker package from the GitHub repo.",
"!sudo apt -y install libportaudio2\n!pip install -q tflite-model-maker-nightly",
"Import the required packages.",
"import numpy as np\nimport os\n\nimport tensorflow as tf\nassert tf.__version__.startswith('2')\n\nfrom tflite_model_maker import model_spec\nfrom tflite_model_maker import question_answer\nfrom tflite_model_maker.config import ExportFormat\nfrom tflite_model_maker.question_answer import DataLoader",
"The \"End-to-End Overview\" demonstrates a simple end-to-end example. The following sections walk through the example step by step to show more detail.\nChoose a model_spec that represents a model for question answer\nEach model_spec object represents a specific model for question answer. The Model Maker currently supports MobileBERT and BERT-Base models.\nSupported Model | Name of model_spec | Model Description\n--- | --- | ---\nMobileBERT | 'mobilebert_qa' | 4.3x smaller and 5.5x faster than BERT-Base while achieving competitive results, suitable for on-device scenario.\nMobileBERT-SQuAD | 'mobilebert_qa_squad' | Same model architecture as MobileBERT model and the initial model is already retrained on SQuAD1.1.\nBERT-Base | 'bert_qa' | Standard BERT model that widely used in NLP tasks.\nIn this tutorial, MobileBERT-SQuAD is used as an example. Since the model is already retrained on SQuAD1.1, it could coverage faster for question answer task.",
"spec = model_spec.get('mobilebert_qa_squad')",
"Load Input Data Specific to an On-device ML App and Preprocess the Data\nThe TriviaQA is a reading comprehension dataset containing over 650K question-answer-evidence triples. In this tutorial, you will use a subset of this dataset to learn how to use the Model Maker library.\nTo load the data, convert the TriviaQA dataset to the SQuAD1.1 format by running the converter Python script with --sample_size=8000 and a set of web data. Modify the conversion code a little bit by:\n* Skipping the samples that couldn't find any answer in the context document;\n* Getting the original answer in the context without uppercase or lowercase.\nDownload the archived version of the already converted dataset.",
"train_data_path = tf.keras.utils.get_file(\n fname='triviaqa-web-train-8000.json',\n origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-web-train-8000.json')\nvalidation_data_path = tf.keras.utils.get_file(\n fname='triviaqa-verified-web-dev.json',\n origin='https://storage.googleapis.com/download.tensorflow.org/models/tflite/dataset/triviaqa-verified-web-dev.json')",
"You can also train the MobileBERT model with your own dataset. If you are running this notebook on Colab, upload your data by using the left sidebar.\n<img src=\"https://storage.googleapis.com/download.tensorflow.org/models/tflite/screenshots/model_maker_question_answer.png\" alt=\"Upload File\" width=\"800\" hspace=\"100\">\nIf you prefer not to upload your data to the cloud, you can also run the library offline by following the guide.\nUse the DataLoader.from_squad method to load and preprocess the SQuAD format data according to a specific model_spec. You can use either SQuAD2.0 or SQuAD1.1 formats. Setting parameter version_2_with_negative as True means the formats is SQuAD2.0. Otherwise, the format is SQuAD1.1. By default, version_2_with_negative is False.",
"train_data = DataLoader.from_squad(train_data_path, spec, is_training=True)\nvalidation_data = DataLoader.from_squad(validation_data_path, spec, is_training=False)",
"Customize the TensorFlow Model\nCreate a custom question answer model based on the loaded data. The create function comprises the following steps:\n\nCreates the model for question answer according to model_spec.\nTrain the question answer model. The default epochs and the default batch size are set according to two variables default_training_epochs and default_batch_size in the model_spec object.",
"model = question_answer.create(train_data, model_spec=spec)",
"Have a look at the detailed model structure.",
"model.summary()",
"Evaluate the Customized Model\nEvaluate the model on the validation data and get a dict of metrics including f1 score and exact match etc. Note that metrics are different for SQuAD1.1 and SQuAD2.0.",
"model.evaluate(validation_data)",
"Export to TensorFlow Lite Model\nConvert the trained model to TensorFlow Lite model format with metadata so that you can later use in an on-device ML application. The vocab file are embedded in metadata. The default TFLite filename is model.tflite.\nIn many on-device ML application, the model size is an important factor. Therefore, it is recommended that you apply quantize the model to make it smaller and potentially run faster.\nThe default post-training quantization technique is dynamic range quantization for the BERT and MobileBERT models.",
"model.export(export_dir='.')",
"You can use the TensorFlow Lite model file in the bert_qa reference app using BertQuestionAnswerer API in TensorFlow Lite Task Library by downloading it from the left sidebar on Colab.\nThe allowed export formats can be one or a list of the following:\n\nExportFormat.TFLITE\nExportFormat.VOCAB\nExportFormat.SAVED_MODEL\n\nBy default, it just exports TensorFlow Lite model with metadata. You can also selectively export different files. For instance, exporting only the vocab file as follows:",
"model.export(export_dir='.', export_format=ExportFormat.VOCAB)",
"You can also evaluate the tflite model with the evaluate_tflite method. This step is expected to take a long time.",
"model.evaluate_tflite('model.tflite', validation_data)",
"Advanced Usage\nThe create function is the critical part of this library in which the model_spec parameter defines the model specification. The BertQASpec class is currently supported. There are 2 models: MobileBERT model, BERT-Base model. The create function comprises the following steps:\n\nCreates the model for question answer according to model_spec.\nTrain the question answer model.\n\nThis section describes several advanced topics, including adjusting the model, tuning the training hyperparameters etc.\nAdjust the model\nYou can adjust the model infrastructure like parameters seq_len and query_len in the BertQASpec class.\nAdjustable parameters for model:\n\nseq_len: Length of the passage to feed into the model.\nquery_len: Length of the question to feed into the model.\ndoc_stride: The stride when doing a sliding window approach to take chunks of the documents.\ninitializer_range: The stdev of the truncated_normal_initializer for initializing all weight matrices.\ntrainable: Boolean, whether pre-trained layer is trainable.\n\nAdjustable parameters for training pipeline:\n\nmodel_dir: The location of the model checkpoint files. If not set, temporary directory will be used.\ndropout_rate: The rate for dropout.\nlearning_rate: The initial learning rate for Adam.\npredict_batch_size: Batch size for prediction.\ntpu: TPU address to connect to. Only used if using tpu.\n\nFor example, you can train the model with a longer sequence length. If you change the model, you must first construct a new model_spec.",
"new_spec = model_spec.get('mobilebert_qa')\nnew_spec.seq_len = 512",
"The remaining steps are the same. Note that you must rerun both the dataloader and create parts as different model specs may have different preprocessing steps.\nTune training hyperparameters\nYou can also tune the training hyperparameters like epochs and batch_size to impact the model performance. For instance,\n\nepochs: more epochs could achieve better performance, but may lead to overfitting.\nbatch_size: number of samples to use in one training step.\n\nFor example, you can train with more epochs and with a bigger batch size like:\npython\nmodel = question_answer.create(train_data, model_spec=spec, epochs=5, batch_size=64)\nChange the Model Architecture\nYou can change the base model your data trains on by changing the model_spec. For example, to change to the BERT-Base model, run:\npython\nspec = model_spec.get('bert_qa')\nThe remaining steps are the same.\nCustomize Post-training quantization on the TensorFlow Lite model\nPost-training quantization is a conversion technique that can reduce model size and inference latency, while also improving CPU and hardware accelerator inference speed, with a little degradation in model accuracy. Thus, it's widely used to optimize the model.\nModel Maker library applies a default post-training quantization techique when exporting the model. If you want to customize post-training quantization, Model Maker supports multiple post-training quantization options using QuantizationConfig as well. Let's take float16 quantization as an instance. First, define the quantization config.\npython\nconfig = QuantizationConfig.for_float16()\nThen we export the TensorFlow Lite model with such configuration.\npython\nmodel.export(export_dir='.', tflite_filename='model_fp16.tflite', quantization_config=config)\nRead more\nYou can read our BERT Question and Answer example to learn technical details. For more information, please refer to:\n\nTensorFlow Lite Model Maker guide and API reference.\nTask Library: BertQuestionAnswerer for deployment.\nThe end-to-end reference apps: Android and iOS."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |