| repo_name (string, 6-77 chars) | path (string, 8-215 chars) | license (string, 15 classes) | cells (list) | types (list) |
|---|---|---|---|---|
jhillairet/scikit-rf
|
doc/source/examples/metrology/Calibration With Three Receivers.ipynb
|
bsd-3-clause
|
[
"Calibration With Three Receivers\nIntro\nIt has long been known that full error correction is possible given a VNA with only three receivers and no internal coaxial switch. However, since no modern VNA employs such an architecture, the software required to make fully corrected measurements is not available on today's modern VNA's. \nRecently, the application of Frequency Extender units containing only three receivers has become more common. Thus, there is a need for full error correction capability on systems with three receivers and no internal coaxial switch. This document describes how to use scikit-rf to fully correct two-port measurements made on such a system.\nTheory\nA circuit model for a switch-less three receiver system is shown below.",
"from IPython.display import *\nSVG('three_receiver_cal/pics/vnaBlockDiagramForwardRotated.svg')",
"To fully correct an arbitrary two-port, the device must be measured in two orientations, call these forward and reverse. Because there is no switch present, this requires the operator to physically flip the device, and save the pair of measurements. In on-wafer scenarios, one could fabricate two identical devices, one in each orientation. In either case, a pair of measurements are required for each DUT before correction can occur. \nWhile in reality the device is being flipped, one can imaging that the device is static, and the entire VNA circuitry is flipped. This interpretation lends itself to implementation, as the existing 12-term correction can be re-used by simply copying the forward error coefficients into the corresponding reverse error coefficients. This is what scikit-rf does internally.\nWorked Example\nThis example demonstrates how to create a TwoPortOnePath and EnhancedResponse calibration from measurements taken on a Agilent PNAX with a set of VDI WR-12 TXRX-RX Frequency Extender heads. Comparisons between the two algorithms are made by correcting an asymmetric DUT.\nRead in the Data\nThe measurements of the calibration standards and DUT's were downloaded from the VNA by saving touchstone files of the raw s-parameter data to disk.",
"ls three_receiver_cal/data/",
"These files can be read by scikit-rf into Networks with the following.",
"import skrf as rf \n%matplotlib inline\nfrom pylab import * \nrf.stylely()\n\n\nraw = rf.read_all_networks('three_receiver_cal/data/')\n# list the raw measurements \nraw.keys()",
"Each Network can be accessed through the dictionary raw.",
"thru = raw['thru']\nthru",
"If we look at the raw measurement of the flush thru, it can be seen that only $S_{11}$ and $S_{21}$ contain meaningful data. The other s-parameters are noise.",
"thru.plot_s_db()",
"Create Calibration\nIn the code that follows a TwoPortOnePath calibration is created from corresponding measured and ideal responses of the calibration standards. The measured networks are read from disk, while their corresponding ideal responses are generated using scikit-rf. More information about using scikit-rf to do offline calibrations can be found here.",
"from skrf.calibration import TwoPortOnePath\nfrom skrf.media import RectangularWaveguide\nfrom skrf import two_port_reflect as tpr\nfrom skrf import mil\n\n# pull frequency information from measurements\nfrequency = raw['short'].frequency\n\n# the media object \nwg = RectangularWaveguide(frequency=frequency, a=120*mil, z0=50)\n\n# list of 'ideal' responses of the calibration standards\nideals = [wg.short(nports=2),\n tpr(wg.delay_short( 90,'deg'), wg.match()),\n wg.match(nports=2),\n wg.thru()]\n\n# corresponding measurements to the 'ideals'\nmeasured = [raw['short'],\n raw['quarter wave delay short'],\n raw['load'],\n raw['thru']]\n\n# the Calibration object\ncal = TwoPortOnePath(measured = measured, ideals = ideals )",
"Apply Correction\nThere are two types of correction possible with a 3-receiver system. \n\nFull (TwoPortOnePath)\nPartial (EnhancedResponse)\n\nscikit-rf uses the same Calibration object for both, but employs different correction algorithms depending on the type of the DUT. The DUT used in this example is a WR-15 shim cascaded with a WR-12 1\" straight waveguide, as shown in the picture below. Measurements of this DUT are corrected with both full and partial correction and the results are compared below.",
"Image('three_receiver_cal/pics/asymmetic DUT.jpg', width='75%')",
"Full Correction ( TwoPortOnePath)\nFull correction is achieved by measuring each device in both orientations, forward and reverse. To be clear, this means that the DUT must be physically removed, flipped, and re-inserted. The resulting pair of measurements are then passed to the apply_cal() function as a tuple. This returns a single corrected response. \nPartial Correction (Enhanced Response)\nIf you pass a single measurement to the apply_cal() function, then the calibration will employ partial correction. This type of correction is known as EnhancedResponse. Depending on the measurement application, this type of correction may be good enough, and perhaps the only choice.\nComparison\nBelow are direct comparisons of the DUT shown above corrected with full and partial algorithms. It shows that the partial calibration produces a large ripple on the reflect measurements, and slightly larger ripple on the transmissive measurements.",
"from pylab import * \n\nsimulation = raw['simulation']\n\ndutf = raw['wr15 shim and swg (forward)']\ndutr = raw['wr15 shim and swg (reverse)']\n\ncorrected_full = cal.apply_cal((dutf, dutr)) \ncorrected_partial = cal.apply_cal(dutf) \n\n\n\n# plot results\n\nf, ax = subplots(1,2, figsize=(8,4))\n\nax[0].set_title ('$S_{11}$')\nax[1].set_title ('$S_{21}$')\n\ncorrected_partial.plot_s_db(0,0, label='Partial Correction',ax=ax[0])\ncorrected_partial.plot_s_db(1,0, label='Partial Correction',ax=ax[1])\n\ncorrected_full.plot_s_db(0,0, label='Full Correction', ax = ax[0])\ncorrected_full.plot_s_db(1,0, label='Full Correction', ax = ax[1])\n\nsimulation.plot_s_db(0,0,label='Simulation', ax=ax[0], color='k')\nsimulation.plot_s_db(1,0,label='Simulation', ax=ax[1], color='k')\n\ntight_layout()",
"What if my DUT is Symmetric??\nIf the DUT is known to be reciprocal ( $S_{21}=S_{12}$ ) and symmetric ( $S_{11}=S_{22}$ ), then its response should be the identical for both forward and reverse orientations. In this case, measuring the device twice is unnecessary, and can be circumvented. This is explored in the example: TwoPortOnePath, EnhancedResponse, and FakeFlip"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DominikDitoIvosevic/Uni
|
STRUCE/2018/.ipynb_checkpoints/SU-2018-LAB01-Regresija-checkpoint.ipynb
|
mit
|
[
"Sveučilište u Zagrebu\nFakultet elektrotehnike i računarstva \nStrojno učenje 2018/2019\nhttp://www.fer.unizg.hr/predmet/su\n\nLaboratorijska vježba 1: Regresija\nVerzija: 1.1\nZadnji put ažurirano: 12. listopada 2018.\n(c) 2015-2018 Jan Šnajder, Domagoj Alagić, Mladen Karan \nObjavljeno: 12. listopada 2018.\nRok za predaju: 22. listopada 2018. u 07:00h\n\nUpute\nPrva laboratorijska vježba sastoji se od osam zadataka. U nastavku slijedite upute navedene u ćelijama s tekstom. Rješavanje vježbe svodi se na dopunjavanje ove bilježnice: umetanja ćelije ili više njih ispod teksta zadatka, pisanja odgovarajućeg kôda te evaluiranja ćelija. \nOsigurajte da u potpunosti razumijete kôd koji ste napisali. Kod predaje vježbe, morate biti u stanju na zahtjev asistenta (ili demonstratora) preinačiti i ponovno evaluirati Vaš kôd. Nadalje, morate razumjeti teorijske osnove onoga što radite, u okvirima onoga što smo obradili na predavanju. Ispod nekih zadataka možete naći i pitanja koja služe kao smjernice za bolje razumijevanje gradiva (nemojte pisati odgovore na pitanja u bilježnicu). Stoga se nemojte ograničiti samo na to da riješite zadatak, nego slobodno eksperimentirajte. To upravo i jest svrha ovih vježbi.\nVježbe trebate raditi samostalno. Možete se konzultirati s drugima o načelnom načinu rješavanja, ali u konačnici morate sami odraditi vježbu. U protivnome vježba nema smisla.",
"# Učitaj osnovne biblioteke...\nimport numpy as np\nimport sklearn\nimport matplotlib.pyplot as plt\nimport scipy as sp\n%pylab inline",
"Zadatci\n1. Jednostavna regresija\nZadan je skup primjera $\\mathcal{D}={(x^{(i)},y^{(i)})}_{i=1}^4 = {(0,4),(1,1),(2,2),(4,5)}$. Primjere predstavite matrixom $\\mathbf{X}$ dimenzija $N\\times n$ (u ovom slučaju $4\\times 1$) i vektorom oznaka $\\textbf{y}$, dimenzija $N\\times 1$ (u ovom slučaju $4\\times 1$), na sljedeći način:",
"X = np.array([[0],[1],[2],[4]])\ny = np.array([4,1,2,5])",
"(a)\nProučite funkciju PolynomialFeatures iz biblioteke sklearn i upotrijebite je za generiranje matrice dizajna $\\mathbf{\\Phi}$ koja ne koristi preslikavanje u prostor više dimenzije (samo će svakom primjeru biti dodane dummy jedinice; $m=n+1$).",
"from sklearn.preprocessing import PolynomialFeatures\nPhi = PolynomialFeatures(1, False, True).fit_transform(X)\nprint(Phi)",
"(b)\nUpoznajte se s modulom linalg. Izračunajte težine $\\mathbf{w}$ modela linearne regresije kao $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Zatim se uvjerite da isti rezultat možete dobiti izračunom pseudoinverza $\\mathbf{\\Phi}^+$ matrice dizajna, tj. $\\mathbf{w}=\\mathbf{\\Phi}^+\\mathbf{y}$, korištenjem funkcije pinv.",
"from numpy import linalg\n\nw = np.dot(np.dot(np.linalg.inv(np.dot(np.transpose(Phi), Phi)), np.transpose(Phi)), y)\nprint(w)\n\nw2 = np.dot(np.linalg.pinv(Phi), y)\nprint(w2)",
"Radi jasnoće, u nastavku je vektor $\\mathbf{x}$ s dodanom dummy jedinicom $x_0=1$ označen kao $\\tilde{\\mathbf{x}}$.\n(c)\nPrikažite primjere iz $\\mathcal{D}$ i funkciju $h(\\tilde{\\mathbf{x}})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$. Izračunajte pogrešku učenja prema izrazu $E(h|\\mathcal{D})=\\frac{1}{2}\\sum_{i=1}^N(\\tilde{\\mathbf{y}}^{(i)} - h(\\tilde{\\mathbf{x}}))^2$. Možete koristiti funkciju srednje kvadratne pogreške mean_squared_error iz modula sklearn.metrics.\nQ: Gore definirana funkcija pogreške $E(h|\\mathcal{D})$ i funkcija srednje kvadratne pogreške nisu posve identične. U čemu je razlika? Koja je \"realnija\"?",
"from sklearn.metrics import mean_squared_error\n\nh = np.dot(Phi, w)\nprint (h)\n\nerror = mean_squared_error(y, h)\nprint (error)\n\nplt.plot(X, y, '+', X, h, linewidth = 1)\nplt.axis([-3, 6, -1, 7])",
"(d)\nUvjerite se da za primjere iz $\\mathcal{D}$ težine $\\mathbf{w}$ ne možemo naći rješavanjem sustava $\\mathbf{w}=\\mathbf{\\Phi}^{-1}\\mathbf{y}$, već da nam doista treba pseudoinverz.\nQ: Zašto je to slučaj? Bi li se problem mogao riješiti preslikavanjem primjera u višu dimenziju? Ako da, bi li to uvijek funkcioniralo, neovisno o skupu primjera $\\mathcal{D}$? Pokažite na primjeru.",
"try:\n np.dot(np.linalg.inv(Phi), y)\nexcept LinAlgError as err:\n print(err)",
"(e)\nProučite klasu LinearRegression iz modula sklearn.linear_model. Uvjerite se da su težine koje izračunava ta funkcija (dostupne pomoću atributa coef_ i intercept_) jednake onima koje ste izračunali gore. Izračunajte predikcije modela (metoda predict) i uvjerite se da je pogreška učenja identična onoj koju ste ranije izračunali.",
"from sklearn.linear_model import LinearRegression\n\nlr = LinearRegression().fit(Phi, y)\n\nw2 = [lr.intercept_, lr.coef_[1]]\nh2 = lr.predict(Phi)\nerror2 = mean_squared_error(y, h)\n\nprint ('staro: ')\nprint (w)\nprint (h)\nprint (error)\n\nprint('novo: ')\nprint (w2)\nprint (h2)\nprint (error2)",
"2. Polinomijalna regresija i utjecaj šuma\n(a)\nRazmotrimo sada regresiju na većem broju primjera. Koristite funkciju make_labels(X, f, noise=0) koja uzima matricu neoznačenih primjera $\\mathbf{X}{N\\times n}$ te generira vektor njihovih oznaka $\\mathbf{y}{N\\times 1}$. Oznake se generiraju kao $y^{(i)} = f(x^{(i)})+\\mathcal{N}(0,\\sigma^2)$, gdje je $f:\\mathbb{R}^n\\to\\mathbb{R}$ stvarna funkcija koja je generirala podatke (koja nam je u stvarnosti nepoznata), a $\\sigma$ je standardna devijacija Gaussovog šuma, definirana parametrom noise. Za generiranje šuma koristi se funkcija numpy.random.normal. \nGenerirajte skup za učenje od $N=50$ primjera uniformno distribuiranih u intervalu $[-5,5]$ pomoću funkcije $f(x) = 5 + x -2 x^2 -5 x^3$ uz šum $\\sigma=200$:",
"from numpy.random import normal\ndef make_labels(X, f, noise=0) :\n return map(lambda x : f(x) + (normal(0,noise) if noise>0 else 0), X)\n\ndef make_instances(x1, x2, N) :\n return sp.array([np.array([x]) for x in np.linspace(x1,x2,N)])\n\nN = 50\nsigma = 200\nfun = lambda x :5 + x - 2*x**2 - 5*x**3\nx = make_instances(-5, 5, N)\ny = list(make_labels(x, fun, sigma))\n\ny6a = y\nx6a = x",
"Prikažite taj skup funkcijom scatter.",
"plt.figure(figsize=(10, 5))\nplt.plot(x, fun(x), 'r', linewidth = 1)\nplt.scatter(x, y)\n",
"(b)\nTrenirajte model polinomijalne regresije stupnja $d=3$. Na istom grafikonu prikažite naučeni model $h(\\mathbf{x})=\\mathbf{w}^\\intercal\\tilde{\\mathbf{x}}$ i primjere za učenje. Izračunajte pogrešku učenja modela.",
"from sklearn.preprocessing import PolynomialFeatures\n\nPhi = PolynomialFeatures(3).fit_transform(x.reshape(-1, 1))\nw = np.dot(np.linalg.pinv(Phi), y)\nh = np.dot(Phi, w)\nerror = mean_squared_error(y, h)\nprint(error)\n\nplt.figure(figsize=(10,5))\nplt.scatter(x, y)\nplt.plot(x, h, 'r', linewidth=1)",
"3. Odabir modela\n(a)\nNa skupu podataka iz zadatka 2 trenirajte pet modela linearne regresije $\\mathcal{H}_d$ različite složenosti, gdje je $d$ stupanj polinoma, $d\\in{1,3,5,10,20}$. Prikažite na istome grafikonu skup za učenje i funkcije $h_d(\\mathbf{x})$ za svih pet modela (preporučujemo koristiti plot unutar for petlje). Izračunajte pogrešku učenja svakog od modela.\nQ: Koji model ima najmanju pogrešku učenja i zašto?",
"Phi_d = []; \nw_d = []; \nh_d = [];\nerr_d = [];\n\nd = [1, 3, 5, 10, 20]\n\nfor i in d:\n Phi_d.append(PolynomialFeatures(i).fit_transform(x.reshape(-1,1)))\n \nfor i in range(0, len(d)):\n w_d.insert(i, np.dot(np.linalg.pinv(Phi_d[i]), y))\n h_d.insert(i, np.dot(Phi_d[i], w_d[i]))\n\nfor i in range(0, len(d)):\n err_d.insert(i, mean_squared_error(y, h_d[i]))\n print (str(d[i]) + ': ' + str(err_d[i]))\n\n\nfig = plt.figure(figsize=(15, 20))\nfig.subplots_adjust(wspace=0.2) \n\nfor i in range(0, len(d)):\n ax = fig.add_subplot(5, 2, i+1)\n ax.scatter(x, y);\n ax.plot(x, h_d[i], 'r', linewidth = 1)\n ",
"(b)\nRazdvojite skup primjera iz zadatka 2 pomoću funkcije cross_validation.train_test_split na skup za učenja i skup za ispitivanje u omjeru 1:1. Prikažite na jednom grafikonu pogrešku učenja i ispitnu pogrešku za modele polinomijalne regresije $\\mathcal{H}_d$, sa stupnjem polinoma $d$ u rasponu $d\\in [1,2,\\ldots,20]$. Radi preciznosti, funkcije $h(\\mathbf{x})$ iscrtajte na cijelom skupu primjera (ali pogrešku generalizacije računajte, naravno, samo na ispitnome skupu). Budući da kvadratna pogreška brzo raste za veće stupnjeve polinoma, umjesto da iscrtate izravno iznose pogrešaka, iscrtajte njihove logaritme.\nNB: Podjela na skupa za učenje i skup za ispitivanje mora za svih pet modela biti identična.\nQ: Je li rezultat u skladu s očekivanjima? Koji biste model odabrali i zašto?\nQ: Pokrenite iscrtavanje više puta. U čemu je problem? Bi li problem bio jednako izražen kad bismo imali više primjera? Zašto?",
"from sklearn.model_selection import train_test_split\n\nX_train, X_test, y_train, y_test = train_test_split(x, y, test_size = 0.5)\nerr_train = [];\nerr_test = [];\nd = range(0, 20)\n\nfor i in d:\n Phi_train = PolynomialFeatures(i).fit_transform(X_train.reshape(-1, 1))\n Phi_test = PolynomialFeatures(i).fit_transform(X_test.reshape(-1, 1))\n w_train = np.dot(np.linalg.pinv(Phi_train), y_train)\n h_train = np.dot(Phi_train, w_train)\n h_test = np.dot(Phi_test, w_train)\n \n err_train.insert(i, np.log(mean_squared_error(y_train, h_train)))\n err_test.insert(i, np.log(mean_squared_error(y_test, h_test)))\n\nplt.figure(figsize=(10,5))\nplt.plot(d, err_train, d, err_test)\nplt.grid()",
"(c)\nTočnost modela ovisi o (1) njegovoj složenosti (stupanj $d$ polinoma), (2) broju primjera $N$, i (3) količini šuma. Kako biste to analizirali, nacrtajte grafikone pogrešaka kao u 3b, ali za sve kombinacija broja primjera $N\\in{100,200,1000}$ i količine šuma $\\sigma\\in{100,200,500}$ (ukupno 9 grafikona). Upotrijebite funkciju subplots kako biste pregledno posložili grafikone u tablicu $3\\times 3$. Podatci se generiraju na isti način kao u zadatku 2.\nNB: Pobrinite se da svi grafikoni budu generirani nad usporedivim skupovima podataka, na sljedeći način. Generirajte najprije svih 1000 primjera, podijelite ih na skupove za učenje i skupove za ispitivanje (dva skupa od po 500 primjera). Zatim i od skupa za učenje i od skupa za ispitivanje načinite tri različite verzije, svaka s drugačijom količinom šuma (ukupno 2x3=6 verzija podataka). Kako bi simulirali veličinu skupa podataka, od tih dobivenih 6 skupova podataka uzorkujte trećinu, dvije trećine i sve podatke. Time ste dobili 18 skupova podataka -- skup za učenje i za testiranje za svaki od devet grafova.\nQ: Jesu li rezultati očekivani? Obrazložite.",
"N2 = [100, 200, 1000];\nsigma = [100, 200, 500];\n\nX_train4c_temp = [];\nX_test4c_temp = [];\ny_train4c_temp = [];\ny_test4c_temp = [];\n\nx_tmp = np.linspace(-5, 5, 1000);\nX_train, X_test = train_test_split(x_tmp, test_size = 0.5)\n\nfor i in range(0, 3):\n \n y_tmp_train = list(make_labels(X_train, fun, sigma[i]))\n y_tmp_test = list(make_labels(X_test, fun, sigma[i]))\n for j in range(0,3): \n X_train4c_temp.append(X_train[0:int(N2[j]/2)])\n X_test4c_temp.append(X_test[0:int(N2[j]/2)])\n y_train4c_temp.append(y_tmp_train[0:int(N2[j]/2)])\n y_test4c_temp.append(y_tmp_test[0:int(N2[j]/2)])\n\n \nerr_tr = [];\nerr_tst = [];\n\nfor i in range(0, 9):\n X_train4c = X_train4c_temp[i]\n X_test4c = X_test4c_temp[i]\n y_train4c = y_train4c_temp[i]\n y_test4c = y_test4c_temp[i]\n \n err_train4c = [];\n err_test4c = [];\n d4c = range(0, 20)\n\n for j in d4c:\n Phi_train4c = PolynomialFeatures(j).fit_transform(X_train4c.reshape(-1, 1))\n Phi_test4c = PolynomialFeatures(j).fit_transform(X_test4c.reshape(-1, 1))\n w_train4c = np.dot(np.linalg.pinv(Phi_train4c), y_train4c)\n h_train4c = np.dot(Phi_train4c, w_train4c)\n h_test4c = np.dot(Phi_test4c, w_train4c)\n err_train4c.insert(j, np.log(mean_squared_error(y_train4c, h_train4c)))\n err_test4c.insert(j, np.log(mean_squared_error(y_test4c, h_test4c)))\n \n err_tr.append(err_train4c);\n err_tst.append(err_test4c);\n \n \nfig = plt.figure(figsize=(15, 10))\nfig.subplots_adjust(wspace=0.2, hspace = 0.35) \n\nNn = [100, 200, 1000, 100, 200, 1000, 100, 200, 1000]\nsgm = [100, 100, 100, 200, 200, 200, 500, 500, 500]\n\nfor i in range(0, 9): \n ax = fig.add_subplot(3, 3, i+1)\n plt.plot(d, err_tr[i], d, err_tst[i]); grid;\n ax.grid();",
"4. Regularizirana regresija\n(a)\nU gornjim eksperimentima nismo koristili regularizaciju. Vratimo se najprije na primjer iz zadatka 1. Na primjerima iz tog zadatka izračunajte težine $\\mathbf{w}$ za polinomijalni regresijski model stupnja $d=3$ uz L2-regularizaciju (tzv. ridge regression), prema izrazu $\\mathbf{w}=(\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I})^{-1}\\mathbf{\\Phi}^\\intercal\\mathbf{y}$. Napravite izračun težina za regularizacijske faktore $\\lambda=0$, $\\lambda=1$ i $\\lambda=10$ te usporedite dobivene težine.\nQ: Kojih je dimenzija matrica koju treba invertirati?\nQ: Po čemu se razlikuju dobivene težine i je li ta razlika očekivana? Obrazložite.",
"lam = [0, 1, 10]\ny = np.array([4,1,2,5])\n\nPhi4a = PolynomialFeatures(3).fit_transform(X)\nw_L2 = [];\n\ndef w_reg(lam): \n t1 = np.dot(Phi4a.T, Phi4a) + np.dot(lam, np.eye(4))\n t2 = np.dot(np.linalg.inv(t1), Phi4a.T)\n return np.dot(t2, y)\n\nfor i in range(0, 3):\n w_L2.insert(i, w_reg(lam[i]))\n print (w_reg(lam[i]))\n",
"(b)\nProučite klasu Ridge iz modula sklearn.linear_model, koja implementira L2-regularizirani regresijski model. Parametar $\\alpha$ odgovara parametru $\\lambda$. Primijenite model na istim primjerima kao u prethodnom zadatku i ispišite težine $\\mathbf{w}$ (atributi coef_ i intercept_).\nQ: Jesu li težine identične onima iz zadatka 4a? Ako nisu, objasnite zašto je to tako i kako biste to popravili.",
"from sklearn.linear_model import Ridge\n\nfor i in lam:\n w = [];\n w_L22 = Ridge(alpha = i).fit(Phi4a, y)\n \n w.append(w_L22.intercept_)\n for i in range(0, len(w_L22.coef_[1:])):\n w.append(w_L22.coef_[i])\n \n print (w)",
"5. Regularizirana polinomijalna regresija\n(a)\nVratimo se na slučaj $N=50$ slučajno generiranih primjera iz zadatka 2. Trenirajte modele polinomijalne regresije $\\mathcal{H}_{\\lambda,d}$ za $\\lambda\\in{0,100}$ i $d\\in{2,10}$ (ukupno četiri modela). Skicirajte pripadne funkcije $h(\\mathbf{x})$ i primjere (na jednom grafikonu; preporučujemo koristiti plot unutar for petlje).\nQ: Jesu li rezultati očekivani? Obrazložite.",
"x5a = linspace(-5, 5, 50);\nf = (5 + x5a - 2*(x5a**2) - 5*(x5a**3));\ny5a = f + normal(0, 200, 50);\n\nlamd = [0, 100]\ndd = [2, 10]\nh5a = []\n\nfor i in lamd:\n for j in dd:\n Phi5a = PolynomialFeatures(j).fit_transform(x5a.reshape(-1,1))\n w_5a = np.dot(np.dot(np.linalg.inv(np.dot(Phi5a.T, Phi5a) + np.dot(i, np.eye(j+1))), Phi5a.T), y5a);\n h_5a = np.dot(Phi5a, w_5a)\n h5a.append(h_5a)\n \n\nlamdd = [0, 0, 100, 100]\nddd = [2, 10, 2, 10]\n\nfig = plt.figure(figsize=(15, 10))\nfig.subplots_adjust(wspace=0.2, hspace = 0.2) \n\nfor i in range(0, len(lamdd)): \n ax = fig.add_subplot(2, 2, i+1)\n plt.plot(x5a, h5a[i], 'r', linewidth = 2)\n plt.scatter(x5a, y5a);",
"(b)\nKao u zadataku 3b, razdvojite primjere na skup za učenje i skup za ispitivanje u omjeru 1:1. Prikažite krivulje logaritama pogreške učenja i ispitne pogreške u ovisnosti za model $\\mathcal{H}_{d=20,\\lambda}$, podešavajući faktor regularizacije $\\lambda$ u rasponu $\\lambda\\in{0,1,\\dots,50}$.\nQ: Kojoj strani na grafikonu odgovara područje prenaučenosti, a kojoj podnaučenosti? Zašto?\nQ: Koju biste vrijednosti za $\\lambda$ izabrali na temelju ovih grafikona i zašto?",
"X5a_train, X5a_test, y5a_train, y5a_test = train_test_split(x5a, y5a, test_size = 0.5)\nerr5a_train = [];\nerr5a_test = [];\nd = 20;\nlambda5a = range(0, 50)\n\nfor i in lambda5a:\n Phi5a_train = PolynomialFeatures(d).fit_transform(X5a_train.reshape(-1, 1))\n Phi5a_test = PolynomialFeatures(d).fit_transform(X5a_test.reshape(-1, 1))\n w5a_train = np.dot(np.dot(np.linalg.inv(np.dot(Phi5a_train.T, Phi5a_train) + np.dot(i, np.eye(d+1))), Phi5a_train.T), y5a_train);\n h5a_train = np.dot(Phi5a_train, w5a_train)\n h5a_test = np.dot(Phi5a_test, w5a_train)\n \n err5a_train.insert(i, np.log(mean_squared_error(y5a_train, h5a_train)))\n err5a_test.insert(i, np.log(mean_squared_error(y5a_test, h5a_test)))\n\nplt.figure(figsize=(8,4))\nplt.plot(lambda5a, err5a_train, lambda5a, err5a_test);\nplt.grid(), plt.xlabel('$\\lambda$'), plt.ylabel('err');\nplt.legend(['ucenje', 'ispitna'], loc='best');",
"6. L1-regularizacija i L2-regularizacija\nSvrha regularizacije jest potiskivanje težina modela $\\mathbf{w}$ prema nuli, kako bi model bio što jednostavniji. Složenost modela može se okarakterizirati normom pripadnog vektora težina $\\mathbf{w}$, i to tipično L2-normom ili L1-normom. Za jednom trenirani model možemo izračunati i broj ne-nul značajki, ili L0-normu, pomoću sljedeće funkcije:",
"def nonzeroes(coef, tol=1e-6): \n return len(coef) - len(coef[sp.isclose(0, coef, atol=tol)])",
"(a)\nZa ovaj zadatak upotrijebite skup za učenje i skup za testiranje iz zadatka 3b. Trenirajte modele L2-regularizirane polinomijalne regresije stupnja $d=20$, mijenjajući hiperparametar $\\lambda$ u rasponu ${1,2,\\dots,100}$. Za svaki od treniranih modela izračunajte L{0,1,2}-norme vektora težina $\\mathbf{w}$ te ih prikažite kao funkciju od $\\lambda$.\nQ: Objasnite oblik obiju krivulja. Hoće li krivulja za $\\|\\mathbf{w}\\|_2$ doseći nulu? Zašto? Je li to problem? Zašto?\nQ: Za $\\lambda=100$, koliki je postotak težina modela jednak nuli, odnosno koliko je model rijedak?",
"from sklearn.linear_model import Ridge\nfrom sklearn.linear_model import Lasso\n\nlambda6a = range(1,100)\nd6a = 20\nX6a_train, X6a_test, y6a_train, y6a_test = train_test_split(x6a, y6a, test_size = 0.5)\nPhi6a_train = PolynomialFeatures(d6a).fit_transform(X6a_train.reshape(-1,1))\n\nL0 = [];\nL1 = [];\nL2 = [];\n\nL1_norm = lambda w: sum(abs(w));\nL2_norm = lambda w: math.sqrt(np.dot(w.T, w));\n\nfor i in lambda6a:\n w6a = np.dot(np.dot(np.linalg.inv(np.dot(Phi6a_train.T, Phi6a_train) + np.dot(i, np.eye(d6a+1))), Phi6a_train.T), y6a_train);\n \n L0.append(nonzeroes(w6a))\n L1.append(L1_norm(w6a))\n L2.append(L2_norm(w6a))\n \nplot(lambda6a, L0, lambda6a, L1, lambda6a, L2, linewidth = 1)\ngrid()",
"(b)\nGlavna prednost L1-regularizirane regresije (ili LASSO regression) nad L2-regulariziranom regresijom jest u tome što L1-regularizirana regresija rezultira rijetkim modelima (engl. sparse models), odnosno modelima kod kojih su mnoge težine pritegnute na nulu. Pokažite da je to doista tako, ponovivši gornji eksperiment s L1-regulariziranom regresijom, implementiranom u klasi Lasso u modulu sklearn.linear_model.",
"\nL0 = [];\nL1 = [];\nL2 = [];\n\nfor i in lambda6a:\n lass = Lasso(alpha = i, tol = 0.115).fit(Phi6a_train, y6a_train)\n w6b = lass.coef_\n \n L0.append(nonzeroes(w6b))\n L1.append(L1_norm(w6b))\n L2.append(L2_norm(w6b))\n \nplot(lambda6a, L0, lambda6a, L1, lambda6a, L2, linewidth = 1)\nlegend(['L0', 'L1', 'L2'], loc = 'best')\ngrid()",
"7. Značajke različitih skala\nČesto se u praksi možemo susreti sa podatcima u kojima sve značajke nisu jednakih magnituda. Primjer jednog takvog skupa je regresijski skup podataka grades u kojem se predviđa prosjek ocjena studenta na studiju (1--5) na temelju dvije značajke: bodova na prijamnom ispitu (1--3000) i prosjeka ocjena u srednjoj školi. Prosjek ocjena na studiju izračunat je kao težinska suma ove dvije značajke uz dodani šum.\nKoristite sljedeći kôd kako biste generirali ovaj skup podataka.",
"n_data_points = 500\nnp.random.seed(69)\n\n# Generiraj podatke o bodovima na prijamnom ispitu koristeći normalnu razdiobu i ograniči ih na interval [1, 3000].\nexam_score = np.random.normal(loc=1500.0, scale = 500.0, size = n_data_points) \nexam_score = np.round(exam_score)\nexam_score[exam_score > 3000] = 3000\nexam_score[exam_score < 0] = 0\n\n# Generiraj podatke o ocjenama iz srednje škole koristeći normalnu razdiobu i ograniči ih na interval [1, 5].\ngrade_in_highschool = np.random.normal(loc=3, scale = 2.0, size = n_data_points)\ngrade_in_highschool[grade_in_highschool > 5] = 5\ngrade_in_highschool[grade_in_highschool < 1] = 1\n\n# Matrica dizajna.\ngrades_X = np.array([exam_score,grade_in_highschool]).T\n\n# Završno, generiraj izlazne vrijednosti.\nrand_noise = np.random.normal(loc=0.0, scale = 0.5, size = n_data_points)\nexam_influence = 0.9\ngrades_y = ((exam_score / 3000.0) * (exam_influence) + (grade_in_highschool / 5.0) \\\n * (1.0 - exam_influence)) * 5.0 + rand_noise\ngrades_y[grades_y < 1] = 1\ngrades_y[grades_y > 5] = 5",
"a)\nIscrtajte ovisnost ciljne vrijednosti (y-os) o prvoj i o drugoj značajki (x-os). Iscrtajte dva odvojena grafa.",
"plt.figure()\nplot(exam_score, grades_y, 'r+')\ngrid()\n\nplt.figure()\nplot(grade_in_highschool, grades_y, 'g+')\ngrid()",
"b)\nNaučite model L2-regularizirane regresije ($\\lambda = 0.01$), na podacima grades_X i grades_y:",
"\n w = Ridge(alpha = 0.01).fit(grades_X, grades_y).coef_\n print(w)",
"Sada ponovite gornji eksperiment, ali prvo skalirajte podatke grades_X i grades_y i spremite ih u varijable grades_X_fixed i grades_y_fixed. Za tu svrhu, koristite StandardScaler.",
"from sklearn.preprocessing import StandardScaler\n\n#grades_y.reshape(-1, 1)\n\nscaler = StandardScaler()\nscaler.fit(grades_X)\ngrades_X_fixed = scaler.transform(grades_X)\n\nscaler2 = StandardScaler()\nscaler2.fit(grades_y.reshape(-1, 1))\ngrades_y_fixed = scaler2.transform(grades_y.reshape(-1, 1))",
"Q: Gledajući grafikone iz podzadatka (a), koja značajka bi trebala imati veću magnitudu, odnosno važnost pri predikciji prosjeka na studiju? Odgovaraju li težine Vašoj intuiciji? Objasnite. \n8. Multikolinearnost i kondicija matrice\na)\nIzradite skup podataka grades_X_fixed_colinear tako što ćete u skupu grades_X_fixed iz\nzadatka 7b duplicirati zadnji stupac (ocjenu iz srednje škole). Time smo efektivno uveli savršenu multikolinearnost.",
"grades_X_fixed_colinear = [[g[0],g[1],g[1]] for g in grades_X_fixed]\n\n",
"Ponovno, naučite na ovom skupu L2-regularizirani model regresije ($\\lambda = 0.01$).",
"\nw = Ridge(alpha = 0.01).fit(grades_X_fixed_colinear, grades_y_fixed).coef_\nprint(w)",
"Q: Usporedite iznose težina s onima koje ste dobili u zadatku 7b. Što se dogodilo?\nb)\nSlučajno uzorkujte 50% elemenata iz skupa grades_X_fixed_colinear i naučite dva modela L2-regularizirane regresije, jedan s $\\lambda=0.01$, a jedan s $\\lambda=1000$. Ponovite ovaj pokus 10 puta (svaki put s drugim podskupom od 50% elemenata). Za svaki model, ispišite dobiveni vektor težina u svih 10 ponavljanja te ispišite standardnu devijaciju vrijednosti svake od težina (ukupno šest standardnih devijacija, svaka dobivena nad 10 vrijednosti).",
"w_001s = []\nw_1000s = []\n\nfor i in range(10):\n X_001, X_1000, y_001, y_1000 = train_test_split(grades_X_fixed_colinear, grades_y_fixed, test_size = 0.5)\n w_001 = Ridge(alpha = 0.01).fit(X_001, y_001).coef_\n w_1000 = Ridge(alpha = 0.01).fit(X_1000, y_1000).coef_\n \n w_001s.append(w_001[0])\n w_1000s.append(w_1000[0])\n #print(w_001)\n #print(w_1000)\n #print()\n \nfrom sklearn.preprocessing import StandardScaler\nscaler = StandardScaler()\nscaler.fit(w_001s)\nprint(scaler.mean_)\n\nscaler = StandardScaler()\nscaler.fit(w_1000s)\nprint(scaler.mean_)",
"Q: Kako regularizacija utječe na stabilnost težina?\nQ: Jesu li koeficijenti jednakih magnituda kao u prethodnom pokusu? Objasnite zašto.\nc)\nKoristeći numpy.linalg.cond izračunajte kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$, gdje je $\\mathbf{\\Phi}$ matrica dizajna (grades_fixed_X_colinear). Ponovite i za $\\lambda=0.01$ i za $\\lambda=10$.",
"lam = 0.01\n\nphi = grades_X_fixed_colinear\ns = np.add(np.dot(np.transpose(phi), phi), lam * np.identity(len(a)))\nprint(np.linalg.cond(s))\n\nlam = 10\n\nphi = grades_X_fixed_colinear\ns = np.add(np.dot(np.transpose(phi), phi), lam * np.identity(len(a)))\nprint(np.linalg.cond(s))",
"Q: Kako regularizacija utječe na kondicijski broj matrice $\\mathbf{\\Phi}^\\intercal\\mathbf{\\Phi}+\\lambda\\mathbf{I}$?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ajfriend/cyscs
|
ipynb/dask-play.ipynb
|
mit
|
[
"Example: $\\hat{A}x=\\hat{b}$\n\nwant to solve $\\hat{A}x=\\hat{b}$\n$\\hat{A}$ is so big that we can't work on it on a single machine\nsplit $\\hat{A}$ into groups of rows\n$$\n\\hat{A}x=\\hat{b}\n$$\n$$\n\\begin{bmatrix}\n-\\ A_1 - \\\n-\\ A_2 - \\\n-\\ A_3 - \n\\end{bmatrix}x =\n\\begin{bmatrix}\nb_1 \\\nb_2 \\\nb_3\n\\end{bmatrix}\n$$\n$\\hat{A}x=\\hat{b}$ equivalent to set intersection problem\n$$\nx \\in \\lbrace z \\mid A_1z = b_1 \\rbrace \\cap \\lbrace z \\mid A_2z = b_2 \\rbrace \\cap \\lbrace z \\mid A_3z = b_3 \\rbrace\n$$\ni.e., find $x$ in the intersection of subspaces\non each machine, easy to project onto its subspace:\n$$\n\\begin{array}{ll}\n\\mbox{minimize} & \\|x - x_0 \\|_2^2 \\\n\\mbox{subject to} & A_i x = b_i\n\\end{array}\n$$\n\"easy\" because it's just linear algebra; involves a matrix factorization that can be reused at each iteration\nlet $\\mbox{proj}_i(x_0)$ be the projection of $x_0$ onto the subspace\n$$\\lbrace z \\mid A_iz = b_i \\rbrace$$\n\nProjection\nprojection of $x_0$ onto $\\lbrace x \\mid Ax = b\\rbrace$ given by\n$$\n\\mbox{proj}(x_0) = \\left(I - A^T (AA^T)^{-1}A\\right)x_0 + A^T (AA^T)^{-1}b = \\left(I - A^T (AA^T)^{-1}A\\right)x_0 + c\n$$\nOne-time computation\n\ncompute $AA^T$ and form its Cholesky factorization once to reuse at each iteration\ncompute $c = A^T (AA^T)^{-1}b$ to reuse\n\nIteration\n\n$z_i^{k+1} = \\mbox{proj}_i(x^k)$\n$\\bar{x}^{k+1} = \\frac{1}{N} \\sum_{i=1}^N z_i^{k+1}$\nthe projection operation involves using the Cholesky factorization of $AA^T$ to compute $A^T (AA^T)^{-1}Ax^k$",
"import numpy as np\nfrom scipy.linalg import cho_factor, cho_solve\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef factor(A,b):\n \"\"\"Return cholesky factorization data to project onto Ax=b.\"\"\"\n AAt = A.dot(A.T)\n chol = cho_factor(AAt, overwrite_a=True)\n \n c = cho_solve(chol, b, overwrite_b=False)\n c = A.T.dot(c)\n \n proj_data = dict(A=A, b=b, chol=chol,c=c)\n \n return proj_data\n\ndef proj(proj_data, x0):\n \"\"\"Use cholesky factorization data to project onto Ax=b.\"\"\"\n A, chol, c = (proj_data[k] for k in 'A chol c'.split())\n\n x = A.dot(x0)\n x = cho_solve(chol, x, overwrite_b=True)\n x = A.T.dot(x)\n x = x0 - x + c\n return x\n\ndef average(*vals):\n \"\"\" Come to a consensus \"\"\"\n return np.mean(vals, axis=0)\n\ndef make_data(k, rows, seed=0):\n \"\"\"Make some random test data.\"\"\"\n # each of k chunks gets 'rows' rows of the full matrix\n n = rows*k\n\n np.random.seed(seed)\n\n Ahat = np.random.randn(n,n)\n bhat = np.random.randn(n)\n\n x_true = np.linalg.solve(Ahat,bhat)\n\n x0 = np.random.randn(n)\n\n As = []\n bs = []\n\n for i in range(k):\n s = slice(i*rows,(i+1)*rows)\n As += [Ahat[s,:]]\n bs += [bhat[s]]\n \n return As, bs, x_true, x0",
"Problem data\n\nthis algorithm has the same flavor as the thing I'd like to do, but actually converges very slowly\nwill take a very long time to converge anything other than the smallest examples\ndon't worry if convergence plots look flat when dealing with 100s of rows",
"As, bs, x_true, x0 = make_data(4, 2, seed=0)\n\nproj_data = list(map(factor, As, bs))\n\nx = x0\n\nr = []\nfor i in range(1000):\n z = (proj(d,x) for d in proj_data)\n x = average(*z)\n r.append(np.linalg.norm(x_true-x))\n \nplt.semilogy(r)\n\nAs, bs, x_true, x0 = make_data(4, 100, seed=0)\nproj_data = list(map(factor, As, bs))\n\nx = x0\n\nr = []\nfor i in range(1000):\n z = (proj(d,x) for d in proj_data)\n x = average(*z)\n r.append(np.linalg.norm(x_true-x))\n \n\nplt.semilogy(r)",
"I'll fix the test data to something large enough so that each iteration's computational task is significant\njust 10 iterations of the algorithm (along with the setup factorizations) in serial takes about a second on my laptop",
"As, bs, x_true, x0 = make_data(4, 1000, seed=0)\n\n%%time\nproj_data = list(map(factor, As, bs))\n\nx = x0\n\nr = []\nfor i in range(10):\n z = (proj(d,x) for d in proj_data)\n x = average(*z)\n r.append(np.linalg.norm(x_true-x))",
"parallel map",
"As, bs, x_true, x0 = make_data(4, 3000, seed=0)\nproj_data = list(map(factor, As, bs))\n\n%%timeit -n1 -r50\na= list(map(lambda d: proj(d, x0), proj_data))\n\nimport concurrent.futures\nfrom multiprocessing.pool import ThreadPool\n\nex = concurrent.futures.ThreadPoolExecutor(2)\npool = ThreadPool(2)\n\n%timeit -n1 -r50 list(ex.map(lambda d: proj(d, x0), proj_data))\n\n%timeit -n1 -r50 list(pool.map(lambda d: proj(d, x0), proj_data))\n\n242/322.0",
"Dask Solution\n\nI create a few weird functions to have pretty names in dask graphs",
"import dask\nfrom dask import do, value, compute, visualize, get\nfrom dask.imperative import Value\nfrom dask.dot import dot_graph\n\nfrom itertools import repeat\n\ndef enum_values(vals, name=None):\n \"\"\"Create values with a name and a subscript\"\"\"\n if not name:\n raise ValueError('Need a name.')\n return [value(v,name+'_%d'%i) for i,v in enumerate(vals)]\n\ndef rename(value, name):\n \"\"\"Rename a Value.\"\"\"\n d = dict(value.dask)\n d[name] = d[value.key]\n del d[value.key]\n return Value(name, [d])\n \ndef enum_map(func, *args, name=None):\n \"\"\"Map `func` over `args` to create `Value`s with a name and a subscript.\"\"\"\n if not name:\n raise ValueError('Need a name.')\n \n values = (do(func)(*a) for a in zip(*args))\n return [rename(v, name+'_%d'%i) for i, v in enumerate(values)]\n\n\ndef step(proj_data, xk, k=None):\n \"\"\"One step of the projection iteration.\"\"\"\n if k is None:\n sufx = '^k+1'\n else:\n sufx = '^%d'%k\n \n z = enum_map(proj, proj_data, repeat(xk), name='z'+sufx)\n xkk = do(average)(*z)\n xkk = rename(xkk, 'x'+sufx)\n return xkk",
"Visualize\n\nthe setup step involving the matrix factorizations",
"lAs = enum_values(As, 'A')\nlbs = enum_values(bs, 'b')\nproj_data = enum_map(factor, lAs, lbs, name='proj_data')\n\nvisualize(*proj_data)",
"visualize one iteration",
"pd_val = [pd.compute() for pd in proj_data]\n\nxk = value(x0,'x^k')\nxkk = step(pd_val, xk)\nxkk.visualize()",
"The setup step along with 3 iterations gives the following dask graph. (Which I'm showing mostly because it was satisfying to make.)",
"x = value(x0,'x^0')\nfor k in range(3):\n x = step(proj_data, x, k+1)\n \nx.visualize()",
"Reuse dask graph\nObviously, it's not efficient to make a huge dask graph, especially if I'll be doing thousands of iterations.\nI really just want to create the dask graph for computing $x^{k+1}$ from $x^k$ and re-apply it at every iteration.\nIs it more efficient to create that dask graph once and reuse it? Maybe that's a premature optimization... I'll do it anyway for fun.",
"proj_data = enum_map(factor, As, bs, name='proj_data')\nproj_data = compute(*proj_data)\n\nx = value(0,'x^k')\nx = step(proj_data, x)\n\ndsk_step = x.dask\ndot_graph(dsk_step)\n\ndask.set_options(get=dask.threaded.get) # multiple threads\n#dask.set_options(get=dask.async.get_sync) # single thread\n\n%%time\n\n# do one-time computation of factorizations\nproj_data = enum_map(factor, As, bs, name='proj_data')\n\n# realize the computations, so they aren't recomputed at each iteration\nproj_data = compute(*proj_data)\n\n# get dask graph for reuse\nx = value(x0,'x^k')\nx = step(proj_data, x)\ndsk_step = x.dask\n\nK = 100\nr = []\nfor k in range(K):\n dsk_step['x^k'] = get(dsk_step, 'x^k+1')\n r.append(np.linalg.norm(x_true-dsk_step['x^k']))\n\n%%time\n# serial execution\nproj_data = list(map(factor, As, bs))\n\nx = x0\n\nK = 100\nr = []\nfor i in range(K):\n z = (proj(d,x) for d in proj_data)\n x = average(*z)\n r.append(np.linalg.norm(x_true-x))",
"iterative projection algorithm thoughts\n\nI don't see any performance gain in using the threaded scheduler, but I don't see what I'm doing wrong here\nI don't see any difference in runtime switching between dask.set_options(get=dask.threaded.get) and dask.set_options(get=dask.async.get_sync); not sure if it's actually changing the scheduler, but I haven't looked into it closely\n\nrandom Dask thoughts\n\nwould be nice to be able to rename a Value and change the data that it points to\nat least for visualizing, I wanted more control over intermediate value names and the ability to enumerate those names with subscripts, which lead to my kludgy functions above. probably not a high-priority item for you tho...\nwould be nice if dask.visualize also worked on Dask graph dictionaries, so I don't have to remember dask.dot.dot_graph\n\nDask attempt 2\n\nI tried a different method for using the threaded scheduler, but got similar results",
"%%time\n\n# do one-time computation of factorizations\nproj_data = enum_map(factor, As, bs, name='proj_data')\n\n# realize the computations, so they aren't recomputed at each iteration\nproj_data = compute(*proj_data, get=dask.threaded.get, num_workers=2)\n\n# get dask graph for reuse\nx = value(x0,'x^k')\nx = step(proj_data, x)\ndsk_step = x.dask\n\nK = 100\nr = []\nfor k in range(K):\n dsk_step['x^k'] = dask.threaded.get(dsk_step, 'x^k+1', num_workers=2)\n r.append(np.linalg.norm(x_true-dsk_step['x^k']))",
"Runtime error\n\nAs I was experimenting and switching schedulers and between my first and second dask attempts, I would very often get the following \"can't start new thread\" error\nI would also occasionally get an \"TypeError: get_async() got multiple values for argument 'num_workers'\" even though I had thought I'd set dask.set_options(get=dask.threaded.get)",
"%%time\n\n# do one-time computation of factorizations\nproj_data = enum_map(factor, As, bs, name='proj_data')\n\n# realize the computations, so they aren't recomputed at each iteration\nproj_data = compute(*proj_data)\n\n# get dask graph for reuse\nx = value(x0,'x^k')\nx = step(proj_data, x)\ndsk_step = x.dask\n\nK = 100\nr = []\nfor k in range(K):\n dsk_step['x^k'] = get(dsk_step, 'x^k+1', num_workers=2)\n r.append(np.linalg.norm(x_true-dsk_step['x^k']))\n\n%%time\n\n# do one-time computation of factorizations\nproj_data = enum_map(factor, As, bs, name='proj_data')\n\n# realize the computations, so they aren't recomputed at each iteration\nproj_data = compute(*proj_data)\n\n# get dask graph for reuse\nx = value(x0,'x^k')\nx = step(proj_data, x)\ndsk_step = x.dask\n\nK = 100\nr = []\nfor k in range(K):\n dsk_step['x^k'] = get(dsk_step, 'x^k+1', num_workers=2)\n r.append(np.linalg.norm(x_true-dsk_step['x^k']))\n\nnp.__config__.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
BinRoot/TensorFlow-Book
|
ch06_hmm/Concept01_forward.ipynb
|
mit
|
[
"Ch 06: Concept 01\nHidden Markov model forward algorithm\nOof this code's a bit complicated if you don't already know how HMMs work. Please see the book chapter for step-by-step explanations. I'll try to improve the documentation, or feel free to send a pull request with your own documentation!\nFirst, let's import TensorFlow and NumPy:",
"import numpy as np\nimport tensorflow as tf",
"Define the HMM model:",
"class HMM(object):\n def __init__(self, initial_prob, trans_prob, obs_prob):\n self.N = np.size(initial_prob)\n self.initial_prob = initial_prob\n self.trans_prob = trans_prob\n self.emission = tf.constant(obs_prob)\n\n assert self.initial_prob.shape == (self.N, 1)\n assert self.trans_prob.shape == (self.N, self.N)\n assert obs_prob.shape[0] == self.N\n\n self.obs_idx = tf.placeholder(tf.int32)\n self.fwd = tf.placeholder(tf.float64)\n\n def get_emission(self, obs_idx):\n slice_location = [0, obs_idx]\n num_rows = tf.shape(self.emission)[0]\n slice_shape = [num_rows, 1]\n return tf.slice(self.emission, slice_location, slice_shape)\n\n def forward_init_op(self):\n obs_prob = self.get_emission(self.obs_idx)\n fwd = tf.multiply(self.initial_prob, obs_prob)\n return fwd\n\n def forward_op(self):\n transitions = tf.matmul(self.fwd, tf.transpose(self.get_emission(self.obs_idx)))\n weighted_transitions = transitions * self.trans_prob\n fwd = tf.reduce_sum(weighted_transitions, 0)\n return tf.reshape(fwd, tf.shape(self.fwd))",
"Define the forward algorithm:",
"def forward_algorithm(sess, hmm, observations):\n fwd = sess.run(hmm.forward_init_op(), feed_dict={hmm.obs_idx: observations[0]})\n for t in range(1, len(observations)):\n fwd = sess.run(hmm.forward_op(), feed_dict={hmm.obs_idx: observations[t], hmm.fwd: fwd})\n prob = sess.run(tf.reduce_sum(fwd))\n return prob",
"Let's try it out:",
"if __name__ == '__main__':\n initial_prob = np.array([[0.6], [0.4]])\n trans_prob = np.array([[0.7, 0.3], [0.4, 0.6]])\n obs_prob = np.array([[0.1, 0.4, 0.5], [0.6, 0.3, 0.1]])\n\n hmm = HMM(initial_prob=initial_prob, trans_prob=trans_prob, obs_prob=obs_prob)\n\n observations = [0, 1, 1, 2, 1]\n with tf.Session() as sess:\n prob = forward_algorithm(sess, hmm, observations)\n print('Probability of observing {} is {}'.format(observations, prob))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nreimers/deeplearning4nlp-tutorial
|
2015-10_Lecture/Lecture2/code/1_Intro_Theano.ipynb
|
apache-2.0
|
[
"Introduction to Theano\nFor a Theano tutorial please see: http://deeplearning.net/software/theano/tutorial/index.html\nBasic Operations\nFor more details see: http://deeplearning.net/software/theano/tutorial/adding.html\nTask: Use Theano to compute a simple polynomial function $$f(x,y) = 3x+xy+3y$$\nHints:\n- First define two input variables with the correct type (http://deeplearning.net/software/theano/library/tensor/basic.html#all-fully-typed-constructors)\n- Define the computation of the function and store it in a variable\n- Use the theano.function() to compile your computation graph",
"import theano\nimport theano.tensor as T\n\n#Put your code here",
"Now you can invoke f and pass the input values, i.e. f(1,1), f(10,-3) and the result for this operation is returned.",
"print f(1,1)\nprint f(10,-3)",
"Printing of the graph\nYou can print the graph for the above value of z. For details see:\nhttp://deeplearning.net/software/theano/library/printing.html\nhttp://deeplearning.net/software/theano/tutorial/printing_drawing.html\nTo print the graph, futher libraries must be installed. In 99% of your development time you don't need the graph printing function. Feel free to skip this section",
"#Graph for z\ntheano.printing.pydotprint(z, outfile=\"pics/z_graph.png\", var_with_name_simple=True) \n\n#Graph for function f (after optimization)\ntheano.printing.pydotprint(f, outfile=\"pics/f_graph.png\", var_with_name_simple=True) ",
"The graph fo z:\n<img src=\"files/pics/z_graph.png\">\nThe graph for f:\n<img src=\"files/pics/f_graph.png\">\nSimple matrix multiplications\nThe following types for input variables are typically used:\nbyte: bscalar, bvector, bmatrix, btensor3, btensor4\n16-bit integers: wscalar, wvector, wmatrix, wtensor3, wtensor4\n32-bit integers: iscalar, ivector, imatrix, itensor3, itensor4\n64-bit integers: lscalar, lvector, lmatrix, ltensor3, ltensor4\nfloat: fscalar, fvector, fmatrix, ftensor3, ftensor4\ndouble: dscalar, dvector, dmatrix, dtensor3, dtensor4\ncomplex: cscalar, cvector, cmatrix, ctensor3, ctensor4\n\nscalar: One element (one number)\nvector: 1-dimension\nmatrix: 2-dimensions\ntensor3: 3-dimensions\ntensor4: 4-dimensions\nAs we do not need perfect precision we use mainly float instead of double. Most GPUs are also not able to handle doubles.\nSo in practice you need: iscalar, ivector, imatrix and fscalar, fvector, vmatrix.\nTask: Implement the function $$f(x,W,b) = \\tanh(xW+b)$$ with $x \\in \\mathbb{R}^n, b \\in \\mathbb{R}^k, W \\in \\mathbb{R}^{n \\times k}$.\n$n$ input dimension and $k$ output dimension",
"import theano\nimport theano.tensor as T\nimport numpy as np\n\n# Put your code here",
"Next we define some NumPy-Array with data and let Theano compute the result for $f(x,W,b)$",
"inputX = np.asarray([0.1, 0.2, 0.3], dtype='float32')\ninputW = np.asarray([[0.1,-0.2],[-0.4,0.5],[0.6,-0.7]], dtype='float32')\ninputB = np.asarray([0.1,0.2], dtype='float32')\n\nprint \"inputX.shape\",inputX.shape\nprint \"inputW.shape\",inputW.shape\n\nf(inputX, inputW, inputB)",
"Don't confuse x,W, b with inputX, inputW, inputB. x,W,b contain pointer to your symbols in the compute graph. inputX,inputW,inputB contains your data.\nShared Variables and Updates\nSee: http://deeplearning.net/software/theano/tutorial/examples.html#using-shared-variables\n\nUsing shared variables, we can create an internal state.\nCreation of a accumulator:\nAt the beginning initialize the state to 0\nWith each function call update the state by certain value\n\n\nLater, in your neural networks, the weight matrices $W$ and the bias values $b$ will be stored as internal state / as shared variable. \nShared variables improve performance, as you need less transfer between your Python code and the execution of the compute graph (which is written & compiled from C code)\nShared variables can also be store on your graphic card",
"import theano\nimport theano.tensor as T\nimport numpy as np\n\n#Define my internal state\ninit_value = 1\n\nstate = theano.shared(value=init_value, name='state')\n\n#Define my operation f(x) = 2*x\nx = T.lscalar('x')\nz = 2*x\n\naccumulator = theano.function(inputs=[], outputs=z, givens={x: state})\n\nprint accumulator()\nprint accumulator()",
"Shared Variables\n\nWe use theano.shared() to share a variable (i.e. make it internally available for Theano)\nInternal state variables are passed by compile time via the parameter givens. So to compute the ouput z, use the shared variable state for the input variable x\nFor information on the borrow=True parameter see: http://deeplearning.net/software/theano/tutorial/aliasing.html\nIn most cases we can set it to true and increase by this the performance.\n\nUpdating Shared Variables\n\nUsing the updates-parameter, we can specify how our shared variables should be updated\nThis is useful to create a train function for a neural network. \nWe create a function train(data) which computes the error and gradient\nThe computed gradient is then used in the same call to update the shared weights\nTraining just becomes: for mini_batch in mini_batches: train(mini_batch)",
"#New accumulator function, now with an update\n\n# Put your code here to update the internal counter\n\nprint accumulator(1)\nprint accumulator(1)\nprint accumulator(1)",
"In the above example we increase the state by the variable inc\nThe value for inc is passed as value to our function"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
FinTechies/HedgingRL
|
notebooks/TD Learning Black Scholes1.ipynb
|
mit
|
[
"import pandas as pd\n\nimport numpy as np\nimport scipy as sp\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nfrom pandas_datareader import data\nfrom datetime import datetime",
"Plan\n\nSome data: look at some stock price series\ndevise a model for stock price series: Geometric Brownian Motion (GBM)\nExample for a contingent claim: call option\nPricing of a call option under the assumtpion of GBM\nChallenges\n\nSome data: look at some stock price series\nWe import data from Yahoo finance: two examples are IBM and Apple",
"aapl = data.DataReader('AAPL', 'yahoo', '2000-01-01')\nprint(aapl.head())",
"$\\Rightarrow$ various different price series",
"plt.plot(aapl.Close)",
"$\\Longrightarrow$ There was a stock split 7:1 on 06/09/2014.\nAs we do not want to take care of things like that, we use the Adjusted close price!",
"ibm = data.DataReader('AAPl', 'yahoo', '2000-1-1')\n\nprint(ibm['Adj Close'].head())\n%matplotlib inline\nibm['Adj Close'].plot(figsize=(10,6))\nplt.ylabel('price')\nplt.xlabel('year')\nplt.title('Price history of IBM stock')\n\n\n\nibm = data.DataReader('IBM', 'yahoo', '2000-1-1')\n\nprint(ibm['Adj Close'].head())\n%matplotlib inline\nibm['Adj Close'].plot(figsize=(10,6))\nplt.ylabel('price')\nplt.xlabel('year')\nplt.title('Price history of IBM stock')\n",
"Define new financial instruments\nWhat we have now prices of financial instruments:\n - bonds (assume: fixed price)\n - stocks\n - exchange rates\n - oil\n - dots\n$\\Longrightarrow$ Tradeables with variable prices\nWe can form a portfolio by \n - holding some cash (possibly less than 0, that's called debts)\n - buying some stock/currency etc. (perhaps less than 0, that's called 'short)\nWhy do we need more ?\n\nyou want to play\nyou are producing something and you want to assure, that the prices you achieve in one year are suffiientlyone a stock, and arewant to protect yourself against lower prices\nyou want to protect yourself against higher prices\nyou want to protect yourself against an increase in volatility\nyou want to protect yourself against extreme price movements \nyou want to ...\n\n$\\Longrightarrow$ Essentially you want to be able to control the final value of your portfolio!\nYou go the bank, the bank offers you some product, you buy and are happy ....°\nObligations for the bank\n\nconstruct a product\nprice the product\nhedge the product\n\nFor this talk, we take one of the easiest such products:, a call option.\nCall option\nDefinition\nCall option on a stock $S$ with strike price $K$ and expiry $T$: \nThe buyer of the call option has the right, but not the obligation, to buy $1$ stock $S$ (the underlying) from the seller of the option at a certain time (the expiration date $T$) for a certain price (the strike price $K$).\nPayoff: $$C_T =\\max(0, S-K)\\,.$$\nWhat can you do with a Call-option?\nExample: you want to buy a stock next year, 01.01.2018, for 100:\nBuy now a call for 100 (strike price).\nNext year you can distinguish two distinct cases:\n\nstock trades at 80 < 100 $\\Longrightarrow$ buy stock for 80 Euro, forget call option - it call is worthless\nstock trades at 120 > 100 $\\Longrightarrow$ use call to buy stock for 100\n\nHow to price the call option?\n\nmatch expectations\nutility pricing\narbitrage free pricing $\\Longrightarrow$ this is the price, enforced by the market\n...\n\nWhat is a fair price for an option with strike price $K$ and expiry $T$?\nIf the stock trades at a price $S$ at time $T$, then the payoff is: \nPayoff: $C_T =\\max(0, S-K)\\,.$\nIf the interest rate is $r$, we discount future cashflows with $e^{- r T}$. Thus if the stock traded at price $S$ at expire, the resulting cash-flow would be worth(time $t = 0$)\n$$C_0 = e^{- r T} \\max(0, S-K)\\,.$$\nProblem: we do not know $S_T$ at time $0$.\nSolution: we take the expectation of $S$. This yields\n$$C_{0, S} = e^{- r T} \\mathbb{E}\\left[ \\max(0, S-K)\\right]\\,.$$\nCaveat: We have hidden a lot!!\nThe formal deduction is complicated via arbitrage free pricing and the Feynmann-Kac-theorem\nHow to constructthe expectation? Need a model for the stock price!\nConstruct a theoretical model for stock price movements: Geometric Brownian motion\nFor the apple chart one can see, that price increments seem to correlate with the price: thus we plot logarithmic prices:\nLet us plot the data logarithmically:",
"Log_Data = plt.figure()\n%matplotlib inline\nplt.plot(np.log(aapl['Adj Close']))\nplt.ylabel('logarithmic price')\nplt.xlabel('year')\nplt.title('Logarithmic price history of Apple stock')",
"Now the roughness of the chart looks more even $\\Rightarrow$ We should model increments proportional to the stock price!\nThis leads us to some assumptions for the stock price process: \n- the distribution of relative changes is constant over time\n- Small changes appear often, large changes rarly: changes are normally distributed\n$\\Rightarrow$ use an exponential Gaussian distribution for increments:\n $$S_{n+1} = S_n e^{\\sigma X+ \\mu} $$\n where $X \\sim N(0,1)$, $\\sigma$ denotes the variance and $\\mu$ the mean growth rate.\nLet us simulate this: \ntypical values for $\\mu$ and $\\sigma$ per year are:\n- $\\mu_y= 0,08$\n- $\\sigma_y = 0.2$\n$\\Rightarrow$ assuming 252 business days a year we get\n$$\\mu = \\mu_d = \\frac{\\mu_y}{252}\\sim 0.0003$$\n$$\\sigma = \\sigma_d = \\frac{\\sigma_y}{\\sqrt{252}}\\sim 0,012$$",
"S0 = 1\nsigma = 0.2/np.sqrt(252)\nmu = 0.08/252\n\n%matplotlib inline\nfor i in range(0, 5):\n r = np.random.randn((1000))\n plt.plot(S0 * np.cumprod(np.exp(sigma *r +mu)))\n\nS0 = 1.5 # start price\nK = 1.0 # strike price\nmu = 0 # average growth\nsigma = 0.2/np.sqrt(252) # volatility\nN = 10000 # runs\nM = 252*4 # length of each run (252 business days per year times 4 years)\n\ndef call_price(S, K):\n return max(0.0, S-K)\n\ndef MC_call_price(S0, K, mu, sigma, N, M):\n CSum = 0\n SSum = 0\n for n in range(N):\n r = np.random.randn((M))\n S = S0 * np.cumprod(np.exp(sigma *r))\n SSum += S\n CSum += call_price(S[M-1], K)\n return CSum/N\n",
"Optionprices:",
"S0 = np.linspace(0.0, 2.0,21)\nC = []\nfor k in range(21):\n C.append(MC_call_price(k*2/20, K, mu, sigma, N, M))\nC\n\nplt.plot(S0, C)\nplt.ylabel('Call price')\nplt.xlabel('Start price')\nplt.title('Call price')\nplt.show()",
"This curve can also be calculated theoretically. Using stochastic calculus, one can deduce the famous Black-Scholes equation, to calculate this curve. We will not go into detail ...",
"from IPython.display import Image\nImage(\"Picture_Then_Miracle_Occurs.PNG\")",
"... but will just state the final result!\nBlack Scholes formula:\n$${\\displaystyle d_{1}={\\frac {1}{\\sigma {\\sqrt {T-t}}}}\\left[\\ln \\left({\\frac {S_{t}}{K}}\\right)+(r-q+{\\frac {1}{2}}\\sigma ^{2})(T-t)\\right]}$$\n$${\\displaystyle d_{2}=d_{1}-\\sigma {\\sqrt {T-t}}={\\frac {1}{\\sigma {\\sqrt {T-t}}}}\\left[\\ln \\left({\\frac {S_{t}}{K}}\\right)+(r-q-{\\frac {1}{2}}\\sigma ^{2})(T-t)\\right]}$$\nBlack-Scholes Formula for the call price:\n$${\\displaystyle C(S_{t},t)=e^{-r(T-t)}[S_tN(d_{1})-KN(d_{2})]\\,}$$\n$\\Delta$ describes the change in the price of the option if the stock price changes by $1$.\nBlack Scholes formula for the Delta:\n$$ \\Delta(C, t) = e^{-r(T-t)} N(d_1)$$",
"d_1 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) + 0.5 * (σ ** 2) * (T-t))\nd_2 = lambda σ, T, t, S, K: 1. / σ / np.sqrt(T - t) * (np.log(S / K) - 0.5 * (σ ** 2) * (T-t))\n\ncall = lambda σ, T, t, S, K: S * sp.stats.norm.cdf( d_1(σ, T, t, S, K) ) - K * sp.stats.norm.cdf( d_2(σ, T, t, S, K) )\nDelta = lambda σ, T, t, S, K: sp.stats.norm.cdf( d_1(σ, T, t, S, K) )\n\nplt.plot(np.linspace(sigma, 4., 100), call(1., 1., .9, np.linspace(0.1, 4., 100), 1.))\n\nplt.plot(d_1(1., 1., 0., np.linspace(0.1, 2.9, 10), 1))\n\n#plt.plot(np.linspace(sigma, 4., 100), Delta(1., 1., .9, np.linspace(0.1, 4., 100), 1.))\nplt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.2, np.linspace(0.01, 1.9, 100), 1.))\nplt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.6, np.linspace(0.01, 1.9, 100), 1.))\nplt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9, np.linspace(0.01, 1.9, 100), 1.))\nplt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.99, np.linspace(0.01, 1.9, 100), 1.))\nplt.plot(np.linspace(sigma, 1.9, 100), Delta(1., 1., 0.9999, np.linspace(0.01, 1.9, 100), 1.))\nplt.xlabel(\"Price/strike price\")\nplt.ylabel(\"$\\Delta$\")\nplt.legend(['t = 0.2','t = 0.6', 't = 0.9', 't = 0.99', 't = 0.9999'], loc = 2)",
"For small prices we do not need to own shares, to hedge the option. For high prices we need exactly one share. The interesting area is around the strike price.\nSimulate a portfolio consisting of 1 call option and $-\\Delta$ Shares:\n$$P = C - \\Delta S$$\nIn approximation, the portfolio value should be constant!",
"N = 10 #runs\n\ndef Simulate_Price_Series(S0, sigma, N, M):\n for n in (1,N):\n r = np.random.randn((M))\n S = S0 * np.cumprod(np.exp(sigma *r))\n \n for m in (1,M):\n P.append = Delta(sigma, M, m, S, K)*\n \n return S\n\nplt.plot(1+np.cumsum(np.diff(S) * Delta(sigma, 4, 0, S, K)[1, M-1]))\nplt.plot(S)\n\nS\n\n\nlen(Delta(sigma, 4, 0, S, K)[[1:999]])\n\n\ndef Calculate_Portfolio(S0, K, mu, sigma, N, M):\n S = Simulate_Price_Series(S0, sigma, N, M)\n StockDelta = Delta(sigma, 4, 0, S, K) )\n vol = vol0 * np.cumprod(np.exp(sigma*r2)\n S = S0 * np.cumprod(np.exp(vol * r))\n SSum += S\n CSum += call_price(S[M-1], K)",
"Challenges\n\n1) the price depends on the calibration of $\\sigma$! Parameters may not be constant over time!\n2) the price depends on the validity of the model\n\nThe main problem is the second one:\nA)\n$\\sigma$ and $\\mu$ may change over time. Hence changes of volatility should adapted in the price\n$\\Longrightarrow$ new more complex models describing stochastic volatility are introduced, for example: \n- Heston model, \n- Ball-Roma model, \n- SABR-model and many more\nB)\nlet us look at the log-returns:",
"np.histogram(np.diff(aapl['Adj Close']))\n\nplt.hist(np.diff(aapl['Adj Close']), bins='auto') # plt.hist passes it's arguments to np.histogram\nplt.title(\"Histogram of daily returns for Apple\")\nplt.show()",
"This is not a normal distribution!\n2) normally distributed increments are not realistic. Real distributions are\n- Heavy tails:\n- Gain/Loss asymmetry \n- Aggregational Gaussianity\n- Intermittency (parameter changes over time)\n- Volatility clustering\n- Leverage effect\n- Volume/volatility correlation:\n- Slow decay of autocorrelation in absolute returns:\n- Asymmetry in time scales\n(see for example: Rama Cont: Empirical properties of asset returns: stylized facts and statistical issues, Journal of quantitative finance, Volume 1 (2001) 223–236)\nThe option price depends on the model, on the calibration.\nAlternative model: Local volatility model\nThe closest alternative to the Black-Scholes model are local volatility models.",
"def MC_call_price_Loc_Vol(S0, K, mu, sigma, N, M):\n CSum = 0\n SSum = 0\n for n in range(N):\n r = np.random.randn((M))\n r2 = np.random.randn((M))\n vol = vol0 * np.cumprod(np.exp(sigma*r2)\n S = S0 * np.cumprod(np.exp(vol * r))\n SSum += S\n CSum += call_price(S[M-1], K)\n return CSum/N\n\nS0 = np.linspace(0.0, 2.0,21)\nCLoc = []\nfor k in range(21):\n CLoc.append(MC_call_price_Loc_Vol(k*2/20, K, mu, 0.1*sigma, N, M))\n \n\nCLoc\n\nplt.plot(S0, C)\nplt.plot(S0, CLoc)\nplt.ylabel('Call price')\nplt.xlabel('Start price')\nplt.title('Call price')\nplt.show()",
"Proposed solution\nFind a way to price an option without the assumption of a market model, without the need to calibrate and recalibrate the model.",
"def iterate_series(n=1000, S0 = 1):\n while True:\n r = np.random.randn((n))\n S = np.cumsum(r) + S0\n yield S, r\n\nfor (s, r) in iterate_series():\n t, t_0 = 0, 0\n for t in np.linspace(0, len(s)-1, 100):\n r = s[int(t)] / s[int(t_0)]\n t_0 = t\n break\n\nstate = (stock_val, besitz)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fmalazemi/CheatSheets
|
python3/built-ins.ipynb
|
gpl-3.0
|
[
"Built-in Functions\nabs(x)\nx is an integer.",
"abs(3)",
"all(iterable)\ntakes an iteratable and return true if each element is true. It simple join all elements of an iterable with an and operator. Remember zero and empty string are treated as False in python 3.",
"#def all(iterable):\n# for element in iterable:\n# if not element:\n# return False\n# return True\nx = [1,2,0]\nprint(all(x))\ny = [1,2,3,4]\nprint(all(y))",
"any(iterable)\nSimilar to all(iterable) but use or operator.",
"#def any(iterable):\n# for element in iterable:\n# if element:\n# return True\n# return False\nprint(any([0]))\nprint(any([1,2,3]))\nprint(any(['']))\nprint(any([' '])) #Rememebr: space is not equivilant to empty string",
"bin(x)\nreturn a string of binary representation of x (x in any radix.)",
"print(bin(7))\ntype(bin(7))",
"int(x, base)\nConvert a number of string x of base base to an integer in base 10",
"int(3.4)\nint('3')\nint(\"10\", 10)",
"ord(c)\nreturn integer of given unicode c",
"ord('W')",
"pow(x, y [, z])\nreturn x^y, if z is given return x^y mod z",
"pow(3,4)\npow()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/spect-team
|
Code/Assignment-9/Independent Analysis.ipynb
|
apache-2.0
|
[
"Independent Analysis - Srinivas (handle: thewickedaxe)\n PLEASE SCROLL TO THE BOTTOM OF THE NOTEBOOK TO FIND THE QUESTIONS AND THEIR ANSWERS\n In this notebook we we explore dimensionality reduction with ISOMAP and MDS and their effects on classification\nInitial Data Cleaning",
"# Standard\nimport pandas as pd\nimport numpy as np\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\n# Dimensionality reduction and Clustering\nfrom sklearn.decomposition import PCA\nfrom sklearn.cluster import KMeans\nfrom sklearn.cluster import MeanShift, estimate_bandwidth\nfrom sklearn import manifold, datasets\nfrom itertools import cycle\n\n# Plotting tools and classifiers\nfrom matplotlib.colors import ListedColormap\nfrom sklearn.linear_model import LogisticRegression\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import StandardScaler\nfrom sklearn.datasets import make_moons, make_circles, make_classification\nfrom sklearn.neighbors import KNeighborsClassifier\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.discriminant_analysis import LinearDiscriminantAnalysis as LDA\nfrom sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis as QDA\nfrom sklearn import cross_validation\nfrom sklearn.cross_validation import LeaveOneOut\nfrom sklearn.cross_validation import LeavePOut\n\n\n# Let's read the data in and clean it\n\ndef get_NaNs(df):\n columns = list(df.columns.get_values()) \n row_metrics = df.isnull().sum(axis=1)\n rows_with_na = []\n for i, x in enumerate(row_metrics):\n if x > 0: rows_with_na.append(i)\n return rows_with_na\n\ndef remove_NaNs(df):\n rows_with_na = get_NaNs(df)\n cleansed_df = df.drop(df.index[rows_with_na], inplace=False) \n return cleansed_df\n\ninitial_data = pd.DataFrame.from_csv('Data_Adults_1_reduced.csv')\ncleansed_df = remove_NaNs(initial_data)\n\n# Let's also get rid of nominal data\nnumerics = ['int16', 'int32', 'int64', 'float16', 'float32', 'float64']\nX = cleansed_df.select_dtypes(include=numerics)\nprint X.shape\n\n# Let's now clean columns getting rid of certain columns that might not be important to our analysis\n\ncols2drop = ['GROUP_ID', 'doa', 'Baseline_header_id', 'Concentration_header_id',\n 'Baseline_Reading_id', 'Concentration_Reading_id']\nX = X.drop(cols2drop, axis=1, inplace=False)\nprint X.shape\n\n# For our studies children skew the data, it would be cleaner to just analyse adults\nX = X.loc[X['Age'] >= 18]\n\nprint X.shape",
"we've now dropped the last of the discrete numerical inexplicable data, and removed children from the mix\nExtracting the samples we are interested in",
"# Let's extract ADHd and Bipolar patients (mutually exclusive)\n\nADHD = X.loc[X['ADHD'] == 1]\nADHD = ADHD.loc[ADHD['Bipolar'] == 0]\n\nBP = X.loc[X['Bipolar'] == 1]\nBP = BP.loc[BP['ADHD'] == 0]\n\nprint ADHD.shape\nprint BP.shape\n\n# Keeping a backup of the data frame object because numpy arrays don't play well with certain scikit functions\nADHD_df = ADHD.copy(deep = True)\nBP_df = BP.copy(deep = True)\nADHD = pd.DataFrame(ADHD.drop(['Patient_ID'], axis = 1, inplace = False))\nBP = pd.DataFrame(BP.drop(['Patient_ID'], axis = 1, inplace = False))",
"we see here that there 1383 people who have ADHD but are not Bipolar and 440 people who are Bipolar but do not have ADHD\nDimensionality reduction\nPCA",
"combined = pd.concat([ADHD, BP])\ncombined_backup = pd.concat([ADHD, BP])\n\n\npca = PCA(n_components = 24, whiten = \"True\").fit(combined)\ncombined = pca.transform(combined)\nprint sum(pca.explained_variance_ratio_)\n\ncombined = pd.DataFrame(combined)\n\nADHD_reduced_df = combined[:1383]\nBP_reduced_df = combined[1383:]\n\nADHD_reduced_df_id = ADHD_reduced_df.copy(deep = True)\nBP_reduced_df_id = BP_reduced_df.copy(deep = True)\n\nADHD_reduced_df_id['Patient_ID'] = 123\nBP_reduced_df_id['Patient_ID'] = 123\n\nprint ADHD_reduced_df.shape\nprint BP_reduced_df.shape\n\nprint ADHD_reduced_df_id.shape\nprint BP_reduced_df_id.shape\n\n# resorting to some hacky crap, that I am ashamed to write, but pandas is refusing to cooperate\nz = []\nfor x in BP_df['Patient_ID']:\n z.append(x)\nBP_reduced_df_id['Patient_ID'] = z\n\nz = []\nfor x in ADHD_df['Patient_ID']:\n z.append(x)\nADHD_reduced_df_id['Patient_ID'] = z\n\nADHD_pca = ADHD_reduced_df.copy(deep = True)\nBP_pca = BP_reduced_df.copy(deep = True)",
"We see here that most of the variance is preserved with just 24 features. \nManifold Techniques\nISOMAP",
"combined = manifold.Isomap(20, 20).fit_transform(combined_backup)\nADHD_iso = combined[:1383]\nBP_iso = combined[1383:]\nprint pd.DataFrame(ADHD_iso).head()",
"Multi dimensional scaling",
"mds = manifold.MDS(20).fit_transform(combined_backup)\nADHD_mds = combined[:1383]\nBP_mds = combined[1383:]\nprint pd.DataFrame(ADHD_mds).head()",
"As is evident above, the 2 manifold techniques don't really offer very different dimensionality reductions. Therefore we are just going to roll with Multi dimensional scaling\nClustering and other grouping experiments\nMean-Shift - mds",
"ADHD_clust = pd.DataFrame(ADHD_mds)\nBP_clust = pd.DataFrame(BP_mds)\n\n# This is a consequence of how we dropped columns, I apologize for the hacky code \ndata = pd.concat([ADHD_clust, BP_clust])\n\n# Let's see what happens with Mean Shift clustering\nbandwidth = estimate_bandwidth(data.get_values(), quantile=0.2, n_samples=1823) * 0.8\nms = MeanShift(bandwidth=bandwidth)\nms.fit(data.get_values())\nlabels = ms.labels_\n\ncluster_centers = ms.cluster_centers_\nlabels_unique = np.unique(labels)\nn_clusters_ = len(labels_unique)\nprint('Estimated number of clusters: %d' % n_clusters_)\n\nfor cluster in range(n_clusters_): \n ds = data.get_values()[np.where(labels == cluster)] \n plt.plot(ds[:,0], ds[:,1], '.') \n lines = plt.plot(cluster_centers[cluster, 0], cluster_centers[cluster, 1], 'o')",
"Though I'm not sure how to tweak the hyper-parameters of the bandwidth estimation function, there doesn't seem to be much difference. Minute variations to the bandwidth result in large cluster differences. Perhaps the data isn't very suitable for a contrived clustering technique like Mean-Shift. Therefore let us attempt something more naive and simplistic like K-Means\nK-Means clustering - mds",
"kmeans = KMeans(n_clusters=2)\nkmeans.fit(data.get_values())\nlabels = kmeans.labels_\ncentroids = kmeans.cluster_centers_\nprint('Estimated number of clusters: %d' % len(centroids))\nprint data.shape\n\nfor label in [0, 1]:\n ds = data.get_values()[np.where(labels == label)] \n plt.plot(ds[:,0], ds[:,1], '.') \n lines = plt.plot(centroids[i,0], centroids[i,1], 'o')",
"As is evident from the above 2 experiments, no clear clustering is apparent.But there is some significant overlap and there 2 clear groups\nClassification Experiments\nLet's experiment with a bunch of classifiers",
"ADHD_mds = pd.DataFrame(ADHD_mds)\nBP_mds = pd.DataFrame(BP_mds)\n\nBP_mds['ADHD-Bipolar'] = 0\nADHD_mds['ADHD-Bipolar'] = 1\n\ndata = pd.concat([ADHD_mds, BP_mds])\nclass_labels = data['ADHD-Bipolar']\ndata = data.drop(['ADHD-Bipolar'], axis = 1, inplace = False)\nprint data.shape\ndata = data.get_values()\n\n# Leave one Out cross validation\ndef leave_one_out(classifier, values, labels):\n leave_one_out_validator = LeaveOneOut(len(values))\n classifier_metrics = cross_validation.cross_val_score(classifier, values, labels, cv=leave_one_out_validator)\n accuracy = classifier_metrics.mean()\n deviation = classifier_metrics.std()\n return accuracy, deviation\n\np_val = 100\nknn = KNeighborsClassifier(n_neighbors = 5)\nsvc = SVC(gamma = 2, C = 1)\nrf = RandomForestClassifier(n_estimators = 22) \ndt = DecisionTreeClassifier(max_depth = 22) \nqda = QDA()\ngnb = GaussianNB()\nclassifier_accuracy_list = []\nclassifiers = [(knn, \"KNN\"), (svc, \"SVM\"), (rf, \"Random Forest\"), (dt, \"Decision Tree\"),\n (qda, \"QDA\"), (gnb, \"Gaussian NB\")]\nfor classifier, name in classifiers:\n accuracy, deviation = leave_one_out(classifier, data, class_labels)\n print '%s accuracy is %0.4f (+/- %0.3f)' % (name, accuracy, deviation)\n classifier_accuracy_list.append((name, accuracy))",
"given the number of people who have ADHD and Bipolar disorder the chance line would be at around 0.6. The classifiers fall between 0.7 and 0.75 which makes them just barely better than chance. This is still an improvement over last time.\nQuestions and Answers\nQ1) Did you attempt any further data cleaning?<br/>\nA1) Yes, we threw out the technician assesment of RCBF and the survey responses all together. The reason for this patients have already been systematically clinically diagnosed. Survey responses and technician assesments are subjective. Sometime the respondent didn't even respond to the surveys. This helped us salvage around 300 record<br/>\nQ2) through Q3) Did you attempt any manifold dimensionality reduction techniques?<br/>\nA2 through A3) Yes<br/>\nI found that ISOMAP and MDS do not offer drastically different dimensionaity reductions.<br/>\nQ4) Given the new dimensionality reduction was any clustering apparent?<br/>\nA4) No, the groups are more spatially distributed than before, but with not much separation<br/>\nQ4) through Q)9 Did you attempt a cross validated train test with a classifier? what were the results?<br/>\nA4) through A)9<br/>\nKNN accuracy is 0.7071 (+/- 0.455)<br/>\nSVM accuracy is 0.7586 (+/- 0.428)<br/>\nRandom Forest accuracy is 0.7301 (+/- 0.444)<br/>\nDecision Tree accuracy is 0.6643 (+/- 0.472)<br/>\nQDA accuracy is 0.6961 (+/- 0.460)<br/>\nGaussian NB accuracy is 0.7345 (+/- 0.442)<br/>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
KshitijT/fundamentals_of_interferometry
|
6_Deconvolution/6_4_residuals_and_iqa.ipynb
|
gpl-2.0
|
[
"Outline\nGlossary\n6. Deconvolution in Imaging \nPrevious: 6.3 CLEAN Implementations \nNext: 6.5 Source Finding and Detection\n\n\n\n\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"import matplotlib.image as mpimg\nfrom IPython.display import Image\nfrom astropy.io import fits\nimport aplpy\n\n#Disable astropy/aplpy logging\nimport logging\nlogger0 = logging.getLogger('astropy')\nlogger0.setLevel(logging.CRITICAL)\nlogger1 = logging.getLogger('aplpy')\nlogger1.setLevel(logging.CRITICAL)\n\nfrom IPython.display import HTML\nHTML('../style/code_toggle.html')",
"6.4 Residuals and Image Quality<a id='deconv:sec:iqa'></a>\nUsing CLEAN or another deconvolution methods produces 'nicer' images than the dirty image (except when deconvolution gets out of control). What it means for an image to be 'nicer' is not a well defined metric, in fact it is almost completely undefined. When we talk of the quality of an image in synthesis imaging we rarely use a quantitative metric, but instead rely on the subjective opinion of the people looking at the image. I know, this is not very scientific. The field of computer vision has been around for decades, this is a field which has developed the objective metrics and techniques for image quality assessment. At some point in the future these methods will need to be incorporated into radio astronomy. This is bound to happen as we have moved to using automated calibration, imaging, and deconvolution pipelines.\nWe have two some what related questions we need to answer when we are reducing visibilities into a final image:\n\nWhen should you halt the deconvolution process?\nWhat makes a good image?\n\nIn $\\S$ 6.2 ➞ we covered how we can separate out a sky model from noise using an iterative CLEAN deconvolution process. But, we did not discuss at what point we halt the process. There is no well-defined point in which to stop the process. Typically and ad hoc decision is made to run deconvolution for a fixed number of iterations or down to a certain flux level. These halting limits are set by adjusting the CLEAN parameters until a 'nice' image is produced. Or, if the visibilities have been flux calibrated, which is possible with some arrays, the signal is fixed to some real flux scale. Having knowledge about the array and observation a theoretical noise floor can be computed, then CLEAN can be ran to a known noise level. One could imagine there is a more automated way to decide when to halt CLEAN, perhaps keeping track of the iterations and deciding if there is a convergence?\nAs a thought experiment we can think about a observation with perfect calibration (we discuss calibration in Chapter 8, but for now it is sufficient to know that the examples we have been using have perfect calibration). When we run CLEAN on this observation, each iteration will transfer some flux from the residual image to the sky model (see figure below). If we run this long enough then we will reach the observation noise floor. Then, we will start to deconvolve the noise from the image. And if you run this process for infinite iteration we will eventually have a sky model which contains all the flux, both from the sources and the noise. The residual image in this case will be empty. Now, this extreme case results in our sky model containing noise sources, this is not ideal. But, if we have not deconvoled enough flux then the sky model is incomplete and the residual image will contain PSF structure from the remaining flux. Thus, the challenge is to determine what is enough deconvolution to remove most of the true sky signal but not over-deconvolved such that noise is added to the sky model. As stated earlier, the typical way to do that at the moment is to do multiple deconvolutions, adjusting the parameters until a subjective solution is reached.\nWe can see an example of over-deconvolution below. Using the same example from the previous section ➞, if we deconvolve beyond 300 iterations (as we found to result in a well-deconvoled sky model) then noise from the residual image is added to the sky model. 
This can be seen as the low flux sources around the edge of the image. Over-deconvolution can lead to <cite data-cite='1998AJ....115.1693C'>clean bias</cite> ⤴ effects.",
"def generalGauss2d(x0, y0, sigmax, sigmay, amp=1., theta=0.):\n \"\"\"Return a normalized general 2-D Gaussian function\n x0,y0: centre position\n sigmax, sigmay: standard deviation\n amp: amplitude\n theta: rotation angle (deg)\"\"\"\n #norm = amp * (1./(2.*np.pi*(sigmax*sigmay))) #normalization factor\n norm = amp\n rtheta = theta * 180. / np.pi #convert to radians\n \n #general function parameters (https://en.wikipedia.org/wiki/Gaussian_function)\n a = (np.cos(rtheta)**2.)/(2.*(sigmax**2.)) + (np.sin(rtheta)**2.)/(2.*(sigmay**2.))\n b = -1.*(np.sin(2.*rtheta))/(4.*(sigmax**2.)) + (np.sin(2.*rtheta))/(4.*(sigmay**2.))\n c = (np.sin(rtheta)**2.)/(2.*(sigmax**2.)) + (np.cos(rtheta)**2.)/(2.*(sigmay**2.))\n return lambda x,y: norm * np.exp(-1. * (a * ((x - x0)**2.) - 2.*b*(x-x0)*(y-y0) + c * ((y-y0)**2.)))\n\ndef genRstoredBeamImg(fitsImg):\n \"\"\"Generate an image of the restored PSF beam based on the FITS header and image size\"\"\"\n fh = fits.open(fitsImg)\n \n #get the restoring beam information from the FITS header\n bmin = fh[0].header['BMIN'] #restored beam minor axis (deg)\n bmaj = fh[0].header['BMAJ'] #restored beam major axis (deg)\n bpa = fh[0].header['BPA'] #restored beam angle (deg)\n dRA = fh[0].header['CDELT1'] #pixel size in RA direction (deg)\n ra0 = fh[0].header['CRPIX1'] #centre RA pixel\n dDec = fh[0].header['CDELT2'] #pixel size in Dec direction (deg)\n dec0 = fh[0].header['CRPIX2'] #centre Dec pixel\n\n #construct 2-D ellipitcal Gaussian function\n gFunc = generalGauss2d(0., 0., bmin/2., bmaj/2., theta=bpa)\n\n #produce an restored PSF beam image\n imgSize = 2.*(ra0-1) #assumes a square image\n xpos, ypos = np.mgrid[0:imgSize, 0:imgSize].astype(float) #make a grid of pixel indicies\n xpos -= ra0 #recentre\n ypos -= dec0 #recentre\n xpos *= dRA #convert pixel number to degrees\n ypos *= dDec #convert pixel number to degrees\n return gFunc(xpos, ypos) #restored PSF beam image\n \ndef convolveBeamSky(beamImg, skyModel):\n \"\"\"Convolve a beam (PSF or restored) image with a sky model image, images must be the same shape\"\"\"\n sampFunc = np.fft.fft2(beamImg) #sampling function\n skyModelVis = np.fft.fft2(skyModel[0,0]) #sky model visibilities\n sampModelVis = sampFunc * skyModelVis #sampled sky model visibilities\n return np.abs(np.fft.fftshift(np.fft.ifft2(sampModelVis))) #sky model convolved with restored beam\n\nfig = plt.figure(figsize=(16, 7))\n \nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')\nresidualImg = fh[0].data\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-model.fits')\nskyModel = fh[0].data\n \n#generate a retored PSF beam image\nrestBeam = genRstoredBeamImg(\n '../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n1000-residual.fits')\n \n#convolve restored beam image with skymodel\nconvImg = convolveBeamSky(restBeam, skyModel)\n \ngc1 = aplpy.FITSFigure(residualImg, figure=fig, subplot=[0.1,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-1.5, vmax=2, cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Residual Image (niter=1000)')\ngc1.add_colorbar()\n \ngc2 = aplpy.FITSFigure(convImg, figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(vmin=0., vmax=2.5, cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Sky Model')\ngc2.add_colorbar()\n \nfig.canvas.draw()",
"Figure: residual image and sky model after 1000 deconvolution iterations. The residual image has been over-deconvolved leading to noise components being added to the sky model.\nThe second question of what makes a good image is why we still use subjective opinion. If we consider the realistic case of imaging and deconvolving a real set of visibilities then we have the added problem that there will be always be, at some level, calibration errors. These errors, and cause of these errors, can be identified by a trained eye whether it is poor gain calibration, interference, strong source sidelobes, or any number of other issues. Errors can cause a deconvolution process to diverge resulting in an unrealistic sky model. Humans are very good at looking at images and deciding if they make sense, but we can not easily describe how we do our image processing, thus we find it hard to implement algorithms to do the same. Looking at the dirty image and deconvolved image of the same field below most people would say the deconvoled image is objectively 'better' than the dirty image. Yet we do not know exactly why that is the case.",
"fig = plt.figure(figsize=(16, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-dirty.fits', \\\n figure=fig, subplot=[0.1,0.1,0.35,0.8])\ngc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Dirty Image')\ngc1.add_colorbar()\n\ngc2 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits', \\\n figure=fig, subplot=[0.5,0.1,0.35,0.8])\ngc2.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc2.hide_axis_labels()\ngc2.hide_tick_labels()\nplt.title('Deconvolved Image')\ngc2.add_colorbar()\n\nfig.canvas.draw()",
"Left: dirty image from a 6 hour KAT-7 observation at a declination of $-30^{\\circ}$. Right: deconvolved image.\nThe deconvolved image does not have the same noisy PSF structures around the sources that the dirty image does. We could say that these imaging artefacts are localized and related to the PSF response to bright sources. The aim of deconvolution is to remove these PSF like structures and replace them with a simple sky model which is decoupled fro the observing system. Most of difficult work in radio interferometry is the attempt to understand and remove the instrumental effects in order to recover the sky signal. Thus, we have some context for why the deconvolved image is 'better' than the dirty image. The challenge in automatically answering what makes a good image is some how encoding both the context and human intuition. Indeed, a challenge left to the reader.\n6.4.1 Dynamic Range and Signal-to-Noise Ratio\nDynamic range is the standard metric, which has been used for decades, to describe the quality of an interferometric image. The dynamic range (DR) is defined as the ratio of the peak flux $I_{\\textrm{peak}}$ to the standard deviation of the noise in the image $\\sigma_I$. The dynamic range can be computed for either a dirty or deconvolved image.\n$$\\textrm{DR} = \\frac{I_{\\textrm{peak}}}{\\sigma_I}$$\nNow this definition of the dynamic range is not well defined. First, how is the peak flux defined? Typically, the peak pixel value anywhere in the image is taken to be the peak flux. But, be careful, changing the resolution of the image will result in different flux values. Decreasing the resolution can result in more flux being included in a single pixel, likewise by increasing the resolution the flux will be spread across more pixels. The second issue is how the noise of the image is computed, possible options are:\n\nUse the entire image\nUse the entire residual image\nRandomly sample the image\nChoose a 'relatively' empty region\n\nThis is not an exhaustive list of methods, but the typical method is option 4. After deconvolution, the image is loaded into a viewer and the standard deviation of the noise is computed from a region the relatively free of sources. As I write this I am aware of how ridiculous that might sound. Using the same image we can see how the dynamic range varies by using these different methods. The dynamic range for the image deconvolved image above is:",
"#load deconvolved image\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-image.fits')\ndeconvImg = fh[0].data\n#load residual image\nfh = fits.open('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits')\nresidImg = fh[0].data\n\npeakI = np.max(deconvImg)\nprint 'Peak Flux: %f Jy'%(peakI)\n\nprint 'Dynamic Range:'\n#method 1\nnoise = np.std(deconvImg)\nprint '\\tMethod 1:', peakI/noise\n\n#method 2\nnoise = np.std(residImg)\nprint '\\tMethod 2:', peakI/noise\n\n#method 3\nnoise = np.std(np.random.choice(deconvImg.flatten(), int(deconvImg.size*.01))) #randomly sample 1% of pixels\nprint '\\tMethod 3:', peakI/noise\n\n#method 4, region 1\nnoise = np.std(deconvImg[0,0,0:128,0:128]) #corner of image\nprint '\\tMethod 4a:', peakI/noise\n\n#method 4, region 2\nnoise = np.std(deconvImg[0,0,192:320,192:320]) #centre of image\nprint '\\tMethod 4b:', peakI/noise",
"Method 1 will always result in a lower dynamic range than Method 2 as the deconvoled image includes the sources where method 2 only uses the residuals. Method 3 will result in a dynamic range which varies depending on the number of pixels sampled and which pixels are sampled. One could imagine an unlucky sampling where every pixel chosen is part of a source, resulting in a large standard deviation. Method 4 depends on the region used to compute the noise. In the Method 4a result a corner of the image, where there are essentially no sources, results in a high dynamic range. On the other hand, choosing the centre region to compute the noise standard deviation results in a low dynamic range. This variation between methods can lead to people playing 'the dynamic range game' where someone can pick the result that best fits what they want to say about the image. Be careful, and make sure your dynamic range metric is well defined and unbiased.\nThere is a qualitative explanation for computing the image noise and the dynamic range by human interaction. Humans are very good at image processing, so we can quickly select regions which are 'noise-like', so it is easier to just look at an image then to try to come up with a complicated algorithm to find these regions. The dynamic range has a number of issues, but it is correlated with image quality. For a fixed visibility set, improving the dynamic range of an image usually results in a improvement in the quality of the image, as determined by a human.\nA significant disadvantage to using dynamic range is that it is a global metric which reduced an image down to a single number. It provides no information about local artefacts. This is becoming an important issue in modern synthesis imaging as we push into imaging significant portions of the primary beam and need to account for direction-dependent effects. These topics are discussed in Chapter 7. But, as is noted in <cite data-cite='taylor1999synthesis'>Synthesis Imaging in Radio Astronomy II (Lecture 13) </cite> ⤴ an valid argument can be made for using dynamic range as a proxy (at least a partial one) for image quality. As of this writing dynamic range is the standard method to measure image quality.\n6.4.2 The Residual Image\nWe have noted that the results of a deconvolution process is a sky model and a residual image. An example residual image is shown below.",
"fig = plt.figure(figsize=(8, 7))\n\ngc1 = aplpy.FITSFigure('../data/fits/deconv/KAT-7_6h60s_dec-30_10MHz_10chans_uniform_n100-residual.fits', \\\n figure=fig)\ngc1.show_colorscale(vmin=-1.5, vmax=3., cmap='viridis')\ngc1.hide_axis_labels()\ngc1.hide_tick_labels()\nplt.title('Residual Image')\ngc1.add_colorbar()\n\nfig.canvas.draw()",
"Figure: Residual image of a KAT-7 observation resulting from CLEAN deconvolution.\nThe image shows that most of the sources bright sources have been deconvolved, but some flux remains. Thus, the centre of the image, where the brightest sources are, is noisier than the edges where the sources are weaker. This is typical as the primary beam is most sensitive at it's centre. This residual image could possibly be further deconvolved but the remaining flux from the sources is close to the image noise and we are in danger of deconvolving into the noise.\nThe residual image provides the best insight to how well the deconvolution and calibration process was preformed. The ideal residual image is completely noise-like with no apparent structure throughout. This ideal is rarely reached as there is often deconvolution or calibration artefacts present. Looking at the residual image you can determine if there are poorly calibrated baselines, RFI present, not enough devolution, the wrong deconvolution parameters, the w-term correction has not been applied, there are direction-dependent effects unaccounted for, remaining extended structure, or any number of other effects. Inspection of the residual image for these different effects requires intuition which will develop with time.\n6.4.3 Image Quality Assessment\nAssessing the quality of a sythnesized image is an open problem in interferometry. By default we use subjective human assessment. But this approach is not very scientific and can result in different measures of quality for the same image. With any hope this section will soon be expanded with better solutions to the image quality assessment problem.\nThe process of deconvolution produces a sky model, but that model may not be realistic in CLEAN where the sky model is a set of $\\delta$-functions even if a source is extended. We can take sky modelling one step further by using source finding techniques to determine what in the sky model is an isolated source, what is a collection of nearby components which make up an extended source, or what is noise which resulted from an imperfect deconvolution. This will be discussed in the next section.\n\nNext: 6.5 Source Finding and Detection\n<div class=warn><b>Future Additions:</b></div>\n\n\nexamples of under and over deconvolution\nexamples of poor deconvolution of extended sources with CLEAN\nexample: deconvolve the same field (cygnus a?) with different methods and show results\nexamples: sources of imaging artefacts, need real data examples"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tjwei/HackNTU_Data_2017
|
Week07/01-Keras-pretrained.ipynb
|
mit
|
[
"# windows only hack for graphviz path \nimport os\nfor path in os.environ['PATH'].split(os.pathsep):\n if path.endswith(\"Library\\\\bin\"):\n os.environ['PATH']+=os.pathsep+os.path.join(path, 'graphviz')\n\nimport keras\nfrom keras.models import Sequential\nfrom PIL import Image\nimport numpy as np\n\nimport keras.backend as K\n# 設定 channels first\nK.set_image_data_format('channels_last')\n\n# 第一次使用時,系統會下載權重,會需要一點時間\npretrained = keras.applications.vgg16.VGG16()\n\npretrained\n\n# 看一下網路的樣子\nfrom IPython.display import SVG, display\nfrom keras.utils.vis_utils import model_to_dot\n\nSVG(model_to_dot(pretrained, show_shapes=True).create(prog='dot', format='svg'))",
"看一下 imagenet 的分類",
"from keras.applications import imagenet_utils\n\nimagenet_utils.CLASS_INDEX_PATH\n\nfrom urllib.request import urlopen\nimport json\nwith urlopen(imagenet_utils.CLASS_INDEX_PATH) as jsonf:\n data = jsonf.read()\n\nclass_dict = json.loads(data.decode())\n[class_dict[str(i)][1] for i in range(1000)]",
"Imagenet 2012 網頁\nhttp://image-net.org/challenges/LSVRC/2012/signup\n資料下載\nhttp://academictorrents.com/browse.php?search=imagenet\n一千張圖片\nhttps://www.dropbox.com/s/vippynksgd8c6qt/ILSVRC2012_val_1000.tar?dl=0",
"# 下載 圖片\nimport os\nimport urllib\nfrom urllib.request import urlretrieve\ndataset = 'ILSVRC2012_val_1000.tar'\ndef reporthook(a,b,c):\n print(\"\\rdownloading: %5.1f%%\"%(a*b*100.0/c), end=\"\")\n \nif not os.path.isfile(dataset):\n origin = \"https://www.dropbox.com/s/vippynksgd8c6qt/ILSVRC2012_val_1000.tar?dl=1\"\n print('Downloading data from %s' % origin)\n urlretrieve(origin, dataset, reporthook=reporthook)\n\n# 解開圖片\nfrom tarfile import TarFile\ntar = TarFile(dataset)\ntar.extractall()\n\n# 讀取圖片\nfrom PIL import Image as pimage\nfrom glob import glob\nimgs = []\nfiles = list(glob('ILSVRC2012_img_val/ILSVRC2012_val_*.JPEG'))\nfor fn in files:\n img = pimage.open(fn)\n if img.mode != 'RGB':\n img = img.convert('RGB')\n img = np.array(img.resize((224,224))) \n imgs.append(img)\nimgs = np.array(imgs)\n\n# 準備資料,轉成通用的格式(扣掉顏色的中間值)\np_imgs = imagenet_utils.preprocess_input(np.float32(imgs))\ndel imgs\n\n# 實際\npredictions = pretrained.predict(p_imgs)\n\n# 對應編碼\nresults = imagenet_utils.decode_predictions(predictions)\n\nfrom IPython.display import Image, HTML, display\nfor fn, res in zip(files[:100], results[:100]):\n res_text = \"\".join(\"<li>{:05.2f}% : {}</li>\".format(x[2]*100, x[1]) for x in res)\n display(HTML(\"\"\"\n <table><tr>\n <td><img width=200 src=\"{}\" /></td>\n <td><ul>{}</ul></td>\n </tr>\n </table>\n \"\"\".format(fn, res_text))) "
] |
[
"code",
"markdown",
"code",
"markdown",
"code"
] |
sf-wind/caffe2
|
caffe2/python/tutorials/Multi-GPU_Training.ipynb
|
apache-2.0
|
[
"Multi-GPU Training with Caffe2\n\nFor this tutorial we will explore multi-GPU training. We will show you a basic structure for using the data_parallel_model to quickly process a subset of the ImageNet database along the same design as the ResNet-50 model. We will also get a chance to look under the hood at a few of Caffe2's C++ operators that efficiently handle your image pipeline, build a ResNet model, train on a single GPU and show some optimizations that are included with data_parallel_model, and finally we'll scale it up and show you how to parallelize your model so you can run it on multiple GPUs.\nAbout the Dataset\nA commonly used dataset for benchmarking image recognition technologies is ImageNet. It is huge. It has images that cover the gamut, and they're categorized by labels so that you can create image subsets of animals, plants, fungi, people, objects, you name it. It's the focus of yearly competitions and this is where deep learning and convolutional neural networks (CNN) really made its name. During the 2012 ImageNet Large-Scale Visual Recognition Challenge a CNN demonstrated accuracy more than 10% beyond the next competing method. Going from around 75% accuracy to around 85% accuracy when every year the gains were only a percent or two is a significant accomplishment. \n\nSo let's play with ImageNet and train our own model on a bunch of GPUs! You're going to need a lot space to host the 14 million images in ImageNet. How much disk space do you have? You should clear up about 300GB of space... on SSD. Spinning discs are so 2000. How much time do you have? With two GPUs maybe we'll be done in just under a week. Ready?\n\nThat's way too much space and way too long for a tutorial! If you happened to have that much space and 128 GPUs on the latest NVIDIA V100's then you're super awesome and you can replicate our recent results shown below. You might even be able to train ImageNet in under an hour. Given how this performance seems to scale, maybe YOU can train ImageNet in a minute! Think about all of the things you could accomplish... a model for millions of hours of video? Catalogue every cat video on YouTube? Look for your doppleganger on Imgur?\nInstead of tons of GPUs and the full set of data, we're going to do this cooking show style. We're going to use a small batch images to train on, and show how you can scale that up. We chose a small slice of ImageNet: a set of 640 cars and 640 boats for our training set. We have 48 cars and 48 boats for our test set. This makes our database of images around 130 MB.\nResNet-50 Model Training Overview\nBelow is an overview of what is needed to train and test this model across multiple GPUs. You see that it is generally not that long, nor is it that complicated. Some of the interactions for creating the parallelized model are handled by custom functions you have to write and we'll go over those later.\n\nuse brew to create a model for training (we'll create one for testing later)\ncreate a database reader using the model helper object's CreateDB to pull the images\ncreate functions to run a ResNet-50 model for one or more GPUs\ncreate the parallelized model\nloop through the number of epochs you want to run, then for each epoch\nrun the train model till you finish each batch of images\nrun the test model\ncalculate times, accuracies, and display the results\n\n\n\nPart 1: Setup\nYour first assignment is to get your training and testing image database setup. We've created one for you and all you have to do run the code block below. 
This assumes you know how to use IPython. When we say run a code block, you can click the block and hit the Play button above or hit Ctrl-Enter on your keyboard. If this is news to you it is advisable that you start with introductory tutorials and get used to IPython and Caffe2 basics first.\nThe code below will download a small database of boats and cars images and their labels for you if it doesn't already exist. The images were pulled from ImageNet and added to a lmdb format database. You can download it directly here unzip it, and change the folder locations to an NFS if that better suits your situation. The tutorial's default location is for you to place it in ~/caffe2_notebooks/tutorial_data/resnet_trainer.\nYou can also swap out the database with your own as long as it is in lmdb and you change the train_data_count and test_data_count variables below. For your first time just use that database we made for you.\nWe're going to give you all the dependencies needed for the tutorial in the block below. \nTask: Run the Setup Code\nRead and then run the code block below. Note what modules are being imported and where we're accessing the database. Note and troubleshoot any errors in case something is wrong with your environment. Don't worry about the nccl and gloo warning messages.",
"from caffe2.python import core, workspace, model_helper, net_drawer, memonger, brew\nfrom caffe2.python import data_parallel_model as dpm\nfrom caffe2.python.models import resnet\nfrom caffe2.proto import caffe2_pb2\n\nimport numpy as np\nimport time\nimport os\nfrom IPython import display\n \nworkspace.GlobalInit(['caffe2', '--caffe2_log_level=2'])\n\n# This section checks if you have the training and testing databases\ncurrent_folder = os.path.join(os.path.expanduser('~'), 'caffe2_notebooks')\ndata_folder = os.path.join(current_folder, 'tutorial_data', 'resnet_trainer')\n\n# Train/test data\ntrain_data_db = os.path.join(data_folder, \"imagenet_cars_boats_train\")\ntrain_data_db_type = \"lmdb\"\n# actually 640 cars and 640 boats = 1280\ntrain_data_count = 1280\ntest_data_db = os.path.join(data_folder, \"imagenet_cars_boats_val\")\ntest_data_db_type = \"lmdb\"\n# actually 48 cars and 48 boats = 96\ntest_data_count = 96\n\n# Get the dataset if it is missing\ndef DownloadDataset(url, path):\n import requests, zipfile, StringIO\n print(\"Downloading {} ... \".format(url))\n r = requests.get(url, stream=True)\n z = zipfile.ZipFile(StringIO.StringIO(r.content))\n z.extractall(path)\n print(\"Done downloading to {}!\".format(path))\n\n# Make the data folder if it doesn't exist\nif not os.path.exists(data_folder):\n os.makedirs(data_folder)\nelse:\n print(\"Data folder found at {}\".format(data_folder))\n# See if you already have to db, and if not, download it\nif not os.path.exists(train_data_db):\n DownloadDataset(\"https://download.caffe2.ai/databases/resnet_trainer.zip\", data_folder) ",
"Task: Check the Database\nTake a look at your data folder. You should find two subfolders, each of which will contain a single data.mdb file (or possibly also a lock file):\n1. imagenet_cars_boats_train (train for training, not locomotives!)\n2. imagenet_cars_boats_val (val for validation or testing)\nPart 2: Configure the Training\nBelow you can tinker with some of the settings for how the model will be created. One obvious setting to try is the gpus. By removing one or adding one you're directly impacting the amount of time it will take to run even on this small dataset.\nbatch_per_device is the number of images processed at a time on each GPU. Using the default of 32 for 2 GPUs will equate to 32 images on each GPU for a total of 64 per mini-batch, so we'll go through the whole database and complete an epoch in 20 iterations. This is something you would want to adjust if you're sharing the GPU or otherwise want to adjust how much memory this training run is going to take up. You can see in the line below it being set to 32 we're adjusting the total_batch_size based on the number of GPUs.\nbase_learning_rate and weight_decay will both influence training and can be interesting to change and witness the impact on accuracy or confidence is the results that are shown in the last section of this tutorial.",
"# Configure how you want to train the model and with how many GPUs\n# This is set to use two GPUs in a single machine, but if you have more GPUs, extend the array [0, 1, 2, n]\ngpus = [0]\n\n# Batch size of 32 sums up to roughly 5GB of memory per device\nbatch_per_device = 32\ntotal_batch_size = batch_per_device * len(gpus)\n\n# This model discriminates between two labels: car or boat\nnum_labels = 2\n\n# Initial learning rate (scale with total batch size)\nbase_learning_rate = 0.0004 * total_batch_size\n\n# only intends to influence the learning rate after 10 epochs\nstepsize = int(10 * train_data_count / total_batch_size)\n\n# Weight decay (L2 regularization)\nweight_decay = 1e-4",
"Part 3:\nUsing Caffe2 Operators to Create a CNN\nCaffe2 comes with ModelHelper which will do a lot of the heavy lifting for you when setting up a model. Throughout the docs and tutorial this may also be called a model helper object. The only required parameter is name. It is an arbitrary name for referencing the network in your workspace: you could call it tacos or boatzncarz. For example:\npython\ntaco_model = model_helper.ModelHelper(name=\"tacos\")\nYou should also reset your workspace if you run these parts multiple times. Do this just before creating the new model helper object.\npython\nworkspace.ResetWorkspace()\nReading from the Database\nAnother handy function for feeding your network with images is CreateDB, which in this case we need to serve as a database reader for the database we've already created. You can create a reader object like this: \npython\nreader = taco_model.CreateDB(name, db, db_type)\nTask: Create a Model Helper Object\nRemember, we have two databases and each will have their own model, but for now we only need to create the training model for the training db. Use the Work Area below. Also, while you do this, experiment with IPython's development hooks by typing the first part of the name from the imported class or module and hitting the tab key. For example when creating the object you type: train_model = model_helper. and after the dot, hit \"tab\". You should see a full list of available functions. Then when you choose ModelHelper hit \"(\" then hit tab and you should see a full list of params. This is very handy when you're exploring new modules and their functions!\nTask: Create a Reader\nWe also need one reader. We have established the db location, train_data_db, and type, train_data_db_type, in \"Part 1: Setup\", so all you have to do is name it and pass in the configs. Again, name is arbitrary so you could call it \"kindle\" if you wanted. Use the Work Area below, and when you are finished run the code block.",
"# LAB WORK AREA FOR PART 3\n\n# Clear workspace to free allocated memory, in case you are running this for a second time.\nworkspace.ResetWorkspace()\n\n# 1. Create your model helper object for the training model with ModelHelper\n\n\n# 2. Create your database reader with CreateDB\n\n",
"Part 4: Image Transformations (requires Caffe2 to be compiled with opencv)\nNow that we have a reader we should take a look at how we're going to process the images. Since images that are found in the wild can be wildly different sizes, aspect ratios, and orientations we can and should train on as much variety as we can. ImageNet is no exception here. The average resolution is 496x387, and as interesting as that factoid might be, the bottom line is that you have a lot of variation. \nAs the training images are ingested we would want to conform them to a standard size. The most direct process of doing so could follow a simple ingest where you transform the image to 256x256. We talked about the drawbacks of doing this in Image Pre-Processing. Therefore for more accurate results, we should probably rescale, then crop. Even this approach with cropping has the drawbacks of losing some info from the original photo. What get chopped off doesn't make into the training data. If you ran the pre-processing tutorial on the image of the astronauts you will recall that some of the astronauts didn't make the cut. Where'd they go? Wash-out lane? Planet of the Apes? If your model was to detect people, then those lost astronauts would not be getting due credit when you run inference or face detection later using the model.\nIntroducing... the ImageInput Operator\nWhat could be seen as a loss turns into an opportunity. You can crop randomly around the image to create many deriviates of the original image, boosting your training data set, thereby adding robustness to the model. What if the image only has half a car or the front of a boat? You still want your model to be able to detect it! In the image below only the front a boat is shown and the model shows a 50% confidence in detection.\n\nCaffe2 has a solution for this in its ImageInput operator, a C++ image manipulation op that's used under the hood of several of the Caffe2 Python APIs.\nHere is a reference implementation:\npython\ndef add_image_input_ops(model):\n # utilize the ImageInput operator to prep the images\n data, label = model.ImageInput(\n reader,\n [\"data\", \"label\"],\n batch_size=batch_per_device,\n # mean: to remove color values that are common\n mean=128.,\n # std is going to be modified randomly to influence the mean subtraction\n std=128.,\n # scale to rescale each image to a common size\n scale=256,\n # crop to the square each image to exact dimensions\n crop=224,\n # not running in test mode\n is_test=False,\n # mirroring of the images will occur randomly\n mirror=1\n )\n # prevent back-propagation: optional performance improvement; may not be observable at small scale\n data = model.StopGradient(data, data)\n\nmean: remove info that's common in most images\nstd: used to create a randomization for both cropping and mirroring\nscale: downres each image so that its shortest side matches this base resolution\ncrop: the image size we want every image to be (using random crops from the scaled down image)\nmirror: randomly mirror the images so we can train on both representations\n\nThe StopGradient operator does no numerical computation. It is used here to prevent back propagation which isn't wanted in this network.\nTask: Implement the InputImage Operator\nUse the Work Area below to finish the stubbed out function. Refer to the reference implementation for help on this task. \n\nWhat happens if you don't add a mean, don't add a std, or don't mirror. 
How does this change your accuracy when you run it for many epochs?\nWhat would happen if we didn't do StopGradient?",
"# LAB WORK AREA FOR PART 4\n\ndef add_image_input_ops(model):\n raise NotImplementedError # Remove this from the function stub\n ",
"Part 5: Creating a Residual Network\nNow you get the opportunity to use Caffe2's Resnet-50 creation function! During our Setup we from caffe2.python.models import resnet. We can use that for our create_resnet50_model_ops function that we still need to create and the main part of that will be the resnet.create_resnet50() function as described below:\npython\ncreate_resnet50(\n model, \n data, \n num_input_channels, \n num_labels, \n label=None, \n is_test=False, \n no_loss=False, \n no_bias=0, \n conv1_kernel=7, \n conv1_stride=2, \n final_avg_kernel=7\n)\nBelow is a reference implementation of the function using resnet.create_resnet50().\npython\ndef create_resnet50_model_ops(model, loss_scale):\n # Creates a residual network\n [softmax, loss] = resnet.create_resnet50(\n model,\n \"data\",\n num_input_channels=3,\n num_labels=num_labels,\n label=\"label\",\n )\n prefix = model.net.Proto().name\n loss = model.Scale(loss, prefix + \"_loss\", scale=loss_scale)\n model.Accuracy([softmax, \"label\"], prefix + \"_accuracy\")\n return [loss]\nTask: Implement the forward_pass_builder_fun Using Resnet-50\nIn the code block above where we stubbed out the create_resnet50_model_ops function, utilize resnet.create_resnet50() to create a residual network, then returning the loss. Refer to the reference implementation for help on this task.\n\nBonus points: if you take a look at the resnet class in the Caffe2 docs you'll notice a function to create a 32x32 model. Try it out.",
"# LAB WORK AREA FOR PART 5\n\ndef create_resnet50_model_ops(model, loss_scale):\n raise NotImplementedError #remove this from the function stub\n ",
"Part 6: Make the Network Learn\nCaffe2 model helper object has several built in functions that will help with this learning by using backpropagation where it will be adjusting weights as it runs through iterations.\n\nAddWeightDecay\nIter\nnet.LearningRate\n\nBelow is a reference implementation:\n```python\ndef add_parameter_update_ops(model):\n model.AddWeightDecay(weight_decay)\n iter = model.Iter(\"iter\")\n lr = model.net.LearningRate(\n [iter],\n \"lr\",\n base_lr=base_learning_rate,\n policy=\"step\",\n stepsize=stepsize,\n gamma=0.1,\n )\n # Momentum SGD update\n for param in model.GetParams():\n param_grad = model.param_to_grad[param]\n param_momentum = model.param_init_net.ConstantFill(\n [param], param + '_momentum', value=0.0\n )\n # Update param_grad and param_momentum in place\n model.net.MomentumSGDUpdate(\n [param_grad, param_momentum, lr, param],\n [param_grad, param_momentum, param],\n momentum=0.9,\n # Nesterov Momentum works slightly better than standard momentum\n nesterov=1,\n )\n\n```\nTask: Implement the forward_pass_builder_fun Using Resnet-50\nSeveral of our Configuration variables will get used in this step. Take a look at the Configuration section from Part 2 and refresh your memory. We stubbed out the add_parameter_update_ops function, so to finish it, utilize model.AddWeightDecay and set weight_decay. Calculate your stepsize using int(10 * train_data_count / total_batch_size) or pull the value from the config. Instantiate the learning iterations with iter = model.Iter(\"iter\"). Use model.net.LearningRate() to finalize your parameter update operations. You can optionally update you SGD's momentum. It might not make a difference in this small implementation, but if you're gonna go big later, then you'll want to do this.\nRefer to the reference implementation for help on this task.",
"# LAB WORK AREA FOR PART 6\n\ndef add_parameter_update_ops(model):\n raise NotImplementedError #remove this from the function stub\n ",
"Part 7: Gradient Optimization\nIf you run the network as is you may have issues with memory. Without memory optimization we could reduce the batch size, but we shouldn't have to do that. Caffe2 has a memonger function for this purpose which will find ways to reuse gradients that we created. Below is a reference implementation.\npython\ndef optimize_gradient_memory(model, loss):\n model.net._net = memonger.share_grad_blobs(\n model.net,\n loss,\n set(model.param_to_grad.values()),\n # Due to memonger internals, we need a namescope here. Let's make one up; we'll need it later!\n namescope=\"imonaboat\",\n share_activations=False)\nTask: Implement memonger\nWe're going to use the reference for help here, otherwise it is a little difficult to cover for the scope of this tutorial. The function is ready to go for you, but you should still soak up what's been done in this function. One of the key gotchas here is making sure you give it a namescope so that you can access the gradients you'll be creating in the next step. This name can be anything.",
"# LAB WORK AREA FOR PART 7\n\ndef optimize_gradient_memory(model, loss):\n raise NotImplementedError # Remove this from the function stub",
"Part 8: Training the Network with One GPU\nNow that you've established be basic components to run ResNet-50, you can try it out on one GPU. Now, this could be a lot easier just going straight into the data_parallel_model and all of its optimizations, but to help explain the components needed and to build the helper functions to run GPU_Parallelize, we may as well start simple! \nIf you're paying attention you might be wondering about the gpus array we made in the config and how that might throw things off. Also, when we looked at the config earlier you may have updated gpus[0] to have more than one GPU. That's fine. We can leave it like that for the next part because we will force our script to use just one GPU.\nLet's stitch together those functions from Parts 4-7 to run our residual network! Take a look at the code below, so you understand how the pieces fit together.\n```python\nWe need to give the network context and force it to run on the first GPU even if there are more.\ndevice_opt = core.DeviceOption(caffe2_pb2.CUDA, gpus[0])\nHere's where that NameScope comes into play\nwith core.NameScope(\"imonaboat\"):\n # Picking that one GPU\n with core.DeviceScope(device_opt):\n # Run our reader, and create the layers that transform the images\n add_image_input_ops(train_model)\n # Generate our residual network and return the losses\n losses = create_resnet50_model_ops(train_model)\n # Create gradients for each loss\n blobs_to_gradients = train_model.AddGradientOperators(losses)\n # Kick off the learning and managing of the weights\n add_parameter_update_ops(train_model)\n # Optimize memory usage by consolidating where we can\n optimize_gradient_memory(train_model, [blobs_to_gradients[losses[0]]])\nStartup the network\nworkspace.RunNetOnce(train_model.param_init_net)\nLoad all of the initial weights; overwrite lets you run this multiple times\nworkspace.CreateNet(train_model.net, overwrite=True)\n```\nTask: Pull It All Together & Run It!\nThings are getting a little hairy, so we gave you the full reference ready to go. Just run the code block below (hit ctrl-enter). Normally you might not use overwrite=True since that could be bad for what you're doing by accidentally erasing your earlier work, so try removing it and running the block multiple times to see what happens. Imagine the case where you have multiple networks going that have the same name. You don't want to overwrite, so you might want to start up a new workspace or modify the names.",
"# LAB WORK AREA FOR PART 8\n\ndevice_opt = core.DeviceOption(caffe2_pb2.CUDA, gpus[0])\nwith core.NameScope(\"imonaboat\"):\n with core.DeviceScope(device_opt):\n add_image_input_ops(train_model)\n losses = create_resnet50_model_ops(train_model)\n blobs_to_gradients = train_model.AddGradientOperators(losses)\n add_parameter_update_ops(train_model)\n optimize_gradient_memory(train_model, [blobs_to_gradients[losses[0]]])\n\n\nworkspace.RunNetOnce(train_model.param_init_net)\nworkspace.CreateNet(train_model.net, overwrite=True)",
"Part 8 ... part ~~2~~ Deux: Train!\nHere's the fun part where you can tinker with the number of epochs to run and mess with the display. We'll leave this for you to play with as a fait accompli since you worked so hard to get this far!",
"num_epochs = 1\nfor epoch in range(num_epochs):\n # Split up the images evenly: total images / batch size\n num_iters = int(train_data_count / total_batch_size)\n for iter in range(num_iters):\n # Stopwatch start!\n t1 = time.time()\n # Run this iteration!\n workspace.RunNet(train_model.net.Proto().name)\n t2 = time.time()\n dt = t2 - t1\n \n # Stopwatch stopped! How'd we do?\n print((\n \"Finished iteration {:>\" + str(len(str(num_iters))) + \"}/{}\" +\n \" (epoch {:>\" + str(len(str(num_epochs))) + \"}/{})\" + \n \" ({:.2f} images/sec)\").\n format(iter+1, num_iters, epoch+1, num_epochs, total_batch_size/dt))",
"Part 9: Getting Parallelized\nYou get bonus points if you can say \"getting parallelized\" three times fast without messing up. You just saw some interesting numbers in the last step. Take note of those and see how things scale up when we use more GPUs. \nWe're going to use Caffe2's data_parallel_model and its function called Parallelize_GPU to help us accomplish this task. The task to setup the parallel model, not to say it fast. Here's the spec on Parallelize_GPU:\npython\nParallelize_GPU(\n model_helper_obj, \n input_builder_fun, \n forward_pass_builder_fun, \n param_update_builder_fun, \n devices=range(0, workspace.NumCudaDevices()), \n rendezvous=None, \n net_type='dag', \n broadcast_computed_params=True, \n optimize_gradient_memory=False)\nWe're not ready to just call this function though. As you can see in the second, third, and fourth input parameters, they are expecting functions to be passed to them. More API details here. The three functions expected are:\n\ninput_build_fun: adds the input operators. Note: Remember to instantiate reader outside of this function so all GPUs share same reader object. Signature: input_builder_fun(model)\nforward_pass_builder_fun: adds the operators to the model. Must return list of loss-blob references that are used to build the gradient. Loss scale parameter is passed, as you should scale the loss of your model by 1.0 / the total number of gpus. Signature: forward_pass_builder_fun(model, loss_scale)\nparam_update_builder_fun: adds operators that are run after gradient update, such as updating the weights and weight decaying. Signature: param_update_builder_fun(model)\n\nFor the input_build_fun we're going to use the reader we created with CreateDB along with a function that leverages Caffe2's ImageInput operator. Sound familiar? You already did this in Part 4!\nFor the forward_pass_builder_fun we need to have residual neural network. You already did this in Part 5!\nFor the param_update_builder_fun we need a function to adjust the weights as the network runs. You already did this in Part 6! \nLet's stub out the Parallelize_GPU function with the parameters that we're going to use. Recall that in the setup we from caffe2.python import data_parallel_model as dpm, so we can use dpm.Parallelize_GPU() to access the Parallelize_GPU function. First we'll stub out the three other functions to that this expects, add the params based on these functions names and our gpu count, then come back to the lab cell below to populate them with some logic and test them. Below is a reference implementation:\npython\ndpm.Parallelize_GPU(\n train_model,\n input_builder_fun=add_image_input_ops,\n forward_pass_builder_fun=create_resnet50_model_ops,\n param_update_builder_fun=add_parameter_update_ops,\n devices=gpus,\n optimize_gradient_memory=True,\n)\nTask: Make Your Helper Functions\nYou already did this the Parts 4 through 6 and in Part 7 you had to deal with gradient optimizations that are baked into Parallelize_GPU. The three helper function stubs below can be eliminated or if you want to see everything together go ahead and copy the functions there, so you can run them from the work area block below.\nTask: Parallelize!\nNow you can stub out a call to Parallelize_GPU. 
Use the reference implementation above if you get stuck.\n* model_helper_object: created in Part 3; maybe you called it taco_model, or if you weren't copying and pasting you thoughtfully called it train_model or training_model.\n* Now pass the function name for each of the three functions you just created, e.g. input_builder_fun=add_image_input_ops\n* devices: we can pass in our gpus array from our earlier Setup.\n* optimize_gradient_memory: the default is False but let's set it to True; this takes care of what we had to do in Step 7 with memonger.\n* other params: ignore/don't pass anything to accept their defaults",
"# LAB WORK AREA for Part 9\n\n# Reinitializing our configuration variables to accomodate 2 (or more, if you have them) GPUs.\ngpus = [0, 1]\n\n# Batch size of 32 sums up to roughly 5GB of memory per device\nbatch_per_device = 32\ntotal_batch_size = batch_per_device * len(gpus)\n\n# This model discriminates between two labels: car or boat\nnum_labels = 2\n\n# Initial learning rate (scale with total batch size)\nbase_learning_rate = 0.0004 * total_batch_size\n\n# only intends to influence the learning rate after 10 epochs\nstepsize = int(10 * train_data_count / total_batch_size)\n\n# Weight decay (L2 regularization)\nweight_decay = 1e-4\n\n# Clear workspace to free network and memory allocated in previous steps.\nworkspace.ResetWorkspace()\n\n# Create input_build_fun\ndef add_image_input_ops(model):\n # This will utilize the reader to pull images and feed them to the training model's helper object\n # Use the model.ImageInput operator to load data from reader & apply transformations to the images.\n raise NotImplementedError # Remove this from the function stub\n \n\n# Create forward_pass_builder_fun\ndef create_resnet50_model_ops(model, loss_scale):\n # Use resnet module to create a residual net\n raise NotImplementedError # Remove this from the function stub\n\n\n# Create param_update_builder_fun\ndef add_parameter_update_ops(model):\n raise NotImplementedError # Remove this from the function stub\n\n \n# Create new train model\ntrain_model = NotImplementedError\n\n# Create new reader\nreader = NotImplementedError\n\n# Create parallelized model using dpm.Parallelize_GPU\n\n\n# Use workspace.RunNetOnce and workspace.CreateNet to fire up the train network\nworkspace.RunNetOnce(train_model.param_init_net)\nworkspace.CreateNet(train_model.net, overwrite=True)",
"Part 10: Create a Test Model\nAfter every epoch of training, we like to run some validation data through our model to see how it performs.\nLike training, this is another net, with its own data reader. Unlike training, this net does not perform backpropagation. It only does a forward pass and compares the output of the network with the label of the validation data.\nYou've already done these steps once before when you created the training network, so do it again, but name it something different, like \"test\".\nTask: Create a Test Model\n\nUse ModelHelper to create a model helper object called \"test\"\nUse CreateDB to create a reader and call it \"test_reader\"\nUse Parallelize_GPU to parallelize the model, but set param_update_builder_fun=None to skip backpropagation\nUse workspace.RunNetOnce and workspace.CreateNet to fire up the test network",
"# LAB WORK AREA for Part 10\n\n# Create your test model with ModelHelper\n\n\n# Create your reader with CreateDB\n\n\n# Use multi-GPU with Parallelize_GPU, but don't utilize backpropagation\n\n\n# Use workspace.RunNetOnce and workspace.CreateNet to fire up the test network\nworkspace.RunNetOnce(test_model.param_init_net)\nworkspace.CreateNet(test_model.net, overwrite=True)\n",
"Get Ready to Display the Results\nAt the end of every epoch we will take a look at how the network performs visually. We will also report on the accuracy of the training model and the test model. Let's not force you to write your own reporting and display code, so just run the code block below to get those features ready.",
"%matplotlib inline\nfrom caffe2.python import visualize\nfrom matplotlib import pyplot as plt\n\ndef display_images_and_confidence():\n images = []\n confidences = []\n n = 16\n data = workspace.FetchBlob(\"gpu_0/data\")\n label = workspace.FetchBlob(\"gpu_0/label\")\n softmax = workspace.FetchBlob(\"gpu_0/softmax\")\n for arr in zip(data[0:n], label[0:n], softmax[0:n]):\n # CHW to HWC, normalize to [0.0, 1.0], and BGR to RGB\n bgr = (arr[0].swapaxes(0, 1).swapaxes(1, 2) + 1.0) / 2.0\n rgb = bgr[...,::-1]\n images.append(rgb)\n confidences.append(arr[2][arr[1]])\n\n # Create grid for images\n fig, rows = plt.subplots(nrows=4, ncols=4, figsize=(12, 12))\n plt.tight_layout(h_pad=2)\n\n # Display images and the models confidence in their label\n items = zip([ax for cols in rows for ax in cols], images, confidences)\n for (ax, image, confidence) in items:\n ax.imshow(image)\n if confidence >= 0.5:\n ax.set_title(\"RIGHT ({:.1f}%)\".format(confidence * 100.0), color='green')\n else:\n ax.set_title(\"WRONG ({:.1f}%)\".format(confidence * 100.0), color='red')\n\n plt.show()\n\n \ndef accuracy(model):\n accuracy = []\n prefix = model.net.Proto().name\n for device in model._devices:\n accuracy.append(\n np.asscalar(workspace.FetchBlob(\"gpu_{}/{}_accuracy\".format(device, prefix))))\n return np.average(accuracy)",
"Part 11: Run Multi-GPU Training and Get Test Results\nYou've come a long way. Now is the time to see it all pay off. Since you already ran ResNet once, you can glance at the code below and run it. The big difference this time is your model is parallelized! \nThe additional components at the end deal with accuracy so you may want to dig into those specifics as a bonus task. You can try it again: just adjust the num_epochs value below, run the block, and see the results. You can also go back to Part 10 to reinitialize the model, and run this step again. (You may want to add workspace.ResetWorkspace() before you run the new models again.)\nGo back and check the images/sec from when you ran single GPU. Note how you can scale up with a small amount of overhead. \nTask: How many GPUs would it take to train ImageNet in under a minute?",
"# Start looping through epochs where we run the batches of images to cover the entire dataset\n# Usually you would want to run a lot more epochs to increase your model's accuracy\nnum_epochs = 2\nfor epoch in range(num_epochs):\n # Split up the images evenly: total images / batch size\n num_iters = int(train_data_count / total_batch_size)\n for iter in range(num_iters):\n # Stopwatch start!\n t1 = time.time()\n # Run this iteration!\n workspace.RunNet(train_model.net.Proto().name)\n t2 = time.time()\n dt = t2 - t1\n \n # Stopwatch stopped! How'd we do?\n print((\n \"Finished iteration {:>\" + str(len(str(num_iters))) + \"}/{}\" +\n \" (epoch {:>\" + str(len(str(num_epochs))) + \"}/{})\" + \n \" ({:.2f} images/sec)\").\n format(iter+1, num_iters, epoch+1, num_epochs, total_batch_size/dt))\n \n # Get the average accuracy for the training model\n train_accuracy = accuracy(train_model)\n \n # Run the test model and assess accuracy\n test_accuracies = []\n for _ in range(test_data_count / total_batch_size):\n # Run the test model\n workspace.RunNet(test_model.net.Proto().name)\n test_accuracies.append(accuracy(test_model))\n test_accuracy = np.average(test_accuracies)\n\n print(\n \"Train accuracy: {:.3f}, test accuracy: {:.3f}\".\n format(train_accuracy, test_accuracy))\n \n # Output images with confidence scores as the caption\n display_images_and_confidence()\n",
"If you enjoyed this tutorial and would like to see it in action in a different way, check Caffe2's Python examples to try a script version of this multi-GPU trainer. We also have some more info below in the Appendix and a Solutions section that you can use to run the expected output of this tutorial.\nAppendix\nHere are a few things you may want to play with.\nExplore the workspace and the protobuf outputs",
"print(str(train_model.param_init_net.Proto())[:1000] + '\\n...')",
"Solutions\nThis section below contains working examples for your reference. You should be able to execute these cells in order and see the expected output. Note: this assumes you have at least 2 GPUs",
"# SOLUTION for Part 1\n\nfrom caffe2.python import core, workspace, model_helper, net_drawer, memonger, brew\nfrom caffe2.python import data_parallel_model as dpm\nfrom caffe2.python.models import resnet\nfrom caffe2.proto import caffe2_pb2\n\nimport numpy as np\nimport time\nimport os\nfrom IPython import display\n \nworkspace.GlobalInit(['caffe2', '--caffe2_log_level=2'])\n\n# This section checks if you have the training and testing databases\ncurrent_folder = os.path.join(os.path.expanduser('~'), 'caffe2_notebooks')\ndata_folder = os.path.join(current_folder, 'tutorial_data', 'resnet_trainer')\n\n# Train/test data\ntrain_data_db = os.path.join(data_folder, \"imagenet_cars_boats_train\")\ntrain_data_db_type = \"lmdb\"\n# actually 640 cars and 640 boats = 1280\ntrain_data_count = 1280\ntest_data_db = os.path.join(data_folder, \"imagenet_cars_boats_val\")\ntest_data_db_type = \"lmdb\"\n# actually 48 cars and 48 boats = 96\ntest_data_count = 96\n\n# Get the dataset if it is missing\ndef DownloadDataset(url, path):\n import requests, zipfile, StringIO\n print(\"Downloading {} ... \".format(url))\n r = requests.get(url, stream=True)\n z = zipfile.ZipFile(StringIO.StringIO(r.content))\n z.extractall(path)\n print(\"Done downloading to {}!\".format(path))\n\n# Make the data folder if it doesn't exist\nif not os.path.exists(data_folder):\n os.makedirs(data_folder)\nelse:\n print(\"Data folder found at {}\".format(data_folder))\n# See if you already have to db, and if not, download it\nif not os.path.exists(train_data_db):\n DownloadDataset(\"https://download.caffe2.ai/databases/resnet_trainer.zip\", data_folder) \n\n# PART 1 TROUBLESHOOTING\n\n# lmdb error or unable to open database: look in the database folder from terminal and (sudo) delete the lock file and try again\n\n# SOLUTION for Part 2\n\n# Configure how you want to train the model and with how many GPUs\n# This is set to use two GPUs in a single machine, but if you have more GPUs, extend the array [0, 1, 2, n]\ngpus = [0, 1]\n\n# Batch size of 32 sums up to roughly 5GB of memory per device\nbatch_per_device = 32\ntotal_batch_size = batch_per_device * len(gpus)\n\n# This model discriminates between two labels: car or boat\nnum_labels = 2\n\n# Initial learning rate (scale with total batch size)\nbase_learning_rate = 0.0004 * total_batch_size\n\n# only intends to influence the learning rate after 10 epochs\nstepsize = int(10 * train_data_count / total_batch_size)\n\n# Weight decay (L2 regularization)\nweight_decay = 1e-4\n\n# SOLUTION for Part 3\n\nworkspace.ResetWorkspace()\n# 1. Use the model helper to create a CNN for us\ntrain_model = model_helper.ModelHelper(\n # Arbitrary name for referencing the network in your workspace: you could call it tacos or boatzncarz\n name=\"train\",\n)\n\n\n# 2. 
Create a database reader\n# This training data reader is shared between all GPUs.\n# When reading data, the trainer runs ImageInputOp for each GPU to retrieve their own unique batch of training data.\n# CreateDB is inherited by ModelHelper from model_helper.py\n# We are going to name it \"train_reader\" and pass in the db configurations we set earlier\nreader = train_model.CreateDB(\n \"train_reader\",\n db=train_data_db,\n db_type=train_data_db_type,\n)\n\n\n# SOLUTION for Part 4\n\ndef add_image_input_ops(model):\n # utilize the ImageInput operator to prep the images\n data, label = brew.image_input(\n model,\n reader,\n [\"data\", \"label\"],\n batch_size=batch_per_device,\n # mean: to remove color values that are common\n mean=128.,\n # std is going to be modified randomly to influence the mean subtraction\n std=128.,\n # scale to rescale each image to a common size\n scale=256,\n # crop to the square each image to exact dimensions\n crop=224,\n # not running in test mode\n is_test=False,\n # mirroring of the images will occur randomly\n mirror=1\n )\n # prevent back-propagation: optional performance improvement; may not be observable at small scale\n data = model.net.StopGradient(data, data)\n\n\n# SOLUTION for Part 5\n\ndef create_resnet50_model_ops(model, loss_scale=1.0):\n # Creates a residual network\n [softmax, loss] = resnet.create_resnet50(\n model,\n \"data\",\n num_input_channels=3,\n num_labels=num_labels,\n label=\"label\",\n )\n prefix = model.net.Proto().name\n loss = model.net.Scale(loss, prefix + \"_loss\", scale=loss_scale)\n brew.accuracy(model, [softmax, \"label\"], prefix + \"_accuracy\")\n return [loss]\n\n\n# SOLUTION for Part 6\n\ndef add_parameter_update_ops(model):\n brew.add_weight_decay(model, weight_decay)\n iter = brew.iter(model, \"iter\")\n lr = model.net.LearningRate(\n [iter],\n \"lr\",\n base_lr=base_learning_rate,\n policy=\"step\",\n stepsize=stepsize,\n gamma=0.1,\n )\n for param in model.GetParams():\n param_grad = model.param_to_grad[param]\n param_momentum = model.param_init_net.ConstantFill(\n [param], param + '_momentum', value=0.0\n )\n\n # Update param_grad and param_momentum in place\n model.net.MomentumSGDUpdate(\n [param_grad, param_momentum, lr, param],\n [param_grad, param_momentum, param],\n # almost 100% but with room to grow\n momentum=0.9,\n # netsterov is a defenseman for the Montreal Canadiens, but\n # Nesterov Momentum works slightly better than standard momentum\n nesterov=1,\n )\n\n# SOLUTION for Part 7\n\ndef optimize_gradient_memory(model, loss):\n model.net._net = memonger.share_grad_blobs(\n model.net,\n loss,\n set(model.param_to_grad.values()),\n namescope=\"imonaboat\",\n share_activations=False,\n )\n \n\n# SOLUTION for Part 8\n\ndevice_opt = core.DeviceOption(caffe2_pb2.CUDA, gpus[0])\nwith core.NameScope(\"imonaboat\"):\n with core.DeviceScope(device_opt):\n add_image_input_ops(train_model)\n losses = create_resnet50_model_ops(train_model)\n blobs_to_gradients = train_model.AddGradientOperators(losses)\n add_parameter_update_ops(train_model)\n optimize_gradient_memory(train_model, [blobs_to_gradients[losses[0]]])\n\n\nworkspace.RunNetOnce(train_model.param_init_net)\nworkspace.CreateNet(train_model.net, overwrite=True)\n\n# SOLUTION for Part 8 Part Deux\nnum_epochs = 1\nfor epoch in range(num_epochs):\n # Split up the images evenly: total images / batch size\n num_iters = int(train_data_count / batch_per_device)\n for iter in range(num_iters):\n # Stopwatch start!\n t1 = time.time()\n # Run this iteration!\n 
workspace.RunNet(train_model.net.Proto().name)\n t2 = time.time()\n dt = t2 - t1\n \n # Stopwatch stopped! How'd we do?\n print((\n \"Finished iteration {:>\" + str(len(str(num_iters))) + \"}/{}\" +\n \" (epoch {:>\" + str(len(str(num_epochs))) + \"}/{})\" + \n \" ({:.2f} images/sec)\").\n format(iter+1, num_iters, epoch+1, num_epochs, batch_per_device/dt))\n\n# SOLUTION for Part 9 Prep\n\n# Reinitializing our configuration variables to accomodate 2 (or more, if you have them) GPUs.\ngpus = [0, 1]\n\n# Batch size of 32 sums up to roughly 5GB of memory per device\nbatch_per_device = 32\ntotal_batch_size = batch_per_device * len(gpus)\n\n# This model discriminates between two labels: car or boat\nnum_labels = 2\n\n# Initial learning rate (scale with total batch size)\nbase_learning_rate = 0.0004 * total_batch_size\n\n# only intends to influence the learning rate after 10 epochs\nstepsize = int(10 * train_data_count / total_batch_size)\n\n# Weight decay (L2 regularization)\nweight_decay = 1e-4\n\n# Reset workspace to clear out memory allocated during our first run.\nworkspace.ResetWorkspace()\n\n# 1. Use the model helper to create a CNN for us\ntrain_model = model_helper.ModelHelper(\n # Arbitrary name for referencing the network in your workspace: you could call it tacos or boatzncarz\n name=\"train\",\n)\n\n# 2. Create a database reader\n# This training data reader is shared between all GPUs.\n# When reading data, the trainer runs ImageInputOp for each GPU to retrieve their own unique batch of training data.\n# CreateDB is inherited by cnn.ModelHelper from model_helper.py\n# We are going to name it \"train_reader\" and pass in the db configurations we set earlier\nreader = train_model.CreateDB(\n \"train_reader\",\n db=train_data_db,\n db_type=train_data_db_type,\n)\n\n# SOLUTION for Part 9\n# assumes you're using the functions created in Part 4, 5, 6\ndpm.Parallelize_GPU(\n train_model,\n input_builder_fun=add_image_input_ops,\n forward_pass_builder_fun=create_resnet50_model_ops,\n param_update_builder_fun=add_parameter_update_ops,\n devices=gpus,\n optimize_gradient_memory=True,\n)\n\nworkspace.RunNetOnce(train_model.param_init_net)\nworkspace.CreateNet(train_model.net)\n\n# SOLUTION for Part 10\ntest_model = model_helper.ModelHelper(\n name=\"test\",\n)\n\nreader = test_model.CreateDB(\n \"test_reader\",\n db=test_data_db,\n db_type=test_data_db_type,\n)\n\n# Validation is parallelized across devices as well\ndpm.Parallelize_GPU(\n test_model,\n input_builder_fun=add_image_input_ops,\n forward_pass_builder_fun=create_resnet50_model_ops,\n param_update_builder_fun=None,\n devices=gpus,\n)\n\nworkspace.RunNetOnce(test_model.param_init_net)\nworkspace.CreateNet(test_model.net)\n\n# SOLUTION for Part 10 - display reporting setup\n%matplotlib inline\nfrom caffe2.python import visualize\nfrom matplotlib import pyplot as plt\n\ndef display_images_and_confidence():\n images = []\n confidences = []\n n = 16\n data = workspace.FetchBlob(\"gpu_0/data\")\n label = workspace.FetchBlob(\"gpu_0/label\")\n softmax = workspace.FetchBlob(\"gpu_0/softmax\")\n for arr in zip(data[0:n], label[0:n], softmax[0:n]):\n # CHW to HWC, normalize to [0.0, 1.0], and BGR to RGB\n bgr = (arr[0].swapaxes(0, 1).swapaxes(1, 2) + 1.0) / 2.0\n rgb = bgr[...,::-1]\n images.append(rgb)\n confidences.append(arr[2][arr[1]])\n\n # Create grid for images\n fig, rows = plt.subplots(nrows=4, ncols=4, figsize=(12, 12))\n plt.tight_layout(h_pad=2)\n\n # Display images and the models confidence in their label\n items = zip([ax for 
cols in rows for ax in cols], images, confidences)\n for (ax, image, confidence) in items:\n ax.imshow(image)\n if confidence >= 0.5:\n ax.set_title(\"RIGHT ({:.1f}%)\".format(confidence * 100.0), color='green')\n else:\n ax.set_title(\"WRONG ({:.1f}%)\".format(confidence * 100.0), color='red')\n\n plt.show()\n\n \ndef accuracy(model):\n accuracy = []\n prefix = model.net.Proto().name\n for device in model._devices:\n accuracy.append(\n np.asscalar(workspace.FetchBlob(\"gpu_{}/{}_accuracy\".format(device, prefix))))\n return np.average(accuracy)\n\n# SOLUTION for Part 11\n\n# Start looping through epochs where we run the batches of images to cover the entire dataset\n# Usually you would want to run a lot more epochs to increase your model's accuracy\nnum_epochs = 2\nfor epoch in range(num_epochs):\n # Split up the images evenly: total images / batch size\n num_iters = int(train_data_count / total_batch_size)\n for iter in range(num_iters):\n # Stopwatch start!\n t1 = time.time()\n # Run this iteration!\n workspace.RunNet(train_model.net.Proto().name)\n t2 = time.time()\n dt = t2 - t1\n \n # Stopwatch stopped! How'd we do?\n print((\n \"Finished iteration {:>\" + str(len(str(num_iters))) + \"}/{}\" +\n \" (epoch {:>\" + str(len(str(num_epochs))) + \"}/{})\" + \n \" ({:.2f} images/sec)\").\n format(iter+1, num_iters, epoch+1, num_epochs, total_batch_size/dt))\n \n # Get the average accuracy for the training model\n train_accuracy = accuracy(train_model)\n \n # Run the test model and assess accuracy\n test_accuracies = []\n for _ in range(test_data_count / total_batch_size):\n # Run the test model\n workspace.RunNet(test_model.net.Proto().name)\n test_accuracies.append(accuracy(test_model))\n test_accuracy = np.average(test_accuracies)\n\n print(\n \"Train accuracy: {:.3f}, test accuracy: {:.3f}\".\n format(train_accuracy, test_accuracy))\n \n # Output images with confidence scores as the caption\n display_images_and_confidence()\n",
"TO DO:\n(or things to explore on your own to improve this tutorial!)\n* Create your own database of images\n* Explore the layers\n* Print out images of the intermediates/activations to show what's happening under the hood\n* Make some interactions between epochs (change of params to show impact)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
stable/_downloads/758680cba517820dcb0b486577bea58f/70_fnirs_processing.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Preprocessing functional near-infrared spectroscopy (fNIRS) data\nThis tutorial covers how to convert functional near-infrared spectroscopy\n(fNIRS) data from raw measurements to relative oxyhaemoglobin (HbO) and\ndeoxyhaemoglobin (HbR) concentration, view the average waveform, and\ntopographic representation of the response.\nHere we will work with the fNIRS motor data <fnirs-motor-dataset>.",
"import os.path as op\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom itertools import compress\n\nimport mne\n\n\nfnirs_data_folder = mne.datasets.fnirs_motor.data_path()\nfnirs_cw_amplitude_dir = op.join(fnirs_data_folder, 'Participant-1')\nraw_intensity = mne.io.read_raw_nirx(fnirs_cw_amplitude_dir, verbose=True)\nraw_intensity.load_data()",
"Providing more meaningful annotation information\nFirst, we attribute more meaningful names to the trigger codes which are\nstored as annotations. Second, we include information about the duration of\neach stimulus, which was 5 seconds for all conditions in this experiment.\nThird, we remove the trigger code 15, which signaled the start and end\nof the experiment and is not relevant to our analysis.",
"raw_intensity.annotations.set_durations(5)\nraw_intensity.annotations.rename({'1.0': 'Control',\n '2.0': 'Tapping/Left',\n '3.0': 'Tapping/Right'})\nunwanted = np.nonzero(raw_intensity.annotations.description == '15.0')\nraw_intensity.annotations.delete(unwanted)",
"Viewing location of sensors over brain surface\nHere we validate that the location of sources-detector pairs and channels\nare in the expected locations. Source-detector pairs are shown as lines\nbetween the optodes, channels (the mid point of source-detector pairs) are\noptionally shown as orange dots. Source are optionally shown as red dots and\ndetectors as black.",
"subjects_dir = op.join(mne.datasets.sample.data_path(), 'subjects')\n\nbrain = mne.viz.Brain(\n 'fsaverage', subjects_dir=subjects_dir, background='w', cortex='0.5')\nbrain.add_sensors(\n raw_intensity.info, trans='fsaverage',\n fnirs=['channels', 'pairs', 'sources', 'detectors'])\nbrain.show_view(azimuth=20, elevation=60, distance=400)",
"Selecting channels appropriate for detecting neural responses\nFirst we remove channels that are too close together (short channels) to\ndetect a neural response (less than 1 cm distance between optodes).\nThese short channels can be seen in the figure above.\nTo achieve this we pick all the channels that are not considered to be short.",
"picks = mne.pick_types(raw_intensity.info, meg=False, fnirs=True)\ndists = mne.preprocessing.nirs.source_detector_distances(\n raw_intensity.info, picks=picks)\nraw_intensity.pick(picks[dists > 0.01])\nraw_intensity.plot(n_channels=len(raw_intensity.ch_names),\n duration=500, show_scrollbars=False)",
"Converting from raw intensity to optical density\nThe raw intensity values are then converted to optical density.",
"raw_od = mne.preprocessing.nirs.optical_density(raw_intensity)\nraw_od.plot(n_channels=len(raw_od.ch_names),\n duration=500, show_scrollbars=False)",
"Evaluating the quality of the data\nAt this stage we can quantify the quality of the coupling\nbetween the scalp and the optodes using the scalp coupling index. This\nmethod looks for the presence of a prominent synchronous signal in the\nfrequency range of cardiac signals across both photodetected signals.\nIn this example the data is clean and the coupling is good for all\nchannels, so we will not mark any channels as bad based on the scalp\ncoupling index.",
"sci = mne.preprocessing.nirs.scalp_coupling_index(raw_od)\nfig, ax = plt.subplots()\nax.hist(sci)\nax.set(xlabel='Scalp Coupling Index', ylabel='Count', xlim=[0, 1])",
"In this example we will mark all channels with a SCI less than 0.5 as bad\n(this dataset is quite clean, so no channels are marked as bad).",
"raw_od.info['bads'] = list(compress(raw_od.ch_names, sci < 0.5))",
"At this stage it is appropriate to inspect your data\n(for instructions on how to use the interactive data visualisation tool\nsee tut-visualize-raw)\nto ensure that channels with poor scalp coupling have been removed.\nIf your data contains lots of artifacts you may decide to apply\nartifact reduction techniques as described in ex-fnirs-artifacts.\nConverting from optical density to haemoglobin\nNext we convert the optical density data to haemoglobin concentration using\nthe modified Beer-Lambert law.",
"raw_haemo = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=0.1)\nraw_haemo.plot(n_channels=len(raw_haemo.ch_names),\n duration=500, show_scrollbars=False)",
"Removing heart rate from signal\nThe haemodynamic response has frequency content predominantly below 0.5 Hz.\nAn increase in activity around 1 Hz can be seen in the data that is due to\nthe person's heart beat and is unwanted. So we use a low pass filter to\nremove this. A high pass filter is also included to remove slow drifts\nin the data.",
"fig = raw_haemo.plot_psd(average=True)\nfig.suptitle('Before filtering', weight='bold', size='x-large')\nfig.subplots_adjust(top=0.88)\nraw_haemo = raw_haemo.filter(0.05, 0.7, h_trans_bandwidth=0.2,\n l_trans_bandwidth=0.02)\nfig = raw_haemo.plot_psd(average=True)\nfig.suptitle('After filtering', weight='bold', size='x-large')\nfig.subplots_adjust(top=0.88)",
"Extract epochs\nNow that the signal has been converted to relative haemoglobin concentration,\nand the unwanted heart rate component has been removed, we can extract epochs\nrelated to each of the experimental conditions.\nFirst we extract the events of interest and visualise them to ensure they are\ncorrect.",
"events, event_dict = mne.events_from_annotations(raw_haemo)\nfig = mne.viz.plot_events(events, event_id=event_dict,\n sfreq=raw_haemo.info['sfreq'])\nfig.subplots_adjust(right=0.7) # make room for the legend",
"Next we define the range of our epochs, the rejection criteria,\nbaseline correction, and extract the epochs. We visualise the log of which\nepochs were dropped.",
"reject_criteria = dict(hbo=80e-6)\ntmin, tmax = -5, 15\n\nepochs = mne.Epochs(raw_haemo, events, event_id=event_dict,\n tmin=tmin, tmax=tmax,\n reject=reject_criteria, reject_by_annotation=True,\n proj=True, baseline=(None, 0), preload=True,\n detrend=None, verbose=True)\nepochs.plot_drop_log()",
"View consistency of responses across trials\nNow we can view the haemodynamic response for our tapping condition.\nWe visualise the response for both the oxy- and deoxyhaemoglobin, and\nobserve the expected peak in HbO at around 6 seconds consistently across\ntrials, and the consistent dip in HbR that is slightly delayed relative to\nthe HbO peak.",
"epochs['Tapping'].plot_image(combine='mean', vmin=-30, vmax=30,\n ts_args=dict(ylim=dict(hbo=[-15, 15],\n hbr=[-15, 15])))",
"We can also view the epoched data for the control condition and observe\nthat it does not show the expected morphology.",
"epochs['Control'].plot_image(combine='mean', vmin=-30, vmax=30,\n ts_args=dict(ylim=dict(hbo=[-15, 15],\n hbr=[-15, 15])))",
"View consistency of responses across channels\nSimilarly we can view how consistent the response is across the optode\npairs that we selected. All the channels in this data are located over the\nmotor cortex, and all channels show a similar pattern in the data.",
"fig, axes = plt.subplots(nrows=2, ncols=2, figsize=(15, 6))\nclims = dict(hbo=[-20, 20], hbr=[-20, 20])\nepochs['Control'].average().plot_image(axes=axes[:, 0], clim=clims)\nepochs['Tapping'].average().plot_image(axes=axes[:, 1], clim=clims)\nfor column, condition in enumerate(['Control', 'Tapping']):\n for ax in axes[:, column]:\n ax.set_title('{}: {}'.format(condition, ax.get_title()))",
"Plot standard fNIRS response image\nNext we generate the most common visualisation of fNIRS data: plotting\nboth the HbO and HbR on the same figure to illustrate the relation between\nthe two signals.",
"evoked_dict = {'Tapping/HbO': epochs['Tapping'].average(picks='hbo'),\n 'Tapping/HbR': epochs['Tapping'].average(picks='hbr'),\n 'Control/HbO': epochs['Control'].average(picks='hbo'),\n 'Control/HbR': epochs['Control'].average(picks='hbr')}\n\n# Rename channels until the encoding of frequency in ch_name is fixed\nfor condition in evoked_dict:\n evoked_dict[condition].rename_channels(lambda x: x[:-4])\n\ncolor_dict = dict(HbO='#AA3377', HbR='b')\nstyles_dict = dict(Control=dict(linestyle='dashed'))\n\nmne.viz.plot_compare_evokeds(evoked_dict, combine=\"mean\", ci=0.95,\n colors=color_dict, styles=styles_dict)",
"View topographic representation of activity\nNext we view how the topographic activity changes throughout the response.",
"times = np.arange(-3.5, 13.2, 3.0)\ntopomap_args = dict(extrapolate='local')\nepochs['Tapping'].average(picks='hbo').plot_joint(\n times=times, topomap_args=topomap_args)",
"Compare tapping of left and right hands\nFinally we generate topo maps for the left and right conditions to view\nthe location of activity. First we visualise the HbO activity.",
"times = np.arange(4.0, 11.0, 1.0)\nepochs['Tapping/Left'].average(picks='hbo').plot_topomap(\n times=times, **topomap_args)\nepochs['Tapping/Right'].average(picks='hbo').plot_topomap(\n times=times, **topomap_args)",
"And we also view the HbR activity for the two conditions.",
"epochs['Tapping/Left'].average(picks='hbr').plot_topomap(\n times=times, **topomap_args)\nepochs['Tapping/Right'].average(picks='hbr').plot_topomap(\n times=times, **topomap_args)",
"And we can plot the comparison at a single time point for two conditions.",
"fig, axes = plt.subplots(nrows=2, ncols=4, figsize=(9, 5),\n gridspec_kw=dict(width_ratios=[1, 1, 1, 0.1]))\nvmin, vmax, ts = -8, 8, 9.0\n\nevoked_left = epochs['Tapping/Left'].average()\nevoked_right = epochs['Tapping/Right'].average()\n\nevoked_left.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 0],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_left.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 0],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_right.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 1],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\nevoked_right.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 1],\n vmin=vmin, vmax=vmax, colorbar=False,\n **topomap_args)\n\nevoked_diff = mne.combine_evoked([evoked_left, evoked_right], weights=[1, -1])\n\nevoked_diff.plot_topomap(ch_type='hbo', times=ts, axes=axes[0, 2:],\n vmin=vmin, vmax=vmax, colorbar=True,\n **topomap_args)\nevoked_diff.plot_topomap(ch_type='hbr', times=ts, axes=axes[1, 2:],\n vmin=vmin, vmax=vmax, colorbar=True,\n **topomap_args)\n\nfor column, condition in enumerate(\n ['Tapping Left', 'Tapping Right', 'Left-Right']):\n for row, chroma in enumerate(['HbO', 'HbR']):\n axes[row, column].set_title('{}: {}'.format(chroma, condition))\nfig.tight_layout()",
"Lastly, we can also look at the individual waveforms to see what is\ndriving the topographic plot above.",
"fig, axes = plt.subplots(nrows=1, ncols=1, figsize=(6, 4))\nmne.viz.plot_evoked_topo(epochs['Left'].average(picks='hbo'), color='b',\n axes=axes, legend=False)\nmne.viz.plot_evoked_topo(epochs['Right'].average(picks='hbo'), color='r',\n axes=axes, legend=False)\n\n# Tidy the legend:\nleg_lines = [line for line in axes.lines if line.get_c() == 'b'][:1]\nleg_lines.append([line for line in axes.lines if line.get_c() == 'r'][0])\nfig.legend(leg_lines, ['Left', 'Right'], loc='lower right')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.23/_downloads/775a4c9edcb81275d5a07fdad54343dc/channel_epochs_image.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Visualize channel over epochs as an image\nThis will produce what is sometimes called an event related\npotential / field (ERP/ERF) image.\nTwo images are produced, one with a good channel and one with a channel\nthat does not show any evoked field.\nIt is also demonstrated how to reorder the epochs using a 1D spectral\nembedding as described in :footcite:GramfortEtAl2010.",
"# Authors: Alexandre Gramfort <alexandre.gramfort@inria.fr>\n#\n# License: BSD (3-clause)\n\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport mne\nfrom mne import io\nfrom mne.datasets import sample\n\nprint(__doc__)\n\ndata_path = sample.data_path()",
"Set parameters",
"raw_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw.fif'\nevent_fname = data_path + '/MEG/sample/sample_audvis_filt-0-40_raw-eve.fif'\nevent_id, tmin, tmax = 1, -0.2, 0.4\n\n# Setup for reading the raw data\nraw = io.read_raw_fif(raw_fname)\nevents = mne.read_events(event_fname)\n\n# Set up pick list: EEG + MEG - bad channels (modify to your needs)\nraw.info['bads'] = ['MEG 2443', 'EEG 053']\n\n# Create epochs, here for gradiometers + EOG only for simplicity\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, proj=True,\n picks=('grad', 'eog'), baseline=(None, 0), preload=True,\n reject=dict(grad=4000e-13, eog=150e-6))",
"Show event-related fields images",
"# and order with spectral reordering\n# If you don't have scikit-learn installed set order_func to None\nfrom sklearn.manifold import spectral_embedding # noqa\nfrom sklearn.metrics.pairwise import rbf_kernel # noqa\n\n\ndef order_func(times, data):\n this_data = data[:, (times > 0.0) & (times < 0.350)]\n this_data /= np.sqrt(np.sum(this_data ** 2, axis=1))[:, np.newaxis]\n return np.argsort(spectral_embedding(rbf_kernel(this_data, gamma=1.),\n n_components=1, random_state=0).ravel())\n\n\ngood_pick = 97 # channel with a clear evoked response\nbad_pick = 98 # channel with no evoked response\n\n# We'll also plot a sample time onset for each trial\nplt_times = np.linspace(0, .2, len(epochs))\n\nplt.close('all')\nmne.viz.plot_epochs_image(epochs, [good_pick, bad_pick], sigma=.5,\n order=order_func, vmin=-250, vmax=250,\n overlay_times=plt_times, show=True)",
"References\n.. footbibliography::"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
greenelab/GCB535
|
24_Prelab_Python-II/Lesson2.ipynb
|
bsd-3-clause
|
[
"Lesson 2: if / else and Functions\n\nTable of Contents\n\nConditionals I: The \"if / else\" statement\nBuilt-in functions\nModules\nTest your understanding: practice set 2\n\n1. Conditionals I: The \"if / else\" statement\n\nProgramming is a lot like giving someone instructions or directions. For example, if I wanted to give you directions to my house, I might say...\n\nTurn right onto Main Street\nTurn left onto Maple Ave\nIf there is construction, continue straight on Maple Ave, turn right on Cat Lane, and left on Fake Street; else, cut through the empty lot to Fake Street\nGo straight on Fake Street until house 123\n\nThe same directions, but in code:",
"construction = False\n\nprint \"Turn right onto Main Street\"\nprint \"Turn left onto Maple Ave\"\n\nif construction:\n print \"Continue straight on Maple Ave\" \n print \"Turn right onto Cat Lane\"\n print \"Turn left onto Fake Street\"\nelse:\n print \"Cut through the empty lot to Fake Street\"\n print \"Go straight on Fake Street until house 123\"\n",
"This is called an \"if / else\" statement. It basically allows you to create a \"fork\" in the flow of your program based on a condition that you define. If the condition is True, the \"if\"-block of code is executed. If the condition is False, the else-block is executed. \nHere, our condition is simply the value of the variable construction. Since we defined this variable to quite literally hold the value False (this is a special data type called a Boolean, more on that in a minute), this means that we skip over the if-block and only execute the else-block. If instead we had set construction to True, we would have executed only the if-block.\nLet's define Booleans and if / else statements more formally now.\n\n[ Definition ] Booleans\n\nA Boolean (\"bool\") is a type of variable, like a string, int, or float. \nHowever, a Boolean is much more restricted than these other data types because it is only allowed to take two values: True or False.\nIn Python, True and False are always capitalized and never in quotes. \nDon't think of True and False as words! You can't treat them like you would strings. To Python, they're actually interpreted as the numbers 1 and 0, respectively.\nBooleans are most often used to create the \"conditional statements\" used in if / else statements and loops. \n\n\n[ Definition ] The if / else statement\nPurpose: creates a fork in the flow of the program based on whether a conditional statement is True or False. \nSyntax:\nif (conditional statement):\n this code is executed\nelse:\n this code is executed\n\nNotes:\n\nBased on the Boolean (True / False) value of a conditional statement, either executes the if-block or the else-block\nThe \"blocks\" are indicated by indentation.\nThe else-block is optional. \nColons are required after the if condition and after the else.\nAll code that is part of the if or else blocks must be indented.\n\nExample:",
"x = 5\nif (x > 0):\n print \"x is positive\"\nelse:\n print \"x is negative\"",
"So what types of conditionals are we allowed to use in an if / else statement? Anything that can be evaluated as True or False! For example, in natural language we might ask the following true/false questions:\n\nis a True?\nis a less than b?\nis a equal to b?\nis a equal to \"ATGCTG\"?\nis (a greater than b) and (b greater than c)?\n\nTo ask these questions in our code, we need to use a special set of symbols/words. These are called the logical operators, because they allow us to form logical (true/false) statements. Below is a chart that lists the most common logical operators:\n\nMost of these are pretty intuitive. The big one people tend to mess up on in the beginning is ==. Just remember: a single equals sign means assignment, and a double equals means is the same as/is equal to. You will NEVER use a single equals sign in a conditional statement because assignment is not allowed in a conditional! Only True / False questions are allowed!\nif / else statements in action\nBelow are several examples of code using if / else statements. For each code block, first try to guess what the output will be, and then run the block to see the answer.",
"a = True\nif a:\n print \"Hooray, a was true!\"\n\na = True\nif a:\n print \"Hooray, a was true!\"\nprint \"Goodbye now!\"\n\na = False\nif a:\n print \"Hooray, a was true!\"\nprint \"Goodbye now!\"",
"Since the line print \"Goodbye now!\" is not indented, it is NOT considered part of the if-statement.\nTherefore, it is always printed regardless of whether the if-statement was True or False.",
"a = True\nb = False\nif a and b:\n print \"Apple\"\nelse:\n print \"Banana\"",
"Since a and b are not both True, the conditional statement \"a and b\" as a whole is False. Therefore, we execute the else-block.",
"a = True\nb = False\nif a and not b:\n print \"Apple\"\nelse:\n print \"Banana\"",
"By using \"not\" before b, we negate its current value (False), making b True. Thus the entire conditional as a whole becomes True, and we execute the if-block.",
"a = True\nb = False\nif not a and b:\n print \"Apple\"\nelse:\n print \"Banana\"",
"\"not\" only applies to the variable directly in front of it (in this case, a). So here, a becomes False, so the conditional as a whole becomes False.",
"a = True\nb = False\nif not (a and b):\n print \"Apple\"\nelse:\n print \"Banana\"",
"When we use parentheses in a conditional, whatever is within the parentheses is evaluated first. So here, the evaluation proceeds like this: \nFirst Python decides how to evaluate (a and b). As we saw above, this must be False because a and b are not both True. \nThen Python applies the \"not\", which flips that False into a True. So then the final answer is True!",
"a = True\nb = False\nif a or b:\n print \"Apple\"\nelse:\n print \"Banana\"",
"As you would probably expect, when we use \"or\", we only need a or b to be True in order for the whole conditional to be True.",
"cat = \"Mittens\"\nif cat == \"Mittens\":\n print \"Awwww\"\nelse:\n print \"Get lost, cat\"\n\na = 5\nb = 10\nif (a == 5) and (b > 0):\n print \"Apple\"\nelse:\n print \"Banana\"\n\na = 5\nb = 10\nif ((a == 1) and (b > 0)) or (b == (2 * a)):\n print \"Apple\"\nelse:\n print \"Banana\"",
"Ok, this one is a little bit much! Try to avoid complex conditionals like this if possible, since it can be difficult to tell if they're actually testing what you think they're testing. If you do need to use a complex conditional, use parentheses to make it more obvious which terms will be evaluated first!\n\nNote on indentation\n\nIndentation is very important in Python; it’s how Python tells what code belongs to which control statements\nConsecutive lines of code with the same indenting are sometimes called \"blocks\"\nIndenting should only be done in specific circumstances (if statements are one example, and we'll see a few more soon). Indent anywhere else and you'll get an error.\nYou can indent by however much you want, but you must be consistent. Pick one indentation scheme (e.g. 1 tab per indent level, or 4 spaces) and stick to it.\n\n[ Check yourself! ] if/else practice\nThink you got it? In the code block below, write an if/else statement to print a different message depending on whether x is positive or negative.",
"x = 6 * -5 - 4 * 2 + -7 * -8 + 3\n\n# ******add your code here!*********",
"2. Built-in functions\n\nPython provides some useful built-in functions that perform specific tasks. What makes them \"built-in\"? Simply that you don’t have to \"import\" anything in order to use them -- they're always available. This is in contrast the the non-built-in functions, which are packaged into modules of similar functions (e.g. \"math\") that you must import before using. More on this in a minute!\nWe've already seen some examples of built-in functions, such as print, int(), float(), and str(). Now we'll look at a few more that are particularly useful: raw_input(), len(), abs(), and round().\n\n[ Definition ] raw_input()\nDescription: A built-in function that allows user input to be read from the terminal. \nSyntax:\nraw_input(\"Optional prompt: \")\n\nNotes:\n\nThe execution of the code will pause when it reaches the raw_input() function and wait for the user to input something. \nThe input ends when the user hits \"enter\". \nThe user input that is read by raw_input() can then be stored in a variable and used in the code.\nImportant: This function always returns a string, even if the user entered a number! You must convert the input with int() or float() if you expect a number input.\n\nExamples:",
"name = raw_input(\"Your name: \")\nprint \"Hi there\", name, \"!\"\n\nage = int(raw_input(\"Your age: \")) #convert input to an int\nprint \"Wow, I can't believe you're only\", age",
"[ Definition ] len()\nDescription: Returns the length of a string (also works on certain data structures). Doesn’t work on numerical types.\nSyntax:\nlen(string)\n\nExamples:",
"print len(\"cat\")\n\nprint len(\"hi there\")\n\nseqLength = len(\"ATGGTCGCAT\")\nprint seqLength",
"[ Definition ] abs()\nDescription: Returns the absolute value of a numerical value. Doesn't accept strings.\nSyntax:\nabs(number)\n\nExamples:",
"print abs(-10)\n\nprint abs(int(\"-10\"))\n\npositiveNum = abs(-23423)\nprint positiveNum",
"[ Definition ] round()\nDescription: Rounds a float to the indicated number of decimal places. If no number of decimal places is indicated, rounds to zero decimal places.\nSynatx:\nround(someNumber, numDecimalPlaces)\n\nExamples:",
"print round(10.12345)\n\nprint round(10.12345, 2)\n\nprint round(10.9999, 2)",
"If you want to learn more built in functions, go here: https://docs.python.org/2/library/functions.html \n3. Modules\n\nModules are groups of additional functions that come with Python, but unlike the built-in functions we just saw, these functions aren't accessible until you import them. Why aren’t all functions just built-in? Basically, it improves speed and memory usage to only import what is needed (there are some other considerations, too, but we won't get into it here).\nThe functions in a module are usually all related to a certain kind of task or subject area. For example, there are modules for doing advanced math, generating random numbers, running code in parallel, accessing your computer's file system, and so on. We’ll go over just two modules today: math and random. See the full list here: https://docs.python.org/2.7/py-modindex.html \nHow to use a module\nUsing a module is very simple. First you import the module. Add this to the top of your script:\nimport <moduleName>\n\nThen, to use a function of the module, you prefix the function name with the name of the module (using a period between them):\n<moduleName>.<functionName>\n\n(Replace <moduleName> with the name of the module you want, and <functionName> with the name of a function in the module.)\nThe <moduleName>.<functionName> synatx is needed so that Python knows where the function comes from. Sometimes, especially when using user created modules, there can be a function with the same name as a function that's already part of Python. Using this syntax prevents functions from overwriting each other or causing ambiguity.\n\n[ Definition ] The math module\nDescription: Contains many advanced math-related functions.\nSee full list of functions here: https://docs.python.org/2/library/math.html\nExamples:",
"import math\n\nprint math.sqrt(4)\nprint math.log10(1000)\nprint math.sin(1)\nprint math.cos(0)",
"[ Definition ] The random module\nDescription: contains functions for generating random numbers.\nSee full list of functions here: https://docs.python.org/2/library/random.html\nExamples:",
"import random\n\nprint random.random() # Return a random floating point number in the range [0.0, 1.0)\nprint random.randint(0, 10) # Return a random integer between the specified range (inclusive)\nprint random.gauss(5, 2) # Draw from the normal distribution given a mean and standard deviation\n\n# this code will output something different every time you run it!",
"4. Test your understanding: practice set 2\n\nFor the following blocks of code, first try to guess what the output will be, and then run the code yourself. These examples may introduce some ideas and common pitfalls that were not explicitly covered in the text above, so be sure to complete this section.\nThe first block below holds the variables that will be used in the problems. Since variables are shared across blocks in Jupyter notebooks, you just need to run this block once and then those variables can be used in any other code block.",
"# RUN THIS BLOCK FIRST TO SET UP VARIABLES!\na = True\nb = False\nx = 2\ny = -2\ncat = \"Mittens\"\n\nprint a\n\nprint (not a)\n\nprint (a == b)\n\nprint (a != b)\n\nprint (x == y)\n\nprint (x > y)\n\nprint (x = 2)\n\nprint (a and b)\n\nprint (a and not b)\n\nprint (a or b)\n\nprint (not b or a)\n\nprint not (b or a)\n\nprint (not b) or a\n\nprint (not b and a)\n\nprint not (b and a)\n\nprint (not b) and a\n\nprint (x == abs(y))\n\nprint len(cat)\n\nprint cat + x\n\nprint cat + str(x)\n\nprint float(x)\n\nprint (\"i\" in cat)\n\nprint (\"g\" in cat)\n\nprint (\"Mit\" in cat)\n\nif (x % 2) == 0:\n print \"x is even\"\nelse:\n print \"x is odd\"\n\nif (x - 4*y) < 0:\n print \"Invalid!\"\nelse:\n print \"Banana\"\n\nif \"Mit\" in cat:\n print \"Hey Mits!\"\nelse:\n print \"Where's Mits?\"\n\nx = \"C\"\nif x == \"A\" or \"B\":\n print \"yes\"\nelse:\n print \"no\"\n\nx = \"C\"\nif (x == \"A\") or (x == \"B\"):\n print \"yes\"\nelse:\n print \"no\"",
"Surprised by the last two? It's important to note that when you want compare a variable against multiple things, you only compare it to one thing at a time. Although it makes sense in English to say, is x equal to A or B?, in Python you must write: ((x == \"A\") or (x == \"B\")) to accomplish this. The same goes for e.g. ((x > 5) and (x < 10)) and anything along those lines.\nSo why does the first version give the answer \"yes\"? Basically, anything that isn't False or the literal number 0 is considered to be True in Python. So when you say 'x == \"A\" or \"B\"', this evaluates to 'False or True', which is True!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
austinjalexander/sandbox
|
python/py/nanodegree/intro_ml/EnronPOI-Copy2.ipynb
|
mit
|
[
"import sys\nfrom time import time\nimport pickle\nsys.path.append(\"ud120-projects/tools/\")\nsys.path.append(\"ud120-projects/final_project/\")\nimport numpy as np\nimport pandas as pd\nfrom matplotlib import pyplot as plt\nimport seaborn as sns\n%matplotlib inline\n\nfrom feature_format import featureFormat, targetFeatureSplit\nfrom tester import test_classifier, dump_classifier_and_data\n\nfrom sklearn.cross_validation import train_test_split\nfrom sklearn.preprocessing import Imputer\nfrom sklearn.preprocessing import MinMaxScaler\nfrom sklearn.decomposition import RandomizedPCA\nfrom sklearn.feature_selection import SelectKBest\nfrom sklearn.pipeline import make_pipeline\nfrom sklearn.grid_search import GridSearchCV\n\nfrom sklearn.naive_bayes import GaussianNB\nfrom sklearn.svm import SVC\nfrom sklearn.tree import DecisionTreeClassifier\nfrom sklearn.cluster import KMeans\n\nfrom sklearn.metrics import precision_score\nfrom sklearn.metrics import recall_score\nfrom sklearn.metrics import classification_report\nfrom sklearn.metrics import confusion_matrix\n\n### Load the dictionary containing the dataset\ndata_dict = pickle.load(open(\"ud120-projects/final_project/final_project_dataset.pkl\", \"r\") )\n\n### Task 1: Select what features you'll use.\n### features_list is a list of strings, each of which is a feature name.\n### The first feature must be \"poi\".\nfeatures_list = ['poi','salary'] # You will need to use more features\n\n### Task 2: Remove outliers\n### Task 3: Create new feature(s)\n### Store to my_dataset for easy export below.\nmy_dataset = data_dict\n\n### Extract features and labels from dataset for local testing\ndata = featureFormat(my_dataset, features_list, sort_keys = True)\nlabels, features = targetFeatureSplit(data)\n\n### Task 4: Try a varity of classifiers\n### Please name your classifier clf for easy export below.\n### Note that if you want to do PCA or other multi-stage operations,\n### you'll need to use Pipelines. For more info:\n### http://scikit-learn.org/stable/modules/pipeline.html\n\nclf = GaussianNB() # Provided to give you a starting point. Try a varity of classifiers.\n\n### Task 5: Tune your classifier to achieve better than .3 precision and recall \n### using our testing script.\n### Because of the small size of the dataset, the script uses stratified\n### shuffle split cross validation. For more info: \n### http://scikit-learn.org/stable/modules/generated/sklearn.cross_validation.StratifiedShuffleSplit.html\n\ntest_classifier(clf, my_dataset, features_list)\n\n### Dump your classifier, dataset, and features_list so \n### anyone can run/check your results.\n\ndump_classifier_and_data(clf, my_dataset, features_list)\n\nclf\n\nprint my_dataset.keys()[0]\nmy_dataset.itervalues().next()\n\nfeatures_list",
"",
"### Task 1: Select what features you'll use.\n### features_list is a list of strings, each of which is a feature name.\n### The first feature must be \"poi\".\n\nnames = np.array(my_dataset.keys())\nprint names.shape, names[:5], \"\\n\"\nfeatures_list = my_dataset.itervalues().next().keys()\nfeatures_list.sort()\nfeatures_list.remove('poi')\nfeatures_list.insert(0, 'poi')\nfeatures_list.remove('email_address')\nprint features_list\n\n### convert dictionary to pandas dataframe\n\ndf = pd.DataFrame([entry for entry in my_dataset.itervalues()])\ndf = df.drop('email_address', axis=1)\ndf = df[features_list]\n#df.dtypes\n#df.describe()\n#df.count()\ndf.poi = df.poi.astype('int')\ndf = df.convert_objects(convert_numeric=True)\n\nfor col in list(df.columns):\n df[col] = df[col].round(decimals=3)\n \nprint \"POI Count:\\n\", df.poi.value_counts()\ndf.head()\n\n# create labels\ny = df.poi.values\nprint y.shape\nprint y[:5]\n\n# create initial features\nX = df.drop('poi', axis=1).values\nprint X.shape\n\n# imputation for 'NaN' values\nimp = Imputer(missing_values='NaN', strategy='mean', axis=0)\nimp.fit(X)\nX = imp.transform(X)\nprint X[:5]\n\n### Task 2: Remove outliers\nnum_rows = X.shape[0]\nnum_cols = X.shape[1]\nrows_to_remove = set()\n\nfor i in xrange(num_cols):\n point_five_percentile = np.percentile(X[:,i], 0.5)\n ninety_nine_point_five_percentile = np.percentile(X[:,i], 99.5)\n \n for j in xrange(num_rows):\n if X[j,i] < point_five_percentile:\n #print \"\\tlow outlier: \", \"row: \", j, \"col: \", i, \" -> \", X[j,i]\n rows_to_remove.add(j)\n elif X[j,i] > ninety_nine_point_five_percentile:\n #print \"\\thigh outlier: \", \"row: \", j, \"col: \", i, \" -> \", X[j,i]\n rows_to_remove.add(j)\n\nX = np.delete(X, list(rows_to_remove), axis=0)\ny = np.delete(y, list(rows_to_remove))\n\nprint \"names associated with outlier-containing rows to remove:\"\nfor i in rows_to_remove:\n print \"\\t\",names[i]\n \nnames = np.delete(names, list(rows_to_remove))\n\nprint \"\\nnew X shape: \", X.shape\nprint \"\\ntotal rows removed: \", len(rows_to_remove), \"({})\".format(round(len(rows_to_remove)/float(num_rows), 2))\n\n# split into training and testing data\nX_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)\nprint X_train.shape, X_test.shape, y_train.shape, y_test.shape",
"",
"### Task 3: Create new feature(s)\n# scale\nscaler = MinMaxScaler()\nscaler = scaler.fit(X_train)\nX_train = scaler.transform(X_train)\nprint X_train.shape\nX_test = scaler.transform(X_test)\nprint X_test.shape\n\nX_train",
"",
"### Task 4: Try a varity of classifiers\n### Please name your classifier clf for easy export below.\n### Note that if you want to do PCA or other multi-stage operations,\n### you'll need to use Pipelines. For more info:\n### http://scikit-learn.org/stable/modules/pipeline.html\n\nclassifiers = dict()\n\ndef grid_searcher(clf):\n \n # PUT IN FUNCTION FOR MINMAXSCALER\n \n t0 = time()\n \n even_range = range(2,X.shape[1],2)\n random_state = [42]\n t_or_f = [True, False]\n #powers_of_ten = [10**x for x in range(-5,5)]\n logspace = np.logspace(-5, 5, 10)\n #kernels = ['linear', 'poly', 'rbf', 'sigmoid'] # takes too long, unfortunately\n kernels = ['rbf']\n criteria = ['gini', 'entropy']\n splitters = ['best', 'random']\n max_features = ['auto', 'sqrt', 'log2', None]\n inits = ['k-means++', 'random']\n \n # pca and select K best\n pipeline = make_pipeline(RandomizedPCA(), SelectKBest(), clf)\n \n params = dict(randomizedpca__n_components = even_range,\n randomizedpca__whiten = t_or_f,\n randomizedpca__random_state = random_state,\n selectkbest__k = ['all'])\n \n \n if pipeline.steps[2][0] == 'decisiontreeclassifier':\n params['decisiontreeclassifier__criterion'] = criteria\n params['decisiontreeclassifier__splitter'] = splitters\n params['decisiontreeclassifier__max_features'] = max_features\n params['decisiontreeclassifier__random_state'] = random_state\n \n if pipeline.steps[2][0] == 'svc':\n params['svc__C'] = logspace\n params['svc__kernel'] = kernels\n #params['svc__degree'] = [1,2,3,4,5] # for use with 'poly'\n params['svc__gamma'] = logspace\n params['svc__random_state'] = random_state\n\n if pipeline.steps[2][0] == 'kmeans':\n params['kmeans__n_clusters'] = [2]\n params['kmeans__init'] = inits\n params['kmeans__random_state'] = random_state\n \n grid_search = GridSearchCV(pipeline, param_grid=params, n_jobs=4)\n\n grid_search = grid_search.fit(X_train, y_train)\n\n print \"*\"*15, pipeline.steps[2][0].upper(), \"*\"*15\n #print \"\\nbest estimator: \", grid_search.best_estimator_, \"\\n\" \n print \"\\nBEST SCORE: \", grid_search.best_score_, \"\\n\"\n #print \"\\nbest params: \", grid_search.best_params_, \"\\n\"\n\n #print \"#\"*50\n print \"\\nBEST ESTIMATOR:\"\n clf = grid_search.best_estimator_.fit(X_train, y_train)\n \n #classifiers[pipeline.steps[2][0]] = clf\n \n X_test_pca = clf.steps[0][1].transform(X_test)\n X_test_skb = clf.steps[1][1].transform(X_test_pca) \n print \"new X_test shape: \", X_test_skb.shape\n \n #print \"#\"*50\n print \"\\nPREDICTIONS:\"\n #test_classifier(clf, my_dataset, features_list)\n print \"\\nground truth:\\n\", y_test \n \n y_pred = clf.steps[2][1].predict(X_test_skb)\n print \"\\npredictions:\\n\", y_pred\n\n #print \"#\"*50\n print \"\\nEVALUATIONS:\"\n print \"\\nconfusion matrix:\\n\", confusion_matrix(y_test, y_pred)\n \n print \"\\nclassification report:\\n\", classification_report(y_test, y_pred, target_names=[\"non-poi\", \"poi\"])\n \n print \"ELAPSED TIME: \", round(time()-t0,3), \"s\"",
"Initial Results\nGaussianNB()\n Accuracy: 0.25560 Precision: 0.18481 Recall: 0.79800 F1: 0.30011 F2: 0.47968\n Total predictions: 10000 True positives: 1596 False positives: 7040 False negatives: 404\n True negatives: 960\nNew Results",
"grid_searcher(GaussianNB())\n\ngrid_searcher(DecisionTreeClassifier())\n\ngrid_searcher(SVC())\n\ngrid_searcher(KMeans())",
""
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mbeyeler/opencv-machine-learning
|
notebooks/10.03-Using-Random-Forests-for-Face-Recognition.ipynb
|
mit
|
[
"<!--BOOK_INFORMATION-->\n<a href=\"https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv\" target=\"_blank\"><img align=\"left\" src=\"data/cover.jpg\" style=\"width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;\"></a>\nThis notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.\nThe code is released under the MIT license,\nand is available on GitHub.\nNote that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.\nIf you find this content useful, please consider supporting the work by\nbuying the book!\n<!--NAVIGATION-->\n< Combining Decision Trees Into a Random Forest | Contents | Implementing AdaBoost >\nUsing Random Forests for Face Recognition\nA popular dataset that we haven't talked much about yet is the Olivetti face dataset.\nThe Olivetti face dataset was collected in 1990 by AT&T Laboratories Cambridge. The\ndataset comprises facial images of 40 distinct subjects, taken at different times and under\ndifferent lighting conditions. In addition, subjects varied their facial expression\n(open/closed eyes, smiling/not smiling) and their facial details (glasses/no glasses).\nImages were then quantized to 256 grayscale levels and stored as unsigned 8-bit integers.\nBecause there are 40 distinct subjects, the dataset comes with 40 distinct target labels.\nRecognizing faces thus constitutes an example of a multiclass classification task.\nLoading the dataset\nLike many other classic datasets, the Olivetti face dataset can be loaded using scikit-learn:",
"from sklearn.datasets import fetch_olivetti_faces\ndataset = fetch_olivetti_faces()\n\nX = dataset.data\ny = dataset.target",
"Although the original images consisted of 92 x 112 pixel images, the version available\nthrough scikit-learn contains images downscaled to 64 x 64 pixels.\nTo get a sense of the dataset, we can plot some example images. Let's pick eight indices\nfrom the dataset in a random order:",
"import numpy as np\nnp.random.seed(21)\nidx_rand = np.random.randint(len(X), size=8)",
"We can plot these example images using Matplotlib, but we need to make sure we reshape\nthe column vectors to 64 x 64 pixel images before plotting:",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.figure(figsize=(14, 8))\nfor p, i in enumerate(idx_rand):\n plt.subplot(2, 4, p + 1)\n plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')\n plt.axis('off')",
"You can see how all the faces are taken against a dark background and are upright. The\nfacial expression varies drastically from image to image, making this an interesting\nclassification problem. Try not to laugh at some of them!\nPreprocessing the dataset\nBefore we can pass the dataset to the classifier, we need to preprocess it following the best\npractices from Chapter 4, Representing Data and Engineering Features.\nSpecifically, we want to make sure that all example images have the same mean grayscale\nlevel:",
"n_samples, n_features = X.shape\nX -= X.mean(axis=0)",
"We repeat this procedure for every image to make sure the feature values of every data\npoint (that is, a row in X) are centered around zero:",
"X -= X.mean(axis=1).reshape(n_samples, -1)",
"The preprocessed data can be visualized using the preceding code:",
"plt.figure(figsize=(14, 8))\nfor p, i in enumerate(idx_rand):\n plt.subplot(2, 4, p + 1)\n plt.imshow(X[i, :].reshape((64, 64)), cmap='gray')\n plt.axis('off')\nplt.savefig('olivetti-pre.png')",
"Training and testing the random forest\nWe continue to follow our best practice to split the data into training and test sets:",
"from sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, random_state=21\n)",
"Then we are ready to apply a random forest to the data:",
"import cv2\nrtree = cv2.ml.RTrees_create()",
"Here we want to create an ensemble with 50 decision trees:",
"num_trees = 50\neps = 0.01\ncriteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,\n num_trees, eps)\nrtree.setTermCriteria(criteria)",
"Because we have a large number of categories (that is, 40), we want to make sure the\nrandom forest is set up to handle them accordingly:",
"rtree.setMaxCategories(len(np.unique(y)))",
"We can play with other optional arguments, such as the number of data points required in a\nnode before it can be split:",
"rtree.setMinSampleCount(2)",
"However, we might not want to limit the depth of each tree. This is again, a parameter we\nwill have to experiment with in the end. But for now, let's set it to a large integer value,\nmaking the depth effectively unconstrained:",
"rtree.setMaxDepth(1000)",
"Then we can fit the classifier to the training data:",
"rtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);",
"We can check the resulting depth of the tree using the following function:",
"rtree.getMaxDepth()",
"This means that although we allowed the tree to go up to depth 1000, in the end only 25\nlayers were needed.\nThe evaluation of the classifier is done once again by predicting the labels first (y_hat) and\nthen passing them to the accuracy_score function:",
"_, y_hat = rtree.predict(X_test)\n\nfrom sklearn.metrics import accuracy_score\naccuracy_score(y_test, y_hat)",
"We find 87% accuracy, which turns out to be much better than with a single decision tree:",
"from sklearn.tree import DecisionTreeClassifier\ntree = DecisionTreeClassifier(random_state=21, max_depth=25)\ntree.fit(X_train, y_train)\ntree.score(X_test, y_test)",
"Not bad! We can play with the optional parameters to see if we get better. The most\nimportant one seems to be the number of trees in the forest. We can repeat the experiment\nwith a forest made from 100 trees:",
"num_trees = 100\neps = 0.01\ncriteria = (cv2.TERM_CRITERIA_MAX_ITER + cv2.TERM_CRITERIA_EPS,\n num_trees, eps)\nrtree.setTermCriteria(criteria)\nrtree.train(X_train, cv2.ml.ROW_SAMPLE, y_train);\n_, y_hat = rtree.predict(X_test)\naccuracy_score(y_test, y_hat)",
"With this configuration, we get 91% accuracy!\nAnother interesting use case of decision tree ensembles is Adaptive Boosting or AdaBoost.\n<!--NAVIGATION-->\n< Combining Decision Trees Into a Random Forest | Contents | Implementing AdaBoost >"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
yedivanseven/bestPy
|
examples/04.1_AlgorithmsBaseline.ipynb
|
gpl-3.0
|
[
"CHAPTER 4\n4.1 Algorithms: Baseline\nNow that we have a profound knowledge of the primary datastructure used in bestPy, the question is what to do with it. Obviously, to arrive at a recommendation for a customer, some sort of algorithm needs to operate on that data. To introduce the basic properties of bestPy's algorithms, we will examine the simplest (and fastest) of them all, the Baseline, before we do anything fancy.\nNot to be underestimated, though, some sort of baseline algorithm is critical to the recommendation business. Specifically it is needed to provide a\n+ recommendation to new cutomers,\n+ fallback if other algorithms fail,\n+ benchmark for other algorithms to beat.\nPreliminaries\nWe only need this because the examples folder is a subdirectory of the bestPy package.",
"import sys\nsys.path.append('../..')",
"Imports, logging, and data\nOn top of doing the things we already know, we now need to import also the Baseline algorithm, which is conveniently accessible through the bestPy.algorithms subpackage.",
"from bestPy import write_log_to\nfrom bestPy.datastructures import Transactions\nfrom bestPy.algorithms import Baseline # Additionally import the baseline algorithm\n\nlogfile = 'logfile.txt'\nwrite_log_to(logfile, 20)\n\nfile = 'examples_data.csv'\ndata = Transactions.from_csv(file)",
"Creating a new Baseline object\nThis is really easy. All you need to do is:",
"algorithm = Baseline()",
"Inspecting the new recommendation object with Tab completion reveals binarize as a first attribute.",
"algorithm.binarize",
"What its default value or True means is that, instead of judging an article's popularity by how many times it was bought, we are only going to count each unique customer only once. How often a given customer bought a given article no longer matters. It's 0 or 1. Hence the attribute's name. You can set it to False if you want to take into account multiple buys by the same customer.\nThis decision depends on the use case. Do I really like an article more because I bought more than one unit of it? If you sell both consumables and more specialized items, the answer is not so clear. Suppose I bought 6 pairs of socks (which I ended up hating) and one copy of a book (which I ended up loving). Does it really make sense to base your recommendation on the assumption that I liked the socks 6 times as much as the book? Probably not. So this is a case where the default value of True for the binarize attribute might make sense.\nIf, on the other hand, you are selling consumables only, then the number of times I buy an item might indeed hint towards me liking that item more than others and setting the binarize attribute to False might be adequate.",
"algorithm.binarize = False",
"Up to you to test and to act accordingly.\nAn that's it with setting up the configurable parameters of the Baseline algorithm. Without data, there is nothing else we can do for now, other than convincing us that there is indeed no data associated with the algorithm yet.",
"algorithm.has_data",
"Attaching data to the Baseline algorithm\nTo let the algorithm act on our data, we call its operating_on() method, which takes a data object of type Transactions as argument. Inspecting the has_data attribute again tells us whether we were successful or not.",
"recommendation = algorithm.operating_on(data)\nrecommendation.has_data",
"Note: Of course, you can also directly instantiate the algorithm with data attached\npython\nrecommendation = Baseline().operating_on(data)\nand configure its parameters (the binarize attribute) later.\nMaking a baseline recommendation\nNow that we have data attached to our algorithm, Tab completion shows us that an additional method for_one() has mysteriously appeared. This method, which does not make any sense without data and was, therefore, hidded before, returns an array of numbers, one for each article with the first for the article with index 0, the next for the article with index 1, etc. The highest number indicates the most and the lowest the least recommended article.",
"recommendation.for_one()",
"As discussed above, these numbers correpsond to either the count of unique buyers or the count of buys, depending on whether the attribute binarize is set to True or False, respectively.",
"recommendation.binarize = True\nrecommendation.for_one()",
"An that's all for the baseline algorithm\nRemark on the side\nWhat actually happens when you try to set the attrribute binarize to something else than the boolean values True or False? Let's try!",
"recommendation.binarize = 'foo'",
"And that's an error! If you examine your logfile, you should find the according line there.\n[ERROR ]: Attempt to set \"binarize\" to non-boolean type. (baseline|__check_boolean_type_of)\nRemember to check you logfile every once in a while to see what's going on!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
saketkc/notebooks
|
python/AndrewsCurves.ipynb
|
bsd-2-clause
|
[
"%pylab inline\nimport pandas as pd\nimport matplotlib.patches as mpatches\nfrom sklearn.decomposition import PCA\n\nplt.style.use ('seaborn-colorblind')\n\nCB_color_cycle = ['#377eb8', '#ff7f00', '#4daf4a',\n '#f781bf', '#a65628', '#984ea3',\n '#999999', '#e41a1c', '#dede00']",
"Andrews Curves\nD. F. Andrews introduced 'Andrews Curves' in his 1972 paper for plotthing high dimensional data in two dimeion. The underlying principle is simple: Embed the high dimensiona data in high diemnsion only using a space of functions and then visualizing these functions.\nConsider A $d$ dimensional data point $\\mathbf{x} = (x_1, x_2, \\dots, x_d)$. Define the following function:\n$$f_x(t) = \\begin{cases}\n\\frac{x_1}{\\sqrt{2}} + x_2 \\sin(t) + x_3 \\cos(t) + x_4 \\sin (2t) + x_5\\cos(2t) + \\dots + x_{2k} \\sin(kt) + x_{2k+1} \\cos(kt) + \\dots + x_{d-2}\\sin( (\\frac{d}{2} -1)t) + x_{d-1}\\cos( (\\frac{d}{2} -1)t) + x_{d} \\sin(\\frac{d}{2}t) & d \\text{ even}\\\n\\frac{x_1}{\\sqrt{2}} + x_2 \\sin(t) + x_3 \\cos(t) + x_4 \\sin (2t) + x_5\\cos(2t) + \\dots + x_{2k} \\sin(kt) + x_{2k+1} \\cos(kt) + \\dots + x_{d-3}\\sin( \\frac{d-3}{2} t) + x_{d-2}\\cos( \\frac{d-3}{2}t) + x_{d-1} \\sin(\\frac{d-1}{2}t) + x_{d} \\cos(\\frac{d-1}{2}t)) & d \\text{ odd}\\\n\\end{cases}\n$$\nThis representation yields one dimensional projections, which may reveal clustering, outliers or orther patterns that occur in this subspace. All such one dimensional projections can then be plotted on one graph.\nProperties\nAndrews Curves has some intersting properties that makes it useful as a 2D tool:\nMean\nIf $\\bar{\\mathbf{x}}$ represents the mean of $\\bar{x}$ for $n$ observations: $\\bar{\\mathbf{x}} = \\frac{1}{n} \\mathbf{x_i}$. then,\n$$ f_{\\bar{\\mathbf{x}}}(t) = \\frac{1}{n} \\sum_{i=1}{n} f_{\\mathbf{x_i}}(t)$$\nProof: \nWe consider an odd $d$. \n\\begin{align}\nf_{\\bar{\\mathbf{x}}}(t) &= \\frac{\\bar{\\mathbf{x_1}}}{\\sqrt{2}} + \\bar{\\mathbf{x_2}} \\sin(t) + \\bar{\\mathbf{x_3}} \\cos(t) + \\bar{\\mathbf{x_4}} \\sin(2t) + \\bar{\\mathbf{x_5}} \\cos(2t) + \\dots + \\bar{\\mathbf{x_d}} \\sin(\\frac{d}{2}t) \\\n&= \\frac{\\sum_{j=1}^n x_{1j}}{\\sqrt{2}} + \\frac{\\sum_{j=1}x_{2j}}{n} \\sin(t) + \\frac{\\sum_{j=1}x_{3j}}{n} \\cos(t) + \\frac{\\sum_{j=1}x_{4j}}{n}\\sin(2t) + \\frac{\\sum_{j=1}x_{5j}}{n}\\cos(2t) + \\dots + \\frac{\\sum_{j=1}x_{dj}}{n} \\sin(\\frac{d}{2}t)\\\n&= \\frac{1}{n} \\sum_{i=1}^n f_{x_i} (t)\n\\end{align}\nDistance\nEuclidean distance is preserved. 
Consider two points $\\mathbf{x}$ and $\\mathbf{y}$.\n$$||\\mathbf{x} - \\mathbf{y}||_2^2 = \\sum_{j=1}^d |x_j-y_j|^2$$\nLet's consider $||f_{\\mathbf{x}}(t) - f_{\\mathbf{y}}(t) ||_2^2 = \\int_{-\\pi}^{\\pi} (f_{\\mathbf{x}}(t) - f_{\\mathbf{y}}(t))^2 dt $\n\\begin{align}\n\\int_{-\\pi}^{\\pi} (f_{\\mathbf{x}}(t) - f_{\\mathbf{y}}(t))^2 dt &= \\frac{(x_1-y_1)^2}{2}(2\\pi) + \\int_{-\\pi}^{\\pi} (x_2-y_2)^2 \\sin^2{t}\\ dt + \\int_{-\\pi}^{\\pi} (x_3-y_3)^2 \\cos^2{t}\\ dt + \\int_{-\\pi}^{\\pi} (x_4-y_4)^2 \\sin^2{2t}\\ dt + \\int_{-\\pi}^{\\pi} (x_5-y_5)^2 \\cos^2{2t}\\ dt + \\dots\n\\end{align}\n\\begin{align} \n\\int^{\\pi}_{-\\pi} \\sin^2 (kt) dt &= \\frac{1}{k}\\int_{-k\\pi}^{k\\pi} \\sin^2 (t') dt'\\\n&= \\frac{1}{k} \\left( \\frac{\\int_{-k\\pi}^{k\\pi} (1-\\cos{(2t')})dt'}{2} \\right)\\\n&= \\frac{1}{k} \\frac{2k\\pi}{2}\\\n&= \\pi\\\n\\int^{\\pi}_{-\\pi} \\cos^2 (kt) dt &= \\int^{\\pi}_{-\\pi} (1-\\sin^2 (kt)) dt\\\n&= 2\\pi-\\pi\\\n&= \\pi\n\\end{align}\nThus,\n\\begin{align}\n\\int_{-\\pi}^{\\pi} (f_{\\mathbf{x}}(t) - f_{\\mathbf{y}}(t))^2 dt &= \\pi ||\\mathbf{x} - \\mathbf{y}||_2^2\n\\end{align}\nVariance\nIf the $d$ features/components are all independent and have a common variance $\\sigma^2$, then\n\\begin{align}\n\\text{Var}[f_{\\mathbf{x}}(t)] &= \\text{Var} \\left(\\frac{x_1}{\\sqrt{2}} + x_2 \\sin(t) + x_3 \\cos(t) + x_4 \\sin (2t) + x_5\\cos(2t) + \\dots + x_{2k} \\sin(kt) + x_{2k+1} \\cos(kt) + \\dots + x_{d-2}\\sin( (\\frac{d}{2} -1)t) + x_{d-1}\\cos( (\\frac{d}{2} -1)t) + x_{d} \\sin(\\frac{d}{2}t) \\right)\\\n&= \\sigma^2 \\left( \\frac{1}{2} + \\sin^2 t + \\cos^2 t + \\sin^2 2t + \\cos^2 2t + \\dots \\right)\\\n&= \\begin{cases}\n\\sigma^2(\\frac{1}{2} + \\frac{k-1}{2}) & d \\text{ odd }\\\n\\sigma^2(\\frac{1}{2} + \\frac{k}{2} - 1 + \\sin^2 {\\frac{kt}{2}} ) & d \\text{ even }\\\n\\end{cases}\\\n&= \\begin{cases}\n\\frac{k\\sigma^2}{2} & d \\text{ odd }\\\n\\sigma^2(\\frac{k-1}{2} + \\sin^2 {\\frac{kt}{2}} ) & d \\text{ even }\\\n\\end{cases}\n\\end{align}\nIn the even case the variance is bounded between $[\\sigma^2(\\frac{k-1}{2}), \\sigma^2(\\frac{k+1}{2})]$.\nSince the variance is almost independent of $t$, the plotted functions will be smooth!\nInterpretation\nClustering\nFunctions close together, forming a band, imply that the corresponding points are also close in Euclidean space.\nTest of significance at particular values of $t$\nTo test $f_{\\mathbf{x}}(t) = f_{\\mathbf{y}}(t)$ for some hypothesized $\\mathbf{y}$, and assuming that $\\text{Var}[f_{\\mathbf{x}}(t)]$ is known, testing can be done using the usual $z$ score:\n$$\nz = \\frac{f_{\\mathbf{x}}(t)-f_{\\mathbf{y}}(t)}{(\\text{Var}[{f_{\\mathbf{x}}(t)}])^{\\frac{1}{2}}}\n$$\nassuming that the components $x_i$ are independent normal random variables.\nDetecting outliers\nIf the components $x_i$ are independent normal, $ x_i \\sim \\mathcal{N}(\\mu_i, \\sigma^2)$, then $\\frac{||\\mathbf{x}-\\mathbf{\\mu}||^2}{\\sigma^2}$ follows a $\\chi^2_d$ distribution.\nConsider a vector $v = \\frac{f_\\mathbf{1}(t)}{||f_\\mathbf{1}(t)||}$; then:\n\\begin{align}\n|(\\mathbf{x}-\\mathbf{\\mu})'v|^2 &= \\frac{||f_{\\mathbf{x}}(t) - f_{\\mathbf{\\mu}}(t)||^2 }{||f_\\mathbf{1}(t)||^2}\\\n\\frac{||f_{\\mathbf{x}}(t) - f_{\\mathbf{\\mu}}(t)||^2 }{||f_\\mathbf{1}(t)||^2} &\\leq \\chi_d^2(\\alpha)\n\\end{align}\nNow, \n\\begin{align}\n||f_\\mathbf{1}(t)||^2 &= \\frac{1}{2} + \\sin^2 t + \\cos^2 t + \\dots \\\n&\\leq \\frac{d+1}{2}\n\\end{align}\nThus,\n\\begin{align}\n||f_{\\mathbf{x}}(t) - f_{\\mathbf{\\mu}}(t)||^2 &\\leq \\sigma^2 ||f_\\mathbf{1}(t)||^2 \\chi^2_d(\\alpha) \\leq \\sigma^2 \\frac{d+1}{2} \\chi^2_d(\\alpha)\n\\end{align}\nLinear relationships\nThe \"Sandwich\" theorem: If $\\mathbf{y}$ lies on a line joining $\\mathbf{x}$ and $\\mathbf{z}$, then $\\forall t$ : $f_\\mathbf{y}(t)$ lies between $f_\\mathbf{x}(t)$ and $f_\\mathbf{z}(t)$. This is straightforward.",
"def andrews_curves(data, granularity=1000):\n \"\"\"\n Parameters\n -----------\n data : array like\n ith row is the ith observation\n jth column is the jth feature\n Size (m, n) => m replicats with n features\n granularity : int\n linspace granularity for theta\n Returns\n -------\n matrix : array\n Size (m, granularity) => \n \n \"\"\"\n n_obs, n_features = data.shape\n theta = np.linspace(-np.pi, np.pi, granularity)\n # transpose\n theta = np.reshape(theta, (-1, theta.shape[0]))\n t = np.arange(1, np.floor(n_features/2)+1)\n t = np.reshape(t, (t.shape[0], 1))\n sin_bases = np.sin(t*theta)\n cos_bases = np.cos(t*theta)\n if n_features % 2 == 0: \n # Remove the last row of cosine bases\n # for even values\n cos_bases = cos_bases[:-1,:]\n c = np.empty((sin_bases.shape[0] + cos_bases.shape[0], sin_bases.shape[1] ), \n dtype=sin_bases.dtype)\n c[0::2,:] = sin_bases\n c[1::2,:] = cos_bases\n constant = 1/np.sqrt(2) * np.ones((1, c.shape[1]))\n matrix = np.vstack([constant, c])\n return (np.dot(data,matrix))\n",
"Andrews Curves for iris dataset",
"df = pd.read_csv('https://raw.githubusercontent.com/pandas-dev/pandas/master/pandas/tests/data/iris.csv')\ndf_grouped = df.groupby('Name')\n\ndf_setosa = df.query(\"Name=='Iris-setosa'\")\nfig, ax = plt.subplots(figsize=(8,8))\n\n\nindex = 0 \npatches = []\nfor key, group in df_grouped:\n group = group.drop('Name', axis=1)\n for row in andrews_curves(group.as_matrix()):\n plot = ax.plot(row, CB_color_cycle[index])\n patch = mpatches.Patch(color=CB_color_cycle[index], label=key)\n \n index +=1\n\n patches.append(patch)\nax.legend(handles=patches)\nfig.tight_layout()",
"PCA",
"\nX = df[['SepalLength', 'SepalWidth', 'PetalLength', 'PetalWidth']]\ny = df['Name'].astype('category').cat.codes\ntarget_names = df['Name'].astype('category').unique()\n\npca = PCA(n_components=2)\nX_r = pca.fit(X).transform(X)\n\nfig, ax = plt.subplots(figsize=(8,8))\ncolors = CB_color_cycle[:3]\nlw = 2\n\nfor color, i, target_name in zip(colors, [0, 1, 2], target_names):\n plt.scatter(X_r[y == i, 0], X_r[y == i, 1], color=color, alpha=.8, lw=lw,\n label=target_name)\nax.legend(loc='best', shadow=False, scatterpoints=1)\nax.set_xlabel('Variance explained: {:.2f}'.format(pca.explained_variance_ratio_[0]))\nax.set_ylabel('Variance explained: {:.2f}'.format(pca.explained_variance_ratio_[1]))\nax.set_title('PCA of IRIS dataset')\nfig.tight_layout()",
"Clearly setos and virginica lie close to each other and hence appear as merged clusters in PCA and merged bands in Andrews Curves"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
feststelltaste/software-analytics
|
notebooks/Travis CI Build Breaker Analysis.ipynb
|
gpl-3.0
|
[
"Travis CI Build Breaker Analysis\nContext\nIn the age of agile software development, continous integration became an integral part of every development team. With the time, there may be so many test jobs and nightly builds running, that many teams are wasting much time in fixing the test instead of creating new features for their customers. On top of that, in the age of distributed systems, many build jobs might simply break because an external system isn't available or is behaving erroneous. These errors are very annoying because they occur sporadic and are very hard to trace back.\nIn this blog post, I want to present you some options to find weird behaving build jobs. For this, I've downloaded some build log dumps from Travis Torrent, unpacked them and read them into Pandas to get some insight about breaking build jobs.\nIdea\nAs complex as build logs might look like, they are semi-structured texts that contain recurring information in each job run.\nThis means we can read a build log by a data analysis framework that can read in kind of structured data. Of course, we use Pandas for this to get our effective analysis of build logs started. In this example, I choose one of the biggest Travis CI build log archive for an Java application that I could find (I used a Google BigQuery to find the biggest ones). It's SonarQube – a static quality assurance tools for use in continuous integration (what a nice coincidence).\nThe idea of this notebook is to show you what kind of information you can extract from a build log file to get more insight on how your build is working or failing. I won't show you every information detail that you can extract from a build log file, because they tend to get very specific and highly dependent on your own situation. But with the \"mechanics\" I'll show, you can adopt some analysis to your specific needs as well.\nWe read in the all the files in from the build log dump with glob. This allows us to quickly get the relevant files.\nNote: Because our log files are all in the same directory, we don't need to use the recursive feature that glob provides to get files from subdirectories as well.",
"import glob\nimport os\nROOT_DIR = \"C:/dev/data/build_logs/SonarSource@sonar-java-light\"\nROOT_DIR = \"C:/dev/data/build_logs/SonarSource@sonar-java\"\n\nGLOB_PATTERN = \"*.log\"\nlog_file_paths = glob.glob(os.path.join(ROOT_DIR, GLOB_PATTERN))\nlog_file_paths[:5]",
"We import the raw data as soon as possible into a Pandas DataFrame to avoid custom Python glue code and to be able to use the \"standardized\" methods for data wrangling of the Pandas framework.",
"import pandas as pd\n# set width of column for nicer output\npd.set_option('max_colwidth', 130)\n\nraw_logs = pd.DataFrame(log_file_paths, columns=['path'])\nraw_logs.head()",
"We clean up these ugly, different, OS-specific file separators by using a common one. \nNote: We could have also used os.sep that gives us the OS-specific separator. In Windows, this would be \\. But if you plan to extract data later e. g. by regular expressions, this is getting really unreadable, because \\ is also the character to escape certain other characters.",
"raw_logs['path'] = raw_logs['path'].str.replace(\"\\\\\", \"/\")\nraw_logs.head()",
"The are many information in the file path alone:\n\nThe last directory in the path contains the name of the build job\nThe first part of the file name is the build number\nThe second part of the file name is the build id\n\nLet's say we need that information later on, so we extract it with a nice regular expression with named groups.",
"# TODO: uses still regex, too slow? Consider \"split\"?\nlogs = raw_logs.join(raw_logs['path'].str.extract(\n r\"^.*\" + \\\n \"/(?P<jobname>.*)/\" + \\\n \"(?P<build_number>.*?)_\" + \\\n \"(?P<build_id>.*?)_.*\\.log$\", expand=True))\nlogs.head()",
"In the case of the Travis build log dumps, we got multiple files for each build run. We just need the first ones, that's why we throw away all the other build logs with the same build number.",
"logs = logs.drop_duplicates(subset=['build_number'], keep='first')\nlogs.head()",
"After dropping possible multiple build log files, we can use the build number as new index (aka key) for our DataFrame.",
"logs = logs.set_index(['build_number'], drop=True)\nlogs.head()",
"So far, we've just extracted metadata from the file path of the build log files. Now we are getting to the interesting parts: Extracting information from the content of the bulid log files. For this, we need to load the content of the log files into our DataFrame.\nWe do this with a little helper method that simply returns the contents of a file who's file_path was given:",
"def load_file_content(file_path):\n with open(file_path, mode='r', encoding=\"utf-8\") as f:\n return f.read()",
"We use the function above in the apply call upon the path Series.\nNote: For many big files, this could take some time to finish.",
"logs['content'] = logs['path'].apply(load_file_content)\nlogs.head()",
"Because it could get a little bit confusing with so much columns in a single DataFrame, we delete the path columns because it's not needed anymore.",
"log_data = logs.copy()\ndel(log_data['path'])\nlog_data.head()",
"Let's have a look at some of the contents of a build log file. This is where the analysis gets very specific depending on the used contiuous integration server, the build system, the programming language etc. . But the main idea is the same: Extract some interesing features that show what's going on in your build!\nLet's take a look at our scenario: A Travis CI job of a Java application that's build with Maven.\nHere are the first lines of one build log.",
"# TODO put every line in a new row",
"This could also take a while because we are doing a string operation which is slow.",
"entries_per_row = log_data[0:4].content.str.split(\"\\n\",expand=True)\nlog_rows = pd.DataFrame(entries_per_row.stack(), columns=[\"data\"])\nlog_rows\n\nlog_status = log_rows[log_rows[0].str.contains(\"Done.\")]\nlog_status.head()\n\n%matplotlib inline\nlog_status[0].value_counts().plot(kind='pie')",
"Halde\nBrowsing manually through the log, there are some interesting features. E. g. the start time of the build.",
"print(log_data.iloc[0]['content'][1400:1800])",
"So let's check for some errors and warnings. For this, we create a new DataFrame because it's another kind of information.",
"logs['finished'] = logs.content.str[-100:].str.extract(\"(.*)\\n*$\", expand=False)\npd.DataFrame(logs['finished'].value_counts().head(10))\n\nmapping = {\n \"Done. Your build exited with 0.\" : \"SUCCESS\",\n \"Done. Build script exited with: 0\" : \"SUCCESS\",\n \"Done. Build script exited with 0\" : \"SUCCUESS\",\n \"Your build has been stopped.\" : \"STOPPED\",\n \"The build has been terminated.\" : \"TERMINATED\",\n \"The build has been terminated\" : \"TERMINATED\",\n \"Done. Your build exited with 1.\" : \"ERROR\",\n \"Done. Build script exited with: 1\" : \"ERROR\",\n \"Your test run exceeded \" : \"ABORTED\"\n}\nlogs['finished_state'] = logs['finished'].map(mapping)\nlogs.loc[logs['finished_state'].isnull(), \"finished_state\"] = \"UNKNOWN\"\nlogs['finished_state'].value_counts()\n\nlogs['start_time'] = logs['content'].str.extract(\n r\"travis_time:end:.*:start=([0-9]*),\", expand=False)\nlogs['start_time'].head()\n\nlogs['start_time'] = pd.to_datetime(\n pd.to_numeric(\n logs['start_time'])/1000, unit='us')\nlogs['start_time'].head()\n\nlogs['end_time'] = logs['content'].str[-500:].str.extract(\n r\"travis_time:end:.*:start=[0-9]*,finish=([0-9]*)\",expand=True)\nlogs['end_time'] = pd.to_datetime(pd.to_numeric(logs['end_time'])/1000, unit='us')\nlogs['end_time'].head()\n\nlogs['duration'] = logs['end_time'] - logs['start_time']\n\nsuccessful_builds = logs[logs['finished_state'] == \"SUCCESS\"].dropna()\nsuccessful_builds['duration'].mean()\n\nlen(logs)\n\nt = successful_builds['duration']\nt.memory_usage()\n\nsuccessful_builds['duration_in_min'] = successful_builds['duration'].astype('timedelta64[m]')\nsuccessful_builds['duration_in_min'].head()\n\nsuccessful_builds_over_time = successful_builds.reset_index().set_index(pd.DatetimeIndex(successful_builds['start_time'])).resample('1W').mean()\nsuccessful_builds_over_time.head()",
"Visualisierung",
"%matplotlib inline\nimport matplotlib\nmatplotlib.style.use('ggplot')\nsuccessful_builds_over_time['duration_in_min'].plot(\n title=\"Build time in minutes\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
KshitijT/fundamentals_of_interferometry
|
1_Radio_Science/1_8_astronomical_radio_sources.ipynb
|
gpl-2.0
|
[
"Outline\nGlossary\n1. Radio Science using Interferometric Arrays\nPrevious: 1.7 Line emission\nNext: 1.9 A brief introduction to interferometry\n\n\n\n\nSection status: <span style=\"background-color:yellow\"> </span>\nImport standard modules:",
"import numpy as np\nimport matplotlib.pyplot as plt\n%matplotlib inline\nfrom IPython.display import HTML \nHTML('../style/course.css') #apply general CSS",
"Import section specific modules:",
"from IPython.display import Image",
"1.8 Astronomical radio sources<a id='science:sec:astronomical_radio_sources'></a>\nThe field of radio astronomy is extremely broad with applications to many areas of science. However, when studying astronomical sources, it is important to have information at as large a frequency range as possible. Radio astronomy should therefore be combined with observations at other wavelengths (eg. infra-red, optical and x-ray) to fully investigate the nature of astronomical sources. Below we give an incomplete list of some sources that have been studied extensively in the radio band. \n1.8.1 The Sun\nThe sun is the brightest radio object observed from earth. The radio emission from the sun can be divided into two parts: emission from the so-called \"quiet Sun\" and that from solar activities like coronal mass ejections.\nThe emission from the \"quiet sun\" is a mixture of thermal and non-thermal emission. However, the strongest radio emission from the sun comes from coronal mass ejections and solar flares. The flux associated with these events can reach as high as $10^9$Jy.\nThe sun is observed in radio by radio interferometers such as the Owens valley Solar Array and the Nancay Heliograph. Solar observations are also one of the key science projects of the LOFAR array. The upcoming SKA telescope will be able to study radio emission from the Sun with unprecedented resolution and sensitivity. \n1.8.2 Planets\nThe planets in our solar system also emit in the radio band. The radio emission from the planets is mostly thermal in origin. \n<span style=\"background-color:red\"> LB:AC: This contradicts what is said here. I don't see the point of adding this section if we are going to write one sentence in it. </span>\n1.8.3 The cosmic microwave background\nThe cosmic microwave background (CMB) radiation is the most famous example of thermal emission in radio astronomy. See $\\S$ 1.5 ➞ for the details on the temperature and spectrum of CMB (not to be confused with the power spectrum of CMB temperature fluctuations).\n<span style=\"background-color:yellow\"> LB:AC: We can really give a bit more history here. Mention how the CMB was discovered by accident for example. </span>\n1.8.4 Radio Galaxies\nRadio galaxies are extra-galactic radio sources which are believed to be powered by supermassive black holes at their centres (most elliptical galaxies are believed to be of this kind). Radio emission may extend up to Mpc scales, many times larger than the optical extent of the galaxy. Synchroton emission is the dominant type of radio emission from these sources. The figure below shows the Fornax A radio galaxy. As can be seen from the figure, the visible (in optical) part of the galaxy is much smaller than the radio counterpart.",
"Image(filename='figures/fornax_a_lo.jpg', width=300)",
"Figure 1.8.1 Fornax A radio galaxy (Image credit: Image courtesy of NRAO/AUI and J. M. Uson)\n1.8.4 Star Forming Galaxies:\nStar forming galaxies are \"normal\" galaxies which are undergoing active star formation. The process of star formation produces radio emission (predominantly in spiral galaxies) which is a mix of sychrotron emission from supernova remnants and free-free emission from ionized hydrogen (HII) regions. The figure below shows M82, a starburst galaxy. The bright spots in the image show both supernova remnants and HII regions.",
"Image(filename='figures/m82.png', width=300)",
"Figure 1.8.2 The starburst galaxy M82 (Image credit: Josh Marvil (NM Tech/NRAO), Bill Saxton (NRAO/AUI/NSF), NASA)\n\n\nNext: 1.9 A brief introduction to interferometry\n\n<div class=warn><b>Future Additions:</b></div>\n\n\nFor each class of objects above described, we need to include the general method of observation and observing frequency."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tensorflow/federated
|
docs/tutorials/custom_aggregators.ipynb
|
apache-2.0
|
[
"Copyright 2021 The TensorFlow Federated Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Implementing Custom Aggregations\n<table class=\"tfo-notebook-buttons\" align=\"left\">\n <td>\n <a target=\"_blank\" href=\"https://www.tensorflow.org/federated/tutorials/custom_aggregators\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_aggregators.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n </td>\n <td>\n <a target=\"_blank\" href=\"https://github.com/tensorflow/federated/blob/v0.27.0/docs/tutorials/custom_aggregators.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n </td>\n <td>\n <a href=\"https://storage.googleapis.com/tensorflow_docs/federated/docs/tutorials/custom_aggregators.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n </td>\n</table>\n\nIn this tutorial, we explain design principles behind the tff.aggregators module and best practices for implementing custom aggregation of values from clients to server.\nPrerequisites. This tutorial assumes you are already familiar with basic concepts of Federated Core such as placements (tff.SERVER, tff.CLIENTS), how TFF represents computations (tff.tf_computation, tff.federated_computation) and their type signatures.",
"#@test {\"skip\": true}\n!pip install --quiet --upgrade tensorflow-federated\n!pip install --quiet --upgrade nest-asyncio\n\nimport nest_asyncio\nnest_asyncio.apply()",
"Design summary\nIn TFF, \"aggregation\" refers to the movement of a set of values on tff.CLIENTS to produce an aggregate value of the same type on tff.SERVER. That is, each individual client value need not be available. For example in federated learning, client model updates are averaged to get an aggregate model update to apply to the global model on the server.\nIn addition to operators accomplishing this goal such as tff.federated_sum, TFF provides tff.templates.AggregationProcess (a stateful process) which formalizes the type signature for aggregation computation so it can generalize to more complex forms than a simple sum.\nThe main components of the tff.aggregators module are factories for creation of the AggregationProcess, which are designed to be generally useful and replacable building blocks of TFF in two aspects:\n\nParameterized computations. Aggregation is an independent building block that can be plugged into other TFF modules designed to work with tff.aggregators to parameterize their necessary aggregation.\n\nExample:\nlearning_process = tff.learning.build_federated_averaging_process(\n ...,\n model_update_aggregation_factory=tff.aggregators.MeanFactory())\n\nAggregation composition. An aggregation building block can be composed with other aggregation building blocks to create more complex composite aggregations.\n\nExample:\nsecure_mean = tff.aggregators.MeanFactory(\n value_sum_factory=tff.aggregators.SecureSumFactory(...))\nThe rest of this tutorial explains how these two goals are achieved.\nAggregation process\nWe first summarize the tff.templates.AggregationProcess, and follow with the factory pattern for its creation.\nThe tff.templates.AggregationProcess is an tff.templates.MeasuredProcess with type signatures specified for aggregation. In particular, the initialize and next functions have the following type signatures:\n\n( -> state_type@SERVER)\n(<state_type@SERVER, {value_type}@CLIENTS, *> -> <state_type@SERVER, value_type@SERVER, measurements_type@SERVER>)\n\nThe state (of type state_type) must be placed at server. The next function takes as input argument the state and a value to be aggregated (of type value_type) placed at clients. The * means optional other input arguments, for instance weights in a weighted mean. It returns an updated state object, the aggregated value of the same type placed at server, and some measurements.\nNote that both the state to be passed between executions of the next function, and the reported measurements intended to report any information depending on a specific execution of the next function, may be empty. Nevertheless, they have to be explicitly specified for other parts of TFF to have a clear contract to follow.\nOther TFF modules, for instance the model updates in tff.learning, are expected to use the tff.templates.AggregationProcess to parameterize how values are aggregated. However, what exactly are the values aggregated and what their type signatures are, depends on other details of the model being trained and the learning algorithm used to do it.\nTo make aggregation independent of the other aspects of computations, we use the factory pattern -- we create the appropriate tff.templates.AggregationProcess once the relevant type signatures of objects to be aggregated are available, by invoking the create method of the factory. 
Direct handling of the aggregation process is thus needed only for library authors, who are responsible for this creation.\nAggregation process factories\nThere are two abstract base factory classes for unweighted and weighted aggregation. Their create method takes type signatures of value to be aggregated and returns a tff.templates.AggregationProcess for aggregation of such values.\nThe process created by tff.aggregators.UnweightedAggregationFactory takes two input arguments: (1) state at server and (2) value of specified type value_type.\nAn example implementation is tff.aggregators.SumFactory.\nThe process created by tff.aggregators.WeightedAggregationFactory takes three input arguments: (1) state at server, (2) value of specified type value_type and (3) weight of type weight_type, as specified by the factory's user when invoking its create method.\nAn example implementation is tff.aggregators.MeanFactory which computes a weighted mean.\nThe factory pattern is how we achieve the first goal stated above; that aggregation is an independent building block. For example, when changing which model variables are trainable, a complex aggregation does not necessarily need to change; the factory representing it will be invoked with a different type signature when used by a method such as tff.learning.build_federated_averaging_process.\nCompositions\nRecall that a general aggregation process can encapsulate (a) some preprocessing of the values at clients, (b) movement of values from client to server, and (c) some postprocessing of the aggregated value at the server. The second goal stated above, aggregation composition, is realized inside the tff.aggregators module by structuring the implementation of the aggregation factories such that part (b) can be delegated to another aggregation factory.\nRather than implementing all necessary logic within a single factory class, the implementations are by default focused on a single aspect relevant for aggregation. When needed, this pattern then enables us to replace the building blocks one at a time.\nAn example is the weighted tff.aggregators.MeanFactory. Its implementation multiplies provided values and weights at clients, then sums both weighted values and weights independently, and then divides the sum of weighted values by the sum of weights at the server. Instead of implementing the summations by directly using the tff.federated_sum operator, the summation is delegated to two instances of tff.aggregators.SumFactory.\nSuch structure makes it possible for the two default summations to be replaced by different factories, which realize the sum differently. For example, a tff.aggregators.SecureSumFactory, or a custom implementation of the tff.aggregators.UnweightedAggregationFactory. Conversely, time, tff.aggregators.MeanFactory can itself be an inner aggregation of another factory such as tff.aggregators.clipping_factory, if the values are to be clipped before averaging.\nSee the previous Tuning recommended aggregations for learning tutorial for receommended uses of the composition mechanism using existing factories in the tff.aggregators module.\nBest practices by example\nWe are going to illustrate the tff.aggregators concepts in detail by implementing a simple example task, and make it progressively more general. Another way to learn is to look at the implementation of existing factories.",
"import collections\nimport tensorflow as tf\nimport tensorflow_federated as tff",
"Instead of summing value, the example task is to sum value * 2.0 and then divide the sum by 2.0. The aggregation result is thus mathematically equivalent to directly summing the value, and could be thought of as consisting of three parts: (1) scaling at clients (2) summing across clients (3) unscaling at server.\nNOTE: This task is not necessarily useful in practice. Nevertheless, it is helpful in explaining the underlying concepts.\nFollowing the design explained above, the logic will be implemented as a subclass of tff.aggregators.UnweightedAggregationFactory, which creates appropriate tff.templates.AggregationProcess when given a value_type to aggregate:\nMinimal implementation\nFor the example task, the computations necessary are always the same, so there is no need for using state. It is thus empty, and represented as tff.federated_value((), tff.SERVER). The same holds for measurements, for now.\nThe minimal implementation of the task is thus as follows:",
"class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):\n\n def create(self, value_type):\n @tff.federated_computation()\n def initialize_fn():\n return tff.federated_value((), tff.SERVER)\n\n @tff.federated_computation(initialize_fn.type_signature.result,\n tff.type_at_clients(value_type))\n def next_fn(state, value):\n scaled_value = tff.federated_map(\n tff.tf_computation(lambda x: x * 2.0), value)\n summed_value = tff.federated_sum(scaled_value)\n unscaled_value = tff.federated_map(\n tff.tf_computation(lambda x: x / 2.0), summed_value)\n measurements = tff.federated_value((), tff.SERVER)\n return tff.templates.MeasuredProcessOutput(\n state=state, result=unscaled_value, measurements=measurements)\n\n return tff.templates.AggregationProcess(initialize_fn, next_fn)",
"Whether everything works as expected can be verified with the following code:",
"client_data = [1.0, 2.0, 5.0]\nfactory = ExampleTaskFactory()\naggregation_process = factory.create(tff.TensorType(tf.float32))\nprint(f'Type signatures of the created aggregation process:\\n'\n f' - initialize: {aggregation_process.initialize.type_signature}\\n'\n f' - next: {aggregation_process.next.type_signature}\\n')\n\nstate = aggregation_process.initialize()\noutput = aggregation_process.next(state, client_data)\nprint(f'Aggregation result: {output.result} (expected 8.0)')",
"Statefulness and measurements\nStatefulness is broadly used in TFF to represent computations that are expected to be executed iteratively and change with each iteration. For example, the state of a learning computation contains the weights of the model being learned.\nTo illustrate how to use state in an aggregation computation, we modify the example task. Instead of multiplying value by 2.0, we multiply it by the iteration index - the number of times the aggregation has been executed.\nTo do so, we need a way to keep track of the iteration index, which is achieved through the concept of state. In the initialize_fn, instead of creating an empty state, we initialize the state to be a scalar zero. Then, state can be used in the next_fn in three steps: (1) increment by 1.0, (2) use to multiply value, and (3) return as the new updated state.\nOnce this is done, you may note: But exactly the same code as above can be used to verify all works as expected. How do I know something has actually changed?\nGood question! This is where the concept of measurements becomes useful. In general, measurements can report any value relevant to a single execution of the next function, which could be used for monitoring. In this case, it can be the summed_value from the previous example. That is, the value before the \"unscaling\" step, which should depend on the iteration index. Again, this is not necessarily useful in practice, but illustrates the relevant mechanism.\nThe stateful answer to the task thus looks as follows:",
"class ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):\n\n def create(self, value_type):\n @tff.federated_computation()\n def initialize_fn():\n return tff.federated_value(0.0, tff.SERVER)\n\n @tff.federated_computation(initialize_fn.type_signature.result,\n tff.type_at_clients(value_type))\n def next_fn(state, value):\n new_state = tff.federated_map(\n tff.tf_computation(lambda x: x + 1.0), state)\n state_at_clients = tff.federated_broadcast(new_state)\n scaled_value = tff.federated_map(\n tff.tf_computation(lambda x, y: x * y), (value, state_at_clients))\n summed_value = tff.federated_sum(scaled_value)\n unscaled_value = tff.federated_map(\n tff.tf_computation(lambda x, y: x / y), (summed_value, new_state))\n return tff.templates.MeasuredProcessOutput(\n state=new_state, result=unscaled_value, measurements=summed_value)\n\n return tff.templates.AggregationProcess(initialize_fn, next_fn)",
"Note that the state that comes into next_fn as input is placed at server. In order to use it at clients, it first needs to be communicated, which is achieved using the tff.federated_broadcast operator.\nTo verify all works as expected, we can now look at the reported measurements, which should be different with each round of execution, even if run with the same client_data.",
"client_data = [1.0, 2.0, 5.0]\nfactory = ExampleTaskFactory()\naggregation_process = factory.create(tff.TensorType(tf.float32))\nprint(f'Type signatures of the created aggregation process:\\n'\n f' - initialize: {aggregation_process.initialize.type_signature}\\n'\n f' - next: {aggregation_process.next.type_signature}\\n')\n\nstate = aggregation_process.initialize()\n\noutput = aggregation_process.next(state, client_data)\nprint('| Round #1')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 1)')\n\noutput = aggregation_process.next(output.state, client_data)\nprint('\\n| Round #2')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 2)')\n\noutput = aggregation_process.next(output.state, client_data)\nprint('\\n| Round #3')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| Aggregation measurements: {output.measurements} (expected 8.0 * 3)')",
"Structured types\nThe model weights of a model trained in federated learning are usually represented as a collection of tensors, rather than a single tensor. In TFF, this is represented as tff.StructType and generally useful aggregation factories need to be able to accept the structured types.\nHowever, in the above examples, we only worked with a tff.TensorType object. If we try to use the previous factory to create the aggregation process with a tff.StructType([(tf.float32, (2,)), (tf.float32, (3,))]), we get a strange error because TensorFlow will try to multiply a tf.Tensor and a list.\nThe problem is that instead of multiplying the structure of tensors by a constant, we need to multiply each tensor in the structure by a constant. The usual solution to this problem is to use the tf.nest module inside of the created tff.tf_computations.\nThe version of the previous ExampleTaskFactory compatible with structured types thus looks as follows:",
"@tff.tf_computation()\ndef scale(value, factor):\n return tf.nest.map_structure(lambda x: x * factor, value)\n\n@tff.tf_computation()\ndef unscale(value, factor):\n return tf.nest.map_structure(lambda x: x / factor, value)\n\n@tff.tf_computation()\ndef add_one(value):\n return value + 1.0\n\nclass ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):\n\n def create(self, value_type):\n @tff.federated_computation()\n def initialize_fn():\n return tff.federated_value(0.0, tff.SERVER)\n\n @tff.federated_computation(initialize_fn.type_signature.result,\n tff.type_at_clients(value_type))\n def next_fn(state, value):\n new_state = tff.federated_map(add_one, state)\n state_at_clients = tff.federated_broadcast(new_state)\n scaled_value = tff.federated_map(scale, (value, state_at_clients))\n summed_value = tff.federated_sum(scaled_value)\n unscaled_value = tff.federated_map(unscale, (summed_value, new_state))\n return tff.templates.MeasuredProcessOutput(\n state=new_state, result=unscaled_value, measurements=summed_value)\n\n return tff.templates.AggregationProcess(initialize_fn, next_fn)",
"This example highlights a pattern which may be useful to follow when structuring TFF code. When not dealing with very simple operations, the code becomes more legible when the tff.tf_computations that will be used as building blocks inside a tff.federated_computation are created in a separate place. Inside of the tff.federated_computation, these building blocks are only connected using the intrinsic operators.\nTo verify it works as expected:",
"client_data = [[[1.0, 2.0], [3.0, 4.0, 5.0]],\n [[1.0, 1.0], [3.0, 0.0, -5.0]]]\nfactory = ExampleTaskFactory()\naggregation_process = factory.create(\n tff.to_type([(tf.float32, (2,)), (tf.float32, (3,))]))\nprint(f'Type signatures of the created aggregation process:\\n'\n f' - initialize: {aggregation_process.initialize.type_signature}\\n'\n f' - next: {aggregation_process.next.type_signature}\\n')\n\nstate = aggregation_process.initialize()\noutput = aggregation_process.next(state, client_data)\nprint(f'Aggregation result: [{output.result[0]}, {output.result[1]}]\\n'\n f' Expected: [[2. 3.], [6. 4. 0.]]')",
"Inner aggregations\nThe final step is to optionally enable delegation of the actual aggregation to other factories, in order to allow easy composition of different aggregation techniques.\nThis is achieved by creating an optional inner_factory argument in the constructor of our ExampleTaskFactory. If not specified, tff.aggregators.SumFactory is used, which applies the tff.federated_sum operator used directly in the previous section.\nWhen create is called, we can first call create of the inner_factory to create the inner aggregation process with the same value_type.\nThe state of our process returned by initialize_fn is a composition of two parts: the state created by \"this\" process, and the state of the just created inner process.\nThe implementation of the next_fn differs in that the actual aggregation is delegated to the next function of the inner process, and in how the final output is composed. The state is again composed of \"this\" and \"inner\" state, and measurements are composed in a similar manner as an OrderedDict.\nThe following is an implementation of such pattern.",
"@tff.tf_computation()\ndef scale(value, factor):\n return tf.nest.map_structure(lambda x: x * factor, value)\n\n@tff.tf_computation()\ndef unscale(value, factor):\n return tf.nest.map_structure(lambda x: x / factor, value)\n\n@tff.tf_computation()\ndef add_one(value):\n return value + 1.0\n\nclass ExampleTaskFactory(tff.aggregators.UnweightedAggregationFactory):\n\n def __init__(self, inner_factory=None):\n if inner_factory is None:\n inner_factory = tff.aggregators.SumFactory()\n self._inner_factory = inner_factory\n\n def create(self, value_type):\n inner_process = self._inner_factory.create(value_type)\n\n @tff.federated_computation()\n def initialize_fn():\n my_state = tff.federated_value(0.0, tff.SERVER)\n inner_state = inner_process.initialize()\n return tff.federated_zip((my_state, inner_state))\n\n @tff.federated_computation(initialize_fn.type_signature.result,\n tff.type_at_clients(value_type))\n def next_fn(state, value):\n my_state, inner_state = state\n my_new_state = tff.federated_map(add_one, my_state)\n my_state_at_clients = tff.federated_broadcast(my_new_state)\n scaled_value = tff.federated_map(scale, (value, my_state_at_clients))\n\n # Delegation to an inner factory, returning values placed at SERVER.\n inner_output = inner_process.next(inner_state, scaled_value)\n\n unscaled_value = tff.federated_map(unscale, (inner_output.result, my_new_state))\n\n new_state = tff.federated_zip((my_new_state, inner_output.state))\n measurements = tff.federated_zip(\n collections.OrderedDict(\n scaled_value=inner_output.result,\n example_task=inner_output.measurements))\n\n return tff.templates.MeasuredProcessOutput(\n state=new_state, result=unscaled_value, measurements=measurements)\n\n return tff.templates.AggregationProcess(initialize_fn, next_fn)",
"When delegating to the inner_process.next function, the return structure we get is a tff.templates.MeasuredProcessOutput, with the same three fields - state, result and measurements. When creating the overall return structure of the composed aggregation process, the state and measurements fields should be generally composed and returned together. In contrast, the result field corresponds to the value being aggregated and instead \"flows through\" the composed aggregation.\nThe state object should be seen as an implementation detail of the factory, and thus the composition could be of any structure. However, measurements correspond to values to be reported to the user at some point. Therefore, we recommend to use OrderedDict, with composed naming such that it would be clear where in an composition does a reported metric comes from.\nNote also the use of the tff.federated_zip operator. The state object contolled by the created process should be a tff.FederatedType. If we had instead returned (this_state, inner_state) in the initialize_fn, its return type signature would be a tff.StructType containing a 2-tuple of tff.FederatedTypes. The use of tff.federated_zip \"lifts\" the tff.FederatedType to the top level. This is similarly used in the next_fn when preparing the state and measurements to be returned.\nFinally, we can see how this can be used with the default inner aggregation:",
"client_data = [1.0, 2.0, 5.0]\nfactory = ExampleTaskFactory()\naggregation_process = factory.create(tff.TensorType(tf.float32))\nstate = aggregation_process.initialize()\n\noutput = aggregation_process.next(state, client_data)\nprint('| Round #1')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| measurements[\\'scaled_value\\']: {output.measurements[\"scaled_value\"]}')\nprint(f'| measurements[\\'example_task\\']: {output.measurements[\"example_task\"]}')\n\noutput = aggregation_process.next(output.state, client_data)\nprint('\\n| Round #2')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| measurements[\\'scaled_value\\']: {output.measurements[\"scaled_value\"]}')\nprint(f'| measurements[\\'example_task\\']: {output.measurements[\"example_task\"]}')",
"... and with a different inner aggregation. For example, an ExampleTaskFactory:",
"client_data = [1.0, 2.0, 5.0]\n# Note the inner delegation can be to any UnweightedAggregaionFactory.\n# In this case, each factory creates process that multiplies by the iteration\n# index (1, 2, 3, ...), thus their combination multiplies by (1, 4, 9, ...).\nfactory = ExampleTaskFactory(ExampleTaskFactory())\naggregation_process = factory.create(tff.TensorType(tf.float32))\nstate = aggregation_process.initialize()\n\noutput = aggregation_process.next(state, client_data)\nprint('| Round #1')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| measurements[\\'scaled_value\\']: {output.measurements[\"scaled_value\"]}')\nprint(f'| measurements[\\'example_task\\']: {output.measurements[\"example_task\"]}')\n\noutput = aggregation_process.next(output.state, client_data)\nprint('\\n| Round #2')\nprint(f'| Aggregation result: {output.result} (expected 8.0)')\nprint(f'| measurements[\\'scaled_value\\']: {output.measurements[\"scaled_value\"]}')\nprint(f'| measurements[\\'example_task\\']: {output.measurements[\"example_task\"]}')",
"Summary\nIn this tutorial, we explained the best practices to follow in order to create a general-purpose aggregation building block, represented as an aggregation factory. The generality comes through the design intent in two ways:\n\nParameterized computations. Aggregation is an independent building block that can be plugged into other TFF modules designed to work with tff.aggregators to parameterize their necessary aggregation, such as tff.learning.build_federated_averaging_process.\nAggregation composition. An aggregation building block can be composed with other aggregation building blocks to create more complex composite aggregations."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
nikbearbrown/Deep_Learning
|
NEU/Sai_Raghuram_Kothapalli_DL/Hands_on_Keras_Tutorial.ipynb
|
mit
|
[
"import tensorflow as tf\n\nimport numpy as np\n\n!pip install keras\n\nimport keras\n\nfrom keras.models import Sequential\n\nmodel = Sequential()",
"The Sequential model is a linear stack of layers.\nYou can create a Sequential model by passing a list of layer instances to the constructor:",
"from keras.models import Sequential\nfrom keras.layers import Dense, Activation\n\nmodel = Sequential([\n Dense(32, input_shape=(784,)),\n Activation('relu'),\n Dense(10),\n Activation('softmax'),\n])",
"You can also simply add layers via the .add() method:",
"model = Sequential()\nmodel.add(Dense(32, input_dim=784))\nmodel.add(Activation('relu'))\n",
"Specifying the input shape\nThe model needs to know what input shape it should expect. For this reason, the first layer in a Sequential model (and only the first, because following layers can do automatic shape inference) needs to receive information about its input shape. There are several possible ways to do this:\nPass an input_shape argument to the first layer. This is a shape tuple (a tuple of integers or None entries, where None indicates that any positive integer may be expected). In input_shape, the batch dimension is not included.\nSome 2D layers, such as Dense, support the specification of their input shape via the argument input_dim, and some 3D temporal layers support the arguments input_dim and input_length.\nIf you ever need to specify a fixed batch size for your inputs (this is useful for stateful recurrent networks), you can pass a batch_size argument to a layer. If you pass both batch_size=32 and input_shape=(6, 8) to a layer, it will then expect every batch of inputs to have the batch shape (32, 6, 8).\nAs such, the following snippets are strictly equivalent:",
"model = Sequential()\nmodel.add(Dense(32, input_shape=(784,)))\nmodel = Sequential()\nmodel.add(Dense(32, input_dim=784))\n",
"Compilation\nBefore training a model, you need to configure the learning process, which is done via the compile method. It receives three arguments:\n\nAn optimizer. This could be the string identifier of an existing optimizer (such as rmsprop or adagrad), or an instance of the Optimizer class. See: optimizers.\nA loss function. This is the objective that the model will try to minimize. It can be the string identifier of an existing loss function (such as categorical_crossentropy or mse), or it can be an objective function. See: losses.\nA list of metrics. For any classification problem you will want to set this to metrics=['accuracy']. A metric could be the string identifier of an existing metric or a custom metric function.",
"# For a multi-class classification problem\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# For a binary classification problem\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n# For a mean squared error regression problem\nmodel.compile(optimizer='rmsprop',\n loss='mse')\n\n# For custom metrics\nimport keras.backend as K\n\ndef mean_pred(y_true, y_pred):\n return K.mean(y_pred)\n\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy', mean_pred])\n",
"Training\nKeras models are trained on Numpy arrays of input data and labels. For training a model, you will typically use the fit function. Read its documentation here.",
"# For a single-input model with 2 classes (binary classification):\n\nmodel = Sequential()\nmodel.add(Dense(32, activation='relu', input_dim=100))\nmodel.add(Dense(1, activation='sigmoid'))\nmodel.compile(optimizer='rmsprop',\n loss='binary_crossentropy',\n metrics=['accuracy'])\n\n# Generate dummy data\nimport numpy as np\ndata = np.random.random((1000, 100))\nlabels = np.random.randint(2, size=(1000, 1))\n\n# Train the model, iterating on the data in batches of 32 samples\nmodel.fit(data, labels, epochs=10, batch_size=32)\n\n\n# For a single-input model with 10 classes (categorical classification):\n\nmodel = Sequential()\nmodel.add(Dense(32, activation='relu', input_dim=100))\nmodel.add(Dense(10, activation='softmax'))\nmodel.compile(optimizer='rmsprop',\n loss='categorical_crossentropy',\n metrics=['accuracy'])\n\n# Generate dummy data\nimport numpy as np\ndata = np.random.random((1000, 100))\nlabels = np.random.randint(10, size=(1000, 1))\n\n# Convert labels to categorical one-hot encoding\none_hot_labels = keras.utils.to_categorical(labels, num_classes=10)\n\n# Train the model, iterating on the data in batches of 32 samples\nmodel.fit(data, one_hot_labels, epochs=10, batch_size=32)\n",
"Examples\nHere are a few examples to get you started!\nMultilayer Perceptron (MLP) for multi-class softmax classification:",
"import keras\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout, Activation\nfrom keras.optimizers import SGD\n\n# Generate dummy data\nimport numpy as np\nx_train = np.random.random((1000, 20))\ny_train = keras.utils.to_categorical(np.random.randint(10, size=(1000, 1)), num_classes=10)\nx_test = np.random.random((100, 20))\ny_test = keras.utils.to_categorical(np.random.randint(10, size=(100, 1)), num_classes=10)\n\nmodel = Sequential()\n# Dense(64) is a fully-connected layer with 64 hidden units.\n# in the first layer, you must specify the expected input data shape:\n# here, 20-dimensional vectors.\nmodel.add(Dense(64, activation='relu', input_dim=20))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(10, activation='softmax'))\n\nsgd = SGD(lr=0.01, decay=1e-6, momentum=0.9, nesterov=True)\nmodel.compile(loss='categorical_crossentropy',\n optimizer=sgd,\n metrics=['accuracy'])\n\nmodel.fit(x_train, y_train,\n epochs=20,\n batch_size=128)\nscore = model.evaluate(x_test, y_test, batch_size=128)\n\n",
"MLP for binary classification:",
"import numpy as np\nfrom keras.models import Sequential\nfrom keras.layers import Dense, Dropout\n\n# Generate dummy data\nx_train = np.random.random((1000, 20))\ny_train = np.random.randint(2, size=(1000, 1))\nx_test = np.random.random((100, 20))\ny_test = np.random.randint(2, size=(100, 1))\n\nmodel = Sequential()\nmodel.add(Dense(64, input_dim=20, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(64, activation='relu'))\nmodel.add(Dropout(0.5))\nmodel.add(Dense(1, activation='sigmoid'))\n\nmodel.compile(loss='binary_crossentropy',\n optimizer='rmsprop',\n metrics=['accuracy'])\n\nmodel.fit(x_train, y_train,\n epochs=20,\n batch_size=128)\nscore = model.evaluate(x_test, y_test, batch_size=128)\n"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
SaTa999/pyPanair
|
examples/tutorial2/tutorial2.ipynb
|
mit
|
[
"pyPanair Tutorial#2 Tapered Wing\nIn this tutorial we will perform an analysis of a tapered wing.\nThe wing is defined by five different wing sections at $\\eta=0.000, 0.126, 0.400, 0.700, 1.000$. \nBelow are the wing planform and airfoil stack, respectively.\n(The wing is based on the DLR-F4<sup>1</sup>)",
"%matplotlib notebook\nimport matplotlib.pyplot as plt\nfrom pyPanair.preprocess import wgs_creator\nfor eta in (\"0000\", \"0126\", \"0400\", \"0700\", \"1000\"):\n af = wgs_creator.read_airfoil(\"eta{}.csv\".format(eta)) \n plt.plot(af[:,0], af[:,2], \"k-\", lw=1.)\nplt.plot((0.5049,), (0,), \"ro\", label=\"Center of rotation\")\nplt.legend(loc=\"best\")\nplt.xlabel(\"$x$ [m]\")\nplt.xlabel(\"$z$ [m]\")\nplt.show()",
"1.Defining the geometry\nJust as we have done in tutorial 1, we will use the wgs_creator module to define the geometry of the wing.\nFirst off, we initialize a LaWGS object.",
"from pyPanair.preprocess import wgs_creator\nwgs = wgs_creator.LaWGS(\"tapered_wing\")",
"Next, we create a Line object that defines the coordinates of the airfoil at the root of the wing.\nTo do so, we will read a csv file that contains the coordinates of the airfoil, using the read_airfoil function. \nFive csv files, eta0000.csv, eta0126.csv, eta0400.csv, eta0700.csv, and eta1000.csv have been prepared for this tutorial. \nBefore creating the Line object, we will take a quick view at these files.\nFor example, eta0000.csv looks like ...",
"import pandas as pd\npd.set_option(\"display.max_rows\", 10)\npd.read_csv(\"eta0000.csv\")",
"The first and second columns xup and zup represent the xz-coordinates of the upper surface of the airfoil.\nThe third and fourth columns xlow and zlow represent the xz-coordinates of the lower surface of the airfoil. \nThe csv file must follow four rules:\n1. Data in the first row correspond to the xz-coordinates of the leading edge of the airfoil\n2. Data in the last row correspond to the xz-coordinates of the trailing edge of the airfoil\n3. For the first row, the coordinates (xup, zup) and (xlow, zlow) are the same\n4. For the last row, the coordinates (xup, zup) and (xlow, zlow) are the same (i.e. the airfoil has a sharp TE) \nNow we shall create a Line object for the root of the wing.",
"wingsection1 = wgs_creator.read_airfoil(\"eta0000.csv\", y_coordinate=0.)",
"The first variable specifies the name of the csv file.\nThe y_coordinate variable defines the y-coordinate of the points included in the Line.\nLine objects for the remaining four wing sections can be created in the same way.",
"wingsection2 = wgs_creator.read_airfoil(\"eta0126.csv\", y_coordinate=0.074211)\nwingsection3 = wgs_creator.read_airfoil(\"eta0400.csv\", y_coordinate=0.235051)\nwingsection4 = wgs_creator.read_airfoil(\"eta0700.csv\", y_coordinate=0.410350)\nwingsection5 = wgs_creator.read_airfoil(\"eta1000.csv\", y_coordinate=0.585650)",
"Next, we create four networks by linearly interpolating these wing sections.",
"wingnet1 = wingsection1.linspace(wingsection2, num=4)\nwingnet2 = wingsection2.linspace(wingsection3, num=8)\nwingnet3 = wingsection3.linspace(wingsection4, num=9)\nwingnet4 = wingsection4.linspace(wingsection5, num=9)",
"Then, we concatenate the networks using the concat_row method.",
"wing = wingnet1.concat_row((wingnet2, wingnet3, wingnet4))",
"The concatenated network is displayed below.",
"wing.plot_wireframe()",
"After creating the Network for the wing, we create networks for the wingtip and wake.",
"wingtip_up, wingtip_low = wingsection5.split_half()\nwingtip_low = wingtip_low.flip()\nwingtip = wingtip_up.linspace(wingtip_low, num=5)\n\nwake_length = 50 * 0.1412\nwingwake = wing.make_wake(edge_number=3, wake_length=wake_length)",
"Next, the Networks will be registered to the wgs object.",
"wgs.append_network(\"wing\", wing, 1)\nwgs.append_network(\"wingtip\", wingtip, 1)\nwgs.append_network(\"wingwake\", wingwake, 18)",
"Then, we create a stl file to check that there are no errors in the model.",
"wgs.create_stl()",
"Last, we create input files for panin",
"wgs.create_aux(alpha=(-2, 0, 2), mach=0.6, cbar=0.1412, span=1.1714, sref=0.1454, xref=0.5049, zref=0.)\nwgs.create_wgs()",
"2. Analysis\nThe analysis can be done in the same way as tutorial 1.\nPlace panair, panin, tapered_wing.aux, and tapered_wing.wgs in the same directory, \nand run panin and panair.\nbash\n$ ./panin\n Prepare input for PanAir\n Version 1.0 (4Jan2000)\n Ralph L. Carmichael, Public Domain Aeronautical Software\n Enter the name of the auxiliary file: \ntapered_wing.aux\n 10 records copied from auxiliary file.\n 9 records in the internal data file.\n Geometry data to be read from tapered_wing.wgs \n Reading WGS file...\n Reading network wing\n Reading network wingtip\n Reading network wingwake\n Reading input file instructions...\n Command 1 MACH 0.6\n Command 11 ALPHA -2 0 2\n Command 6 cbar 0.1412\n Command 7 span 1.1714\n Command 2 sref 0.1454\n Command 3 xref 0.5049\n Command 5 zref 0.0\n Command 35 BOUN 1 1 18\n Writing PanAir input file...\n Files a502.in added to your directory.\n Also, file panin.dbg\n Normal termination of panin, version 1.0 (4Jan2000)\n Normal termination of panin\nbash\n$ ./panair\n Panair High Order Panel Code, Version 15.0 (10 December 2009)\n Enter name of input file:\na502.in\nAfter the analysis finishes, place panair.out, agps, and ffmf in the tutorial2 directory.\n3. Visualization\nVisualization of the results can be done in the same manner as tutorial 2.",
"from pyPanair.postprocess import write_vtk\nwrite_vtk(n_wake=1)\n\nfrom pyPanair.postprocess import calc_section_force\ncalc_section_force(aoa=2, mac=0.1412, rot_center=(0.5049,0,0), casenum=3, networknum=1)\n\nsection_force = pd.read_csv(\"section_force.csv\")\nsection_force\n\nplt.plot(section_force.pos / 0.5857, section_force.cl * section_force.chord, \"s\", mfc=\"None\", mec=\"b\")\nplt.xlabel(\"spanwise position [normalized]\")\nplt.ylabel(\"cl * chord\")\nplt.grid()\nplt.show()",
"The ffmf file can be parsed using the read_ffmf and write_ffmf methods.",
"from pyPanair.postprocess import write_ffmf, read_ffmf\nread_ffmf()\n\nwrite_ffmf()",
"The read_ffmf method parses the ffmf file and converts it to a pandas DataFrame.\nThe write_ffmf will convert the ffmf file to a csv file. (The default name of the converted file is ffmf.csv)\nThis is the end of tutorial 2.\nReference\n\nRedeker, G., \"A selection of experimental test cases for the validation of CFD codes,\"\n AGARD AR-303, 1994."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/dwd/cmip6/models/sandbox-2/land.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Land\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: SANDBOX-2\nTopic: Land\nSub-Topics: Soil, Snow, Vegetation, Energy Balance, Carbon Cycle, Nitrogen Cycle, River Routing, Lakes. \nProperties: 154 (96 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'sandbox-2', 'land')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Conservation Properties\n3. Key Properties --> Timestepping Framework\n4. Key Properties --> Software Properties\n5. Grid\n6. Grid --> Horizontal\n7. Grid --> Vertical\n8. Soil\n9. Soil --> Soil Map\n10. Soil --> Snow Free Albedo\n11. Soil --> Hydrology\n12. Soil --> Hydrology --> Freezing\n13. Soil --> Hydrology --> Drainage\n14. Soil --> Heat Treatment\n15. Snow\n16. Snow --> Snow Albedo\n17. Vegetation\n18. Energy Balance\n19. Carbon Cycle\n20. Carbon Cycle --> Vegetation\n21. Carbon Cycle --> Vegetation --> Photosynthesis\n22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\n23. Carbon Cycle --> Vegetation --> Allocation\n24. Carbon Cycle --> Vegetation --> Phenology\n25. Carbon Cycle --> Vegetation --> Mortality\n26. Carbon Cycle --> Litter\n27. Carbon Cycle --> Soil\n28. Carbon Cycle --> Permafrost Carbon\n29. Nitrogen Cycle\n30. River Routing\n31. River Routing --> Oceanic Discharge\n32. Lakes\n33. Lakes --> Method\n34. Lakes --> Wetlands \n1. Key Properties\nLand surface key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of land surface model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of land surface model code (e.g. MOSES2.2)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the processes modelled (e.g. dymanic vegation, prognostic albedo, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.4. Land Atmosphere Flux Exchanges\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nFluxes exchanged with the atmopshere.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_atmosphere_flux_exchanges') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"water\" \n# \"energy\" \n# \"carbon\" \n# \"nitrogen\" \n# \"phospherous\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Atmospheric Coupling Treatment\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of land surface coupling with the Atmosphere model component, which may be different for different quantities (e.g. dust: semi-implicit, water vapour: explicit)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.atmospheric_coupling_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.6. Land Cover\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTypes of land cover defined in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"bare soil\" \n# \"urban\" \n# \"lake\" \n# \"land ice\" \n# \"lake ice\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.7. Land Cover Change\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how land cover change is managed (e.g. the use of net or gross transitions)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.land_cover_change') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.8. Tiling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general tiling procedure used in the land surface (if any). Include treatment of physiography, land/sea, (dynamic) vegetation coverage and orography/roughness",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Conservation Properties\nTODO\n2.1. Energy\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how energy is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.energy') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.2. Water\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how water is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.water') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2.3. Carbon\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how carbon is conserved globally and to what level (e.g. within X [units]/year)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.conservation_properties.carbon') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Timestepping Framework\nTODO\n3.1. Timestep Dependent On Atmosphere\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a time step dependent on the frequency of atmosphere coupling?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestep_dependent_on_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOverall timestep of land surface model (i.e. time between calls)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3.3. Timestepping Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of time stepping method and associated time step(s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.timestepping_framework.timestepping_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of land surface code\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Grid\nLand surface grid\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the grid in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Grid --> Horizontal\nThe horizontal grid in the land surface\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the horizontal grid (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Matches Atmosphere Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the horizontal grid match the atmosphere?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.horizontal.matches_atmosphere_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"7. Grid --> Vertical\nThe vertical grid in the soil\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general structure of the vertical grid in the soil (not including any tiling)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Total Depth\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe total depth of the soil (in metres)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.grid.vertical.total_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8. Soil\nLand surface soil\n8.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of soil in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Heat Water Coupling\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the coupling between heat and water in the soil",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_water_coupling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Number Of Soil layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.number_of_soil layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"8.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the soil scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Soil --> Soil Map\nKey properties of the land surface soil map\n9.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of soil map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.2. Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil structure map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.3. Texture\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil texture map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.texture') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.4. Organic Matter\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil organic matter map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.organic_matter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.5. Albedo\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil albedo map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.6. Water Table\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil water table map, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.water_table') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9.7. Continuously Varying Soil Depth\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the soil properties vary continuously with depth?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.continuously_varying_soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9.8. Soil Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil depth map",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.soil_map.soil_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Soil --> Snow Free Albedo\nTODO\n10.1. Prognostic\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow free albedo prognostic?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.prognostic') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"10.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf prognostic, describe the dependancies on snow free albedo calculations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"soil humidity\" \n# \"vegetation state\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.3. Direct Diffuse\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nIf prognostic, describe the distinction between direct and diffuse albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.direct_diffuse') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"distinction between direct and diffuse albedo\" \n# \"no distinction between direct and diffuse albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.4. Number Of Wavelength Bands\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf prognostic, enter the number of wavelength bands used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.snow_free_albedo.number_of_wavelength_bands') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11. Soil --> Hydrology\nKey properties of the land surface soil hydrology\n11.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of the soil hydrological model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river soil hydrology in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil hydrology tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Number Of Ground Water Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of soil layers that may contain water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.number_of_ground_water_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.6. Lateral Connectivity\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe the lateral connectivity between tiles",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.lateral_connectivity') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"perfect connectivity\" \n# \"Darcian flow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.7. Method\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nThe hydrological dynamics scheme in the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Bucket\" \n# \"Force-restore\" \n# \"Choisnel\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Soil --> Hydrology --> Freezing\nTODO\n12.1. Number Of Ground Ice Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nHow many soil layers may contain ground ice",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.number_of_ground_ice_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.2. Ice Storage Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of ice storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.ice_storage_method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.3. Permafrost\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the treatment of permafrost, if any, within the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.freezing.permafrost') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Soil --> Hydrology --> Drainage\nTODO\n13.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral describe how drainage is included in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13.2. Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDifferent types of runoff represented by the land surface model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.hydrology.drainage.types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Gravity drainage\" \n# \"Horton mechanism\" \n# \"topmodel-based\" \n# \"Dunne mechanism\" \n# \"Lateral subsurface flow\" \n# \"Baseflow from groundwater\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Soil --> Heat Treatment\nTODO\n14.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral description of how heat treatment properties are defined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of soil heat scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.3. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the soil heat treatment tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.4. Vertical Discretisation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the typical vertical discretisation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.vertical_discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.5. Heat Storage\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify the method of heat storage",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.heat_storage') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Force-restore\" \n# \"Explicit diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.6. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe processes included in the treatment of soil heat",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.soil.heat_treatment.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"soil moisture freeze-thaw\" \n# \"coupling with snow temperature\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15. Snow\nLand surface snow\n15.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of snow in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Number Of Snow Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe number of snow levels used in the land surface scheme/model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.number_of_snow_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15.4. Density\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow density",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.5. Water Equivalent\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the snow water equivalent",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.water_equivalent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.6. Heat Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of the heat content of snow",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.heat_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.7. Temperature\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow temperature",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.temperature') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.8. Liquid Water Content\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescription of the treatment of snow liquid water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.liquid_water_content') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.9. Snow Cover Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify cover fractions used in the surface snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_cover_fractions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"ground snow fraction\" \n# \"vegetation snow fraction\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.10. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSnow related processes in the land surface scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"snow interception\" \n# \"snow melting\" \n# \"snow freezing\" \n# \"blowing snow\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.11. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the snow scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Snow --> Snow Albedo\nTODO\n16.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of snow-covered land albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"prescribed\" \n# \"constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Functions\nIs Required: FALSE Type: ENUM Cardinality: 0.N\n*If prognostic, *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.snow.snow_albedo.functions') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation type\" \n# \"snow age\" \n# \"snow density\" \n# \"snow grain type\" \n# \"aerosol deposition\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17. Vegetation\nLand surface vegetation\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vegetation in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of vegetation scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Dynamic Vegetation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there dynamic evolution of vegetation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.dynamic_vegetation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.4. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vegetation tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.5. Vegetation Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nVegetation classification used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"vegetation types\" \n# \"biome types\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.6. Vegetation Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of vegetation types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"broadleaf tree\" \n# \"needleleaf tree\" \n# \"C3 grass\" \n# \"C4 grass\" \n# \"vegetated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.7. Biome Types\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nList of biome types in the classification, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biome_types') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"evergreen needleleaf forest\" \n# \"evergreen broadleaf forest\" \n# \"deciduous needleleaf forest\" \n# \"deciduous broadleaf forest\" \n# \"mixed forest\" \n# \"woodland\" \n# \"wooded grassland\" \n# \"closed shrubland\" \n# \"opne shrubland\" \n# \"grassland\" \n# \"cropland\" \n# \"wetlands\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.8. Vegetation Time Variation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow the vegetation fractions in each tile are varying with time",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_time_variation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed (not varying)\" \n# \"prescribed (varying from files)\" \n# \"dynamical (varying from simulation)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.9. Vegetation Map\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf vegetation fractions are not dynamically updated , describe the vegetation map used (common name and reference, if possible)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.vegetation_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.10. Interception\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs vegetation interception of rainwater represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.interception') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17.11. Phenology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic (vegetation map)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.12. Phenology Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation phenology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.phenology_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.13. Leaf Area Index\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prescribed\" \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.14. Leaf Area Index Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of leaf area index",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.leaf_area_index_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.15. Biomass\nIs Required: TRUE Type: ENUM Cardinality: 1.1\n*Treatment of vegetation biomass *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.16. Biomass Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biomass",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biomass_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.17. Biogeography\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTreatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.18. Biogeography Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation biogeography",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.biogeography_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.19. Stomatal Resistance\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify what the vegetation stomatal resistance depends on",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"light\" \n# \"temperature\" \n# \"water availability\" \n# \"CO2\" \n# \"O3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.20. Stomatal Resistance Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of the treatment of vegetation stomatal resistance",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.stomatal_resistance_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17.21. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the vegetation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.vegetation.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Energy Balance\nLand surface energy balance\n18.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of energy balance in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the energy balance tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. Number Of Surface Temperatures\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nThe maximum number of distinct surface temperatures in a grid cell (for example, each subgrid tile may have its own temperature)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.number_of_surface_temperatures') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.4. Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nSpecify the formulation method for land surface evaporation, from soil and vegetation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"alpha\" \n# \"beta\" \n# \"combined\" \n# \"Monteith potential evaporation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.5. Processes\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nDescribe which processes are included in the energy balance scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.energy_balance.processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"transpiration\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19. Carbon Cycle\nLand surface carbon cycle\n19.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of carbon cycle in land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the carbon cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of carbon cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Anthropogenic Carbon\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nDescribe the treament of the anthropogenic carbon pool",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.anthropogenic_carbon') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"grand slam protocol\" \n# \"residence time\" \n# \"decay time\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.5. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the carbon scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Carbon Cycle --> Vegetation\nTODO\n20.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"20.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.3. Forest Stand Dynamics\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of forest stand dyanmics",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.forest_stand_dynamics') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Carbon Cycle --> Vegetation --> Photosynthesis\nTODO\n21.1. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for photosynthesis (e.g. type of photosynthesis, distinction between C3 and C4 grasses, Nitrogen depencence, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.photosynthesis.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Carbon Cycle --> Vegetation --> Autotrophic Respiration\nTODO\n22.1. Maintainance Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for maintainence respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.maintainance_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.2. Growth Respiration\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the general method used for growth respiration",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.autotrophic_respiration.growth_respiration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Carbon Cycle --> Vegetation --> Allocation\nTODO\n23.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the allocation scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.2. Allocation Bins\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify distinct carbon bins used in allocation",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_bins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"leaves + stems + roots\" \n# \"leaves + stems + roots (leafy + woody)\" \n# \"leaves + fine roots + coarse roots + stems\" \n# \"whole plant (no distinction)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.3. Allocation Fractions\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe how the fractions of allocation are calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.allocation.allocation_fractions') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"fixed\" \n# \"function of vegetation type\" \n# \"function of plant allometry\" \n# \"explicitly calculated\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24. Carbon Cycle --> Vegetation --> Phenology\nTODO\n24.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the phenology scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.phenology.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Carbon Cycle --> Vegetation --> Mortality\nTODO\n25.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the general principle behind the mortality scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.vegetation.mortality.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Carbon Cycle --> Litter\nTODO\n26.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.litter.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Carbon Cycle --> Soil\nTODO\n27.1. Number Of Carbon Pools\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.number_of_carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.2. Carbon Pools\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the carbon pools used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.carbon_pools') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Method\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the general method used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.soil.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Carbon Cycle --> Permafrost Carbon\nTODO\n28.1. Is Permafrost Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs permafrost included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.is_permafrost_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.2. Emitted Greenhouse Gases\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the GHGs emitted",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.emitted_greenhouse_gases') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.3. Decomposition\nIs Required: FALSE Type: STRING Cardinality: 0.1\nList the decomposition methods used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.decomposition') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28.4. Impact On Soil Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the impact of permafrost on soil properties",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.carbon_cycle.permafrost_carbon.impact_on_soil_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Nitrogen Cycle\nLand surface nitrogen cycle\n29.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of the nitrogen cycle in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the notrogen cycle tiling, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of nitrogen cycle in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"29.4. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the nitrogen scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.nitrogen_cycle.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30. River Routing\nLand surface river routing\n30.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of river routing in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.2. Tiling\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the river routing, if any.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.tiling') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of river routing scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Grid Inherited From Land Surface\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the grid inherited from land surface?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_inherited_from_land_surface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.5. Grid Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nGeneral description of grid, if not inherited from land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.grid_description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.6. Number Of Reservoirs\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nEnter the number of reservoirs",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.number_of_reservoirs') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.7. Water Re Evaporation\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nTODO",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.water_re_evaporation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"flood plains\" \n# \"irrigation\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.8. Coupled To Atmosphere\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nIs river routing coupled to the atmosphere model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_atmosphere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30.9. Coupled To Land\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the coupling between land and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.coupled_to_land') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.10. Quantities Exchanged With Atmosphere\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf couple to atmosphere, which quantities are exchanged between river routing and the atmosphere model components?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.quantities_exchanged_with_atmosphere') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.11. Basin Flow Direction Map\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of basin flow direction map is being used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.basin_flow_direction_map') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"present day\" \n# \"adapted for other periods\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.12. Flooding\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the representation of flooding, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.flooding') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"30.13. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the river routing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. River Routing --> Oceanic Discharge\nTODO\n31.1. Discharge Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSpecify how rivers are discharged to the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.discharge_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"direct (large rivers)\" \n# \"diffuse\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Quantities Transported\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nQuantities that are exchanged from river-routing to the ocean model component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.river_routing.oceanic_discharge.quantities_transported') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32. Lakes\nLand surface lakes\n32.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lakes in the land surface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.2. Coupling With Rivers\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre lakes coupled to the river routing model component?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.coupling_with_rivers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.3. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTime step of lake scheme in seconds",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"32.4. Quantities Exchanged With Rivers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nIf coupling with rivers, which quantities are exchanged between the lakes and rivers",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.quantities_exchanged_with_rivers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"heat\" \n# \"water\" \n# \"tracers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.5. Vertical Grid\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the vertical grid of lakes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.vertical_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.6. Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList the prognostic variables of the lake scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33. Lakes --> Method\nTODO\n33.1. Ice Treatment\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs lake ice included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.ice_treatment') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.2. Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDescribe the treatment of lake albedo",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"prognostic\" \n# \"diagnostic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.3. Dynamics\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich dynamics of lakes are treated? horizontal, vertical, etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamics') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"No lake dynamics\" \n# \"vertical\" \n# \"horizontal\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.4. Dynamic Lake Extent\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs a dynamic lake extent scheme included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.dynamic_lake_extent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33.5. Endorheic Basins\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nBasins not flowing to ocean included?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.method.endorheic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"34. Lakes --> Wetlands\nTODO\n34.1. Description\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the treatment of wetlands, if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.land.lakes.wetlands.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
AEW2015/PYNQ_PR_Overlay
|
Pynq-Z1/notebooks/examples/tracebuffer_i2c.ipynb
|
bsd-3-clause
|
[
"Trace Buffer - Tracing IIC Transactions\nThe Trace_Buffer class can monitor the waveform and transations on PMODA, PMODB, and ARDUINO connectors.\nThis demo shows how to use this class to track IIC transactions. For this demo, users have to connect the Pmod TMP2 sensor to PMODA.\nStep 1: Overlay Management\nUsers have to import all the necessary classes. Make sure to use the right bitstream.",
"from pprint import pprint\nfrom time import sleep\nfrom pynq import PL\nfrom pynq import Overlay\nfrom pynq.drivers import Trace_Buffer\nfrom pynq.iop import Pmod_TMP2\nfrom pynq.iop import PMODA\nfrom pynq.iop import PMODB\nfrom pynq.iop import ARDUINO\n\nol = Overlay(\"base.bit\")\nol.download()\npprint(PL.ip_dict)",
"Step 2: Instantiating Temperature Sensor\nAlthough this demo can also be done on PMODB, we use PMODA in this demo.\nSet the log interval to be 1ms. This means the IO Processor (IOP) will read temperature values every 1ms.",
"tmp2 = Pmod_TMP2(PMODA)\ntmp2.set_log_interval_ms(1)",
"Step 3: Tracking Transactions\nInstantiating the trace buffer with IIC protocol. The sample rate is set to 1MHz. Although the IIC clock is only 100kHz, we still have to use higher sample rate to keep track of IIC control signals from IOP.\nAfter starting the trace buffer DMA, also start to issue IIC reads for 1 second. Then stop the trace buffer DMA.",
"tr_buf = Trace_Buffer(PMODA,\"i2c\",samplerate=1000000)\n\n# Start the trace buffer\ntr_buf.start()\n\n# Issue reads for 1 second\ntmp2.start_log()\nsleep(1)\ntmp2_log = tmp2.get_log()\n\n# Stop the trace buffer\ntr_buf.stop()",
"Step 4: Parsing and Decoding Transactions\nThe trace buffer object is able to parse the transactions into a *.csv file (saved into the same folder as this script). The input arguments for the parsing method is:\n * start : the starting sample number of the trace.\n * stop : the stopping sample number of the trace.\n * tri_sel: masks for tri-state selection bits.\n * tri_0: masks for pins selected when the corresponding tri_sel = 0.\n * tri_0: masks for pins selected when the corresponding tri_sel = 1.\n * mask: mask for pins selected always.\nFor PMODB, the configuration of the masks can be:\n * tri_sel=[0x40000<<32,0x80000<<32]\n * tri_0=[0x4<<32,0x8<<32]\n * tri_1=[0x400<<32,0x800<<32]\n * mask = 0x0\nThen the trace buffer object can also decode the transactions using the open-source sigrok decoders. The decoded file (*.pd) is saved into the same folder as this script.\nReference:\nhttps://sigrok.org/wiki/Main_Page",
"# Configuration for PMODA\nstart = 600\nstop = 10000\ntri_sel=[0x40000,0x80000]\ntri_0=[0x4,0x8]\ntri_1=[0x400,0x800]\nmask = 0x0\n\n# Parsing and decoding\ntr_buf.parse(\"i2c_trace.csv\",\n start,stop,mask,tri_sel,tri_0,tri_1)\ntr_buf.set_metadata(['SDA','SCL'])\ntr_buf.decode(\"i2c_trace.pd\")",
"Step 5: Displaying the Result\nThe final waveform and decoded transactions are shown using the open-source wavedrom library. The two input arguments (s0 and s1 ) indicate the starting and stopping location where the waveform is shown. \nThe valid range for s0 and s1 is: 0 < s0 < s1 < (stop-start), where start and stop are defined in the last step.\nReference:\nhttps://www.npmjs.com/package/wavedrom",
"s0 = 1\ns1 = 5000\ntr_buf.display(s0,s1)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/ai-for-time-series/notebooks/01-explore.ipynb
|
apache-2.0
|
[
"# Copyright 2021 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Overview\nIn this notebook, you will learn how to load, explore, visualize, and pre-process a time-series dataset. The output of this notebook is a processed dataset that will be used in following notebooks to build a machine learning model.\nDataset\nCTA - Ridership - Daily Boarding Totals: This dataset shows systemwide boardings for both bus and rail services provided by Chicago Transit Authority, dating back to 2001.\nObjective\nThe goal is to forecast future transit ridership in the City of Chicago, based on previous ridership.\nInstall packages and dependencies\nRestarting the kernel may be required to use new packages.",
"%pip install -U statsmodels scikit-learn --user",
"Note: To restart the Kernel, navigate to Kernel > Restart Kernel... on the Jupyter menu.\nImport libraries and define constants",
"from pandas.plotting import register_matplotlib_converters\nfrom statsmodels.graphics.tsaplots import plot_acf\nfrom statsmodels.tsa.seasonal import seasonal_decompose\nfrom statsmodels.tsa.stattools import grangercausalitytests\n\nimport matplotlib.pyplot as plt\nimport pandas as pd\nimport seaborn as sns\n\n# Enter your project and region. Then run the cell to make sure the\n# Cloud SDK uses the right project for all the commands in this notebook.\n\nPROJECT = 'your-project-name' # REPLACE WITH YOUR PROJECT NAME \nREGION = 'us-central-1' # REPLACE WITH YOUR REGION e.g. us-central1\n\n#Don't change the following command - this is to check if you have changed the project name above.\nassert PROJECT != 'your-project-name', 'Don''t forget to change the project variables!'\n\ntarget = 'total_rides' # The variable you are predicting\ntarget_description = 'Total Rides' # A description of the target variable\nfeatures = {'day_type': 'Day Type'} # Weekday = W, Saturday = A, Sunday/Holiday = U\nts_col = 'service_date' # The name of the column with the date field\n\nraw_data_file = 'https://data.cityofchicago.org/api/views/6iiy-9s97/rows.csv?accessType=DOWNLOAD'\nprocessed_file = 'cta_ridership.csv' # Which file to save the results to",
"Load data",
"# Import CSV file\n\ndf = pd.read_csv(raw_data_file, index_col=[ts_col], parse_dates=[ts_col])\n\n# Model data prior to 2020 \n\ndf = df[df.index < '2020-01-01']\n\n# Drop duplicates\n\ndf = df.drop_duplicates()\n\n# Sort by date\n\ndf = df.sort_index()",
"Explore data",
"# Print the top 5 rows\n\ndf.head()",
"TODO 1: Analyze the patterns\n\nIs ridership changing much over time?\nIs there a difference in ridership between the weekday and weekends?\nIs the mix of bus vs rail ridership changing over time?",
"# Initialize plotting\n\nregister_matplotlib_converters() # Addresses a warning\nsns.set(rc={'figure.figsize':(16,4)})\n\n# Explore total rides over time\n\nsns.lineplot(data=df, x=df.index, y=df[target]).set_title('Total Rides')\nfig = plt.show()\n\n# Explore rides by day type: Weekday (W), Saturday (A), Sunday/Holiday (U)\n\nsns.lineplot(data=df, x=df.index, y=df[target], hue=df['day_type']).set_title('Total Rides by Day Type')\nfig = plt.show()\n\n# Explore rides by transportation type\n\nsns.lineplot(data=df[['bus','rail_boardings']]).set_title('Total Rides by Transportation Type')\nfig = plt.show()",
"TODO 2: Review summary statistics\n\nHow many records are in the dataset?\nWhat is the average # of riders per day?",
"df[target].describe().apply(lambda x: round(x))",
"TODO 3: Explore seasonality\n\nIs there much difference between months?\nCan you extract the trend and seasonal pattern from the data?",
"# Show the distribution of values for each day of the week in a boxplot:\n# Min, 25th percentile, median, 75th percentile, max \n\ndaysofweek = df.index.to_series().dt.dayofweek\n\nfig = sns.boxplot(x=daysofweek, y=df[target])\n\n# Show the distribution of values for each month in a boxplot:\n\nmonths = df.index.to_series().dt.month\n\nfig = sns.boxplot(x=months, y=df[target])\n\n# Decompose the data into trend and seasonal components\n\nresult = seasonal_decompose(df[target], period=365)\nfig = result.plot()",
"Auto-correlation\nNext, we will create an auto-correlation plot, to show how correlated a time-series is with itself. Each point on the x-axis indicates the correlation at a given lag. The shaded area indicates the confidence interval.\nNote that the correlation gradually decreases over time, but reflects weekly seasonality (e.g. t-7 and t-14 stand out).",
"plot_acf(df[target])\n\nfig = plt.show()",
"Export data\nThis will generate a CSV file, which you will use in the next labs of this quest.\nInspect the CSV file to see what the data looks like.",
"df[[target]].to_csv(processed_file, index=True, index_label=ts_col)",
"Conclusion\nYou've successfully completed the exploration and visualization lab.\nYou've learned how to:\n* Create a query that groups data into a time series\n* Visualize data\n* Decompose time series into trend and seasonal components"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
aschaffn/phys202-2015-work
|
assignments/assignment06/InteractEx05.ipynb
|
mit
|
[
"Interact Exercise 5\nImports\nPut the standard imports for Matplotlib, Numpy and the IPython widgets in the following cell.",
"import numpy as np\nfrom matplotlib import pyplot as plt\nfrom IPython.display import display, SVG\n",
"Interact with SVG display\nSVG is a simple way of drawing vector graphics in the browser. Here is a simple example of how SVG can be used to draw a circle in the Notebook:",
"s = \"\"\"\n<svg width=\"100\" height=\"100\">\n <circle cx=\"50\" cy=\"50\" r=\"20\" fill=\"aquamarine\" />\n</svg>\n\"\"\"\n\nSVG(s)",
"Write a function named draw_circle that draws a circle using SVG. Your function should take the parameters of the circle as function arguments and have defaults as shown. You will have to write the raw SVG code as a Python string and then use the IPython.display.SVG object and IPython.display.display function.",
"def draw_circle(width=100, height=100, cx=25, cy=25, r=5, fill='red'):\n \"\"\"Draw an SVG circle.\n \n Parameters\n ----------\n width : int\n The width of the svg drawing area in px.\n height : int\n The height of the svg drawing area in px.\n cx : int\n The x position of the center of the circle in px.\n cy : int\n The y position of the center of the circle in px.\n r : int\n The radius of the circle in px.\n fill : str\n The fill color of the circle.\n \"\"\"\n \n s1 = \"<svg width=\\\"\" + str(width) + \"\\\" height= \\\"\" + str(height) + \"\\\">\"\n s2 = \"<circle cx=\\\"\" + str(cx) + \"\\\" cy= \\\"\" + str(cy) + \"\\\" r=\\\"\" + str(r) + \"\\\" fill=\\\"\" + fill + \"\\\"\" + \"/></svg>\"\n s = s1 + s2\n display(SVG(s))\n\ndraw_circle(cx=10, cy=10, r=10, fill='blue')\n\nassert True # leave this to grade the draw_circle function",
"Use interactive to build a user interface for exploing the draw_circle function:\n\nwidth: a fixed value of 300px\nheight: a fixed value of 300px\ncx/cy: a slider in the range [0,300]\nr: a slider in the range [0,50]\nfill: a text area in which you can type a color's name\n\nSave the return value of interactive to a variable named w.",
"from IPython.html.widgets import interact, interactive, fixed\n\n\nw = interactive(draw_circle, width=fixed(300), height=fixed(300), cx = (0,300), cy=(0,300), r = (0,50), fill=\"red\" )\n\nc = w.children\nassert c[0].min==0 and c[0].max==300\nassert c[1].min==0 and c[1].max==300\nassert c[2].min==0 and c[2].max==50\nassert c[3].value=='red'",
"Use the display function to show the widgets created by interactive:",
"display(w)\n\n\nassert True # leave this to grade the display of the widget",
"Play with the sliders to change the circles parameters interactively."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
CodyKochmann/battle_tested
|
tutorials/.ipynb_checkpoints/as_a_feeler-checkpoint.ipynb
|
mit
|
[
"# this is just to silence \n%xmode plain",
"Using battle_tested to feel out new libraries.\nbattle_tested doesn't necessisarily need to be used as a fuzzer. I like to use its testing \nfuncionality to literally \"feel out\" a library that is recommended to me so I know what works\nand what will cause issues.\nHere is how I used battle_tested to \"feel out\" sqlitedict so when I'm using it, there aren't \nany surprises.\nFirst, lets import SqliteDict and make a harness that will allow us to test what can be assigned and what will cause random explosions to happen.",
"from sqlitedict import SqliteDict\n\ndef harness(key, value):\n \"\"\" this tests what can be assigned in SqliteDict's keys and values \"\"\"\n mydict = SqliteDict(\":memory:\")\n mydict[key] = value",
"Now, we import the tools we need from battle_tested and fuzz it.",
"from battle_tested import fuzz, success_map, crash_map\n\nfuzz(harness, keep_testing=True) # keep testing allows us to collect \"all\" crashes",
"Now we can call success_map() and crash_map() to start to get a feel for what is accepted and what isn't.",
"crash_map()\n\nsuccess_map()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zomansud/coursera
|
ml-foundations/week-2/Assignment - Week 2.ipynb
|
mit
|
[
"Load GrahpLab Create",
"import graphlab",
"Basic settings",
"#limit number of worker processes to 4\ngraphlab.set_runtime_config('GRAPHLAB_DEFAULT_NUM_PYLAMBDA_WORKERS', 4)\n\n#set canvas to open inline\ngraphlab.canvas.set_target('ipynb')",
"Load House Sales Data",
"sales = graphlab.SFrame('home_data.gl/')",
"Assignment begins\n1. Selection and summary statistics\nIn the notebook we covered in the module, we discovered which neighborhood (zip code) of Seattle had the highest average house sale price. Now, take the sales data, select only the houses with this zip code, and compute the average price. Save this result to answer the quiz at the end.",
"highest_avg_price_zipcode = '98039'\n\nsales_zipcode = sales[sales['zipcode'] == highest_avg_price_zipcode]\n\navg_price_highest_zipcode = sales_zipcode['price'].mean()\n\nprint avg_price_highest_zipcode",
"2. Filtering data\nUsing logical filters, first select the houses that have ‘sqft_living’ higher than 2000 sqft but no larger than 4000 sqft.\nWhat fraction of the all houses have ‘sqft_living’ in this range? Save this result to answer the quiz at the end.\nTotal number of houses",
"total_houses = sales.num_rows()\n\nprint total_houses",
"Houses with the above criteria",
"filtered_houses = sales[(sales['sqft_living'] > 2000) & (sales['sqft_living'] <= 4000)]\n\nprint filtered_houses.num_rows()\n\nfiltered_houses = sales[sales.apply(lambda x: (x['sqft_living'] > 2000) & (x['sqft_living'] <= 4000))]\n\nprint filtered_houses.num_rows()\n\ntotal_filtered_houses = filtered_houses.num_rows()\n\nprint total_filtered_houses",
"Fraction of Houses",
"filtered_houses_fraction = total_filtered_houses / float(total_houses)\n\nprint filtered_houses_fraction",
"3. Building a regression model with several more features\nBuild the feature set",
"advanced_features = [\n'bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode',\n'condition', # condition of house\t\t\t\t\n'grade', # measure of quality of construction\t\t\t\t\n'waterfront', # waterfront property\t\t\t\t\n'view', # type of view\t\t\t\t\n'sqft_above', # square feet above ground\t\t\t\t\n'sqft_basement', # square feet in basement\t\t\t\t\n'yr_built', # the year built\t\t\t\t\n'yr_renovated', # the year renovated\t\t\t\t\n'lat', 'long', # the lat-long of the parcel\t\t\t\t\n'sqft_living15', # average sq.ft. of 15 nearest neighbors \t\t\t\t\n'sqft_lot15', # average lot size of 15 nearest neighbors \n]\n\nprint advanced_features\n\nmy_features = ['bedrooms', 'bathrooms', 'sqft_living', 'sqft_lot', 'floors', 'zipcode']",
"Create train and test data",
"train_data, test_data = sales.random_split(.8, seed=0)",
"Compute the RMSE\nRMSE(root mean squared error) on the test_data for the model using just my_features, and for the one using advanced_features.",
"my_feature_model = graphlab.linear_regression.create(train_data, target='price', features=my_features, validation_set=None)\n\nprint my_feature_model.evaluate(test_data)\n\nprint test_data['price'].mean()\n\nadvanced_feature_model = graphlab.linear_regression.create(train_data, target='price', features=advanced_features, validation_set=None)\n\nprint advanced_feature_model.evaluate(test_data)",
"Difference in RMSE\nWhat is the difference in RMSE between the model trained with my_features and the one trained with advanced_features? Save this result to answer the quiz at the end.",
"print my_feature_model.evaluate(test_data)['rmse'] - advanced_feature_model.evaluate(test_data)['rmse']",
"That's all folks!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
udacity/deep-learning
|
intro-to-tflearn/TFLearn_Sentiment_Analysis.ipynb
|
mit
|
[
"Sentiment analysis with TFLearn\nIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using TFLearn, a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.\nWe'll start off by importing all the modules we'll need, then load and prepare the data.",
"import pandas as pd\nimport numpy as np\nimport tensorflow as tf\nimport tflearn\nfrom tflearn.data_utils import to_categorical",
"Preparing the data\nFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this.\nRead the data\nUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like The, the, and THE, all the same way.",
"reviews = pd.read_csv('reviews.txt', header=None)\nlabels = pd.read_csv('labels.txt', header=None)",
"Counting word frequency\nTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a bag of words. We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the Counter class.\n\nExercise: Create the bag of words from the reviews data and assign it to total_counts. The reviews are stores in the reviews Pandas DataFrame. If you want the reviews as a Numpy array, use reviews.values. You can iterate through the rows in the DataFrame with for idx, row in reviews.iterrows(): (documentation). When you break up the reviews into words, use .split(' ') instead of .split() so your results match ours.",
"from collections import Counter\n\ntotal_counts = # bag of words here\n\nprint(\"Total words in data set: \", len(total_counts))",
"Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort vocab by the count value and keep the 10000 most frequent words.",
"vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]\nprint(vocab[:60])",
"What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.",
"print(vocab[-1], ': ', total_counts[vocab[-1]])",
"The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.\nNote: When you run, you may see a different word from the one shown above, but it will also have the value 30. That's because there are many words tied for that number of counts, and the Counter class does not guarantee which one will be returned in the case of a tie.\nNow for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.\n\nExercise: Create a dictionary called word2idx that maps each word in the vocabulary to an index. The first word in vocab has index 0, the second word has index 1, and so on.",
"word2idx = ## create the word-to-index dictionary here",
"Text to vector function\nNow we can write a function that converts a some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:\n\nInitialize the word vector with np.zeros, it should be the length of the vocabulary.\nSplit the input string of text into a list of words with .split(' '). Again, if you call .split() instead, you'll get slightly different results than what we show here.\nFor each word in that list, increment the element in the index associated with that word, which you get from word2idx.\n\nNote: Since all words aren't in the vocab dictionary, you'll get a key error if you run into one of those words. You can use the .get method of the word2idx dictionary to specify a default returned value when you make a key error. For example, word2idx.get(word, None) returns None if word doesn't exist in the dictionary.",
"def text_to_vector(text):\n \n pass",
"If you do this right, the following code should return\n```\ntext_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]\narray([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0,\n 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0,\n 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])\n```",
"text_to_vector('The tea is for a party to celebrate '\n 'the movie so she has no time for a cake')[:65]",
"Now, run through our entire review data set and convert each review to a word vector.",
"word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)\nfor ii, (_, text) in enumerate(reviews.iterrows()):\n word_vectors[ii] = text_to_vector(text[0])\n\n# Printing out the first 5 word vectors\nword_vectors[:5, :23]",
"Train, Validation, Test sets\nNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function to_categorical from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.",
"Y = (labels=='positive').astype(np.int_)\nrecords = len(labels)\n\nshuffle = np.arange(records)\nnp.random.shuffle(shuffle)\ntest_fraction = 0.9\n\ntrain_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]\ntrainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split, 0], 2)\ntestX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split, 0], 2)\n\ntrainY",
"Building the network\nTFLearn lets you build the network by defining the layers. \nInput layer\nFor the input layer, you just need to tell it how many units you have. For example, \nnet = tflearn.input_data([None, 100])\nwould create a network with 100 input units. The first element in the list, None in this case, sets the batch size. Setting it to None here leaves it at the default batch size.\nThe number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units.\nAdding layers\nTo add new hidden layers, you use \nnet = tflearn.fully_connected(net, n_units, activation='ReLU')\nThis adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument net is the network you created in the tflearn.input_data call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with n_units, and set the activation function with the activation keyword. You can keep adding layers to your network by repeated calling net = tflearn.fully_connected(net, n_units).\nOutput layer\nThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.\nnet = tflearn.fully_connected(net, 2, activation='softmax')\nTraining\nTo set how you train the network, use \nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nAgain, this is passing in the network you've been building. The keywords: \n\noptimizer sets the training method, here stochastic gradient descent\nlearning_rate is the learning rate\nloss determines how the network error is calculated. In this example, with the categorical cross-entropy.\n\nFinally you put all this together to create the model with tflearn.DNN(net). So it ends up looking something like \nnet = tflearn.input_data([None, 10]) # Input\nnet = tflearn.fully_connected(net, 5, activation='ReLU') # Hidden\nnet = tflearn.fully_connected(net, 2, activation='softmax') # Output\nnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')\nmodel = tflearn.DNN(net)\n\nExercise: Below in the build_model() function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.",
"# Network building\ndef build_model():\n # This resets all parameters and variables, leave this here\n tf.reset_default_graph()\n \n #### Your code ####\n \n model = tflearn.DNN(net)\n return model",
"Intializing the model\nNext we need to call the build_model() function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.\n\nNote: You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.",
"model = build_model()",
"Training the network\nNow that we've constructed the network, saved as the variable model, we can fit it to the data. Here we use the model.fit method. You pass in the training features trainX and the training targets trainY. Below I set validation_set=0.1 which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the batch_size and n_epoch keywords, respectively. Below is the code to fit our the network to our word vectors.\nYou can rerun model.fit to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. Only use the test set after you're completely done training the network.",
"# Training\nmodel.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)",
"Testing\nAfter you're satisified with your hyperparameters, you can run the network on the test set to measure its performance. Remember, only do this after finalizing the hyperparameters.",
"predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)\ntest_accuracy = np.mean(predictions == testY[:,0], axis=0)\nprint(\"Test accuracy: \", test_accuracy)",
"Try out your own text!",
"# Helper function that uses your model to predict sentiment\ndef test_sentence(sentence):\n positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]\n print('Sentence: {}'.format(sentence))\n print('P(positive) = {:.3f} :'.format(positive_prob), \n 'Positive' if positive_prob > 0.5 else 'Negative')\n\nsentence = \"Moonlight is by far the best movie of 2016.\"\ntest_sentence(sentence)\n\nsentence = \"It's amazing anyone could be talented enough to make something this spectacularly awful\"\ntest_sentence(sentence)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sz2472/foundations-homework
|
data and database/Homework_4_database_shengyingzhao.ipynb
|
mit
|
[
"Homework #4\nThese problem sets focus on list comprehensions, string operations and regular expressions.\nProblem set #1: List slices and list comprehensions\nLet's start with some data. The following cell contains a string with comma-separated integers, assigned to a variable called numbers_str:",
"numbers_str = '496,258,332,550,506,699,7,985,171,581,436,804,736,528,65,855,68,279,721,120'",
"In the following cell, complete the code with an expression that evaluates to a list of integers derived from the raw numbers in numbers_str, assigning the value of this expression to a variable numbers. If you do everything correctly, executing the cell should produce the output 985 (not '985').",
"raw_numbers = numbers_str.split(\",\")\nnumbers_list=[int(x) for x in raw_numbers]\nmax(numbers_list)",
"Great! We'll be using the numbers list you created above in the next few problems.\nIn the cell below, fill in the square brackets so that the expression evaluates to a list of the ten largest values in numbers. Expected output:\n[506, 528, 550, 581, 699, 721, 736, 804, 855, 985]\n\n(Hint: use a slice.)",
"sorted(numbers_list)[-10:]",
"In the cell below, write an expression that evaluates to a list of the integers from numbers that are evenly divisible by three, sorted in numerical order. Expected output:\n[120, 171, 258, 279, 528, 699, 804, 855]",
"sorted(x for x in numbers_list if x%3==0)",
"Okay. You're doing great. Now, in the cell below, write an expression that evaluates to a list of the square roots of all the integers in numbers that are less than 100. In order to do this, you'll need to use the sqrt function from the math module, which I've already imported for you. Expected output:\n[2.6457513110645907, 8.06225774829855, 8.246211251235321]\n\n(These outputs might vary slightly depending on your platform.)",
"from math import sqrt\n\n[sqrt(x) for x in numbers_list if x < 100]",
"Problem set #2: Still more list comprehensions\nStill looking good. Let's do a few more with some different data. In the cell below, I've defined a data structure and assigned it to a variable planets. It's a list of dictionaries, with each dictionary describing the characteristics of a planet in the solar system. Make sure to run the cell before you proceed.",
"planets = [\n {'diameter': 0.382,\n 'mass': 0.06,\n 'moons': 0,\n 'name': 'Mercury',\n 'orbital_period': 0.24,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.949,\n 'mass': 0.82,\n 'moons': 0,\n 'name': 'Venus',\n 'orbital_period': 0.62,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 1.00,\n 'mass': 1.00,\n 'moons': 1,\n 'name': 'Earth',\n 'orbital_period': 1.00,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 0.532,\n 'mass': 0.11,\n 'moons': 2,\n 'name': 'Mars',\n 'orbital_period': 1.88,\n 'rings': 'no',\n 'type': 'terrestrial'},\n {'diameter': 11.209,\n 'mass': 317.8,\n 'moons': 67,\n 'name': 'Jupiter',\n 'orbital_period': 11.86,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 9.449,\n 'mass': 95.2,\n 'moons': 62,\n 'name': 'Saturn',\n 'orbital_period': 29.46,\n 'rings': 'yes',\n 'type': 'gas giant'},\n {'diameter': 4.007,\n 'mass': 14.6,\n 'moons': 27,\n 'name': 'Uranus',\n 'orbital_period': 84.01,\n 'rings': 'yes',\n 'type': 'ice giant'},\n {'diameter': 3.883,\n 'mass': 17.2,\n 'moons': 14,\n 'name': 'Neptune',\n 'orbital_period': 164.8,\n 'rings': 'yes',\n 'type': 'ice giant'}]",
"Now, in the cell below, write a list comprehension that evaluates to a list of names of the planets that have a radius greater than four earth radii. Expected output:\n['Jupiter', 'Saturn', 'Uranus']",
"[x['name'] for x in planets if x['diameter']>4]",
"In the cell below, write a single expression that evaluates to the sum of the mass of all planets in the solar system. Expected output: 446.79",
"sum(x['mass'] for x in planets)",
"Good work. Last one with the planets. Write an expression that evaluates to the names of the planets that have the word giant anywhere in the value for their type key. Expected output:\n['Jupiter', 'Saturn', 'Uranus', 'Neptune']",
"[x['name'] for x in planets if 'giant' in x['type']]",
"EXTREME BONUS ROUND: Write an expression below that evaluates to a list of the names of the planets in ascending order by their number of moons. (The easiest way to do this involves using the key parameter of the sorted function, which we haven't yet discussed in class! That's why this is an EXTREME BONUS question.) Expected output:\n['Mercury', 'Venus', 'Earth', 'Mars', 'Neptune', 'Uranus', 'Saturn', 'Jupiter']",
"[x['name'] for x in sorted(planets, key=lambda x:x['moons'])] #can't sort a dictionary, sort the dictionary by the number of moons\n\ndef get_moon_count(d):\n return d['moons']\n[x['name'] for x in sorted(planets, key=get_moon_count)]\n\n#sort the dictionary by reverse order of the diameter:\n[x['name'] for x in sorted(planets, key=lambda d:d['diameter'],reverse=True)]\n\n[x['name'] for x in \\\nsorted(planets, key=lambda d:d['diameter'], reverse=True) \\\n if x['diameter'] >4]",
"Problem set #3: Regular expressions\nIn the following section, we're going to do a bit of digital humanities. (I guess this could also be journalism if you were... writing an investigative piece about... early 20th century American poetry?) We'll be working with the following text, Robert Frost's The Road Not Taken. Make sure to run the following cell before you proceed.",
"import re\npoem_lines = ['Two roads diverged in a yellow wood,',\n 'And sorry I could not travel both',\n 'And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'To where it bent in the undergrowth;',\n '',\n 'Then took the other, as just as fair,',\n 'And having perhaps the better claim,',\n 'Because it was grassy and wanted wear;',\n 'Though as for that the passing there',\n 'Had worn them really about the same,',\n '',\n 'And both that morning equally lay',\n 'In leaves no step had trodden black.',\n 'Oh, I kept the first for another day!',\n 'Yet knowing how way leads on to way,',\n 'I doubted if I should ever come back.',\n '',\n 'I shall be telling this with a sigh',\n 'Somewhere ages and ages hence:',\n 'Two roads diverged in a wood, and I---',\n 'I took the one less travelled by,',\n 'And that has made all the difference.']",
"In the cell above, I defined a variable poem_lines which has a list of lines in the poem, and imported the re library.\nIn the cell below, write a list comprehension (using re.search()) that evaluates to a list of lines that contain two words next to each other (separated by a space) that have exactly four characters. (Hint: use the \\b anchor. Don't overthink the \"two words in a row\" requirement.)\nExpected result:\n['Then took the other, as just as fair,',\n 'Had worn them really about the same,',\n 'And both that morning equally lay',\n 'I doubted if I should ever come back.',\n 'I shall be telling this with a sigh']",
"[line for line in poem_lines if re.search(r\"\\b\\w{4}\\s\\w{4}\\b\",line)]",
"Good! Now, in the following cell, write a list comprehension that evaluates to a list of lines in the poem that end with a five-letter word, regardless of whether or not there is punctuation following the word at the end of the line. (Hint: Try using the ? quantifier. Is there an existing character class, or a way to write a character class, that matches non-alphanumeric characters?) Expected output:\n['And be one traveler, long I stood',\n 'And looked down one as far as I could',\n 'And having perhaps the better claim,',\n 'Though as for that the passing there',\n 'In leaves no step had trodden black.',\n 'Somewhere ages and ages hence:']",
"[line for line in poem_lines if re.search(r\"\\b\\w{5}\\b.?$\",line)]",
"Okay, now a slightly trickier one. In the cell below, I've created a string all_lines which evaluates to the entire text of the poem in one string. Execute this cell.",
"all_lines = \" \".join(poem_lines)\nall_lines",
"Now, write an expression that evaluates to all of the words in the poem that follow the word 'I'. (The strings in the resulting list should not include the I.) Hint: Use re.findall() and grouping! Expected output:\n['could', 'stood', 'could', 'kept', 'doubted', 'should', 'shall', 'took']",
"match = re.findall(r\"I \\w+\", all_lines) #():only find things after 'I', re.search() returns object in regular expression, \n #while re.findall() return a list\n\nmatch = re.findall(r\"I (\\w+)\", all_lines)\nmatch",
"Finally, something super tricky. Here's a list of strings that contains a restaurant menu. Your job is to wrangle this plain text, slightly-structured data into a list of dictionaries.",
"entrees = [\n \"Yam, Rosemary and Chicken Bowl with Hot Sauce $10.95\",\n \"Lavender and Pepperoni Sandwich $8.49\",\n \"Water Chestnuts and Peas Power Lunch (with mayonnaise) $12.95 - v\",\n \"Artichoke, Mustard Green and Arugula with Sesame Oil over noodles $9.95 - v\",\n \"Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce $19.95\",\n \"Rutabaga And Cucumber Wrap $8.49 - v\"\n]",
"You'll need to pull out the name of the dish and the price of the dish. The v after the hyphen indicates that the dish is vegetarian---you'll need to include that information in your dictionary as well. I've included the basic framework; you just need to fill in the contents of the for loop.\nExpected output:\n[{'name': 'Yam, Rosemary and Chicken Bowl with Hot Sauce ',\n 'price': 10.95,\n 'vegetarian': False},\n {'name': 'Lavender and Pepperoni Sandwich ',\n 'price': 8.49,\n 'vegetarian': False},\n {'name': 'Water Chestnuts and Peas Power Lunch (with mayonnaise) ',\n 'price': 12.95,\n 'vegetarian': True},\n {'name': 'Artichoke, Mustard Green and Arugula with Sesame Oil over noodles ',\n 'price': 9.95,\n 'vegetarian': True},\n {'name': 'Flank Steak with Lentils And Tabasco Pepper With Sweet Chilli Sauce ',\n 'price': 19.95,\n 'vegetarian': False},\n {'name': 'Rutabaga And Cucumber Wrap ', 'price': 8.49, 'vegetarian': True}]",
"menu = []\nfor item in entrees:\n pass # replace 'pass' with your code\nmenu",
"Great work! You are done. Go cavort in the sun, or whatever it is you students do when you're done with your homework"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
tpin3694/tpin3694.github.io
|
machine-learning/cropping_images.ipynb
|
mit
|
[
"Title: Cropping Images\nSlug: cropping_images\nSummary: How to cropping images using OpenCV in Python. \nDate: 2017-09-11 12:00\nCategory: Machine Learning\nTags: Preprocessing Images \nAuthors: Chris Albon\nPreliminaries",
"# Load image\nimport cv2\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Load Image As Greyscale",
"# Load image as grayscale\nimage = cv2.imread('images/plane_256x256.jpg', cv2.IMREAD_GRAYSCALE)",
"Crop Image",
"# Select first half of the columns and all rows\nimage_cropped = image[:,:126]",
"View Image",
"# View image\nplt.imshow(image_cropped, cmap='gray'), plt.axis(\"off\")\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
fastai/fastai
|
dev_nbs/course/rossman_data_clean.ipynb
|
apache-2.0
|
[
"Rossman data preparation\nTo illustrate the techniques we need to apply before feeding all the data to a Deep Learning model, we are going to take the example of the Rossmann sales Kaggle competition. Given a wide range of information about a store, we are going to try predict their sale number on a given day. This is very useful to be able to manage stock properly and be able to properly satisfy the demand without wasting anything. The official training set was giving a lot of informations about various stores in Germany, but it was also allowed to use additional data, as long as it was made public and available to all participants.\nWe are going to reproduce most of the steps of one of the winning teams that they highlighted in Entity Embeddings of Categorical Variables. In addition to the official data, teams in the top of the leaderboard also used information about the weather, the states of the stores or the Google trends of those days. We have assembled all that additional data in one file available for download here if you want to replicate those steps.\nA first look at the data\nFirst things first, let's import everything we will need.",
"from fastai.tabular.all import *",
"If you have download the previous file and decompressed it in a folder named rossmann in the fastai data folder, you should see the following list of files with this instruction:",
"path = Config().data/'rossmann'\npath.ls()",
"The data that comes from Kaggle is in 'train.csv', 'test.csv', 'store.csv' and 'sample_submission.csv'. The other files are the additional data we were talking about. Let's start by loading everything using pandas.",
"table_names = ['train', 'store', 'store_states', 'state_names', 'googletrend', 'weather', 'test']\ntables = [pd.read_csv(path/f'{fname}.csv', low_memory=False) for fname in table_names]\ntrain, store, store_states, state_names, googletrend, weather, test = tables",
"To get an idea of the amount of data available, let's just look at the length of the training and test tables.",
"len(train), len(test)",
"So we have more than one million records available. Let's have a look at what's inside:",
"train.head()",
"The Store column contains the id of the stores, then we are given the id of the day of the week, the exact date, if the store was open on that day, if there were any promotion in that store during that day, and if it was a state or school holiday. The Customers column is given as an indication, and the Sales column is what we will try to predict.\nIf we look at the test table, we have the same columns, minus Sales and Customers, and it looks like we will have to predict on dates that are after the ones of the train table.",
"test.head()",
"The other table given by Kaggle contains some information specific to the stores: their type, what the competition looks like, if they are engaged in a permanent promotion program, and if so since then.",
"store.head().T",
"Now let's have a quick look at our four additional dataframes. store_states just gives us the abbreviated name of the sate of each store.",
"store_states.head()",
"We can match them to their real names with state_names.",
"state_names.head()",
"Which is going to be necessary if we want to use the weather table:",
"weather.head().T",
"Lastly the googletrend table gives us the trend of the brand in each state and in the whole of Germany.",
"googletrend.head()",
"Before we apply the fastai preprocessing, we will need to join the store table and the additional ones with our training and test table. Then, as we saw in our first example in chapter 1, we will need to split our variables between categorical and continuous. Before we do that, though, there is one type of variable that is a bit different from the others: dates.\nWe could turn each particular day in a category but there are cyclical information in dates we would miss if we did that. We already have the day of the week in our tables, but maybe the day of the month also bears some significance. People might be more inclined to go shopping at the beginning or the end of the month. The number of the week/month is also important to detect seasonal influences.\nThen we will try to exctract meaningful information from those dates. For instance promotions on their own are important inputs, but maybe the number of running weeks with promotion is another useful information as it will influence customers. A state holiday in itself is important, but it's more significant to know if we are the day before or after such a holiday as it will impact sales. All of those might seem very specific to this dataset, but you can actually apply them in any tabular data containing time information.\nThis first step is called feature-engineering and is extremely important: your model will try to extract useful information from your data but any extra help you can give it in advance is going to make training easier, and the final result better. In Kaggle Competitions using tabular data, it's often the way people prepared their data that makes the difference in the final leaderboard, not the exact model used.\nFeature Engineering\nMerging tables\nTo merge tables together, we will use this little helper function that relies on the pandas library. It will merge the tables left and right by looking at the column(s) which names are in left_on and right_on: the information in right will be added to the rows of the tables in left when the data in left_on inside left is the same as the data in right_on inside right. If left_on and right_on are the same, we don't have to pass right_on. We keep the fields in right that have the same names as fields in left and add a _y suffix (by default) to those field names.",
"def join_df(left, right, left_on, right_on=None, suffix='_y'):\n if right_on is None: right_on = left_on\n return left.merge(right, how='left', left_on=left_on, right_on=right_on, \n suffixes=(\"\", suffix))",
"First, let's replace the state names in the weather table by the abbreviations, since that's what is used in the other tables.",
"weather = join_df(weather, state_names, \"file\", \"StateName\")\nweather[['file', 'Date', 'State', 'StateName']].head()",
"To double-check the merge happened without incident, we can check that every row has a State with this line:",
"len(weather[weather.State.isnull()])",
"We can now safely remove the columns with the state names (file and StateName) since they we'll use the short codes.",
"weather.drop(columns=['file', 'StateName'], inplace=True)",
"To add the weather informations to our store table, we first use the table store_states to match a store code with the corresponding state, then we merge with our weather table.",
"store = join_df(store, store_states, 'Store')\nstore = join_df(store, weather, 'State')",
"And again, we can check if the merge went well by looking if new NaNs where introduced.",
"len(store[store.Mean_TemperatureC.isnull()])",
"Next, we want to join the googletrend table to this store table. If you remember from our previous look at it, it's not exactly in the same format:",
"googletrend.head()",
"We will need to change the column with the states and the columns with the dates:\n- in the column fil, the state names contain Rossmann_DE_XX with XX being the code of the state, so we want to remove Rossmann_DE. We will do this by creating a new column containing the last part of a split of the string by '_'.\n- in the column week, we will extract the date corresponding to the beginning of the week in a new column by taking the last part of a split on ' - '.\nIn pandas, creating a new column is very easy: you just have to define them.",
"googletrend['Date'] = googletrend.week.str.split(' - ', expand=True)[0]\ngoogletrend['State'] = googletrend.file.str.split('_', expand=True)[2]\ngoogletrend.head()",
"Let's check everything went well by looking at the values in the new State column of our googletrend table.",
"store['State'].unique(),googletrend['State'].unique()",
"We have two additional values in the second (None and 'SL') but this isn't a problem since they'll be ignored when we join. One problem however is that 'HB,NI' in the first table is named 'NI' in the second one, so we need to change that.",
"googletrend.loc[googletrend.State=='NI', \"State\"] = 'HB,NI'",
"Why do we have a None in state? As we said before, there is a global trend for Germany that corresponds to Rosmann_DE in the field file. For those, the previous split failed which gave the None value. We will keep this global trend and put it in a new column.",
"trend_de = googletrend[googletrend.file == 'Rossmann_DE'][['Date', 'trend']]",
"Then we can merge it with the rest of our trends, by adding the suffix '_DE' to know it's the general trend.",
"googletrend = join_df(googletrend, trend_de, 'Date', suffix='_DE')",
"Then at this stage, we can remove the columns file and weeksince they won't be useful anymore, as well as the rows where State is None (since they correspond to the global trend that we saved in another column).",
"googletrend.drop(columns=['file', 'week'], axis=1, inplace=True)\ngoogletrend = googletrend[~googletrend['State'].isnull()]",
"The last thing missing to be able to join this with or store table is to extract the week from the date in this table and in the store table: we need to join them on week values since each trend is given for the full week that starts on the indicated date. This is linked to the next topic in feature engineering: extracting dateparts.\nAdding dateparts\nIf your table contains dates, you will need to split the information there in several column for your Deep Learning model to be able to train properly. There is the basic stuff, such as the day number, week number, month number or year number, but anything that can be relevant to your problem is also useful. Is it the beginning or the end of the month? Is it a holiday?\nTo help with this, the fastai library as a convenience function called add_datepart. It will take a dataframe and a column you indicate, try to read it as a date, then add all those new columns. If we go back to our googletrend table, we now have gour columns.",
"googletrend.head()",
"If we add the dateparts, we will gain a lot more",
"googletrend = add_datepart(googletrend, 'Date', drop=False)\n\ngoogletrend.head().T",
"We chose the option drop=False as we want to keep the Date column for now. Another option is to add the time part of the date, but it's not relevant to our problem here. \nNow we can join our Google trends with the information in the store table, it's just a join on ['Week', 'Year'] once we apply add_datepart to that table. Note that we only keep the initial columns of googletrend with Week and Year to avoid all the duplicates.",
"googletrend = googletrend[['trend', 'State', 'trend_DE', 'Week', 'Year']]\nstore = add_datepart(store, 'Date', drop=False)\nstore = join_df(store, googletrend, ['Week', 'Year', 'State'])",
"At this stage, store contains all the information about the stores, the weather on that day and the Google trends applicable. We only have to join it with our training and test table. We have to use make_date before being able to execute that merge, to convert the Date column of train and test to proper date format.",
"make_date(train, 'Date')\nmake_date(test, 'Date')\ntrain_fe = join_df(train, store, ['Store', 'Date'])\ntest_fe = join_df(test, store, ['Store', 'Date'])",
"Elapsed times\nAnother feature that can be useful is the elapsed time before/after a certain event occurs. For instance the number of days since the last promotion or before the next school holiday. Like for the date parts, there is a fastai convenience function that will automatically add them.\nOne thing to take into account here is that you will need to use that function on the whole time series you have, even the test data: there might be a school holiday that takes place during the training data and it's going to impact those new features in the test data.",
"all_ftrs = train_fe.append(test_fe, sort=False)",
"We will consider the elapsed times for three events: 'Promo', 'StateHoliday' and 'SchoolHoliday'. Note that those must correspondon to booleans in your dataframe. 'Promo' and 'SchoolHoliday' already are (only 0s and 1s) but 'StateHoliday' has multiple values.",
"all_ftrs['StateHoliday'].unique()",
"If we refer to the explanation on Kaggle, 'b' is for Easter, 'c' for Christmas and 'a' for the other holidays. We will just converts this into a boolean that flags any holiday.",
"all_ftrs.StateHoliday = all_ftrs.StateHoliday!='0'",
"Now we can add, for each store, the number of days since or until the next promotion, state or school holiday. This will take a little while since the whole table is big.",
"all_ftrs = add_elapsed_times(all_ftrs, ['Promo', 'StateHoliday', 'SchoolHoliday'], \n date_field='Date', base_field='Store')",
"It added a four new features. If we look at 'StateHoliday' for instance:",
"[c for c in all_ftrs.columns if 'StateHoliday' in c]",
"The column 'AfterStateHoliday' contains the number of days since the last state holiday, 'BeforeStateHoliday' the number of days until the next one. As for 'StateHoliday_bw' and 'StateHoliday_fw', they contain the number of state holidays in the past or future seven days respectively. The same four columns have been added for 'Promo' and 'SchoolHoliday'.\nNow that we have added those features, we can split again our tables between the training and the test one.",
"train_df = all_ftrs.iloc[:len(train_fe)]\ntest_df = all_ftrs.iloc[len(train_fe):]",
"One last thing the authors of this winning solution did was to remove the rows with no sales, which correspond to exceptional closures of the stores. This might not have been a good idea since even if we don't have access to the same features in the test data, it can explain why we have some spikes in the training data.",
"train_df = train_df[train_df.Sales != 0.]",
"We will use those for training but since all those steps took a bit of time, it's a good idea to save our progress until now. We will just pickle those tables on the hard drive.",
"train_df.to_pickle(path/'train_clean')\ntest_df.to_pickle(path/'test_clean')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
rsignell-usgs/notebook
|
NEXRAD/THREDDS_Radar_Server.ipynb
|
mit
|
[
"Using Python to Access NCEI Archived NEXRAD Level 2 Data\nThis notebook shows how to access the THREDDS Data Server (TDS) instance that is serving up archived NEXRAD Level 2 data hosted on Amazon S3. The TDS provides a mechanism to query for available data files, as well as provides access to the data as native volume files, through OPeNDAP, and using its own CDMRemote protocol. Since we're using Python, we can take advantage of Unidata's Siphon package, which provides an easy API for talking to THREDDS servers.\nNOTE: Due to data charges, the TDS instance in AWS only allows access to .edu domains. For other users interested in using Siphon to access radar data, you can access recent (2 weeks') data by changing the server URL below to: http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/\nBut first!\nBookmark these resources for when you want to use Siphon later!\n+ latest Siphon documentation\n+ Siphon github repo\n+ TDS documentation\nDownloading the single latest volume\nJust a bit of initial set-up to use inline figures and quiet some warnings.",
"import matplotlib\nimport warnings\nwarnings.filterwarnings(\"ignore\", category=matplotlib.cbook.MatplotlibDeprecationWarning)\n%matplotlib inline",
"First we'll create an instance of RadarServer to point to the appropriate radar server access URL.",
"# The S3 URL did not work for me, despite .edu domain\n#url = 'http://thredds-aws.unidata.ucar.edu/thredds/radarServer/nexrad/level2/S3/'\n\n#Trying motherlode URL\nurl = 'http://thredds.ucar.edu/thredds/radarServer/nexrad/level2/IDD/'\nfrom siphon.radarserver import RadarServer\nrs = RadarServer(url)",
"Next, we'll create a new query object to help request the data. Using the chaining methods, let's ask for the latest data at the radar KLVX (Louisville, KY). We see that when the query is represented as a string, it shows the encoded URL.",
"from datetime import datetime, timedelta\nquery = rs.query()\nquery.stations('KLVX').time(datetime.utcnow())",
"We can use the RadarServer instance to check our query, to make sure we have required parameters and that we have chosen valid station(s) and variable(s)",
"rs.validate_query(query)",
"Make the request, which returns an instance of TDSCatalog; this handles parsing the returned XML information.",
"catalog = rs.get_catalog(query)",
"We can look at the datasets on the catalog to see what data we found by the query. We find one volume in the return, since we asked for the volume nearest to a single time.",
"catalog.datasets",
"We can pull that dataset out of the dictionary and look at the available access URLs. We see URLs for OPeNDAP, CDMRemote, and HTTPServer (direct download).",
"ds = list(catalog.datasets.values())[0]\nds.access_urls",
"We'll use the CDMRemote reader in Siphon and pass it the appropriate access URL.",
"from siphon.cdmr import Dataset\ndata = Dataset(ds.access_urls['CdmRemote'])",
"We define some helper functions to make working with the data easier. One takes the raw data and converts it to floating point values with the missing data points appropriately marked. The other helps with converting the polar coordinates (azimuth and range) to Cartesian (x and y).",
"import numpy as np\ndef raw_to_masked_float(var, data):\n # Values come back signed. If the _Unsigned attribute is set, we need to convert\n # from the range [-127, 128] to [0, 255].\n if var._Unsigned:\n data = data & 255\n\n # Mask missing points\n data = np.ma.array(data, mask=data==0)\n\n # Convert to float using the scale and offset\n return data * var.scale_factor + var.add_offset\n\ndef polar_to_cartesian(az, rng):\n az_rad = np.deg2rad(az)[:, None]\n x = rng * np.sin(az_rad)\n y = rng * np.cos(az_rad)\n return x, y",
"The CDMRemote reader provides an interface that is almost identical to the usual python NetCDF interface. We pull out the variables we need for azimuth and range, as well as the data itself.",
"sweep = 0\nref_var = data.variables['Reflectivity_HI']\nref_data = ref_var[sweep]\nrng = data.variables['distanceR_HI'][:]\naz = data.variables['azimuthR_HI'][sweep]",
"Then convert the raw data to floating point values and the polar coordinates to Cartesian.",
"ref = raw_to_masked_float(ref_var, ref_data)\nx, y = polar_to_cartesian(az, rng)",
"MetPy is a Python package for meteorology (Documentation: http://metpy.readthedocs.org and GitHub: http://github.com/MetPy/MetPy). We import MetPy and use it to get the colortable and value mapping information for the NWS Reflectivity data.",
"from metpy.plots import ctables # For NWS colortable\nref_norm, ref_cmap = ctables.registry.get_with_steps('NWSReflectivity', 5, 5)",
"Finally, we plot them up using matplotlib and cartopy. We create a helper function for making a map to keep things simpler later.",
"import matplotlib.pyplot as plt\nimport cartopy\n\ndef new_map(fig, lon, lat):\n # Create projection centered on the radar. This allows us to use x\n # and y relative to the radar.\n proj = cartopy.crs.LambertConformal(central_longitude=lon, central_latitude=lat)\n\n # New axes with the specified projection\n ax = fig.add_subplot(1, 1, 1, projection=proj)\n\n # Add coastlines\n ax.coastlines('50m', 'black', linewidth=2, zorder=2)\n\n # Grab state borders\n state_borders = cartopy.feature.NaturalEarthFeature(\n category='cultural', name='admin_1_states_provinces_lines',\n scale='50m', facecolor='none')\n ax.add_feature(state_borders, edgecolor='black', linewidth=1, zorder=3)\n \n return ax",
"Download a collection of historical data\nThis time we'll make a query based on a longitude, latitude point and using a time range.",
"query = rs.query()\n#dt = datetime(2012, 10, 29, 15) # Our specified time\ndt = datetime(2016, 6, 8, 18) # Our specified time\nquery.lonlat_point(-73.687, 41.175).time_range(dt, dt + timedelta(hours=1))",
"The specified longitude, latitude are in NY and the TDS helpfully finds the closest station to that point. We can see that for this time range we obtained multiple datasets.",
"cat = rs.get_catalog(query)\ncat.datasets",
"Grab the first dataset so that we can get the longitude and latitude of the station and make a map for plotting. We'll go ahead and specify some longitude and latitude bounds for the map.",
"ds = list(cat.datasets.values())[0]\ndata = Dataset(ds.access_urls['CdmRemote'])\n# Pull out the data of interest\nsweep = 0\nrng = data.variables['distanceR_HI'][:]\naz = data.variables['azimuthR_HI'][sweep]\nref_var = data.variables['Reflectivity_HI']\n\n# Convert data to float and coordinates to Cartesian\nref = raw_to_masked_float(ref_var, ref_var[sweep])\nx, y = polar_to_cartesian(az, rng)",
"Use the function to make a new map and plot a colormapped view of the data",
"fig = plt.figure(figsize=(10, 10))\nax = new_map(fig, data.StationLongitude, data.StationLatitude)\n\n# Set limits in lat/lon space\nax.set_extent([-77, -70, 38, 42])\n\n# Add ocean and land background\nocean = cartopy.feature.NaturalEarthFeature('physical', 'ocean', scale='50m',\n edgecolor='face',\n facecolor=cartopy.feature.COLORS['water'])\nland = cartopy.feature.NaturalEarthFeature('physical', 'land', scale='50m',\n edgecolor='face',\n facecolor=cartopy.feature.COLORS['land'])\n\nax.add_feature(ocean, zorder=-1)\nax.add_feature(land, zorder=-1)\n#ax = new_map(fig, data.StationLongitude, data.StationLatitude)\nax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0);",
"Now we can loop over the collection of returned datasets and plot them. As we plot, we collect the returned plot objects so that we can use them to make an animated plot. We also add a timestamp for each plot.",
"meshes = []\nfor item in sorted(cat.datasets.items()):\n # After looping over the list of sorted datasets, pull the actual Dataset object out\n # of our list of items and access over CDMRemote\n ds = item[1]\n data = Dataset(ds.access_urls['CdmRemote'])\n\n # Pull out the data of interest\n sweep = 0\n rng = data.variables['distanceR_HI'][:]\n az = data.variables['azimuthR_HI'][sweep]\n ref_var = data.variables['Reflectivity_HI']\n\n # Convert data to float and coordinates to Cartesian\n ref = raw_to_masked_float(ref_var, ref_var[sweep])\n x, y = polar_to_cartesian(az, rng)\n\n # Plot the data and the timestamp\n mesh = ax.pcolormesh(x, y, ref, cmap=ref_cmap, norm=ref_norm, zorder=0)\n text = ax.text(0.65, 0.03, data.time_coverage_start, transform=ax.transAxes,\n fontdict={'size':16})\n \n # Collect the things we've plotted so we can animate\n meshes.append((mesh, text))",
"Using matplotlib, we can take a collection of Artists that have been plotted and turn them into an animation. With matplotlib 1.5 (1.5-rc2 is available now!), this animation can be converted to HTML5 video viewable in the notebook.",
"# Set up matplotlib to do the conversion to HTML5 video\nimport matplotlib\nmatplotlib.rcParams['animation.html'] = 'html5'\n\n# Create an animation\nfrom matplotlib.animation import ArtistAnimation\nArtistAnimation(fig, meshes)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
InsightSoftwareConsortium/SimpleITK-Notebooks
|
Python/34_Segmentation_Evaluation.ipynb
|
apache-2.0
|
[
"Segmentation Evaluation <a href=\"https://mybinder.org/v2/gh/InsightSoftwareConsortium/SimpleITK-Notebooks/master?filepath=Python%2F34_Segmentation_Evaluation.ipynb\"><img style=\"float: right;\" src=\"https://mybinder.org/badge_logo.svg\"></a>\nEvaluating segmentation algorithms is most often done using reference data to which you compare your results. \nIn the medical domain reference data is commonly obtained via manual segmentation by an expert (don't forget to thank your clinical colleagues for their hard work). When you are resource limited, the reference data may be defined by a single expert. This is less than ideal. When multiple experts provide you with their input then you can potentially combine them to obtain reference data that is closer to the ever elusive \"ground truth\". In this notebook we show two approaches to combining input from multiple observers, majority vote and the Simultaneous Truth and Performance Level\nEstimation (STAPLE).\nOnce we have a reference, we compare the algorithm's performance using multiple criteria, as usually there is no single evaluation measure that conveys all of the relevant information. In this notebook we illustrate the use of the following evaluation criteria:\n* Overlap measures:\n * Jaccard and Dice coefficients \n * false negative and false positive errors\n* Surface distance measures:\n * Hausdorff distance (symmetric)\n * mean, median, max and standard deviation between surfaces\n* Volume measures:\n * volume similarity $ \\frac{2*(v1-v2)}{v1+v2}$\nThe relevant criteria are task dependent, so you need to ask yourself whether you are interested in detecting spurious errors or not (mean or max surface distance), whether over/under segmentation should be differentiated (volume similarity and Dice or just Dice), and what is the ratio between acceptable errors and the size of the segmented object (Dice coefficient may be too sensitive to small errors when the segmented object is small and not sensitive enough to large errors when the segmented object is large).\nThe data we use in the notebook is a set of manually segmented liver tumors from a single clinical CT scan. The relevant publication is: T. Popa et al., \"Tumor Volume Measurement and Volume Measurement Comparison Plug-ins for VolView Using ITK\", SPIE Medical Imaging: Visualization, Image-Guided Procedures, and Display, 2006.\nNote: The approach described here can also be used to evaluate Registration, as illustrated in the free form deformation notebook.\nRecommended read: A community effort describing limitations of various evaluation metrics, \nA. Reinke et al., \"Common Limitations of Image Processing Metrics: A Picture Story\", available from arxiv (PDF).",
"import SimpleITK as sitk\n\nimport numpy as np\n\n%run update_path_to_download_script\nfrom downloaddata import fetch_data as fdata\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\nimport gui\n\nfrom ipywidgets import interact, fixed",
"Utility method for display",
"def display_with_overlay(\n segmentation_number, slice_number, image, segs, window_min, window_max\n):\n \"\"\"\n Display a CT slice with segmented contours overlaid onto it. The contours are the edges of\n the labeled regions.\n \"\"\"\n img = image[:, :, slice_number]\n msk = segs[segmentation_number][:, :, slice_number]\n overlay_img = sitk.LabelMapContourOverlay(\n sitk.Cast(msk, sitk.sitkLabelUInt8),\n sitk.Cast(\n sitk.IntensityWindowing(\n img, windowMinimum=window_min, windowMaximum=window_max\n ),\n sitk.sitkUInt8,\n ),\n opacity=1,\n contourThickness=[2, 2],\n )\n # We assume the original slice is isotropic, otherwise the display would be distorted\n plt.imshow(sitk.GetArrayViewFromImage(overlay_img))\n plt.axis(\"off\")\n plt.show()",
"Fetch the data\nRetrieve a single CT scan and three manual delineations of a liver tumor. Visual inspection of the data highlights the variability between experts.",
"image = sitk.ReadImage(fdata(\"liverTumorSegmentations/Patient01Homo.mha\"))\nsegmentation_file_names = [\n \"liverTumorSegmentations/Patient01Homo_Rad01.mha\",\n \"liverTumorSegmentations/Patient01Homo_Rad02.mha\",\n \"liverTumorSegmentations/Patient01Homo_Rad03.mha\",\n]\n\nsegmentations = [\n sitk.ReadImage(fdata(file_name), sitk.sitkUInt8)\n for file_name in segmentation_file_names\n]\n\ninteract(\n display_with_overlay,\n segmentation_number=(0, len(segmentations) - 1),\n slice_number=(0, image.GetSize()[2] - 1),\n image=fixed(image),\n segs=fixed(segmentations),\n window_min=fixed(-1024),\n window_max=fixed(976),\n);",
"Derive a reference\nThere are a variety of ways to derive a reference segmentation from multiple expert inputs. Several options, there are more, are described in \"A comparison of ground truth estimation methods\", A. M. Biancardi, A. C. Jirapatnakul, A. P. Reeves. \nTwo methods that are available in SimpleITK are <b>majority vote</b> and the <b>STAPLE</b> algorithm.",
"# Use majority voting to obtain the reference segmentation. Note that this filter does not resolve ties. In case of\n# ties, it will assign max_label_value+1 or a user specified label value (labelForUndecidedPixels) to the result.\n# Before using the results of this filter you will have to check whether there were ties and modify the results to\n# resolve the ties in a manner that makes sense for your task. The filter implicitly accommodates multiple labels.\nlabelForUndecidedPixels = 10\nreference_segmentation_majority_vote = sitk.LabelVoting(\n segmentations, labelForUndecidedPixels\n)\n\nmanual_plus_majority_vote = list(segmentations)\n# Append the reference segmentation to the list of manual segmentations\nmanual_plus_majority_vote.append(reference_segmentation_majority_vote)\n\ninteract(\n display_with_overlay,\n segmentation_number=(0, len(manual_plus_majority_vote) - 1),\n slice_number=(0, image.GetSize()[1] - 1),\n image=fixed(image),\n segs=fixed(manual_plus_majority_vote),\n window_min=fixed(-1024),\n window_max=fixed(976),\n);\n\n# Use the STAPLE algorithm to obtain the reference segmentation. This implementation of the original algorithm\n# combines a single label from multiple segmentations, the label is user specified. The result of the\n# filter is the voxel's probability of belonging to the foreground. We then have to threshold the result to obtain\n# a reference binary segmentation.\nforegroundValue = 1\nthreshold = 0.95\nreference_segmentation_STAPLE_probabilities = sitk.STAPLE(\n segmentations, foregroundValue\n)\n# We use the overloaded operator to perform thresholding, another option is to use the BinaryThreshold function.\nreference_segmentation_STAPLE = reference_segmentation_STAPLE_probabilities > threshold\n\nmanual_plus_staple = list(segmentations)\n# Append the reference segmentation to the list of manual segmentations\nmanual_plus_staple.append(reference_segmentation_STAPLE)\n\ninteract(\n display_with_overlay,\n segmentation_number=(0, len(manual_plus_staple) - 1),\n slice_number=(0, image.GetSize()[1] - 1),\n image=fixed(image),\n segs=fixed(manual_plus_staple),\n window_min=fixed(-1024),\n window_max=fixed(976),\n);",
"Evaluate segmentations using the reference\nOnce we derive a reference from our experts input we can compare segmentation results to it.\nNote that in this notebook we compare the expert segmentations to the reference derived from them. This is not relevant for algorithm evaluation, but it can potentially be used to rank your experts.\nIn this specific implementation we take advantage of the fact that we have a binary segmentation with 1 for foreground and 0 for background.",
"from enum import Enum\n\n# Use enumerations to represent the various evaluation measures\nclass OverlapMeasures(Enum):\n jaccard, dice, volume_similarity, false_negative, false_positive = range(5)\n\n\nclass SurfaceDistanceMeasures(Enum):\n (\n hausdorff_distance,\n mean_surface_distance,\n median_surface_distance,\n std_surface_distance,\n max_surface_distance,\n ) = range(5)\n\n\n# Select which reference we want to use (majority vote or STAPLE)\nreference_segmentation = reference_segmentation_STAPLE\n\n# Empty numpy arrays to hold the results\noverlap_results = np.zeros(\n (len(segmentations), len(OverlapMeasures.__members__.items()))\n)\nsurface_distance_results = np.zeros(\n (len(segmentations), len(SurfaceDistanceMeasures.__members__.items()))\n)\n\n# Compute the evaluation criteria\n\n# Note that for the overlap measures filter, because we are dealing with a single label we\n# use the combined, all labels, evaluation measures without passing a specific label to the methods.\noverlap_measures_filter = sitk.LabelOverlapMeasuresImageFilter()\n\nhausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()\n\nreference_surface = sitk.LabelContour(reference_segmentation)\n# Use the absolute values of the distance map to compute the surface distances (distance map sign, outside or inside\n# relationship, is irrelevant)\nreference_distance_map = sitk.Abs(\n sitk.SignedMaurerDistanceMap(\n reference_surface, squaredDistance=False, useImageSpacing=True\n )\n)\n\nstatistics_image_filter = sitk.StatisticsImageFilter()\n# Get the number of pixels in the reference surface by counting all pixels that are 1.\nstatistics_image_filter.Execute(reference_surface)\nnum_reference_surface_pixels = int(statistics_image_filter.GetSum())\n\nfor i, seg in enumerate(segmentations):\n # Overlap measures\n overlap_measures_filter.Execute(seg, reference_segmentation)\n overlap_results[\n i, OverlapMeasures.jaccard.value\n ] = overlap_measures_filter.GetJaccardCoefficient()\n overlap_results[\n i, OverlapMeasures.dice.value\n ] = overlap_measures_filter.GetDiceCoefficient()\n overlap_results[\n i, OverlapMeasures.volume_similarity.value\n ] = overlap_measures_filter.GetVolumeSimilarity()\n overlap_results[\n i, OverlapMeasures.false_negative.value\n ] = overlap_measures_filter.GetFalseNegativeError()\n overlap_results[\n i, OverlapMeasures.false_positive.value\n ] = overlap_measures_filter.GetFalsePositiveError()\n # Hausdorff distance\n hausdorff_distance_filter.Execute(reference_segmentation, seg)\n\n surface_distance_results[\n i, SurfaceDistanceMeasures.hausdorff_distance.value\n ] = hausdorff_distance_filter.GetHausdorffDistance()\n segmented_surface = sitk.LabelContour(seg)\n # Symmetric surface distance measures\n segmented_distance_map = sitk.Abs(\n sitk.SignedMaurerDistanceMap(\n segmented_surface, squaredDistance=False, useImageSpacing=True\n )\n )\n\n # Multiply the binary surface segmentations with the distance maps. 
The resulting distance\n # maps contain non-zero values only on the surface (they can also contain zero on the surface)\n seg2ref_distance_map = reference_distance_map * sitk.Cast(\n segmented_surface, sitk.sitkFloat32\n )\n ref2seg_distance_map = segmented_distance_map * sitk.Cast(\n reference_surface, sitk.sitkFloat32\n )\n\n # Get the number of pixels in the reference surface by counting all pixels that are 1.\n statistics_image_filter.Execute(segmented_surface)\n num_segmented_surface_pixels = int(statistics_image_filter.GetSum())\n\n # Get all non-zero distances and then add zero distances if required.\n seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)\n seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr != 0])\n seg2ref_distances = seg2ref_distances + list(\n np.zeros(num_segmented_surface_pixels - len(seg2ref_distances))\n )\n ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)\n ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr != 0])\n ref2seg_distances = ref2seg_distances + list(\n np.zeros(num_reference_surface_pixels - len(ref2seg_distances))\n )\n\n all_surface_distances = seg2ref_distances + ref2seg_distances\n\n # The maximum of the symmetric surface distances is the Hausdorff distance between the surfaces. In\n # general, it is not equal to the Hausdorff distance between all voxel/pixel points of the two\n # segmentations, though in our case it is. More on this below.\n surface_distance_results[\n i, SurfaceDistanceMeasures.mean_surface_distance.value\n ] = np.mean(all_surface_distances)\n surface_distance_results[\n i, SurfaceDistanceMeasures.median_surface_distance.value\n ] = np.median(all_surface_distances)\n surface_distance_results[\n i, SurfaceDistanceMeasures.std_surface_distance.value\n ] = np.std(all_surface_distances)\n surface_distance_results[\n i, SurfaceDistanceMeasures.max_surface_distance.value\n ] = np.max(all_surface_distances)\n\n# Print the matrices\nnp.set_printoptions(precision=3)\nprint(overlap_results)\nprint(surface_distance_results)",
"Improved output\nIf the pandas package is installed in your Python environment then you can easily produce high quality output.",
"import pandas as pd\nfrom IPython.display import display, HTML\n\n# Graft our results matrix into pandas data frames\noverlap_results_df = pd.DataFrame(\n data=overlap_results,\n index=list(range(len(segmentations))),\n columns=[name for name, _ in OverlapMeasures.__members__.items()],\n)\nsurface_distance_results_df = pd.DataFrame(\n data=surface_distance_results,\n index=list(range(len(segmentations))),\n columns=[name for name, _ in SurfaceDistanceMeasures.__members__.items()],\n)\n\n# Display the data as HTML tables and graphs\ndisplay(HTML(overlap_results_df.to_html(float_format=lambda x: \"%.3f\" % x)))\ndisplay(HTML(surface_distance_results_df.to_html(float_format=lambda x: \"%.3f\" % x)))\noverlap_results_df.plot(kind=\"bar\").legend(bbox_to_anchor=(1.6, 0.9))\nsurface_distance_results_df.plot(kind=\"bar\").legend(bbox_to_anchor=(1.6, 0.9))",
"You can also export the data as a table for your LaTeX manuscript using the to_latex function.\n<b>Note</b>: You will need to add the \\usepackage{booktabs} to your LaTeX document's preamble. \nTo create the minimal LaTeX document which will allow you to see the difference between the tables below, copy paste:\n\\documentclass{article}\n\\usepackage{booktabs}\n\\begin{document}\npaste the tables here\n\\end{document}",
"# The formatting of the table using the default settings is less than ideal\nprint(overlap_results_df.to_latex())\n\n# We can improve on this by specifying the table's column format and the float format\nprint(\n overlap_results_df.to_latex(\n column_format=\"ccccccc\", float_format=lambda x: \"%.3f\" % x\n )\n)",
"Segmentation Representation and the Hausdorff Distance\nThe results of segmentation can be represented as a set of closed contours/surfaces or as the discrete set of points (pixels/voxels) belonging to the segmented objects. Ideally using either representation would yield the same values for the segmentation evaluation metrics. Unfortunately, the Hausdorff distance computed directly from each of these representations will generally not yield the same results. In some cases, such as the one above, the two values do match (table entries hausdorff_distance and max_surface_distance).\nThe following example illustrates that the Hausdorff distance for the contour/surface representation and the discrete point set representing the segmented object differ, and that there is no correlation between the two.\nOur object of interest is annulus shaped (e.g. myocardium in short axis MRI). It has an internal radius, $r$, and an external radius $R>r$. We over-segmented the object and obtained a filled circle of radius $R$.\nThe contour/surface based Hausdorff distance is $R-r$, the distance between external contours is zero and between internal and external contours is $R-r$. The pixel/voxel object based Hausdorff distance is $r$, corresponding to the distance between the center point in the over-segmented result to the inner circle contour. For different values of $r$ we can either have $R-r \\geq r$ or $R-r \\leq r$.\nNote: Both computations of Hausdorff distance are valid, though the common approach is to use the pixel/voxel based representation for computing the Hausdorff distance.\nThe following cells show these differences in detail.",
"# Create our segmentations and display\nimage_size = [64, 64]\ncircle_center = [30, 30]\ncircle_radius = [20, 20]\n\n# A filled circle with radius R\nseg = (\n sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 200\n)\n# A torus with inner radius r\nreference_segmentation1 = seg - (\n sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 240\n)\n# A torus with inner radius r_2<r\nreference_segmentation2 = seg - (\n sitk.GaussianSource(sitk.sitkUInt8, image_size, circle_radius, circle_center) > 250\n)\n\ngui.multi_image_display2D(\n [reference_segmentation1, reference_segmentation2, seg],\n [\"reference 1\", \"reference 2\", \"segmentation\"],\n figure_size=(12, 4),\n);\n\ndef surface_hausdorff_distance(reference_segmentation, seg):\n \"\"\"\n Compute symmetric surface distances and take the maximum.\n \"\"\"\n reference_surface = sitk.LabelContour(reference_segmentation)\n reference_distance_map = sitk.Abs(\n sitk.SignedMaurerDistanceMap(\n reference_surface, squaredDistance=False, useImageSpacing=True\n )\n )\n\n statistics_image_filter = sitk.StatisticsImageFilter()\n # Get the number of pixels in the reference surface by counting all pixels that are 1.\n statistics_image_filter.Execute(reference_surface)\n num_reference_surface_pixels = int(statistics_image_filter.GetSum())\n\n segmented_surface = sitk.LabelContour(seg)\n segmented_distance_map = sitk.Abs(\n sitk.SignedMaurerDistanceMap(\n segmented_surface, squaredDistance=False, useImageSpacing=True\n )\n )\n\n # Multiply the binary surface segmentations with the distance maps. The resulting distance\n # maps contain non-zero values only on the surface (they can also contain zero on the surface)\n seg2ref_distance_map = reference_distance_map * sitk.Cast(\n segmented_surface, sitk.sitkFloat32\n )\n ref2seg_distance_map = segmented_distance_map * sitk.Cast(\n reference_surface, sitk.sitkFloat32\n )\n\n # Get the number of pixels in the reference surface by counting all pixels that are 1.\n statistics_image_filter.Execute(segmented_surface)\n num_segmented_surface_pixels = int(statistics_image_filter.GetSum())\n\n # Get all non-zero distances and then add zero distances if required.\n seg2ref_distance_map_arr = sitk.GetArrayViewFromImage(seg2ref_distance_map)\n seg2ref_distances = list(seg2ref_distance_map_arr[seg2ref_distance_map_arr != 0])\n seg2ref_distances = seg2ref_distances + list(\n np.zeros(num_segmented_surface_pixels - len(seg2ref_distances))\n )\n ref2seg_distance_map_arr = sitk.GetArrayViewFromImage(ref2seg_distance_map)\n ref2seg_distances = list(ref2seg_distance_map_arr[ref2seg_distance_map_arr != 0])\n ref2seg_distances = ref2seg_distances + list(\n np.zeros(num_reference_surface_pixels - len(ref2seg_distances))\n )\n all_surface_distances = seg2ref_distances + ref2seg_distances\n return np.max(all_surface_distances)\n\n\nhausdorff_distance_filter = sitk.HausdorffDistanceImageFilter()\n\n# Use reference1, larger inner annulus radius, the surface based computation\n# has a smaller difference.\nhausdorff_distance_filter.Execute(reference_segmentation1, seg)\nprint(\n \"HausdorffDistanceImageFilter result (reference1-segmentation): \"\n + str(hausdorff_distance_filter.GetHausdorffDistance())\n)\nprint(\n \"Surface Hausdorff result (reference1-segmentation): \"\n + str(surface_hausdorff_distance(reference_segmentation1, seg))\n)\n\n# Use reference2, smaller inner annulus radius, the surface based computation\n# has a larger 
difference.\nhausdorff_distance_filter.Execute(reference_segmentation2, seg)\nprint(\n \"HausdorffDistanceImageFilter result (reference2-segmentation): \"\n + str(hausdorff_distance_filter.GetHausdorffDistance())\n)\nprint(\n \"Surface Hausdorff result (reference2-segmentation): \"\n + str(surface_hausdorff_distance(reference_segmentation2, seg))\n)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
szitenberg/ReproPhyloVagrant
|
notebooks/Tutorials/Basic/3.10 Tree annotation and report.ipynb
|
mit
|
[
"The last section of this tutorial is about producing annotated tree figures and a human readable report. First we have to load our Project again:",
"from reprophylo import *\npj = unpickle_pj('outputs/my_project.pkpj', git=False)",
"3.10.1 Updating the metadata after the tree has been built\nOften, we want to display information that did not exist in the Project when we first built our trees. This is not an issue. We can add metadata now and propagate it to all the parts of the Project, including to our preexisting trees. For example, I add here some morphological information. Some of the species in our data have a morphological structure called porocalyx,",
"genera_with_porocalices = ['Cinachyrella', \n 'Cinachyra', \n 'Amphitethya',\n 'Fangophilina',\n 'Acanthotetilla',\n 'Paratetilla']",
"while others do not:",
"genera_without_porocalices = ['Craniella',\n 'Tetilla',\n 'Astrophorida']",
"The following command will add the value 'present' to a new qualifier called 'porocalices' in sequence features of species that belong to genera_with_porocalices:",
"for genus in genera_with_porocalices:\n pj.if_this_then_that(genus, 'genus', 'present', 'porocalices')",
"and the following command will add the value 'absent' to a new qualifier called 'porocalices' to sequence features of species that belong to genera_without_porocalices:",
"for genus in genera_without_porocalices:\n pj.if_this_then_that(genus, 'genus', 'absent', 'porocalices')",
"The new qualifier porocalices in now updated in the SeqRecord objects within the pj.records list (more on this in section 3.4). But in order for it to exist in the Tree objects, stored in the pj.trees dictionary, we have to run this command:",
"pj.propagate_metadata()",
"Only now the new qualifier is available for tree annotation. Note that qualifiers that existed in the Project when we built the trees, will be included in the Tree object by default. \n3.10.2 Configuring and writing a tree figure\nThe annotate Project method will produce one figure for each tree in the Project according to the settings. Colors can be indicated with X11 color names. The following settings can be controlled:\n\nfig_folder: The path for the output figure file\nroot_meta and root_value: The qualifier and its value that will indicate the outgroup. It can be 'mid' and 'mid' for a midpoint root, or for example, 'source_organism' and 'Some species binomial' to set a group of leaves with a shared value as an outgroup (required).\nleaf_labels_txt_meta: A list of qualifiers which values will be used as leaf labels, required.\nleaf_node_color_meta and leaf_label_colors: The qualifier that determines clade background colors and a dictionary assigning colors to the qualifier's values (defaults to None and None).\nftype and fsize: Leaf label font and font size (default 'verdana' and 10)\nnode_bg_meta and node_bg_color: A qualifier that determines the leaf label colors and a dictionary assigning colors to its values (defaults to None and None).\nnode_support_dict and support_bullet_size: A dictionary assigning support ranges to bullet colors, and the size of the bullets (defaults to None and 5),\nheat_map_meta and heat_map_colour_scheme: A list of qualifiers which will be the heatmap's columns, and the color scheme (defaults to None and 2 see ETE for color schemes)\npic_meta, pic_paths, pic_w and pic_h: You can put small images next to leaves. pic_meta will determine the qualifier according to which values an image will be assigned. pic_paths is a dictionary assigning image file paths to the qualifier's values. pic_w and pic_h are the dimensions of the images in pixels (the defaults are None for all the four keywords). \nmultifurc: Branch support cutoff under which to multifurcate nodes (default - None).\nbranch_width and branch_color (defaults: 2 and DimGray)\nscale: This will determine the width of the tree (default 1000)\nhtml: The path to which to write an html file with a list of the figures and links to the figure files (default None)\n\n3.10.2.1 Example 1, the metadata determines background colours",
"bg_colors = {'present':'red',\n 'absent': 'white'}\n\nsupports = {'black': [100,99],\n 'gray': [99,80]}\n\npj.annotate('./images/', # Path to write figs to\n \n 'genus', 'Astrophorida', # Set OTUs that have 'Astrophorida'\n # in their 'genus' qualifier\n # as outgroup\n \n ['source_organism', 'record_id'], # leaf labels\n \n node_bg_meta='porocalices', # The qualifier that\n # will determine bg colors\n node_bg_color=bg_colors, # The colors assigned to \n # each qualifier value\n \n node_support_dict=supports, \n \n html='./images/figs.html'\n )\n\npj.clear_tree_annotations()",
"In the resulting figure (below), clades of species with porocalices have red background, node with maximal relBootstrap support have black bullets, and nodes with branch support > 80 has gray bullets.",
"from IPython.display import Image\nImage('./images/example1.png', width=300)",
"3.10.2.2 Example 2, the metadata as a heatmap\nThe second example introduces midpoint rooting and a heatmap. There are three columns in this heatmap, representing numerical values of three qualifiers. In this instance, the values are 0 or 1 for presence and absence. In addition, we change the branch colour to black and assign shades of gray to the genera.",
"bg_colors = {'Cinachyrella': 'gray', \n 'Cinachyra': 'silver', \n 'Amphitethya': 'white',\n 'Fangophilina':'white',\n 'Acanthotetilla':'silver',\n 'Paratetilla':'white',\n 'Craniella': 'gray',\n 'Tetilla': 'silver',\n 'Astrophorida': 'white'}\n\npj.clear_tree_annotations()\n\npj.annotate('./images/', # Path to write figs to\n \n 'mid', 'mid', # Set midpoint root\n \n ['source_organism'], # leaf labels\n \n fsize=13,\n \n node_bg_meta='genus', # The qualifier that\n # will determine bg colors\n node_bg_color=bg_colors, # The colors assigned to \n # each qualifier value\n \n # heatmap columns\n heat_map_meta=['porocalyx', 'cortex', 'calthrops'],\n \n heat_map_colour_scheme=0,\n \n branch_color='black',\n \n html='./images/figs.html'\n )",
"And this is what it looks like:",
"from IPython.display import Image\nImage('./images/example2.png', width=300)",
"2.10.3 Archive the analysis as a zip file\nThe publish function will produce an html human readable report containing a description of the data, alignments, trees, and the methods that created them in various ways. The following options control this function: \n\nfolder_name: zip file name or directory name for the report (will be created)\nfigures_folder: where did you save your tree figures?\nsize: 'small' = don't print alignment statistics graph, or 'large': print them. If 'large' is chosen, for each alignment and trimmed alignment, gap scores and conservation scores plots will be printed. (default - 'small'). \ncompare_trees: a list of algorithms to use to formally compare trees. The algorithms to choose from are 'topology', 'branch-length' and 'proportional'. (default, [])\ncompare_meta: Similar to the OTU qualifier required for data concatenation, we need to say which qualifier identifies a discrete sample, that will allow to compare trees of different genes. By default, it will look for a Concatenation object and will use the OTU meta that is specified there. If there are no Concatenation objects, and we have not specified a compare_meta, it will raise an error. \ntrees_to_compare: A list of keys from the pj.trees dictionary. This allows to control what trees will go into the pairwise comparisons and also control their order of appearance in the results. (default, 'all') \nunrooted_trees: True or False (default). If True, the algorithm will minimize the difference before determining it.\n\nThis is a minimal example, which does not include tree comparisons. Tree comparisons are shown later.",
"publish(pj, 'my_report', './images/', size='large')\n\npickle_pj(pj, 'outputs/my_project.pkpj')"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
kerimlcr/ab2017-dpyo
|
ornek/osmnx/osmnx-0.3/examples/06-example-osmnx-networkx.ipynb
|
gpl-3.0
|
[
"Use OSMnx to analyze a NetworkX street network, including routing\n\nOverview of OSMnx\nGitHub repo\nExamples, demos, tutorials",
"import osmnx as ox, networkx as nx, matplotlib.cm as cm, pandas as pd, numpy as np\n%matplotlib inline\nox.config(log_file=True, log_console=True, use_cache=True)",
"Calculate basic street network measures (topological and metric)",
"# get the network for Piedmont, calculate its basic stats, then show the average circuity\nstats = ox.basic_stats(ox.graph_from_place('Piedmont, California, USA'))\nstats['circuity_avg']",
"To calculate density-based metrics, you must also pass the network's bounding area in square meters (otherwise basic_stats() will just skip them in the calculation):",
"# get the street network for a place, and its area in square meters\nplace = 'Piedmont, California, USA'\ngdf = ox.gdf_from_place(place)\narea = ox.project_gdf(gdf).unary_union.area\nG = ox.graph_from_place(place, network_type='drive_service')\n\n# calculate basic and extended network stats, merge them together, and display\nstats = ox.basic_stats(G, area=area)\nextended_stats = ox.extended_stats(G, ecc=True, bc=True, cc=True)\nfor key, value in extended_stats.items():\n stats[key] = value\npd.Series(stats)",
"Streets/intersection counts and proportions are nested dicts inside the stats dict. To convert these stats to a pandas dataframe (to compare/analyze multiple networks against each other), just unpack these nested dicts first:",
"# unpack dicts into individiual keys:values\nstats = ox.basic_stats(G, area=area)\nfor k, count in stats['streets_per_node_counts'].items():\n stats['int_{}_count'.format(k)] = count\nfor k, proportion in stats['streets_per_node_proportion'].items():\n stats['int_{}_prop'.format(k)] = proportion\n\n# delete the no longer needed dict elements\ndel stats['streets_per_node_counts']\ndel stats['streets_per_node_proportion']\n\n# load as a pandas dataframe\npd.DataFrame(pd.Series(stats)).T",
"Inspect betweenness centrality",
"G_projected = ox.project_graph(G)\nmax_node, max_bc = max(extended_stats['betweenness_centrality'].items(), key=lambda x: x[1])\nmax_node, max_bc",
"In the city of Piedmont, California, the node with the highest betweenness centrality has 29.4% of all shortest paths running through it. Let's highlight it in the plot:",
"nc = ['r' if node==max_node else '#336699' for node in G_projected.nodes()]\nns = [50 if node==max_node else 8 for node in G_projected.nodes()]\nfig, ax = ox.plot_graph(G_projected, node_size=ns, node_color=nc, node_zorder=2)",
"29.4% of all shortest paths run through the node highlighted in red. Let's look at the relative betweenness centrality of every node in the graph:",
"# get a color for each node\ndef get_color_list(n, color_map='plasma', start=0, end=1):\n return [cm.get_cmap(color_map)(x) for x in np.linspace(start, end, n)]\n\ndef get_node_colors_by_stat(G, data, start=0, end=1):\n df = pd.DataFrame(data=pd.Series(data).sort_values(), columns=['value'])\n df['colors'] = get_color_list(len(df), start=start, end=end)\n df = df.reindex(G.nodes())\n return df['colors'].tolist()\n\nnc = get_node_colors_by_stat(G_projected, data=extended_stats['betweenness_centrality'])\nfig, ax = ox.plot_graph(G_projected, node_color=nc, node_edgecolor='gray', node_size=20, node_zorder=2)",
"Above, the nodes are visualized by betweenness centrality, from low (dark violet) to high (light yellow).\nRouting: calculate the network path from the centermost node to some other node\nLet the origin node be the node nearest the location and let the destination node just be the last node in the network. Then find the shortest path between origin and destination, using weight='length' to find the shortest spatial path (otherwise it treats each edge as weight=1).",
"# define a lat-long point, create network around point, define origin/destination nodes\nlocation_point = (37.791427, -122.410018)\nG = ox.graph_from_point(location_point, distance=500, distance_type='network', network_type='walk')\norigin_node = ox.get_nearest_node(G, location_point)\ndestination_node = list(G.nodes())[-1]\n\n# find the route between these nodes then plot it\nroute = nx.shortest_path(G, origin_node, destination_node)\nfig, ax = ox.plot_graph_route(G, route)\n\n# project the network to UTM (zone calculated automatically) then plot the network/route again\nG_proj = ox.project_graph(G)\nfig, ax = ox.plot_graph_route(G_proj, route)",
"Routing: plot network path from one lat-long to another",
"# define origin/desination points then get the nodes nearest to each\norigin_point = (37.792896, -122.412325)\ndestination_point = (37.790495, -122.408353)\norigin_node = ox.get_nearest_node(G, origin_point)\ndestination_node = ox.get_nearest_node(G, destination_point)\norigin_node, destination_node\n\n# find the shortest path between origin and destination nodes\nroute = nx.shortest_path(G, origin_node, destination_node, weight='length')\nstr(route)\n\n# plot the route showing origin/destination lat-long points in blue\nfig, ax = ox.plot_graph_route(G, route, origin_point=origin_point, destination_point=destination_point)",
"Demonstrate routing with one-way streets",
"G = ox.graph_from_address('N. Sicily Pl., Chandler, Arizona', distance=800, network_type='drive')\norigin = (33.307792, -111.894940)\ndestination = (33.312994, -111.894998)\norigin_node = ox.get_nearest_node(G, origin)\ndestination_node = ox.get_nearest_node(G, destination)\nroute = nx.shortest_path(G, origin_node, destination_node)\nfig, ax = ox.plot_graph_route(G, route, save=True, filename='route')",
"Also, when there are parallel edges between nodes in the route, OSMnx picks the shortest edge to plot",
"location_point = (33.299896, -111.831638)\nG = ox.graph_from_point(location_point, distance=500, clean_periphery=False)\norigin = (33.301821, -111.829871)\ndestination = (33.301402, -111.833108)\norigin_node = ox.get_nearest_node(G, origin)\ndestination_node = ox.get_nearest_node(G, destination)\nroute = nx.shortest_path(G, origin_node, destination_node)\nfig, ax = ox.plot_graph_route(G, route)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hivert/CombiFIIL
|
TP/Python2/TP1GenerationRecursive.ipynb
|
gpl-2.0
|
[
"Génération récursive\nCe TP est une introduction à la notion de génération récursive d'objets combinatoires.\nLe Yield python\nCette section est une introduction à la notion de générateurs en Python qui nous sera utilie par la suite. \nVous avez vu en cours le principe d'un generateur, le langage python possède une instruction très pratique pour créer des générateurs : yield.",
"def stupid_generator(end):\n i = 0\n while i < end:\n yield i\n i+=1\n\nstupid_generator(3)",
"Le but de la fonction stupid_generator est de lister les entiers inférieurs à end. Cependant, elle ne retourne pas directement la liste mais un générateur sur cette liste. Comparez avec la fonction suivante.",
"def stupid_list(end):\n i = 0\n result = []\n while i < end:\n result.append(i)\n i+=1\n return result\n\nstupid_list(3)",
"Pour récupérer les objets de stupid_generator, il faut le transformer explicitement en liste ou alors parcourir les objets à travers une boucle.",
"it = stupid_generator(3)\n\nit.next()\n\nlist(stupid_generator(3))\n\nfor v in stupid_generator(3):\n print v",
"Remarque : les instructions de stupid_generator ne sont pas exécutées lors de l'appel initial de la fonction mais seulement lorsque l'on commence à parcourir le générateur pour récupérer le premier objet. L'instruction yield stoppe alors l'exécution et retourne le premier objet. Si l'on demande un dexuième objet, l'exécution sera reprise là où elle a été arrétée.",
"def test_generator():\n print \"Cette instruction est exécutée lors de l'appel du premier objet\"\n yield 1\n print \"Cette instruction est exécutée lors de l'appel du deuxième objet\"\n yield 2\n print \"Cette instruction est exécutée lors de l'appel du troisième objet\"\n yield 3\n\nit = test_generator()\n\nit.next()\n\nit.next()\n\nit.next()\n\nit.next()",
"Exercice : implanter la fonction suivante dont le but est de générer les n premiers nombre de Fibonacci La suite de fibonacci est définie par :\n$f_0 = 0$\n$f_1 = 1$\n$f_n = f_{n-1} + f_{n-2}$ pour $n \\geq 2$.",
"def first_fibonacci_generator(n):\n \"\"\"\n Return a generator for the first ``n`` Fibonacci numbers\n \"\"\"\n # write code here",
"Votre fonction doit passer les tests suivants :",
"import types\nassert(type(first_fibonacci_generator(3)) == types.GeneratorType)\nassert(list(first_fibonacci_generator(0)) == [])\nassert(list(first_fibonacci_generator(1)) == [0])\nassert(list(first_fibonacci_generator(2)) == [0,1])\nassert(list(first_fibonacci_generator(8)) == [0,1,1,2,3,5,8,13])",
"Dans les cas précédent, le générateur s'arrête de lui même au bout d'un certain temps. Cependant, il est aussi possible d'écrire des générateurs infinis. Dans ce cas, la responsabilité de l'arrêt revient à la l'appelant.",
"def powers2():\n v = 1\n while True:\n yield v\n v*=2\n\nfor v in powers2():\n print v\n if v > 1000000:\n break",
"Exercice: Implantez les fonctions suivantes",
"def fibonacci_generator():\n \"\"\"\n Return an infinite generator for Fibonacci numbers\n \"\"\"\n # write code here\n\nit = fibonacci_generator()\n\nit.next()\n\ndef fibonacci_finder(n):\n \"\"\"\n Return the first Fibonacci number greater than or equal to n\n \"\"\"\n # write code here\n\nassert(fibonacci_finder(10) == 13)\nassert(fibonacci_finder(100) == 144)\nassert(fibonacci_finder(1000) == 1597)\nassert(fibonacci_finder(1000000) == 1346269)",
"Mots binaires\nNous allons nous intéresser à la génération récursive de mots binaires vérifiants certaines propriétés. Nous allons représenter les mots binaires par des chaines de carcatères, par exemples.",
"binaires1 = [\"0\", \"1\"]\nbinaires2 = [\"00\", \"01\", \"10\", \"11\"]",
"Les fonctions suivantes génèrent les mots binaires de taille 0,1, et 2.",
"def binary_word_generator0():\n yield \"\"\n \ndef binary_word_generator1():\n yield \"0\"\n yield \"1\"\n \ndef binary_word_generator2():\n for b in binary_word_generator1():\n yield b + \"0\"\n yield b + \"1\"\n\nlist(binary_word_generator2())",
"En vous inspirant des fonctions précédentes (mais sans les utiliser) ou en reprenant la fonction du cours, implantez de façon récursive la fonction suivante qui engendre l'ensemble des mots binaires d'une taille donnée.",
"def binary_word_generator(n):\n \"\"\"\n Return a generator on binary words of size n in lexicographic order\n \n Input :\n - n, the length of the words\n \"\"\"\n # write code here\n\nlist(binary_word_generator(3))\n\n# tests\nimport types\nassert(type(binary_word_generator(0)) == types.GeneratorType)\nassert(list(binary_word_generator(0)) == [''])\nassert(list(binary_word_generator(1)) == ['0', '1'])\nassert(list(binary_word_generator(2)) == ['00', '01', '10', '11'])\nassert(list(binary_word_generator(3)) == ['000', '001', '010', '011', '100', '101', '110', '111'])\nassert(len(list(binary_word_generator(4))) == 16)\nassert(len(list(binary_word_generator(7))) == 128)",
"Sur le même modèle, implanter la fonction suivante. (un peu plus dur)\nPosez-vous la question de cette façon : si j'ai un mot de taille $n$ qui termine par 0 et qui contient $k$ fois 1, combien de 1 contenait le mot taille $n-1$ à partir duquel il a été créé ? De même s'il termine par 1.\nRemarque : l'ordre des mots n'est plus imposé",
"def binary_kword_generator(n,k):\n \"\"\"\n Return a generator on binary words of size n such that each word contains exacty k occurences of 1\n \n Input :\n \n - n, the size of the words\n - k, the number of 1\n \"\"\"\n # write code here\n\nlist(binary_kword_generator(4,2))\n\n# tests\nimport types\nassert(type(binary_kword_generator(0,0)) == types.GeneratorType)\nassert(list(binary_kword_generator(0,0)) == [''])\nassert(list(binary_kword_generator(0,1)) == [])\nassert(list(binary_kword_generator(1,1)) == ['1'])\nassert(list(binary_kword_generator(4,4)) == ['1111'])\nassert(list(binary_kword_generator(4,0)) == ['0000'])\nassert(set(binary_kword_generator(4,2)) == set(['0011', '0101', '1001', '0110', '1010', '1100']))\nassert(len(list(binary_kword_generator(7,3))) == 35)",
"Et pour finir\nOn appelle un prefixe de Dyck un mot binaire de taille $n$ avec $k$ occurences de 1, tel que dans tout préfixe, le nombre de 1 soit supérieur ou égal au nombre de 0.\nPar exemple : $1101$ est un préfixe de Dyck pour $n=4$ et $k=3$. Mais $1001$ n'en est pas un car dans le prefixe $100$ le nombre de 0 est supérieur au nombre de 1.",
"def dyck_prefix_generator(n,k):\n \"\"\"\n Return a generator on binary words of size n such that each word contains exacty k occurences of 1, \n and in any prefix, the number of 1 is greater than or equal to the number of 0.\n \n Input :\n \n - n, the size of the words\n - k, the number of 1\n \"\"\"\n # write code here\n\nlist(dyck_prefix_generator(4,2))\n\nassert(len(list(dyck_prefix_generator(0,0))) == 1)\nassert(len(list(dyck_prefix_generator(0,1))) == 0)\nassert(len(list(dyck_prefix_generator(1,0))) == 0)\nassert(list(dyck_prefix_generator(1,1)) == ['1'])\nassert(set(dyck_prefix_generator(3,2)) == set([\"110\",\"101\"] ))\nassert(len(set(dyck_prefix_generator(10,5))) == 42)",
"Exécutez la ligne suivante et copiez la liste des nombres obentus dans Google.",
"[len(set(dyck_prefix_generator(2*n, n))) for n in xrange(8)]",
"Aller plus loin\nComme vous avez pu le voir, ces nombres comptent de nombreuses familles d'objets combinatoires.\n\nQuelle famille d'objets avons-nous implantée ?\nPouvez-vous implanter les bijections entre cette famille et d'autres familles combinatoires telles que :\nles partitions non croisées\nles arbres binaires\n..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
billzhao1990/CS231n-Spring-2017
|
assignment2/.ipynb_checkpoints/BatchNormalization-checkpoint.ipynb
|
mit
|
[
"Batch Normalization\nOne way to make deep networks easier to train is to use more sophisticated optimization procedures such as SGD+momentum, RMSProp, or Adam. Another strategy is to change the architecture of the network to make it easier to train. One idea along these lines is batch normalization which was recently proposed by [3].\nThe idea is relatively straightforward. Machine learning methods tend to work better when their input data consists of uncorrelated features with zero mean and unit variance. When training a neural network, we can preprocess the data before feeding it to the network to explicitly decorrelate its features; this will ensure that the first layer of the network sees data that follows a nice distribution. However even if we preprocess the input data, the activations at deeper layers of the network will likely no longer be decorrelated and will no longer have zero mean or unit variance since they are output from earlier layers in the network. Even worse, during the training process the distribution of features at each layer of the network will shift as the weights of each layer are updated.\nThe authors of [3] hypothesize that the shifting distribution of features inside deep neural networks may make training deep networks more difficult. To overcome this problem, [3] proposes to insert batch normalization layers into the network. At training time, a batch normalization layer uses a minibatch of data to estimate the mean and standard deviation of each feature. These estimated means and standard deviations are then used to center and normalize the features of the minibatch. A running average of these means and standard deviations is kept during training, and at test time these running averages are used to center and normalize features.\nIt is possible that this normalization strategy could reduce the representational power of the network, since it may sometimes be optimal for certain layers to have features that are not zero-mean or unit variance. To this end, the batch normalization layer includes learnable shift and scale parameters for each feature dimension.\n[3] Sergey Ioffe and Christian Szegedy, \"Batch Normalization: Accelerating Deep Network Training by Reducing\nInternal Covariate Shift\", ICML 2015.",
"# As usual, a bit of setup\nfrom __future__ import print_function\nimport time\nimport numpy as np\nimport matplotlib.pyplot as plt\nfrom cs231n.classifiers.fc_net import *\nfrom cs231n.data_utils import get_CIFAR10_data\nfrom cs231n.gradient_check import eval_numerical_gradient, eval_numerical_gradient_array\nfrom cs231n.solver import Solver\n\n%matplotlib inline\nplt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots\nplt.rcParams['image.interpolation'] = 'nearest'\nplt.rcParams['image.cmap'] = 'gray'\n\n# for auto-reloading external modules\n# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython\n%load_ext autoreload\n%autoreload 2\n\ndef rel_error(x, y):\n \"\"\" returns relative error \"\"\"\n return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))\n\n# Load the (preprocessed) CIFAR10 data.\n\ndata = get_CIFAR10_data()\nfor k, v in data.items():\n print('%s: ' % k, v.shape)",
"Batch normalization: Forward\nIn the file cs231n/layers.py, implement the batch normalization forward pass in the function batchnorm_forward. Once you have done so, run the following to test your implementation.",
"# Check the training-time forward pass by checking means and variances\n# of features both before and after batch normalization\n\n# Simulate the forward pass for a two-layer network\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nX = np.random.randn(N, D1)\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\na = np.maximum(0, X.dot(W1)).dot(W2)\n\nprint('Before batch normalization:')\nprint(' means: ', a.mean(axis=0))\nprint(' stds: ', a.std(axis=0))\n\n# Means should be close to zero and stds close to one\nprint('After batch normalization (gamma=1, beta=0)')\na_norm, _ = batchnorm_forward(a, np.ones(D3), np.zeros(D3), {'mode': 'train'})\nprint(' mean: ', a_norm.mean(axis=0))\nprint(' std: ', a_norm.std(axis=0))\n\n# Now means should be close to beta and stds close to gamma\ngamma = np.asarray([1.0, 2.0, 3.0])\nbeta = np.asarray([11.0, 12.0, 13.0])\na_norm, _ = batchnorm_forward(a, gamma, beta, {'mode': 'train'})\nprint('After batch normalization (nontrivial gamma, beta)')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))\n\n# Check the test-time forward pass by running the training-time\n# forward pass many times to warm up the running averages, and then\n# checking the means and variances of activations after a test-time\n# forward pass.\nnp.random.seed(231)\nN, D1, D2, D3 = 200, 50, 60, 3\nW1 = np.random.randn(D1, D2)\nW2 = np.random.randn(D2, D3)\n\nbn_param = {'mode': 'train'}\ngamma = np.ones(D3)\nbeta = np.zeros(D3)\nfor t in range(50):\n X = np.random.randn(N, D1)\n a = np.maximum(0, X.dot(W1)).dot(W2)\n batchnorm_forward(a, gamma, beta, bn_param)\nbn_param['mode'] = 'test'\nX = np.random.randn(N, D1)\na = np.maximum(0, X.dot(W1)).dot(W2)\na_norm, _ = batchnorm_forward(a, gamma, beta, bn_param)\n\n# Means should be close to zero and stds close to one, but will be\n# noisier than training-time forward passes.\nprint('After batch normalization (test-time):')\nprint(' means: ', a_norm.mean(axis=0))\nprint(' stds: ', a_norm.std(axis=0))",
"Batch Normalization: backward\nNow implement the backward pass for batch normalization in the function batchnorm_backward.\nTo derive the backward pass you should write out the computation graph for batch normalization and backprop through each of the intermediate nodes. Some intermediates may have multiple outgoing branches; make sure to sum gradients across these branches in the backward pass.\nOnce you have finished, run the following to numerically check your backward pass.",
"# Gradient check batchnorm backward pass\nnp.random.seed(231)\nN, D = 4, 5\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nfx = lambda x: batchnorm_forward(x, gamma, beta, bn_param)[0]\nfg = lambda a: batchnorm_forward(x, a, beta, bn_param)[0]\nfb = lambda b: batchnorm_forward(x, gamma, b, bn_param)[0]\n\ndx_num = eval_numerical_gradient_array(fx, x, dout)\nda_num = eval_numerical_gradient_array(fg, gamma.copy(), dout)\ndb_num = eval_numerical_gradient_array(fb, beta.copy(), dout)\n\n_, cache = batchnorm_forward(x, gamma, beta, bn_param)\ndx, dgamma, dbeta = batchnorm_backward(dout, cache)\n\nprint('dx error: ', rel_error(dx_num, dx))\nprint('dgamma error: ', rel_error(da_num, dgamma))\nprint('dbeta error: ', rel_error(db_num, dbeta))",
"Batch Normalization: alternative backward (OPTIONAL, +3 points extra credit)\nIn class we talked about two different implementations for the sigmoid backward pass. One strategy is to write out a computation graph composed of simple operations and backprop through all intermediate values. Another strategy is to work out the derivatives on paper. For the sigmoid function, it turns out that you can derive a very simple formula for the backward pass by simplifying gradients on paper.\nSurprisingly, it turns out that you can also derive a simple expression for the batch normalization backward pass if you work out derivatives on paper and simplify. After doing so, implement the simplified batch normalization backward pass in the function batchnorm_backward_alt and compare the two implementations by running the following. Your two implementations should compute nearly identical results, but the alternative implementation should be a bit faster.\nNOTE: This part of the assignment is entirely optional, but we will reward 3 points of extra credit if you can complete it.",
"np.random.seed(231)\nN, D = 100, 500\nx = 5 * np.random.randn(N, D) + 12\ngamma = np.random.randn(D)\nbeta = np.random.randn(D)\ndout = np.random.randn(N, D)\n\nbn_param = {'mode': 'train'}\nout, cache = batchnorm_forward(x, gamma, beta, bn_param)\n\nt1 = time.time()\ndx1, dgamma1, dbeta1 = batchnorm_backward(dout, cache)\nt2 = time.time()\ndx2, dgamma2, dbeta2 = batchnorm_backward_alt(dout, cache)\nt3 = time.time()\n\nprint('dx difference: ', rel_error(dx1, dx2))\nprint('dgamma difference: ', rel_error(dgamma1, dgamma2))\nprint('dbeta difference: ', rel_error(dbeta1, dbeta2))\nprint('speedup: %.2fx' % ((t2 - t1) / (t3 - t2)))",
"Fully Connected Nets with Batch Normalization\nNow that you have a working implementation for batch normalization, go back to your FullyConnectedNet in the file cs2312n/classifiers/fc_net.py. Modify your implementation to add batch normalization.\nConcretely, when the flag use_batchnorm is True in the constructor, you should insert a batch normalization layer before each ReLU nonlinearity. The outputs from the last layer of the network should not be normalized. Once you are done, run the following to gradient-check your implementation.\nHINT: You might find it useful to define an additional helper layer similar to those in the file cs231n/layer_utils.py. If you decide to do so, do it in the file cs231n/classifiers/fc_net.py.",
"np.random.seed(231)\nN, D, H1, H2, C = 2, 15, 20, 30, 10\nX = np.random.randn(N, D)\ny = np.random.randint(C, size=(N,))\n\nfor reg in [0, 3.14]:\n print('Running check with reg = ', reg)\n model = FullyConnectedNet([H1, H2], input_dim=D, num_classes=C,\n reg=reg, weight_scale=5e-2, dtype=np.float64,\n use_batchnorm=True)\n\n loss, grads = model.loss(X, y)\n print('Initial loss: ', loss)\n\n for name in sorted(grads):\n f = lambda _: model.loss(X, y)[0]\n grad_num = eval_numerical_gradient(f, model.params[name], verbose=False, h=1e-5)\n print('%s relative error: %.2e' % (name, rel_error(grad_num, grads[name])))\n if reg == 0: print()",
"Batchnorm for deep networks\nRun the following to train a six-layer network on a subset of 1000 training examples both with and without batch normalization.",
"np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [100, 100, 100, 100, 100]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nweight_scale = 2e-2\nbn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\nmodel = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\nbn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nbn_solver.train()\n\nsolver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=True, print_every=200)\nsolver.train()",
"Run the following to visualize the results from two networks trained above. You should find that using batch normalization helps the network to converge much faster.",
"plt.subplot(3, 1, 1)\nplt.title('Training loss')\nplt.xlabel('Iteration')\n\nplt.subplot(3, 1, 2)\nplt.title('Training accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 3)\nplt.title('Validation accuracy')\nplt.xlabel('Epoch')\n\nplt.subplot(3, 1, 1)\nplt.plot(solver.loss_history, 'o', label='baseline')\nplt.plot(bn_solver.loss_history, 'o', label='batchnorm')\n\nplt.subplot(3, 1, 2)\nplt.plot(solver.train_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.train_acc_history, '-o', label='batchnorm')\n\nplt.subplot(3, 1, 3)\nplt.plot(solver.val_acc_history, '-o', label='baseline')\nplt.plot(bn_solver.val_acc_history, '-o', label='batchnorm')\n \nfor i in [1, 2, 3]:\n plt.subplot(3, 1, i)\n plt.legend(loc='upper center', ncol=4)\nplt.gcf().set_size_inches(15, 15)\nplt.show()",
"Batch normalization and initialization\nWe will now run a small experiment to study the interaction of batch normalization and weight initialization.\nThe first cell will train 8-layer networks both with and without batch normalization using different scales for weight initialization. The second layer will plot training accuracy, validation set accuracy, and training loss as a function of the weight initialization scale.",
"np.random.seed(231)\n# Try training a very deep net with batchnorm\nhidden_dims = [50, 50, 50, 50, 50, 50, 50]\n\nnum_train = 1000\nsmall_data = {\n 'X_train': data['X_train'][:num_train],\n 'y_train': data['y_train'][:num_train],\n 'X_val': data['X_val'],\n 'y_val': data['y_val'],\n}\n\nbn_solvers = {}\nsolvers = {}\nweight_scales = np.logspace(-4, 0, num=20)\nfor i, weight_scale in enumerate(weight_scales):\n print('Running weight scale %d / %d' % (i + 1, len(weight_scales)))\n bn_model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=True)\n model = FullyConnectedNet(hidden_dims, weight_scale=weight_scale, use_batchnorm=False)\n\n bn_solver = Solver(bn_model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n bn_solver.train()\n bn_solvers[weight_scale] = bn_solver\n\n solver = Solver(model, small_data,\n num_epochs=10, batch_size=50,\n update_rule='adam',\n optim_config={\n 'learning_rate': 1e-3,\n },\n verbose=False, print_every=200)\n solver.train()\n solvers[weight_scale] = solver\n\n# Plot results of weight scale experiment\nbest_train_accs, bn_best_train_accs = [], []\nbest_val_accs, bn_best_val_accs = [], []\nfinal_train_loss, bn_final_train_loss = [], []\n\nfor ws in weight_scales:\n best_train_accs.append(max(solvers[ws].train_acc_history))\n bn_best_train_accs.append(max(bn_solvers[ws].train_acc_history))\n \n best_val_accs.append(max(solvers[ws].val_acc_history))\n bn_best_val_accs.append(max(bn_solvers[ws].val_acc_history))\n \n final_train_loss.append(np.mean(solvers[ws].loss_history[-100:]))\n bn_final_train_loss.append(np.mean(bn_solvers[ws].loss_history[-100:]))\n \nplt.subplot(3, 1, 1)\nplt.title('Best val accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best val accuracy')\nplt.semilogx(weight_scales, best_val_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_val_accs, '-o', label='batchnorm')\nplt.legend(ncol=2, loc='lower right')\n\nplt.subplot(3, 1, 2)\nplt.title('Best train accuracy vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Best training accuracy')\nplt.semilogx(weight_scales, best_train_accs, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_best_train_accs, '-o', label='batchnorm')\nplt.legend()\n\nplt.subplot(3, 1, 3)\nplt.title('Final training loss vs weight initialization scale')\nplt.xlabel('Weight initialization scale')\nplt.ylabel('Final training loss')\nplt.semilogx(weight_scales, final_train_loss, '-o', label='baseline')\nplt.semilogx(weight_scales, bn_final_train_loss, '-o', label='batchnorm')\nplt.legend()\nplt.gca().set_ylim(1.0, 3.5)\n\nplt.gcf().set_size_inches(10, 15)\nplt.show()",
"Question:\nDescribe the results of this experiment, and try to give a reason why the experiment gave the results that it did.\nAnswer:"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
intel-analytics/analytics-zoo
|
docs/docs/colab-notebook/chronos/chronos_nyc_taxi_tsdataset_forecaster.ipynb
|
apache-2.0
|
[
"<a href=\"https://colab.research.google.com/github/intel-analytics/analytics-zoo/blob/master/docs/docs/colab-notebook/chronos/chronos_autots_nyc_taxi.ipynb\" target=\"_parent\"><img src=\"https://colab.research.google.com/assets/colab-badge.svg\" alt=\"Open In Colab\"/></a>\n\nCopyright 2018 Analytics Zoo Authors.",
"#@title Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#",
"Environment Preparation\nInstall Analytics Zoo\nYou can install the latest pre-release version with chronos support using pip install --pre --upgrade analytics-zoo[automl].",
"# Install latest pre-release version of Analytics Zoo \n# Installing Analytics Zoo from pip will automatically install pyspark, bigdl, and their dependencies.\n!pip install --pre --upgrade analytics-zoo[automl]\nexit() # restart the runtime to refresh installed pkg",
"Step 0: Download & prepare dataset\nWe used NYC taxi passengers dataset in Numenta Anomaly Benchmark (NAB) for demo, which contains 10320 records, each indicating the total number of taxi passengers in NYC at a corresonponding time spot.",
"# download the dataset\n!wget https://raw.githubusercontent.com/numenta/NAB/v1.0/data/realKnownCause/nyc_taxi.csv\n\n# load the dataset. The downloaded dataframe contains two columns, \"timestamp\" and \"value\".\nimport pandas as pd\ndf = pd.read_csv(\"nyc_taxi.csv\", parse_dates=[\"timestamp\"])",
"Time series forecasting using Chronos Forecaster\nForecaster Step1. Data transformation and feature engineering using Chronos TSDataset\nTSDataset is our abstract of time series dataset for data transformation and feature engineering. Here we use it to preprocess the data.",
"from zoo.chronos.data import TSDataset\nfrom sklearn.preprocessing import StandardScaler",
"Initialize train, valid and test tsdataset from raw pandas dataframe.",
"tsdata_train, tsdata_valid, tsdata_test = TSDataset.from_pandas(df, dt_col=\"timestamp\", target_col=\"value\",\n with_split=True, val_ratio=0.1, test_ratio=0.1)",
"Preprocess the datasets. Here we perform:\n- deduplicate: remove those identical data records\n- impute: fill the missing values\n- gen_dt_feature: generate feature from datetime (e.g. month, day...)\n- scale: scale each feature to standard distribution.\n- roll: sample the data with sliding window.\nFor forecasting task, we will look back 3 hours' historical data (6 records) and predict the value of next 30 miniutes (1 records).\nWe perform the same transformation processes on train, valid and test set.",
"lookback, horizon = 6, 1\n\nscaler = StandardScaler()\nfor tsdata in [tsdata_train, tsdata_valid, tsdata_test]:\n tsdata.deduplicate()\\\n .impute()\\\n .gen_dt_feature()\\\n .scale(scaler, fit=(tsdata is tsdata_train))\\\n .roll(lookback=lookback, horizon=horizon)",
"Forecaster Step 2: Time series forecasting using Chronos Forecaster\nAfter preprocessing the datasets. We can use Chronos Forecaster to handle the forecasting tasks.\nTransform TSDataset to sampled numpy ndarray and feed them to forecaster.",
"from zoo.chronos.forecaster.tcn_forecaster import TCNForecaster\n\nx, y = tsdata_train.to_numpy()\n# x.shape = (num of sample, lookback, num of input feature)\n# y.shape = (num of sample, horizon, num of output feature)\n\nforecaster = TCNForecaster(past_seq_len=lookback, # number of steps to look back\n future_seq_len=horizon, # number of steps to predict\n input_feature_num=x.shape[-1], # number of feature to use\n output_feature_num=y.shape[-1], # number of feature to predict\n seed=0)\nres = forecaster.fit((x, y), epochs=3)",
"Forecaster Step 3: Further deployment with fitted forecaster\nUse fitted forecaster to predict test data and plot the result",
"x_test, y_test = tsdata_test.to_numpy()\npred = forecaster.predict(x_test)\npred_unscale, groundtruth_unscale = tsdata_test.unscale_numpy(pred), tsdata_test.unscale_numpy(y_test)\n\nimport matplotlib.pyplot as plt\n\nplt.figure(figsize=(24,6))\nplt.plot(pred_unscale[:,:,0])\nplt.plot(groundtruth_unscale[:,:,0])\nplt.legend([\"prediction\", \"ground truth\"])",
"Save & restore the forecaster.",
"forecaster.save(\"nyc_taxi.fxt\")\nforecaster.load(\"nyc_taxi.fxt\")"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nmacri/twitter-bots-smw-2016
|
notebooks/1 - Setting Up a Twitter Bot in 5 Easy Steps.ipynb
|
mit
|
[
"Setting Up a Twitter Bot in 5 Easy Steps\n\nGet our project directory organized\nMake a Twitter app\nMake a Twitter account\nOAuth secret handshake\nStore our secrets somewhere safe\n\n1. First things first, let's get our project directory organized\nif you're running this locally check out README.md otherwise skip to the next section...",
"# cd into your project directory in my case this is, but yours may be different\n%cd ~/twitter-bots-smw-2016/\n\n# install dependencies if you haven't already :)\n!pip install -r requirements.txt\n\n# import your libraries \nimport twitter\nimport json\nimport webbrowser\nfrom rauth import OAuth1Service",
"2. Good, that went smoothly, now let's go deal with twitter\nTo run our bot we'll need to use a protocol called OAuth which sounds a little bit daunting, but really it's just a kind of secret handshake that we agree on with twitter so they know that we're cool.\n \nFirst thing you'll need to do is make an \"app\". It's pretty straightforward process that you can go through here https://apps.twitter.com/. \nThis is what my settings looked like:\n\nIn the end you'll get two tokens (YOUR_APP_KEY and YOUR_APP_SECRET) that you should store somewhere safe. I'm storing mine in a file called secrets.json there is an example (secrets_example.json) in the project root directory that you can use as a template. It looks like this:",
"f = open('secrets_example.json','rb')\nprint \"\".join(f.readlines())\nf.close()",
"3. Make your Bot's Account!\nTwitter's onboarding process isn't really optimized for the bot use-case, but once you get to the welcome screen you'll be logged in and ready for the next step (iow, you can keep the \"all the stuff you love\" to yourself).\n<br>\n<br>\n<div class=\"container\" style=\"width: 80%;\">\n <div class=\"theme-table-image col-sm-5\">\n <img src=\"http://cl.ly/2l2t380q393G/Image%202016-02-20%20at%209.06.21%20PM.png\">\n </div>\n <div class=\"col-sm-2\">\n </div>\n <div class=\"theme-table-image col-sm-5\">\n <img src=\"http://cl.ly/050O2M362Q1B/Image%202016-02-20%20at%209.07.35%20PM.png\">\n </div>\n</div>\n\n4. Final OAuth step: Secret handshake!\nLoad in your fresh new file of secrets (secrets.json)",
"f = open('secrets.json','rb')\nsecrets = json.load(f)\nf.close()",
"Use a library that knows how to implement OAuth1 (trust me, it's not fun to figure out by scratch). I'm using rauth but there are tons more out there.",
"tw_oauth_service = OAuth1Service(\n consumer_key=secrets['twitter']['app']['consumer_key'],\n consumer_secret=secrets['twitter']['app']['consumer_secret'],\n name='twitter',\n access_token_url='https://api.twitter.com/oauth/access_token',\n authorize_url='https://api.twitter.com/oauth/authorize',\n request_token_url='https://api.twitter.com/oauth/request_token',\n base_url='https://api.twitter.com/1.1/')\n\nrequest_token, request_token_secret = tw_oauth_service.get_request_token()\nurl = tw_oauth_service.get_authorize_url(request_token=request_token)\nwebbrowser.open_new(url)",
"The cells above will open a permissions dialog for you in a new tab:\n\nIf you're cool w/ it, authorize your app against your bot user you will then be redirected to the callback url you specified when you set up your app. I get redirected to something that looks like this\nhttp://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAkfBmbVABUwFD6pI&oauth_verifier=pPktmz2xoFtjysR4DHSlFKcdahuUG\nIt will like an error, but it's not!, all you need to do is parse out two parameters from the url they bounce you back to: the oauth_token and the oauth_verifier. \nOnly one more step to go. You are so brave!",
"# Once you go through the flow and land on an error page http://127.0.0.1:9999 something\n# enter your token and verifier below like so. The \n# The example below (which won't work until you update the parameters) is from the following url: \n# http://127.0.0.1:9999/?oauth_token=JvutuAAAAAAAkfBmbVABUwFD6pI&oauth_verifier=pPktmz2xoFtjysR4DHSlFKcdahuUGciE\n\noauth_token='JvutuAAAAAAAkfBmbVABUwFD6pI'\noauth_verifier='pPktmz2xoFtjysR4DHSlFKcdahuUGciE'\n\nsession = tw_oauth_service.get_auth_session(request_token,\n request_token_secret,\n method='POST',\n data={'oauth_verifier': oauth_verifier})",
"5. Store your secrets somewhere safe",
"# Copy this guy into your secrets file \n\n# {\n# \"user_id\": \"701177805317472256\",\n# \"screen_name\": \"SmwKanye\",\n# HERE ----> \"token_key\": \"YOUR_TOKEN_KEY\",\n# \"token_secret\": \"YOUR_TOKEN_SECRET\"\n# },\nsession.access_token\n\n# Copy this guy into your secrets file \n\n# {\n# \"user_id\": \"701177805317472256\",\n# \"screen_name\": \"SmwKanye\",\n# \"token_key\": \"YOUR_TOKEN_KEY\",\n# HERE ----> \"token_secret\": \"YOUR_TOKEN_SECRET\"\n# },\nsession.access_token_secret",
"Awesome, now we have our user access tokens and secret. Store them in secrets.json and test below to see if they work. You don't really need 3 test accounts, so if you don't want to repeat the process just keep \"production\".\nFinally, test to see that your secrets are good...",
"f = open('secrets.json','rb')\nsecrets = json.load(f)\nf.close()\n\ntw_api_client = twitter.Api(consumer_key = secrets['twitter']['app']['consumer_key'],\n consumer_secret = secrets['twitter']['app']['consumer_secret'],\n access_token_key = secrets['twitter']['accounts']['production']['token_key'],\n access_token_secret = secrets['twitter']['accounts']['production']['token_secret'],\n )\n\ntw_api_client.GetUser(screen_name='SmwKanye').AsDict()\n\ntw_api_client.",
""
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dvklopfenstein/PrincetonAlgorithms
|
notebooks/ElemSymbolTbls.ipynb
|
gpl-2.0
|
[
"Elementary Symbol Tables\nPython Code\n\nSymbol Table API \n A Symbol Table is a collection of key-value pairs, where the key is the Symbol. \n 1.1) Date.py is an example of an user-created immutable type which can be used as a key \n 1.2) Client for ST.py: FrequencyCounter.py \nElementary Symbol Table Implementations \n 2.1) SequentialSearchST.py, an unordered linked-list \n 2.2) BinarySearchST.py, ordered array. Fast lookup (slow insert) \nOrdered Operations: get, put, delete, size, min, max, floor, ceiling, rank, etc. \n 3.1) ST.py \nBinary Search Trees A binary tree in symmetric order \n A classic data structure that enables us to provide efficient \n implementations of Symbol Table algorithms \n 4.1) BST.py \nOrdered Operations in BSTs \nDeletion in BSTs \n\nTable of Contents for Examples\n\nEX1 Order of \"put\"s determine tree shape \n\nExamples\nEX1 Order of \"put\"s determine tree shape\n14:30 There are many different BSTs that correspond to the same set of keys. \nThe number of compares depends on the order in which the keys come in.",
"# Setup for running examples\nimport sys\nimport os\nsys.path.insert(0, '{GIT}/PrincetonAlgorithms/py'.format(GIT=os.environ['GIT']))\nfrom AlgsSedgewickWayne.BST import BST\n\n# Function to convert keys to key-value pairs where\n# 1. the key is the letter and\n# 2. the value is the index into the key list\nget_kv = lambda keys: [(k, v) for v, k in enumerate(keys)]",
"Tree shape: Best case",
"# All of these will create the same best case tree shape \n# Each example has the same keys, but different values\nBST(get_kv(['H', 'C', 'S', 'A', 'E', 'R', 'X'])).wr_png(\"BST_bc0.png\")\nBST(get_kv(['H', 'S', 'X', 'R', 'C', 'E', 'A'])).wr_png(\"BST_bc1.png\")\nBST(get_kv(['H', 'C', 'A', 'E', 'S', 'R', 'X'])).wr_png(\"BST_bc2.png\")",
"Best Case Tree Shape\n\nTree shape: Worst case",
"# These will create worst case tree shapes\nBST(get_kv(['A', 'C', 'E', 'H', 'R', 'S', 'X'])).wr_png(\"BST_wc_fwd.png\")\nBST(get_kv(['X', 'S', 'R', 'H', 'E', 'C', 'A'])).wr_png(\"BST_wc_rev.png\")",
"Worst Case Tree Shape\n|Incrementing Order: A C E H R S X|Decrementing Order: X S R H E C A|\n|---|---|\n|||"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
melissawm/oceanobiopython
|
Notebooks/Aula_3.ipynb
|
gpl-3.0
|
[
"Leitura e Escrita em Arquivos\nNo Python, se quisermos abrir um arquivo de texto puro para leitura, temos várias construções possíveis. Porém, a mais utilizada e que é, em geral, mais simples, é a seguinte:",
"import os\ndiretorio = os.path.join(os.getcwd(), \"..\",\"exemplos/exemplo_2\")\nwith open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n numlinhas = 0\n for line in arquivo:\n numlinhas += 1\n \nprint(numlinhas)",
"Atenção: Os exemplos foram formulados no Linux; se você estiver usando Windows ou MacOS, talvez seja necessário alterar os separadores de diretórios/arquivos para uma contrabarra, por exemplo.\nO bloco with open(...), quando executado, abre o arquivo com a opção escohida (no caso acima, 'r', pois queremos apenas ler o arquivo), e automaticamente fecha o arquivo quando é concluido. \nObserve que as linhas do arquivo podem ser acessadas diretamente, uma a uma, através de um bloco for:",
"with open(os.path.join(diretorio,\"file1.txt\"), \"r\") as meuarquivo:\n for linha in meuarquivo:\n print(linha)",
"Exemplo\nTentar encontrar uma string específica (no nosso caso, \"sf\") dentro do arquivo file1.txt",
"string = \"sf\"\nb = []\nwith open(os.path.join(diretorio,\"file1.txt\"),\"r\") as arquivo:\n for line in arquivo:\n if string in line:\n b.append(line.rstrip(\"\\n\"))",
"Neste momento, a lista b contém todas as linhas do arquivo que continham a string desejada:",
"print(b)",
"Para alguns casos específicos, pode ser interessante carregar um arquivo completo na memória. Para isso, usamos",
"with open(os.path.join(diretorio,\"file1.txt\"),\"r\") as arquivo:\n conteudo = arquivo.read()\n \nprint(conteudo)",
"Também podemos usar o comando readline:",
"with open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n print(arquivo.readline())",
"Sem argumentos, ele lê a próxima linha do arquivo; isto quer dizer que se ele é executado diversas vezes em sequência, com o arquivo aberto, ele lê a cada vez que é executado uma das linhas do arquivo.",
"with open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n for i in range(0,5):\n print(arquivo.readline())",
"Exemplo\nLer a 10a linha do arquivo, de três maneiras diferentes:",
"with open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n for i in range(0,10):\n linha = arquivo.readline()\n if i == 9:\n print(linha)\n\nwith open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n i = 0\n for linha in arquivo:\n if i == 9:\n print(linha)\n i = i + 1\n\nwith open(os.path.join(diretorio,\"file1.txt\"), \"r\") as arquivo:\n conteudo = list(arquivo.read().split(\"\\n\"))\n \nprint(conteudo[9])",
"Exemplo:\nLer a primeira linha de cada arquivo de um diretorio e escrever o resultado em outro arquivo.\nPrimeiro, usamos uma list comprehension para obtermos uma lista dos arquivos no diretorio em que estamos interessados, mas queremos excluir o arquivo teste.txt e queremos que os arquivos estejam listados com seu caminho completo.",
"print([os.path.join(diretorio,item) for item in os.listdir(diretorio) if item != \"teste.txt\"])\n\nlista = [os.path.join(diretorio,item) for item in os.listdir(diretorio) if item != \"teste.txt\"]\n\nlista",
"Agora, vamos ler apenas a primeira linha de cada arquivo:",
"for item in lista:\n with open(item,\"r\") as arquivo:\n print(arquivo.readline())\n\nwith open(\"resumo.txt\", \"w\") as arquivo_saida:\n for item in lista:\n with open(item,\"r\") as arquivo:\n arquivo_saida.write(arquivo.readline()+\"\\n\")",
"Agora, vamos desfazer o exemplo:",
"os.remove(\"resumo.txt\")",
"Alguns links importantes\nDocumentação sobre funções built-in: https://docs.python.org/3/library/functions.html\nDocumentação oficial: https://docs.python.org/3\n(Fim da Aula 3, ministrada em 20/09/2016)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
garciparedes/python-examples
|
numerical/math/stats/stochastic_processes/entrega-01.ipynb
|
mpl-2.0
|
[
"import numpy as np\n\ndef stationary_distribution(P: np.array) -> np.array:\n A = P - np.eye(len(P))\n A[:, (len(P) - 1)] = np.ones([len(P)])\n \n p_stationary = np.linalg.pinv(A)[len(P) - 1, :]\n return p_stationary",
"Exercise: Ruiz Family\nLa familia Ruiz recibe el periódico todas las mañanas, y lo coloca en el revistero después de leerlo. Cada tarde, con probabilidad 0.3, alguien coge todos los periódicos del revistero y los lleva al contenedor de papel. Por otro lado, si hay al menos 5 periódicos en el montón, el señor Ruiz los lleva al contenedor.\na) Construye una cadena de Markov que cuente el número de periódicos que hay en el revistero cada noche. ¿Cómo son los estados?",
"transition_ruiz = np.array([[0.0, 1.0, 0.0, 0.0, 0.0],\n [0.3, 0.0, 0.7, 0.0, 0.0],\n [0.3, 0.0, 0.0, 0.7, 0.0],\n [0.3, 0.0, 0.0, 0.0, 0.7],\n [1.0, 0.0, 0.0, 0.0, 0.0]])",
"b) Si el domingo por la noche está vacío el revistero, ¿Cuál es la probabilidad de que haya 1 periódico el miércoles por la noche?",
"np.linalg.matrix_power(transition_ruiz, 4)",
"c) Calcula la probabilidad, a largo plazo, de que el revistero esté vacío una noche cualquiera.",
"stationary_distribution(transition_ruiz)",
"Exercise 1.36",
"transition_36 = np.array([[ 0, 0, 1],\n [0.05, 0.95, 0],\n [ 0, 0.02, 0.98]])",
"Exercise 1.36 a)",
"stationary_distribution(transition_36)",
"Exercise 1.36 b)",
"1 / stationary_distribution(transition_36)",
"Exercise 1.48",
"n = 12\ntransition_48 = np.zeros([n, n])\nfor i in range(n):\n transition_48[i, [(i - 1) % n, (i + 1) % n]] = [0.5] * 2\ntransition_48",
"Exercise 1.48 a)\nLa distribución estacionaria será: \n$$\\pi_{i} = 1 / 12 \\ \\forall i \\in {1, 2, ..., 12}$$\nPor cumplir la matriz de transición la propiedad de ser doblemente estocástica (tanto filas como columnas suman la unidad). Por tanto, dado que se pide el número medio de pasos:\n$$ E_i(T_i) = \\frac{1}{\\pi_i} = \\frac{1}{1 / 12} = 12 \\ \\forall i \\in {1, 2, ..., 12}$$\nEl mismo resultado se obtiene al ejecutar las operaciones:",
"1 / stationary_distribution(transition_48)",
"Exercise 1.48 b)",
"import numpy as np\n\nn = 100000\ny = 0\nd = 12\nfor n_temp in range(1, n + 1):\n visited = set()\n k = np.random.choice(range(d))\n \n position = k\n s = str(position % d) + ' '\n visited.add(position % d)\n \n position += np.random.choice([-1, 1])\n s += str(position % d) + ' '\n visited.add(position % d)\n \n while(position % d != k):\n position += np.random.choice([-1, 1])\n s += str(position % d) + ' '\n visited.add(position % d)\n \n y += (len(visited) == d)\n \n # print(y, s, sep=', ')\n if n_temp % 10000 == 0:\n print(y / n_temp)\nprint(y / n)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
VenkatRepaka/deep-learning
|
embeddings/Skip-Gram_word2vec.ipynb
|
mit
|
[
"Skip-gram word2vec\nIn this notebook, I'll lead you through using TensorFlow to implement the word2vec algorithm using the skip-gram architecture. By implementing this, you'll learn about embedding words for use in natural language processing. This will come in handy when dealing with things like machine translation.\nReadings\nHere are the resources I used to build this notebook. I suggest reading these either beforehand or while you're working on this material.\n\nA really good conceptual overview of word2vec from Chris McCormick \nFirst word2vec paper from Mikolov et al.\nNIPS paper with improvements for word2vec also from Mikolov et al.\nAn implementation of word2vec from Thushan Ganegedara\nTensorFlow word2vec tutorial\n\nWord embeddings\nWhen you're dealing with words in text, you end up with tens of thousands of classes to predict, one for each word. Trying to one-hot encode these words is massively inefficient, you'll have one element set to 1 and the other 50,000 set to 0. The matrix multiplication going into the first hidden layer will have almost all of the resulting values be zero. This a huge waste of computation. \n\nTo solve this problem and greatly increase the efficiency of our networks, we use what are called embeddings. Embeddings are just a fully connected layer like you've seen before. We call this layer the embedding layer and the weights are embedding weights. We skip the multiplication into the embedding layer by instead directly grabbing the hidden layer values from the weight matrix. We can do this because the multiplication of a one-hot encoded vector with a matrix returns the row of the matrix corresponding the index of the \"on\" input unit.\n\nInstead of doing the matrix multiplication, we use the weight matrix as a lookup table. We encode the words as integers, for example \"heart\" is encoded as 958, \"mind\" as 18094. Then to get hidden layer values for \"heart\", you just take the 958th row of the embedding matrix. This process is called an embedding lookup and the number of hidden units is the embedding dimension.\n<img src='assets/tokenize_lookup.png' width=500>\nThere is nothing magical going on here. The embedding lookup table is just a weight matrix. The embedding layer is just a hidden layer. The lookup is just a shortcut for the matrix multiplication. The lookup table is trained just like any weight matrix as well.\nEmbeddings aren't only used for words of course. You can use them for any model where you have a massive number of classes. A particular type of model called Word2Vec uses the embedding layer to find vector representations of words that contain semantic meaning.\nWord2Vec\nThe word2vec algorithm finds much more efficient representations by finding vectors that represent the words. These vectors also contain semantic information about the words. Words that show up in similar contexts, such as \"black\", \"white\", and \"red\" will have vectors near each other. There are two architectures for implementing word2vec, CBOW (Continuous Bag-Of-Words) and Skip-gram.\n<img src=\"assets/word2vec_architectures.png\" width=\"500\">\nIn this implementation, we'll be using the skip-gram architecture because it performs better than CBOW. Here, we pass in a word and try to predict the words surrounding it in the text. In this way, we can train the network to learn representations for words that show up in similar contexts.\nFirst up, importing packages.",
"import time\n\nimport numpy as np\nimport tensorflow as tf\n\nimport utils",
"Load the text8 dataset, a file of cleaned up Wikipedia articles from Matt Mahoney. The next cell will download the data set to the data folder. Then you can extract it and delete the archive file to save storage space.",
"from urllib.request import urlretrieve\nfrom os.path import isfile, isdir\nfrom tqdm import tqdm\nimport zipfile\n\ndataset_folder_path = 'data'\ndataset_filename = 'text8.zip'\ndataset_name = 'Text8 Dataset'\n\nclass DLProgress(tqdm):\n last_block = 0\n\n def hook(self, block_num=1, block_size=1, total_size=None):\n self.total = total_size\n self.update((block_num - self.last_block) * block_size)\n self.last_block = block_num\n\nif not isfile(dataset_filename):\n with DLProgress(unit='B', unit_scale=True, miniters=1, desc=dataset_name) as pbar:\n urlretrieve(\n 'http://mattmahoney.net/dc/text8.zip',\n dataset_filename,\n pbar.hook)\n\nif not isdir(dataset_folder_path):\n with zipfile.ZipFile(dataset_filename) as zip_ref:\n zip_ref.extractall(dataset_folder_path)\n \nwith open('data/text8') as f:\n text = f.read()",
"Preprocessing\nHere I'm fixing up the text to make training easier. This comes from the utils module I wrote. The preprocess function coverts any punctuation into tokens, so a period is changed to <PERIOD>. In this data set, there aren't any periods, but it will help in other NLP problems. I'm also removing all words that show up five or fewer times in the dataset. This will greatly reduce issues due to noise in the data and improve the quality of the vector representations. If you want to write your own functions for this stuff, go for it.",
"words = utils.preprocess(text)\nprint(words[:30])\n\nprint(\"Total words: {}\".format(len(words)))\nprint(\"Unique words: {}\".format(len(set(words))))",
"And here I'm creating dictionaries to covert words to integers and backwards, integers to words. The integers are assigned in descending frequency order, so the most frequent word (\"the\") is given the integer 0 and the next most frequent is 1 and so on. The words are converted to integers and stored in the list int_words.",
"vocab_to_int, int_to_vocab = utils.create_lookup_tables(words)\nint_words = [vocab_to_int[word] for word in words]",
"Subsampling\nWords that show up often such as \"the\", \"of\", and \"for\" don't provide much context to the nearby words. If we discard some of them, we can remove some of the noise from our data and in return get faster training and better representations. This process is called subsampling by Mikolov. For each word $w_i$ in the training set, we'll discard it with probability given by \n$$ P(w_i) = 1 - \\sqrt{\\frac{t}{f(w_i)}} $$\nwhere $t$ is a threshold parameter and $f(w_i)$ is the frequency of word $w_i$ in the total dataset.\nI'm going to leave this up to you as an exercise. This is more of a programming challenge, than about deep learning specifically. But, being able to prepare your data for your network is an important skill to have. Check out my solution to see how I did it.\n\nExercise: Implement subsampling for the words in int_words. That is, go through int_words and discard each word given the probablility $P(w_i)$ shown above. Note that $P(w_i)$ is the probability that a word is discarded. Assign the subsampled data to train_words.",
"## Your code here\nfrom collections import Counter\nimport random\n\n#threshold = 1e-5\n#word_count = Counter(int_words)\n#total_count_words = len(int_words)\n#frequency = {word: count/total_count_words for word,count in word_count.items()}\n#p_drop = {word: 1 - np.sqrt(threshold/frequency[word]) for word, count in word_count.items()}\n \n#train_words = [word for word in int_words if p_drop[word] < random.random()]# The final subsampled word list\n\nthreshold = 1e-5\nword_counts = Counter(int_words)\ntotal_count = len(int_words)\nfreqs = {word: count/total_count for word, count in word_counts.items()}\np_drop = {word: 1 - np.sqrt(threshold/freqs[word]) for word in word_counts}\ntrain_words = [word for word in int_words if random.random() < (1 - p_drop[word])]",
"Making batches\nNow that our data is in good shape, we need to get it into the proper form to pass it into our network. With the skip-gram architecture, for each word in the text, we want to grab all the words in a window around that word, with size $C$. \nFrom Mikolov et al.: \n\"Since the more distant words are usually less related to the current word than those close to it, we give less weight to the distant words by sampling less from those words in our training examples... If we choose $C = 5$, for each training word we will select randomly a number $R$ in range $< 1; C >$, and then use $R$ words from history and $R$ words from the future of the current word as correct labels.\"\n\nExercise: Implement a function get_target that receives a list of words, an index, and a window size, then returns a list of words in the window around the index. Make sure to use the algorithm described above, where you choose a random number of words from the window.",
"def get_target(words, idx, window_size=5):\n ''' Get a list of words in a window around an index. '''\n \n # Your code here\n #start_index = 0\n #end_index = 0;\n #last_index = len(words)-1\n #to_pick = random.randint(1,window_size+1)\n #if (idx-window_size) < 0:\n # start_index = 0\n #if (idx+end_index) > last_index:\n # end_index = last_index\n #return words[start_index:end_index]\n\n R = np.random.randint(1, window_size+1)\n start = idx - R if (idx - R) > 0 else 0\n stop = idx + R\n target_words = set(words[start:idx] + words[idx+1:stop+1])\n \n return list(target_words)",
"Here's a function that returns batches for our network. The idea is that it grabs batch_size words from a words list. Then for each of those words, it gets the target words in the window. I haven't found a way to pass in a random number of target words and get it to work with the architecture, so I make one row per input-target pair. This is a generator function by the way, helps save memory.",
"def get_batches(words, batch_size, window_size=5):\n ''' Create a generator of word batches as a tuple (inputs, targets) '''\n \n n_batches = len(words)//batch_size\n print('no of batches' + str(n_batches))\n \n # only full batches\n words = words[:n_batches*batch_size]\n \n for idx in range(0, len(words), batch_size):\n x, y = [], []\n batch = words[idx:idx+batch_size]\n for ii in range(len(batch)):\n batch_x = batch[ii]\n batch_y = get_target(batch, ii, window_size)\n y.extend(batch_y)\n x.extend([batch_x]*len(batch_y))\n yield x, y\n ",
"Building the graph\nFrom Chris McCormick's blog, we can see the general structure of our network.\n\nThe input words are passed in as integers. This will go into a hidden layer of linear units, then into a softmax layer. We'll use the softmax layer to make a prediction like normal.\nThe idea here is to train the hidden layer weight matrix to find efficient representations for our words. We can discard the softmax layer becuase we don't really care about making predictions with this network. We just want the embedding matrix so we can use it in other networks we build from the dataset.\nI'm going to have you build the graph in stages now. First off, creating the inputs and labels placeholders like normal.\n\nExercise: Assign inputs and labels using tf.placeholder. We're going to be passing in integers, so set the data types to tf.int32. The batches we're passing in will have varying sizes, so set the batch sizes to [None]. To make things work later, you'll need to set the second dimension of labels to None or 1.",
"train_graph = tf.Graph()\nwith train_graph.as_default():\n inputs = tf.placeholder(tf.int32, [None], name='inputs')\n labels = tf.placeholder(tf.int32, [None, None], name='labels')",
"Embedding\nThe embedding matrix has a size of the number of words by the number of units in the hidden layer. So, if you have 10,000 words and 300 hidden units, the matrix will have size $10,000 \\times 300$. Remember that we're using tokenized data for our inputs, usually as integers, where the number of tokens is the number of words in our vocabulary.\n\nExercise: Tensorflow provides a convenient function tf.nn.embedding_lookup that does this lookup for us. You pass in the embedding matrix and a tensor of integers, then it returns rows in the matrix corresponding to those integers. Below, set the number of embedding features you'll use (200 is a good start), create the embedding matrix variable, and use tf.nn.embedding_lookup to get the embedding tensors. For the embedding matrix, I suggest you initialize it with a uniform random numbers between -1 and 1 using tf.random_uniform.",
"n_vocab = len(int_to_vocab)\nn_embedding = 200 # Number of embedding features \nwith train_graph.as_default():\n embedding = tf.Variable(tf.random_uniform((n_vocab, n_embedding), minval=-1, maxval=1))# create embedding weight matrix here\n embed = tf.nn.embedding_lookup(embedding, ids=inputs) # use tf.nn.embedding_lookup to get the hidden layer output",
"Negative sampling\nFor every example we give the network, we train it using the output from the softmax layer. That means for each input, we're making very small changes to millions of weights even though we only have one true example. This makes training the network very inefficient. We can approximate the loss from the softmax layer by only updating a small subset of all the weights at once. We'll update the weights for the correct label, but only a small number of incorrect labels. This is called \"negative sampling\". Tensorflow has a convenient function to do this, tf.nn.sampled_softmax_loss.\n\nExercise: Below, create weights and biases for the softmax layer. Then, use tf.nn.sampled_softmax_loss to calculate the loss. Be sure to read the documentation to figure out how it works.",
"# Number of negative labels to sample\nn_sampled = 100\nwith train_graph.as_default():\n softmax_w = tf.Variable(tf.truncated_normal((n_vocab, n_embedding), stddev=0.1)) # create softmax weight matrix here\n softmax_b = tf.Variable(tf.zeros(n_vocab))# create softmax biases here\n \n # Calculate the loss using negative sampling\n loss = tf.nn.sampled_softmax_loss(softmax_w, softmax_b, labels, embed, n_sampled, n_vocab)\n \n cost = tf.reduce_mean(loss)\n optimizer = tf.train.AdamOptimizer().minimize(cost)",
"Validation\nThis code is from Thushan Ganegedara's implementation. Here we're going to choose a few common words and few uncommon words. Then, we'll print out the closest words to them. It's a nice way to check that our embedding table is grouping together words with similar semantic meanings.",
"with train_graph.as_default():\n ## From Thushan Ganegedara's implementation\n valid_size = 16 # Random set of words to evaluate similarity on.\n valid_window = 100\n # pick 8 samples from (0,100) and (1000,1100) each ranges. lower id implies more frequent \n valid_examples = np.array(random.sample(range(valid_window), valid_size//2))\n valid_examples = np.append(valid_examples, \n random.sample(range(1000,1000+valid_window), valid_size//2))\n\n valid_dataset = tf.constant(valid_examples, dtype=tf.int32)\n \n # We use the cosine distance:\n norm = tf.sqrt(tf.reduce_sum(tf.square(embedding), 1, keep_dims=True))\n normalized_embedding = embedding / norm\n valid_embedding = tf.nn.embedding_lookup(normalized_embedding, valid_dataset)\n similarity = tf.matmul(valid_embedding, tf.transpose(normalized_embedding))\n\n# If the checkpoints directory doesn't exist:\n!mkdir checkpoints",
"Training\nBelow is the code to train the network. Every 100 batches it reports the training loss. Every 1000 batches, it'll print out the validation words.",
"epochs = 10\nbatch_size = 1000\nwindow_size = 10\n\nwith train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n iteration = 1\n loss = 0\n sess.run(tf.global_variables_initializer())\n\n for e in range(1, epochs+1):\n batches = get_batches(train_words, batch_size, window_size)\n start = time.time()\n for x, y in batches:\n \n feed = {inputs: x,\n labels: np.array(y)[:, None]}\n train_loss, _ = sess.run([cost, optimizer], feed_dict=feed)\n \n loss += train_loss\n \n if iteration % 100 == 0: \n end = time.time()\n print(\"Epoch {}/{}\".format(e, epochs),\n \"Iteration: {}\".format(iteration),\n \"Avg. Training loss: {:.4f}\".format(loss/100),\n \"{:.4f} sec/batch\".format((end-start)/100))\n loss = 0\n start = time.time()\n \n if iteration % 1000 == 0:\n ## From Thushan Ganegedara's implementation\n # note that this is expensive (~20% slowdown if computed every 500 steps)\n sim = similarity.eval()\n for i in range(valid_size):\n valid_word = int_to_vocab[valid_examples[i]]\n top_k = 8 # number of nearest neighbors\n nearest = (-sim[i, :]).argsort()[1:top_k+1]\n log = 'Nearest to %s:' % valid_word\n for k in range(top_k):\n close_word = int_to_vocab[nearest[k]]\n log = '%s %s,' % (log, close_word)\n print(log)\n \n iteration += 1\n sess.run(normalized_embedding)\n save_path = saver.save(sess, \"checkpoints/text8.ckpt\")\n print('normalizing embedding')\n embed_mat = sess.run(normalized_embedding)",
"Restore the trained network if you need to:",
"with train_graph.as_default():\n saver = tf.train.Saver()\n\nwith tf.Session(graph=train_graph) as sess:\n saver.restore(sess, tf.train.latest_checkpoint('checkpoints'))\n embed_mat = sess.run(embedding)",
"Visualizing the word vectors\nBelow we'll use T-SNE to visualize how our high-dimensional word vectors cluster together. T-SNE is used to project these vectors into two dimensions while preserving local stucture. Check out this post from Christopher Olah to learn more about T-SNE and other ways to visualize high-dimensional data.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport matplotlib.pyplot as plt\nfrom sklearn.manifold import TSNE\n\nviz_words = 500\ntsne = TSNE()\nembed_tsne = tsne.fit_transform(embed_mat[:viz_words, :])\n\nfig, ax = plt.subplots(figsize=(14, 14))\nfor idx in range(viz_words):\n plt.scatter(*embed_tsne[idx, :], color='steelblue')\n plt.annotate(int_to_vocab[idx], (embed_tsne[idx, 0], embed_tsne[idx, 1]), alpha=0.7)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ZhukovGreen/UMLND
|
boston_housing/boston_housing.ipynb
|
gpl-3.0
|
[
"Machine Learning Engineer Nanodegree\nModel Evaluation & Validation\nProject: Predicting Boston Housing Prices\nWelcome to the first project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and you will need to implement additional functionality to successfully complete this project. You will not need to modify the included code beyond what is requested. Sections that begin with 'Implementation' in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a 'TODO' statement. Please be sure to read the instructions carefully!\nIn addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a 'Question X' header. Carefully read each question and provide thorough answers in the following text boxes that begin with 'Answer:'. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. \n\nNote: Code and Markdown cells can be executed using the Shift + Enter keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode.\n\nGetting Started\nIn this project, you will evaluate the performance and predictive power of a model that has been trained and tested on data collected from homes in suburbs of Boston, Massachusetts. A model trained on this data that is seen as a good fit could then be used to make certain predictions about a home — in particular, its monetary value. This model would prove to be invaluable for someone like a real estate agent who could make use of such information on a daily basis.\nThe dataset for this project originates from the UCI Machine Learning Repository. The Boston housing data was collected in 1978 and each of the 506 entries represent aggregated data about 14 features for homes from various suburbs in Boston, Massachusetts. For the purposes of this project, the following preprocessing steps have been made to the dataset:\n- 16 data points have an 'MEDV' value of 50.0. These data points likely contain missing or censored values and have been removed.\n- 1 data point has an 'RM' value of 8.78. This data point can be considered an outlier and has been removed.\n- The features 'RM', 'LSTAT', 'PTRATIO', and 'MEDV' are essential. The remaining non-relevant features have been excluded.\n- The feature 'MEDV' has been multiplicatively scaled to account for 35 years of market inflation.\nRun the code cell below to load the Boston housing dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.",
"# Import libraries necessary for this project\nimport numpy as np\nimport pandas as pd\nfrom sklearn.cross_validation import ShuffleSplit\nimport matplotlib.pyplot as plt\n\n# Import supplementary visualizations code visuals.py\nimport visuals as vs\n\n# Load the Boston housing dataset\ndata = pd.read_csv('housing.csv')\nprices = data['MEDV']\nfeatures = data.drop('MEDV', axis=1)\n\n# Success\nprint \"Boston housing dataset has {} data points with {} variables each.\".format(*data.shape)",
"Data Exploration\nIn this first section of this project, you will make a cursory investigation about the Boston housing data and provide your observations. Familiarizing yourself with the data through an explorative process is a fundamental practice to help you better understand and justify your results.\nSince the main goal of this project is to construct a working model which has the capability of predicting the value of houses, we will need to separate the dataset into features and the target variable. The features, 'RM', 'LSTAT', and 'PTRATIO', give us quantitative information about each data point. The target variable, 'MEDV', will be the variable we seek to predict. These are stored in features and prices, respectively.\nImplementation: Calculate Statistics\nFor your very first coding implementation, you will calculate descriptive statistics about the Boston housing prices. Since numpy has already been imported for you, use this library to perform the necessary calculations. These statistics will be extremely important later on to analyze various prediction results from the constructed model.\nIn the code cell below, you will need to implement the following:\n- Calculate the minimum, maximum, mean, median, and standard deviation of 'MEDV', which is stored in prices.\n - Store each calculation in their respective variable.",
"# TODO: Minimum price of the data\nminimum_price = np.min(prices)\n\n# TODO: Maximum price of the data\nmaximum_price = np.max(prices)\n\n# TODO: Mean price of the data\nmean_price = np.mean(prices)\n\n# TODO: Median price of the data\nmedian_price = np.median(prices)\n\n# TODO: Standard deviation of prices of the data\nstd_price = np.std(prices)\n\n# Show the calculated statistics\nprint \"Statistics for Boston housing dataset:\\n\"\nprint \"Minimum price: ${:,.2f}\".format(minimum_price)\nprint \"Maximum price: ${:,.2f}\".format(maximum_price)\nprint \"Mean price: ${:,.2f}\".format(mean_price)\nprint \"Median price ${:,.2f}\".format(median_price)\nprint \"Standard deviation of prices: ${:,.2f}\".format(std_price)",
"Question 1 - Feature Observation\nAs a reminder, we are using three features from the Boston housing dataset: 'RM', 'LSTAT', and 'PTRATIO'. For each data point (neighborhood):\n- 'RM' is the average number of rooms among homes in the neighborhood.\n- 'LSTAT' is the percentage of homeowners in the neighborhood considered \"lower class\" (working poor).\n- 'PTRATIO' is the ratio of students to teachers in primary and secondary schools in the neighborhood.\nUsing your intuition, for each of the three features above, do you think that an increase in the value of that feature would lead to an increase in the value of 'MEDV' or a decrease in the value of 'MEDV'? Justify your answer for each.\nHint: Would you expect a home that has an 'RM' value of 6 be worth more or less than a home that has an 'RM' value of 7?",
"fig = plt.figure()\nfor plt_num, feature in enumerate(features):\n graphs = fig.add_subplot(2, 2, plt_num + 1)\n graphs.scatter(features[feature], prices)\n lin_reg = np.poly1d(np.polyfit(features[feature], prices, deg=1))\n lin_x = np.linspace(features[feature].min(), features[feature].max(), 2)\n lin_y = lin_reg(lin_x)\n graphs.plot(lin_x, lin_y, color='r')\n graphs.set_title(feature)",
"Answer: I see here only RM parameter increasing will lead to increase of MEDV. Increasing of the rest features will be likely degradate the MEDV.\nRM positively correlated with the price because the room number should be proportional to the dwelling area. Dwelling living area positively correlates with its price.\nLSTAT - rich people don't like to live in the area with poor people. So they're willing to pay more for the dwelling with lower LSTAT\nPTRATIO - people wants more attention to their kids, so lower amount of pupils per a teacher is attractive, but not always affordable. Since there a demand to such schools, then the price of dwellings should rise.\n\nDeveloping a Model\nIn this second section of the project, you will develop the tools and techniques necessary for a model to make a prediction. Being able to make accurate evaluations of each model's performance through the use of these tools and techniques helps to greatly reinforce the confidence in your predictions.\nImplementation: Define a Performance Metric\nIt is difficult to measure the quality of a given model without quantifying its performance over training and testing. This is typically done using some type of performance metric, whether it is through calculating some type of error, the goodness of fit, or some other useful measurement. For this project, you will be calculating the coefficient of determination, R<sup>2</sup>, to quantify your model's performance. The coefficient of determination for a model is a useful statistic in regression analysis, as it often describes how \"good\" that model is at making predictions. \nThe values for R<sup>2</sup> range from 0 to 1, which captures the percentage of squared correlation between the predicted and actual values of the target variable. A model with an R<sup>2</sup> of 0 is no better than a model that always predicts the mean of the target variable, whereas a model with an R<sup>2</sup> of 1 perfectly predicts the target variable. Any value between 0 and 1 indicates what percentage of the target variable, using this model, can be explained by the features. A model can be given a negative R<sup>2</sup> as well, which indicates that the model is arbitrarily worse than one that always predicts the mean of the target variable.\nFor the performance_metric function in the code cell below, you will need to implement the following:\n- Use r2_score from sklearn.metrics to perform a performance calculation between y_true and y_predict.\n- Assign the performance score to the score variable.",
"# TODO: Import 'r2_score'\nfrom sklearn.metrics import r2_score\n\ndef performance_metric(y_true, y_predict):\n \"\"\" Calculates and returns the performance score between \n true and predicted values based on the metric chosen. \"\"\"\n \n # TODO: Calculate the performance score between 'y_true' and 'y_predict'\n score = r2_score(y_true, y_predict)\n \n # Return the score\n return score",
"Question 2 - Goodness of Fit\nAssume that a dataset contains five data points and a model made the following predictions for the target variable:\n| True Value | Prediction |\n| :-------------: | :--------: |\n| 3.0 | 2.5 |\n| -0.5 | 0.0 |\n| 2.0 | 2.1 |\n| 7.0 | 7.8 |\n| 4.2 | 5.3 |\nWould you consider this model to have successfully captured the variation of the target variable? Why or why not? \nRun the code cell below to use the performance_metric function and calculate this model's coefficient of determination.",
"# Calculate the performance of this model\nscore = performance_metric([3, -0.5, 2, 7, 4.2], [2.5, 0.0, 2.1, 7.8, 5.3])\nprint \"Model has a coefficient of determination, R^2, of {:.3f}.\".format(score)",
"Answer: Model has a coefficient of determination, R^2, of 0.923. The hypotesis correctly captured the variation of the target variable. The R_2 is high and variance y_true - y_predicted looks reasonably small\nImplementation: Shuffle and Split Data\nYour next implementation requires that you take the Boston housing dataset and split the data into training and testing subsets. Typically, the data is also shuffled into a random order when creating the training and testing subsets to remove any bias in the ordering of the dataset.\nFor the code cell below, you will need to implement the following:\n- Use train_test_split from sklearn.cross_validation to shuffle and split the features and prices data into training and testing sets.\n - Split the data into 80% training and 20% testing.\n - Set the random_state for train_test_split to a value of your choice. This ensures results are consistent.\n- Assign the train and testing splits to X_train, X_test, y_train, and y_test.",
"# TODO: Import 'train_test_split'\nfrom sklearn import cross_validation\n\n# TODO: Shuffle and split the data into training and testing subsets\nX_train, X_test, y_train, y_test = cross_validation.train_test_split(features, prices,\n test_size=0.2,\n random_state=1)\n# Success\nprint \"Training and testing split was successful.\"",
"Question 3 - Training and Testing\nWhat is the benefit to splitting a dataset into some ratio of training and testing subsets for a learning algorithm?\nHint: What could go wrong with not having a way to test your model?\nAnswer: The benefit of splitting the training set is that we're able to pick a hypotesis based on minimum generalization error, instead of a training error. By using this approach we're resolving a bias/variance trade off. It is also the way to troubleshoot the bad performace of the hypotesis.\nIf we're not splitting the dataset and use it whole for a training, then we can simply overfit (or underfit) the hypotesis and come up to a terrible hypotesis performance on a new data.\n\nAnalyzing Model Performance\nIn this third section of the project, you'll take a look at several models' learning and testing performances on various subsets of training data. Additionally, you'll investigate one particular algorithm with an increasing 'max_depth' parameter on the full training set to observe how model complexity affects performance. Graphing your model's performance based on varying criteria can be beneficial in the analysis process, such as visualizing behavior that may not have been apparent from the results alone.\nLearning Curves\nThe following code cell produces four graphs for a decision tree model with different maximum depths. Each graph visualizes the learning curves of the model for both training and testing as the size of the training set is increased. Note that the shaded region of a learning curve denotes the uncertainty of that curve (measured as the standard deviation). The model is scored on both the training and testing sets using R<sup>2</sup>, the coefficient of determination. \nRun the code cell below and use these graphs to answer the following question.",
"# Produce learning curves for varying training set sizes and maximum depths\nvs.ModelLearning(features, prices)",
"Question 4 - Learning the Data\nChoose one of the graphs above and state the maximum depth for the model. What happens to the score of the training curve as more training points are added? What about the testing curve? Would having more training points benefit the model?\nHint: Are the learning curves converging to particular scores?\nAnswer: I pick the top right graph (max_depth = 3).\nThe training curve having the downward slope and the training r2 score decreases with the increase of the training set size.\nThe testing curve having an opposite behaviour and the testiting score is rising with the number of datatpoints in the training set.\nHaving more training points benefits the model until the certain amount (in our case it is about 300 datapoints), then the training and testing scores last relatively stable. This transition state can indicate the convergence of the algorithm. \nComplexity Curves\nThe following code cell produces a graph for a decision tree model that has been trained and validated on the training data using different maximum depths. The graph produces two complexity curves — one for training and one for validation. Similar to the learning curves, the shaded regions of both the complexity curves denote the uncertainty in those curves, and the model is scored on both the training and validation sets using the performance_metric function. \nRun the code cell below and use this graph to answer the following two questions.",
"vs.ModelComplexity(X_train, y_train)",
"Question 5 - Bias-Variance Tradeoff\nWhen the model is trained with a maximum depth of 1, does the model suffer from high bias or from high variance? How about when the model is trained with a maximum depth of 10? What visual cues in the graph justify your conclusions?\nHint: How do you know when a model is suffering from high bias or high variance?\nAnswer: When max depth equal to 1, then the model suffer from high bias. When max depth is 10, then the model suffer from high variance.\nThe indicator of a model with high bias is that with adding more complexity to the model both training and generalization scores are rising and are not far from each other. And when the generalization score stabilize and starting degradade when, in the mean time, the training score continue to rise it means we're overfitting the model and the model having high variance\nQuestion 6 - Best-Guess Optimal Model\nWhich maximum depth do you think results in a model that best generalizes to unseen data? What intuition lead you to this answer?\nAnswer: Max depth of 5, has the minimum generalisation error and furhter adding the complexity won't help.\n\nEvaluating Model Performance\nIn this final section of the project, you will construct a model and make a prediction on the client's feature set using an optimized model from fit_model.\nQuestion 7 - Grid Search\nWhat is the grid search technique and how it can be applied to optimize a learning algorithm?\nAnswer: Grid Search technique is used to tune the estimator by applying different setups of estimator's arguments. For the given estimator Grid Search generates all possible permutations of a hyperparameters (exhaustive search). Then it applies “fit” and a “score” method to evaulate a certain candidate of the hyperparameteres. Then returns the setup with the best score and relevant estimator arguments.\nIn contrast, randomized search samples hyperparameteres from a distribution of possible parameters values. Here you specify the number of itereation explicitly and, doing this, you're able to control the time of execution, what could be a benefit in certain situations.\nQuestion 8 - Cross-Validation\nWhat is the k-fold cross-validation training technique? What benefit does this technique provide for grid search when optimizing a model?\nHint: Much like the reasoning behind having a testing set, what could go wrong with using grid search without a cross-validated set?\nAnswer: k-fold cross validation is a model evaluation method. It spltis the dataset into k equal subsets. It uses merged k-1 subsets as a training set and test it on k-th subset, then switching the training subset to another one and repeat the procedure.\nBy using k-fold cross validation technique we're eliminating the risk of high variance, which could occur if the testing and training subsets contains some specific to this particular subsets datapoints.\nK-fold cross validation helps us to select hyperparameteres for a hypotesis which fits well to unseen data, rather than a validation data. This is a primary benefit.\nIn k-fold cv the score of k runs to be avereged. Then the best score yields the best hyperparameters setup.\nImplementation: Fitting a Model\nYour final implementation requires that you bring everything together and train a model using the decision tree algorithm. To ensure that you are producing an optimized model, you will train the model using the grid search technique to optimize the 'max_depth' parameter for the decision tree. 
The 'max_depth' parameter can be thought of as how many questions the decision tree algorithm is allowed to ask about the data before making a prediction. Decision trees are part of a class of algorithms called supervised learning algorithms.\nIn addition, you will find your implementation is using ShuffleSplit() for an alternative form of cross-validation (see the 'cv_sets' variable). While it is not the K-Fold cross-validation technique you describe in Question 8, this type of cross-validation technique is just as useful!. The ShuffleSplit() implementation below will create 10 ('n_splits') shuffled sets, and for each shuffle, 20% ('test_size') of the data will be used as the validation set. While you're working on your implementation, think about the contrasts and similarities it has to the K-fold cross-validation technique.\nPlease note that ShuffleSplit has different parameters in scikit-learn versions 0.17 and 0.18.\nFor the fit_model function in the code cell below, you will need to implement the following:\n- Use DecisionTreeRegressor from sklearn.tree to create a decision tree regressor object.\n - Assign this object to the 'regressor' variable.\n- Create a dictionary for 'max_depth' with the values from 1 to 10, and assign this to the 'params' variable.\n- Use make_scorer from sklearn.metrics to create a scoring function object.\n - Pass the performance_metric function as a parameter to the object.\n - Assign this scoring function to the 'scoring_fnc' variable.\n- Use GridSearchCV from sklearn.grid_search to create a grid search object.\n - Pass the variables 'regressor', 'params', 'scoring_fnc', and 'cv_sets' as parameters to the object. \n - Assign the GridSearchCV object to the 'grid' variable.",
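"As a side note, here is a minimal illustrative sketch of the two cross-validation iterators being contrasted above (plain scikit-learn, shown only for illustration; the actual project code below uses ShuffleSplit):\n\n```python\nfrom sklearn.model_selection import KFold, ShuffleSplit\n\n# K-Fold: the data is cut into k disjoint folds; every point is used for\n# validation exactly once across the k splits.\nkf = KFold(n_splits=10, shuffle=True, random_state=0)\n\n# ShuffleSplit: each of the 10 splits independently shuffles the data and\n# holds out 20% for validation, so points may be reused between splits.\nss = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)\n\n# Both expose the same split interface and can be passed as `cv` to GridSearchCV,\n# e.g. `for train_idx, valid_idx in kf.split(X_train): ...`\n```",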
"# TODO: Import 'make_scorer', 'DecisionTreeRegressor', and 'GridSearchCV'\n\nfrom sklearn.metrics import make_scorer\nfrom sklearn.model_selection import ShuffleSplit\nfrom sklearn.tree import DecisionTreeRegressor\nfrom sklearn.model_selection import GridSearchCV\n\n\ndef fit_model(X, y):\n \"\"\" Performs grid search over the 'max_depth' parameter for a\n decision tree regressor trained on the input data [X, y]. \"\"\"\n\n # Create cross-validation sets from the training data\n cv = ShuffleSplit(n_splits=10, test_size=0.20, random_state=0)\n\n # TODO: Create a decision tree regressor object\n regressor = DecisionTreeRegressor()\n\n # TODO: Create a dictionary for the parameter 'max_depth' with a range from 1 to 10\n params = dict(max_depth=np.arange(1, 11))\n\n # TODO: Transform 'performance_metric' into a scoring function using 'make_scorer'\n scoring_fnc = make_scorer(performance_metric)\n\n # TODO: Create the grid search object\n grid = GridSearchCV(estimator=regressor, param_grid=params, scoring=scoring_fnc, cv=cv)\n\n # Fit the grid search object to the data to compute the optimal model\n grid = grid.fit(X, y)\n\n # Return the optimal model after fitting the data\n return grid.best_estimator_",
"Making Predictions\nOnce a model has been trained on a given set of data, it can now be used to make predictions on new sets of input data. In the case of a decision tree regressor, the model has learned what the best questions to ask about the input data are, and can respond with a prediction for the target variable. You can use these predictions to gain information about data where the value of the target variable is unknown — such as data the model was not trained on.\nQuestion 9 - Optimal Model\nWhat maximum depth does the optimal model have? How does this result compare to your guess in Question 6? \nRun the code block below to fit the decision tree regressor to the training data and produce an optimal model.",
"# Fit the training data to the model using grid search\nreg = fit_model(X_train, y_train)\n\n# Produce the value for 'max_depth'\nprint \"Parameter 'max_depth' is {} for the optimal model.\".format(reg.get_params()['max_depth'])",
"Answer: Parameter 'max_depth' is 5 for the optimal model. My guess in Q6 was 5, what is close to the best max_depth from the grid search analysis.\nQuestion 10 - Predicting Selling Prices\nImagine that you were a real estate agent in the Boston area looking to use this model to help price homes owned by your clients that they wish to sell. You have collected the following information from three of your clients:\n| Feature | Client 1 | Client 2 | Client 3 |\n| :---: | :---: | :---: | :---: |\n| Total number of rooms in home | 5 rooms | 4 rooms | 8 rooms |\n| Neighborhood poverty level (as %) | 17% | 32% | 3% |\n| Student-teacher ratio of nearby schools | 15-to-1 | 22-to-1 | 12-to-1 |\nWhat price would you recommend each client sell his/her home at? Do these prices seem reasonable given the values for the respective features?\nHint: Use the statistics you calculated in the Data Exploration section to help justify your response. \nRun the code block below to have your optimized model make predictions for each client's home.",
"# Produce a matrix for client data\nclient_data = [[5, 17, 15], # Client 1\n [4, 32, 22], # Client 2\n [8, 3, 12]] # Client 3\n\n# Show predictions\nfor i, price in enumerate(reg.predict(client_data)):\n print \"Predicted selling price for Client {}'s home: ${:,.2f}\".format(i+1, price)\nprint '\\n\\nFEATURES'\nprint features.describe()\nprint '\\n\\nPRICES'\nprint prices.describe()",
"Answer: \nPredicted selling price for Client 1's home: $424,935.00.\nPredicted selling price for Client 2's home: $284,200.00\nPredicted selling price for Client 3's home: $933,975.00\nClient 1 has the price around the mean of the price distribution. RM and LSTAT is not that good, but it is compensted by PTRATIO, which is rather good here.\nClient 2 has a low price (<Q1). Low amount of rooms, high percent of not reach people and big amoutn of pupils per one teacher.\nClient 3 has very high price. A lot of rooms, almot the maximum in the distribution, LSTAT is very low and number of pupils per teacher is really low.\nSo the predictions seems very good\nSensitivity\nAn optimal model is not necessarily a robust model. Sometimes, a model is either too complex or too simple to sufficiently generalize to new data. Sometimes, a model could use a learning algorithm that is not appropriate for the structure of the data given. Other times, the data itself could be too noisy or contain too few samples to allow a model to adequately capture the target variable — i.e., the model is underfitted. Run the code cell below to run the fit_model function ten times with different training and testing sets to see how the prediction for a specific client changes with the data it's trained on.",
"vs.PredictTrials(features, prices, fit_model, client_data)",
"Question 11 - Applicability\nIn a few sentences, discuss whether the constructed model should or should not be used in a real-world setting.\nHint: Some questions to answering:\n- How relevant today is data that was collected from 1978?\n- Are the features present in the data sufficient to describe a home?\n- Is the model robust enough to make consistent predictions?\n- Would data collected in an urban city like Boston be applicable in a rural city?\nAnswer: \nThe data collected from 1978 likely won't be relevant today. Features we have seems ok for describing the home if we need a r2_score around 0.8. The model may be used for predicting prices of homes in suburbs of Boston, Massachusetts in the years around 1978. But likely not nowadays.\n\nNote: Once you have completed all of the code implementations and successfully answered each question above, you may finalize your work by exporting the iPython Notebook as an HTML document. You can do this by using the menu above and navigating to\nFile -> Download as -> HTML (.html). Include the finished document along with this notebook as your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
dataplumber/nexus
|
esip-workshop/student-material/workshop1/4 - Student Exercise.ipynb
|
apache-2.0
|
[
"Step 1: Notebook Setup\nThe cell below contains a number of helper functions used throughout this walkthrough. They are mainly wrappers around existing matplotlib functionality and are provided for the sake of simplicity in the steps to come.\nTake a moment to read the descriptions for each method so you understand what they can be used for. You will use these \"helper methods\" as you work through this notebook below.\nIf you are familiar with matplotlib, feel free to alter the functions as you please.\nTODOs\n\nClick in the cell below and run the cell.",
"# TODO: Make sure you run this cell before continuing!\n\n%matplotlib inline\nimport matplotlib.pyplot as plt\n\ndef show_plot(x_data, y_data, x_label, y_label):\n \"\"\"\n Display a simple line plot.\n \n :param x_data: Numpy array containing data for the X axis\n :param y_data: Numpy array containing data for the Y axis\n :param x_label: Label applied to X axis\n :param y_label: Label applied to Y axis\n \"\"\"\n plt.figure(figsize=(10,5), dpi=100)\n plt.plot(x_data, y_data, 'b-', marker='|', markersize=2.0, mfc='b')\n plt.grid(b=True, which='major', color='k', linestyle='-')\n plt.xlabel(x_label)\n plt.ylabel (y_label)\n plt.show()\n \ndef plot_box(bbox):\n \"\"\"\n Display a Green bounding box on an image of the blue marble.\n \n :param bbox: Shapely Polygon that defines the bounding box to display\n \"\"\"\n min_lon, min_lat, max_lon, max_lat = bbox.bounds\n import matplotlib.pyplot as plt1\n from matplotlib.patches import Polygon\n from mpl_toolkits.basemap import Basemap\n\n map = Basemap()\n map.bluemarble(scale=0.5)\n poly = Polygon([(min_lon,min_lat),(min_lon,max_lat),(max_lon,max_lat),(max_lon,min_lat)],facecolor=(0,0,0,0.0),edgecolor='green',linewidth=2)\n plt1.gca().add_patch(poly)\n plt1.gcf().set_size_inches(10,15)\n \n plt1.show()\n \ndef show_plot_two_series(x_data_a, x_data_b, y_data_a, y_data_b, x_label, y_label_a, y_label_b, series_a_label, series_b_label):\n \"\"\"\n Display a line plot of two series\n \n :param x_data_a: Numpy array containing data for the Series A X axis\n :param x_data_b: Numpy array containing data for the Series B X axis\n :param y_data_a: Numpy array containing data for the Series A Y axis\n :param y_data_b: Numpy array containing data for the Series B Y axis\n :param x_label: Label applied to X axis\n :param y_label_a: Label applied to Y axis for Series A\n :param y_label_b: Label applied to Y axis for Series B\n :param series_a_label: Name of Series A\n :param series_b_label: Name of Series B\n \"\"\"\n fig, ax1 = plt.subplots(figsize=(10,5), dpi=100)\n series_a, = ax1.plot(x_data_a, y_data_a, 'b-', marker='|', markersize=2.0, mfc='b', label=series_a_label)\n ax1.set_ylabel(y_label_a, color='b')\n ax1.tick_params('y', colors='b')\n ax1.set_ylim(min(0, *y_data_a), max(y_data_a)+.1*max(y_data_a))\n ax1.set_xlabel(x_label)\n \n ax2 = ax1.twinx()\n series_b, = ax2.plot(x_data_b, y_data_b, 'r-', marker='|', markersize=2.0, mfc='r', label=series_b_label)\n ax2.set_ylabel(y_label_b, color='r')\n ax2.set_ylim(min(0, *y_data_b), max(y_data_b)+.1*max(y_data_b))\n ax2.tick_params('y', colors='r')\n \n plt.grid(b=True, which='major', color='k', linestyle='-')\n plt.legend(handles=(series_a, series_b), bbox_to_anchor=(1.1, 1), loc=2, borderaxespad=0.)\n plt.show()\n",
"Step 2: List available Datasets\nNow we can interact with NEXUS using the nexuscli python module. The nexuscli module has a number of useful methods that allow you to easily interact with the NEXUS webservice API. One of those methods is nexuscli.dataset_list which returns a list of Datasets in the system along with their start and end times.\nHowever, in order to use the client, it must be told where the NEXUS webservice is running. The nexuscli.set_target(url) method is used to target NEXUS. An instance of NEXUS is already running for you and is available at http://nexus-webapp:8083.\nTODOs\n\nImport the nexuscli python module.\nCall nexuscli.dataset_list() and print the results",
"# TODO: Import the nexuscli python module.\n\n\n# Target the nexus webapp server\nnexuscli.set_target(\"http://nexus-webapp:8083\")\n\n# TODO: Call nexuscli.dataset_list() and print the results\n",
"Step 3: Run a Time Series\nNow that we can interact with NEXUS using the nexuscli python module, we would like to run a time series. To do this, we will use the nexuscli.time_series method. The signature for this method is described below:\n\nnexuscli.time_series(datasets, bounding_box, start_datetime, end_datetime, spark=False) \nSend a request to NEXUS to calculate a time series. \ndatasets Sequence (max length 2) of the name of the dataset(s)\nbounding_box Bounding box for area of interest as a shapely.geometry.polygon.Polygon\nstart_datetime Start time as a datetime.datetime\nend_datetime End time as a datetime.datetime\nspark Optionally use spark. Default: False\nreturn List of nexuscli.nexuscli.TimeSeries namedtuples\n```\n\nAs you can see, there are a number of options available. Let's try investigating The Blob in the Pacific Ocean. The Blob is an abnormal warming of the Sea Surface Temperature that was first observed in 2013.\nGenerate a time series for the AVHRR_OI_L4_GHRSST_NCEI SST dataset for the time period 2013-01-01 through 2014-03-01 and a bounding box -150, 40, -120, 55 (west, south, east, north).\nTODOs\n\nCreate the bounding box using shapely's box method\nPlot the bounding box using the plot_box helper method\nGenerate the Time Series by calling the time_series method in the nexuscli module\nHint: datetime is already imported for you. You can create a datetime using the method datetime(int: year, int: month, int: day)\nHint: pass spark=True to the time_series function to speed up the computation\nPlot the result using the show_plot helper method",
"import time\nimport nexuscli\nfrom datetime import datetime\n\nfrom shapely.geometry import box\n\n# TODO: Create a bounding box using the box method imported above\n\n# TODO: Plot the bounding box using the helper method plot_box\n\n\n# Do not modify this line ##\nstart = time.perf_counter()#\n############################\n\n\n# TODO: Call the time_series method for the AVHRR_OI_L4_GHRSST_NCEI dataset using \n# your bounding box and time period 2013-01-01 through 2014-03-01\n\n\n# Enter your code above this line\nprint(\"Time Series took {} seconds to generate\".format(time.perf_counter() - start))\n\n\n# TODO: Plot the result using the `show_plot` helper method\n\n",
"Step 3a: Run for a Longer Time Period\nNow that you have successfully generated a time series for approximately one year of data. Try generating a longer time series by increasing the end date to 2016-12-31. This will take a little bit longer to execute, since there is more data to analyze, but should finish in under a minute.\nThe significant increase in sea surface temperature due to the blob should be visible as an upward trend between 2013 and 2015 in this longer time series.\nTODOs\n\nGenerate a longer time series from 2013-01-01 to 2016-12-31\nPlot the result using the show_plot helper method. Make sure you pass spark=True to the time_series function to speed up the analysis\n\nAdvanced (Optional)\n\nFor an extra challenge, try plotting the trend line.\nHint numpy and scipy packages are installed and can be used by importing them: import numpy or import scipy\nHint You will need to convert the TimeSeries.time array to numbers in order to generate a polynomial fit line. matplotlib has a built in function capable of doing this: matplotlib.dates.date2num and it's inverse matplotlib.dates.num2date",
"import time\nimport nexuscli\nfrom datetime import datetime\n\nfrom shapely.geometry import box\n\nbbox = box(-150, 40, -120, 55)\nplot_box(bbox)\n\n# Do not modify this line ##\nstart = time.perf_counter()#\n############################\n\n# TODO: Call the time_series method for the AVHRR_OI_L4_GHRSST_NCEI dataset using \n# your bounding box and time period 2013-01-01 through 2016-12-31\n# Make sure you pass spark=True to the time_series function to speed up the analysis\n\n\n# Enter your code above this line\nprint(\"Time Series took {} seconds to generate\".format(time.perf_counter() - start))\n\n# TODO: Plot the result using the `show_plot` helper method\n\n",
"Step 4: Run two Time Series' and plot them side-by-side\nThe time_series method can be used on up to two datasets at one time for comparison. Let's take a look at another region and see how to generate two time series and plot them side by side.\n\nHurricane Katrina passed to the southwest of Florida on Aug 27, 2005. The ocean response in a 1 x 1 degree region is captured by a number of satellites. The initial ocean response was an immediate cooling of the surface waters by 2 degrees Celcius that lingers for several days. The SST drop is correlated to both wind and precipitation\ndata.\n_A study of a Hurricane Katrina–induced phytoplankton bloom using satellite observations and model simulations\nXiaoming Liu, Menghua Wang, and Wei Shi1\nJOURNAL OF GEOPHYSICAL RESEARCH, VOL. 114, C03023, doi:10.1029/2008JC004934, 2009\nhttp://shoni2.princeton.edu/ftp/lyo/journals/Ocean/phybiogeochem/Liu-etal-KatrinaChlBloom-JGR2009.pdf _\n\nPlot the time series for the AVHRR_OI_L4_GHRSST_NCEI SST dataset and the TRMM_3B42_daily Precipitation dataset for the region -84.5, 23.5, -83.5, 24.5 and time frame of 2005-08-24 through 2005-09-10. Plot the result using the show_plot_two_series helper method and see if you can recognize the correlation between the spike in precipitation and the decrease in temperature.\nTODOs\n\nCreate a bounding box for the region in the Gulf of Mexico that Hurricane Katrina passed through (-84.5, 23.5, -83.5, 24.5)\nPlot the bounding box using the helper method plot_box\nGenerate the Time Series by calling the time_series method in the nexuscli module\nPlot the result using the show_plot_two_series helper method",
"import time\nimport nexuscli\nfrom datetime import datetime\n\nfrom shapely.geometry import box\n\n# TODO: Create a bounding box using the box method imported above\n\n\n# TODO: Plot the bounding box using the helper method plot_box\n\n\n\n# Do not modify this line ##\nstart = time.perf_counter()#\n############################\n\n# TODO: Call the time_series method for the AVHRR_OI_L4_GHRSST_NCEI dataset and the `TRMM_3B42_daily` dataset\n# using your bounding box and time period 2005-08-24 through 2005-09-10\n\n\n\n\n# Enter your code above this line\nprint(\"Time Series took {} seconds to generate\".format(time.perf_counter() - start))\n\n# TODO: Plot the result using the `show_plot_two_series` helper method\n\n",
"Step 5: Run a Daily Difference Average (Anomaly) calculation\nLet's return to The Blob region. But this time we're going to use a different calculation, Daily Difference Average (aka. Anomaly plot). \nThe Daily Difference Average algorithm compares a dataset against a climatological mean and produces a time series of the difference from that mean. Given The Blob region, we should expect to see a positive difference from the mean temperature in that region (indicating higher temperatures than normal) between 2013 and 2014.\nThis time, using the nexuscli module, call the daily_difference_average method. The signature for that method is reprinted below:\n\nGenerate an anomaly Time series for a given dataset, bounding box, and timeframe. \ndataset Name of the dataset as a String\nbounding_box Bounding box for area of interest as a shapely.geometry.polygon.Polygon\nstart_datetime Start time as a datetime.datetime\nend_datetime End time as a datetime.datetime \nreturn List of nexuscli.nexuscli.TimeSeries namedtuples\n\nGenerate an anomaly time series using the AVHRR_OI_L4_GHRSST_NCEI SST dataset for the time period 2013-01-01 through 2016-12-31 and a bounding box -150, 40, -120, 55 (west, south, east, north).\nTODOs\n\nGenerate the Anomaly Time Series by calling the daily_difference_average method in the nexuscli module\nPlot the result using the show_plot helper method\n\nAdvanced (Optional)\n\nGenerate an Anomaly Time Series for the El Niño 3.4 region (bounding box -170, -5, -120, 5) from 2010 to 2015.",
"import time\nimport nexuscli\nfrom datetime import datetime\n\nfrom shapely.geometry import box\n\nbbox = box(-150, 40, -120, 55)\nplot_box(bbox)\n\n# Do not modify this line ##\nstart = time.perf_counter()#\n############################\n\n\n# TODO: Call the daily_difference_average method for the AVHRR_OI_L4_GHRSST_NCEI dataset using \n# your bounding box and time period 2013-01-01 through 2016-12-31. Be sure to pass spark=True as a parameter\n# to speed up processing.\n\n\n\n\n\n\n# Enter your code above this line\nprint(\"Daily Difference Average took {} seconds to generate\".format(time.perf_counter() - start))\n\n# TODO: Plot the result using the `show_plot` helper method\n",
"Congratulations!\nYou have finished this workbook.\nIf others are still working, please feel free to modify the examples and play with the client module or go back and complete the \"Advanced\" challenges if you skipped them. Further technical information about NEXUS can be found in the GitHub repository.\nIf you would like to save this notebook for reference later, click on File -> Download as... and choose your preferred format."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
uber/pyro
|
tutorial/source/normalizing_flows_i.ipynb
|
apache-2.0
|
[
"Normalizing Flows - Introduction (Part 1)\nThis tutorial introduces Pyro's normalizing flow library. It is independent of much of Pyro, but users may want to read about distribution shapes in the Tensor Shapes Tutorial.\nIntroduction\nIn standard probabilistic modeling practice, we represent our beliefs over unknown continuous quantities with simple parametric distributions like the normal, exponential, and Laplacian distributions. However, using such simple forms, which are commonly symmetric and unimodal (or have a fixed number of modes when we take a mixture of them), restricts the performance and flexibility of our methods. For instance, standard variational inference in the Variational Autoencoder uses independent univariate normal distributions to represent the variational family. The true posterior is neither independent nor normally distributed, which results in suboptimal inference and simplifies the model that is learnt. In other scenarios, we are likewise restricted by not being able to model multimodal distributions and heavy or light tails.\nNormalizing Flows [1-4] are a family of methods for constructing flexible learnable probability distributions, often with neural networks, which allow us to surpass the limitations of simple parametric forms. Pyro contains state-of-the-art normalizing flow implementations, and this tutorial explains how you can use this library for learning complex models and performing flexible variational inference. We introduce the main idea of Normalizing Flows (NFs) and demonstrate learning simple univariate distributions with element-wise, multivariate, and conditional flows.\nUnivariate Distributions\nBackground\nNormalizing Flows are a family of methods for constructing flexible distributions. Let's first restrict our attention to representing univariate distributions. The basic idea is that a simple source of noise, for example a variable with a standard normal distribution, $X\\sim\\mathcal{N}(0,1)$, is passed through a bijective (i.e. invertible) function, $g(\\cdot)$ to produce a more complex transformed variable $Y=g(X)$.\nFor a given random variable, we typically want to perform two operations: sampling and scoring. Sampling $Y$ is trivial. First, we sample $X=x$, then calculate $y=g(x)$. Scoring $Y$, or rather, evaluating the log-density $\\log(p_Y(y))$, is more involved. How does the density of $Y$ relate to the density of $X$? We can use the substitution rule of integral calculus to answer this. Suppose we want to evaluate the expectation of some function of $X$. Then,\n\\begin{align}\n\\mathbb{E}{p_X(\\cdot)}\\left[f(X)\\right] &= \\int{\\text{supp}(X)}f(x)p_X(x)dx\\\n&= \\int_{\\text{supp}(Y)}f(g^{-1}(y))p_X(g^{-1}(y))\\left|\\frac{dx}{dy}\\right|dy\\\n&= \\mathbb{E}_{p_Y(\\cdot)}\\left[f(g^{-1}(Y))\\right],\n\\end{align}\nwhere $\\text{supp}(X)$ denotes the support of $X$, which in this case is $(-\\infty,\\infty)$. Crucially, we used the fact that $g$ is bijective to apply the substitution rule in going from the first to the second line. 
Equating the last two lines we get,\n\\begin{align}\n\\log(p_Y(y)) &= \\log(p_X(g^{-1}(y)))+\\log\\left(\\left|\\frac{dx}{dy}\\right|\\right)\\\n&= \\log(p_X(g^{-1}(y)))-\\log\\left(\\left|\\frac{dy}{dx}\\right|\\right).\n\\end{align}\nIntuitively, this equation says that the density of $Y$ is equal to the density at the corresponding point in $X$ plus a term that corrects for the warp in volume around an infinitesimally small length around $Y$ caused by the transformation.\nIf $g$ is cleverly constructed (and we will see several examples shortly), we can produce distributions that are more complex than standard normal noise and yet have easy sampling and computationally tractable scoring. Moreover, we can compose such bijective transformations to produce even more complex distributions. By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\\circ g_{(1)}\\circ\\cdots\\circ g_{(L-1)})(X)$ is\n\\begin{align}\n\\log(p_Y(y)) &= \\log\\left(p_X\\left(\\left(g_{(L-1)}^{-1}\\circ\\cdots\\circ g_{(0)}^{-1}\\right)\\left(y\\right)\\right)\\right)+\\sum^{L-1}_{l=0}\\log\\left(\\left|\\frac{dg^{-1}_{(l)}(y_{(l)})}{dy'}\\right|\\right),\n%\\left( g^{(l)}(y^{(l)})\n%\\right).\n\\end{align}\nwhere we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.\nIn a later section, we will see how to generalize this method to multivariate $X$. The field of Normalizing Flows aims to construct such $g$ for multivariate $X$ to transform simple i.i.d. standard normal noise into complex, learnable, high-dimensional distributions. The methods have been applied to such diverse applications as image modeling, text-to-speech, unsupervised language induction, data compression, and modeling molecular structures. As probability distributions are the most fundamental component of probabilistic modeling we will likely see many more exciting state-of-the-art applications in the near future.\nFixed Univariate Transforms in Pyro\nPyTorch contains classes for representing fixed univariate bijective transformations, and sampling/scoring from transformed distributions derived from these. Pyro extends this with a comprehensive library of learnable univariate and multivariate transformations using the latest developments in the field. As Pyro imports all of PyTorch's distributions and transformations, we will work solely with Pyro. We also note that the NF components in Pyro can be used independently of the probabilistic programming functionality of Pyro, which is what we will be doing in the first two tutorials.\nLet us begin by showing how to represent and manipulate a simple transformed distribution,\n\\begin{align}\nX &\\sim \\mathcal{N}(0,1)\\\nY &= \\text{exp}(X).\n\\end{align}\nYou may have recognized that this is by definition, $Y\\sim\\text{LogNormal}(0,1)$.\nWe begin by importing the relevant libraries:",
"import torch\nimport pyro\nimport pyro.distributions as dist\nimport pyro.distributions.transforms as T\nimport matplotlib.pyplot as plt\nimport seaborn as sns\nimport os\nsmoke_test = ('CI' in os.environ)",
"A variety of bijective transformations live in the pyro.distributions.transforms module, and the classes to define transformed distributions live in pyro.distributions. We first create the base distribution of $X$ and the class encapsulating the transform $\\text{exp}(\\cdot)$:",
"dist_x = dist.Normal(torch.zeros(1), torch.ones(1))\nexp_transform = T.ExpTransform()",
"The class ExpTransform derives from Transform and defines the forward, inverse, and log-absolute-derivative operations for this transform,\n\\begin{align}\ng(x) &= \\text{exp(x)}\\\ng^{-1}(y) &= \\log(y)\\\n\\log\\left(\\left|\\frac{dg}{dx}\\right|\\right) &= y.\n\\end{align}\nIn general, a transform class defines these three operations, from which it is sufficient to perform sampling and scoring.\nThe class TransformedDistribution takes a base distribution of simple noise and a list of transforms, and encapsulates the distribution formed by applying these transformations in sequence. We use it as:",
"dist_y = dist.TransformedDistribution(dist_x, [exp_transform])",
"Now, plotting samples from both to verify that we that have produced the log-normal distribution:",
"plt.subplot(1, 2, 1)\nplt.hist(dist_x.sample([1000]).numpy(), bins=50)\nplt.title('Standard Normal')\nplt.subplot(1, 2, 2)\nplt.hist(dist_y.sample([1000]).numpy(), bins=50)\nplt.title('Standard Log-Normal')\nplt.show()",
"Our example uses a single transform. However, we can compose transforms to produce more expressive distributions. For instance, if we apply an affine transformation we can produce the general log-normal distribution,\n\\begin{align}\nX &\\sim \\mathcal{N}(0,1)\\\nY &= \\text{exp}(\\mu+\\sigma X).\n\\end{align}\nor rather, $Y\\sim\\text{LogNormal}(\\mu,\\sigma^2)$. In Pyro this is accomplished, e.g. for $\\mu=3, \\sigma=0.5$, as follows:",
"dist_x = dist.Normal(torch.zeros(1), torch.ones(1))\naffine_transform = T.AffineTransform(loc=3, scale=0.5)\nexp_transform = T.ExpTransform()\ndist_y = dist.TransformedDistribution(dist_x, [affine_transform, exp_transform])\n\nplt.subplot(1, 2, 1)\nplt.hist(dist_x.sample([1000]).numpy(), bins=50)\nplt.title('Standard Normal')\nplt.subplot(1, 2, 2)\nplt.hist(dist_y.sample([1000]).numpy(), bins=50)\nplt.title('Log-Normal')\nplt.show()",
"For the forward operation, transformations are applied in the order of the list that is the second argument to TransformedDistribution. In this case, first AffineTransform is applied to the base distribution and then ExpTransform.\nLearnable Univariate Distributions in Pyro\nHaving introduced the interface for invertible transforms and transformed distributions, we now show how to represent learnable transforms and use them for density estimation. Our dataset in this section and the next will comprise samples along two concentric circles. Examining the joint and marginal distributions:",
"import numpy as np\nfrom sklearn import datasets\nfrom sklearn.preprocessing import StandardScaler\n\nn_samples = 1000\nX, y = datasets.make_circles(n_samples=n_samples, factor=0.5, noise=0.05)\nX = StandardScaler().fit_transform(X)\n\nplt.title(r'Samples from $p(x_1,x_2)$')\nplt.xlabel(r'$x_1$')\nplt.ylabel(r'$x_2$')\nplt.scatter(X[:,0], X[:,1], alpha=0.5)\nplt.show()\n\nplt.subplot(1, 2, 1)\nsns.distplot(X[:,0], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2})\nplt.title(r'$p(x_1)$')\nplt.subplot(1, 2, 2)\nsns.distplot(X[:,1], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2})\nplt.title(r'$p(x_2)$')\nplt.show()",
"Standard transforms derive from the Transform class and are not designed to contain learnable parameters. Learnable transforms, on the other hand, derive from TransformModule, which is a torch.nn.Module and registers parameters with the object.\nWe will learn the marginals of the above distribution using such a transform, Spline [5,6], defined on a two-dimensional input:",
"base_dist = dist.Normal(torch.zeros(2), torch.ones(2))\nspline_transform = T.Spline(2, count_bins=16)\nflow_dist = dist.TransformedDistribution(base_dist, [spline_transform])",
"This transform passes each dimension of its input through a separate monotonically increasing function known as a spline. From a high-level, a spline is a complex parametrizable curve for which we can define specific points known as knots that it passes through and the derivatives at the knots. The knots and their derivatives are parameters that can be learnt, e.g., through stochastic gradient descent on a maximum likelihood objective, as we now demonstrate:",
"%%time\nsteps = 1 if smoke_test else 1001\ndataset = torch.tensor(X, dtype=torch.float)\noptimizer = torch.optim.Adam(spline_transform.parameters(), lr=1e-2)\nfor step in range(steps):\n optimizer.zero_grad()\n loss = -flow_dist.log_prob(dataset).mean()\n loss.backward()\n optimizer.step()\n flow_dist.clear_cache()\n \n if step % 200 == 0:\n print('step: {}, loss: {}'.format(step, loss.item()))",
"Note that we call flow_dist.clear_cache() after each optimization step to clear the transform's forward-inverse cache. This is required because flow_dist's spline_transform is a stateful TransformModule rather than a purely stateless Transform object. Purely functional Pyro code typically creates Transform objects each model execution, then discards them after .backward(), effectively clearing the transform caches. By contrast in this tutorial we create stateful module objects and need to manually clear their cache after update.\nPlotting samples drawn from the transformed distribution after learning:",
"X_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()\nplt.title(r'Joint Distribution')\nplt.xlabel(r'$x_1$')\nplt.ylabel(r'$x_2$')\nplt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)\nplt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)\nplt.legend()\nplt.show()\n\nplt.subplot(1, 2, 1)\nsns.distplot(X[:,0], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,0], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_1)$')\nplt.subplot(1, 2, 2)\nsns.distplot(X[:,1], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,1], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_2)$')\nplt.show()",
"As we can see, we have learnt close approximations to the marginal distributions, $p(x_1),p(x_2)$. It would have been challenging to fit the irregularly shaped marginals with standard methods, e.g., a mixture of normal distributions. As expected, since there is a dependency between the two dimensions, we do not learn a good representation of the joint, $p(x_1,x_2)$. In the next section, we explain how to learn multivariate distributions whose dimensions are not independent.\nMultivariate Distributions\nBackground\nThe fundamental idea of normalizing flows also applies to multivariate random variables, and this is where its value is clearly seen - representing complex high-dimensional distributions. In this case, a simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\\sim\\mathcal{N}(\\mathbf{0},I_{D\\times D})$, is passed through a vector-valued bijection, $g:\\mathbb{R}^D\\rightarrow\\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X)$.\nSampling $Y$ is again trivial and involves evaluation of the forward pass of $g$. We can score $Y$ using the multivariate substitution rule of integral calculus,\n\\begin{align}\n\\mathbb{E}{p_X(\\cdot)}\\left[f(X)\\right] &= \\int{\\text{supp}(X)}f(\\mathbf{x})p_X(\\mathbf{x})d\\mathbf{x}\\\n&= \\int_{\\text{supp}(Y)}f(g^{-1}(\\mathbf{y}))p_X(g^{-1}(\\mathbf{y}))\\det\\left|\\frac{d\\mathbf{x}}{d\\mathbf{y}}\\right|d\\mathbf{y}\\\n&= \\mathbb{E}_{p_Y(\\cdot)}\\left[f(g^{-1}(Y))\\right],\n\\end{align}\nwhere $d\\mathbf{x}/d\\mathbf{y}$ denotes the Jacobian matrix of $g^{-1}(\\mathbf{y})$. Equating the last two lines we get,\n\\begin{align}\n\\log(p_Y(y)) &= \\log(p_X(g^{-1}(y)))+\\log\\left(\\det\\left|\\frac{d\\mathbf{x}}{d\\mathbf{y}}\\right|\\right)\\\n&= \\log(p_X(g^{-1}(y)))-\\log\\left(\\det\\left|\\frac{d\\mathbf{y}}{d\\mathbf{x}}\\right|\\right).\n\\end{align}\nInituitively, this equation says that the density of $Y$ is equal to the density at the corresponding point in $X$ plus a term that corrects for the warp in volume around an infinitesimally small volume around $Y$ caused by the transformation. For instance, in $2$-dimensions, the geometric interpretation of the absolute value of the determinant of a Jacobian is that it represents the area of a parallelogram with edges defined by the columns of the Jacobian. In $n$-dimensions, the geometric interpretation of the absolute value of the determinant Jacobian is that is represents the hyper-volume of a parallelepiped with $n$ edges defined by the columns of the Jacobian (see a calculus reference such as [7] for more details).\nSimilar to the univariate case, we can compose such bijective transformations to produce even more complex distributions. 
By an inductive argument, if we have $L$ transforms $g_{(0)}, g_{(1)},\\ldots,g_{(L-1)}$, then the log-density of the transformed variable $Y=(g_{(0)}\\circ g_{(1)}\\circ\\cdots\\circ g_{(L-1)})(X)$ is\n\\begin{align}\n\\log(p_Y(y)) &= \\log\\left(p_X\\left(\\left(g_{(L-1)}^{-1}\\circ\\cdots\\circ g_{(0)}^{-1}\\right)\\left(y\\right)\\right)\\right)+\\sum^{L-1}{l=0}\\log\\left(\\left|\\frac{dg^{-1}{(l)}(y_{(l)})}{dy'}\\right|\\right),\n%\\left( g^{(l)}(y^{(l)})\n%\\right).\n\\end{align}\nwhere we've defined $y_{(0)}=x$, $y_{(L-1)}=y$ for convenience of notation.\nThe main challenge is in designing parametrizable multivariate bijections that have closed form expressions for both $g$ and $g^{-1}$, a tractable Jacobian whose calculation scales with $O(D)$ rather than $O(D^3)$, and can express a flexible class of functions.\nMultivariate Transforms in Pyro\nUp to this point we have used element-wise transforms in Pyro. These are indicated by having the property transform.event_dim == 0 set on the transform object. Such element-wise transforms can only be used to represent univariate distributions and multivariate distributions whose dimensions are independent (known in variational inference as the mean-field approximation).\nThe power of Normalizing Flow, however, is most apparent in their ability to model complex high-dimensional distributions with neural networks and Pyro contains several such flows for accomplishing this. Transforms that operate on vectors have the property transform.event_dim == 1, transforms on matrices with transform.event_dim == 2, and so on. In general, the event_dim property of a transform indicates how many dependent dimensions there are in the output of a transform.\nIn this section, we show how to use SplineCoupling to learn the bivariate toy distribution from our running example. A coupling transform [8, 9] divides the input variable into two parts and applies an element-wise bijection to the section half whose parameters are a function of the first. Optionally, an element-wise bijection is also applied to the first half. Dividing the inputs at $d$, the transform is,\n\\begin{align}\n\\mathbf{y}{1:d} &= g\\theta(\\mathbf{x}{1:d})\\\n\\mathbf{y}{(d+1):D} &= h_\\phi(\\mathbf{x}{(d+1):D};\\mathbf{x}{1:d}),\n\\end{align}\nwhere $\\mathbf{x}{1:d}$ represents the first $d$ elements of the inputs, $g\\theta$ is either the identity function or an elementwise bijection parameters $\\theta$, and $h_\\phi$ is an element-wise bijection whose parameters are a function of $\\mathbf{x}_{1:d}$.\nThis type of transform is easily invertible. We invert the first half, $\\mathbf{y}{1:d}$, then use the resulting $\\mathbf{x}{1:d}$ to evaluate $\\phi$ and invert the second half,\n\\begin{align}\n\\mathbf{x}{1:d} &= g\\theta^{-1}(\\mathbf{y}{1:d})\\\n\\mathbf{x}{(d+1):D} &= h_\\phi^{-1}(\\mathbf{y}{(d+1):D};\\mathbf{x}{1:d}).\n\\end{align}\nDifference choices for $g$ and $h$ form different types of coupling transforms. When both are monotonic rational splines, the transform is the spline coupling layer of Neural Spline Flow [5,6], which is represented in Pyro by the SplineCoupling class. As shown in the references, when we combine a sequence of coupling layers sandwiched between random permutations so we introduce dependencies between all dimensions, we can model complex multivariate distributions.\nMost of the learnable transforms in Pyro have a corresponding helper function that takes care of constructing a neural network for the transform with the correct output shape. 
This neural network outputs the parameters of the transform and is known as a hypernetwork [10]. The helper functions are represented by lower-case versions of the corresponding class name, and usually input at the very least the input-dimension or shape of the distribution to model. For instance, the helper function corresponding to SplineCoupling is spline_coupling. We create a bivariate flow with a single spline coupling layer as follows:",
"base_dist = dist.Normal(torch.zeros(2), torch.ones(2))\nspline_transform = T.spline_coupling(2, count_bins=16)\nflow_dist = dist.TransformedDistribution(base_dist, [spline_transform])",
"Similarly to before, we train this distribution on the toy dataset and plot the results:",
"%%time\nsteps = 1 if smoke_test else 5001\ndataset = torch.tensor(X, dtype=torch.float)\noptimizer = torch.optim.Adam(spline_transform.parameters(), lr=5e-3)\nfor step in range(steps+1):\n optimizer.zero_grad()\n loss = -flow_dist.log_prob(dataset).mean()\n loss.backward()\n optimizer.step()\n flow_dist.clear_cache()\n \n if step % 500 == 0:\n print('step: {}, loss: {}'.format(step, loss.item()))\n\nX_flow = flow_dist.sample(torch.Size([1000,])).detach().numpy()\nplt.title(r'Joint Distribution')\nplt.xlabel(r'$x_1$')\nplt.ylabel(r'$x_2$')\nplt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)\nplt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)\nplt.legend()\nplt.show()\n\nplt.subplot(1, 2, 1)\nsns.distplot(X[:,0], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,0], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_1)$')\nplt.subplot(1, 2, 2)\nsns.distplot(X[:,1], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,1], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_2)$')\nplt.show()",
"We see from the output that this normalizing flow has successfully learnt both the univariate marginals and the bivariate distribution.\nConditional versus Joint Distributions\nBackground\nIn many cases, we wish to represent conditional rather than joint distributions. For instance, in performing variational inference, the variational family is a class of conditional distributions,\n$$\n\\begin{align}\n{q_\\psi(\\mathbf{z}\\mid\\mathbf{x})\\mid\\theta\\in\\Theta},\n\\end{align}\n$$\nwhere $\\mathbf{z}$ is the latent variable and $\\mathbf{x}$ the observed one, that hopefully contains a member close to the true posterior of the model, $p(\\mathbf{z}\\mid\\mathbf{x})$. In other cases, we may wish to learn to generate an object $\\mathbf{x}$ conditioned on some context $\\mathbf{c}$ using $p_\\theta(\\mathbf{x}\\mid\\mathbf{c})$ and observations ${(\\mathbf{x}n,\\mathbf{c}_n)}^N{n=1}$. For instance, $\\mathbf{x}$ may be a spoken sentence and $\\mathbf{c}$ a number of speech features.\nThe theory of Normalizing Flows is easily generalized to conditional distributions. We denote the variable to condition on by $C=\\mathbf{c}\\in\\mathbb{R}^M$. A simple multivariate source of noise, for example a standard i.i.d. normal distribution, $X\\sim\\mathcal{N}(\\mathbf{0},I_{D\\times D})$, is passed through a vector-valued bijection that also conditions on C, $g:\\mathbb{R}^D\\times\\mathbb{R}^M\\rightarrow\\mathbb{R}^D$, to produce the more complex transformed variable $Y=g(X;C=\\mathbf{c})$. In practice, this is usually accomplished by making the parameters for a known normalizing flow bijection $g$ the output of a hypernet neural network that inputs $\\mathbf{c}$.\nSampling of conditional transforms simply involves evaluating $Y=g(X; C=\\mathbf{c})$. Conditioning the bijections on $\\mathbf{c}$, the same formula holds for scoring as for the joint multivariate case.\nConditional Transforms in Pyro\nIn Pyro, most learnable transforms have a corresponding conditional version that derives from ConditionalTransformModule. For instance, the conditional version of the spline transform is ConditionalSpline with helper function conditional_spline.\nIn this section, we will show how we can learn our toy dataset as the decomposition of the product of a conditional and a univariate distribution,\n$$\n\\begin{align}\np(x_1,x_2) &= p(x_2\\mid x_1)p(x_1).\n\\end{align}\n$$\nFirst, we create the univariate distribution for $x_1$ as shown previously,",
"dist_base = dist.Normal(torch.zeros(1), torch.ones(1))\nx1_transform = T.spline(1)\ndist_x1 = dist.TransformedDistribution(dist_base, [x1_transform])",
"A conditional transformed distribution is created by passing the base distribution and list of conditional and non-conditional transforms to the ConditionalTransformedDistribution class:",
"x2_transform = T.conditional_spline(1, context_dim=1)\ndist_x2_given_x1 = dist.ConditionalTransformedDistribution(dist_base, [x2_transform])",
"You will notice that we pass the dimension of the context variable, $M=1$, to the conditional spline helper function.\nUntil we condition on a value of $x_1$, the ConditionalTransformedDistribution object is merely a placeholder and cannot be used for sampling or scoring. By calling its .condition(context) method, we obtain a TransformedDistribution for which all its conditional transforms have been conditioned on context.\nFor example, to draw a sample from $x_2\\mid x_1=1$:",
"x1 = torch.ones(1)\nprint(dist_x2_given_x1.condition(x1).sample())",
"In general, the context variable may have batch dimensions and these dimensions must broadcast over the batch dimensions of the input variable.\nNow, combining the two distributions and training it on the toy dataset:",
"%%time\nsteps = 1 if smoke_test else 5001\nmodules = torch.nn.ModuleList([x1_transform, x2_transform])\noptimizer = torch.optim.Adam(modules.parameters(), lr=3e-3)\nx1 = dataset[:,0][:,None]\nx2 = dataset[:,1][:,None]\nfor step in range(steps):\n optimizer.zero_grad()\n ln_p_x1 = dist_x1.log_prob(x1)\n ln_p_x2_given_x1 = dist_x2_given_x1.condition(x1.detach()).log_prob(x2.detach())\n loss = -(ln_p_x1 + ln_p_x2_given_x1).mean()\n loss.backward()\n optimizer.step()\n dist_x1.clear_cache()\n dist_x2_given_x1.clear_cache()\n \n if step % 500 == 0:\n print('step: {}, loss: {}'.format(step, loss.item()))\n\nX = torch.cat((x1, x2), dim=-1)\nx1_flow = dist_x1.sample(torch.Size([1000,]))\nx2_flow = dist_x2_given_x1.condition(x1_flow).sample(torch.Size([1000,]))\nX_flow = torch.cat((x1_flow, x2_flow), dim=-1)\n\nplt.title(r'Joint Distribution')\nplt.xlabel(r'$x_1$')\nplt.ylabel(r'$x_2$')\nplt.scatter(X[:,0], X[:,1], label='data', alpha=0.5)\nplt.scatter(X_flow[:,0], X_flow[:,1], color='firebrick', label='flow', alpha=0.5)\nplt.legend()\nplt.show()\n\nplt.subplot(1, 2, 1)\nsns.distplot(X[:,0], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,0], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_1)$')\nplt.subplot(1, 2, 2)\nsns.distplot(X[:,1], hist=False, kde=True, \n bins=None,\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='data')\nsns.distplot(X_flow[:,1], hist=False, kde=True, \n bins=None, color='firebrick',\n hist_kws={'edgecolor':'black'},\n kde_kws={'linewidth': 2},\n label='flow')\nplt.title(r'$p(x_2)$')\nplt.show()",
"Conclusions\nIn this tutorial, we have explained the basic idea behind normalizing flows and the Pyro interface to create flows to represent univariate, multivariate, and conditional distributions. It is useful to think of flows as a powerful general-purpose tool in your probabilistic modelling toolkit, and you can replace any existing distribution in your model with one to increase its flexibility and performance. We hope you have fun exploring the power of normalizing flows!\nReferences\n\nE.G. Tabak, Christina Turner. A Family of Nonparametric Density Estimation Algorithms. Communications on Pure and Applied Mathematics, 66(2):145–164, 2013.\nDanilo Jimenez Rezende, Shakir Mohamed. Variational Inference with Normalizing Flows. ICML 2015.\nIvan Kobyzev, Simon J.D. Prince, and Marcus A. Brubaker. Normalizing Flows: An Introduction and Review of Current Methods. [arXiv:1908.09257] 2019.\nGeorge Papamakarios, Eric Nalisnick, Danilo Jimenez Rezende, Shakir Mohamed, Balaji Lakshminarayanan. Normalizing Flows for Probabilistic Modeling and Inference. [arXiv:1912.02762] 2019.\nConor Durkan, Artur Bekasov, Iain Murray, George Papamakarios. Neural Spline Flows. NeurIPS 2019.\nHadi M. Dolatabadi, Sarah Erfani, Christopher Leckie. Invertible Generative Modeling using Linear Rational Splines. AISTATS 2020.\nJames Stewart. Calculus. Cengage Learning. 9th Edition 2020.\nLaurent Dinh, David Krueger, Yoshua Bengio. NICE: Non-linear Independent Components Estimation. Workshop contribution at ICLR 2015.\nLaurent Dinh, Jascha Sohl-Dickstein, Samy Bengio. Density estimation using Real-NVP. Conference paper at ICLR 2017.\nDavid Ha, Andrew Dai, Quoc V. Le. HyperNetworks. Workshop contribution at ICLR 2017."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
saashimi/code_guild
|
interactive-coding-challenges/staging/sorting_searching/group_ordered/group_ordered_solution.ipynb
|
mit
|
[
"<small><i>This notebook was prepared by wdonahoe. Source and license info is on GitHub.</i></small>\nSolution Notebook\nProblem: Implement a function that groups identical items based on their order in the list.\n\nConstraints\nTest Cases\nAlgorithm: Modified Selection Sort\nCode: Modified Selection Sort\nAlgorithm: Ordered Dict\nCode: Ordered Dict\nUnit Test\n\nConstraints\n\nCan we use extra data structures?\nYes\n\n\n\nTest Cases\n\ngroup_ordered([1,2,1,3,2]) -> [1,1,2,2,3]\ngroup_ordered(['a','b','a') -> ['a','a','b']\ngroup_ordered([1,1,2,3,4,5,2,1]-> [1,1,1,2,2,3,4,5]\ngroup_ordered([]) -> []\ngroup_ordered([1]) -> [1]\ngroup_ordered(None) -> None\n\nAlgorithm: Modified Selection Sort\n\nSave the relative position of the first-occurence of each item in a list.\nIterate through list of unique items.\nKeep an outer index; scan rest of list, swapping matching items with outer index and incrementing outer index each time. \n\n\n\nComplexity:\n* Time: O(n^2)\n* Space: O(n)\nCode: Modified Selection Sort",
"def make_order_list(list_in):\n order_list = []\n for item in list_in:\n if item not in order_list:\n order_list.append(item)\n return order_list\n\n\ndef group_ordered(list_in):\n if list_in is None:\n return None\n order_list = make_order_list(list_in)\n current = 0\n for item in order_list:\n search = current + 1\n while True:\n try:\n if list_in[search] != item:\n search += 1\n else:\n current += 1\n list_in[current], list_in[search] = list_in[search], list_in[current]\n search += 1\n except IndexError:\n break\n return list_in",
"Algorithm: Ordered Dict.\n\nUse an ordered dict to track insertion order of each key\nFlatten list of values.\n\nComplexity:\n\nTime: O(n)\nSpace: O(n)\n\nCode: Ordered Dict",
"from collections import OrderedDict\n\ndef group_ordered_alt(list_in):\n if list_in is None:\n return None\n result = OrderedDict()\n for value in list_in:\n result.setdefault(value, []).append(value)\n return [v for group in result.values() for v in group]",
"Unit Test\nThe following unit test is expected to fail until you solve the challenge.",
"%%writefile test_group_ordered.py\nfrom nose.tools import assert_equal\n\n\nclass TestGroupOrdered(object):\n def test_group_ordered(self, func):\n\n assert_equal(func(None), None)\n print('Success: ' + func.__name__ + \" None case.\")\n assert_equal(func([]), [])\n print('Success: ' + func.__name__ + \" Empty case.\")\n assert_equal(func([1]), [1])\n print('Success: ' + func.__name__ + \" Single element case.\")\n assert_equal(func([1, 2, 1, 3, 2]), [1, 1, 2, 2, 3])\n assert_equal(func(['a', 'b', 'a']), ['a', 'a', 'b'])\n assert_equal(func([1, 1, 2, 3, 4, 5, 2, 1]), [1, 1, 1, 2, 2, 3, 4, 5])\n assert_equal(func([1, 2, 3, 4, 3, 4]), [1, 2, 3, 3, 4, 4])\n print('Success: ' + func.__name__)\n\n\ndef main():\n test = TestGroupOrdered()\n test.test_group_ordered(group_ordered)\n try:\n test.test_group_ordered(group_ordered_alt)\n except NameError:\n # Alternate solutions are only defined\n # in the solutions file\n pass\n\nif __name__ == '__main__':\n main()\n\n%run -i test_group_ordered.py"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ucsd-ccbb/jupyter-genomics
|
notebooks/rnaSeq/Functional_Enrichment_Analysis_Pathway_Visualization.ipynb
|
mit
|
[
"ToppGene & Pathway Visualization\nAuthors: N. Mouchamel, L. Huang, T. Nguyen, K. Fisch\nEmail: Kfisch@ucsd.edu\nDate: June 2016\nGoal: Create Jupyter notebook that runs an enrichment analysis in ToppGene through the API and runs Pathview to visualize the significant pathways outputted by ToppGene.\ntoppgene website: https://toppgene.cchmc.org/enrichment.jsp\nSteps: \n1. Read in differentially expressed gene list.\n2. Convert differentially expressed gene list to xml file as input to ToppGene API.\n3. Run enrichment analysis of DE genes through ToppGene API.\n4. Parse ToppGene API results from xml to csv and Pandas data frame.\n5. Display results in notebook.\n6. Extract just the KEGG pathwway IDs from the ToppGene output.\n7. Manually switch from Python2 to R kernel.\n8. Extract entrez ID and log2FC from the input DE genes.\n9. Create vector of significant pathways from ToppGene.\n10. Run Pathview (https://bioconductor.org/packages/release/bioc/html/pathview.html) in R to create colored pathway maps.\n11. Manually switch from R kernel to Python2.\n12. Display each of the significant pathway colored overlay diagrams in the jupyter notebook.",
"#Import Python modules\nimport os\nimport pandas\nimport qgrid\nimport mygene\n\n#Change directory\nos.chdir(\"/data/test\")\n",
"Read in differential expression results as a Pandas data frame to get differentially expressed gene list",
"#Read in DESeq2 results\ngenes=pandas.read_csv(\"DE_genes.csv\")\n\n#View top of file\ngenes.head(10)\n\n#Extract genes that are differentially expressed with a pvalue less than a certain cutoff (pvalue < 0.05 or padj < 0.05)\ngenes_DE_only = genes.loc[(genes.pvalue < 0.05)]\n\n#View top of file\ngenes_DE_only.head(10)\n\n#Check how many rows in original genes file\nlen(genes)\n\n#Check how many rows in DE genes file\nlen(genes_DE_only)",
"Translate Ensembl IDs to Gene Symbols and Entrez IDs using mygene.info API",
"#Extract list of DE genes (Check to make sure this code works, this was adapted from a different notebook)\nde_list = genes_DE_only[genes_DE_only.columns[0]]\n\n#Remove .* from end of Ensembl ID\nde_list2 = de_list.replace(\"\\.\\d\",\"\",regex=True)\n\n#Add new column with reformatted Ensembl IDs\ngenes_DE_only[\"Full_Ensembl\"] = de_list2\n\n#View top of file \ngenes_DE_only.head(10)\n\n#Set up mygene.info API and query\nmg = mygene.MyGeneInfo()\ngene_ids = mg.getgenes(de_list2, 'name, symbol, entrezgene', as_dataframe=True)\ngene_ids.index.name = \"Ensembl\"\ngene_ids.reset_index(inplace=True)\n\n#View top of file\ngene_ids.head(10)\n\n#Merge mygene.info query results with original DE genes list\nDE_with_ids = genes_DE_only.merge(gene_ids, left_on=\"Full_Ensembl\", right_on=\"Ensembl\", how=\"outer\")\n\n#View top of file\nDE_with_ids.head(10)\n\n#Write results to file\nDE_with_ids.to_csv(\"./DE_genes_converted.csv\")\n\n#Dataframe to only contain gene symbol\nDE_with_ids=pandas.read_csv(\"./DE_genes_converted.csv\")\n\ncols = DE_with_ids.columns.tolist()\ncols.insert(0, cols.pop(cols.index('symbol')))\n\nfor_xmlfile = DE_with_ids.reindex(columns= cols)\n\n#Condense dataframe to contain only gene symbol\nfor_xmlfile.drop(for_xmlfile.columns[[1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11,12,13,14]], axis=1, inplace=True) \n\n#Exclude NaN values\nfor_xmlfile.dropna(axis=0, how='any', thresh=None, subset=None, inplace=True)\n\n#View top of file\nfor_xmlfile.head(10)\n\n#Write results to file\nfor_xmlfile.to_csv(\"./for_xmlfile.csv\", index=False)\n\n#.XML file generator from gene list in .csv file\nimport xml.etree.cElementTree as ET\nimport xml.etree.cElementTree as ElementTree\nimport lxml\n\n#Root element of .xml \"Tree\"\nroot=ET.Element(\"requests\")\n\n#Title/identifier for the gene list inputted into ToppGene API\n#Name it whatever you like\ndoc=ET.SubElement(root, \"toppfun\", id= \"nicole's gene list\")\n\nconfig=ET.SubElement(doc, \"enrichment-config\")\n\ngene_list=ET.SubElement(doc, \"trainingset\")\ngene_list.set('accession-source','HGNC')\n\n#For gene symbol in gene_list\n#Parse through gene_list to create the .xml file\ntoppgene = pandas.read_csv(\"./for_xmlfile.csv\")\n\nfor i in toppgene.ix[:,0]:\n gene_symbol = i\n gene = ET.SubElement(gene_list, \"gene\")\n gene.text= gene_symbol\n\n\ntree = ET.ElementTree(root)\n\n#Function needed for proper indentation of the .xml file\ndef indent(elem, level=0):\n i = \"\\n\" + level*\" \"\n if len(elem):\n if not elem.text or not elem.text.strip():\n elem.text = i + \" \"\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n for elem in elem:\n indent(elem, level+1)\n if not elem.tail or not elem.tail.strip():\n elem.tail = i\n else:\n if level and (not elem.tail or not elem.tail.strip()):\n elem.tail = i\nindent(root)\n\nimport xml.dom.minidom\nfrom lxml import etree\n\n#File to write the .xml file to\n#Include DOCTYPE\nwith open('/data/test/test.xml', 'w') as f:\n f.write('<?xml version=\"1.0\" encoding=\"UTF-8\" ?><!DOCTYPE requests SYSTEM \"https://toppgene.cchmc.org/toppgenereq.dtd\">')\n ElementTree.ElementTree(root).write(f, 'utf-8')\n \n\n#Display .xml file \nxml = xml.dom.minidom.parse('/data/test/test.xml')\npretty_xml_as_string = xml.toprettyxml()\n\nprint(pretty_xml_as_string)",
"Run ToppGene API\nInclude path for the input .xml file and path and name of the output .xml file.\nOutputs all 17 features of ToppGene.",
"!curl -v -H 'Content-Type: text/xml' --data @/data/test/test.xml -X POST https://toppgene.cchmc.org/api/44009585-27C5-41FD-8279-A5FE1C86C8DB > /data/test/testoutfile.xml \n\n#Display output .xml file \nimport xml.dom.minidom\n\nxml = xml.dom.minidom.parse(\"/data/test/testoutfile.xml\") \npretty_xml_as_string = xml.toprettyxml()\n\nprint(pretty_xml_as_string)",
"Parse ToppGene results into Pandas data frame",
"import xml.dom.minidom\nimport pandas as pd\nimport numpy\n\n#Parse through .xml file\ndef load_parse_xml(data_file):\n \"\"\"Check if file exists. If file exists, load and parse the data file. \"\"\"\n if os.path.isfile(data_file):\n print \"File exists. Parsing...\"\n data_parse = ET.ElementTree(file=data_file)\n print \"File parsed.\"\n return data_parse\n \nxmlfile = load_parse_xml(\"/data/test/testoutfile.xml\")\n\n#Generate array of annotation arrays for .csv file\nroot_tree = xmlfile.getroot()\n\ngene_list=[]\n\nfor child in root_tree:\n \n child.find(\"enrichment-results\")\n \n new_array = []\n array_of_arrays=[]\n for type in child.iter(\"enrichment-result\"):\n count = 0\n for annotation in type.iter(\"annotation\"):\n array_of_arrays.append(new_array)\n new_array = []\n new_array.append(type.attrib['type'])\n new_array.append(annotation.attrib['name'])\n new_array.append(annotation.attrib['id'])\n new_array.append(annotation.attrib['pvalue'])\n new_array.append(annotation.attrib['genes-in-query'])\n new_array.append(annotation.attrib['genes-in-term'])\n new_array.append(annotation.attrib['source'])\n \n for gene in annotation.iter(\"gene\"):\n gene_list.append(gene.attrib['symbol'])\n new_array.append(gene_list)\n gene_list =[]\n \n count+= 1\n print \"Number of Annotations for ToppGene Feature - %s: \" % type.attrib['type'] + str(count)\n print \"Total number of significant gene sets from ToppGene: \" + str(len(array_of_arrays))\n #print array_of_arrays\n \n\n\n#Convert array of annotation arrays into .csv file (to be viewed as dataframe)\nimport pyexcel\ndata = array_of_arrays\npyexcel.save_as(array = data, dest_file_name = '/data/test/results.csv')\n\n#Reading in the .csv ToppGene results\ndf=pandas.read_csv('/data/test/results.csv', header=None)\n\n#Label dataframe columns\ndf.columns=['ToppGene Feature','Annotation Name','ID','pValue','Genes-in-Query','Genes-in-Term','Source','Genes']",
"Display the dataframe of each ToppGene feature",
"#Dataframe for GeneOntologyMolecularFunction\ndf.loc[df['ToppGene Feature'] == 'GeneOntologyMolecularFunction']\n\n#Dataframe for GeneOntologyBiologicalProcess\ndf.loc[df['ToppGene Feature'] == 'GeneOntologyBiologicalProcess']\n\n#Dataframe for GeneOntologyCellularComponent\ndf.loc[df['ToppGene Feature'] == 'GeneOntologyCellularComponent']\n\n#Dataframe for Human Phenotype\ndf.loc[df['ToppGene Feature'] == 'HumanPheno']\n\n#Dataframe for Mouse Phenotype\ndf.loc[df['ToppGene Feature'] == 'MousePheno']\n\n#Dataframe for Domain\ndf.loc[df['ToppGene Feature'] == 'Domain']\n\n#Dataframe for Pathways\ndf.loc[df['ToppGene Feature'] == 'Pathway']\n\n#Dataframe for Pubmed\ndf.loc[df['ToppGene Feature'] == 'Pubmed']\n\n#Dataframe for Interactions\ndf.loc[df['ToppGene Feature'] == 'Interaction']\n\n#Dataframe for Cytobands\ndf.loc[df['ToppGene Feature'] == 'Cytoband']\n\n#Dataframe for Transcription Factor Binding Sites\ndf.loc[df['ToppGene Feature'] == 'TranscriptionFactorBindingSite']\n\n#Dataframe for Gene Family\ndf.loc[df['ToppGene Feature'] == 'GeneFamily']\n\n#Dataframe for Coexpression\ndf.loc[df['ToppGene Feature'] == 'Coexpression']\n\n#DataFrame for Coexpression Atlas\ndf.loc[df['ToppGene Feature'] == 'CoexpressionAtlas']\n\n#Dataframe for Computational\ndf.loc[df['ToppGene Feature'] == 'Computational']\n\n#Dataframe for MicroRNAs\ndf.loc[df['ToppGene Feature'] == 'MicroRNA']\n\n#Dataframe for Drugs\ndf.loc[df['ToppGene Feature'] == 'Drug']\n\n#Dataframe for Diseases\ndf.loc[df['ToppGene Feature'] == 'Disease']",
"Extract the KEGG pathway IDs from the ToppGene output (write to csv file)",
"#Number of significant KEGG pathways\ntotal_KEGG_pathways = df.loc[df['Source'] == 'BioSystems: KEGG']\nprint \"Number of significant KEGG pathways: \" + str(len(total_KEGG_pathways.index))\n\ndf = df.loc[df['Source'] == 'BioSystems: KEGG']\ndf.to_csv('/data/test/keggpathways.csv', index=False)\n\nmapping_df = pandas.read_csv('/data/test/KEGGmap.csv')\nmapping_df = mapping_df.loc[mapping_df['Organism'] == 'Homo sapiens ']\nmapping_df.head(10)",
"Create dataframe that includes the KEGG IDs that correspond to the significant pathways outputted by ToppGene",
"#Create array of KEGG IDs that correspond to the significant pathways outputted by ToppGene\nKEGG_ID_array = []\n\nfor ID in df.ix[:,2]:\n x = int(ID)\n \n for index,BSID in enumerate(mapping_df.ix[:,0]):\n y = int(BSID)\n if x == y:\n KEGG_ID_array.append(mapping_df.get_value(index,1,takeable=True))\n \nprint KEGG_ID_array\n\n#Transform array into KEGG ID dataframe\nKEGG_IDs = pandas.DataFrame()\nKEGG_IDs['KEGG ID'] = KEGG_ID_array\nKEGG_IDs.to_csv('/data/test/keggidlist.csv', index=False)\n\nno_KEGG_ID = pandas.read_csv('/data/test/keggpathways.csv')\nKEGG_IDs = pandas.read_csv('/data/test/keggidlist.csv')\n\n#Append KEGG ID dataframe to dataframe containing the significant pathways outputted by ToppGene\nKEGG_ID_included = pd.concat([no_KEGG_ID, KEGG_IDs], axis = 1)\nKEGG_ID_included.to_csv('/data/test/KEGG_ID_included.csv', index=False)\nKEGG_ID_included",
"Run Pathview to map and render user data on the pathway graphs outputted by ToppGene\nSwitch to R kernel here",
"#Set working directory\nworking_dir <- \"/data/test\" \nsetwd(working_dir)\ndate <- Sys.Date()\n\n#Set R options\noptions(jupyter.plot_mimetypes = 'image/png')\noptions(useHTTPS=FALSE)\noptions(scipen=500)\n\n#Load R packages from CRAN and Bioconductor\nrequire(limma)\nrequire(edgeR)\nrequire(DESeq2)\nrequire(RColorBrewer)\nrequire(cluster)\nlibrary(gplots)\nlibrary(SPIA)\nlibrary(graphite)\nlibrary(PoiClaClu)\nlibrary(ggplot2)\nlibrary(pathview)\nlibrary(KEGG.db)\nlibrary(mygene)\nlibrary(splitstackshape)\nlibrary(reshape)\nlibrary(hwriter)\nlibrary(ReportingTools)\nlibrary(\"EnrichmentBrowser\")\nlibrary(IRdisplay)\nlibrary(repr)\nlibrary(png)",
"Create matrix-like structure to contain entrez ID and log2FC for gene.data input",
"#Extract entrez ID and log2FC from the input DE genes\n#Read in differential expression results as a Pandas data frame to get differentially expressed gene list\n#Read in DE_genes_converted results (generated in jupyter notebook)\ngenes <- read.csv(\"DE_genes_converted.csv\")[,c('entrezgene', 'log2FoldChange')]\n\n#Remove NA values\ngenes<-genes[complete.cases(genes),]\n\nhead(genes,10)\n\n#Transform data frame into matrix (gene.data in Pathview only takes in a matrix formatted data)\ngenedata<-matrix(c(genes[,2]),ncol=1,byrow=TRUE)\nrownames(genedata)<-c(genes[,1])\ncolnames(genedata)<-c(\"log2FoldChange\")\ngenedata <- as.matrix(genedata)\nhead(genedata,10)",
"Create vector containing the KEGG IDs of all the significant target pathways",
"#Read in pathways that you want to map to (from toppgene pathway results)\n#Store as a vector\npathways <- read.csv(\"/data/test/keggidlist.csv\")\nhead(pathways, 12)\npathways.vector<-as.vector(pathways$KEGG.ID)\npathways.vector\n\n#Loop through all the pathways in pathways.vector\n#Generate Pathview pathways for each one (native KEGG graphs)\ni<-1\nfor (i in pathways.vector){\n pv.out <- pathview(gene.data = genedata[, 1], pathway.id = i, \n species = \"hsa\", out.suffix = \"toppgene_native_kegg_graph\", kegg.native = T)\n \n #str(pv.out)\n #head(pv.out$plot.data.gene)\n}\n\n#Loop through all the pathways in pathways.vector\n#Generate Pathview pathways for each one (Graphviz layouts)\ni<-1\nfor (i in pathways.vector){\n pv.out <- pathview(gene.data = genedata[, 1], pathway.id = i, \n species = \"hsa\", out.suffix = \"toppgene_graphviz_layout\", kegg.native = F)\n \n str(pv.out)\n head(pv.out$plot.data.gene)\n #head(pv.out$plot.data.gene)\n\n}",
"Display each of the signficant pathway colored overlay diagrams\nSwitch back to py27 kernel here",
"#Display native KEGG graphs\nimport matplotlib.image as mpimg\nimport matplotlib.pyplot as plt\nimport pandas\n%matplotlib inline\n\n#for loop that iterates through the pathway images and displays them \npathways = pandas.read_csv(\"/data/test/keggidlist.csv\")\npathways\n\nfor i in pathways.ix[:,0]:\n \n image = i\n address = \"/data/test/%s.toppgene_native_kegg_graph.png\" % image\n \n img = mpimg.imread(address)\n plt.imshow(img)\n plt.gcf().set_size_inches(50,50)\n print i\n plt.show()",
"Weijun Luo and Cory Brouwer. Pathview: an R/Bioconductor package for pathway-based data integration and visualization. \n Bioinformatics, 29(14):1830-1831, 2013. doi: 10.1093/bioinformatics/btt285.\nImplement KEGG_pathway_vis Jupyter Notebook (by L. Huang)\nOnly works for one pathway (first one)",
"#Import more python modules\nimport sys\n\n#To access visJS_module and entrez_to_symbol module\nsys.path.append(os.getcwd().replace('/data/test', '/data/CCBB_internal/interns/Lilith/PathwayViz')) \nimport visJS_module\nfrom ensembl_to_entrez import entrez_to_symbol\n\nimport networkx as nx\nimport matplotlib.pyplot as plt\nimport pymongo\nfrom itertools import islice\nimport requests\nimport math\nimport spectra\nfrom bioservices.kegg import KEGG\n\nimport imp\nimp.reload(visJS_module)\n\n#Latex rendering of text in graphs\nimport matplotlib as mpl\nmpl.rc('text', usetex = False)\nmpl.rc('font', family = 'serif')\n\n\n% matplotlib inline\n\ns = KEGG()\n\n#Lowest p value pathway\n#But you can change the first parameter in pathways.get_value to see different pathways in the pathways list!\npathway = pathways.get_value(0,0, takeable=True)\nprint pathway\naddress = \"/data/test/%s.xml\" % pathway\n\n#Parse pathway's xml file and get the root of the xml file\ntree = ET.parse(address)\nroot = tree.getroot()\n\nres = s.parse_kgml_pathway(pathway) \n\nprint res['relations']\n\nprint res['entries']\n\nG=nx.DiGraph()\n\n#Add nodes to networkx graph\nfor entry in res['entries']:\n G.add_node(entry['id'], entry )\n\nprint len(G.nodes(data=True))\n\n#Get symbol of each node\ntemp_node_id_array = []\nfor node, data in G.nodes(data=True):\n\n if data['type'] == 'gene':\n if ' ' not in data['name']:\n G.node[node]['symbol'] = data['gene_names'].split(',', 1)[0]\n else: \n result = data['name'].split(\"hsa:\")\n result = ''.join(result)\n result = result.split()\n for index, gene in enumerate(result):\n if index == 0:\n gene_symbol = str(entrez_to_symbol(gene))\n else:\n gene_symbol = gene_symbol + ', ' + str(entrez_to_symbol(gene))\n G.node[node]['symbol'] = gene_symbol\n elif data['type'] == 'compound':\n gene_symbol = s.parse(s.get(data['name']))['NAME']\n G.node[node]['gene_names'] = ' '.join(gene_symbol)\n G.node[node]['symbol'] = gene_symbol[0].replace(';', '')\n\nprint G.nodes(data=True)\n\n#Get x and y coordinates for each node\nseen_coord = set()\ncoord_array = []\ndupes_coord = []\nfor entry in root.findall('entry'):\n node_id = entry.attrib['id']\n graphics = entry.find('graphics')\n if (graphics.attrib['x'], graphics.attrib['y']) in seen_coord:\n G.node[node_id]['x'] = (int(graphics.attrib['x']) + .1) * 2.5\n G.node[node_id]['y'] = (int(graphics.attrib['y']) + .1) * 2.5\n seen_coord.add((G.node[node_id]['x'], G.node[node_id]['y']))\n print node_id\n\n else:\n seen_coord.add((graphics.attrib['x'], graphics.attrib['y']))\n G.node[node_id]['x'] = int(graphics.attrib['x']) * 2.5\n G.node[node_id]['y'] = int(graphics.attrib['y']) * 2.5\n \nprint dupes_coord\nprint seen_coord\n\n#Handle undefined nodes\ncomp_dict = dict()\nnode_to_comp = dict()\ncomp_array_total = [] #Array containing all component nodes\n\nfor entry in root.findall('entry'):\n #Array to store components of undefined nodes\n component_array = []\n \n if entry.attrib['name'] == 'undefined':\n node_id = entry.attrib['id']\n \n #Find components\n for index, component in enumerate(entry.iter('component')):\n component_array.append(component.get('id')) \n #Check to see which elements are components\n comp_array_total.append(component.get('id'))\n node_to_comp[component.get('id')] = node_id\n\n #Store into node dictionary\n G.node[node_id]['component'] = component_array\n comp_dict[node_id] = component_array\n \n #Store gene names\n gene_name_array = []\n for index, component_id in enumerate(component_array):\n if index == 0:\n 
gene_name_array.append(G.node[component_id]['gene_names'])\n else:\n gene_name_array.append('\\n' + G.node[component_id]['gene_names'])\n \n G.node[node_id]['gene_names'] = gene_name_array\n \n #Store gene symbols\n gene_symbol_array = []\n for index, component_id in enumerate(component_array):\n if index == 0:\n gene_symbol_array.append(G.node[component_id]['symbol'])\n else:\n gene_symbol_array.append('\\n' + G.node[component_id]['symbol'])\n\n G.node[node_id]['symbol'] = gene_symbol_array\n \nprint G.node\n\nedge_list = []\nedge_pairs = []\n\n#Add edges to networkx graph\n#Redirect edges to point to undefined nodes containing components in order to connect graph\nfor edge in res['relations']:\n source = edge['entry1']\n dest = edge['entry2']\n \n if (edge['entry1'] in comp_array_total) == True: \n source = node_to_comp[edge['entry1']]\n \n if (edge['entry2'] in comp_array_total) == True:\n dest = node_to_comp[edge['entry2']] \n edge_list.append((source, dest, edge))\n edge_pairs.append((source,dest))\n \n #Check for duplicates\n if (source, dest) in G.edges():\n name = []\n value = []\n link = []\n name.append(G.edge[source][dest]['name'])\n value.append(G.edge[source][dest]['value'])\n link.append(G.edge[source][dest]['link'])\n name.append(edge['name'])\n value.append(edge['value'])\n link.append(edge['link'])\n G.edge[source][dest]['name'] = '\\n'.join(name)\n G.edge[source][dest]['value'] = '\\n'.join(value)\n G.edge[source][dest]['link'] = '\\n'.join(link)\n else:\n G.add_edge(source, dest, edge)\n \nprint G.edges(data=True)\n\nedge_to_name = dict()\nfor edge in G.edges():\n edge_to_name[edge] = G.edge[edge[0]][edge[1]]['name']\n\nprint edge_to_name\n\n#Set colors of edges\nedge_to_color = dict()\nfor edge in G.edges():\n if 'activation' in G.edge[edge[0]][edge[1]]['name']:\n edge_to_color[edge] = 'green'\n elif 'inhibition' in G.edge[edge[0]][edge[1]]['name']:\n edge_to_color[edge] = 'red'\n else:\n edge_to_color[edge] = 'blue'\n \nprint edge_to_color\n\n#Remove component nodes from graph\nG.remove_nodes_from(comp_array_total)\n\n#Get nodes in graph\nnodes = G.nodes()\nnumnodes = len(nodes)\n\nprint numnodes\n\nprint G.node\n\n#Get symbol of nodes\nnode_to_symbol = dict()\nfor node in G.node:\n if G.node[node]['type'] == 'map':\n node_to_symbol[node] = G.node[node]['gene_names']\n else:\n if 'symbol' in G.node[node]:\n node_to_symbol[node] = G.node[node]['symbol']\n elif 'gene_names'in G.node[node]:\n node_to_symbol[node] = G.node[node]['gene_names']\n else: \n node_to_symbol[node] = G.node[node]['name']\n\n#Get name of nodes\nnode_to_gene = dict()\nfor node in G.node:\n node_to_gene[node] = G.node[node]['gene_names']\n\n#Get x coord of nodes\nnode_to_x = dict()\nfor node in G.node:\n node_to_x[node] = G.node[node]['x']\n\n#Get y coord of nodes\nnode_to_y = dict()\nfor node in G.node:\n node_to_y[node] = G.node[node]['y']\n\n#Log2FoldChange \nDE_genes_df = pandas.read_csv(\"/data/test/DE_genes_converted.csv\")\nDE_genes_df.head(10)\n\nshort_df = DE_genes_df[['_id', 'Ensembl', 'log2FoldChange']]\nshort_df.head(10)\n\nshort_df.to_dict('split')\n\n#Remove NA values\ngene_to_log2fold = dict()\n\nfor entry in short_df.to_dict('split')['data']:\n if isinstance(entry[0], float):\n if math.isnan(entry[0]):\n gene_to_log2fold[entry[1]] = entry[2]\n else:\n gene_to_log2fold[entry[0]] = entry[2]\n else:\n gene_to_log2fold[entry[0]] = entry[2]\n \nprint gene_to_log2fold\n\n#Create color scale with negative as green and positive as red\nmy_scale = spectra.scale([ \"green\", \"#CCC\", \"red\" 
]).domain([ -4, 0, 4 ])\n\nid_to_log2fold = dict()\n\nfor node in res['entries']:\n log2fold_array = []\n if node['name'] == 'undefined':\n print 'node is undefined'\n elif node['type'] == 'map':\n print 'node is a pathway'\n else:\n #print node['name']\n result = node['name'].split(\"hsa:\")\n result = ''.join(result)\n result = result.split()\n #print result\n for item in result:\n if item in gene_to_log2fold.keys():\n log2fold_array.append(gene_to_log2fold[item])\n if len(log2fold_array) > 0:\n id_to_log2fold[node['id']] = log2fold_array\n \nprint id_to_log2fold\n\n#Color nodes based on log2fold data\nnode_to_color = dict()\nfor node in G.nodes():\n if node in id_to_log2fold:\n node_to_color[node] = my_scale(id_to_log2fold[node][0]).hexcode\n\n else:\n node_to_color[node] = '#f1f1f1'\n\nprint node_to_color\n\n#Get number of edges in graph\nedges = G.edges()\nnumedges = len(edges)\n\nprint numedges\n\nprint G.edges(data=True)\n\n#Change directory\nos.chdir(\"/data/CCBB_internal/interns/Nicole/ToppGene\")\n\n#Map to indices for source/target in edges\nnode_map = dict(zip(nodes,range(numnodes)))\n\n#Dictionaries that hold per node and per edge attributes\nnodes_dict = [{\"id\":node_to_gene[n],\"degree\":G.degree(n),\"color\":node_to_color[n], \"node_shape\":\"box\",\n \"node_size\":10,'border_width':1, \"id_num\":node_to_symbol[n], \"x\":node_to_x[n], \"y\":node_to_y[n]} for n in nodes]\n \nedges_dict = [{\"source\":node_map[edges[i][0]], \"target\":node_map[edges[i][1]], \n \"color\":edge_to_color[edges[i]], \"id\":edge_to_name[edges[i]], \"edge_label\":'',\n \"hidden\":'false', \"physics\":'true'} for i in range(numedges)] \n\n#HTML file label for first graph (must manually increment later)\ntime = 1700\n\n#Make edges thicker\n#Create and display the graph here\nvisJS_module.visjs_network(nodes_dict, edges_dict, time_stamp = time, node_label_field = \"id_num\", \n edge_width = 3, border_color = \"black\", edge_arrow_to = True, edge_font_size = 15, edge_font_align= \"top\",\n physics_enabled = False, graph_width = 1000, graph_height = 1000)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
MartyWeissman/Python-for-number-theory
|
PwNT Notebook 5.ipynb
|
gpl-3.0
|
[
"Part 5: Modular arithmetic and primality testing\nPython, like most programming languages, comes with a \"mod operation\" % to compute remainders. This makes basic modular arithmetic straightforward. The more interesting aspects -- from the standpoint of programming and number theory -- arise in algorithms related to modular arithmetic. Here we focus on Pingala's algorithm, a method for computing exponents based on an ancient method for enumerating poetic meters. We analyze the expected performance of this algorithm using Python's timeit function for timing and randint function to randomize input parameters. We also see how the performance depends on the number of bits of the input parameters. In this way, we gently introduce some practical and theoretical issues in computer science.\nWe apply this algorithm to implement the Miller-Rabin primality test. This test can very quickly determine (probabilistically) whether a large (hundreds or thousands of digits!) number is prime. Our implementation is deterministic for smaller (under 64 bits) numbers. This programming tutorial complements Chapters 5 and 6 of An Illustrated Theory of Numbers. \nTable of Contents\n\nCalculations in modular arithmetic\nThe Miller-Rabin Primality Test\n\n<a id='modcalc'></a>\nCalculations in modular arithmetic\nThe mod (%) operator\nFor basic modular arithmetic, one can use Python's \"mod operator\" % to obtain remainders. There is a conceptual difference between the \"mod\" of computer programming and the \"mod\" of number theorists. In computer programming, \"mod\" is typically the operator which outputs the remainder after division. So for a computer scientist, \"23 mod 5 = 3\", because 3 is the remainder after dividing 23 by 5.\nNumber theorists (starting with Gauss) take a radical conceptual shift. A number theorist would write $23 \\equiv 3$ mod $5$, to say that 23 is congruent to 3 modulo 5. In this sense \"mod 5\" (standing for \"modulo 5\") is a prepositional phrase, describing the \"modular world\" in which 23 is the same as (\"congruent to\") 3.\nTo connect these perspectives, we would say that the computer scientist's statement \"23 mod 5 = 3\" gives the natural representative 3 for the number 23 in the mathematician's \"ring of integers mod 5\". (Here \"ring\" is a term from abstract algebra.)",
"23 % 5 # What is the remainder after dividing 23 by 5? What is the natural representative of 23 modulo 5?",
"The miracle that makes modular arithmetic work is that the end-result of a computation \"mod m\" is not changed if one works \"mod m\" along the way. At least this is true if the computation only involves addition, subtraction, and multiplication.",
"((17 + 38) * (105 - 193)) % 13 # Do a bunch of stuff, then take the representative modulo 13.\n\n(((17%13) + (38%13)) * ((105%13) - (193%13)) ) % 13 # Working modulo 13 along the way.",
"It might seem tedious to carry out this \"reduction mod m\" at every step along the way. But the advantage is that you never have to work with numbers much bigger than the modulus (m) if you can reduce modulo m at each step.\nFor example, consider the following computation.",
"(3**999) % 1000 # What are the last 3 digits of 3 raised to the 999 power?",
"The result will probably have the letter \"L\" at the end, indicating that Python switched into \"long-integer\" mode along the way. Indeed, the computation asked Python to first raise 3 to the 999 power (a big number!) and then compute the remainder after division by 1000 (the last 3 digits).\nBut what if we could reduce modulo 1000 at every step? Then, as Python multiplies terms, it will never have to multiply numbers bigger than 1000. Here is a brute-force implementation.",
"P = 1 # The \"running product\" starts at 1.\nfor i in range(999): # We repeat the following line 999 times, as i traverses the list [0,1,...,998].\n P = (P * 3)%1000 # We reduce modulo 1000 along the way!\nprint P",
"The result of this computation should not have the letter \"L\" at the end, because Python never had to work with long integers. Computations with long integers are time-consuming, and unnecessary if you only care about the result of a computation modulo a small number m.\nPerformance analysis\nThe above loop works quickly, but it is far from optimal. Let's carry out some performance analysis by writing two powermod functions.",
"def powermod_1(base, exponent, modulus): # The naive approach.\n return (base**exponent) % modulus \n\ndef powermod_2(base, exponent, modulus):\n P = 1 # Start the running product at 1.\n e = 0\n while e < exponent: # The while loop saves memory, relative to a for loop, by avoiding the storage of a list.\n P = (P * base) % modulus\n e = e + 1\n return P",
"Now let's compare the performance of these two functions. It's also good to double check the code in powermod_2 and run it to check the results. The reason is that loops like the while loop above are classic sources of Off by One Errors. Should e start at zero or one? Should the while loop have the condition e < exponent or e <= exponent? One has to unravel the loop carefully to be completely certain, and testing is a necessity to avoid bugs!\nWe can compare the performance of the two functions with identical input parameters below.",
"%timeit powermod_1(3,999,1000)\n\n%timeit powermod_2(3,999,1000)",
"The second powermod function was probably much slower, even though we reduced the size of the numbers along the way. But perhaps we just chose some input parameters (3,999,1000) which were inconvenient for the second function. To compare the performance of the two functions, it would be useful to try many different inputs.\nFor this, we use Python's timeit features in a different way. Above we used the \"magic\" %timeit to time a line of code. The magic command %timeit is convenient but limited in flexibility. Here we use a larger Python timeit package which we import (as TI) and demonstrate below.",
"import timeit as TI\n\nTI.timeit('powermod_1(3,999,1000)', \"from __main__ import powermod_1\", number=10000)\n\nTI.timeit('powermod_2(3,999,1000)', \"from __main__ import powermod_2\", number=10000)",
"The syntax of the timeit function is a bit challenging. The first parameter is the Python code which we are timing (as a string), in this case powermod_*(3,999,1000). The second parameter probably looks strange. It exists because the timeit function sets up a little isolation chamber to run the code -- within this isolation chamber, it only knows standard Python commands and not the new functions you've created. So you have to import your functions (powermod_1 or powermod_2) into its isolation box. Where are these imported from? They are contained in __main__ which is the location of all of your other code. Finally, the third parameter number is the number of times the timeit function will repeat the code (by default, this might be a large number like 1000000). \nThe output of the timeit function is a float, which represents the number of seconds taken for all of the repetitions. Contrast this with the timeit magic, which found the average. So you need to divide by the number parameter (10000 in the above examples) to find the average time taken.\nUsing the timeit function, we can compare the performance of the two powermod functions on multiple inputs, wrapping the whole comparison process in a bigger function. We choose our inputs randomly in order to estimate the expected performance of our functions. To choose random inputs, Python has a package aptly called random.",
"from random import randint # randint chooses random integers.\n\nprint \"My number is \",randint(1,10) # Run this line many times over to see what happens!",
"The randint(a,b) command chooses a random integer between a and b, inclusive! Unlike the range(a,b) command which iterates from a to b-1, the randint command includes both a and b as possibilities. The following lines iterate the randint(1,10) and keep track of how often each output occurs. The resulting frequency distribution should be nearly flat, i.e., each number between 1 and 10 should occur about 10% of the time.",
"Freq = {1:0, 2:0, 3:0, 4:0, 5:0, 6:0, 7:0, 8:0, 9:0, 10:0} # We prefer a dictionary here.\nfor t in range(10000):\n n = randint(1,10) # Choose a random number between 1 and 10.\n Freq[n] = Freq[n] + 1\n\nprint Freq",
"For fun, and as a template for other explorations, we plot the frequencies in a histogram.",
"%matplotlib inline\nimport matplotlib.pyplot as plt\nplt.bar(Freq.keys(), Freq.values()) # The keys 1,...,10 are used as bins. The values are used as bar heights.\nplt.show()",
"Putting together the randint function and the timeit function, we can compare the performance of powermod_1 and powermod_2 when given random inputs.",
"time_1 = 0 # Tracking the time taken by the powermod_1 function.\ntime_2 = 0 # Tracking the time taken by the powermod_2 function.\nfor t in range(1000): # One thousand samples are taken!\n base = randint(10,99) # A random 2-digit base.\n exponent = randint(1000,1999) # A random 3-digit exponent.\n modulus = randint(1000,1999) # A random 3-digit modulus.\n \n # Note in the lines below that we have to import the functions powermod_1, powermod_2 and \n # the variables base, exponent, modulus, into the isolation chamber used by timeit.\n # We set number=10 to allow 10 trials of each function on each sample input.\n # We do a head-to-head comparison of the two functions on the same inputs!\n # Note that when the lines get too long in Python, you can press <enter>/<return> to start a new line.\n # Python will ignore the line break. Just keep things indented for clarity.\n \n time_1 = time_1 + TI.timeit('powermod_1(base,exponent,modulus)', \n \"from __main__ import powermod_1, base, exponent, modulus\", number=10)\n time_2 = time_2 + TI.timeit('powermod_2(base,exponent,modulus)', \n \"from __main__ import powermod_2, base, exponent, modulus\", number=10)\n \nprint \"powermod_1 took %f seconds.\"%(time_1)\nprint \"powermod_2 took %f seconds.\"%(time_2) # Which is faster?",
"Now we can be pretty sure that the powermod_1 function is faster (perhaps by a factor of 8-10) than the powermod_2 function we designed. At least, this is the case for inputs in the 2-3 digit range that we sampled. But why? We reduced the complexity of the calculation by using the mod operation % throughout. Here are a few issues one might suspect.\n\nThe mod operation itself takes a bit of time. Maybe that time added up in powermod_2?\nThe Python power operation ** is highly optimized already, and outperforms our while loop.\nWe used more multiplications than necessary.\n\nIt turns out that the mod operation is extremely fast... as in nanoseconds (billionths of a second) fast.",
"%timeit 1238712 % 1237",
"So the speed difference is not due to the number of mod operations. But the other issues are relevant. The Python developers have worked hard to make it run fast -- built-in operations like ** will almost certainly be faster than any function that you write with a loop in Python. The developers have written programs in the C programming language (typically) to implement operations like ** (see the CPython implementation, if you wish); their programs have been compiled into machine code -- the basic sets of instructions that your computer understands (and that are not meant for people to understand). When you call a built-in operation like **, Python just tells your computer to run the developers' optimized and precompiled machine code... this is very fast! When you run your own loop, Python is basically converting the code to machine code \"on the fly\" and this is slower.\nStill, it is unfortunate to use long integers if you ask Python to compute (3**999) % 1000. The good news is that such modular exponents are so frequently used that the Python developers have a built-in operation: the pow function.\nThe pow function has two versions. The simplest version pow(b,e) raises b to the e power. It is the same as computing b ** e. But it also has a modular version! The command pow(b,e,m) raises b to the e modulo m, efficiently reducing modulo m along the way.",
"pow(3,999) # A long number.\n\npow(3,999,1000) # Note that there's no L at the end!\n\npow(3,999) % 1000 # The old way\n\n%timeit pow(3,999,1000)\n\n%timeit pow(3,999) % 1000",
"The pow(b,e,m) command should give a significant speedup, as compared to the pow(b,e) command. Remember that ns stands for nanoseconds!\nExponentiation runs so quickly because not only is Python reducing modulo m along the way, it is performing a surprisingly small number of multiplications. In our loop approach, we computed $3^{999}$ by multiplying repeatedly. There were 999 multiplications! But consider this carefully -- did we need to perform so many multiplications? Can you compute $3^{999}$ with far fewer multiplications? What if you can place results in memory along the way? \nIn the next section, we study a very efficient algorithm for computing such exponents. The goal in designing a good algorithm is to create something which runs quickly, minimizes the need for memory, and runs reliably for all the necessary input values. Often there are trade-offs between speed and memory usage, but our exponentiation algorithm will be excellent in both respects. The ideas go back to Pingala, an Indian mathematician of the 2nd or 3rd century BCE, who developed his ideas to enumerate possible poetic meters (arrangements of long and short syllables into verses of a given length). \nYou may wonder why it is necessary to learn the algorithm at all, if Python has an optimized algorithm built into its pow command. First, it is interesting! But also, we will need to understand the algorithm in finer detail to implement the Miller-Rabin test: a way of quickly testing whether very large numbers are prime.\nExercises\n\n\nAdapt the solve_LDE function from PwNT Notebook 2, in order to write a function modular_inverse(a,m) which computes the multiplicative inverse of a modulo m (if they are coprime).\n\n\nUse the timeit and randint functions to investigate how the speed of the command pow(a,e,m) depends on how many digits the numbers a, e, and m have. Note that randint(10**(d-1), 10**d - 1) will produce a random integer with d digits. If you hold two of these variables fixed, consider how the time changes as the third variable is changed. \n\n\nImagine that you are going to compute $3^{100}$ by multiplying positive integers together. If each multiplication operation costs 1 dollar, how much money do you need to spend? You can assume that remembering previously computed numbers is free. What if it costs 1 dollar each time you need to place a number into memory or recover a number from memory? What is the cheapest way to compute $3^{100}$?\n\n\n<a id='millerrabin'></a>\nThe Miller-Rabin primality test\nFermat's Little Theorem and the ROO property of primes\nFermat's Little Theorem states that if $p$ is a prime number, and $GCD(a,p) = 1$, then $$a^{p-1} \\equiv 1 \\text{ mod } p.$$\nUnder the assumptions above, if we ask Python to compute (a**(p-1))%p, or even better, pow(a,p-1,p), the result should be 1. We use and refine this idea to develop a powerful and practical primality test. Let's begin with a few checks of Fermat's Little Theorem.",
"pow(3,36,37) # a = 3, p = 37, p-1 = 36\n\npow(17,100,101) # 101 is prime.\n\npow(303, 100, 101) # Why won't we get 1?\n\npow(5,90,91) # What's the answer?\n\npow(7,12318, 12319) # What's the answer?",
"We can learn something from the previous two examples. Namely, 91 and 12319 are not prime numbers. We say that 7 witnesses the non-primality of 12319. Moreover, we learned this fact without actually finding a factor of 12319! Indeed, the factors of 12319 are 97 and 127, which have no relationship to the \"witness\" 7.\nIn this way, Fermat's Little Theorem -- a statement about prime numbers -- can be turned into a way of discovering that numbers are not prime. After all, if $p$ is not prime, then what are the chances that $a^{p-1} \\equiv 1$ mod $p$ by coincidence?",
"pow(3,90,91)",
"Well, ok. Sometimes coincidences happen. We say that 3 is a bad witness for 91, since 91 is not prime, but $3^{90} \\equiv 1$ mod $91$. But we could try multiple bases (witnesses). We can expect that someone (some base) will witness the nonprimality. Indeed, for the non-prime 91 there are many good witnesses (ones that detect the nonprimality).",
"for witness in range(1,20):\n flt = pow(witness, 90, 91)\n if flt == 1:\n print \"%d is a bad witness.\"%(witness)\n else:\n print \"%d raised to the 90th power equals %d, mod 91\"%(witness, flt)",
"For some numbers -- the Carmichael numbers -- there are more bad witnesses than good witnesses. For example, take the Carmichael number 41041, which is not prime ($41041 = 7 \\cdot 11 \\cdot 13 \\cdot 41$).",
"for witness in range(1,20):\n flt = pow(witness, 41040, 41041)\n if flt == 1:\n print \"%d is a bad witness.\"%(witness)\n else:\n print \"%d raised to the 41040th power equals %d, mod 41041\"%(witness, flt)",
"For Carmichael numbers, it turns out that finding a good witness is just as difficult as finding a factor. Although Carmichael numbers are rare, they demonstrate that Fermat's Little Theorem by itself is not a great way to be certain of primality. Effectively, Fermat's Little Theorem can often be used to quickly prove that a number is not prime... but it is not so good if we want to be sure that a number is prime.\nThe Miller-Rabin primality test will refine the Fermat's Little Theorem test, by cleverly taking advantage of another property of prime numbers. We call this the ROO (Roots Of One) property: if $p$ is a prime number, and $x^2 \\equiv 1$ mod $p$, then $x \\equiv 1$ or $x \\equiv -1$ mod $p$.",
"for x in range(41): \n if x*x % 41 == 1:\n print \"%d squared is congruent to 1, mod 41.\"%(x) # What numbers do you think will be printed?",
"Note that we use \"natural representatives\" when doing modular arithmetic in Python. So the only numbers whose square is 1 mod 41 are 1 and 40. (Note that 40 is the natural representative of -1, mod 41). If we consider the \"square roots of 1\" with a composite modulus, we find more (as long as the modulus has at least two odd prime factors).",
"for x in range(91):\n if x*x % 91 == 1:\n print \"%d squared is congruent to 1, mod 91.\"%(x) # What numbers do you think will be printed?",
"We have described two properties of prime numbers, and therefore two possible indicators that a number is not prime.\n\n\nIf $p$ is a number which violates Fermat's Little Theorem, then $p$ is not prime.\n\n\nIf $p$ is a number which violates the ROO property, then $p$ is not prime.\n\n\nThe Miller Rabin test will combine these indicators. But first we have to introduce an ancient algorithm for exponentiation.\nPingala's exponentiation algorithm\nIf we wish to compute $5^{90}$ mod $91$, without the pow command, we don't have to carry out 90 multiplications. Instead, we carry out Pingala's algorithm. To understand this algorithm, we begin with the desired exponent (e.g. $e=90$), and carry out a series of steps: replace $e$ by $e/2$ if $e$ is even, and replace $e$ by $(e-1) / 2$ if $e$ is odd. Repeat this until the exponent is decreased to zero. The following function carries out this process on any input $e$.",
"def Pingala(e):\n current_number = e\n while current_number > 0:\n if current_number%2 == 0:\n current_number = current_number / 2\n print \"Exponent %d BIT 0\"%(current_number)\n if current_number%2 == 1:\n current_number = (current_number - 1) / 2\n print \"Exponent %d BIT 1\"%(current_number)\n\nPingala(90)",
"The codes \"BIT 1\" and \"BIT 0\" tell us what happened at each step, and allow the process to be reversed. In a line with BIT 0, the exponent gets doubled as one goes up one line (e.g., from 11 to 22). In a line with BIT 1, the exponent gets doubled then increased by 1 as one goes up one line (e.g., from 2 to 5).\nWe can use these BIT codes in order to compute an exponent. Below, we follow the BIT codes to compute $5^{90}$.",
"n = 1 # This is where we start.\nn = n*n * 5 # BIT 1 is interpreted as square-then-multiply-by-5, since the exponent is doubled then increased by 1.\nn = n*n # BIT 0 is interpreted as squaring, since the exponent is doubled.\nn = n*n * 5 # BIT 1\nn = n*n * 5 # BIT 1 again.\nn = n*n # BIT 0\nn = n*n * 5 # BIT 1\nn = n*n # BIT 0\n\nprint n # What we just computed.\nprint 5**90 # I hope these match!!",
"Note that along the way, we carried out 11 multiplications (count the * symbols), and didn't have to remember too many numbers along the way. So this process was efficient in both time and memory. We just followed the BIT code. The number of multiplications is bounded by twice the number of BITs, since each BIT requires at most two multiplications (squaring then multiplication by 5) to execute.\nWhy did we call the code a BIT code? It's because the code consists precisely of the bits (binary digits) of the exponent 90! Since computers store numbers in binary, the computer \"knows\" the BIT code as soon as it knows the exponent. In Python, the bin command recovers the binary expansion of a number.",
"bin(90) # Compare this to the sequence of bits, from bottom up.",
"Python outputs binary expansions as strings, beginning with '0b'. To summarize, we can compute an exponent like $b^e$ by the following process:\nPingala's Exponentiation Algorithm\n\n\nSet the number to 1.\n\n\nRead the bits of $e$, from left to right.\n a. When the bit is zero, square the number. \n b. When the bit is one, square the number, then multiply by $b$.\n\n\nOutput the number.",
"def pow_Pingala(base,exponent):\n result = 1\n bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent\n for bit in bitstring: # Iterates through the \"letters\" of the string. Here the letters are '0' or '1'.\n if bit == '0':\n result = result*result\n if bit == '1':\n result = result*result * base\n return result\n\npow_Pingala(5,90)",
"It is straightforward to modify Pingala's algorithm to compute exponents in modular arithmetic. Just reduce along the way.",
"def powmod_Pingala(base,exponent,modulus):\n result = 1\n bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent\n for bit in bitstring: # Iterates through the \"letters\" of the string. Here the letters are '0' or '1'.\n if bit == '0':\n result = (result*result) % modulus \n if bit == '1':\n result = (result*result * base) % modulus\n return result\n\npowmod_Pingala(5,90,91)",
"Let's compare the performance of our new modular exponentiation algorithm.",
"%timeit powmod_Pingala(3,999,1000) # Pingala's algorithm, modding along the way.\n\n%timeit powermod_1(3,999,1000) # Raise to the power, then mod, using Python built-in exponents.\n\n%timeit powermod_2(3,999,1000) # Multiply 999 times, modding along the way.\n\n%timeit pow(3,999,1000) # Use the Python built-in modular exponent.",
"The fully built-in modular exponentiation pow(b,e,m) command is probably the fastest. But our implementation of Pingala's algorithm isn't bad -- it probably beats the simple (b**e) % m command (in the powermod_1 function), and it's certainly faster than our naive loop in powermod_2. \nOne can quantify the efficiency of these algorithms by analyzing how the time depends on the size of the input parameters. For the sake of exposition, let us keep the base and modulus constant, and consider how the time varies with the size of the exponent.\nAs a function of the exponent $e$, our powmod_Pingala algorithm required some number of multiplications, bounded by twice the number of bits of $e$. The number of bits of $e$ is approximately $\\log_2(e)$. The size of the numbers multiplied is bounded by the size of the (constant) modulus. In this way, the time taken by the powmod_Pingala algorithm should be $O(\\log(e))$, meaning bounded by a constant times the logarithm of the exponent.\nContrast this with the slow powermod_2 algorithm, which performs $e$ multiplications, and has thus has runtime $O(e)$.\nThe Miller-Rabin test\nPingala's algorithm is effective for computing exponents, in ordinary arithmetic or in modular arithmetic. In this way, we can look for violations of Fermat's Little Theorem as before, to find witnesses to non-primality. But if we look more closely at the algorithm... we can sometimes find violations of the ROO property of primes. This strengthens the primality test.\nTo see this, we create out a \"verbose\" version of Pingala's algorithm for modular exponentiation.",
"def powmod_verbose(base, exponent, modulus):\n result = 1\n print \"Computing %d raised to %d, modulo %d.\"%(base, exponent, modulus)\n print \"The current number is %d\"%(result)\n bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent\n for bit in bitstring: # Iterates through the \"letters\" of the string. Here the letters are '0' or '1'.\n sq_result = result*result % modulus # We need to compute this in any case.\n if bit == '0':\n print \"BIT 0: %d squared is congruent to %d, mod %d\"%(result, sq_result, modulus)\n result = sq_result \n if bit == '1':\n newresult = (sq_result * base) % modulus\n print \"BIT 1: %d squared times %d is congruent to %d, mod %d\"%(result, base, newresult, modulus)\n result = newresult\n return result\n\npowmod_verbose(2,560,561) # 561 is a Carmichael number.",
"The function has displayed every step in Pingala's algorithm. The final result is that $2^{560} \\equiv 1$ mod $561$. So in this sense, $2$ is a bad witness. For $561$ is not prime (3 is a factor), but it does not violate Fermat's Little Theorem when $2$ is the base.\nBut within the verbose output above, there is a violation of the ROO property. The penultimate line states that \"67 squared is congruent to 1, mod 561\". But if 561 were prime, only 1 and 560 are square roots of 1. Hence this penultimate line implies that 561 is not prime (again, without finding a factor!). \nThis underlies the Miller-Rabin test. We carry out Pingala's exponentiation algorithm to compute $b^{p-1}$ modulo $p$. If we find a violation of ROO along the way, then the test number $p$ is not prime. And if, at the end, the computation does not yield $1$, we have found a Fermat's Little Theorem (FLT) violation, and the test number $p$ is not prime.\nThe function below implements the Miller-Rabin test on a number $p$, using a given base.",
"def Miller_Rabin(p, base):\n '''\n Tests whether p is prime, using the given base.\n The result False implies that p is definitely not prime.\n The result True implies that p **might** be prime.\n It is not a perfect test!\n '''\n result = 1\n exponent = p-1\n modulus = p\n bitstring = bin(exponent)[2:] # Chop off the '0b' part of the binary expansion of exponent\n for bit in bitstring: # Iterates through the \"letters\" of the string. Here the letters are '0' or '1'.\n sq_result = result*result % modulus # We need to compute this in any case.\n if sq_result == 1:\n if (result != 1) and (result != exponent): # Note that exponent is congruent to -1, mod p.\n return False # a ROO violation occurred, so p is not prime\n if bit == '0':\n result = sq_result \n if bit == '1':\n result = (sq_result * base) % modulus\n if result != 1:\n return False # a FLT violation occurred, so p is not prime.\n \n return True # If we made it this far, no violation occurred and p might be prime.\n\n Miller_Rabin(101,6)",
"How good is the Miller-Rabin test? Will this modest improvement (looking for ROO violations) improve the reliability of witnesses? Let's see how many witnesses observe the nonprimality of 41041.",
"for witness in range(2,20):\n MR = Miller_Rabin(41041, witness) # \n if MR: \n print \"%d is a bad witness.\"%(witness)\n else:\n print \"%d detects that 41041 is not prime.\"%(witness)",
"In fact, one can prove that at least 3/4 of the witnesses will detect the non-primality of any non-prime. Thus, if you keep on asking witnesses at random, your chances of detecting non-primality increase exponentially! In fact, the witness 2 suffices to check whether any number is prime or not up to 2047. In other words, if $p < 2047$, then $p$ is prime if and only if Miller_Rabin(p,2) is True. Just using the witnesses 2 and 3 suffice to check primality for numbers up to a million (1373653, to be precise, according to Wikipedia.)\nThe general strategy behind the Miller-Rabin test then is to use just a few witnesses for smallish potential primes (say, up to $2^{64}$). For larger numbers, try some number $x$ (like 20 or 50) random bases. If the tested number is composite, then the probability of all witnesses reporting True is is less than $1 / 4^x$. With 50 random witnesses, the chance that a composite number tests as prime is less than $10^{-30}$. \nNote that these are statements about conditional probability. In more formal language,\n$$\\text{Prob} \\left( \\text{tests prime} \\ \\vert \\ \\text{ is composite} \\right) < \\frac{1}{4^{# \\text{witnesses} } }.$$\nAs those who study medical testing know, this probability differs from the probability that most people care about: the probability that a number is prime, given that it tests prime. The relationship between the two probabilities is given by Bayes Theorem, and depends on the prevalence of primes among the sample. If our sample consists of numbers of absolute value about $N$, then the prevalence of primes will be about $1 / \\log(N)$, and the probability of primality given a positive test result can be approximated. \n$$\\text{Prob} \\left( \\text{ is prime } \\ \\vert \\ \\text{ tests prime } \\right) > 1 - \\frac{\\log(N) - 1}{4^{# \\text{witnesses}}}.$$\nAs one chooses more witnesses, this probability becomes extremely close to $1$.",
"from mpmath import *\n# The mpmath package allows us to compute with arbitrary precision!\n# It has specialized functions for log, sin, exp, etc.., with arbitrary precision.\n# It is probably installed with your version of Python.\n\ndef prob_prime(N, witnesses):\n '''\n Conservatively estimates the probability of primality, given a positive test result.\n N is an approximation of the size of the tested number.\n witnesses is the number of witnesses.\n '''\n mp.dps = witnesses # mp.dps is the number of digits of precision. We adapt this as needed for input.\n prob_prime = 1 - (log(N) - 1) / (4**witnesses)\n print str(100*prob_prime)+\"% chance of primality\" # Use str to convert mpmath float to string for printing.\n\nprob_prime(10**100, 50) # Chance of primality with 50 witnesses, if a 100-digit number is tested.",
"We implement the Miller-Rabin test for primality in the is_prime function below.",
"def is_prime(p, witnesses=50): # witnesses is a parameter with a default value.\n '''\n Tests whether a positive integer p is prime.\n For p < 2^64, the test is deterministic, using known good witnesses.\n Good witnesses come from a table at Wikipedia's article on the Miller-Rabin test,\n based on research by Pomerance, Selfridge and Wagstaff, Jaeschke, Jiang and Deng.\n For larger p, a number (by default, 50) of witnesses are chosen at random.\n '''\n if (p%2 == 0): # Might as well take care of even numbers at the outset!\n if p == 2:\n return True\n else:\n return False \n \n if p > 2**64: # We use the probabilistic test for large p.\n trial = 0\n while trial < witnesses:\n trial = trial + 1\n witness = randint(2,p-2) # A good range for possible witnesses\n if Miller_Rabin(p,witness) == False:\n return False\n return True\n \n else: # We use a determinisic test for p <= 2**64.\n verdict = Miller_Rabin(p,2)\n if p < 2047:\n return verdict # The witness 2 suffices.\n verdict = verdict and Miller_Rabin(p,3)\n if p < 1373653:\n return verdict # The witnesses 2 and 3 suffice.\n verdict = verdict and Miller_Rabin(p,5)\n if p < 25326001:\n return verdict # The witnesses 2,3,5 suffice.\n verdict = verdict and Miller_Rabin(p,7)\n if p < 3215031751:\n return verdict # The witnesses 2,3,5,7 suffice.\n verdict = verdict and Miller_Rabin(p,11)\n if p < 2152302898747:\n return verdict # The witnesses 2,3,5,7,11 suffice.\n verdict = verdict and Miller_Rabin(p,13)\n if p < 3474749660383:\n return verdict # The witnesses 2,3,5,7,11,13 suffice.\n verdict = verdict and Miller_Rabin(p,17)\n if p < 341550071728321:\n return verdict # The witnesses 2,3,5,7,11,17 suffice.\n verdict = verdict and Miller_Rabin(p,19) and Miller_Rabin(p,23)\n if p < 3825123056546413051:\n return verdict # The witnesses 2,3,5,7,11,17,19,23 suffice.\n verdict = verdict and Miller_Rabin(p,29) and Miller_Rabin(p,31) and Miller_Rabin(p,37)\n return verdict # The witnesses 2,3,5,7,11,17,19,23,29,31,37 suffice for testing up to 2^64. \n \n\nis_prime(1000000000000066600000000000001) # This is Belphegor's prime.",
"How fast is our new is_prime function? Let's give it a try.",
"%timeit is_prime(234987928347928347928347928734987398792837491)\n\n%timeit is_prime(1000000000000066600000000000001)",
"The results will probably on the order of a millisecond, perhaps even a tenth of a millisecond ($10^{-4}$ seconds) for non-primes! That's much faster than looking for factors, for numbers of this size. In this way, we can test primality of numbers of hundreds of digits!\nFor an application, let's find some Mersenne primes. Recall that a Mersenne prime is a prime of the form $2^p - 1$. Note that when $2^p - 1$ is prime, it must be the case that $p$ is a prime too. We will quickly find the Mersenne primes with $p$ up to 1000 below!",
"for p in range(1,1000):\n if is_prime(p): # We only need to check these p.\n M = 2**p - 1 # A candidate for a Mersenne prime.\n if is_prime(M):\n print \"2^%d - 1 = %d is a Mersenne prime.\"%(p,M)",
"Exercises\n\n\nRecall that if $2^p - 1$ is a Mersenne prime, then Euclid proved that $(2^p - 1) \\cdot 2^{p-1}$ is a perfect number. Find all the (even) perfect numbers up to $2^{1000}$. (Note: nobody has ever found an odd perfect number. All even perfect numbers arise from Mersenne primes by Euclid's recipe.)\n\n\nThe Fermat sequence is the sequence of numbers 3, 5, 257, 65537, etc., of the form $2^{2^n} + 1$ for $n \\geq 0$. Test the primality of these numbers for $n$ up to 10.\n\n\nWhy do you think the is_prime function (using Miller-Rabin) runs more quickly on non-primes than it does on primes?\n\n\nCompare the performance of the new is_prime function to \"trial division\" (looking for factors up to the square root of the test number). Which is faster for small numbers (1-digit, 2-digits, 3-digits, etc.)? Adapt the is_prime function to perform trial division for small numbers in order to optimize performance. \n\n\nEstimate the probability that a randomly chosen 10-digit number is prime, by running is_prime on a large number of samples. How does this probability vary as the number of digits increases (e.g., from 10 digits to 11 digits to 12 digits, etc., onto 20 digits)?"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
kubernetes-client/python
|
examples/notebooks/create_secret.ipynb
|
apache-2.0
|
[
"How to create and use a Secret\nA Secret is an object that contains a small amount of sensitive data such as a password, a token, or a key. In this notebook, we would learn how to create a Secret and how to use Secrets as files from a Pod as seen in https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets",
"from kubernetes import client, config",
"Load config from default location",
"config.load_kube_config()\nclient.configuration.assert_hostname = False",
"Create API endpoint instance and API resource instances",
"api_instance = client.CoreV1Api()\nsec = client.V1Secret()",
"Fill required Secret fields",
"sec.metadata = client.V1ObjectMeta(name=\"mysecret\")\nsec.type = \"Opaque\"\nsec.data = {\"username\": \"bXl1c2VybmFtZQ==\", \"password\": \"bXlwYXNzd29yZA==\"}",
"Create Secret",
"api_instance.create_namespaced_secret(namespace=\"default\", body=sec)",
"Create test Pod API resource instances",
"pod = client.V1Pod()\nspec = client.V1PodSpec()\npod.metadata = client.V1ObjectMeta(name=\"mypod\")\ncontainer = client.V1Container()\ncontainer.name = \"mypod\"\ncontainer.image = \"redis\"",
"Add volumeMount which would be used to hold secret",
"volume_mounts = [client.V1VolumeMount()]\nvolume_mounts[0].mount_path = \"/data/redis\"\nvolume_mounts[0].name = \"foo\"\ncontainer.volume_mounts = volume_mounts",
"Create volume required by secret",
"spec.volumes = [client.V1Volume(name=\"foo\")]\nspec.volumes[0].secret = client.V1SecretVolumeSource(secret_name=\"mysecret\")\n\nspec.containers = [container]\npod.spec = spec",
"Create the Pod",
"api_instance.create_namespaced_pod(namespace=\"default\",body=pod)",
"View secret being used within the pod\nWait for atleast 10 seconds to ensure pod is running before executing this section.",
"user = api_instance.connect_get_namespaced_pod_exec(name=\"mypod\", namespace=\"default\", command=[ \"/bin/sh\", \"-c\", \"cat /data/redis/username\" ], stderr=True, stdin=False, stdout=True, tty=False)\nprint(user)\npasswd = api_instance.connect_get_namespaced_pod_exec(name=\"mypod\", namespace=\"default\", command=[ \"/bin/sh\", \"-c\", \"cat /data/redis/password\" ], stderr=True, stdin=False, stdout=True, tty=False)\nprint(passwd)",
"Delete Pod",
"api_instance.delete_namespaced_pod(name=\"mypod\", namespace=\"default\", body=client.V1DeleteOptions())",
"Delete Secret",
"api_instance.delete_namespaced_secret(name=\"mysecret\", namespace=\"default\", body=sec)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.24/_downloads/e7de7baffeeb4beff147bad8657b46dc/opm_data.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Optically pumped magnetometer (OPM) data\nIn this dataset, electrical median nerve stimulation was delivered to the\nleft wrist of the subject. Somatosensory evoked fields were measured using\nnine QuSpin SERF OPMs placed over the right-hand side somatomotor area.\nHere we demonstrate how to localize these custom OPM data in MNE.",
"import os.path as op\n\nimport numpy as np\nimport mne\n\ndata_path = mne.datasets.opm.data_path()\nsubject = 'OPM_sample'\nsubjects_dir = op.join(data_path, 'subjects')\nraw_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_SEF_raw.fif')\nbem_fname = op.join(subjects_dir, subject, 'bem',\n subject + '-5120-5120-5120-bem-sol.fif')\nfwd_fname = op.join(data_path, 'MEG', 'OPM', 'OPM_sample-fwd.fif')\ncoil_def_fname = op.join(data_path, 'MEG', 'OPM', 'coil_def.dat')",
"Prepare data for localization\nFirst we filter and epoch the data:",
"raw = mne.io.read_raw_fif(raw_fname, preload=True)\nraw.filter(None, 90, h_trans_bandwidth=10.)\nraw.notch_filter(50., notch_widths=1)\n\n\n# Set epoch rejection threshold a bit larger than for SQUIDs\nreject = dict(mag=2e-10)\ntmin, tmax = -0.5, 1\n\n# Find median nerve stimulator trigger\nevent_id = dict(Median=257)\nevents = mne.find_events(raw, stim_channel='STI101', mask=257, mask_type='and')\npicks = mne.pick_types(raw.info, meg=True, eeg=False)\n# We use verbose='error' to suppress warning about decimation causing aliasing,\n# ideally we would low-pass and then decimate instead\nepochs = mne.Epochs(raw, events, event_id, tmin, tmax, verbose='error',\n reject=reject, picks=picks, proj=False, decim=10,\n preload=True)\nevoked = epochs.average()\nevoked.plot()\ncov = mne.compute_covariance(epochs, tmax=0.)\ndel epochs, raw",
"Examine our coordinate alignment for source localization and compute a\nforward operator:\n<div class=\"alert alert-info\"><h4>Note</h4><p>The Head<->MRI transform is an identity matrix, as the\n co-registration method used equates the two coordinate\n systems. This mis-defines the head coordinate system\n (which should be based on the LPA, Nasion, and RPA)\n but should be fine for these analyses.</p></div>",
"bem = mne.read_bem_solution(bem_fname)\ntrans = mne.transforms.Transform('head', 'mri') # identity transformation\n\n# To compute the forward solution, we must\n# provide our temporary/custom coil definitions, which can be done as::\n#\n# with mne.use_coil_def(coil_def_fname):\n# fwd = mne.make_forward_solution(\n# raw.info, trans, src, bem, eeg=False, mindist=5.0,\n# n_jobs=1, verbose=True)\n\nfwd = mne.read_forward_solution(fwd_fname)\n# use fixed orientation here just to save memory later\nmne.convert_forward_solution(fwd, force_fixed=True, copy=False)\n\nwith mne.use_coil_def(coil_def_fname):\n fig = mne.viz.plot_alignment(evoked.info, trans=trans, subject=subject,\n subjects_dir=subjects_dir,\n surfaces=('head', 'pial'), bem=bem)\n\nmne.viz.set_3d_view(figure=fig, azimuth=45, elevation=60, distance=0.4,\n focalpoint=(0.02, 0, 0.04))",
"Perform dipole fitting",
"# Fit dipoles on a subset of time points\nwith mne.use_coil_def(coil_def_fname):\n dip_opm, _ = mne.fit_dipole(evoked.copy().crop(0.040, 0.080),\n cov, bem, trans, verbose=True)\nidx = np.argmax(dip_opm.gof)\nprint('Best dipole at t=%0.1f ms with %0.1f%% GOF'\n % (1000 * dip_opm.times[idx], dip_opm.gof[idx]))\n\n# Plot N20m dipole as an example\ndip_opm.plot_locations(trans, subject, subjects_dir,\n mode='orthoview', idx=idx)",
"Perform minimum-norm localization\nDue to the small number of sensors, there will be some leakage of activity\nto areas with low/no sensitivity. Constraining the source space to\nareas we are sensitive to might be a good idea.",
"inverse_operator = mne.minimum_norm.make_inverse_operator(\n evoked.info, fwd, cov, loose=0., depth=None)\ndel fwd, cov\n\nmethod = \"MNE\"\nsnr = 3.\nlambda2 = 1. / snr ** 2\nstc = mne.minimum_norm.apply_inverse(\n evoked, inverse_operator, lambda2, method=method,\n pick_ori=None, verbose=True)\n\n# Plot source estimate at time of best dipole fit\nbrain = stc.plot(hemi='rh', views='lat', subjects_dir=subjects_dir,\n initial_time=dip_opm.times[idx],\n clim=dict(kind='percent', lims=[99, 99.9, 99.99]),\n size=(400, 300), background='w')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
cathalmccabe/PYNQ
|
boards/Pynq-Z1/base/notebooks/pmod/pmod_tmp2.ipynb
|
bsd-3-clause
|
[
"PmodTMP2 Sensor example\nIn this example, the Pmod temperature sensor is initialized and set to log a reading every 1 second. \nThis example requires the PmodTMP2 sensor, and assumes it is attached to PMODB.\n1. Simple TMP2 read() to see current room temperature",
"from pynq.overlays.base import BaseOverlay\nbase = BaseOverlay(\"base.bit\")\n\nfrom pynq.lib import Pmod_TMP2\n\nmytmp = Pmod_TMP2(base.PMODB)\ntemperature = mytmp.read()\n\nprint(str(temperature) + \" C\")",
"2. Starting logging temperature once every second",
"mytmp.start_log()",
"3. Try to modify temperature reading by touching the sensor\nThe default interval between samples is 1 second. So wait for at least 10 seconds to get enough samples.\nDuring this period, try to press finger on the sensor to increase its temperature reading.\nStop the logging whenever done trying to change sensor's value.",
"mytmp.stop_log()\nlog = mytmp.get_log()",
"5. Plot values over time",
"%matplotlib inline\nimport matplotlib.pyplot as plt\n\nplt.plot(range(len(log)), log, 'ro')\nplt.title('TMP2 Sensor log')\nplt.axis([0, len(log), min(log), max(log)])\nplt.show()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/dwd/cmip6/models/mpi-esm-1-2-hr/toplevel.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Toplevel\nMIP Era: CMIP6\nInstitute: DWD\nSource ID: MPI-ESM-1-2-HR\nSub-Topics: Radiative Forcings. \nProperties: 85 (42 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:57\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'dwd', 'mpi-esm-1-2-hr', 'toplevel')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Flux Correction\n3. Key Properties --> Genealogy\n4. Key Properties --> Software Properties\n5. Key Properties --> Coupling\n6. Key Properties --> Tuning Applied\n7. Key Properties --> Conservation --> Heat\n8. Key Properties --> Conservation --> Fresh Water\n9. Key Properties --> Conservation --> Salt\n10. Key Properties --> Conservation --> Momentum\n11. Radiative Forcings\n12. Radiative Forcings --> Greenhouse Gases --> CO2\n13. Radiative Forcings --> Greenhouse Gases --> CH4\n14. Radiative Forcings --> Greenhouse Gases --> N2O\n15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\n16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\n17. Radiative Forcings --> Greenhouse Gases --> CFC\n18. Radiative Forcings --> Aerosols --> SO4\n19. Radiative Forcings --> Aerosols --> Black Carbon\n20. Radiative Forcings --> Aerosols --> Organic Carbon\n21. Radiative Forcings --> Aerosols --> Nitrate\n22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\n23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\n24. Radiative Forcings --> Aerosols --> Dust\n25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\n26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\n27. Radiative Forcings --> Aerosols --> Sea Salt\n28. Radiative Forcings --> Other --> Land Use\n29. Radiative Forcings --> Other --> Solar \n1. Key Properties\nKey properties of the model\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nTop level overview of coupled model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of coupled model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Flux Correction\nFlux correction properties of the model\n2.1. Details\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how flux corrections are applied in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.flux_correction.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3. Key Properties --> Genealogy\nGenealogy and history of the model\n3.1. Year Released\nIs Required: TRUE Type: STRING Cardinality: 1.1\nYear the model was released",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.year_released') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.2. CMIP3 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP3 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP3_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.3. CMIP5 Parent\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCMIP5 parent if any",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.CMIP5_parent') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Previous Name\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPreviously known as",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.genealogy.previous_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Software Properties\nSoftware properties of model\n4.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.4. Components Structure\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how model realms are structured into independent software components (coupled via a coupler) and internal software components.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.components_structure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.5. Coupler\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nOverarching coupling framework for model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.software_properties.coupler') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OASIS\" \n# \"OASIS3-MCT\" \n# \"ESMF\" \n# \"NUOPC\" \n# \"Bespoke\" \n# \"Unknown\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5. Key Properties --> Coupling\n**\n5.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of coupling in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Atmosphere Double Flux\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the atmosphere passing a double flux to the ocean and sea ice (as opposed to a single one)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_double_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"5.3. Atmosphere Fluxes Calculation Grid\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nWhere are the air-sea fluxes calculated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_fluxes_calculation_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Atmosphere grid\" \n# \"Ocean grid\" \n# \"Specific coupler grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"5.4. Atmosphere Relative Winds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre relative or absolute winds used to compute the flux? I.e. do ocean surface currents enter the wind stress calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.coupling.atmosphere_relative_winds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6. Key Properties --> Tuning Applied\nTuning methodology for model\n6.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics/diagnostics retained. Document the relative weight given to climate performance metrics/diagnostics versus process oriented metrics/diagnostics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics/diagnostics of the global mean state used in tuning model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics/diagnostics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics/diagnostics used in tuning model/component (such as 20th century)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.5. Energy Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how energy balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.energy_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.6. Fresh Water Balance\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how fresh_water balance was obtained in the full system: in the various components independently or at the components coupling stage?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.tuning_applied.fresh_water_balance') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Conservation --> Heat\nGlobal heat convervation properties of the model\n7.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how heat is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.6. Land Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how heat is conserved at the land/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.heat.land_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation --> Fresh Water\nGlobal fresh water convervation properties of the model\n8.1. Global\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh_water is conserved globally",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.global') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Atmos Ocean Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh_water is conserved at the atmosphere/ocean coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_ocean_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.3. Atmos Land Interface\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how fresh water is conserved at the atmosphere/land coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_land_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Atmos Sea-ice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the atmosphere/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.atmos_sea-ice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how fresh water is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.6. Runoff\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how runoff is distributed and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.runoff') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.7. Iceberg Calving\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how iceberg calving is modeled and conserved",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.iceberg_calving') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.8. Endoreic Basins\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how endoreic basins (no ocean access) are treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.endoreic_basins') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.9. Snow Accumulation\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe how snow accumulation over land and over sea-ice is treated",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.fresh_water.snow_accumulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Key Properties --> Conservation --> Salt\nGlobal salt convervation properties of the model\n9.1. Ocean Seaice Interface\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how salt is conserved at the ocean/sea-ice coupling interface",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.salt.ocean_seaice_interface') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Key Properties --> Conservation --> Momentum\nGlobal momentum convervation properties of the model\n10.1. Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how momentum is conserved in the model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.key_properties.conservation.momentum.details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Radiative Forcings\nRadiative forcings of the model for historical and scenario (aka Table 12.1 IPCC AR5)\n11.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of radiative forcings (GHG and aerosols) implementation in model",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Radiative Forcings --> Greenhouse Gases --> CO2\nCarbon dioxide forcing\n12.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CO2.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Radiative Forcings --> Greenhouse Gases --> CH4\nMethane forcing\n13.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CH4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14. Radiative Forcings --> Greenhouse Gases --> N2O\nNitrous oxide forcing\n14.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.N2O.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Radiative Forcings --> Greenhouse Gases --> Tropospheric O3\nTroposheric ozone forcing\n15.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.tropospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Radiative Forcings --> Greenhouse Gases --> Stratospheric O3\nStratospheric ozone forcing\n16.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"16.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.stratospheric_O3.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Radiative Forcings --> Greenhouse Gases --> CFC\nOzone-depleting and non-ozone-depleting fluorinated gases forcing\n17.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Equivalence Concentration\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDetails of any equivalence concentrations used",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.equivalence_concentration') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"Option 1\" \n# \"Option 2\" \n# \"Option 3\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.greenhouse_gases.CFC.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Radiative Forcings --> Aerosols --> SO4\nSO4 aerosol forcing\n18.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.SO4.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Radiative Forcings --> Aerosols --> Black Carbon\nBlack carbon aerosol forcing\n19.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.black_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Radiative Forcings --> Aerosols --> Organic Carbon\nOrganic carbon aerosol forcing\n20.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.organic_carbon.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Radiative Forcings --> Aerosols --> Nitrate\nNitrate forcing\n21.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.nitrate.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22. Radiative Forcings --> Aerosols --> Cloud Albedo Effect\nCloud albedo effect forcing (RFaci)\n22.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_albedo_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23. Radiative Forcings --> Aerosols --> Cloud Lifetime Effect\nCloud lifetime effect forcing (ERFaci)\n23.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Aerosol Effect On Ice Clouds\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative effects of aerosols on ice clouds are represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.aerosol_effect_on_ice_clouds') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.3. RFaci From Sulfate Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nRadiative forcing from aerosol cloud interactions from sulfate aerosol only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.RFaci_from_sulfate_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"23.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.cloud_lifetime_effect.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"24. Radiative Forcings --> Aerosols --> Dust\nDust forcing\n24.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"24.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.dust.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"25. Radiative Forcings --> Aerosols --> Tropospheric Volcanic\nTropospheric volcanic forcing\n25.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.tropospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26. Radiative Forcings --> Aerosols --> Stratospheric Volcanic\nStratospheric volcanic forcing\n26.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Historical Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in historical simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.historical_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.3. Future Explosive Volcanic Aerosol Implementation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow explosive volcanic aerosol is implemented in future simulations",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.future_explosive_volcanic_aerosol_implementation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Type A\" \n# \"Type B\" \n# \"Type C\" \n# \"Type D\" \n# \"Type E\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.4. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.stratospheric_volcanic.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27. Radiative Forcings --> Aerosols --> Sea Salt\nSea salt forcing\n27.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.aerosols.sea_salt.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Radiative Forcings --> Other --> Land Use\nLand use forcing\n28.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow this forcing agent is provided (e.g. via concentrations, emission precursors, prognostically derived, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"M\" \n# \"Y\" \n# \"E\" \n# \"ES\" \n# \"C\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"28.2. Crop Change Only\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nLand use change represented via crop change only?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.crop_change_only') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"28.3. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.land_use.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Radiative Forcings --> Other --> Solar\nSolar forcing\n29.1. Provision\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nHow solar forcing is provided",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.provision') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"N/A\" \n# \"irradiance\" \n# \"proton\" \n# \"electron\" \n# \"cosmic ray\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"29.2. Additional Information\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAdditional information relating to the provision and implementation of this forcing agent (e.g. citations, use of non-standard datasets, explaining how multiple provisions are used, etc.).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.toplevel.radiative_forcings.other.solar.additional_information') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.20/_downloads/bc5044f9d3ef1d29067dd6b7d83ceed2/plot_20_visualize_epochs.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Visualizing epoched data\nThis tutorial shows how to plot epoched data as time series, how to plot the\nspectral density of epoched data, how to plot epochs as an imagemap, and how to\nplot the sensor locations and projectors stored in :class:~mne.Epochs\nobjects.\n :depth: 2\nWe'll start by importing the modules we need, loading the continuous (raw)\nsample data, and cropping it to save memory:",
"import os\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file, verbose=False).crop(tmax=120)",
"To create the :class:~mne.Epochs data structure, we'll extract the event\nIDs stored in the :term:stim channel, map those integer event IDs to more\ndescriptive condition labels using an event dictionary, and pass those to the\n:class:~mne.Epochs constructor, along with the :class:~mne.io.Raw data\nand the desired temporal limits of our epochs, tmin and tmax (for a\ndetailed explanation of these steps, see tut-epochs-class).",
"events = mne.find_events(raw, stim_channel='STI 014')\nevent_dict = {'auditory/left': 1, 'auditory/right': 2, 'visual/left': 3,\n 'visual/right': 4, 'face': 5, 'buttonpress': 32}\nepochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.5, event_id=event_dict,\n preload=True)\ndel raw",
"Plotting Epochs as time series\n.. sidebar:: Interactivity in pipelines and scripts\nTo use the interactive features of the :meth:`~mne.Epochs.plot` method\nwhen running your code non-interactively, pass the ``block=True``\nparameter, which halts the Python interpreter until the figure window is\nclosed. That way, any channels or epochs that you mark as \"bad\" will be\ntaken into account in subsequent processing steps.\n\nTo visualize epoched data as time series (one time series per channel), the\n:meth:mne.Epochs.plot method is available. It creates an interactive window\nwhere you can scroll through epochs and channels, enable/disable any\nunapplied :term:SSP projectors <projector> to see how they affect the\nsignal, and even manually mark bad channels (by clicking the channel name) or\nbad epochs (by clicking the data) for later dropping. Channels marked \"bad\"\nwill be shown in light grey color and will be added to\nepochs.info['bads']; epochs marked as bad will be indicated as 'USER'\nin epochs.drop_log.\nHere we'll plot only the \"catch\" trials from the sample dataset\n<sample-dataset>, and pass in our events array so that the button press\nresponses also get marked (we'll plot them in red, and plot the \"face\" events\ndefining time zero for each epoch in blue). We also need to pass in\nour event_dict so that the :meth:~mne.Epochs.plot method will know what\nwe mean by \"buttonpress\" — this is because subsetting the conditions by\ncalling epochs['face'] automatically purges the dropped entries from\nepochs.event_id:",
"catch_trials_and_buttonpresses = mne.pick_events(events, include=[5, 32])\nepochs['face'].plot(events=catch_trials_and_buttonpresses, event_id=event_dict,\n event_colors=dict(buttonpress='red', face='blue'))",
"Plotting projectors from an Epochs object\nIn the plot above we can see heartbeat artifacts in the magnetometer\nchannels, so before we continue let's load ECG projectors from disk and apply\nthem to the data:",
"ecg_proj_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_ecg-proj.fif')\necg_projs = mne.read_proj(ecg_proj_file)\nepochs.add_proj(ecg_projs)\nepochs.apply_proj()",
"Just as we saw in the tut-section-raw-plot-proj section, we can plot\nthe projectors present in an :class:~mne.Epochs object using the same\n:meth:~mne.Epochs.plot_projs_topomap method. Since the original three\nempty-room magnetometer projectors were inherited from the\n:class:~mne.io.Raw file, and we added two ECG projectors for each sensor\ntype, we should see nine projector topomaps:",
"epochs.plot_projs_topomap(vlim='joint')",
"Note that these field maps illustrate aspects of the signal that have\nalready been removed (because projectors in :class:~mne.io.Raw data are\napplied by default when epoching, and because we called\n:meth:~mne.Epochs.apply_proj after adding additional ECG projectors from\nfile). You can check this by examining the 'active' field of the\nprojectors:",
"print(all(proj['active'] for proj in epochs.info['projs']))",
"Plotting sensor locations\nJust like :class:~mne.io.Raw objects, :class:~mne.Epochs objects\nkeep track of sensor locations, which can be visualized with the\n:meth:~mne.Epochs.plot_sensors method:",
"epochs.plot_sensors(kind='3d', ch_type='all')\nepochs.plot_sensors(kind='topomap', ch_type='all')",
"Plotting the power spectrum of Epochs\nAgain, just like :class:~mne.io.Raw objects, :class:~mne.Epochs objects\nhave a :meth:~mne.Epochs.plot_psd method for plotting the spectral\ndensity_ of the data.",
"epochs['auditory'].plot_psd(picks='eeg')",
"Plotting Epochs as an image map\nA convenient way to visualize many epochs simultaneously is to plot them as\nan image map, with each row of pixels in the image representing a single\nepoch, the horizontal axis representing time, and each pixel's color\nrepresenting the signal value at that time sample for that epoch. Of course,\nthis requires either a separate image map for each channel, or some way of\ncombining information across channels. The latter is possible using the\n:meth:~mne.Epochs.plot_image method; the former can be achieved with the\n:meth:~mne.Epochs.plot_image method (one channel at a time) or with the\n:meth:~mne.Epochs.plot_topo_image method (all sensors at once).\nBy default, the image map generated by :meth:~mne.Epochs.plot_image will be\naccompanied by a scalebar indicating the range of the colormap, and a time\nseries showing the average signal across epochs and a bootstrapped 95%\nconfidence band around the mean. :meth:~mne.Epochs.plot_image is a highly\ncustomizable method with many parameters, including customization of the\nauxiliary colorbar and averaged time series subplots. See the docstrings of\n:meth:~mne.Epochs.plot_image and mne.viz.plot_compare_evokeds (which is\nused to plot the average time series) for full details. Here we'll show the\nmean across magnetometers for all epochs with an auditory stimulus:",
"epochs['auditory'].plot_image(picks='mag', combine='mean')",
"To plot image maps for individual sensors or a small group of sensors, use\nthe picks parameter. Passing combine=None (the default) will yield\nseparate plots for each sensor in picks; passing combine='gfp' will\nplot the global field power (useful for combining sensors that respond with\nopposite polarity).",
"epochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'])\nepochs['auditory'].plot_image(picks=['MEG 0242', 'MEG 0243'], combine='gfp')",
"To plot an image map for all sensors, use\n:meth:~mne.Epochs.plot_topo_image, which is optimized for plotting a large\nnumber of image maps simultaneously, and (in interactive sessions) allows you\nto click on each small image map to pop open a separate figure with the\nfull-sized image plot (as if you had called :meth:~mne.Epochs.plot_image on\njust that sensor). At the small scale shown in this tutorial it's hard to see\nmuch useful detail in these plots; it's often best when plotting\ninteractively to maximize the topo image plots to fullscreen. The default is\na figure with black background, so here we specify a white background and\nblack foreground text. By default :meth:~mne.Epochs.plot_topo_image will\nshow magnetometers and gradiometers on the same plot (and hence not show a\ncolorbar, since the sensors are on different scales) so we'll also pass a\n:class:~mne.channels.Layout restricting each plot to one channel type.\nFirst, however, we'll also drop any epochs that have unusually high signal\nlevels, because they can cause the colormap limits to be too extreme and\ntherefore mask smaller signal fluctuations of interest.",
"reject_criteria = dict(mag=3000e-15, # 3000 fT\n grad=3000e-13, # 3000 fT/cm\n eeg=150e-6) # 150 µV\nepochs.drop_bad(reject=reject_criteria)\n\nfor ch_type, title in dict(mag='Magnetometers', grad='Gradiometers').items():\n layout = mne.channels.find_layout(epochs.info, ch_type=ch_type)\n epochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',\n font_color='k', title=title)",
"To plot image maps for all EEG sensors, pass an EEG layout as the layout\nparameter of :meth:~mne.Epochs.plot_topo_image. Note also here the use of\nthe sigma parameter, which smooths each image map along the vertical\ndimension (across epochs) which can make it easier to see patterns across the\nsmall image maps (by smearing noisy epochs onto their neighbors, while\nreinforcing parts of the image where adjacent epochs are similar). However,\nsigma can also disguise epochs that have persistent extreme values and\nmaybe should have been excluded, so it should be used with caution.",
"layout = mne.channels.find_layout(epochs.info, ch_type='eeg')\nepochs['auditory/left'].plot_topo_image(layout=layout, fig_facecolor='w',\n font_color='k', sigma=1)",
".. LINKS"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
bicepjai/Puzzles
|
adventofcode/2017/.ipynb_checkpoints/day1_9-checkpoint.ipynb
|
bsd-3-clause
|
[
"Setup",
"import sys\nimport os\n\nimport re\nimport collections\nimport itertools\nimport bcolz\nimport pickle\n\nimport numpy as np\nimport pandas as pd\nimport gc\nimport random\nimport smart_open\nimport h5py\nimport csv\n\nimport tensorflow as tf\nimport gensim\nimport string\n\nimport datetime as dt\nfrom tqdm import tqdm_notebook as tqdm\n\nimport numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\nimport seaborn as sns\n\nrandom_state_number = 967898",
"Code\nDay 1: Inverse Captcha\nThe captcha requires you to review a sequence of digits (your puzzle input) and \nfind the sum of all digits that match the next digit in the list. The list is circular, \nso the digit after the last digit is the first digit in the list.",
"! cat day1_input.txt\n\ninput_data = None\nwith open(\"day1_input.txt\") as f:\n input_data = f.read().strip().split()\n input_data = [w.strip(\",\") for w in input_data ]",
"We will form the direction map since they are finite.",
"directions = {\n (\"N\",\"R\") : (\"E\",0,1),\n (\"N\",\"L\") : (\"W\",0,-1),\n \n (\"W\",\"R\") : (\"N\",1,1),\n (\"W\",\"L\") : (\"S\",1,-1),\n \n (\"E\",\"R\") : (\"S\",1,-1),\n (\"E\",\"L\") : (\"N\",1,1),\n \n (\"S\",\"R\") : (\"W\",0,-1),\n (\"S\",\"L\") : (\"E\",0,1)\n}\n\ndef get_distance(data):\n d,pos = \"N\",[0,0]\n for code in data:\n d1,v = code[0], int(code[1:])\n d,i,m = directions[(d, code[0])]\n pos[i] += m*v\n #print(code,d,v,pos)\n return sum([abs(n) for n in pos]) \n\ndata = [\"R2\", \"R2\", \"R2\"]\nget_distance(data)\n\ndata = [\"R5\", \"L5\", \"R5\", \"R3\"]\nget_distance(data)\n\nget_distance(input_data)",
"Day 2: Bathroom Security\npart1\nYou arrive at Easter Bunny Headquarters under cover of darkness. However, you left in such a rush that you forgot to use the bathroom! Fancy office buildings like this one usually have keypad locks on their bathrooms, so you search the front desk for the code.\n\"In order to improve security,\" the document you find says, \"bathroom codes will no longer be written down. Instead, please memorize and follow the procedure below to access the bathrooms.\"\nThe document goes on to explain that each button to be pressed can be found by starting on the previous button and moving to adjacent buttons on the keypad: U moves up, D moves down, L moves left, and R moves right. Each line of instructions corresponds to one button, starting at the previous button (or, for the first line, the \"5\" button); press whatever button you're on at the end of each line. If a move doesn't lead to a button, ignore it.\nYou can't hold it much longer, so you decide to figure out the code as you walk to the bathroom. You picture a keypad like this:\n1 2 3\n4 5 6\n7 8 9\nSuppose your instructions are:\nULL\nRRDDD\nLURDL\nUUUUD\nYou start at \"5\" and move up (to \"2\"), left (to \"1\"), and left (you can't, and stay on \"1\"), so the first button is 1.\nStarting from the previous button (\"1\"), you move right twice (to \"3\") and then down three times (stopping at \"9\" after two moves and ignoring the third), ending up with 9.\nContinuing from \"9\", you move left, up, right, down, and left, ending with 8.\nFinally, you move up four times (stopping at \"2\"), then down once, ending with 5.\nSo, in this example, the bathroom code is 1985.\nYour puzzle input is the instructions from the document you found at the front desk. What is the bathroom code?",
"input_data = None\nwith open(\"day2_input.txt\") as f:\n input_data = f.read().strip().split()\n\ndef get_codes(data, keypad, keypad_max_size, start_index=(1,1), verbose=False):\n r,c = start_index\n digit = \"\"\n for codes in data:\n if verbose: print(\" \",codes)\n for code in codes:\n if verbose: print(\" before\",r,c,keypad[r][c])\n if code == 'R' and c+1 < keypad_max_size and keypad[r][c+1] is not None:\n c += 1\n elif code == 'L' and c-1 >= 0 and keypad[r][c-1] is not None:\n c -= 1\n elif code == 'U' and r-1 >= 0 and keypad[r-1][c] is not None:\n r -= 1\n elif code == 'D' and r+1 < keypad_max_size and keypad[r+1][c] is not None:\n r += 1\n if verbose: print(\" after\",code,r,c,keypad[r][c])\n digit += str(keypad[r][c])\n return digit\n \n\nsample = [\"ULL\",\n\"RRDDD\",\n\"LURDL\",\n\"UUUUD\"]\n\nkeypad = [[1,2,3],[4,5,6],[7,8,9]]\nget_codes(sample, keypad, keypad_max_size=3)\n\nkeypad = [[1,2,3],[4,5,6],[7,8,9]]\nget_codes(input_data, keypad, keypad_max_size=3)",
"part2\nYou finally arrive at the bathroom (it's a several minute walk from the lobby so visitors can behold the many fancy conference rooms and water coolers on this floor) and go to punch in the code. Much to your bladder's dismay, the keypad is not at all like you imagined it. Instead, you are confronted with the result of hundreds of man-hours of bathroom-keypad-design meetings:\n1\n 2 3 4\n5 6 7 8 9\n A B C\n D\nYou still start at \"5\" and stop when you're at an edge, but given the same instructions as above, the outcome is very different:\nYou start at \"5\" and don't move at all (up and left are both edges), ending at 5.\nContinuing from \"5\", you move right twice and down three times (through \"6\", \"7\", \"B\", \"D\", \"D\"), ending at D.\nThen, from \"D\", you move five more times (through \"D\", \"B\", \"C\", \"C\", \"B\"), ending at B.\nFinally, after five more moves, you end at 3.\nSo, given the actual keypad layout, the code would be 5DB3.\nUsing the same instructions in your puzzle input, what is the correct bathroom code?\nAlthough it hasn't changed, you can still get your puzzle input.",
"input_data = None\nwith open(\"day21_input.txt\") as f:\n input_data = f.read().strip().split()\n\nkeypad = [[None, None, 1, None, None],\n [None, 2, 3, 4, None],\n [ 5, 6, 7, 8, None],\n [None, 'A', 'B', 'C', None],\n [None, None, 'D', None, None]]\n\nsample = [\"ULL\",\n\"RRDDD\",\n\"LURDL\",\n\"UUUUD\"]\nget_codes(sample, keypad, keypad_max_size=5, start_index=(2,0), verbose=False)\n\nget_codes(input_data, keypad, keypad_max_size=5, start_index=(2,0), verbose=False)",
"Day3 squares With Three Sides\npart1\nNow that you can think clearly, you move deeper into the labyrinth of hallways and office furniture that makes up this part of Easter Bunny HQ. This must be a graphic design department; the walls are covered in specifications for triangles.\nOr are they?\nThe design document gives the side lengths of each triangle it describes, but... 5 10 25? Some of these aren't triangles. You can't help but mark the impossible ones.\nIn a valid triangle, the sum of any two sides must be larger than the remaining side. For example, the \"triangle\" given above is impossible, because 5 + 10 is not larger than 25.\nIn your puzzle input, how many of the listed triangles are possible?",
"input_data = None\nwith open(\"day3_input.txt\") as f:\n input_data = f.read().strip().split(\"\\n\")\n\ninput_data = [list(map(int, l.strip().split())) for l in input_data]\n\nresult = [ (sides[0]+sides[1] > sides[2]) and (sides[2]+sides[1] > sides[0]) and (sides[0]+sides[2] > sides[1]) for sides in input_data]\n\nsum(result)",
"part2\nNow that you've helpfully marked up their design documents, it occurs to you that triangles are specified in groups of three vertically. Each set of three numbers in a column specifies a triangle. Rows are unrelated.\nFor example, given the following specification, numbers with the same hundreds digit would be part of the same triangle:\n101 301 501\n102 302 502\n103 303 503\n201 401 601\n202 402 602\n203 403 603\nIn your puzzle input, and instead reading by columns, how many of the listed triangles are possible?",
"input_data = None\nwith open(\"day31_input.txt\") as f:\n input_data = f.read().strip().split(\"\\n\")\n\ninput_data = [list(map(int, l.strip().split())) for l in input_data]\ninput_data[:5]\n\ndef chunks(l, n):\n \"\"\"Yield successive n-sized chunks from l.\"\"\"\n for i in range(0, len(l), n):\n yield l[i:i + n]\n \nsingle_list = [input_data[r][c] for c in [0,1,2] for r in range(len(input_data))]\nresult = [ (sides[0]+sides[1] > sides[2]) and (sides[2]+sides[1] > sides[0]) and (sides[0]+sides[2] > sides[1]) for sides in chunks(single_list, 3)]\nsum(result)",
"Day4\npart1: Security Through Obscurity\nFinally, you come across an information kiosk with a list of rooms. Of course, the list is encrypted and full of decoy data, but the instructions to decode the list are barely hidden nearby. Better remove the decoy data first.\nEach room consists of an encrypted name (lowercase letters separated by dashes) followed by a dash, a sector ID, and a checksum in square brackets.\nA room is real (not a decoy) if the checksum is the five most common letters in the encrypted name, in order, with ties broken by alphabetization. For example:\naaaaa-bbb-z-y-x-123[abxyz] is a real room because the most common letters are a (5), b (3), and then a tie between x, y, and z, which are listed alphabetically.\na-b-c-d-e-f-g-h-987[abcde] is a real room because although the letters are all tied (1 of each), the first five are listed alphabetically.\nnot-a-real-room-404[oarel] is a real room.\ntotally-real-room-200[decoy] is not.\nOf the real rooms from the list above, the sum of their sector IDs is 1514.\nWhat is the sum of the sector IDs of the real rooms?",
"input_data = None\nwith open(\"day4_input.txt\") as f:\n input_data = f.read().strip().split(\"\\n\")\nlen(input_data), input_data[:5]\n\nanswer = 0\nfor code in input_data:\n m = re.match(r'(.+)-(\\d+)\\[([a-z]*)\\]', code)\n code, sector, checksum = m.groups()\n code = code.replace(\"-\",\"\")\n counts = collections.Counter(code).most_common()\n counts.sort(key=lambda k: (-k[1], k[0]))\n if ''.join([ch for ch,_ in counts[:5]]) == checksum:\n answer += int(sector)\nanswer",
"part2\nWith all the decoy data out of the way, it's time to decrypt this list and get moving.\nThe room names are encrypted by a state-of-the-art shift cipher, which is nearly unbreakable without the right software. However, the information kiosk designers at Easter Bunny HQ were not expecting to deal with a master cryptographer like yourself.\nTo decrypt a room name, rotate each letter forward through the alphabet a number of times equal to the room's sector ID. A becomes B, B becomes C, Z becomes A, and so on. Dashes become spaces.\nFor example, the real name for qzmt-zixmtkozy-ivhz-343 is very encrypted name.\nWhat is the sector ID of the room where North Pole objects are stored?",
"for code in input_data:\n m = re.match(r'(.+)-(\\d+)\\[([a-z]*)\\]', code)\n code, sector, checksum = m.groups()\n sector = int(sector)\n code = code.replace(\"-\",\"\")\n counts = collections.Counter(code).most_common()\n counts.sort(key=lambda k: (-k[1], k[0]))\n string_maps = string.ascii_lowercase\n cipher_table = str.maketrans(string_maps, string_maps[sector%26:] + string_maps[:sector%26])\n if ''.join([ch for ch,_ in counts[:5]]) == checksum:\n if \"north\" in code.translate(cipher_table):\n print(code.translate(cipher_table))\n print(\"sector\",sector)\n",
"Day5 How About a Nice Game of Chess?\npart1\nYou are faced with a security door designed by Easter Bunny engineers that seem to have acquired most of their security knowledge by watching hacking movies.\nThe eight-character password for the door is generated one character at a time by finding the MD5 hash of some Door ID (your puzzle input) and an increasing integer index (starting with 0).\nA hash indicates the next character in the password if its hexadecimal representation starts with five zeroes. If it does, the sixth character in the hash is the next character of the password.\nFor example, if the Door ID is abc:\nThe first index which produces a hash that starts with five zeroes is 3231929, which we find by hashing abc3231929; the sixth character of the hash, and thus the first character of the password, is 1.\n5017308 produces the next interesting hash, which starts with 000008f82..., so the second character of the password is 8.\nThe third time a hash starts with five zeroes is for abc5278568, discovering the character f.\nIn this example, after continuing this search a total of eight times, the password is 18f47a30.\nGiven the actual Door ID, what is the password?\npart2"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
johnnyliu27/openmc
|
examples/jupyter/mdgxs-part-i.ipynb
|
mit
|
[
"This IPython Notebook introduces the use of the openmc.mgxs module to calculate multi-energy-group and multi-delayed-group cross sections for an infinite homogeneous medium. In particular, this Notebook introduces the the following features:\n\nCreation of multi-delayed-group cross sections for an infinite homogeneous medium\nCalculation of delayed neutron precursor concentrations\n\nIntroduction to Multi-Delayed-Group Cross Sections (MDGXS)\nMany Monte Carlo particle transport codes, including OpenMC, use continuous-energy nuclear cross section data. However, most deterministic neutron transport codes use multi-group cross sections defined over discretized energy bins or energy groups. Furthermore, kinetics calculations typically separate out parameters that involve delayed neutrons into prompt and delayed components and further subdivide delayed components by delayed groups. An example is the energy spectrum for prompt and delayed neutrons for U-235 and Pu-239 computed for a light water reactor spectrum.",
"from IPython.display import Image\nImage(filename='images/mdgxs.png', width=350)",
"A variety of tools employing different methodologies have been developed over the years to compute multi-group cross sections for certain applications, including NJOY (LANL), MC$^2$-3 (ANL), and Serpent (VTT). The openmc.mgxs Python module is designed to leverage OpenMC's tally system to calculate multi-group cross sections with arbitrary energy discretizations and different delayed group models (e.g. 6, 7, or 8 delayed group models) for fine-mesh heterogeneous deterministic neutron transport applications.\nBefore proceeding to illustrate how one may use the openmc.mgxs module, it is worthwhile to define the general equations used to calculate multi-energy-group and multi-delayed-group cross sections. This is only intended as a brief overview of the methodology used by openmc.mgxs - we refer the interested reader to the large body of literature on the subject for a more comprehensive understanding of this complex topic.\nIntroductory Notation\nThe continuous real-valued microscopic cross section may be denoted $\\sigma_{n,x}(\\mathbf{r}, E)$ for position vector $\\mathbf{r}$, energy $E$, nuclide $n$ and interaction type $x$. Similarly, the scalar neutron flux may be denoted by $\\Phi(\\mathbf{r},E)$ for position $\\mathbf{r}$ and energy $E$. Note: Although nuclear cross sections are dependent on the temperature $T$ of the interacting medium, the temperature variable is neglected here for brevity.\nSpatial and Energy Discretization\nThe energy domain for critical systems such as thermal reactors spans more than 10 orders of magnitude of neutron energies from 10$^{-5}$ - 10$^7$ eV. The multi-group approximation discretization divides this energy range into one or more energy groups. In particular, for $G$ total groups, we denote an energy group index $g$ such that $g \\in {1, 2, ..., G}$. The energy group indices are defined such that the smaller group the higher the energy, and vice versa. The integration over neutron energies across a discrete energy group is commonly referred to as energy condensation.\nThe delayed neutrons created from fissions are created from > 30 delayed neutron precursors. Modeling each of the delayed neutron precursors is possible, but this approach has not recieved much attention due to large uncertainties in certain precursors. Therefore, the delayed neutrons are often combined into \"delayed groups\" that have a set time constant, $\\lambda_d$. Some cross section libraries use the same group time constants for all nuclides (e.g. JEFF 3.1) while other libraries use different time constants for all nuclides (e.g. ENDF/B-VII.1). Multi-delayed-group cross sections can either be created with the entire delayed group set, a subset of delayed groups, or integrated over all delayed groups.\nMulti-group cross sections are computed for discretized spatial zones in the geometry of interest. The spatial zones may be defined on a structured and regular fuel assembly or pin cell mesh, an arbitrary unstructured mesh or the constructive solid geometry used by OpenMC. For a geometry with $K$ distinct spatial zones, we designate each spatial zone an index $k$ such that $k \\in {1, 2, ..., K}$. The volume of each spatial zone is denoted by $V_{k}$. The integration over discrete spatial zones is commonly referred to as spatial homogenization.\nGeneral Scalar-Flux Weighted MDGXS\nThe multi-group cross sections computed by openmc.mgxs are defined as a scalar flux-weighted average of the microscopic cross sections across each discrete energy group. 
This formulation is employed in order to preserve the reaction rates within each energy group and spatial zone. In particular, spatial homogenization and energy condensation are used to compute the general multi-group cross section. For instance, the delayed-nu-fission multi-energy-group and multi-delayed-group cross section, $\\nu_d \\sigma_{f,x,k,g}$, can be computed as follows:\n$$\\nu_d \\sigma_{n,x,k,g} = \\frac{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\nu_d \\sigma_{f,x}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{E_{g}}^{E_{g-1}}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\Phi(\\mathbf{r},E')}$$\nThis scalar flux-weighted average microscopic cross section is computed by openmc.mgxs for only the delayed-nu-fission and delayed neutron fraction reaction type at the moment. These double integrals are stochastically computed with OpenMC's tally system - in particular, filters on the energy range and spatial zone (material, cell, universe, or mesh) define the bounds of integration for both numerator and denominator.\nMulti-Group Prompt and Delayed Fission Spectrum\nThe energy spectrum of neutrons emitted from fission is denoted by $\\chi_{n}(\\mathbf{r},E' \\rightarrow E'')$ for incoming and outgoing energies $E'$ and $E''$, respectively. Unlike the multi-group cross sections $\\sigma_{n,x,k,g}$ considered up to this point, the fission spectrum is a probability distribution and must sum to unity. The outgoing energy is typically much less dependent on the incoming energy for fission than for scattering interactions. As a result, it is common practice to integrate over the incoming neutron energy when computing the multi-group fission spectrum. The fission spectrum may be simplified as $\\chi_{n}(\\mathbf{r},E)$ with outgoing energy $E$.\nComputing the cumulative energy spectrum of emitted neutrons, $\\chi_{n}(\\mathbf{r},E)$, has been presented in the mgxs-part-i.ipynb notebook. Here, we will present the energy spectrum of prompt and delayed emission neutrons, $\\chi_{n,p}(\\mathbf{r},E)$ and $\\chi_{n,d}(\\mathbf{r},E)$, respectively. Unlike the multi-group cross sections defined up to this point, the multi-group fission spectrum is weighted by the fission production rate rather than the scalar flux. This formulation is intended to preserve the total fission production rate in the multi-group deterministic calculation. In order to mathematically define the multi-group fission spectrum, we denote the microscopic fission cross section as $\\sigma_{n,f}(\\mathbf{r},E)$ and the average number of neutrons emitted from fission interactions with nuclide $n$ as $\\nu_{n,p}(\\mathbf{r},E)$ and $\\nu_{n,d}(\\mathbf{r},E)$ for prompt and delayed neutrons, respectively. The multi-group fission spectrum $\\chi_{n,k,g,d}$ is then the probability of fission neutrons emitted into energy group $g$ and delayed group $d$. There are not prompt groups, so inserting $p$ in place of $d$ just denotes all prompt neutrons. 
\nSimilar to before, spatial homogenization and energy condensation are used to find the multi-energy-group and multi-delayed-group fission spectrum $\\chi_{n,k,g,d}$ as follows:\n$$\\chi_{n,k,g',d} = \\frac{\\int_{E_{g'}}^{E_{g'-1}}\\mathrm{d}E''\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\chi_{n,d}(\\mathbf{r},E'\\rightarrow E'')\\nu_{n,d}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}{\\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r}\\nu_{n,d}(\\mathbf{r},E')\\sigma_{n,f}(\\mathbf{r},E')\\Phi(\\mathbf{r},E')}$$\nThe fission production-weighted multi-energy-group and multi-delayed-group fission spectrum for delayed neutrons is computed using OpenMC tallies with energy in, energy out, and delayed group filters. Alternatively, the delayed group filter can be omitted to compute the fission spectrum integrated over all delayed groups.\nThis concludes our brief overview on the methodology to compute multi-energy-group and multi-delayed-group cross sections. The following sections detail more concretely how users may employ the openmc.mgxs module to power simulation workflows requiring multi-group cross sections for downstream deterministic calculations.\nGenerate Input Files",
"%matplotlib inline\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nimport openmc\nimport openmc.mgxs as mgxs",
"First we need to define materials that will be used in the problem. Before defining a material, we must create nuclides that are used in the material.",
"# Instantiate some Nuclides\nh1 = openmc.Nuclide('H1')\no16 = openmc.Nuclide('O16')\nu235 = openmc.Nuclide('U235')\nu238 = openmc.Nuclide('U238')\npu239 = openmc.Nuclide('Pu239')\nzr90 = openmc.Nuclide('Zr90')",
"With the nuclides we defined, we will now create a material for the homogeneous medium.",
"# Instantiate a Material and register the Nuclides\ninf_medium = openmc.Material(name='moderator')\ninf_medium.set_density('g/cc', 5.)\ninf_medium.add_nuclide(h1, 0.03)\ninf_medium.add_nuclide(o16, 0.015)\ninf_medium.add_nuclide(u235 , 0.0001)\ninf_medium.add_nuclide(u238 , 0.007)\ninf_medium.add_nuclide(pu239, 0.00003)\ninf_medium.add_nuclide(zr90, 0.002)",
"With our material, we can now create a Materials object that can be exported to an actual XML file.",
"# Instantiate a Materials collection and export to XML\nmaterials_file = openmc.Materials([inf_medium])\nmaterials_file.default_xs = '71c'\nmaterials_file.export_to_xml()",
"Now let's move on to the geometry. This problem will be a simple square cell with reflective boundary conditions to simulate an infinite homogeneous medium. The first step is to create the outer bounding surfaces of the problem.",
"# Instantiate boundary Planes\nmin_x = openmc.XPlane(boundary_type='reflective', x0=-0.63)\nmax_x = openmc.XPlane(boundary_type='reflective', x0=0.63)\nmin_y = openmc.YPlane(boundary_type='reflective', y0=-0.63)\nmax_y = openmc.YPlane(boundary_type='reflective', y0=0.63)",
"With the surfaces defined, we can now create a cell that is defined by intersections of half-spaces created by the surfaces.",
"# Instantiate a Cell\ncell = openmc.Cell(cell_id=1, name='cell')\n\n# Register bounding Surfaces with the Cell\ncell.region = +min_x & -max_x & +min_y & -max_y\n\n# Fill the Cell with the Material\ncell.fill = inf_medium",
"OpenMC requires that there is a \"root\" universe. Let us create a root universe and add our square cell to it.",
"# Instantiate Universe\nroot_universe = openmc.Universe(universe_id=0, name='root universe')\nroot_universe.add_cell(cell)",
"We now must create a geometry that is assigned a root universe and export it to XML.",
"# Create Geometry and set root Universe\nopenmc_geometry = openmc.Geometry()\nopenmc_geometry.root_universe = root_universe\n\n# Export to \"geometry.xml\"\nopenmc_geometry.export_to_xml()",
"Next, we must define simulation parameters. In this case, we will use 10 inactive batches and 40 active batches each with 2500 particles.",
"# OpenMC simulation parameters\nbatches = 50\ninactive = 10\nparticles = 5000\n\n# Instantiate a Settings object\nsettings_file = openmc.Settings()\nsettings_file.batches = batches\nsettings_file.inactive = inactive\nsettings_file.particles = particles\nsettings_file.output = {'tallies': True}\n\n# Create an initial uniform spatial source distribution over fissionable zones\nbounds = [-0.63, -0.63, -0.63, 0.63, 0.63, 0.63]\nuniform_dist = openmc.stats.Box(bounds[:3], bounds[3:], only_fissionable=True)\nsettings_file.source = openmc.source.Source(space=uniform_dist)\n\n# Export to \"settings.xml\"\nsettings_file.export_to_xml()",
"Now we are ready to generate multi-group cross sections! First, let's define a 100-energy-group structure and 1-energy-group structure using the built-in EnergyGroups class. We will also create a 6-delayed-group list.",
"# Instantiate a 100-group EnergyGroups object\nenergy_groups = mgxs.EnergyGroups()\nenergy_groups.group_edges = np.logspace(-3, 7.3, 101)\n\n# Instantiate a 1-group EnergyGroups object\none_group = mgxs.EnergyGroups()\none_group.group_edges = np.array([energy_groups.group_edges[0], energy_groups.group_edges[-1]])\n\ndelayed_groups = list(range(1,7))",
"We can now use the EnergyGroups object and delayed group list, along with our previously created materials and geometry, to instantiate some MGXS objects from the openmc.mgxs module. In particular, the following are subclasses of the generic and abstract MGXS class:\n\nTotalXS\nTransportXS\nAbsorptionXS\nCaptureXS\nFissionXS\nNuFissionMatrixXS\nKappaFissionXS\nScatterXS\nScatterMatrixXS\nChi\nInverseVelocity\n\nA separate abstract MDGXS class is used for cross-sections and parameters that involve delayed neutrons. The subclasses of MDGXS include:\n\nDelayedNuFissionXS\nChiDelayed\nBeta\nDecayRate\n\nThese classes provide us with an interface to generate the tally inputs as well as perform post-processing of OpenMC's tally data to compute the respective multi-group cross sections. \nIn this case, let's create the multi-group chi-prompt, chi-delayed, and prompt-nu-fission cross sections with our 100-energy-group structure and multi-group delayed-nu-fission and beta cross sections with our 100-energy-group and 6-delayed-group structures. \nThe prompt chi and nu-fission data can actually be gathered using the Chi and FissionXS classes, respectively, by passing in a value of True for the optional prompt parameter upon initialization.",
"# Instantiate a few different sections\nchi_prompt = mgxs.Chi(domain=cell, groups=energy_groups, by_nuclide=True, prompt=True)\nprompt_nu_fission = mgxs.FissionXS(domain=cell, groups=energy_groups, by_nuclide=True, nu=True, prompt=True)\nchi_delayed = mgxs.ChiDelayed(domain=cell, energy_groups=energy_groups, by_nuclide=True)\ndelayed_nu_fission = mgxs.DelayedNuFissionXS(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)\nbeta = mgxs.Beta(domain=cell, energy_groups=energy_groups, delayed_groups=delayed_groups, by_nuclide=True)\ndecay_rate = mgxs.DecayRate(domain=cell, energy_groups=one_group, delayed_groups=delayed_groups, by_nuclide=True)\n\nchi_prompt.nuclides = ['U235', 'Pu239']\nprompt_nu_fission.nuclides = ['U235', 'Pu239']\nchi_delayed.nuclides = ['U235', 'Pu239']\ndelayed_nu_fission.nuclides = ['U235', 'Pu239']\nbeta.nuclides = ['U235', 'Pu239']\ndecay_rate.nuclides = ['U235', 'Pu239']",
"Each multi-group cross section object stores its tallies in a Python dictionary called tallies. We can inspect the tallies in the dictionary for our Decay Rate object as follows.",
"decay_rate.tallies",
"The Beta object includes tracklength tallies for the 'nu-fission' and 'delayed-nu-fission' scores in the 100-energy-group and 6-delayed-group structure in cell 1. Now that each MGXS and MDGXS object contains the tallies that it needs, we must add these tallies to a Tallies object to generate the \"tallies.xml\" input file for OpenMC.",
"# Instantiate an empty Tallies object\ntallies_file = openmc.Tallies()\n\n# Add chi-prompt tallies to the tallies file\ntallies_file += chi_prompt.tallies.values()\n\n# Add prompt-nu-fission tallies to the tallies file\ntallies_file += prompt_nu_fission.tallies.values()\n\n# Add chi-delayed tallies to the tallies file\ntallies_file += chi_delayed.tallies.values()\n\n# Add delayed-nu-fission tallies to the tallies file\ntallies_file += delayed_nu_fission.tallies.values()\n\n# Add beta tallies to the tallies file\ntallies_file += beta.tallies.values()\n\n# Add decay rate tallies to the tallies file\ntallies_file += decay_rate.tallies.values()\n\n# Export to \"tallies.xml\"\ntallies_file.export_to_xml()",
"Now we a have a complete set of inputs, so we can go ahead and run our simulation.",
"# Run OpenMC\nopenmc.run()",
"Tally Data Processing\nOur simulation ran successfully and created statepoint and summary output files. We begin our analysis by instantiating a StatePoint object.",
"# Load the last statepoint file\nsp = openmc.StatePoint('statepoint.50.h5')",
"In addition to the statepoint file, our simulation also created a summary file which encapsulates information about the materials and geometry. By default, a Summary object is automatically linked when a StatePoint is loaded. This is necessary for the openmc.mgxs module to properly process the tally data.\nThe statepoint is now ready to be analyzed by our multi-group cross sections. We simply have to load the tallies from the StatePoint into each object as follows and our MGXS objects will compute the cross sections for us under-the-hood.",
"# Load the tallies from the statepoint into each MGXS object\nchi_prompt.load_from_statepoint(sp)\nprompt_nu_fission.load_from_statepoint(sp)\nchi_delayed.load_from_statepoint(sp)\ndelayed_nu_fission.load_from_statepoint(sp)\nbeta.load_from_statepoint(sp)\ndecay_rate.load_from_statepoint(sp)",
"Voila! Our multi-group cross sections are now ready to rock 'n roll!\nExtracting and Storing MGXS Data\nLet's first inspect our delayed-nu-fission section by printing it to the screen after condensing the cross section down to one group.",
"delayed_nu_fission.get_condensed_xs(one_group).get_xs()",
"Since the openmc.mgxs module uses tally arithmetic under-the-hood, the cross section is stored as a \"derived\" Tally object. This means that it can be queried and manipulated using all of the same methods supported for the Tally class in the OpenMC Python API. For example, we can construct a Pandas DataFrame of the multi-group cross section data.",
"df = delayed_nu_fission.get_pandas_dataframe()\ndf.head(10)\n\ndf = decay_rate.get_pandas_dataframe()\ndf.head(12)",
"Each multi-group cross section object can be easily exported to a variety of file formats, including CSV, Excel, and LaTeX for storage or data processing.",
"beta.export_xs_data(filename='beta', format='excel')",
"The following code snippet shows how to export the chi-prompt and chi-delayed MGXS to the same HDF5 binary data store.",
"chi_prompt.build_hdf5_store(filename='mdgxs', append=True)\nchi_delayed.build_hdf5_store(filename='mdgxs', append=True)",
"Using Tally Arithmetic to Compute the Delayed Neutron Precursor Concentrations\nFinally, we illustrate how one can leverage OpenMC's tally arithmetic data processing feature with MGXS objects. The openmc.mgxs module uses tally arithmetic to compute multi-group cross sections with automated uncertainty propagation. Each MGXS object includes an xs_tally attribute which is a \"derived\" Tally based on the tallies needed to compute the cross section type of interest. These derived tallies can be used in subsequent tally arithmetic operations. For example, we can use tally artithmetic to compute the delayed neutron precursor concentrations using the Beta, DelayedNuFissionXS, and DecayRate objects. The delayed neutron precursor concentrations are modeled using the following equations:\n$$\\frac{\\partial}{\\partial t} C_{k,d} (t) = \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t)\\Phi(\\mathbf{r},E',t) - \\lambda_{d} C_{k,d} (t) $$\n$$C_{k,d} (t=0) = \\frac{1}{\\lambda_{d}} \\int_{0}^{\\infty}\\mathrm{d}E'\\int_{\\mathbf{r} \\in V_{k}}\\mathrm{d}\\mathbf{r} \\beta_{k,d} (t=0) \\nu_d \\sigma_{f,x}(\\mathbf{r},E',t=0)\\Phi(\\mathbf{r},E',t=0) $$\nFirst, let's investigate the decay rates for U235 and Pu235. The fraction of the delayed neutron precursors remaining as a function of time after fission for each delayed group and fissioning isotope have been plotted below.",
"# Get the decay rate data\ndr_tally = decay_rate.xs_tally\ndr_u235 = dr_tally.get_values(nuclides=['U235']).flatten()\ndr_pu239 = dr_tally.get_values(nuclides=['Pu239']).flatten()\n\n# Compute the exponential decay of the precursors\ntime = np.logspace(-3,3)\ndr_u235_points = np.exp(-np.outer(dr_u235, time))\ndr_pu239_points = np.exp(-np.outer(dr_pu239, time))\n\n# Create a plot of the fraction of the precursors remaining as a f(time)\ncolors = ['b', 'g', 'r', 'c', 'm', 'k']\nlegend = []\nfig = plt.figure(figsize=(8,6))\nfor g,c in enumerate(colors):\n plt.semilogx(time, dr_u235_points [g,:], color=c, linestyle='--', linewidth=3)\n plt.semilogx(time, dr_pu239_points[g,:], color=c, linestyle=':' , linewidth=3)\n legend.append('U-235 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_u235[g]))\n legend.append('Pu-239 $t_{1/2}$ = ' + '{0:1.2f} seconds'.format(np.log(2) / dr_pu239[g]))\n\nplt.title('Delayed Neutron Precursor Decay Rates')\nplt.xlabel('Time (s)')\nplt.ylabel('Fraction Remaining')\nplt.legend(legend, loc=1, bbox_to_anchor=(1.55, 0.95))",
"Now let's compute the initial concentration of the delayed neutron precursors:",
"# Use tally arithmetic to compute the precursor concentrations\nprecursor_conc = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) * \\\n delayed_nu_fission.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True) / \\\n decay_rate.xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)\n\n# Get the Pandas DataFrames for inspection\nprecursor_conc.get_pandas_dataframe()",
"We can plot the delayed neutron fractions for each nuclide.",
"energy_filter = [f for f in beta.xs_tally.filters if type(f) is openmc.EnergyFilter]\nbeta_integrated = beta.get_condensed_xs(one_group).xs_tally.summation(filter_type=openmc.EnergyFilter, remove_filter=True)\nbeta_u235 = beta_integrated.get_values(nuclides=['U235'])\nbeta_pu239 = beta_integrated.get_values(nuclides=['Pu239'])\n\n# Reshape the betas\nbeta_u235.shape = (beta_u235.shape[0])\nbeta_pu239.shape = (beta_pu239.shape[0])\n\ndf = beta_integrated.summation(filter_type=openmc.DelayedGroupFilter, remove_filter=True).get_pandas_dataframe()\nprint('Beta (U-235) : {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'U235']['mean'][0], df[df['nuclide'] == 'U235']['std. dev.'][0]))\nprint('Beta (Pu-239): {:.6f} +/- {:.6f}'.format(df[df['nuclide'] == 'Pu239']['mean'][1], df[df['nuclide'] == 'Pu239']['std. dev.'][1]))\n\nbeta_u235 = np.append(beta_u235[0], beta_u235)\nbeta_pu239 = np.append(beta_pu239[0], beta_pu239)\n\n# Create a step plot for the MGXS\nplt.plot(np.arange(0.5, 7.5, 1), beta_u235, drawstyle='steps', color='b', linewidth=3)\nplt.plot(np.arange(0.5, 7.5, 1), beta_pu239, drawstyle='steps', color='g', linewidth=3)\n\nplt.title('Delayed Neutron Fraction (beta)')\nplt.xlabel('Delayed Group')\nplt.ylabel('Beta(fraction total neutrons)')\nplt.legend(['U-235', 'Pu-239'])\nplt.xlim([0,7])",
"We can also plot the energy spectrum for fission emission of prompt and delayed neutrons.",
"chi_d_u235 = np.squeeze(chi_delayed.get_xs(nuclides=['U235'], order_groups='decreasing'))\nchi_d_pu239 = np.squeeze(chi_delayed.get_xs(nuclides=['Pu239'], order_groups='decreasing'))\nchi_p_u235 = np.squeeze(chi_prompt.get_xs(nuclides=['U235'], order_groups='decreasing'))\nchi_p_pu239 = np.squeeze(chi_prompt.get_xs(nuclides=['Pu239'], order_groups='decreasing'))\n\nchi_d_u235 = np.append(chi_d_u235 , chi_d_u235[0])\nchi_d_pu239 = np.append(chi_d_pu239, chi_d_pu239[0])\nchi_p_u235 = np.append(chi_p_u235 , chi_p_u235[0])\nchi_p_pu239 = np.append(chi_p_pu239, chi_p_pu239[0])\n\n# Create a step plot for the MGXS\nplt.semilogx(energy_groups.group_edges, chi_d_u235 , drawstyle='steps', color='b', linestyle='--', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_d_pu239, drawstyle='steps', color='g', linestyle='--', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_p_u235 , drawstyle='steps', color='b', linestyle=':', linewidth=3)\nplt.semilogx(energy_groups.group_edges, chi_p_pu239, drawstyle='steps', color='g', linestyle=':', linewidth=3)\n\nplt.title('Energy Spectrum for Fission Neutrons')\nplt.xlabel('Energy (eV)')\nplt.ylabel('Fraction on emitted neutrons')\nplt.legend(['U-235 delayed', 'Pu-239 delayed', 'U-235 prompt', 'Pu-239 prompt'],loc=2)\nplt.xlim(1.0e3, 20.0e6)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
muatik/dm
|
maxloglikelihood.ipynb
|
mit
|
[
"Maximum Likelihood Estimation",
"from math import factorial as fac\nfrom numpy import math\nimport numpy as np\nimport random\nfrom collections import Counter\n%matplotlib inline\n\nn = 1000\nexperiments = []\n\nfor i in range(n):\n a = random.randint(1, 6)\n # key = \"{}-{}\".format(a, b)\n # experiments[key] = experiments.get(key, 0) + 1\n experiments.append(a)\n\nfrom matplotlib import pyplot as plt\n%matplotlib inline\n\nx = experiments\nplt.hist(x)\nplt.show()\n\nA = [1, 2, 3, 4, 5, 6]\nn4 = 10\nexperiments = [1] * 22 + [2] * 16 + [3] * 21 + [4] * n4 + [5] * 19 + [6] * 12\nn = len(experiments) * 1.0\nprint(n)\n\nlh = []\nfor i in range(1, 100):\n P4 = 1.0/i\n lh.append(math.factorial(n) / math.factorial(n4) * math.factorial(n-n4) * ( (P4**n4) * ((1-P4) ** (n-n4)) ))\n\nplt.hist(experiments)\nplt.show()\n\nplt.plot(lh)",
"Best parameter for n4",
"n4/n",
"after removing the constant part",
"lh = []\nfor i in range(1, 100):\n P4 = 1.0/i\n lh.append( (P4**n4) * ((1-P4) ** (n-n4)) )\nplt.plot(lh)",
"Coin example\nSetup problem",
"# denoting tails by 0 and heads by 1\nTAIL = 0\nHEAD = 1\n\n# tossing coint N times\nN = 10\n\n# 8 of N times tail occurs\nTAIL_COUNT = 8\n\nexperiments = [TAIL] * TAIL_COUNT + [HEAD] * (N - TAIL_COUNT)\nprint(experiments, N)",
"Looking at the experient shown above, we can easily predict that the probability of TAIL is 6/10, that is higher than the probability of HEAD 4/10.\nIt is easy to calculate the probabilities in this setup without getting involved in Maximum likelihood. However, there are other problems which are not so obvious as this coin tossing. So, we are now going to calculate this probabilities with a more methodological way.\n\nProbability\n\nKnowing parameters -> Prediction of outcome\n\nLikelihood\n\nObservation of data -> Estimation of parameters\n\n\nBernoulli distribution\n$$p(X=r; N, p) = \\frac{N!}{r! (N-r)!} p^r * (1-p)^{N-r} $$\nApplying the formula to the coing tossing problem, we end up with the following expression:\n$$\n\\frac{N!}{TAILS_COUNT! (N-TAILS_COUNT)!} P_TAIL^{TAILS_COUNT} * P_HEAD^{HEADS_COUNT} \n$$\n\\begin{eqnarray}\n{\\cal L}(\\theta |\\pi_1\\cdots \\pi_n) & = & \\prod_{n=1}^{N} f(x_{n};\\theta ) \n\\\n& = & \\prod_{n=1}^{N} \\theta^{x_{n}} (1-\\theta)^{1-x_{n}}\n\\\nlog{\\cal L}(\\theta) & = & \\sum_{n=1}^N x^{(n)} \\log (\\theta) + \\sum_{n=1}^N (1- x^{(n)}) \\log (1 - \\theta)\n\\\n& = & \\log (\\theta) \\sum_{n=1}^N x^{(n)} + \\log (1 - \\theta) \\sum_{n=1}^N (1- x^{(n)}) \n\\end{eqnarray}\nThe likelihood function is simply the joint probability of observing the data.",
"PROBABILITY_SCALE = 100\nlikelihoods = []\nfor i in range(1, PROBABILITY_SCALE + 1):\n P_TAIL = float(i) / PROBABILITY_SCALE\n constant_part = (\n math.factorial(N) / \n (math.factorial(TAIL_COUNT) * math.factorial(N-TAIL_COUNT)))\n likelihood = (\n constant_part * \n np.power(P_TAIL, TAIL_COUNT) * \n np.power(1 - P_TAIL, N - TAIL_COUNT))\n \n likelihoods.append((P_TAIL, likelihood))\nplt.grid(True)\nplt.plot(np.array(likelihoods)[:,0], np.array(likelihoods)[:,1])",
"This binomial distribution figure illustrates the probability of TAIL is maximized around 6/10. It could also be other values such as 5/10 or 7/10, but the aim here is to find the parameter which makes the observed data most likely. \n\nBeyond parameter estimation, the likelihood framework allows us to make tests of parameter values. For example, we might want to ask whether or not the estimated p differs significantly from 0.5 or not. This test is essentially asking: is there evidence that the coin is biased? We will see how such tests can be performed when we introduce the concept of a likelihood ratio test below.\nWe will learn that especially for large samples, the maximum likelihood estimators have many desirable properties. However, especially for high dimensional data, the likelihood can have many local maxima. Thus, finding the global maximum can be a major computational challenge."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ardiya/siamesenetwork-tensorflow
|
Similar image retrieval.ipynb
|
mit
|
[
"%matplotlib inline\nimport tensorflow as tf\nimport numpy as np\nimport matplotlib.pyplot as plt\n\nfrom dataset import MNISTDataset\nfrom model import *\n\nfrom scipy.spatial.distance import cdist\nfrom matplotlib import gridspec\n\ndataset = MNISTDataset()\ntrain_images = dataset.images_train[:20000]\ntest_images = dataset.images_test\nlen_test = len(test_images)\nlen_train = len(train_images)\n\n#helper function to plot image\ndef show_image(idxs, data):\n if type(idxs) != np.ndarray:\n idxs = np.array([idxs])\n fig = plt.figure()\n gs = gridspec.GridSpec(1,len(idxs))\n for i in range(len(idxs)):\n ax = fig.add_subplot(gs[0,i])\n ax.imshow(data[idxs[i],:,:,0])\n ax.axis('off')\n plt.show()",
"Create the siamese net feature extraction model",
"img_placeholder = tf.placeholder(tf.float32, [None, 28, 28, 1], name='img')\nnet = mnist_model(img_placeholder, reuse=False)",
"Restore from checkpoint and calc the features from all of train data",
"saver = tf.train.Saver()\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n ckpt = tf.train.get_checkpoint_state(\"model\")\n saver.restore(sess, \"model/model.ckpt\")\n \n train_feat = sess.run(net, feed_dict={img_placeholder:train_images[:10000]}) ",
"Searching for similar test images from trainset based on siamese feature",
"#generate new random test image\nidx = np.random.randint(0, len_test)\nim = test_images[idx]\n\n#show the test image\nshow_image(idx, test_images)\nprint(\"This is image from id:\", idx)\n\n#run the test image through the network to get the test features\nsaver = tf.train.Saver()\nwith tf.Session() as sess:\n sess.run(tf.global_variables_initializer())\n ckpt = tf.train.get_checkpoint_state(\"model\")\n saver.restore(sess, \"model/model.ckpt\")\n search_feat = sess.run(net, feed_dict={img_placeholder:[im]})\n \n#calculate the cosine similarity and sort\ndist = cdist(train_feat, search_feat, 'cosine')\nrank = np.argsort(dist.ravel())\n\n#show the top n similar image from train data\nn = 7\nshow_image(rank[:n], train_images)\nprint(\"retrieved ids:\", rank[:n])"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mbeyeler/opencv-machine-learning
|
notebooks/10.04-Implementing-AdaBoost.ipynb
|
mit
|
[
"<!--BOOK_INFORMATION-->\n<a href=\"https://www.packtpub.com/big-data-and-business-intelligence/machine-learning-opencv\" target=\"_blank\"><img align=\"left\" src=\"data/cover.jpg\" style=\"width: 76px; height: 100px; background: white; padding: 1px; border: 1px solid black; margin-right:10px;\"></a>\nThis notebook contains an excerpt from the book Machine Learning for OpenCV by Michael Beyeler.\nThe code is released under the MIT license,\nand is available on GitHub.\nNote that this excerpt contains only the raw code - the book is rich with additional explanations and illustrations.\nIf you find this content useful, please consider supporting the work by\nbuying the book!\n<!--NAVIGATION-->\n< Using Random Forests for Face Recognition | Contents | Combining Different Models Into a Voting Classifier >\nImplementing AdaBoost\nWhen the trees in the forest are trees of depth 1 (also known as decision stumps) and we\nperform boosting instead of bagging, the resulting algorithm is called AdaBoost.\nAdaBoost adjusts the dataset at each iteration by performing the following actions:\n- Selecting a decision stump\n- Increasing the weighting of cases that the decision stump labeled incorrectly while reducing the weighting of correctly labeled cases\nThis iterative weight adjustment causes each new classifier in the ensemble to prioritize\ntraining the incorrectly labeled cases. As a result, the model adjusts by targeting highlyweighted\ndata points.\nEventually, the stumps are combined to form a final classifier.\nImplementing AdaBoost in OpenCV\nAlthough OpenCV provides a very efficient implementation of AdaBoost, it is hidden\nunder the Haar cascade classifier. Haar cascade classifiers are a very popular tool for face\ndetection, which we can illustrate through the example of the Lena image:",
"import cv2\nimg_bgr = cv2.imread('data/lena.jpg', cv2.IMREAD_COLOR)\nimg_gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)",
"After loading the image in both color and grayscale, we load a pretrained Haar cascade:",
"filename = 'data/haarcascade_frontalface_default.xml'\nface_cascade = cv2.CascadeClassifier(filename)",
"The classifier will then detect faces present in the image using the following function call:",
"faces = face_cascade.detectMultiScale(img_gray, 1.1, 5)",
"Note that the algorithm operates only on grayscale images. That's why we saved two\npictures of Lena, one to which we can apply the classifier (img_gray), and one on which we\ncan draw the resulting bounding box (img_bgr):",
"color = (255, 0, 0)\nthickness = 2\nfor (x, y, w, h) in faces:\n cv2.rectangle(img_bgr, (x, y), (x + w, y + h),\n color, thickness)",
"Then we can plot the image using the following code:",
"import matplotlib.pyplot as plt\n%matplotlib inline\nplt.figure(figsize=(10, 6))\nplt.imshow(cv2.cvtColor(img_bgr, cv2.COLOR_BGR2RGB));",
"Obviously, this picture contains only a single face. However, the preceding code will work\neven on images where multiple faces could be detected. Try it out!\nImplementing AdaBoost in scikit-learn\nIn scikit-learn, AdaBoost is just another ensemble estimator. We can create an ensemble\nfrom 100 decision stumps as follows:",
"from sklearn.ensemble import AdaBoostClassifier\nada = AdaBoostClassifier(n_estimators=100,\n random_state=456)",
"We can load the breast cancer set once more and split it 75-25:",
"from sklearn.datasets import load_breast_cancer\ncancer = load_breast_cancer()\nX = cancer.data\ny = cancer.target\n\nfrom sklearn.model_selection import train_test_split\nX_train, X_test, y_train, y_test = train_test_split(\n X, y, random_state=456\n)",
"Then fit and score AdaBoost using the familiar procedure:",
"ada.fit(X_train, y_train)\nada.score(X_test, y_test)",
"The result is remarkable, 97.9% accuracy!\nWe might want to compare this result to a random forest. However, to be fair, we should\nmake the trees in the forest all decision stumps. Then we will know the difference between\nbagging and boosting:",
"from sklearn.ensemble import RandomForestClassifier\nforest = RandomForestClassifier(n_estimators=100,\n max_depth=1,\n random_state=456)\nforest.fit(X_train, y_train)\nforest.score(X_test, y_test)",
"Of course, if we let the trees be as deep as needed, we might get a better score:",
"forest = RandomForestClassifier(n_estimators=100,\n random_state=456)\nforest.fit(X_train, y_train)\nforest.score(X_test, y_test)",
"As a last step in this chapter, let's talk about how to combine different types of models into\nan ensemble.\n<!--NAVIGATION-->\n< Using Random Forests for Face Recognition | Contents | Combining Different Models Into a Voting Classifier >"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
sastels/Onboarding
|
3 - Lists.ipynb
|
mit
|
[
"Python Lists\nPython has a great built-in list type named \"list\". List literals are written within square brackets [ ]. Lists work similarly to strings -- use the len() function and square brackets [ ] to access data, with the first element at index 0. (See the official python.org list docs.)",
"colours = ['red', 'blue', 'green']\nprint colours[0]\nprint colours[2]\nprint len(colours)",
"Assignment with an = on lists does not make a copy. Instead, assignment makes the two variables point to the one list in memory.",
"b = colours ## does not copy list",
"The \"empty list\" is just an empty pair of brackets [ ]. The '+' works to append two lists, so [1, 2] + [3, 4] yields [1, 2, 3, 4] (this is just like + with strings).\nFOR and IN\nPython's for and in constructs are extremely useful, and the first use of them we'll see is with lists. The for construct -- for var in list -- is an easy way to look at each element in a list (or other collection). Do not add or remove from the list during iteration.",
"squares = [1, 4, 9, 16]\nsum = 0\nfor num in squares:\n sum += num\nprint sum ",
"If you know what sort of thing is in the list, use a variable name in the loop that captures that information such as \"num\", or \"name\", or \"url\". Since python code does not have other syntax to remind you of types, your variable names are a key way for you to keep straight what is going on.\nThe in construct on its own is an easy way to test if an element appears in a list (or other collection) -- value in collection -- tests if the value is in the collection, returning True/False.",
"list = ['larry', 'curly', 'moe']\nif 'curly' in list:\n print 'yay'",
"The for/in constructs are very commonly used in Python code and work on data types other than list, so should just memorize their syntax. You may have habits from other languages where you start manually iterating over a collection, where in Python you should just use for/in.\nYou can also use for/in to work on a string. The string acts like a list of its chars, so for ch in s: print ch prints all the chars in a string.\nRange\nThe range(n) function yields the numbers 0, 1, ... n-1, and range(a, b) returns a, a+1, ... b-1 -- up to but not including the last number. The combination of the for-loop and the range() function allow you to build a traditional numeric for loop:",
"for i in range(100):\n print i,",
"There is a variant xrange() which avoids the cost of building the whole list for performance sensitive cases (in Python 3, range() will have the good performance behavior and you can forget about xrange() ).\nWhile Loop\nPython also has the standard while-loop, and the break and continue statements work as in C++ and Java, altering the course of the innermost loop. The above for/in loops solves the common case of iterating over every element in a list, but the while loop gives you total control over the index numbers. Here's a while loop which accesses every 3rd element in a list:",
"a = ['a', 34, 3.14, [1,2], 'c']\ni = 0\nwhile i < len(a):\n print a[i]\n i = i + 3",
"List Methods\nHere are some other common list methods.\n\nlist.append(elem) -- adds a single element to the end of the list. Common error: does not return the new list, just modifies the original.\nlist.insert(index, elem) -- inserts the element at the given index, shifting elements to the right.\nlist.extend(list2) adds the elements in list2 to the end of the list. Using + or += on a list is similar to using extend().\nlist.index(elem) -- searches for the given element from the start of the list and returns its index. Throws a ValueError if the element does not appear (use \"in\" to check without a ValueError).\nlist.remove(elem) -- searches for the first instance of the given element and removes it (throws ValueError if not present)\nlist.sort() -- sorts the list in place (does not return it). (The sorted() function shown below is preferred.)\nlist.reverse() -- reverses the list in place (does not return it)\nlist.pop(index) -- removes and returns the element at the given index. Returns the rightmost element if index is omitted (roughly the opposite of append()).\n\nNotice that these are methods on a list object, while len() is a function that takes the list (or string or whatever) as an argument.",
"list = ['larry', 'curly', 'moe']\n\nlist.append('shemp')\nlist\n\nlist.insert(0, 'xxx')\nlist\n\nlist.extend(['yyy', 'zzz'])\nlist\n\nprint list.index('curly')\n\nlist.remove('curly')\nlist\n\nprint(list.pop(1))\nlist",
"Common error: note that the above methods do not return the modified list, they just modify the original list.",
"list = [1, 2, 3]\nprint(list.append(4))",
"So list.append() doesn't return a value. 'None' is a python value that means there is no value (roll with it). It's great for situations where in other languages you'd set variables to -1 or something.\nList Build Up\nOne common pattern is to start a list a the empty list [], then use append() or extend() to add elements to it:",
"list = []\nlist.append('a')\nlist.append('b')\nlist",
"List Slices\nSlices work on lists just as with strings, and can also be used to change sub-parts of the list.",
"list = ['a', 'b', 'c', 'd']\nlist[1:-1]\n\nlist[0:2] = 'z'\nlist",
"Exercise\nFor practice with lists, go to the notebook 3.5 - List exercises\nNote: This notebook is based on Google's python tutorial https://developers.google.com/edu/python"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
4dsolutions/Python5
|
Merging DataFrames.ipynb
|
mit
|
[
"Merging DataFrames\npd.merge(left, right, how='inner', on=None, left_on=None, right_on=None,\nleft_index=False, right_index=False, sort=True)\nmerge and update have a lot in common.\nThere's a whole family of methods.\nI can understand where a busy IT worker, with the job of helping numerous IT workers, might get frustrated with the redundancy of it all. How many times must we reinvent the same wheels? That's the skeptic's view.\nThe good news the same as the bad news: redundancy means generic fluency is possible. Some background in SQL helps with pandas. Some background in pandas helps with SQL.",
"import pandas as pd\nimport numpy as np\n\ndfA = pd.DataFrame({'A':[1,1,1,1,1,1,1,1],\n 'B':[2,2,2,2,2,2,2,2]})\n\ndfA\n\ndfB = pd.DataFrame({'C':[3,3,3,3,3,3,3,3],\n 'D':[4,4,4,4,4,4,4,4]})",
"You get to choose which columns, left and right, serve as \"gear teeth\" for synchronizing rows (sewing them together). Or choose the index, not a column.\nIn the expression below, we go with the one synchronizing element: the index, on both input tables.",
"pd.merge(dfA, dfB, left_index=True, right_index=True)\n\nimport string\ndfA.index = list(string.ascii_lowercase[:8]) # new index, of letters instead\n\ndfA\n\ndfB.index = list(string.ascii_lowercase[5:8+5]) # overlapping letters\n\ndfB\n\npd.merge(dfA, dfB, left_index=True, right_index=True) # intersection, not the union\n\npd.merge(dfA, dfB, left_index=True, right_index=True, how=\"left\") # left side governs\n\npd.merge(dfA, dfB, left_index=True, right_index=True, how=\"right\") # right side governs\n\npd.merge(dfA, dfB, left_index=True, right_index=True, how=\"outer\") # the full union\n\npd.merge(dfA, dfB, left_index=True, right_index=True, how=\"inner\") # same as intersection\n\ndfA = pd.DataFrame({'A':[1,2,3,4,5,6,7,8],\n 'B':[2,2,2,2,2,2,2,2],\n 'key':['dog', 'pig', 'rooster', 'monkey', \n 'hen', 'cat', 'slug', 'human']})\n\nfrom numpy.random import shuffle\nkeys = dfA.key.values.copy() # copy or dfA key will also reorder\nshuffle(keys) # in place\ndfB = pd.DataFrame({'C':[1,2,3,4,5,6,7,8],\n 'D':[4,4,4,4,4,4,4,4],\n 'key': keys})\nkeys\n\ndfA\n\ndfB\n\npd.merge(dfA, dfB, on='key') # like \"zipping together\" on a common column\n\ndfB.rename({\"C\":\"A\", \"D\":\"B\"}, axis=1, inplace = True)\ndfB\n\ndfA\n\npd.merge(dfA, dfB, on='key', sort=False) # sort on key if sort is True\n\npd.merge(dfA, dfB, on='key', sort=True) # sort on key if sort is True\n\npd.merge(dfA, dfB, on='A')\n\npd.merge(dfA, dfB, left_index=True, right_on=\"A\")\n\nmerged = pd.merge(dfA, dfB, left_index=True, right_on=\"A\")\nmerged.to_json(\"merged.json\") # save for later",
"LAB CHALLENGE\nHow might one simply swap the axes? That's a hint."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
fdmazzone/Ecuaciones_Diferenciales
|
Teoria_Basica/scripts/EjerciciosGruposLie.ipynb
|
gpl-2.0
|
[
"Ejercicio N° 5 Recuperatorio Primer Parcial 2015\nLas siguientes relaciones\n$$\\hat{x}=\\frac{x(x+y)}{x+y+\\epsilon}$$\n$$\\hat{y}=\\frac{(x+y)(\\epsilon x+y)}{x+y+\\epsilon}$$\ndefinen un grupo de Lie uniparamétrico. Usando SymPy demostrar que $(\\hat{x},\\hat{y})$ son simetrías de la ecuación \n$$y'=\\frac{ x^2\\sin(x + y) + y}{x(1-x\\sin(x + y))}.$$\nEncontrar variables canónicas asocidas a las simetrías. Resolver la ecuación diferencial.\nPrimero veamos si el grupo propuesto es simetría de la ecuación.",
"from sympy import *\ninit_printing()\nx,epsilon=symbols('x,epsilon')\ny=Function('y')(x)\nx1=x*(x+y)/(y+(1+epsilon)*x)\ny1=(epsilon*x+y)*(x+y)/(y+(1+epsilon)*x)\nexp1=y1.diff(x)/x1.diff(x)\nexp2=exp1.subs(y.diff(x),(x**2*sin(x+y)+y)/x/(1-x*sin(x+y)))\nx1,y1=symbols('x1,y1')\nexp2=exp2.simplify()\nexp3=exp2.subs({y:(-epsilon*x1+y1)*(x1+y1)/(y1+(1-epsilon)*x1),x:x1*(x1+y1)/(y1+(1-epsilon)*x1)})\n",
"En exp3 tenemos el resultado del cambio de varibles en la ecuación. Veamos que tiene",
"exp3",
"Verdaderamente asusta. Veamos si lo simplifica",
"exp4=exp3.simplify()\nexp4",
"La ecuación luce parecida a la original, pero lamentablemete no simplifica el argumento de la función sen. Tomo esos argumentos por separado y le pido que me los simplifique",
"((-epsilon*x1**2 - epsilon*x1*y1 + x1**2 + 2*x1*y1 + y1**2)/(epsilon*x1 - x1 - y1)).simplify()",
"Los argumentos de la función sen es -x1-y1 que es justo lo que necesito para que quede la ecuación original. Ahora hallemos coordenadas canónicas",
"x1=x*(x+y)/(y+(1+epsilon)*x)\ny1=(epsilon*x+y)*(x+y)/(y+(1+epsilon)*x)\nxi=x1.diff(epsilon).subs(epsilon,0)\neta=y1.diff(epsilon).subs(epsilon,0)\nxi,eta\n\n\n(eta/xi).simplify()\n\ndsolve(y.diff(x)-eta/xi,y)",
"Vemos que r=y+x",
"r=symbols('r')\ns=Integral((1/xi).subs(y,r-x),x).doit()\ns\n\ns=s.subs(r,x+y)\ns\n\ns.expand()\n",
"La variable s sería 1+y/x. Pero las variables canónicas no son únicas, vimos que (F(r),G(r)+s) es canónica, para cualquier F no nula y G.En particular si tomamos F(r)=r y G(r)=-1, vemos que podemos elegir como coordenadas canónicas r=x+y y s=y/x. Hagamos la sustitución en coordenadas canónicas",
"r=x+y\ns=y/x\nexp5=r.diff(x)/s.diff(x)\nexp6=exp5.subs(y.diff(x),(x**2*sin(x+y)+y)/x/(1-x*sin(x+y)))\nexp6.simplify()\n",
"Vemos que el resultado es 1/sen(r). No obstante hagamos, de curiosidad, la sustitución como si no nos hubiesemos dado cuenta todavía del resultado. Hallemos los cambios inversos (r,s)->(x,y)",
"r2,s2,x2,y2=symbols('r2,s2,x2,y2')\nsolve([r2-x2-y2,s2-y2/x2],[x2,y2])\n\nexp6.subs({y:r2*s2/(s2+1),x:r2/(s2+1)}).simplify()",
"La ecuación que resulta r'=1/sen(r) se resuelve facilmente a mano. Nos queda cos(r)=s+C -> cos(x+y)=y/x+C que es la solución de la ecuación original\nEjercicio 2.2(c) p. 41 de Hydon. Hay que hallar el grupo de Lie cuyo generador infintesimal es\n$$X=2xy\\partial_x+(y^2-x^2)\\partial_y$$\nLa idea es\n1) Hallar coordenadas canónicas\n2) Usar que en canónicas el grupo de simetrías satisface \n$$\\hat{r}=r\\quad\\text{y}\\quad\\hat{s}=s+\\epsilon$$\n3) Escribir las relaciones anteriores en las variables originales.",
"x,y,r,s=symbols('x,y,r,s')\nf=Function('f')(x)\ndsolve(f.diff(x)-(f**2-x**2)/2/x/f)\n\nIntegral(1/(2*x*sqrt(x*(r-x))),x).doit()",
"No lo sabe hacer. Si hacemos la sustitución $x=u^2$ nos queda\n$$\\int\\frac{dx}{2x\\sqrt{x(r-x)}}=\\int\\frac{du}{u^2\\sqrt{r-u^2}}.$$\nY esta si la sabe resolver.",
"u=symbols('u')\nIntegral(1/u**2/sqrt(r-u**2),u).doit()",
"$s=-\\frac{1}{r}\\sqrt{\\frac{r}{u^2}-1}= -\\frac{1}{r} \\sqrt{ \\frac{r-x}{x}}= -\\frac{x}{x^2+y^2} \\sqrt{ \\frac{y^2/x}{x}}=-\\frac{y}{x^2+y^2}$\nAhora escribimos\n$$\\hat{r}=r\\quad\\text{y}\\quad\\hat{s}=s+\\epsilon$$\nen $x,y$.",
"x,y,xn,yn,epsilon=symbols('x,y,\\hat{x},\\hat{y},epsilon')\nA=solve([(xn**2+yn**2)/xn-(x**2+y**2)/x , -yn/(xn**2+yn**2)+y/(x**2+y**2)-epsilon],[xn,yn])\nA\n\nA[0]\n\nA=Matrix(A[0])\nA",
"Chequeemos que $\\left.\\frac{d}{d\\epsilon}(\\hat{x},\\hat{y})\\right|_{\\epsilon=0}=(2xy,y^2-x^2)$",
"A.diff(epsilon).subs(epsilon,0)",
"Chequeemos la propiedad de grupo de Lie. Definimos el operador $T$ con lambda",
"T=lambda x,y,epsilon: Matrix([ x/(epsilon**2*(x**2+y**2)-2*epsilon*y+1),-(epsilon*x**2+epsilon*y**2-y)/(epsilon**2*(x**2+y**2)-2*epsilon*y+1)])\n\nepsilon_1,epsilon_2=symbols('epsilon_1,epsilon_2')\nexpr=T(T(x,y,epsilon_1)[0],T(x,y,epsilon_1)[1],epsilon_2)-T(x,y,epsilon_1+epsilon_2)\nexpr\n\nsimplify(expr)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
anandha2017/udacity
|
nd101 Deep Learning Nanodegree Foundation/DockerImages/19_Autoencoders/notebooks/autoencoder/Simple_Autoencoder.ipynb
|
mit
|
[
"A Simple Autoencoder\nWe'll start off by building a simple autoencoder to compress the MNIST dataset. With autoencoders, we pass input data through an encoder that makes a compressed representation of the input. Then, this representation is passed through a decoder to reconstruct the input data. Generally the encoder and decoder will be built with neural networks, then trained on example data.\n\nIn this notebook, we'll be build a simple network architecture for the encoder and decoder. Let's get started by importing our libraries and getting the dataset.",
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\n\nimport numpy as np\nimport tensorflow as tf\nimport matplotlib.pyplot as plt\n\nfrom tensorflow.examples.tutorials.mnist import input_data\nmnist = input_data.read_data_sets('MNIST_data', validation_size=0)",
"Below I'm plotting an example image from the MNIST dataset. These are 28x28 grayscale images of handwritten digits.",
"img = mnist.train.images[2]\nplt.imshow(img.reshape((28, 28)), cmap='Greys_r')",
"We'll train an autoencoder with these images by flattening them into 784 length vectors. The images from this dataset are already normalized such that the values are between 0 and 1. Let's start by building basically the simplest autoencoder with a single ReLU hidden layer. This layer will be used as the compressed representation. Then, the encoder is the input layer and the hidden layer. The decoder is the hidden layer and the output layer. Since the images are normalized between 0 and 1, we need to use a sigmoid activation on the output layer to get values matching the input.\n\n\nExercise: Build the graph for the autoencoder in the cell below. The input images will be flattened into 784 length vectors. The targets are the same as the inputs. And there should be one hidden layer with a ReLU activation and an output layer with a sigmoid activation. Feel free to use TensorFlow's higher level API, tf.layers. For instance, you would use tf.layers.dense(inputs, units, activation=tf.nn.relu) to create a fully connected layer with a ReLU activation. The loss should be calculated with the cross-entropy loss, there is a convenient TensorFlow function for this tf.nn.sigmoid_cross_entropy_with_logits (documentation). You should note that tf.nn.sigmoid_cross_entropy_with_logits takes the logits, but to get the reconstructed images you'll need to pass the logits through the sigmoid function.",
"# Size of the encoding layer (the hidden layer)\nencoding_dim = 32 # feel free to change this value\n\nimage_size = mnist.train.images.shape[1]\nprint(image_size)\n\n# Input and target placeholders\ninputs_ = tf.placeholder(tf.float32, (None, image_size), name='inputs')\ntargets_ = tf.placeholder(tf.float32, (None, image_size), name='targets')\nprint(inputs_)\nprint(targets_)\n\n# Output of hidden layer, single fully connected layer here with ReLU activation\nencoded = tf.layers.dense(inputs_, encoding_dim, activation=tf.nn.relu)\nprint(encoded)\n\n# Output layer logits, fully connected layer with no activation\nlogits = tf.layers.dense(encoded, image_size, activation=None)\nprint(logits)\n\n# Sigmoid output from logits\ndecoded = tf.nn.sigmoid(logits, name='output')\nprint(decoded)\n\n# Sigmoid cross-entropy loss\nloss = tf.nn.sigmoid_cross_entropy_with_logits(labels=targets_, logits=logits)\nprint(loss)\n\n# Mean of the loss\ncost = tf.reduce_mean(loss)\nprint(cost)\n\n# Adam optimizer\nopt = tf.train.AdamOptimizer(0.001).minimize(cost)\nprint(opt)",
"Training",
"# Create the session\nsess = tf.Session()",
"Here I'll write a bit of code to train the network. I'm not too interested in validation here, so I'll just monitor the training loss. \nCalling mnist.train.next_batch(batch_size) will return a tuple of (images, labels). We're not concerned with the labels here, we just need the images. Otherwise this is pretty straightfoward training with TensorFlow. We initialize the variables with sess.run(tf.global_variables_initializer()). Then, run the optimizer and get the loss with batch_cost, _ = sess.run([cost, opt], feed_dict=feed).",
"epochs = 20\nbatch_size = 200\nsess.run(tf.global_variables_initializer())\nfor e in range(epochs):\n for ii in range(mnist.train.num_examples//batch_size):\n batch = mnist.train.next_batch(batch_size)\n feed = {inputs_: batch[0], targets_: batch[0]}\n batch_cost, _ = sess.run([cost, opt], feed_dict=feed)\n\n print(\"Epoch: {}/{}...\".format(e+1, epochs),\n \"Training loss: {:.4f}\".format(batch_cost))",
"Checking out the results\nBelow I've plotted some of the test images along with their reconstructions. For the most part these look pretty good except for some blurriness in some parts.",
"fig, axes = plt.subplots(nrows=2, ncols=10, sharex=True, sharey=True, figsize=(20,4))\nin_imgs = mnist.test.images[:10]\nreconstructed, compressed = sess.run([decoded, encoded], feed_dict={inputs_: in_imgs})\n\nfor images, row in zip([in_imgs, reconstructed], axes):\n for img, ax in zip(images, row):\n ax.imshow(img.reshape((28, 28)), cmap='Greys_r')\n ax.get_xaxis().set_visible(False)\n ax.get_yaxis().set_visible(False)\n\nfig.tight_layout(pad=0.1)\n\nsess.close()",
"Up Next\nWe're dealing with images here, so we can (usually) get better performance using convolution layers. So, next we'll build a better autoencoder with convolutional layers.\nIn practice, autoencoders aren't actually better at compression compared to typical methods like JPEGs and MP3s. But, they are being used for noise reduction, which you'll also build."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
robertoalotufo/ia898
|
src/ptrans.ipynb
|
mit
|
[
"Function ptrans\nSynopse\nPerform periodic translation in 1-D, 2-D or 3-D space.\n\ng = ptrans(f, t)\nOUTPUT\ng: Image. Periodically translated image.\n\n\nINPUT\nf: Image ndarray. Image to be translated.\nt: Tuple. (tz,tr,tc)\n\n\n\n\n\nDescription\nTranslate a 1-D, 2-D or 3-dimesional image periodically. This translation can be seen as a window view\ndisplacement on an infinite tile wall where each tile is a copy of the original image. The\nperiodical translation is related to the periodic convolution and discrete Fourier transform.\nBe careful when implementing this function using the mod, some mod implementations in C does not\nfollow the correct definition when the number is negative.",
"def ptrans(f,t):\n import numpy as np\n g = np.empty_like(f) \n if f.ndim == 1:\n W = f.shape[0]\n col = np.arange(W)\n g = f[(col-t)%W]\n elif f.ndim == 2:\n H,W = f.shape\n rr,cc = t\n row,col = np.indices(f.shape)\n g = f[(row-rr)%H, (col-cc)%W]\n elif f.ndim == 3:\n Z,H,W = f.shape\n zz,rr,cc = t\n z,row,col = np.indices(f.shape)\n g = f[(z-zz)%Z, (row-rr)%H, (col-cc)%W]\n return g\n\n\n\n# implementation using periodic convolution\ndef ptrans2(f, t):\n\n f, t = np.asarray(f), np.asarray(t).astype('int32')\n h = np.zeros(2*np.abs(t) + 1)\n t = t + np.abs(t)\n h[tuple(t)] = 1\n g = ia.pconv(f, h)\n return g\n\ndef ptrans2d(f,t):\n rr,cc = t\n H,W = f.shape\n \n r = rr%H\n c = cc%W\n \n g = np.empty_like(f)\n \n g[:r,:c] = f[H-r:H,W-c:W]\n g[:r,c:] = f[H-r:H,0:W-c]\n g[r:,:c] = f[0:H-r,W-c:W]\n g[r:,c:] = f[0:H-r,0:W-c]\n\n return g",
"Examples",
"testing = (__name__ == '__main__')\nif testing:\n ! jupyter nbconvert --to python ptrans.ipynb\n import numpy as np\n %matplotlib inline\n import matplotlib.image as mpimg\n import matplotlib.pyplot as plt\n import sys,os\n ia898path = os.path.abspath('../../')\n if ia898path not in sys.path:\n sys.path.append(ia898path)\n import ia898.src as ia",
"Example 1\nNumeric examples in 2D and 3D.",
"if testing:\n # 2D example\n f = np.arange(15).reshape(3,5)\n\n print(\"Original 2D image:\\n\",f,\"\\n\\n\")\n print(\"Image translated by (0,0):\\n\",ia.ptrans(f, (0,0)).astype(int),\"\\n\\n\")\n print(\"Image translated by (0,1):\\n\",ia.ptrans(f, (0,1)).astype(int),\"\\n\\n\")\n print(\"Image translated by (-1,2):\\n\",ia.ptrans(f, (-1,2)).astype(int),\"\\n\\n\")\n\nif testing:\n # 3D example\n f1 = np.arange(60).reshape(3,4,5)\n\n print(\"Original 3D image:\\n\",f1,\"\\n\\n\")\n print(\"Image translated by (0,0,0):\\n\",ia.ptrans(f1, (0,0,0)).astype(int),\"\\n\\n\")\n print(\"Image translated by (0,1,0):\\n\",ia.ptrans(f1, (0,1,0)).astype(int),\"\\n\\n\")\n print(\"Image translated by (-1,3,2):\\n\",ia.ptrans(f1, (-1,3,2)).astype(int),\"\\n\\n\")",
"Example 2\nImage examples in 2D",
"if testing:\n # 2D example\n f = mpimg.imread('../data/cameraman.tif')\n plt.imshow(f,cmap='gray'), plt.title('Original 2D image - Cameraman')\n plt.imshow(ia.ptrans(f, np.array(f.shape)//3),cmap='gray'), plt.title('Cameraman periodically translated')",
"Equation\nFor 2D case we have\n$$ \\begin{matrix}\n t &=& (t_r t_c),\\\n g = f_t &=& f_{tr,tc},\\\n g(rr,cc) &=& f((rr-t_r)\\ mod\\ H, (cc-t_c) \\ mod\\ W), 0 \\leq rr < H, 0 \\leq cc < W,\\\n \\mbox{where} & & \\ a \\ mod\\ N &=& (a + k N) \\ mod\\ N, k \\in Z.\n\\end{matrix} $$\nThe equation above can be extended to n-dimensional space.",
"if testing:\n print('testing ptrans')\n f = np.array([[1,2,3,4,5],[6,7,8,9,10],[11,12,13,14,15]],'uint8')\n print(repr(ia.ptrans(f, [-1,2]).astype(np.uint8)) == repr(np.array(\n [[ 9, 10, 6, 7, 8],\n [14, 15, 11, 12, 13],\n [ 4, 5, 1, 2, 3]],'uint8')))",
"Contributions\n\nRoberto A Lotufo, Sept 2013, converted to index computation\nAndré Luis da Costa, 1st semester 2011"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
rishuatgithub/MLPy
|
nlp/UPDATED_NLP_COURSE/05-Topic-Modeling/03-LDA-NMF-Assessment-Project-Solutions.ipynb
|
apache-2.0
|
[
"<a href='http://www.pieriandata.com'> <img src='../Pierian_Data_Logo.png' /></a>\n\nTopic Modeling Assessment Project\nTask: Import pandas and read in the quora_questions.csv file.",
"import pandas as pd\n\nquora = pd.read_csv('quora_questions.csv')\n\nquora.head()",
"Preprocessing\nTask: Use TF-IDF Vectorization to create a vectorized document term matrix. You may want to explore the max_df and min_df parameters.",
"from sklearn.feature_extraction.text import TfidfVectorizer\n\ntfidf = TfidfVectorizer(max_df=0.95, min_df=2, stop_words='english')\n\ndtm = tfidf.fit_transform(quora['Question'])\n\ndtm",
"Non-negative Matrix Factorization\nTASK: Using Scikit-Learn create an instance of NMF with 20 expected components. (Use random_state=42)..",
"from sklearn.decomposition import NMF\n\nnmf_model = NMF(n_components=20,random_state=42)\n\nnmf_model.fit(dtm)",
"TASK: Print our the top 15 most common words for each of the 20 topics.",
"for index,topic in enumerate(nmf_model.components_):\n print(f'THE TOP 15 WORDS FOR TOPIC #{index}')\n print([tfidf.get_feature_names()[i] for i in topic.argsort()[-15:]])\n print('\\n')",
"TASK: Add a new column to the original quora dataframe that labels each question into one of the 20 topic categories.",
"quora.head()\n\ntopic_results = nmf_model.transform(dtm)\n\ntopic_results.argmax(axis=1)\n\nquora['Topic'] = topic_results.argmax(axis=1)\n\nquora.head(10)",
"Great job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.17/_downloads/4db67f73b2950e88bd1e641ba8cf44c0/plot_modifying_data_inplace.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Modifying data in-place\nIt is often necessary to modify data once you have loaded it into memory.\nCommon examples of this are signal processing, feature extraction, and data\ncleaning. Some functionality is pre-built into MNE-python, though it is also\npossible to apply an arbitrary function to the data.",
"import mne\nimport os.path as op\nimport numpy as np\nfrom matplotlib import pyplot as plt",
"Load an example dataset, the preload flag loads the data into memory now",
"data_path = op.join(mne.datasets.sample.data_path(), 'MEG',\n 'sample', 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(data_path, preload=True)\nraw = raw.crop(0, 10)\nprint(raw)",
"Signal processing\nMost MNE objects have in-built methods for filtering:",
"filt_bands = [(1, 3), (3, 10), (10, 20), (20, 60)]\nf, (ax, ax2) = plt.subplots(2, 1, figsize=(15, 10))\ndata, times = raw[0]\n_ = ax.plot(data[0])\nfor fmin, fmax in filt_bands:\n raw_filt = raw.copy()\n raw_filt.filter(fmin, fmax, fir_design='firwin')\n _ = ax2.plot(raw_filt[0][0][0])\nax2.legend(filt_bands)\nax.set_title('Raw data')\nax2.set_title('Band-pass filtered data')",
"In addition, there are functions for applying the Hilbert transform, which is\nuseful to calculate phase / amplitude of your signal.",
"# Filter signal with a fairly steep filter, then take hilbert transform\n\nraw_band = raw.copy()\nraw_band.filter(12, 18, l_trans_bandwidth=2., h_trans_bandwidth=2.,\n fir_design='firwin')\nraw_hilb = raw_band.copy()\nhilb_picks = mne.pick_types(raw_band.info, meg=False, eeg=True)\nraw_hilb.apply_hilbert(hilb_picks)\nprint(raw_hilb[0][0].dtype)",
"Finally, it is possible to apply arbitrary functions to your data to do\nwhat you want. Here we will use this to take the amplitude and phase of\nthe hilbert transformed data.\n<div class=\"alert alert-info\"><h4>Note</h4><p>You can also use ``amplitude=True`` in the call to\n :meth:`mne.io.Raw.apply_hilbert` to do this automatically.</p></div>",
"# Take the amplitude and phase\nraw_amp = raw_hilb.copy()\nraw_amp.apply_function(np.abs, hilb_picks)\nraw_phase = raw_hilb.copy()\nraw_phase.apply_function(np.angle, hilb_picks)\n\nf, (a1, a2) = plt.subplots(2, 1, figsize=(15, 10))\na1.plot(raw_band[hilb_picks[0]][0][0].real)\na1.plot(raw_amp[hilb_picks[0]][0][0].real)\na2.plot(raw_phase[hilb_picks[0]][0][0].real)\na1.set_title('Amplitude of frequency band')\na2.set_title('Phase of frequency band')"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Upward-Spiral-Science/team1
|
other/Descriptive and Exploratory_oldversion.ipynb
|
apache-2.0
|
[
"Descriptive and Exploratory Answers",
"from mpl_toolkits.mplot3d import axes3d\nimport matplotlib.pyplot as plt\n%matplotlib inline \nimport numpy as np\nimport urllib2\n\ndef check_condition(row):\n if row[-1] == 0:\n return False\n return True\n\nurl = ('https://raw.githubusercontent.com/Upward-Spiral-Science'\n '/data/master/syn-density/output.csv')\ndata = urllib2.urlopen(url)\ncsv = np.genfromtxt(data, delimiter=\",\")\n\n# only look at data points with nonzero synapse value\na = np.apply_along_axis(check_condition, 1, csv)\na = np.where(a == True)[0]\nnonzero_rows = csv[a, :]\nnonzero_rows = nonzero_rows[1:, :]",
"What is the total number of synapses in our data set?",
"# Total number of synapses\nprint np.sum(nonzero_rows[:,4])",
"What is the maximum number of synapses at a given point in our data set?",
"# Max number of synapses\nmax_syn = np.argmax(nonzero_rows[:,4])\nprint max_syn\nloc = (nonzero_rows[max_syn,0],nonzero_rows[max_syn,1],nonzero_rows[max_syn,2]);\nprint loc",
"What are the minimum and maximum x, y, and z values? (and thus, the set of (x,y,z) for our data set?",
"print [min(csv[1:,1]),min(csv[1:,2]),min(csv[1:,3])] #(x,y,z) minimum\nprint [max(csv[1:,1]),max(csv[1:,2]),max(csv[1:,3])] #(x,y,z) maximum\n",
"What does the histogram of our data look like?",
"# Histogram\nfig = plt.figure()\nax = fig.gca()\nplt.hist(nonzero_rows[:,4])\nax.set_title('Synapse Density')\nax.set_xlabel('Number of Synapses')\nax.set_ylabel('Number of (x,y,z) points with synapse density = x')\nplt.show()",
"What does the probability mass function of our data look like?",
"# PMF\nsyns = csv[1:,4]\nsum = np.sum(syns)\ndensity = syns/sum\nmean = np.mean(density)\nstd = np.std(density)\nprint std, mean\n\n#for locating synapse values of zero\ndef check_condition(row):\n if row[-1] == 0:\n return False\n return True\n\n#for filtering by the mean number of synapses\ndef synapse_filt(row, avg):\n if row[-1] > avg:\n return True\n return False\n\nsamples = 5000\n\n# only look at data points where the number of synapses is greater than avg\na = np.apply_along_axis(check_condition, 1, csv)\na = np.where(a == True)[0]\nnonzero_rows = csv[a, :]\navg_synapse = np.mean(nonzero_rows[1:, -1])\nprint avg_synapse\nfilter_avg_synapse = np.apply_along_axis(synapse_filt, 1,\n nonzero_rows, avg_synapse)\na = np.where(filter_avg_synapse == True)[0]\nnonzero_filtered = nonzero_rows[a, :]\nxyz_only = nonzero_filtered[:, [1, 2, 3]]\n\n#randomly sample from the remaining data points\nperm = np.random.permutation(xrange(1, len(xyz_only[:])))\nxyz_only = xyz_only[perm[:samples]]\n# get range for graphing\nx_min = np.amin(xyz_only[:, 0])\nx_max = np.amax(xyz_only[:, 0])\ny_max = np.amax(xyz_only[:, 1])\ny_min = np.amin(xyz_only[:, 1])\nz_min = np.amin(xyz_only[:, 2])\nz_max = np.amax(xyz_only[:, 2])",
"What does our data look like in a 3-D scatter plot?",
"# following code adopted from\n# https://www.getdatajoy.com/examples/python-plots/3d-scatter-plot\n\nfig = plt.figure()\nax = fig.gca(projection='3d')\n\nax.set_title('3D Scatter Plot')\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_zlabel('z')\n\nax.set_xlim(x_min, x_max)\nax.set_ylim(y_min, y_max)\nax.set_zlim(z_min, z_max)\n\nax.view_init()\nax.dist = 12 # distance\n\nax.scatter(\n xyz_only[:, 0], xyz_only[:, 1], xyz_only[:, 2], # data\n color='purple', # marker colour\n marker='o', # marker shape\n s=30 # marker size\n)\nplt.show() # render the plot"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Weenkus/Machine-Learning-University-of-Washington
|
Regression/assignments/Feature selection and LASSO Programming Assignment 1.ipynb
|
mit
|
[
"Initialise the libraries",
"import numpy as np\nimport pandas as pd\nimport matplotlib.pyplot as plt\nfrom math import log, sqrt\nfrom sklearn import linear_model",
"Load the data",
"dtype_dict = {'bathrooms':float, 'waterfront':int, 'sqft_above':int, 'sqft_living15':float, 'grade':int, 'yr_renovated':int, 'price':float, 'bedrooms':float, 'zipcode':str, 'long':float, 'sqft_lot15':float, 'sqft_living':float, 'floors':float, 'condition':int, 'lat':float, 'date':str, 'sqft_basement':int, 'yr_built':int, 'id':str, 'sqft_lot':int, 'view':int}\n\nsales = pd.read_csv('../datasets/kc_house_data.csv', dtype=dtype_dict)\n\ntesting = pd.read_csv('../datasets/wk3_kc_house_test_data.csv', dtype=dtype_dict)\ntraining = pd.read_csv('../datasets/wk3_kc_house_train_data.csv', dtype=dtype_dict)\nvalidation = pd.read_csv('../datasets/wk3_kc_house_valid_data.csv', dtype=dtype_dict)",
"Feature engineering",
"sales['sqft_living_sqrt'] = sales['sqft_living'].apply(sqrt)\nsales['sqft_lot_sqrt'] = sales['sqft_lot'].apply(sqrt)\nsales['bedrooms_square'] = sales['bedrooms']*sales['bedrooms']\nsales['floors_square'] = sales['floors']*sales['floors']\n\ntesting['sqft_living_sqrt'] = testing['sqft_living'].apply(sqrt)\ntesting['sqft_lot_sqrt'] = testing['sqft_lot'].apply(sqrt)\ntesting['bedrooms_square'] = testing['bedrooms']*testing['bedrooms']\ntesting['floors_square'] = testing['floors']*testing['floors']\n\ntraining['sqft_living_sqrt'] = training['sqft_living'].apply(sqrt)\ntraining['sqft_lot_sqrt'] = training['sqft_lot'].apply(sqrt)\ntraining['bedrooms_square'] = training['bedrooms']*training['bedrooms']\ntraining['floors_square'] = training['floors']*training['floors']\n\nvalidation['sqft_living_sqrt'] = validation['sqft_living'].apply(sqrt)\nvalidation['sqft_lot_sqrt'] = validation['sqft_lot'].apply(sqrt)\nvalidation['bedrooms_square'] = validation['bedrooms']*validation['bedrooms']\nvalidation['floors_square'] = validation['floors']*validation['floors']\n\nall_features = ['bedrooms', 'bedrooms_square',\n 'bathrooms',\n 'sqft_living', 'sqft_living_sqrt',\n 'sqft_lot', 'sqft_lot_sqrt',\n 'floors', 'floors_square',\n 'waterfront', 'view', 'condition', 'grade',\n 'sqft_above',\n 'sqft_basement',\n 'yr_built', 'yr_renovated']",
"LASSO",
"model_all = linear_model.Lasso(alpha=5e2, normalize=True) # set parameters\nmodel_all.fit(sales[all_features], sales['price']) # learn weights\n\nprint (all_features, '\\n')\nprint (model_all.coef_)\n\nprint ('With intercetp, nonzeros: ', np.count_nonzero(model_all.coef_) + np.count_nonzero(model_all.intercept_))\nprint ('No intercept: ', np.count_nonzero(model_all.coef_))\n\nl1pentalities = np.logspace(1, 7, num=13)\nl1pentalities\n\nimport sys\nminRSS = sys.maxsize\nbestL1 = 0\nfor l1p in l1pentalities:\n model = linear_model.Lasso(alpha=l1p, normalize=True)\n model.fit(training[all_features], training['price']) # learn weights\n \n # Calculate the RSS\n RSS = ((model.predict(validation[all_features]) - validation.price) ** 2).sum()\n print (\"RSS: \", RSS, \" for L1: \", l1p)\n \n # Remember the min RSS\n if RSS < minRSS:\n minRSS = RSS\n bestL1 = l1p\n \nprint ('\\nMinimum RSS:', minRSS, ' for L1: ', bestL1)",
"Calculate the RSS for best L1 on test data",
"model = linear_model.Lasso(alpha=bestL1, normalize=True)\nmodel.fit(training[all_features], training['price']) # learn weights\n\n# Calculate the RSS\nRSS = ((model.predict(testing[all_features]) - testing.price) ** 2).sum()\nprint (\"RSS: \", RSS, \" for L1: \", bestL1)\n\nprint (model.coef_)\n\nprint (model.intercept_)\n\nnp.count_nonzero(model.coef_) + np.count_nonzero(model.intercept_)",
"2 Phase LASSO for finding a desired number of nonzero features\nFirst we loop through an array of L1 alues and find an interval where our model gets the desired number of nonzero coefficients. In phase 2 we loop once more through the interval (from the first phase) but with small steps, the goal of the second phase is to find the optimal L1 for our nonzero number.",
"max_nonzeros = 7\nL1_range = np.logspace(1, 4, num=20)\nprint (L1_range)\n\nl1_penalty_min = L1_range[0]\nl1_penalty_max = L1_range[19]\n\nfor L1 in L1_range:\n model = linear_model.Lasso(alpha=L1, normalize=True)\n model.fit(training[all_features], training['price']) # learn weights\n \n nonZeroes = np.count_nonzero(model.coef_) + np.count_nonzero(model.intercept_)\n \n # The largest l1_penalty that has more non-zeros than ‘max_nonzeros’\n if (nonZeroes > max_nonzeros) and (L1 > l1_penalty_min) :\n l1_penalty_min = L1\n \n # The smallest l1_penalty that has fewer non-zeros than ‘max_nonzeros’\n if (nonZeroes < max_nonzeros) and (L1 < l1_penalty_max):\n l1_penalty_max = L1\n \nprint ('l1_penalty_min: ', l1_penalty_min)\nprint ('l1_penalty_max: ', l1_penalty_max)\n\nL1_narrow_interval = np.linspace(l1_penalty_min,l1_penalty_max,20)\nprint (L1_narrow_interval)\n\nminRSS = sys.maxsize\nbestL1 = 0\n\n# Find the model with desired number of nonzeroes with the best RSS\nfor L1 in L1_narrow_interval:\n model = linear_model.Lasso(alpha=L1, normalize=True)\n model.fit(validation[all_features], validation['price']) # learn weights\n \n # Calculate the RSS\n RSS = ((model.predict(validation[all_features]) - validation.price) ** 2).sum()\n print (\"RSS: \", RSS, \" for L1: \", L1)\n \n # Remember the min RSS\n if RSS < minRSS:\n minRSS = RSS\n bestL1 = L1\n \nprint ('\\nMinimum RSS:', minRSS, ' for L1: ', bestL1)\n\nmodel = linear_model.Lasso(alpha=bestL1, normalize=True)\nmodel.fit(validation[all_features], validation['price']) # learn weights\n\nprint (all_features)\nprint ()\nprint (model.coef_)\nprint ()\nprint (model.intercept_)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
Bio204-class/bio204-notebooks
|
2016-04-11-Sampling-Distribution-Correlation-Coefficient.ipynb
|
cc0-1.0
|
[
"import numpy as np\nimport scipy.stats as stats\nimport pandas as pd\nimport matplotlib.pyplot as plt\nimport seaborn as sbn\nsbn.set_style(\"white\") # get rid of seaborn grid lines\n\n%matplotlib inline\nnp.random.seed(20160410)",
"Sampling Distribution of Pearson Correlation\nAs with all the other statistics we've looked at over the course of the semester, the sample correlation coefficient may differ substantially from the underly population correlation coefficient, depending on the vagaries of sampling. \nFirst we'll generate a single sample, drawn from an underlying bivariate normal distribution with uncorrelated variables.",
"# generate bivariate normal data for uncorrelated variables\n# See the docs on scipy.stats.multivariate_normal\n\n# bivariate mean\nmean = [0,0] \n\n# covariance matrix\ncov = np.array([[1,0],\n [0,1]])\n\nsample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=30) \n\n\nsbn.jointplot(sample[:,0], sample[:,1])\nprint(\"Correlation matrix:\\n\", np.corrcoef(sample, rowvar=False, ddof=1))",
"Simulate the sampling distribution of correlation coefficient, uncorrelated variables\nFirst we'll simulate the sampling distribution of $r_{xy}$ when the true population correlation $\\rho_{xy} = 0$ and the sample size $n = 30$.",
"mean = [0,0]\ncov = [[1,0],\n [0,1]]\nssize = 30\nnsims = 2500\n\ncors = []\nfor n in range(nsims):\n sample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=ssize)\n r = np.corrcoef(sample, rowvar=False, ddof=1)[0,1]\n cors.append(r)\nsbn.distplot(cors)\npass",
"The above plot looks fairly symmetrical, and approximately normal. However, now let's look at the sampling distribution for $\\rho_{xy} = 0.9$ and $n=30$.",
"mean = [0,0]\ncov = [[1,0.9],\n [0.9,1]]\nssize = 30\nnsims = 2500\n\ncors090 = []\nfor n in range(nsims):\n sample = stats.multivariate_normal.rvs(mean=mean, cov=cov, size=ssize)\n r = np.corrcoef(sample, rowvar=False, ddof=1)[0,1]\n cors090.append(r)\n \nsbn.distplot(cors090)\npass",
"We see that the sampling correlation is strongly skewed for larger values of $\\rho_{xy}$\nFisher's transformation normalizes the distribution of the Pearson product-moment correlation, and is usually used to calculate confidence intervals and carry out hypothesis test with correlations.\n$$\nF(r) = \\frac{1}{2}\\ln \\frac{1+r}{1-r} = \\mbox{arctanh}(r)\n$$\nWhere $\\mbox{arctanh}(r)$ is the inverse hyperbolic tangent of the correlation coefficient.\nIf the underlying true correlation is $\\rho$, the sampling distribution of $F(\\rho)$ is approximately normal:\n$$\nF(\\rho) \\sim N(\\mu = \\mbox{arctanh}(\\rho),\\ \\sigma = \\frac{1}{\\sqrt{n-3}})\n$$",
"# plot for sampling distribution when pho = 0.9\n# using Fisher's transformation\nsbn.distplot(np.arctanh(cors090))\nprint(\"\")\npass",
"Confidence intervals in the space of the transformed variables are:\n$$\n100(1 - \\alpha)\\%\\text{CI}: \\operatorname{arctanh}(\\rho) \\in [\\operatorname{arctanh}(r) \\pm z_{\\alpha/2}SE]\n$$\nTo put this back in terms of untransformed correlations:\n$$\n100(1 - \\alpha)\\%\\text{CI}: \\rho \\in [\\operatorname{tanh}(\\operatorname{arctanh}(r) - z_{\\alpha/2}SE), \\operatorname{tanh}(\\operatorname{arctanh}(r) + z_{\\alpha/2}SE)]\n$$\nLet's write a Python function to calculate the confidence intervals for correlations:",
"def correlationCI(r, n, alpha=0.05):\n mu = np.arctanh(r)\n sigma = 1.0/np.sqrt(n-3)\n z = stats.norm.ppf(alpha/2)\n left = np.tanh(mu) - z*sigma\n right = np.tanh(mu) + z*sigma\n return (left, right)\n \n\ncorrelationCI(0, 30)\n\nssizes = np.arange(10,250,step=10)\ncis = []\nfor i in ssizes:\n cis.append(correlationCI(0, i)[0])\n\nplt.plot(ssizes, cis, '-o')\nplt.xlabel(\"Sample size\")\nplt.ylabel(\"Half-width of CI\")\nplt.title(r\"\"\"Half-width of CIs for $\\rho=0$\nas as function of sample size\"\"\")\n\n\npass\n\nstats.pearsonr(sample[:,0], sample[:,1])\n\n?stats.pearsonr",
"Rank Correlation\nThere are two popular \"robust\" estimators of correlation, based on a consideration of correlations of ranks. These are known as Spearman's Rho and Kendall's Tau.\nSpearman's Rho\nSpearman's rank correlation, or Spearman's Rho, for variables $X$ and $Y$ is simply the correlation of the ranks of the $X$ and $Y$.\nLet $R_X$ and $R_Y$ be the rank values of the variables $X$ and $Y$. \n$$\n\\rho_S = \\frac{\\operatorname{cov}(R_X, R_Y)}{\\sigma_{R_X}\\sigma_{R_Y}}\n$$\nKendall's Tau\nKendall's rank correlation, or Kendall's Tau, is defined in terms of concordant and discordant pairs of observations, in the rank scale.\n$$\n\\tau = \\frac{(\\text{number of concordant pairs}) - (\\text{number of discordant pairs})}{n (n-1) /2}\n$$\nThe definition of concordant vs discordant pairs considers all pairs of observations and asks whether they are in the same relative rank order for X and Y (if so, than consider concordant) or in different relative rank order (if so, then discordant).\nRank correlation measures are more robust to outliers at the extremes of the distribution",
"n = 50\nx = np.linspace(1,10,n) + stats.norm.rvs(size=n)\ny = x + stats.norm.rvs(loc=1, scale=1.5, size=n)\nplt.scatter(x,y)\n\nprint(\"Pearson r: \", stats.pearsonr(x, y)[0])\nprint(\"Spearman's rho: \", stats.spearmanr(x, y)[0])\nprint(\"Kendall's tau: \", stats.kendalltau(x, y)[0])\n\npass\n\npollute_X = np.concatenate([x, stats.norm.rvs(loc=14, size=1), stats.norm.rvs(loc=-1, size=1)])\npollute_Y = np.concatenate([y, stats.norm.rvs(loc=6, size=1), stats.norm.rvs(loc=8, size=1)])\n\nplt.scatter(pollute_X, pollute_Y)\n\nprint(\"Pearson r: \", stats.pearsonr(pollute_X, pollute_Y)[0])\nprint(\"Spearman's rho: \", stats.spearmanr(pollute_X,pollute_Y)[0])\nprint(\"Kendall's tau: \", stats.kendalltau(pollute_X,pollute_Y)[0])",
"Association Between Categorical Variables\nA standard approach for testing the independence/dependence of a pair of categorical variables is to use a $\\chi^2$ (Chi-square) test of independence. \nThe null and alternative hypotheses for the $\\chi^2$ test are as follows:\n\n$H_0$: the two categorical variables are independent\n$H_A$: the two categorical variables are dependent\n\nContingency tables\nWe typically depict the relationship between categorical variables using a \"contingency table\".\n| | B1 | B2 | Total |\n|-------|---------------------|---------------------|-------------------------------|\n| A1 | $c_{11}$ | $c_{12}$ | $c_{11}$+$c_{12}$ |\n| A2 | $c_{21}$ | $c_{22}$ | $c_{12}$+$c_{22}$ |\n| Total | $c_{11}$+$c_{21}$ | $c_{12}$+$c_{22}$ | $c_{11}$+$c_{12}$+$c_{12}$+$c_{22}$ |\nThe rows and columns indicate the different categories for variables A and B respectively, and the cells, $c_{ij}$ give the counts of the number of observations for the corresponding combination of A and B. For example, the cell $c_{11}$ gives the number of observations that that belong to both the category A1 and B1, while $c_{12}$ gives the number that are both A1 and B2, etc.",
"# construct a contingency table for the sex and survival categorical variables\n# from the bumpus data set\ndataurl = \"https://github.com/Bio204-class/bio204-datasets/raw/master/bumpus-data.txt\"\nbumpus = pd.read_table(dataurl)\n\nobserved = pd.crosstab(bumpus.survived, bumpus.sex, margins=True)\nobserved",
"Expected counts\nIf the two categorical variables were independent than we would expect that the count in each cell of the contigency table to equal the product of the marginal probalities times the total number of observations.",
"nobs = observed.ix['All','All']\n\nprob_female = observed.ix['All','f']/nobs\nprob_male = observed.ix['All', 'm']/nobs\n\nprob_surv = observed.ix['T', 'All']/nobs\nprob_died = observed.ix['F', 'All']/nobs\n\nexpected_counts = []\nfor i in (prob_died, prob_surv):\n row = []\n for j in (prob_female, prob_male):\n row.append(i * j * nobs)\n expected_counts.append(row + [np.sum(row)])\nexpected_counts.append(np.sum(expected_counts,axis=0).tolist())\n\nexpected = pd.DataFrame(expected_counts, index=observed.index, columns=observed.columns)\n\nprint(\"Table of Expected Counts\")\nexpected\n\nZ2 = (observed-expected)**2/expected\nZ2\n\nchi2 = np.sum(Z2.values)\nchi2\n\ndf = (len(bumpus.survived.unique()) - 1) * (len(bumpus.sex.unique()) - 1)\nprint(\"chi2 = {}, df = {}, pval = {}\".format(chi2, df, stats.chi2.sf(chi2, df=df)))",
"$X^2$-square test of independence using scipy.stats",
"# Same analysis with scipy.stats fxns\nobserved_nomargin = pd.crosstab(bumpus.survived, bumpus.sex, margins=False)\nChi2, Pval, Dof, Expected = stats.chi2_contingency(observed_nomargin.values, \n correction=False)\n\nChi2, Pval, Dof\n\nbumpus.survived.unique()"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sassoftware/sas-viya-machine-learning
|
overview/vdmml_bank-Python.ipynb
|
apache-2.0
|
[
"Build models programmatically using Python API\nThe SAS Python SWAT package enables you to connect to SAS Cloud Analytic Services (CAS) engine that is the centerpiece of the SAS Viya framework.\nIn order to access this functionality, the SAS SWAT package must first be downloaded and installed from https://github.com/sassoftware/python-swat",
"# Import packages\nfrom swat import *\nfrom pprint import pprint\nfrom swat.render import render_html\nfrom matplotlib import pyplot as plt\nimport pandas as pd\nimport sys\n%matplotlib inline\n\n# Start a CAS session\ncashost='<your CAS server here>'\ncasport=<your CAS server port here>\ncasauth=\"~/.authinfo\"\nsess = CAS(cashost, casport, authinfo=casauth, caslib=\"public\")\n\n# Set helper variables\ngcaslib=\"public\"\nprepped_data=\"bank_prepped\"\ntarget = {\"b_tgt\"}\nclass_inputs = {\"cat_input1\", \"cat_input2\", \"demog_ho\", \"demog_genf\"}\ninterval_inputs = {\"im_demog_age\", \"im_demog_homeval\", \"im_demog_inc\", \"demog_pr\", \"log_rfm1\", \"rfm2\", \"log_im_rfm3\", \"rfm4\", \"rfm5\", \"rfm6\", \"rfm7\", \"rfm8\", \"rfm9\", \"rfm10\", \"rfm11\", \"rfm12\"}\nclass_vars = target | class_inputs",
"Train and score Stepwise Regression model using the data prepared in SAS Studio",
"# Load action set\nsess.loadactionset(actionset=\"regression\")\n\n# Train Logistic Regression\nlr=sess.regression.logistic(\n table={\"name\":prepped_data, \"caslib\":gcaslib},\n classVars=[{\"vars\":class_vars}],\n model={\n \"depVars\":[{\"name\":\"b_tgt\", \"options\":{\"event\":\"1\"}}],\n \"effects\":[{\"vars\":class_inputs | interval_inputs}]\n },\n partByVar={\"name\":\"_partind_\", \"train\":\"1\", \"valid\":\"0\"},\n selection={\"method\":\"STEPWISE\"},\n output={\"casOut\":{\"name\":\"_scored_logistic\", \"replace\":True}, \"copyVars\":{\"account\", \"b_tgt\", \"_partind_\"}}\n)\n\n# Output model statistics\nrender_html(lr)\n\n# Compute p_b_tgt0 and p_b_tgt1 for assessment\nsess.dataStep.runCode(\n code=\"data _scored_logistic; set _scored_logistic; p_b_tgt0=1-_pred_; rename _pred_=p_b_tgt1; run;\"\n)",
"Load the GBM model create in SAS Visual Analytics and score using this model",
"# 1. Load GBM model (ASTORE) created in VA\nsess.loadTable(\n caslib=\"models\", path=\"Gradient_Boosting_VA.sashdat\", \n casout={\"name\":\"gbm_astore_model\",\"caslib\":\"casuser\", \"replace\":True}\n)\n\n# 2. Score code from VA (for data preparation)\nsess.dataStep.runCode(\n code=\"\"\"data bank_part_post; \n set bank_part(caslib='public'); \n _va_calculated_54_1=round('b_tgt'n,1.0);\n _va_calculated_54_2=round('demog_genf'n,1.0);\n _va_calculated_54_3=round('demog_ho'n,1.0);\n _va_calculated_54_4=round('_PartInd_'n,1.0);\n run;\"\"\"\n)\n\n# 3. Score using ASTORE\nsess.loadactionset(actionset=\"astore\")\n\nsess.astore.score(\n table={\"name\":\"bank_part_post\"},\n rstore={\"name\":\"gbm_astore_model\"},\n out={\"name\":\"_scored_gbm\", \"replace\":True},\n copyVars={\"account\", \"_partind_\", \"b_tgt\"}\n)\n\n# 4. Rename p_b_tgt0 and p_b_tgt1 for assessment\nsess.dataStep.runCode(\n code=\"\"\"data _scored_gbm; \n set _scored_gbm; \n rename p__va_calculated_54_10=p_b_tgt0\n p__va_calculated_54_11=p_b_tgt1;\n run;\"\"\"\n)",
"Load the Forest model created in SAS Studio and score using this model",
"# Load action set \nsess.loadactionset(actionset=\"decisionTree\")\n\n# Score using forest_model table\nsess.decisionTree.forestScore(\n table={\"name\":prepped_data, \"caslib\":gcaslib},\n modelTable={\"name\":\"forest_model\", \"caslib\":\"public\"},\n casOut={\"name\":\"_scored_rf\", \"replace\":True},\n copyVars={\"account\", \"b_tgt\", \"_partind_\"},\n vote=\"PROB\"\n)\n\n# Create p_b_tgt0 and p_b_tgt1 as _rf_predp_ is the probability of event in _rf_predname_\nsess.dataStep.runCode(\n code=\"\"\"data _scored_rf; \n set _scored_rf; \n if _rf_predname_=1 then do; \n p_b_tgt1=_rf_predp_; \n p_b_tgt0=1-p_b_tgt1; \n end; \n if _rf_predname_=0 then do; \n p_b_tgt0=_rf_predp_; \n p_b_tgt1=1-p_b_tgt0; \n end; \n run;\"\"\"\n)",
"Load the SVM model created in SAS Studio and score using this model",
"# Score using ASTORE\nsess.loadactionset(actionset=\"astore\")\n\nsess.astore.score(\n table={\"name\":prepped_data, \"caslib\":gcaslib},\n rstore={\"name\":\"svm_astore_model\", \"caslib\":\"public\"},\n out={\"name\":\"_scored_svm\", \"replace\":True},\n copyVars={\"account\", \"_partind_\", \"b_tgt\"}\n)",
"Assess models from SAS Visual Analytics, SAS Studio and the new models created in Python interface",
"# Assess models\ndef assess_model(prefix):\n return sess.percentile.assess(\n table={\n \"name\":\"_scored_\" + prefix, \n \"where\": \"strip(put(_partind_, best.))='0'\"\n },\n inputs=[{\"name\":\"p_b_tgt1\"}], \n response=\"b_tgt\",\n event=\"1\",\n pVar={\"p_b_tgt0\"},\n pEvent={\"0\"} \n )\n\nlrAssess=assess_model(prefix=\"logistic\") \nlr_fitstat =lrAssess.FitStat\nlr_rocinfo =lrAssess.ROCInfo\nlr_liftinfo=lrAssess.LIFTInfo\n\nrfAssess=assess_model(prefix=\"rf\") \nrf_fitstat =rfAssess.FitStat\nrf_rocinfo =rfAssess.ROCInfo\nrf_liftinfo=rfAssess.LIFTInfo\n\ngbmAssess=assess_model(prefix=\"gbm\") \ngbm_fitstat =gbmAssess.FitStat\ngbm_rocinfo =gbmAssess.ROCInfo\ngbm_liftinfo=gbmAssess.LIFTInfo\n\nsvmAssess=assess_model(prefix=\"svm\") \nsvm_fitstat =svmAssess.FitStat\nsvm_rocinfo =svmAssess.ROCInfo\nsvm_liftinfo=svmAssess.LIFTInfo\n\n# Add new variable to indicate type of model\nlr_liftinfo[\"model\"]=\"Logistic (Python API)\"\nlr_rocinfo[\"model\"]='Logistic (Python API)'\nrf_liftinfo[\"model\"]=\"Autotuned Forest (SAS Studio)\"\nrf_rocinfo[\"model\"]=\"Autotuned Forest (SAS Studio)\"\ngbm_liftinfo[\"model\"]=\"Gradient Boosting (SAS VA)\"\ngbm_rocinfo[\"model\"]=\"Gradient Boosting (SAS VA)\"\nsvm_liftinfo[\"model\"]=\"SVM (SAS Studio)\"\nsvm_rocinfo[\"model\"]=\"SVM (SAS Studio)\"\n\n# Append data\nall_liftinfo=lr_liftinfo.append(rf_liftinfo, ignore_index=True) \\\n .append(gbm_liftinfo, ignore_index=True) \\\n .append(svm_liftinfo, ignore_index=True) \nall_rocinfo=lr_rocinfo.append(rf_rocinfo, ignore_index=True) \\\n .append(gbm_rocinfo, ignore_index=True) \\\n .append(svm_rocinfo, ignore_index=True) \n \nprint(\"AUC (using validation data)\".center(80, '-'))\nall_rocinfo[[\"model\", \"C\"]].drop_duplicates(keep=\"first\").sort_values(by=\"C\", ascending=False) ",
"Draw Assessment Plots",
"# Draw ROC charts \nplt.figure()\nfor key, grp in all_rocinfo.groupby([\"model\"]):\n plt.plot(grp[\"FPR\"], grp[\"Sensitivity\"], label=key)\nplt.plot([0,1], [0,1], \"k--\")\nplt.xlabel(\"False Positive Rate\")\nplt.ylabel(\"True Positive Rate\")\nplt.grid(True)\nplt.legend(loc=\"best\")\nplt.title(\"ROC Curve (using validation data)\")\nplt.show()\n\n# Draw lift charts \nplt.figure()\nfor key, grp in all_liftinfo.groupby([\"model\"]):\n plt.plot(grp[\"Depth\"], grp[\"Lift\"], label=key)\nplt.xlabel(\"Depth\")\nplt.ylabel(\"Lift\")\nplt.grid(True)\nplt.legend(loc=\"best\")\nplt.title(\"Lift Chart (using validation data)\")\nplt.show()\n\n# Close the CAS session\n# sess.close()"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
GoogleCloudPlatform/vertex-ai-samples
|
notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb
|
apache-2.0
|
[
"# Copyright 2020 Google LLC\n#\n# Licensed under the Apache License, Version 2.0 (the \"License\");\n# you may not use this file except in compliance with the License.\n# You may obtain a copy of the License at\n#\n# https://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.",
"Vertex client library: AutoML image object detection model for online prediction\n<table align=\"left\">\n <td>\n <a href=\"https://colab.research.google.com/github/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/colab-logo-32px.png\" alt=\"Colab logo\"> Run in Colab\n </a>\n </td>\n <td>\n <a href=\"https://github.com/GoogleCloudPlatform/vertex-ai-samples/blob/master/notebooks/community/gapic/automl/showcase_automl_image_object_detection_online.ipynb\">\n <img src=\"https://cloud.google.com/ml-engine/images/github-logo-32px.png\" alt=\"GitHub logo\">\n View on GitHub\n </a>\n </td>\n</table>\n<br/><br/><br/>\nOverview\nThis tutorial demonstrates how to use the Vertex client library for Python to create image object detection models and do online prediction using Google Cloud's AutoML.\nDataset\nThe dataset used for this tutorial is the Salads category of the OpenImages dataset from TensorFlow Datasets. This dataset does not require any feature engineering. The version of the dataset you will use in this tutorial is stored in a public Cloud Storage bucket. The trained model predicts the bounding box locations and corresponding type of salad items in an image from a class of five items: salad, seafood, tomato, baked goods, or cheese.\nObjective\nIn this tutorial, you create an AutoML image object detection model and deploy for online prediction from a Python script using the Vertex client library. You can alternatively create and deploy models using the gcloud command-line tool or online using the Google Cloud Console.\nThe steps performed include:\n\nCreate a Vertex Dataset resource.\nTrain the model.\nView the model evaluation.\nDeploy the Model resource to a serving Endpoint resource.\nMake a prediction.\nUndeploy the Model.\n\nCosts\nThis tutorial uses billable components of Google Cloud (GCP):\n\nVertex AI\nCloud Storage\n\nLearn about Vertex AI\npricing and Cloud Storage\npricing, and use the Pricing\nCalculator\nto generate a cost estimate based on your projected usage.\nInstallation\nInstall the latest version of Vertex client library.",
"import os\nimport sys\n\n# Google Cloud Notebook\nif os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n USER_FLAG = \"--user\"\nelse:\n USER_FLAG = \"\"\n\n! pip3 install -U google-cloud-aiplatform $USER_FLAG",
"Install the latest GA version of google-cloud-storage library as well.",
"! pip3 install -U google-cloud-storage $USER_FLAG",
"Restart the kernel\nOnce you've installed the Vertex client library and Google cloud-storage, you need to restart the notebook kernel so it can find the packages.",
"if not os.getenv(\"IS_TESTING\"):\n # Automatically restart kernel after installs\n import IPython\n\n app = IPython.Application.instance()\n app.kernel.do_shutdown(True)",
"Before you begin\nGPU runtime\nMake sure you're running this notebook in a GPU runtime if you have that option. In Colab, select Runtime > Change Runtime Type > GPU\nSet up your Google Cloud project\nThe following steps are required, regardless of your notebook environment.\n\n\nSelect or create a Google Cloud project. When you first create an account, you get a $300 free credit towards your compute/storage costs.\n\n\nMake sure that billing is enabled for your project.\n\n\nEnable the Vertex APIs and Compute Engine APIs.\n\n\nThe Google Cloud SDK is already installed in Google Cloud Notebook.\n\n\nEnter your project ID in the cell below. Then run the cell to make sure the\nCloud SDK uses the right project for all the commands in this notebook.\n\n\nNote: Jupyter runs lines prefixed with ! as shell commands, and it interpolates Python variables prefixed with $ into these commands.",
"PROJECT_ID = \"[your-project-id]\" # @param {type:\"string\"}\n\nif PROJECT_ID == \"\" or PROJECT_ID is None or PROJECT_ID == \"[your-project-id]\":\n # Get your GCP project id from gcloud\n shell_output = !gcloud config list --format 'value(core.project)' 2>/dev/null\n PROJECT_ID = shell_output[0]\n print(\"Project ID:\", PROJECT_ID)\n\n! gcloud config set project $PROJECT_ID",
"Region\nYou can also change the REGION variable, which is used for operations\nthroughout the rest of this notebook. Below are regions supported for Vertex. We recommend that you choose the region closest to you.\n\nAmericas: us-central1\nEurope: europe-west4\nAsia Pacific: asia-east1\n\nYou may not use a multi-regional bucket for training with Vertex. Not all regions provide support for all Vertex services. For the latest support per region, see the Vertex locations documentation",
"REGION = \"us-central1\" # @param {type: \"string\"}",
"Timestamp\nIf you are in a live tutorial session, you might be using a shared test account or project. To avoid name collisions between users on resources created, you create a timestamp for each instance session, and append onto the name of resources which will be created in this tutorial.",
"from datetime import datetime\n\nTIMESTAMP = datetime.now().strftime(\"%Y%m%d%H%M%S\")",
"Authenticate your Google Cloud account\nIf you are using Google Cloud Notebook, your environment is already authenticated. Skip this step.\nIf you are using Colab, run the cell below and follow the instructions when prompted to authenticate your account via oAuth.\nOtherwise, follow these steps:\nIn the Cloud Console, go to the Create service account key page.\nClick Create service account.\nIn the Service account name field, enter a name, and click Create.\nIn the Grant this service account access to project section, click the Role drop-down list. Type \"Vertex\" into the filter box, and select Vertex Administrator. Type \"Storage Object Admin\" into the filter box, and select Storage Object Admin.\nClick Create. A JSON file that contains your key downloads to your local environment.\nEnter the path to your service account key as the GOOGLE_APPLICATION_CREDENTIALS variable in the cell below and run the cell.",
"# If you are running this notebook in Colab, run this cell and follow the\n# instructions to authenticate your GCP account. This provides access to your\n# Cloud Storage bucket and lets you submit training jobs and prediction\n# requests.\n\n# If on Google Cloud Notebook, then don't execute this code\nif not os.path.exists(\"/opt/deeplearning/metadata/env_version\"):\n if \"google.colab\" in sys.modules:\n from google.colab import auth as google_auth\n\n google_auth.authenticate_user()\n\n # If you are running this notebook locally, replace the string below with the\n # path to your service account key and run this cell to authenticate your GCP\n # account.\n elif not os.getenv(\"IS_TESTING\"):\n %env GOOGLE_APPLICATION_CREDENTIALS ''",
"Set up variables\nNext, set up some variables used throughout the tutorial.\nImport libraries and define constants\nImport Vertex client library\nImport the Vertex client library into our Python environment.",
"import time\n\nfrom google.cloud.aiplatform import gapic as aip\nfrom google.protobuf import json_format\nfrom google.protobuf.json_format import MessageToJson, ParseDict\nfrom google.protobuf.struct_pb2 import Struct, Value",
"Vertex constants\nSetup up the following constants for Vertex:\n\nAPI_ENDPOINT: The Vertex API service endpoint for dataset, model, job, pipeline and endpoint services.\nPARENT: The Vertex location root path for dataset, model, job, pipeline and endpoint resources.",
"# API service endpoint\nAPI_ENDPOINT = \"{}-aiplatform.googleapis.com\".format(REGION)\n\n# Vertex location root path for your dataset, model and endpoint resources\nPARENT = \"projects/\" + PROJECT_ID + \"/locations/\" + REGION",
"AutoML constants\nSet constants unique to AutoML datasets and training:\n\nDataset Schemas: Tells the Dataset resource service which type of dataset it is.\nData Labeling (Annotations) Schemas: Tells the Dataset resource service how the data is labeled (annotated).\nDataset Training Schemas: Tells the Pipeline resource service the task (e.g., classification) to train the model for.",
"# Image Dataset type\nDATA_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/metadata/image_1.0.0.yaml\"\n# Image Labeling type\nLABEL_SCHEMA = \"gs://google-cloud-aiplatform/schema/dataset/ioformat/image_bounding_box_io_format_1.0.0.yaml\"\n# Image Training task\nTRAINING_SCHEMA = \"gs://google-cloud-aiplatform/schema/trainingjob/definition/automl_image_object_detection_1.0.0.yaml\"",
"Tutorial\nNow you are ready to start creating your own AutoML image object detection model.\nSet up clients\nThe Vertex client library works as a client/server model. On your side (the Python script) you will create a client that sends requests and receives responses from the Vertex server.\nYou will use different clients in this tutorial for different steps in the workflow. So set them all up upfront.\n\nDataset Service for Dataset resources.\nModel Service for Model resources.\nPipeline Service for training.\nEndpoint Service for deployment.\nPrediction Service for serving.",
"# client options same for all services\nclient_options = {\"api_endpoint\": API_ENDPOINT}\n\n\ndef create_dataset_client():\n client = aip.DatasetServiceClient(client_options=client_options)\n return client\n\n\ndef create_model_client():\n client = aip.ModelServiceClient(client_options=client_options)\n return client\n\n\ndef create_pipeline_client():\n client = aip.PipelineServiceClient(client_options=client_options)\n return client\n\n\ndef create_endpoint_client():\n client = aip.EndpointServiceClient(client_options=client_options)\n return client\n\n\ndef create_prediction_client():\n client = aip.PredictionServiceClient(client_options=client_options)\n return client\n\n\nclients = {}\nclients[\"dataset\"] = create_dataset_client()\nclients[\"model\"] = create_model_client()\nclients[\"pipeline\"] = create_pipeline_client()\nclients[\"endpoint\"] = create_endpoint_client()\nclients[\"prediction\"] = create_prediction_client()\n\nfor client in clients.items():\n print(client)",
"Dataset\nNow that your clients are ready, your first step in training a model is to create a managed dataset instance, and then upload your labeled data to it.\nCreate Dataset resource instance\nUse the helper function create_dataset to create the instance of a Dataset resource. This function does the following:\n\nUses the dataset client service.\nCreates an Vertex Dataset resource (aip.Dataset), with the following parameters:\ndisplay_name: The human-readable name you choose to give it.\nmetadata_schema_uri: The schema for the dataset type.\nCalls the client dataset service method create_dataset, with the following parameters:\nparent: The Vertex location root path for your Database, Model and Endpoint resources.\ndataset: The Vertex dataset object instance you created.\nThe method returns an operation object.\n\nAn operation object is how Vertex handles asynchronous calls for long running operations. While this step usually goes fast, when you first use it in your project, there is a longer delay due to provisioning.\nYou can use the operation object to get status on the operation (e.g., create Dataset resource) or to cancel the operation, by invoking an operation method:\n| Method | Description |\n| ----------- | ----------- |\n| result() | Waits for the operation to complete and returns a result object in JSON format. |\n| running() | Returns True/False on whether the operation is still running. |\n| done() | Returns True/False on whether the operation is completed. |\n| canceled() | Returns True/False on whether the operation was canceled. |\n| cancel() | Cancels the operation (this may take up to 30 seconds). |",
"TIMEOUT = 90\n\n\ndef create_dataset(name, schema, labels=None, timeout=TIMEOUT):\n start_time = time.time()\n try:\n dataset = aip.Dataset(\n display_name=name, metadata_schema_uri=schema, labels=labels\n )\n\n operation = clients[\"dataset\"].create_dataset(parent=PARENT, dataset=dataset)\n print(\"Long running operation:\", operation.operation.name)\n result = operation.result(timeout=TIMEOUT)\n print(\"time:\", time.time() - start_time)\n print(\"response\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" metadata_schema_uri:\", result.metadata_schema_uri)\n print(\" metadata:\", dict(result.metadata))\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n print(\" etag:\", result.etag)\n print(\" labels:\", dict(result.labels))\n return result\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nresult = create_dataset(\"salads-\" + TIMESTAMP, DATA_SCHEMA)",
"Now save the unique dataset identifier for the Dataset resource instance you created.",
"# The full unique ID for the dataset\ndataset_id = result.name\n# The short numeric ID for the dataset\ndataset_short_id = dataset_id.split(\"/\")[-1]\n\nprint(dataset_id)",
"Data preparation\nThe Vertex Dataset resource for images has some requirements for your data:\n\nImages must be stored in a Cloud Storage bucket.\nEach image file must be in an image format (PNG, JPEG, BMP, ...).\nThere must be an index file stored in your Cloud Storage bucket that contains the path and label for each image.\nThe index file must be either CSV or JSONL.\n\nCSV\nFor image object detection, the CSV index file has the requirements:\n\nNo heading.\nFirst column is the Cloud Storage path to the image.\nSecond column is the label.\nThird/Fourth columns are the upper left corner of bounding box. Coordinates are normalized, between 0 and 1.\nFifth/Sixth/Seventh columns are not used and should be 0.\nEighth/Ninth columns are the lower right corner of the bounding box.\n\nLocation of Cloud Storage training data.\nNow set the variable IMPORT_FILE to the location of the CSV index file in Cloud Storage.",
"IMPORT_FILE = \"gs://cloud-samples-data/vision/salads.csv\"",
"Quick peek at your data\nYou will use a version of the Salads dataset that is stored in a public Cloud Storage bucket, using a CSV index file.\nStart by doing a quick peek at the data. You count the number of examples by counting the number of rows in the CSV index file (wc -l) and then peek at the first few rows.",
"if \"IMPORT_FILES\" in globals():\n FILE = IMPORT_FILES[0]\nelse:\n FILE = IMPORT_FILE\n\ncount = ! gsutil cat $FILE | wc -l\nprint(\"Number of Examples\", int(count[0]))\n\nprint(\"First 10 rows\")\n! gsutil cat $FILE | head",
"Import data\nNow, import the data into your Vertex Dataset resource. Use this helper function import_data to import the data. The function does the following:\n\nUses the Dataset client.\nCalls the client method import_data, with the following parameters:\nname: The human readable name you give to the Dataset resource (e.g., salads).\n\nimport_configs: The import configuration.\n\n\nimport_configs: A Python list containing a dictionary, with the key/value entries:\n\ngcs_sources: A list of URIs to the paths of the one or more index files.\nimport_schema_uri: The schema identifying the labeling type.\n\nThe import_data() method returns a long running operation object. This will take a few minutes to complete. If you are in a live tutorial, this would be a good time to ask questions, or take a personal break.",
"def import_data(dataset, gcs_sources, schema):\n config = [{\"gcs_source\": {\"uris\": gcs_sources}, \"import_schema_uri\": schema}]\n print(\"dataset:\", dataset_id)\n start_time = time.time()\n try:\n operation = clients[\"dataset\"].import_data(\n name=dataset_id, import_configs=config\n )\n print(\"Long running operation:\", operation.operation.name)\n\n result = operation.result()\n print(\"result:\", result)\n print(\"time:\", int(time.time() - start_time), \"secs\")\n print(\"error:\", operation.exception())\n print(\"meta :\", operation.metadata)\n print(\n \"after: running:\",\n operation.running(),\n \"done:\",\n operation.done(),\n \"cancelled:\",\n operation.cancelled(),\n )\n\n return operation\n except Exception as e:\n print(\"exception:\", e)\n return None\n\n\nimport_data(dataset_id, [IMPORT_FILE], LABEL_SCHEMA)",
"Train the model\nNow train an AutoML image object detection model using your Vertex Dataset resource. To train the model, do the following steps:\n\nCreate an Vertex training pipeline for the Dataset resource.\nExecute the pipeline to start the training.\n\nCreate a training pipeline\nYou may ask, what do we use a pipeline for? You typically use pipelines when the job (such as training) has multiple steps, generally in sequential order: do step A, do step B, etc. By putting the steps into a pipeline, we gain the benefits of:\n\nBeing reusable for subsequent training jobs.\nCan be containerized and ran as a batch job.\nCan be distributed.\nAll the steps are associated with the same pipeline job for tracking progress.\n\nUse this helper function create_pipeline, which takes the following parameters:\n\npipeline_name: A human readable name for the pipeline job.\nmodel_name: A human readable name for the model.\ndataset: The Vertex fully qualified dataset identifier.\nschema: The dataset labeling (annotation) training schema.\ntask: A dictionary describing the requirements for the training job.\n\nThe helper function calls the Pipeline client service'smethod create_pipeline, which takes the following parameters:\n\nparent: The Vertex location root path for your Dataset, Model and Endpoint resources.\ntraining_pipeline: the full specification for the pipeline training job.\n\nLet's look now deeper into the minimal requirements for constructing a training_pipeline specification:\n\ndisplay_name: A human readable name for the pipeline job.\ntraining_task_definition: The dataset labeling (annotation) training schema.\ntraining_task_inputs: A dictionary describing the requirements for the training job.\nmodel_to_upload: A human readable name for the model.\ninput_data_config: The dataset specification.\ndataset_id: The Vertex dataset identifier only (non-fully qualified) -- this is the last part of the fully-qualified identifier.\nfraction_split: If specified, the percentages of the dataset to use for training, test and validation. Otherwise, the percentages are automatically selected by AutoML.",
"def create_pipeline(pipeline_name, model_name, dataset, schema, task):\n\n dataset_id = dataset.split(\"/\")[-1]\n\n input_config = {\n \"dataset_id\": dataset_id,\n \"fraction_split\": {\n \"training_fraction\": 0.8,\n \"validation_fraction\": 0.1,\n \"test_fraction\": 0.1,\n },\n }\n\n training_pipeline = {\n \"display_name\": pipeline_name,\n \"training_task_definition\": schema,\n \"training_task_inputs\": task,\n \"input_data_config\": input_config,\n \"model_to_upload\": {\"display_name\": model_name},\n }\n\n try:\n pipeline = clients[\"pipeline\"].create_training_pipeline(\n parent=PARENT, training_pipeline=training_pipeline\n )\n print(pipeline)\n except Exception as e:\n print(\"exception:\", e)\n return None\n return pipeline",
"Construct the task requirements\nNext, construct the task requirements. Unlike other parameters which take a Python (JSON-like) dictionary, the task field takes a Google protobuf Struct, which is very similar to a Python dictionary. Use the json_format.ParseDict method for the conversion.\nThe minimal fields you need to specify are:\n\nbudget_milli_node_hours: The maximum time to budget (billed) for training the model, where 1000 = 1 hour. For image object detection, the budget must be a minimum of 20 hours.\nmodel_type: The type of deployed model:\nCLOUD_HIGH_ACCURACY_1: For deploying to Google Cloud and optimizing for accuracy.\nCLOUD_LOW_LATENCY_1: For deploying to Google Cloud and optimizing for latency (response time),\nMOBILE_TF_HIGH_ACCURACY_1: For deploying to the edge and optimizing for accuracy.\nMOBILE_TF_LOW_LATENCY_1: For deploying to the edge and optimizing for latency (response time).\nMOBILE_TF_VERSATILE_1: For deploying to the edge and optimizing for a trade off between latency and accuracy.\ndisable_early_stopping: Whether True/False to let AutoML use its judgement to stop training early or train for the entire budget.\n\nFinally, create the pipeline by calling the helper function create_pipeline, which returns an instance of a training pipeline object.",
"PIPE_NAME = \"salads_pipe-\" + TIMESTAMP\nMODEL_NAME = \"salads_model-\" + TIMESTAMP\n\ntask = json_format.ParseDict(\n {\n \"budget_milli_node_hours\": 20000,\n \"model_type\": \"CLOUD_HIGH_ACCURACY_1\",\n \"disable_early_stopping\": False,\n },\n Value(),\n)\n\nresponse = create_pipeline(PIPE_NAME, MODEL_NAME, dataset_id, TRAINING_SCHEMA, task)",
"Now save the unique identifier of the training pipeline you created.",
"# The full unique ID for the pipeline\npipeline_id = response.name\n# The short numeric ID for the pipeline\npipeline_short_id = pipeline_id.split(\"/\")[-1]\n\nprint(pipeline_id)",
"Get information on a training pipeline\nNow get pipeline information for just this training pipeline instance. The helper function gets the job information for just this job by calling the the job client service's get_training_pipeline method, with the following parameter:\n\nname: The Vertex fully qualified pipeline identifier.\n\nWhen the model is done training, the pipeline state will be PIPELINE_STATE_SUCCEEDED.",
"def get_training_pipeline(name, silent=False):\n response = clients[\"pipeline\"].get_training_pipeline(name=name)\n if silent:\n return response\n\n print(\"pipeline\")\n print(\" name:\", response.name)\n print(\" display_name:\", response.display_name)\n print(\" state:\", response.state)\n print(\" training_task_definition:\", response.training_task_definition)\n print(\" training_task_inputs:\", dict(response.training_task_inputs))\n print(\" create_time:\", response.create_time)\n print(\" start_time:\", response.start_time)\n print(\" end_time:\", response.end_time)\n print(\" update_time:\", response.update_time)\n print(\" labels:\", dict(response.labels))\n return response\n\n\nresponse = get_training_pipeline(pipeline_id)",
"Deployment\nTraining the above model may take upwards of 60 minutes time.\nOnce your model is done training, you can calculate the actual time it took to train the model by subtracting end_time from start_time. For your model, you will need to know the fully qualified Vertex Model resource identifier, which the pipeline service assigned to it. You can get this from the returned pipeline instance as the field model_to_deploy.name.",
"while True:\n response = get_training_pipeline(pipeline_id, True)\n if response.state != aip.PipelineState.PIPELINE_STATE_SUCCEEDED:\n print(\"Training job has not completed:\", response.state)\n model_to_deploy_id = None\n if response.state == aip.PipelineState.PIPELINE_STATE_FAILED:\n raise Exception(\"Training Job Failed\")\n else:\n model_to_deploy = response.model_to_upload\n model_to_deploy_id = model_to_deploy.name\n print(\"Training Time:\", response.end_time - response.start_time)\n break\n time.sleep(60)\n\nprint(\"model to deploy:\", model_to_deploy_id)",
"Model information\nNow that your model is trained, you can get some information on your model.\nEvaluate the Model resource\nNow find out how good the model service believes your model is. As part of training, some portion of the dataset was set aside as the test (holdout) data, which is used by the pipeline service to evaluate the model.\nList evaluations for all slices\nUse this helper function list_model_evaluations, which takes the following parameter:\n\nname: The Vertex fully qualified model identifier for the Model resource.\n\nThis helper function uses the model client service's list_model_evaluations method, which takes the same parameter. The response object from the call is a list, where each element is an evaluation metric.\nFor each evaluation -- you probably only have one, we then print all the key names for each metric in the evaluation, and for a small set (evaluatedBoundingBoxCount and boundingBoxMeanAveragePrecision) you will print the result.",
"def list_model_evaluations(name):\n response = clients[\"model\"].list_model_evaluations(parent=name)\n for evaluation in response:\n print(\"model_evaluation\")\n print(\" name:\", evaluation.name)\n print(\" metrics_schema_uri:\", evaluation.metrics_schema_uri)\n metrics = json_format.MessageToDict(evaluation._pb.metrics)\n for metric in metrics.keys():\n print(metric)\n print(\"evaluatedBoundingBoxCount\", metrics[\"evaluatedBoundingBoxCount\"])\n print(\n \"boundingBoxMeanAveragePrecision\",\n metrics[\"boundingBoxMeanAveragePrecision\"],\n )\n\n return evaluation.name\n\n\nlast_evaluation = list_model_evaluations(model_to_deploy_id)",
"Deploy the Model resource\nNow deploy the trained Vertex Model resource you created with AutoML. This requires two steps:\n\n\nCreate an Endpoint resource for deploying the Model resource to.\n\n\nDeploy the Model resource to the Endpoint resource.\n\n\nCreate an Endpoint resource\nUse this helper function create_endpoint to create an endpoint to deploy the model to for serving predictions, with the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nThe helper function uses the endpoint client service's create_endpoint method, which takes the following parameter:\n\ndisplay_name: A human readable name for the Endpoint resource.\n\nCreating an Endpoint resource returns a long running operation, since it may take a few moments to provision the Endpoint resource for serving. You call response.result(), which is a synchronous call and will return when the Endpoint resource is ready. The helper function returns the Vertex fully qualified identifier for the Endpoint resource: response.name.",
"ENDPOINT_NAME = \"salads_endpoint-\" + TIMESTAMP\n\n\ndef create_endpoint(display_name):\n endpoint = {\"display_name\": display_name}\n response = clients[\"endpoint\"].create_endpoint(parent=PARENT, endpoint=endpoint)\n print(\"Long running operation:\", response.operation.name)\n\n result = response.result(timeout=300)\n print(\"result\")\n print(\" name:\", result.name)\n print(\" display_name:\", result.display_name)\n print(\" description:\", result.description)\n print(\" labels:\", result.labels)\n print(\" create_time:\", result.create_time)\n print(\" update_time:\", result.update_time)\n return result\n\n\nresult = create_endpoint(ENDPOINT_NAME)",
"Now get the unique identifier for the Endpoint resource you created.",
"# The full unique ID for the endpoint\nendpoint_id = result.name\n# The short numeric ID for the endpoint\nendpoint_short_id = endpoint_id.split(\"/\")[-1]\n\nprint(endpoint_id)",
"Compute instance scaling\nYou have several choices on scaling the compute instances for handling your online prediction requests:\n\nSingle Instance: The online prediction requests are processed on a single compute instance.\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to one.\n\n\nManual Scaling: The online prediction requests are split across a fixed number of compute instances that you manually specified.\n\n\nSet the minimum (MIN_NODES) and maximum (MAX_NODES) number of compute instances to the same number of nodes. When a model is first deployed to the instance, the fixed number of compute instances are provisioned and online prediction requests are evenly distributed across them.\n\n\nAuto Scaling: The online prediction requests are split across a scaleable number of compute instances.\n\nSet the minimum (MIN_NODES) number of compute instances to provision when a model is first deployed and to de-provision, and set the maximum (`MAX_NODES) number of compute instances to provision, depending on load conditions.\n\nThe minimum number of compute instances corresponds to the field min_replica_count and the maximum number of compute instances corresponds to the field max_replica_count, in your subsequent deployment request.",
"MIN_NODES = 1\nMAX_NODES = 1",
"Deploy Model resource to the Endpoint resource\nUse this helper function deploy_model to deploy the Model resource to the Endpoint resource you created for serving predictions, with the following parameters:\n\nmodel: The Vertex fully qualified model identifier of the model to upload (deploy) from the training pipeline.\ndeploy_model_display_name: A human readable name for the deployed model.\nendpoint: The Vertex fully qualified endpoint identifier to deploy the model to.\n\nThe helper function calls the Endpoint client service's method deploy_model, which takes the following parameters:\n\nendpoint: The Vertex fully qualified Endpoint resource identifier to deploy the Model resource to.\ndeployed_model: The requirements specification for deploying the model.\ntraffic_split: Percent of traffic at the endpoint that goes to this model, which is specified as a dictionary of one or more key/value pairs.\nIf only one model, then specify as { \"0\": 100 }, where \"0\" refers to this model being uploaded and 100 means 100% of the traffic.\nIf there are existing models on the endpoint, for which the traffic will be split, then use model_id to specify as { \"0\": percent, model_id: percent, ... }, where model_id is the model id of an existing model to the deployed endpoint. The percents must add up to 100.\n\nLet's now dive deeper into the deployed_model parameter. This parameter is specified as a Python dictionary with the minimum required fields:\n\nmodel: The Vertex fully qualified model identifier of the (upload) model to deploy.\ndisplay_name: A human readable name for the deployed model.\ndisable_container_logging: This disables logging of container events, such as execution failures (default is container logging is enabled). Container logging is typically enabled when debugging the deployment and then disabled when deployed for production.\nautomatic_resources: This refers to how many redundant compute instances (replicas). For this example, we set it to one (no replication).\n\nTraffic Split\nLet's now dive deeper into the traffic_split parameter. This parameter is specified as a Python dictionary. This might at first be a tad bit confusing. Let me explain, you can deploy more than one instance of your model to an endpoint, and then set how much (percent) goes to each instance.\nWhy would you do that? Perhaps you already have a previous version deployed in production -- let's call that v1. You got better model evaluation on v2, but you don't know for certain that it is really better until you deploy to production. So in the case of traffic split, you might want to deploy v2 to the same endpoint as v1, but it only get's say 10% of the traffic. That way, you can monitor how well it does without disrupting the majority of users -- until you make a final decision.\nResponse\nThe method returns a long running operation response. We will wait sychronously for the operation to complete by calling the response.result(), which will block until the model is deployed. If this is the first time a model is deployed to the endpoint, it may take a few additional minutes to complete provisioning of resources.",
"DEPLOYED_NAME = \"salads_deployed-\" + TIMESTAMP\n\n\ndef deploy_model(\n model, deployed_model_display_name, endpoint, traffic_split={\"0\": 100}\n):\n\n deployed_model = {\n \"model\": model,\n \"display_name\": deployed_model_display_name,\n \"automatic_resources\": {\n \"min_replica_count\": MIN_NODES,\n \"max_replica_count\": MAX_NODES,\n },\n }\n\n response = clients[\"endpoint\"].deploy_model(\n endpoint=endpoint, deployed_model=deployed_model, traffic_split=traffic_split\n )\n\n print(\"Long running operation:\", response.operation.name)\n result = response.result()\n print(\"result\")\n deployed_model = result.deployed_model\n print(\" deployed_model\")\n print(\" id:\", deployed_model.id)\n print(\" model:\", deployed_model.model)\n print(\" display_name:\", deployed_model.display_name)\n print(\" create_time:\", deployed_model.create_time)\n\n return deployed_model.id\n\n\ndeployed_model_id = deploy_model(model_to_deploy_id, DEPLOYED_NAME, endpoint_id)",
"Make a online prediction request\nNow do a online prediction to your deployed model.\nGet test item\nYou will use an arbitrary example out of the dataset as a test item. Don't be concerned that the example was likely used in training the model -- we just want to demonstrate how to make a prediction.",
"test_items = !gsutil cat $IMPORT_FILE | head -n1\ncols = str(test_items[0]).split(\",\")\nif len(cols) == 11:\n test_item = str(cols[1])\n test_label = str(cols[2])\nelse:\n test_item = str(cols[0])\n test_label = str(cols[1])\n\nprint(test_item, test_label)",
"Make a prediction\nNow you have a test item. Use this helper function predict_item, which takes the following parameters:\n\nfilename: The Cloud Storage path to the test item.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.\nparameters_dict: Additional filtering parameters for serving prediction results.\n\nThis function calls the prediction client service's predict method with the following parameters:\n\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource was deployed.\ninstances: A list of instances (encoded images) to predict.\nparameters: Additional parameters for serving.\nconfidence_threshold: The threshold for returning predictions. Must be between 0 and 1.\nmax_predictions: The maximum number of predictions per object to return, sorted by confidence.\n\nYou might ask, how does confidence_threshold affect the model accuracy? The threshold won't change the accuracy. What it changes is recall and precision.\n- Precision: The higher the precision the more likely what is predicted is the correct prediction, but return fewer predictions. Increasing the confidence threshold increases precision.\n- Recall: The higher the recall the more likely a correct prediction is returned in the result, but return more prediction with incorrect prediction. Decreasing the confidence threshold increases recall.\n\nIn this example, you will predict for precision. You set the confidence threshold to 0.5 and the maximum number of predictions for an object to two. Since, all the confidence values across the classes must add up to one, there are only two possible outcomes:\n1. There is a tie, both 0.5, and returns two predictions.\n2. One value is above 0.5 and all the rest are below 0.5, and returns one prediction.\n\nRequest\nSince in this example your test item is in a Cloud Storage bucket, you will open and read the contents of the image using tf.io.gfile.Gfile(). To pass the test data to the prediction service, we will encode the bytes into base 64 -- This makes binary data safe from modification while it is transferred over the Internet.\nThe format of each instance is:\n{ 'content': { 'b64': [base64_encoded_bytes] } }\n\nSince the predict() method can take multiple items (instances), you send our single test item as a list of one test item. As a final step, you package the instances list into Google's protobuf format -- which is what we pass to the predict() method.\nResponse\nThe response object returns a list, where each element in the list corresponds to the corresponding image in the request. You will see in the output for each prediction -- in our case there is just one:\n\nconfidences: Confidence level in the prediction.\ndisplayNames: The predicted label.\nbboxes: The bounding box for the label.",
"import base64\n\nimport tensorflow as tf\n\n\ndef predict_item(filename, endpoint, parameters_dict):\n\n parameters = json_format.ParseDict(parameters_dict, Value())\n with tf.io.gfile.GFile(filename, \"rb\") as f:\n content = f.read()\n # The format of each instance should conform to the deployed model's prediction input schema.\n instances_list = [{\"content\": base64.b64encode(content).decode(\"utf-8\")}]\n instances = [json_format.ParseDict(s, Value()) for s in instances_list]\n\n response = clients[\"prediction\"].predict(\n endpoint=endpoint, instances=instances, parameters=parameters\n )\n print(\"response\")\n print(\" deployed_model_id:\", response.deployed_model_id)\n predictions = response.predictions\n print(\"predictions\")\n for prediction in predictions:\n print(\" prediction:\", dict(prediction))\n\n\npredict_item(test_item, endpoint_id, {\"confidenceThreshold\": 0.5, \"maxPredictions\": 2})",
"Undeploy the Model resource\nNow undeploy your Model resource from the serving Endpoint resoure. Use this helper function undeploy_model, which takes the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed to.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model is deployed to.\n\nThis function calls the endpoint client service's method undeploy_model, with the following parameters:\n\ndeployed_model_id: The model deployment identifier returned by the endpoint service when the Model resource was deployed.\nendpoint: The Vertex fully qualified identifier for the Endpoint resource where the Model resource is deployed.\ntraffic_split: How to split traffic among the remaining deployed models on the Endpoint resource.\n\nSince this is the only deployed model on the Endpoint resource, you simply can leave traffic_split empty by setting it to {}.",
"def undeploy_model(deployed_model_id, endpoint):\n response = clients[\"endpoint\"].undeploy_model(\n endpoint=endpoint, deployed_model_id=deployed_model_id, traffic_split={}\n )\n print(response)\n\n\nundeploy_model(deployed_model_id, endpoint_id)",
"Cleaning up\nTo clean up all GCP resources used in this project, you can delete the GCP\nproject you used for the tutorial.\nOtherwise, you can delete the individual resources you created in this tutorial:\n\nDataset\nPipeline\nModel\nEndpoint\nBatch Job\nCustom Job\nHyperparameter Tuning Job\nCloud Storage Bucket",
"delete_dataset = True\ndelete_pipeline = True\ndelete_model = True\ndelete_endpoint = True\ndelete_batchjob = True\ndelete_customjob = True\ndelete_hptjob = True\ndelete_bucket = True\n\n# Delete the dataset using the Vertex fully qualified identifier for the dataset\ntry:\n if delete_dataset and \"dataset_id\" in globals():\n clients[\"dataset\"].delete_dataset(name=dataset_id)\nexcept Exception as e:\n print(e)\n\n# Delete the training pipeline using the Vertex fully qualified identifier for the pipeline\ntry:\n if delete_pipeline and \"pipeline_id\" in globals():\n clients[\"pipeline\"].delete_training_pipeline(name=pipeline_id)\nexcept Exception as e:\n print(e)\n\n# Delete the model using the Vertex fully qualified identifier for the model\ntry:\n if delete_model and \"model_to_deploy_id\" in globals():\n clients[\"model\"].delete_model(name=model_to_deploy_id)\nexcept Exception as e:\n print(e)\n\n# Delete the endpoint using the Vertex fully qualified identifier for the endpoint\ntry:\n if delete_endpoint and \"endpoint_id\" in globals():\n clients[\"endpoint\"].delete_endpoint(name=endpoint_id)\nexcept Exception as e:\n print(e)\n\n# Delete the batch job using the Vertex fully qualified identifier for the batch job\ntry:\n if delete_batchjob and \"batch_job_id\" in globals():\n clients[\"job\"].delete_batch_prediction_job(name=batch_job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the custom job using the Vertex fully qualified identifier for the custom job\ntry:\n if delete_customjob and \"job_id\" in globals():\n clients[\"job\"].delete_custom_job(name=job_id)\nexcept Exception as e:\n print(e)\n\n# Delete the hyperparameter tuning job using the Vertex fully qualified identifier for the hyperparameter tuning job\ntry:\n if delete_hptjob and \"hpt_job_id\" in globals():\n clients[\"job\"].delete_hyperparameter_tuning_job(name=hpt_job_id)\nexcept Exception as e:\n print(e)\n\nif delete_bucket and \"BUCKET_NAME\" in globals():\n ! gsutil rm -r $BUCKET_NAME"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/snu/cmip6/models/sandbox-3/seaice.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Seaice\nMIP Era: CMIP6\nInstitute: SNU\nSource ID: SANDBOX-3\nTopic: Seaice\nSub-Topics: Dynamics, Thermodynamics, Radiative Processes. \nProperties: 80 (63 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:54:38\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'snu', 'sandbox-3', 'seaice')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties --> Model\n2. Key Properties --> Variables\n3. Key Properties --> Seawater Properties\n4. Key Properties --> Resolution\n5. Key Properties --> Tuning Applied\n6. Key Properties --> Key Parameter Values\n7. Key Properties --> Assumptions\n8. Key Properties --> Conservation\n9. Grid --> Discretisation --> Horizontal\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Seaice Categories\n12. Grid --> Snow On Seaice\n13. Dynamics\n14. Thermodynamics --> Energy\n15. Thermodynamics --> Mass\n16. Thermodynamics --> Salt\n17. Thermodynamics --> Salt --> Mass Transport\n18. Thermodynamics --> Salt --> Thermodynamics\n19. Thermodynamics --> Ice Thickness Distribution\n20. Thermodynamics --> Ice Floe Size Distribution\n21. Thermodynamics --> Melt Ponds\n22. Thermodynamics --> Snow Processes\n23. Radiative Processes \n1. Key Properties --> Model\nName of seaice model used.\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of sea ice model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of sea ice model code (e.g. CICE 4.2, LIM 2.1, etc.)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.model.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"2. Key Properties --> Variables\nList of prognostic variable in the sea ice model.\n2.1. Prognostic\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the sea ice component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.variables.prognostic') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Sea ice temperature\" \n# \"Sea ice concentration\" \n# \"Sea ice thickness\" \n# \"Sea ice volume per grid cell area\" \n# \"Sea ice u-velocity\" \n# \"Sea ice v-velocity\" \n# \"Sea ice enthalpy\" \n# \"Internal ice stress\" \n# \"Salinity\" \n# \"Snow temperature\" \n# \"Snow depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3. Key Properties --> Seawater Properties\nProperties of seawater relevant to sea ice\n3.1. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS-10\" \n# \"Constant\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Ocean Freezing Point Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant seawater freezing point, specify this value.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.seawater_properties.ocean_freezing_point_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"4. Key Properties --> Resolution\nResolution of the sea ice grid\n4.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid e.g. N512L180, T512L70, ORCA025 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.3. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"5. Key Properties --> Tuning Applied\nTuning applied to sea ice model component\n5.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. Document the relative weight given to climate performance metrics versus process oriented metrics, and on the possible conflicts with parameterization level tuning. In particular describe any struggle with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Target\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat was the aim of tuning, e.g. correct sea ice minima, correct seasonal cycle.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.target') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Simulations\nIs Required: TRUE Type: STRING Cardinality: 1.1\n*Which simulations had tuning applied, e.g. all, not historical, only pi-control? *",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.simulations') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.4. Metrics Used\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any observed metrics used in tuning model/parameters",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.metrics_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.5. Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nWhich variables were changed during the tuning process?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.tuning_applied.variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Key Parameter Values\nValues of key parameters\n6.1. Typical Parameters\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nWhat values were specificed for the following parameters if used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.typical_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ice strength (P*) in units of N m{-2}\" \n# \"Snow conductivity (ks) in units of W m{-1} K{-1} \" \n# \"Minimum thickness of ice created in leads (h0) in units of m\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"6.2. Additional Parameters\nIs Required: FALSE Type: STRING Cardinality: 0.N\nIf you have any additional paramterised values that you have used (e.g. minimum open water fraction or bare ice albedo), please provide them here as a comma separated list",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.key_parameter_values.additional_parameters') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7. Key Properties --> Assumptions\nAssumptions made in the sea ice model\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.N\nGeneral overview description of any key assumptions made in this model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.description') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. On Diagnostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.N\nNote any assumptions that specifically affect the CMIP6 diagnostic sea ice variables.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.on_diagnostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Missing Processes\nIs Required: TRUE Type: STRING Cardinality: 1.N\nList any key processes missing in this model configuration? Provide full details where this affects the CMIP6 diagnostic sea ice variables?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.assumptions.missing_processes') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the sea ice component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nProvide a general description of conservation methodology.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Properties\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in sea ice by the numerical schemes.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.properties') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Mass\" \n# \"Salt\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nFor each conserved property, specify the output variables which close the related budgets. as a comma separated list. For example: Conserved property, variable1, variable2, variable3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Was Flux Correction Used\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes conservation involved flux correction?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"8.5. Corrected Conserved Prognostic Variables\nIs Required: TRUE Type: STRING Cardinality: 1.1\nList any variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"9. Grid --> Discretisation --> Horizontal\nSea ice discretisation in the horizontal\n9.1. Grid\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nGrid on which sea ice is horizontal discretised?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ocean grid\" \n# \"Atmosphere Grid\" \n# \"Own Grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.2. Grid Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the type of sea ice grid?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.grid_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Structured grid\" \n# \"Unstructured grid\" \n# \"Adaptive grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the advection scheme?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite differences\" \n# \"Finite elements\" \n# \"Finite volumes\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"9.4. Thermodynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model thermodynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.thermodynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.5. Dynamics Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nWhat is the time step in the sea ice model dynamic component in seconds.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.dynamics_time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"9.6. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional horizontal discretisation details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.horizontal.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nSea ice vertical properties\n10.1. Layering\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat type of sea ice vertical layers are implemented for purposes of thermodynamic calculations?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.layering') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Zero-layer\" \n# \"Two-layers\" \n# \"Multi-layers\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Number Of Layers\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using multi-layers specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.number_of_layers') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"10.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional vertical grid details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.discretisation.vertical.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11. Grid --> Seaice Categories\nWhat method is used to represent sea ice categories ?\n11.1. Has Mulitple Categories\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nSet to true if the sea ice model has multiple sea ice categories.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.has_mulitple_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11.2. Number Of Categories\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nIf using sea ice categories specify how many.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.number_of_categories') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"11.3. Category Limits\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIf using sea ice categories specify each of the category limits.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.category_limits') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.4. Ice Thickness Distribution Scheme\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the sea ice thickness distribution scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.ice_thickness_distribution_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"11.5. Other\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf the sea ice model does not use sea ice categories specify any additional details. For example models that paramterise the ice thickness distribution ITD (i.e there is no explicit ITD) but there is assumed distribution and fluxes are computed accordingly.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.seaice_categories.other') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12. Grid --> Snow On Seaice\nSnow on sea ice details\n12.1. Has Snow On Ice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs snow on ice represented in this model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.has_snow_on_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"12.2. Number Of Snow Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels of snow on ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.number_of_snow_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"12.3. Snow Fraction\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how the snow fraction on sea ice is determined",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.snow_fraction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.4. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSpecify any additional details related to snow on ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.grid.snow_on_seaice.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"13. Dynamics\nSea Ice Dynamics\n13.1. Horizontal Transport\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of horizontal advection of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.horizontal_transport') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Transport In Thickness Space\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice transport in thickness space (i.e. in thickness categories)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.transport_in_thickness_space') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Incremental Re-mapping\" \n# \"Prather\" \n# \"Eulerian\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.3. Ice Strength Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhich method of sea ice strength formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.ice_strength_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Hibler 1979\" \n# \"Rothrock 1975\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.4. Redistribution\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhich processes can redistribute sea ice (including thickness)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.redistribution') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Rafting\" \n# \"Ridging\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.5. Rheology\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nRheology, what is the ice deformation formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.dynamics.rheology') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Free-drift\" \n# \"Mohr-Coloumb\" \n# \"Visco-plastic\" \n# \"Elastic-visco-plastic\" \n# \"Elastic-anisotropic-plastic\" \n# \"Granular\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14. Thermodynamics --> Energy\nProcesses related to energy in sea ice thermodynamics\n14.1. Enthalpy Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the energy formulation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.enthalpy_formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice latent heat (Semtner 0-layer)\" \n# \"Pure ice latent and sensible heat\" \n# \"Pure ice latent and sensible heat + brine heat reservoir (Semtner 3-layer)\" \n# \"Pure ice latent and sensible heat + explicit brine inclusions (Bitz and Lipscomb)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Thermal Conductivity\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat type of thermal conductivity is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.thermal_conductivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pure ice\" \n# \"Saline ice\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of heat diffusion?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Conduction fluxes\" \n# \"Conduction and radiation heat fluxes\" \n# \"Conduction, radiation and latent heat transport\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.4. Basal Heat Flux\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod by which basal ocean heat flux is handled?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.basal_heat_flux') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Heat Reservoir\" \n# \"Thermal Fixed Salinity\" \n# \"Thermal Varying Salinity\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.5. Fixed Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf you have selected {Thermal properties depend on S-T (with fixed salinity)}, supply fixed salinity value for each sea ice layer.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.fixed_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14.6. Heat Content Of Precipitation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which the heat content of precipitation is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.heat_content_of_precipitation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"14.7. Precipitation Effects On Salinity\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf precipitation (freshwater) that falls on sea ice affects the ocean surface salinity please provide further details.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.energy.precipitation_effects_on_salinity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15. Thermodynamics --> Mass\nProcesses related to mass in sea ice thermodynamics\n15.1. New Ice Formation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method by which new sea ice is formed in open water.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.new_ice_formation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.2. Ice Vertical Growth And Melt\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs the vertical growth and melt of sea ice.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_vertical_growth_and_melt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.3. Ice Lateral Melting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the method of sea ice lateral melting?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_lateral_melting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Floe-size dependent (Bitz et al 2001)\" \n# \"Virtual thin ice melting (for single-category)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.4. Ice Surface Sublimation\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method that governs sea ice surface sublimation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.ice_surface_sublimation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"15.5. Frazil Ice\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe the method of frazil ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.mass.frazil_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"16. Thermodynamics --> Salt\nProcesses related to salt in sea ice thermodynamics.\n16.1. Has Multiple Sea Ice Salinities\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes the sea ice model use two different salinities: one for thermodynamic calculations; and one for the salt budget?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.has_multiple_sea_ice_salinities') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"16.2. Sea Ice Salinity Thermal Impacts\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDoes sea ice salinity impact the thermal properties of sea ice?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.sea_ice_salinity_thermal_impacts') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"17. Thermodynamics --> Salt --> Mass Transport\nMass transport of salt\n17.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the mass transport of salt calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"17.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"17.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.mass_transport.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Thermodynamics --> Salt --> Thermodynamics\nSalt thermodynamics\n18.1. Salinity Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is salinity determined in the thermodynamic calculation?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.salinity_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Prescribed salinity profile\" \n# \"Prognostic salinity profile\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"18.2. Constant Salinity Value\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf using a constant salinity value specify this value in PSU?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.constant_salinity_value') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"18.3. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the salinity profile used.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.salt.thermodynamics.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19. Thermodynamics --> Ice Thickness Distribution\nIce thickness distribution details.\n19.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice thickness distribution represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_thickness_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Virtual (enhancement of thermal conductivity, thin ice melting)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20. Thermodynamics --> Ice Floe Size Distribution\nIce floe-size distribution details.\n20.1. Representation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHow is the sea ice floe-size represented?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.representation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Explicit\" \n# \"Parameterised\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"20.2. Additional Details\nIs Required: FALSE Type: STRING Cardinality: 0.1\nPlease provide further details on any parameterisation of floe-size.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.ice_floe_size_distribution.additional_details') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21. Thermodynamics --> Melt Ponds\nCharacteristics of melt ponds.\n21.1. Are Included\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nAre melt ponds included in the sea ice model?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.are_included') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21.2. Formulation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat method of melt pond formulation is used?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.formulation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flocco and Feltham (2010)\" \n# \"Level-ice melt ponds\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"21.3. Impacts\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nWhat do melt ponds have an impact on?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.melt_ponds.impacts') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Albedo\" \n# \"Freshwater\" \n# \"Heat\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22. Thermodynamics --> Snow Processes\nThermodynamic processes in snow on sea ice\n22.1. Has Snow Aging\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has a snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_aging') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.2. Snow Aging Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow aging scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_aging_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.3. Has Snow Ice Formation\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.N\nSet to True if the sea ice model has snow ice formation.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.has_snow_ice_formation') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"22.4. Snow Ice Formation Scheme\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe the snow ice formation scheme.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.snow_ice_formation_scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.5. Redistribution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nWhat is the impact of ridging on snow cover?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.redistribution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"22.6. Heat Diffusion\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nWhat is the heat diffusion through snow methodology in sea ice thermodynamics?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.thermodynamics.snow_processes.heat_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Single-layered heat diffusion\" \n# \"Multi-layered heat diffusion\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Radiative Processes\nSea Ice Radiative Processes\n23.1. Surface Albedo\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nMethod used to handle surface albedo.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.surface_albedo') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Parameterized\" \n# \"Multi-band albedo\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Ice Radiation Transmission\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nMethod by which solar radiation through sea ice is handled.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.seaice.radiative_processes.ice_radiation_transmission') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Delta-Eddington\" \n# \"Exponential attenuation\" \n# \"Ice radiation transmission per category\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Bismarrck/deep-learning
|
language-translation/dlnd_language_translation.ipynb
|
mit
|
[
"Language Translation\nIn this project, you’re going to take a peek into the realm of neural network machine translation. You’ll be training a sequence to sequence model on a dataset of English and French sentences that can translate new sentences from English to French.\nGet the Data\nSince translating the whole language of English to French will take lots of time to train, we have provided you with a small portion of the English corpus.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport problem_unittests as tests\n\nsource_path = 'data/small_vocab_en'\ntarget_path = 'data/small_vocab_fr'\nsource_text = helper.load_data(source_path)\ntarget_text = helper.load_data(target_path)",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in source_text.split()})))\n\nsentences = source_text.split('\\n')\nword_counts = [len(sentence.split()) for sentence in sentences]\nprint('Number of sentences: {}'.format(len(sentences)))\nprint('Average number of words in a sentence: {}'.format(np.average(word_counts)))\n\nprint()\nprint('English sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(source_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))\nprint()\nprint('French sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(target_text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Function\nText to Word Ids\nAs you did with other RNNs, you must turn the text into a number so the computer can understand it. In the function text_to_ids(), you'll turn source_text and target_text from words to ids. However, you need to add the <EOS> word id at the end of each sentence from target_text. This will help the neural network predict when the sentence should end.\nYou can get the <EOS> word id by doing:\npython\ntarget_vocab_to_int['<EOS>']\nYou can get other word ids using source_vocab_to_int and target_vocab_to_int.",
"def text_to_ids(source_text, target_text, source_vocab_to_int, target_vocab_to_int):\n \"\"\"\n Convert source and target text to proper word ids\n :param source_text: String that contains all the source text.\n :param target_text: String that contains all the target text.\n :param source_vocab_to_int: Dictionary to go from the source words to an id\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: A tuple of lists (source_id_text, target_id_text)\n \"\"\"\n # TODO: Implement Function\n source_id_text = []\n for words in source_text.split('\\n'):\n word_ids = []\n for word in words.split():\n word_ids.append(source_vocab_to_int[word])\n source_id_text.append(word_ids)\n target_id_text = []\n for words in target_text.split('\\n'):\n word_ids = []\n for word in words.split():\n word_ids.append(target_vocab_to_int[word])\n word_ids.append(target_vocab_to_int['<EOS>'])\n target_id_text.append(word_ids)\n return (source_id_text, target_id_text)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_text_to_ids(text_to_ids)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nhelper.preprocess_and_save_data(source_path, target_path, text_to_ids)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\nimport helper\n\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()",
"Check the Version of TensorFlow and Access to GPU\nThis will check to make sure you have the correct version of TensorFlow and access to a GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) in [LooseVersion('1.0.0'), LooseVersion('1.0.1')], 'This project requires TensorFlow version 1.0 You are using {}'.format(tf.__version__)\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Build the Neural Network\nYou'll build the components necessary to build a Sequence-to-Sequence model by implementing the following functions below:\n- model_inputs\n- process_decoding_input\n- encoding_layer\n- decoding_layer_train\n- decoding_layer_infer\n- decoding_layer\n- seq2seq_model\nInput\nImplement the model_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n\nInput text placeholder named \"input\" using the TF Placeholder name parameter with rank 2.\nTargets placeholder with rank 2.\nLearning rate placeholder with rank 0.\nKeep probability placeholder named \"keep_prob\" using the TF Placeholder name parameter with rank 0.\n\nReturn the placeholders in the following the tuple (Input, Targets, Learing Rate, Keep Probability)",
"def model_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate, keep probability)\n \"\"\"\n # TODO: Implement Function\n inputs = tf.placeholder(tf.int32, shape=[None, None], name=\"input\")\n targets = tf.placeholder(tf.int32, shape=[None, None], name=\"targets\")\n lr = tf.placeholder(tf.float32, name=\"learning_rate\")\n keep_prob = tf.placeholder(tf.float32, name=\"keep_prob\")\n return inputs, targets, lr, keep_prob\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_model_inputs(model_inputs)",
"Process Decoding Input\nImplement process_decoding_input using TensorFlow to remove the last word id from each batch in target_data and concat the GO ID to the beginning of each batch.",
"def process_decoding_input(target_data, target_vocab_to_int, batch_size):\n \"\"\"\n Preprocess target data for decoding\n :param target_data: Target Placeholder\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param batch_size: Batch Size\n :return: Preprocessed target data\n \"\"\"\n # TODO: Implement Function\n ending = tf.strided_slice(target_data, [0, 0], [batch_size, -1], [1, 1])\n starting = tf.fill([batch_size, 1], target_vocab_to_int['<GO>'])\n axis = tf.constant(1, name=\"dec_axis\")\n dec_input = tf.concat([starting, ending], axis=1)\n return dec_input\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_process_decoding_input(process_decoding_input)",
"Encoding\nImplement encoding_layer() to create a Encoder RNN layer using tf.nn.dynamic_rnn().",
"def encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob):\n \"\"\"\n Create encoding layer\n :param rnn_inputs: Inputs for the RNN\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param keep_prob: Dropout keep probability\n :return: RNN state\n \"\"\"\n cell = tf.contrib.rnn.BasicLSTMCell(rnn_size)\n dropout_wrapper = tf.contrib.rnn.DropoutWrapper(cell, output_keep_prob=keep_prob)\n rnn_cell = tf.contrib.rnn.MultiRNNCell([cell] * num_layers)\n _, rnn_state = tf.nn.dynamic_rnn(rnn_cell, rnn_inputs, dtype=tf.float32)\n return rnn_state\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_encoding_layer(encoding_layer)",
"Decoding - Training\nCreate training logits using tf.contrib.seq2seq.simple_decoder_fn_train() and tf.contrib.seq2seq.dynamic_rnn_decoder(). Apply the output_fn to the tf.contrib.seq2seq.dynamic_rnn_decoder() outputs.",
"def decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope,\n output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for training\n :param encoder_state: Encoder State\n :param dec_cell: Decoder RNN Cell\n :param dec_embed_input: Decoder embedded input\n :param sequence_length: Sequence Length\n :param decoding_scope: TenorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Train Logits\n \"\"\"\n # TODO: Implement Function\n train_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_train(encoder_state)\n train_pred, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(\n dec_cell, train_decoder_fn, dec_embed_input, sequence_length, scope=decoding_scope)\n \n # Apply output function\n train_logits = output_fn(train_pred)\n return tf.nn.dropout(train_logits, keep_prob=keep_prob)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_train(decoding_layer_train)",
"Decoding - Inference\nCreate inference logits using tf.contrib.seq2seq.simple_decoder_fn_inference() and tf.contrib.seq2seq.dynamic_rnn_decoder().",
"def decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id,\n maximum_length, vocab_size, decoding_scope, output_fn, keep_prob):\n \"\"\"\n Create a decoding layer for inference\n :param encoder_state: Encoder state\n :param dec_cell: Decoder RNN Cell\n :param dec_embeddings: Decoder embeddings\n :param start_of_sequence_id: GO ID\n :param end_of_sequence_id: EOS Id\n :param maximum_length: The maximum allowed time steps to decode\n :param vocab_size: Size of vocabulary\n :param decoding_scope: TensorFlow Variable Scope for decoding\n :param output_fn: Function to apply the output layer\n :param keep_prob: Dropout keep probability\n :return: Inference Logits\n \"\"\"\n # TODO: Implement Function\n infer_decoder_fn = tf.contrib.seq2seq.simple_decoder_fn_inference(\n output_fn, encoder_state, dec_embeddings, start_of_sequence_id, end_of_sequence_id, \n maximum_length - 1, vocab_size)\n inference_logits, _, _ = tf.contrib.seq2seq.dynamic_rnn_decoder(dec_cell, infer_decoder_fn, scope=decoding_scope)\n return tf.nn.dropout(inference_logits, keep_prob=keep_prob)\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer_infer(decoding_layer_infer)",
"Build the Decoding Layer\nImplement decoding_layer() to create a Decoder RNN layer.\n\nCreate RNN cell for decoding using rnn_size and num_layers.\nCreate the output fuction using lambda to transform it's input, logits, to class logits.\nUse the your decoding_layer_train(encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob) function to get the training logits.\nUse your decoding_layer_infer(encoder_state, dec_cell, dec_embeddings, start_of_sequence_id, end_of_sequence_id, maximum_length, vocab_size, decoding_scope, output_fn, keep_prob) function to get the inference logits.\n\nNote: You'll need to use tf.variable_scope to share variables between training and inference.",
"def decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size,\n num_layers, target_vocab_to_int, keep_prob):\n \"\"\"\n Create decoding layer\n :param dec_embed_input: Decoder embedded input\n :param dec_embeddings: Decoder embeddings\n :param encoder_state: The encoded state\n :param vocab_size: Size of vocabulary\n :param sequence_length: Sequence Length\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :param keep_prob: Dropout keep probability\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # TODO: Implement Function\n \n with tf.variable_scope(\"decoding\") as decoding_scope:\n dec_cell = tf.contrib.rnn.MultiRNNCell([tf.contrib.rnn.BasicLSTMCell(rnn_size)] * num_layers)\n\n # Output Layer\n output_fn = lambda x: tf.contrib.layers.fully_connected(\n x, vocab_size, None, scope=decoding_scope\n )\n train_logits = decoding_layer_train(\n encoder_state, dec_cell, dec_embed_input, sequence_length, decoding_scope, output_fn, keep_prob\n )\n \n with tf.variable_scope(\"decoding\", reuse=True) as decoding_scope:\n infer_logits = decoding_layer_infer(\n encoder_state, dec_cell, dec_embeddings, target_vocab_to_int['<GO>'], target_vocab_to_int['<EOS>'], \n sequence_length, vocab_size, decoding_scope, output_fn, keep_prob\n )\n return train_logits, infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_decoding_layer(decoding_layer)",
"Build the Neural Network\nApply the functions you implemented above to:\n\nApply embedding to the input data for the encoder.\nEncode the input using your encoding_layer(rnn_inputs, rnn_size, num_layers, keep_prob).\nProcess target data using your process_decoding_input(target_data, target_vocab_to_int, batch_size) function.\nApply embedding to the target data for the decoder.\nDecode the encoded input using your decoding_layer(dec_embed_input, dec_embeddings, encoder_state, vocab_size, sequence_length, rnn_size, num_layers, target_vocab_to_int, keep_prob).",
"def seq2seq_model(input_data, target_data, keep_prob, batch_size, sequence_length, source_vocab_size, \n target_vocab_size, enc_embedding_size, dec_embedding_size, rnn_size, num_layers, \n target_vocab_to_int):\n \"\"\"\n Build the Sequence-to-Sequence part of the neural network\n :param input_data: Input placeholder\n :param target_data: Target placeholder\n :param keep_prob: Dropout keep probability placeholder\n :param batch_size: Batch Size\n :param sequence_length: Sequence Length\n :param source_vocab_size: Source vocabulary size\n :param target_vocab_size: Target vocabulary size\n :param enc_embedding_size: Decoder embedding size\n :param dec_embedding_size: Encoder embedding size\n :param rnn_size: RNN Size\n :param num_layers: Number of layers\n :param target_vocab_to_int: Dictionary to go from the target words to an id\n :return: Tuple of (Training Logits, Inference Logits)\n \"\"\"\n # Apply embedding to the input data\n enc_embed_input = tf.contrib.layers.embed_sequence(input_data, source_vocab_size, enc_embedding_size)\n \n # Encode the input\n enc_state = encoding_layer(enc_embed_input, rnn_size=rnn_size, num_layers=num_layers, keep_prob=keep_prob)\n \n # Process target data\n dec_input = process_decoding_input(target_data, target_vocab_to_int, batch_size)\n \n # Decoder Embedding\n dec_embeddings = tf.Variable(tf.random_uniform([target_vocab_size, dec_embedding_size]))\n dec_embed_input = tf.nn.embedding_lookup(dec_embeddings, dec_input)\n \n train_logits, infer_logits = decoding_layer(\n dec_embed_input, dec_embeddings, enc_state, target_vocab_size, sequence_length, rnn_size, \n num_layers, target_vocab_to_int, keep_prob\n )\n return train_logits, infer_logits\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_seq2seq_model(seq2seq_model)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet num_layers to the number of layers.\nSet encoding_embedding_size to the size of the embedding for the encoder.\nSet decoding_embedding_size to the size of the embedding for the decoder.\nSet learning_rate to the learning rate.\nSet keep_probability to the Dropout keep probability",
"# Number of Epochs\nepochs = 5\n\n# Batch Size\nbatch_size = 256\n\n# RNN Size\nrnn_size = 1000\n\n# Number of Layers\nnum_layers = 2\n\n# Embedding Size\nencoding_embedding_size = 50\ndecoding_embedding_size = 50\n\n# Learning Rate\nlearning_rate = 0.001\n\n# Dropout Keep Probability\nkeep_probability = 0.75",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nsave_path = 'checkpoints/dev'\n(source_int_text, target_int_text), (source_vocab_to_int, target_vocab_to_int), _ = helper.load_preprocess()\nmax_source_sentence_length = max([len(sentence) for sentence in source_int_text])\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n input_data, targets, lr, keep_prob = model_inputs()\n sequence_length = tf.placeholder_with_default(max_source_sentence_length, None, name='sequence_length')\n input_shape = tf.shape(input_data)\n \n train_logits, inference_logits = seq2seq_model(\n tf.reverse(input_data, [-1]), targets, keep_prob, batch_size, sequence_length, len(source_vocab_to_int), len(target_vocab_to_int),\n encoding_embedding_size, decoding_embedding_size, rnn_size, num_layers, target_vocab_to_int)\n\n tf.identity(inference_logits, 'logits')\n with tf.name_scope(\"optimization\"):\n # Loss function\n cost = tf.contrib.seq2seq.sequence_loss(\n train_logits,\n targets,\n tf.ones([input_shape[0], sequence_length]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport time\n\ndef get_accuracy(target, logits):\n \"\"\"\n Calculate accuracy\n \"\"\"\n max_seq = max(target.shape[1], logits.shape[1])\n if max_seq - target.shape[1]:\n target = np.pad(\n target,\n [(0,0),(0,max_seq - target.shape[1])],\n 'constant')\n if max_seq - logits.shape[1]:\n logits = np.pad(\n logits,\n [(0,0),(0,max_seq - logits.shape[1]), (0,0)],\n 'constant')\n\n return np.mean(np.equal(target, np.argmax(logits, 2)))\n\ntrain_source = source_int_text[batch_size:]\ntrain_target = target_int_text[batch_size:]\n\nvalid_source = helper.pad_sentence_batch(source_int_text[:batch_size])\nvalid_target = helper.pad_sentence_batch(target_int_text[:batch_size])\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(epochs):\n for batch_i, (source_batch, target_batch) in enumerate(\n helper.batch_data(train_source, train_target, batch_size)):\n start_time = time.time()\n \n _, loss = sess.run(\n [train_op, cost],\n {input_data: source_batch,\n targets: target_batch,\n lr: learning_rate,\n sequence_length: target_batch.shape[1],\n keep_prob: keep_probability})\n \n batch_train_logits = sess.run(\n inference_logits,\n {input_data: source_batch, keep_prob: 1.0})\n batch_valid_logits = sess.run(\n inference_logits,\n {input_data: valid_source, keep_prob: 1.0})\n \n train_acc = get_accuracy(target_batch, batch_train_logits)\n valid_acc = get_accuracy(np.array(valid_target), batch_valid_logits)\n end_time = time.time()\n print('Epoch {:>3} Batch {:>4}/{} - Train Accuracy: {:>6.3f}, Validation Accuracy: {:>6.3f}, Loss: {:>6.3f}'\n .format(epoch_i, batch_i, len(source_int_text) // batch_size, train_acc, valid_acc, loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_path)\n print('Model Trained and Saved')",
"Save Parameters\nSave the batch_size and save_path parameters for inference.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params(save_path)",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, (source_vocab_to_int, target_vocab_to_int), (source_int_to_vocab, target_int_to_vocab) = helper.load_preprocess()\nload_path = helper.load_params()",
"Sentence to Sequence\nTo feed a sentence into the model for translation, you first need to preprocess it. Implement the function sentence_to_seq() to preprocess new sentences.\n\nConvert the sentence to lowercase\nConvert words into ids using vocab_to_int\nConvert words not in the vocabulary, to the <UNK> word id.",
"def sentence_to_seq(sentence, vocab_to_int):\n \"\"\"\n Convert a sentence to a sequence of ids\n :param sentence: String\n :param vocab_to_int: Dictionary to go from the words to an id\n :return: List of word ids\n \"\"\"\n # TODO: Implement Function\n word_ids = []\n uks_id = vocab_to_int[\"<UNK>\"]\n for word in sentence.lower().split():\n word_ids.append(vocab_to_int.get(word, uks_id))\n return word_ids\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_sentence_to_seq(sentence_to_seq)",
"Translate\nThis will translate translate_sentence from English to French.",
"translate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format([source_int_to_vocab[i] for i in translate_sentence]))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)]))\n\ntranslate_sentence = 'he saw a old yellow truck .'\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\ntranslate_sentence = sentence_to_seq(translate_sentence, source_vocab_to_int)\n\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_path + '.meta')\n loader.restore(sess, load_path)\n\n input_data = loaded_graph.get_tensor_by_name('input:0')\n logits = loaded_graph.get_tensor_by_name('logits:0')\n keep_prob = loaded_graph.get_tensor_by_name('keep_prob:0')\n\n translate_logits = sess.run(logits, {input_data: [translate_sentence], keep_prob: 1.0})[0]\n\nprint('Input')\nprint(' Word Ids: {}'.format([i for i in translate_sentence]))\nprint(' English Words: {}'.format(\" \".join([source_int_to_vocab[i] for i in translate_sentence])))\n\nprint('\\nPrediction')\nprint(' Word Ids: {}'.format([i for i in np.argmax(translate_logits, 1)]))\nprint(' French Words: {}'.format(\" \".join([target_int_to_vocab[i] for i in np.argmax(translate_logits, 1)])))",
"Imperfect Translation\nYou might notice that some sentences translate better than others. Since the dataset you're using only has a vocabulary of 227 English words of the thousands that you use, you're only going to see good results using these words. Additionally, the translations in this data set were made by Google translate, so the translations themselves aren't particularly good. (We apologize to the French speakers out there!) Thankfully, for this project, you don't need a perfect translation. However, if you want to create a better translation model, you'll need better data.\nYou can train on the WMT10 French-English corpus. This dataset has more vocabulary and richer in topics discussed. However, this will take you days to train, so make sure you've a GPU and the neural network is performing well on dataset we provided. Just make sure you play with the WMT10 corpus after you've submitted this project.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_language_translation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
DB2-Samples/db2jupyter
|
Db2 Statistical Functions.ipynb
|
apache-2.0
|
[
"<a id='top'></a>\nDb2 Statistical Functions\nDb2 already has a variety of Statistical functions built in. In Db2 11.1, a number of new\nfunctions have been added including:\n\nCOVARIANCE_SAMP - The COVARIANCE_SAMP function returns the sample covariance of a set of number pairs\nSTDDEV_SAMP - The STDDEV_SAMP column function returns the sample standard deviation (division by [n-1]) of a set of numbers.\nVARIANCE_SAMP or VAR_SAMP - The VARIANCE_SAMP column function returns the sample variance (division by [n-1]) of a set of numbers.\nCUME_DIST - The CUME_DIST column function returns the cumulative distribution of a row that is hypothetically inserted into a group of rows\nPERCENT_RANK - The PERCENT_RANK column function returns the relative percentile rank of a row that is hypothetically inserted into a group of rows.\nPERCENTILE_DISC, PERCENTILE_CONT - Returns the value that corresponds to the specified percentile given a sort specification by using discrete (DISC) or continuous (CONT) distribution\nMEDIAN - The MEDIAN column function returns the median value in a set of values\nWIDTH_BUCKET - The WIDTH_BUCKET function is used to create equal-width histograms\n\nSampling Functions\nThe traditional VARIANCE, COVARIANCE, and STDDEV functions have been available in Db2 for a long time. When computing these values, the formulae assume that the entire population has been counted (N). The traditional formula for standard deviation is:\n$$\\sigma=\\sqrt{\\frac{1}{N}\\sum_{i=1}^N(x_{i}-\\mu)^{2}}$$\nN refers to the size of the population and in many cases, we only have a sample, not the entire population of values. \nIn this case, the formula needs to be adjusted to account for the sampling.\n$$s=\\sqrt{\\frac{1}{N-1}\\sum_{i=1}^N(x_{i}-\\bar{x})^{2}}$$\nSet up the connection to the database.",
"%run db2.ipynb",
"We populate the database with the EMPLOYEE and DEPARTMENT tables so that we can run the various examples.",
"%sql -sampledata",
"<a id=\"covariance\"></a>\nCOVARIANCE_SAMP\nThe COVARIANCE_SAMP function returns the sample covariance of a set of number pairs.",
"%%sql\nSELECT COVARIANCE_SAMP(SALARY, BONUS) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'A00'",
"<a id=\"stddev\"></a>\nSTDDEV_SAMP\nThe STDDEV_SAMP column function returns the sample standard deviation (division by [n-1]) of a set of numbers.",
"%%sql\nSELECT STDDEV_SAMP(SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'A00'",
"<a id=\"variance\"></a>\nVARIANCE_SAMP\nThe VARIANCE_SAMP column function returns the sample variance (division by [n-1]) of a set of numbers.",
"%%sql\nSELECT VARIANCE_SAMP(SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'A00'",
"<a id=\"median\"></a>\nMEDIAN\nThe MEDIAN column function returns the median value in a set of values.",
"%%sql\nSELECT MEDIAN(SALARY) AS MEDIAN, AVG(SALARY) AS AVERAGE \n FROM EMPLOYEE \nWHERE WORKDEPT = 'E21'",
"<a id=\"cume\"></a>\nCUME_DIST\nThe CUME_DIST column function returns the cumulative distribution of a row that is hypothetically inserted into \na group of rows.",
"%%sql\nSELECT CUME_DIST(47000) WITHIN GROUP (ORDER BY SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'A00'",
"<a id=\"rank\"></a>\nPERCENT_RANK\nThe PERCENT_RANK column function returns the relative percentile rank of a\nrow that is hypothetically inserted into a group of rows.",
"%%sql\nSELECT PERCENT_RANK(47000) WITHIN GROUP (ORDER BY SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'A00'",
"<a id=\"disc\"></a>\nPERCENTILE_DISC\nThe PERCENTILE_DISC/CONT returns the value that corresponds to the specified percentile \ngiven a sort specification by using discrete (DISC) or continuous (CONT) distribution.",
"%%sql\nSELECT PERCENTILE_DISC(0.75) WITHIN GROUP (ORDER BY SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'E21'",
"<a id=\"cont\"></a>\nPERCENTILE_CONT\nThis is a function that gives you a continuous percentile calculation.",
"%%sql\nSELECT PERCENTILE_CONT(0.75) WITHIN GROUP (ORDER BY SALARY) \n FROM EMPLOYEE \nWHERE WORKDEPT = 'E21'",
"<a id=\"width\"></a>\nWIDTH BUCKET and Histogram Example\nThe WIDTH_BUCKET function is used to create equal-width histograms. Using the EMPLOYEE table, \nThis SQL will assign a bucket to each employee's salary using a range of 35000 to 100000 divided into 13 buckets.",
"%%sql\nSELECT EMPNO, SALARY, WIDTH_BUCKET(SALARY, 35000, 100000, 13) \n FROM EMPLOYEE \nORDER BY EMPNO",
"We can plot this information by adding some more details to the bucket output.",
"%%sql -a\nWITH BUCKETS(EMPNO, SALARY, BNO) AS \n ( \n SELECT EMPNO, SALARY, \n WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET \n FROM EMPLOYEE ORDER BY EMPNO \n ) \nSELECT BNO, COUNT(*) AS COUNT FROM BUCKETS \nGROUP BY BNO \nORDER BY BNO ASC ",
"And here is a plot of the data to make sense of the histogram.",
"%%sql -pb\nWITH BUCKETS(EMPNO, SALARY, BNO) AS \n ( \n SELECT EMPNO, SALARY, \n WIDTH_BUCKET(SALARY, 35000, 100000, 9) AS BUCKET \n FROM EMPLOYEE ORDER BY EMPNO \n ) \nSELECT BNO, COUNT(*) AS COUNT FROM BUCKETS \nGROUP BY BNO \nORDER BY BNO ASC ",
"Back to Top\nCredits: IBM 2018, George Baklarz [baklarz@ca.ibm.com]"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
arcyfelix/Courses
|
18-05-28-Complete-Guide-to-Tensorflow-for-Deep-Learning-with-Python/04-Recurrent-Neural-Networks/.ipynb_checkpoints/03-Time-Series-Exercise-Solutions-Final-checkpoint.ipynb
|
apache-2.0
|
[
"Time Series Exercise - Solutions\nFollow along with the instructions in bold. Watch the solutions video if you get stuck!\nThe Data\n Source: https://datamarket.com/data/set/22ox/monthly-milk-production-pounds-per-cow-jan-62-dec-75#!ds=22ox&display=line \nMonthly milk production: pounds per cow. Jan 62 - Dec 75\n Import numpy pandas and matplotlib",
"import numpy as np\nimport pandas as pd\n\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Use pandas to read the csv of the monthly-milk-production.csv file and set index_col='Month'",
"milk = pd.read_csv('monthly-milk-production.csv',index_col='Month')",
"Check out the head of the dataframe",
"milk.head()",
"Make the index a time series by using: \nmilk.index = pd.to_datetime(milk.index)",
"milk.index = pd.to_datetime(milk.index)",
"Plot out the time series data.",
"milk.plot()",
"Train Test Split\n Let's attempt to predict a year's worth of data. (12 months or 12 steps into the future) \n Create a test train split using indexing (hint: use .head() or tail() or .iloc[]). We don't want a random train test split, we want to specify that the test set is the last 3 months of data is the test set, with everything before it is the training.",
"milk.info()\n\ntrain_set = milk.head(156)\n\ntest_set = milk.tail(12)",
"Scale the Data\n Use sklearn.preprocessing to scale the data using the MinMaxScaler. Remember to only fit_transform on the training data, then transform the test data. You shouldn't fit on the test data as well, otherwise you are assuming you would know about future behavior!",
"from sklearn.preprocessing import MinMaxScaler\n\nscaler = MinMaxScaler()\n\ntrain_scaled = scaler.fit_transform(train_set)\n\ntest_scaled = scaler.transform(test_set)",
"Batch Function\n We'll need a function that can feed batches of the training data. We'll need to do several things that are listed out as steps in the comments of the function. Remember to reference the previous batch method from the lecture for hints. Try to fill out the function template below, this is a pretty hard step, so feel free to reference the solutions!",
"def next_batch(training_data,batch_size,steps):\n \"\"\"\n INPUT: Data, Batch Size, Time Steps per batch\n OUTPUT: A tuple of y time series results. y[:,:-1] and y[:,1:]\n \"\"\"\n \n # STEP 1: Use np.random.randint to set a random starting point index for the batch.\n # Remember that each batch needs have the same number of steps in it.\n # This means you should limit the starting point to len(data)-steps\n \n # STEP 2: Now that you have a starting index you'll need to index the data from\n # the random start to random start + steps. Then reshape this data to be (1,steps)\n \n # STEP 3: Return the batches. You'll have two batches to return y[:,:-1] and y[:,1:]\n # You'll need to reshape these into tensors for the RNN. Depending on your indexing it\n # will be either .reshape(-1,steps-1,1) or .reshape(-1,steps,1)\n\ndef next_batch(training_data,batch_size,steps):\n \n \n # Grab a random starting point for each batch\n rand_start = np.random.randint(0,len(training_data)-steps) \n\n # Create Y data for time series in the batches\n y_batch = np.array(training_data[rand_start:rand_start+steps+1]).reshape(1,steps+1)\n\n return y_batch[:, :-1].reshape(-1, steps, 1), y_batch[:, 1:].reshape(-1, steps, 1) ",
"Setting Up The RNN Model\n Import TensorFlow",
"import tensorflow as tf",
"The Constants\n Define the constants in a single cell. You'll need the following (in parenthesis are the values I used in my solution, but you can play with some of these): \n* Number of Inputs (1)\n* Number of Time Steps (12)\n* Number of Neurons per Layer (100)\n* Number of Outputs (1)\n* Learning Rate (0.003)\n* Number of Iterations for Training (4000)\n* Batch Size (1)",
"# Just one feature, the time series\nnum_inputs = 1\n# Num of steps in each batch\nnum_time_steps = 12\n# 100 neuron layer, play with this\nnum_neurons = 100\n# Just one output, predicted time series\nnum_outputs = 1\n\n## You can also try increasing iterations, but decreasing learning rate\n# learning rate you can play with this\nlearning_rate = 0.03 \n# how many iterations to go through (training steps), you can play with this\nnum_train_iterations = 4000\n# Size of the batch of data\nbatch_size = 1",
"Create Placeholders for X and y. (You can change the variable names if you want). The shape for these placeholders should be [None,num_time_steps-1,num_inputs] and [None, num_time_steps-1, num_outputs] The reason we use num_time_steps-1 is because each of these will be one step shorter than the original time steps size, because we are training the RNN network to predict one point into the future based on the input sequence.",
"X = tf.placeholder(tf.float32, [None, num_time_steps, num_inputs])\ny = tf.placeholder(tf.float32, [None, num_time_steps, num_outputs])",
"Now create the RNN Layer, you have complete freedom over this, use tf.contrib.rnn and choose anything you want, OutputProjectionWrappers, BasicRNNCells, BasicLSTMCells, MultiRNNCell, GRUCell etc... Keep in mind not every combination will work well! (If in doubt, the solutions used an Outputprojection Wrapper around a basic LSTM cell with relu activation.",
"# Also play around with GRUCell\ncell = tf.contrib.rnn.OutputProjectionWrapper(\n tf.contrib.rnn.BasicLSTMCell(num_units=num_neurons, activation=tf.nn.relu),\n output_size=num_outputs) ",
"Now pass in the cells variable into tf.nn.dynamic_rnn, along with your first placeholder (X)",
"outputs, states = tf.nn.dynamic_rnn(cell, X, dtype=tf.float32)",
"Loss Function and Optimizer\n Create a Mean Squared Error Loss Function and use it to minimize an AdamOptimizer, remember to pass in your learning rate.",
"loss = tf.reduce_mean(tf.square(outputs - y)) # MSE\noptimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)\ntrain = optimizer.minimize(loss)",
"Initialize the global variables",
"init = tf.global_variables_initializer()",
"Create an instance of tf.train.Saver()",
"saver = tf.train.Saver()",
"Session\n Run a tf.Session that trains on the batches created by your next_batch function. Also add an a loss evaluation for every 100 training iterations. Remember to save your model after you are done training.",
"gpu_options = tf.GPUOptions(per_process_gpu_memory_fraction=0.9)\n\nwith tf.Session(config=tf.ConfigProto(gpu_options=gpu_options)) as sess:\n sess.run(init)\n \n for iteration in range(num_train_iterations):\n \n X_batch, y_batch = next_batch(train_scaled,batch_size,num_time_steps)\n sess.run(train, feed_dict={X: X_batch, y: y_batch})\n \n if iteration % 100 == 0:\n \n mse = loss.eval(feed_dict={X: X_batch, y: y_batch})\n print(iteration, \"\\tMSE:\", mse)\n \n # Save Model for Later\n saver.save(sess, \"./ex_time_series_model\")",
"Predicting Future (Test Data)\n Show the test_set (the last 12 months of your original complete data set)",
"test_set",
"Now we want to attempt to predict these 12 months of data, using only the training data we had. To do this we will feed in a seed training_instance of the last 12 months of the training_set of data to predict 12 months into the future. Then we will be able to compare our generated 12 months to our actual true historical values from the test set! \nGenerative Session\nNOTE: Recall that our model is really only trained to predict 1 time step ahead, asking it to generate 12 steps is a big ask, and technically not what it was trained to do! Think of this more as generating new values based off some previous pattern, rather than trying to directly predict the future. You would need to go back to the original model and train the model to predict 12 time steps ahead to really get a higher accuracy on the test data. (Which has its limits due to the smaller size of our data set)\n Fill out the session code below to generate 12 months of data based off the last 12 months of data from the training set. The hardest part about this is adjusting the arrays with their shapes and sizes. Reference the lecture for hints.",
"with tf.Session() as sess:\n \n # Use your Saver instance to restore your saved rnn time series model\n saver.restore(sess, \"./ex_time_series_model\")\n\n # Create a numpy array for your genreative seed from the last 12 months of the \n # training set data. Hint: Just use tail(12) and then pass it to an np.array\n train_seed = list(train_scaled[-12:])\n \n ## Now create a for loop that \n for iteration in range(12):\n X_batch = np.array(train_seed[-num_time_steps:]).reshape(1, num_time_steps, 1)\n y_pred = sess.run(outputs, feed_dict={X: X_batch})\n train_seed.append(y_pred[0, -1, 0])",
"Show the result of the predictions.",
"train_seed",
"Grab the portion of the results that are the generated values and apply inverse_transform on them to turn them back into milk production value units (lbs per cow). Also reshape the results to be (12,1) so we can easily add them to the test_set dataframe.",
"results = scaler.inverse_transform(np.array(train_seed[12:]).reshape(12,1))",
"Create a new column on the test_set called \"Generated\" and set it equal to the generated results. You may get a warning about this, feel free to ignore it.",
"test_set['Generated'] = results",
"View the test_set dataframe.",
"test_set",
"Plot out the two columns for comparison.",
"test_set.plot()",
"Great Job!\nPlay around with the parameters and RNN layers, does a faster learning rate with more steps improve the model? What about GRU or BasicRNN units? What if you train the original model to not just predict one timestep ahead into the future, but 3 instead? Lots of stuff to add on here!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
harpolea/r3d2
|
docs/p_v_plots.ipynb
|
mit
|
[
"Parametric plots\nWhen looking at Riemann problems, it is common to think about how a wave connects a state that is known to other states. For example, the Hugoniot curve gives the set of states that can be connected to a known state by a shock. Solutions to Riemann problems can be illustrated by the intersections of such curves in some parametric space.\nHere we will illustrate this by showing how the curves for various problems behave in the pressure-velocity ($P-v$) space.\nInert case",
"from r3d2 import eos_defns, State, RiemannProblem, utils\nfrom matplotlib import pyplot\n%matplotlib inline\n\ngamma = 5.0/3.0\neos = eos_defns.eos_gamma_law(gamma)\ntest_1_U_left = State(10.0, 0.0, 0.0, 2.0, eos, label=\"L\")\ntest_1_U_right = State(1.0, 0.0, 0.0, 0.5, eos, label=\"R\")\n\ntest_1_rp = RiemannProblem(test_1_U_left, test_1_U_right)",
"This Riemann problem has a simple rarefaction-shock structure: a left going rarefaction, a contact, and a right going shock.",
"from IPython.display import display, display_png\ndisplay_png(test_1_rp)",
"The solution in parameter space shows the curves that can connect to the left and right states. Both a shock and a rarefaction can connect to either state. However, the only intersection (for the central, \"star\" state) is when a left going rarefaction connects to the left state, and a right going shock connects to the right state:",
"fig = pyplot.figure(figsize=(10,6))\nax = fig.add_subplot(111)\nutils.plot_P_v(test_1_rp, ax, fig)",
"Varying EOS\nAs noted previously, it's not necessary for the equation of state to match in the different states. Here is a problem with two inert EOSs. This problem is solved by two rarefactions.",
"gamma_air = 1.4\neos_air = eos_defns.eos_gamma_law(gamma_air)\nU_vary_eos_L = State(1.0, -0.5, 0.0, 2.0, eos, label=\"L\")\nU_vary_eos_R = State(1.0, +0.5, 0.0, 2.0, eos_air, label=\"R\")\ntest_vary_eos_rp = RiemannProblem(U_vary_eos_L, U_vary_eos_R)\ndisplay_png(test_vary_eos_rp)",
"The parametric solution in this case shows the pressure decreasing across both curves along rarefactions to get to the star state:",
"fig2 = pyplot.figure(figsize=(10,6))\nax2 = fig2.add_subplot(111)\nutils.plot_P_v(test_vary_eos_rp, ax2, fig2)",
"Reactive cases\nOnce reactions start, the behaviour gets more complex, and the parametric pictures can clarify some aspects. An example would be a single deflagration wave that is preceded by a shock to ignite the reaction:",
"eos = eos_defns.eos_gamma_law(5.0/3.0)\neos_reactive = eos_defns.eos_gamma_law_react(5.0/3.0, 0.1, 1.0, 1.0, eos)\nU_reactive_right = State(0.5, 0.0, 0.0, 1.0, eos_reactive)\nU_reactive_left = State(0.24316548798524526, -0.39922932397353039, 0.0,\n 0.61686385086179807, eos)\ntest_precursor_rp = RiemannProblem(U_reactive_left, U_reactive_right)\ndisplay_png(test_precursor_rp)",
"The structure here looks like previous Riemann problems, but there is in fact only one wave. The right state is connected directly to the left state across a compound wave formed from a precursor shock, which raises the temperature until the gas ignites, then a Chapman-Jouget deflagration which is attached to a rarefaction.\nIn parameter space we see this behaviour more directly:",
"fig3 = pyplot.figure(figsize=(10,6))\nax3 = fig3.add_subplot(111)\nutils.plot_P_v(test_precursor_rp, ax3, fig3)",
"The smooth joining of the deflagration and rarefaction curves at the Chapman-Jouget (CJ) point is expected, as the propagation speed of the waves must match there. The ignition point (where the temperature is high enough for the reaction to take place) is illustrated by the \"$i$\"."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ealogar/curso-python
|
basic/1_Numbers_Strings.ipynb
|
apache-2.0
|
[
"Numbers",
"spam = 65 # an integer declaration. Yes, no semicolon ; at the end\nprint spam # print is a reserved keyword / statement\n\nprint type(spam) # this is a functional call\n\neggs = 2\nprint eggs\nprint type(eggs)",
"Let's see the numeric operations",
"print spam + eggs # sum\n\nprint spam - eggs # difference\n\nprint spam * eggs # product\n\nprint spam / eggs # quotient\n\nprint spam // eggs # floored quotient\n\nprint spam % eggs # remainder or module\n\nprint pow(spam, eggs) # power (yes, this is how a funtion is called)\n\nprint spam ** eggs # power\n\nfooo = -2 # negative value\nprint fooo\nprint type(fooo)\n\nprint -fooo # negated\n\nprint +fooo # unchanged\n\nprint abs(fooo) # absolute value\n\nprint int(fooo) # convert to integer\n\nprint long(fooo) # convert to long integer\n\nprint float(fooo) # convert to float\n\nfooo += 1 # autoincremental (there is no ++)\nprint fooo\n\n# More on the quotient\n\nprint spam / eggs # quotient\nprint spam / float(eggs) # quotient\nprint spam // float(eggs) # floored quotient, aka. integer division\n\n# More on the operations result type\n\nprint type(spam + eggs)\nprint type(long(spam) + eggs)\nprint type(float(spam) + eggs)\nprint type(float(spam) + long(eggs))",
"Python automatically infers the type of the result depending on operands type",
"# Let's try again the power\n\nprint eggs ** spam\nprint type(eggs ** spam)\n\nfrom sys import maxint\nprint maxint\n\n# Let's instantiate other values\n\nspam = 65L # a long\nprint spam\nprint type(spam)\n\neggs = 2.0 # a float\nprint eggs\nprint type(eggs)\n\nspam = 0101 # an integer in octet\nprint spam\nprint type(spam)\n\nspam = 0x41 # an integer in hexadecimal\nprint spam\nprint type(spam)\n\n# Let's do more complex operations\n\nprint round(spam + eggs / spam, 3) # round to n digits, 0 by default\nprint round((spam + eggs) / spam, 3) # round to n digits, 0 by default\nprint round(spam + (eggs / spam), 3) # round to n digits, 0 by default",
"Use parentheses to alter operations order\nSOURCES\nhttp://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex\nSTRINGS",
"spam = \"spam\" # a string\nprint spam\nprint type(spam)\n\neggs = '\"eggs\"' # another string\nprint eggs\nprint type(eggs)\n\neggs = '\\'eggs\\'' # another string\nprint eggs\n\nspam_eggs = \"'\\tspam\\n\\teggs'\" # another string\nprint spam_eggs\nprint type(spam_eggs)",
"Remember\nString literal are written in ingle or double quotes. It's exactly the same\nBackslash \\ is the escape character\nEscape sequences in strings are interpreted according to rules similar to those used by Standard C",
"spam_eggs = r\"'\\tspam\\n\\teggs'\" # a raw string\nprint spam_eggs\nprint type(spam_eggs)",
"Raw strings are prefixed with 'r' or 'R'\nRaw strings use different rules for interpreting interpreting backslash escape sequences",
"spam_eggs = u\"'\\tspam\\n\\teggs'\" # a unicode string\nprint spam_eggs\nprint type(spam_eggs)",
"Unicode strings are prefixed with 'u' or 'U'\n - unicode is a basic data type different than str:\n - str is ALWAYS an encoded text (although you never notice it)\n - unicode use the Unicode character set as defined by the Unicode Consortium and ISO 10646. We could say unicode are encoding-free strings\n - str string literals are encoded in the default system encoding ( sys.getdefaultencoding() ) or using the encoding specified in modules encoding header\n - The module encoding header is set : #-- coding: utf-8 --\n - There is no way to fully check a string encoding (chardet module)\n - Depending on encoding used it is possible to operate with unicode and str together",
"spam_eggs = u\"'spam\\u0020eggs'\" # a unicode string with Unicode-Escape encoding\nprint spam_eggs\nprint type(spam_eggs)",
"WARNING! Note that in Py3k this approach was radically changed:\n - All string literals are unicode by default and its type is 'str'\n - Encoded strings are specified with 'b' or 'B' prefix and its type is 'bytes'\n - Operations mixing str and bytes always raise a TypeError exception",
"spam_eggs = \"\"\"'\\tspam\n\\teggs'\"\"\" # another string\nprint spam_eggs\nprint type(spam_eggs)",
"Three single or double quotes also work, and they support multiline text",
"spam_eggs = \"'\\tspam\" \"\\n\\teggs'\" # another string\nprint spam_eggs\nprint type(spam_eggs)",
"Several consecutive string literals are automatically concatenated, even if declared in different consecutive lines\nUseful to declare too much long string literals",
"spam_eggs = u\"'\\tspam\\n \\\n\\teggs'\" # a unicode string\nprint spam_eggs\nprint type(spam_eggs)",
"Let's see strings operations",
"spam = \"spam\"\neggs = u'\"Eggs\"'\n\nprint spam.capitalize() # Return a copy with first character in upper case\n\nprint spam",
"WARNING! String are immutables. Its methods return always a new copy",
"print spam.decode() # Decode the str with given or default encoding\nprint type(spam.decode('utf8'))\n\nprint eggs.encode() # Encode the unicode with given or default encoding\nprint type(eggs.encode('utf8'))\n\nprint spam.endswith(\"am\") # Check string suffix (optionally use a tuple)\n\nprint eggs.startswith((\"eggs\", \"huevos\")) # Check string prefix (optionally use a tuple)\n\nprint spam.find(\"pa\") # Get index of substring (or -1)\n\nprint eggs.upper()\nprint eggs.lower() # Convert to upper or lower case\n\nprint spam.isupper()\nprint spam.islower() # Check if string is in upper or lower case\n\nprint repr(\" | spam # \".strip())\nprint repr(\" | spam # \".strip(' |#')) # Remove leading and trailing characters (only whitespace by default)\n\nprint spam.isalpha()\nprint eggs.isalnum()\nprint spam.isdigit() # Check if string is numeric, alpha, both...\n\nprint repr(eggs)\nprint str(eggs)",
"NOTE:\nThe repr module provides a means for producing object representations with limits on the size of the resulting strings. This is used in the Python debugger and may be useful in other contexts as well.\nstr Return a string containing a nicely printable representation of an object. For strings, this returns the string itself. The difference with repr(object) is that str(object) does not always attempt to return a string that is acceptable to eval(); its goal is to return a printable string. If no argument is given, returns the empty string, ''.",
"print \"spam, eggs, foo\".split(\", \")\nprint \"spam, eggs, foo\".split(\", \", 1) # Split by given character, returning a list (optionally specify times to split)\n\nprint \", \".join((\"spam\", \"eggs\", \"foo\")) # Use string as separator to concatenate an iterable of strings",
"Let's format strings",
"print \"%s %s %d\" % (spam, spam, 7) # This is the old string formatting, similar to C\n\nprint \"{0} {0} {1}\".format(spam, 7) # This is the new string formatting method, standard in Py3k\n\nprint \"{} {}\".format(spam, 7.12345)\n\nprint \"[{0:16}|{1:16}]\".format(-7.12345, 7.12345) # Use colon and width of formatted value\n\nprint \"[{0:>16}|{1:<16}]\".format(-7.12345, 7.12345) # Use <, >, =, ^ to specify the alignment of the value\n\nprint \"[{0:^16.3f}|{1:^16.3f}]\".format(-7.12345, 7.12345) # For floats, use the dot . and the f to specify precission of floats\n\nprint \"[{0:_^16.3f}|{1:_^16.3f}]\".format(-7.12345, 7.12345) # Specify the filling value before the alignment\n\nprint \"[{0:^+16.3f}|{1:^+16.3f}]\".format(-7.12345, 7.12345) # Force the sign appearance\n\nprint \"{0:b} {0:c} {0:o} {0:x}\".format(65) # For integers, specify base representation (binary, unicode character, octal, hexadecimal",
"SOURCES:\n\nhttp://docs.python.org/2/library/stdtypes.html#numeric-types-int-float-long-complex\nhttp://docs.python.org/2/reference/lexical_analysis.html#strings\nhttp://docs.python.org/2/reference/lexical_analysis.html#encodings\nhttp://docs.python.org/2/library/stdtypes.html#string-methods\nhttp://docs.python.org/2/library/string.html#formatstrings\n\nTIME TO START WORKING",
"# use every cell to run and make the assert pass\n# Exercise: write the code \ndef minimum(a, b):\n \"\"\"Computes minimum of given 2 numbers.\n \n >>> minimum(2, 3)\n 2\n >>> minimum(8, 5)\n 5\n \"\"\"\n # your code here\n \nassert minimum(2,3) == 2\n\ndef istrcmp(s1, s2):\n \"\"\"Compare given two strings for equality, ignoring the case.\n \n \t>>> istrcmp(\"python\", \"Python\")\n True\n \t>>> istrcmp(\"latex\", \"LaTeX\")\n True\n \t>>> istrcmp(\"foo\", \"Bar\")\n False\n \"\"\"\n # your code here\n\nassert istrcmp(\"HoLa\", \"hola\") == True\n\ndef reverse_text_by_word(text):\n '''given a text, reverse it by words\n\n >>> reverse_text_by_word('Sparse is better than dense.')\n 'dense. than better is Sparse'\n '''\n # your code here\n\nassert reverse_text_by_word('Sparse is better than dense.') == 'dense. than better is Sparse'\n \n\ndef sum_chars_text(text):\n '''given a text str, add all chars by its code\n tip:\n >>> c = 'c'\n >>> ord(c)\n 99\n '''\n # your code here\n\nassert sum_chars_text(\"Sume de caracteres\") == 1728"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mattpitkin/corner.py
|
docs/_static/notebooks/quickstart.ipynb
|
bsd-2-clause
|
[
"%matplotlib inline\n%config InlineBackend.figure_format = 'retina'\nfrom matplotlib import rcParams\nrcParams[\"font.size\"] = 16\nrcParams[\"font.family\"] = \"sans-serif\"\nrcParams[\"font.sans-serif\"] = [\"Computer Modern Sans\"]\nrcParams[\"text.usetex\"] = True\nrcParams[\"text.latex.preamble\"] = r\"\\usepackage{cmbright}\"\nrcParams[\"savefig.dpi\"] = 100",
"Getting started\nThe only user-facing function in the module is corner.corner and, in its simplest form, you use it like this:",
"import corner\nimport numpy as np\n\nndim, nsamples = 2, 10000\nnp.random.seed(42)\nsamples = np.random.randn(ndim * nsamples).reshape([nsamples, ndim])\nfigure = corner.corner(samples)",
"The following snippet demonstrates a few more bells and whistles:",
"# Set up the parameters of the problem.\nndim, nsamples = 3, 50000\n\n# Generate some fake data.\nnp.random.seed(42)\ndata1 = np.random.randn(ndim * 4 * nsamples // 5).reshape([4 * nsamples // 5, ndim])\ndata2 = (4*np.random.rand(ndim)[None, :] + np.random.randn(ndim * nsamples // 5).reshape([nsamples // 5, ndim]))\ndata = np.vstack([data1, data2])\n\n# Plot it.\nfigure = corner.corner(data, labels=[r\"$x$\", r\"$y$\", r\"$\\log \\alpha$\", r\"$\\Gamma \\, [\\mathrm{parsec}]$\"],\n quantiles=[0.16, 0.5, 0.84],\n show_titles=True, title_kwargs={\"fontsize\": 12})",
"The API documentation gives more details about all the arguments available for customization."
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
Danghor/Algorithms
|
Python/Chapter-06/AVL-Trees.ipynb
|
gpl-2.0
|
[
"from IPython.core.display import HTML\nwith open('../style.css') as file:\n css = file.read()\nHTML(css)",
"AVL Trees\nThis notebook implements\nAVL trees. The set $\\mathcal{A}$ of AVL trees is defined inductively:\n\n$\\texttt{Nil} \\in \\mathcal{A}$.\n$\\texttt{Node}(k,v,l,r) \\in \\mathcal{A}\\quad$ iff \n$\\texttt{Node}(k,v,l,r) \\in \\mathcal{B}$,\n$l, r \\in \\mathcal{A}$, and\n$|l.\\texttt{height}() - r.\\texttt{height}()| \\leq 1$.\n\n\n\nAccording to this definition, an AVL tree is an ordered binary tree\nsuch that for every node $\\texttt{Node}(k,v,l,r)$ in this tree the height of the left subtree $l$ and the right\nsubtree $r$ differ at most by one. \nThe class AVLTree represents the nodes of an AVL tree. This class has the following member variables:\n\nmKey is the key stored at the root of the tree,\nmValue is the values associated with this key,\nmLeft is the left subtree, \nmRight is the right subtree, and\nmHeight is the height.\n\nThe constructor __init__ creates the empty tree.",
"class AVLTree:\n def __init__(self):\n self.mKey = None\n self.mValue = None\n self.mLeft = None\n self.mRight = None\n self.mHeight = 0",
"Given an ordered binary tree $t$, the expression $t.\\texttt{isEmpty}()$ checks whether $t$ is the empty tree.",
"def isEmpty(self):\n return self.mHeight == 0\n\nAVLTree.isEmpty = isEmpty",
"Given an ordered binary tree $t$ and a key $k$, the expression $t.\\texttt{find}(k)$ returns the value stored unter the key $k$.\nThe method find is defined inductively as follows:\n - $\\texttt{Nil}.\\texttt{find}(k) = \\Omega$,\nbecause the empty tree is interpreted as the empty map.\n\n\n\n$\\texttt{Node}(k, v, l, r).\\texttt{find}(k) = v$,\nbecause the node $\\texttt{Node}(k,v,l,r)$ stores the assignment $k \\mapsto v$.\n - $k_1 < k_2 \\rightarrow \\texttt{Node}(k_2, v, l, r).\\texttt{find}(k_1) = l.\\texttt{find}(k_1)$,\nbecause if $k_1$ is less than $k_2$, then any mapping for $k_1$ has to be stored in the left subtree $l$.\n - $k_1 > k_2 \\rightarrow \\texttt{Node}(k_2, v, l, r).\\texttt{find}(k_1) = r.\\texttt{find}(k_1)$,\nbecause if $k_1$ is greater than $k_2$, then any mapping for $k_1$ has to be stored in the right subtree $r$.",
"def find(self, key):\n if self.isEmpty():\n return\n elif self.mKey == key:\n return self.mValue\n elif key < self.mKey:\n return self.mLeft.find(key)\n else:\n return self.mRight.find(key)\n \nAVLTree.find = find",
"The method $\\texttt{insert}()$ is specified via recursive equations.\n - $\\texttt{Nil}.\\texttt{insert}(k,v) = \\texttt{Node}(k,v, \\texttt{Nil}, \\texttt{Nil})$,\n - $\\texttt{Node}(k, v_2, l, r).\\texttt{insert}(k,v_1) = \\texttt{Node}(k, v_1, l, r)$,\n - $k_1 < k_2 \\rightarrow \n \\texttt{Node}(k_2, v_2, l, r).\\texttt{insert}(k_1, v_1) =\n \\texttt{Node}\\bigl(k_2, v_2, l.\\texttt{insert}(k_1,v_1), r\\bigr).\\texttt{restore}()$,\n - $k_1 > k_2 \\rightarrow \n \\texttt{Node}(k_2, v_2, l, r).\\texttt{insert}\\bigl(k_1, v_1\\bigr) = \n \\texttt{Node}\\bigl(k_2, v_2, l, r.\\texttt{insert}(k_1,v_1)\\bigr).\\texttt{restore}()$.\nThe function $\\texttt{restore}$ is an auxiliary function that is defined below. This function restores the balancing condition if it is violated after an insertion.",
"def insert(self, key, value):\n if self.isEmpty():\n self.mKey = key\n self.mValue = value\n self.mLeft = AVLTree()\n self.mRight = AVLTree()\n self.mHeight = 1\n elif self.mKey == key:\n self.mValue = value\n elif key < self.mKey:\n self.mLeft.insert(key, value)\n self._restore()\n else:\n self.mRight.insert(key, value)\n self._restore()\n\nAVLTree.insert = insert",
"The method $\\texttt{self}.\\texttt{delete}(k)$ removes the key $k$ from the tree $\\texttt{self}$. It is defined as follows:\n\n$\\texttt{Nil}.\\texttt{delete}(k) = \\texttt{Nil}$,\n$\\texttt{Node}(k,v,\\texttt{Nil},r).\\texttt{delete}(k) = r$,\n$\\texttt{Node}(k,v,l,\\texttt{Nil}).\\texttt{delete}(k) = l$,\n$l \\not= \\texttt{Nil} \\,\\wedge\\, r \\not= \\texttt{Nil} \\,\\wedge\\, \n \\langle r',k_{min}, v_{min} \\rangle := r.\\texttt{delMin}() \\;\\rightarrow\\;\n \\texttt{Node}(k,v,l,r).\\texttt{delete}(k) = \\texttt{Node}(k_{min},v_{min},l,r').\\texttt{restore}()$\n$k_1 < k_2 \\rightarrow \\texttt{Node}(k_2,v_2,l,r).\\texttt{delete}(k_1) = \n \\texttt{Node}\\bigl(k_2,v_2,l.\\texttt{delete}(k_1),r\\bigr).\\texttt{restore}()$,\n$k_1 > k_2 \\rightarrow \\texttt{Node}(k_2,v_2,l,r).\\texttt{delete}(k_1) = \n \\texttt{Node}\\bigl(k_2,v_2,l,r.\\texttt{delete}(k_1)\\bigr).\\texttt{restore}()$.",
"def delete(self, key):\n if self.isEmpty():\n return\n if key == self.mKey:\n if self.mLeft.isEmpty():\n self._update(self.mRight)\n elif self.mRight.isEmpty():\n self._update(self.mLeft)\n else:\n self.mRight, self.mKey, self.mValue = self.mRight._delMin()\n self._restore()\n elif key < self.mKey:\n self.mLeft.delete(key)\n self._restore()\n else:\n self.mRight.delete(key)\n self._restore()\n \nAVLTree.delete = delete",
"The method $\\texttt{self}.\\texttt{delMin}()$ removes the smallest key from the given tree $\\texttt{self}$\nand returns a triple of the form\n$$ (\\texttt{self}, k_m, v_m) $$\nwhere $\\texttt{self}$ is the tree that remains after removing the smallest key, while $k_m$ is the smallest key that has been found and $v_m$ is the associated value. \nThe function is defined as follows:\n\n$\\texttt{Node}(k, v, \\texttt{Nil}, r).\\texttt{delMin}() = \\langle r, k, v \\rangle$,\n$l\\not= \\texttt{Nil} \\wedge \\langle l',k_{min}, v_{min}\\rangle := l.\\texttt{delMin}() \n \\;\\rightarrow\\;\n \\texttt{Node}(k, v, l, r).\\texttt{delMin}() = \n \\langle \\texttt{Node}(k, v, l', r).\\texttt{restore}(), k_{min}, v_{min} \\rangle\n $",
"def _delMin(self):\n if self.mLeft.isEmpty():\n return self.mRight, self.mKey, self.mValue\n else:\n ls, km, vm = self.mLeft._delMin()\n self.mLeft = ls\n self._restore()\n return self, km, vm\n \nAVLTree._delMin = _delMin",
"Given two ordered binary trees $s$ and $t$, the expression $s.\\texttt{update}(t)$ overwrites the attributes of $s$ with the corresponding attributes of $t$.",
"def _update(self, t):\n self.mKey = t.mKey\n self.mValue = t.mValue\n self.mLeft = t.mLeft\n self.mRight = t.mRight\n self.mHeight = t.mHeight\n \nAVLTree._update = _update",
"The function $\\texttt{restore}(\\texttt{self})$ restores the balancing condition of the given binary tree\nat the root node and recompute the variable $\\texttt{mHeight}$.\nThe method $\\texttt{restore}$ is specified via conditional equations.\n\n\n$\\texttt{Nil}.\\texttt{restore}() = \\texttt{Nil}$,\nbecause the empty tree already is an AVL tree.\n - $|l.\\texttt{height}() - r.\\texttt{height}()| \\leq 1 \\rightarrow \n \\texttt{Node}(k,v,l,r).\\texttt{restore}() = \\texttt{Node}(k,v,l,r)$.\nIf the balancing condition is satisfied, then nothing needs to be done. \n - $\\begin{array}[t]{cl}\n & l_1.\\texttt{height}() = r_1.\\texttt{height}() + 2 \\ \n \\wedge & l_1 = \\texttt{Node}(k_2,v_2,l_2,r_2) \\\n \\wedge & l_2.\\texttt{height}() \\geq r_2.\\texttt{height}() \\[0.2cm]\n \\rightarrow & \\texttt{Node}(k_1,v_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_2,v_2,l_2,\\texttt{Node}(k_1,v_1,r_2,r_1)\\bigr)\n \\end{array}\n$\n - $\\begin{array}[t]{cl}\n & l_1.\\texttt{height}() = r_1.\\texttt{height}() + 2 \\ \n \\wedge & l_1 = \\texttt{Node}(k_2,v_2,l_2,r_2) \\\n \\wedge & l_2.\\texttt{height}() < r_2.\\texttt{height}() \\\n \\wedge & r_2 = \\texttt{Node}(k_3,v_3,l_3,r_3) \\\n \\rightarrow & \\texttt{Node}(k_1,v_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_3,v_3,\\texttt{Node}(k_2,v_2,l_2,l_3),\\texttt{Node}(k_1,v_1,r_3,r_1) \\bigr)\n \\end{array}\n$\n\n\n$\\begin{array}[t]{cl}\n & r_1.\\texttt{height}() = l_1.\\texttt{height}() + 2 \\ \n \\wedge & r_1 = \\texttt{Node}(k_2,v_2,l_2,r_2) \\\n \\wedge & r_2.\\texttt{height}() \\geq l_2.\\texttt{height}() \\[0.2cm]\n \\rightarrow & \\texttt{Node}(k_1,v_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_2,v_2,\\texttt{Node}(k_1,v_1,l_1,l_2),r_2\\bigr)\n \\end{array}\n $\n\n$\\begin{array}[t]{cl}\n & r_1.\\texttt{height}() = l_1.\\texttt{height}() + 2 \\ \n \\wedge & r_1 = \\texttt{Node}(k_2,v_2,l_2,r_2) \\\n \\wedge & r_2.\\texttt{height}() < l_2.\\texttt{height}() \\\n \\wedge & l_2 = \\texttt{Node}(k_3,v_3,l_3,r_3) \\\n \\rightarrow & \\texttt{Node}(k_1,v_1,l_1,r_1).\\texttt{restore}() = \n \\texttt{Node}\\bigl(k_3,v_3,\\texttt{Node}(k_1,v_1,l_1,l_3),\\texttt{Node}(k_2,v_2,r_3,r_2) \\bigr)\n \\end{array}\n $",
"def _restore(self):\n if abs(self.mLeft.mHeight - self.mRight.mHeight) <= 1:\n self._restoreHeight()\n return\n if self.mLeft.mHeight > self.mRight.mHeight:\n k1, v1, l1, r1 = self.mKey, self.mValue, self.mLeft, self.mRight\n k2, v2, l2, r2 = l1.mKey, l1.mValue, l1.mLeft, l1.mRight\n if l2.mHeight >= r2.mHeight:\n self._setValues(k2, v2, l2, createNode(k1, v1, r2, r1))\n else: \n k3, v3, l3, r3 = r2.mKey, r2.mValue, r2.mLeft, r2.mRight\n self._setValues(k3, v3, createNode(k2, v2, l2, l3),\n createNode(k1, v1, r3, r1))\n elif self.mRight.mHeight > self.mLeft.mHeight:\n k1, v1, l1, r1 = self.mKey, self.mValue, self.mLeft, self.mRight\n k2, v2, l2, r2 = r1.mKey, r1.mValue, r1.mLeft, r1.mRight\n if r2.mHeight >= l2.mHeight:\n self._setValues(k2, v2, createNode(k1, v1, l1, l2), r2)\n else:\n k3, v3, l3, r3 = l2.mKey, l2.mValue, l2.mLeft, l2.mRight\n self._setValues(k3, v3, createNode(k1, v1, l1, l3),\n createNode(k2, v2, r3, r2))\n self._restoreHeight()\n \nAVLTree._restore = _restore",
"The function $\\texttt{self}.\\texttt{_setValues}(k, v, l, r)$ overwrites the member variables of the node $\\texttt{self}$ with the given values.",
"def _setValues(self, k, v, l, r):\n self.mKey = k\n self.mValue = v\n self.mLeft = l\n self.mRight = r\n \nAVLTree._setValues = _setValues\n\ndef _restoreHeight(self):\n self.mHeight = max(self.mLeft.mHeight, self.mRight.mHeight) + 1\n \nAVLTree._restoreHeight = _restoreHeight",
"The function $\\texttt{createNode}(k, v, l, r)$ creates an AVL-tree of that has the pair $(k, v)$ stored at its root, left subtree $l$ and right subtree $r$.",
"def createNode(key, value, left, right):\n node = AVLTree()\n node.mKey = key\n node.mValue = value\n node.mLeft = left\n node.mRight = right\n node.mHeight = max(left.mHeight, right.mHeight) + 1\n return node\n\nimport graphviz as gv",
"Given an ordered binary tree, this function renders the tree graphically using graphviz.",
"def toDot(self):\n AVLTree.sNodeCount = 0 # this is a static variable of the class AVLTree\n dot = gv.Digraph(node_attr={'shape': 'record', 'style': 'rounded'})\n NodeDict = {}\n self._assignIDs(NodeDict)\n for n, t in NodeDict.items():\n if t.mValue != None:\n dot.node(str(n), label='{' + str(t.mKey) + '|' + str(t.mValue) + '}')\n elif t.mKey != None:\n dot.node(str(n), label=str(t.mKey))\n else:\n dot.node(str(n), label='', shape='point')\n for n, t in NodeDict.items():\n if not t.mLeft == None:\n dot.edge(str(n), str(t.mLeft.mID))\n if not t.mRight == None:\n dot.edge(str(n), str(t.mRight.mID))\n return dot\n\nAVLTree.toDot = toDot",
"This method assigns a unique identifier with each node. The dictionary NodeDict maps these identifiers to the nodes where they occur.",
"def _assignIDs(self, NodeDict):\n AVLTree.sNodeCount += 1\n self.mID = AVLTree.sNodeCount\n NodeDict[self.mID] = self\n if self.isEmpty():\n return\n self.mLeft ._assignIDs(NodeDict)\n self.mRight._assignIDs(NodeDict)\n \nAVLTree._assignIDs = _assignIDs",
"The function $\\texttt{demo}()$ creates a small ordered binary tree.",
"def demo():\n m = AVLTree()\n m.insert(\"anton\", 123)\n m.insert(\"hugo\" , 345)\n m.insert(\"gustav\", 789)\n m.insert(\"jens\", 234)\n m.insert(\"hubert\", 432)\n m.insert(\"andre\", 342)\n m.insert(\"philip\", 342)\n m.insert(\"rene\", 345)\n return m\n\nt = demo()\nprint(t.toDot())\n\nt.toDot()\n\nt.delete('gustav')\nt.toDot()\n\nt.delete('hubert')\nt.toDot()",
"Let's generate an ordered binary tree with random keys.",
"import random as rnd\n\nt = AVLTree()\nfor k in range(30):\n k = rnd.randrange(100)\n t.insert(k, None)\nt.toDot()",
"This tree looks more or less balanced. Lets us try to create a tree by inserting sorted numbers because that resulted in linear complexity for ordered binary trees.",
"t = AVLTree()\nfor k in range(30):\n t.insert(k, None)\nt.toDot()",
"Next, we compute the set of prime numbers $\\leq 100$. Mathematically, this set is given as follows:\n$$ \\bigl{2, \\cdots, 100 \\bigr} - \\bigl{ i \\cdot j \\bigm| i, j \\in {2, \\cdots, 100 }\\bigr}$$",
"S = AVLTree()\nfor k in range(2, 101):\n S.insert(k, None)\nfor i in range(2, 101):\n for j in range(2, 101):\n S.delete(i * j)\nS.toDot()",
"The function $t.\\texttt{maxKey}()$ returns the biggest key in the tree $t$. It is defined inductively:\n - $\\texttt{Nil}.\\texttt{maxKey}() = \\Omega$,\n - $\\texttt{Node}(k,v,l,\\texttt{Nil}).\\texttt{maxKey}() = k$,\n - $r \\not= \\texttt{Nil} \\rightarrow \\texttt{Node}(k,v,l,r).\\texttt{maxKey}() = r.\\texttt{maxKey}()$.",
"def maxKey(self):\n if self.isEmpty():\n return None\n if self.mRight.isEmpty():\n return self.mKey\n return self.mRight.maxKey()\n\nAVLTree.maxKey = maxKey",
"The function $\\texttt{leanTree}(h, k)$ computes an AVL tree of height $h$ that is as lean as possible.\nAll key in the tree will be integers that are bigger than $k$. The definition by induction:\n - $\\texttt{leanTree}(0, k) = \\texttt{Nil}$,\nbecause there is only one AVL tree of height $0$ and this is the tree $\\texttt{Nil}$.\n\n\n\n$\\texttt{leanTree}(1, k) = \\texttt{Node}(k+1,0,\\texttt{Nil}, \\texttt{Nil})$,\nsince, if we abstract from the actual keys and values, there is exactly one AVL tree of height $1$.\n - $\\texttt{leanTree}(h+1,k).\\texttt{maxKey}() = l \\rightarrow \n \\texttt{leanTree}(h+2,k) = \\texttt{Node}\\bigl(l+1,\\,0,\\,\\texttt{leanTree}(h+1,k),\\,\\texttt{leanTree}(h, l+1)\\bigr)\n$.",
"def leanTree(h, k):\n if h == 0: \n return AVLTree()\n if h == 1:\n return createNode(k + 1, None, AVLTree(), AVLTree())\n left = leanTree(h - 1, k)\n l = left.maxKey()\n return createNode(l + 1, None, left, leanTree(h - 2, l + 1))\n\nl = leanTree(6, 0)\nl.toDot()\n\nfor k in range(6):\n l = leanTree(k, 0)\n print(f'Height {k}:')\n display(l.toDot())"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
hpparvi/PyTransit
|
notebooks/osmodel_example_1.ipynb
|
gpl-2.0
|
[
"Oblate fast-rotating star model example",
"%pylab inline\n\nfrom pytransit import OblateStarModel, QuadraticModel\n\ntmo = OblateStarModel(sres=100, pres=8, rstar=1.65)\ntmc = QuadraticModel(interpolate=False)\n\ntimes = linspace(-0.35, 0.35, 500)\ntmo.set_data(times)\ntmc.set_data(times)",
"Comparison with the quadratic model\nThe model can be evaluated for a set of scalar parameters using the tm.evaluate_ps method. The oblate model takes, in addition to the basic orbital parameters, the stellar rotation period rperiod, pole temperature tpole, obliquity phi, gravity-darkening parameter beta, and azimuthal angle az.\nThe oblate model should be identical to the quadratic model if we set either the rotation period to a large value or the gravity-darkening parameter to zero.",
"k = array([0.1])\nt0, p, a, i, az, e, w = 0.0, 4.0, 4.5, 0.5*pi, 0.0, 0.0, 0.0\nrho, rperiod, tpole, phi, beta = 1.4, 0.25, 6500., -0.2*pi, 0.3\nldc = array([0.3, 0.1]) # Quadtratic limb darkening coefficients\n\nflux_qm = tmc.evaluate_ps(k, ldc, t0, p, a, i, e, w)\n\nrperiod = 10\nflux_om = tmo.evaluate_ps(k, rho, rperiod, tpole, phi, beta, ldc, t0, p, a, i, az, e, w)\n\nplot(flux_qm, lw=6, c='k')\nplot(flux_om, lw=2, c='w');",
"Changing obliquity",
"rperiod = 0.15\nb = 0.25\nfor phi in (-0.25*pi, 0.0, 0.25*pi, 0.5*pi):\n tmo.visualize(0.1, b, 0.0, rho, rperiod, tpole, phi, beta, ldc, ires=256)",
"Changing azimuth angle",
"rperiod = 0.15\nphi = 0.25\nb = 0.00\nfor az in (-0.25*pi, 0.0, 0.25*pi, 0.5*pi):\n tmo.visualize(0.1, b, az, rho, rperiod, tpole, phi, beta, ldc, ires=256)",
"<center>©2020 Hannu Parviainen</center>"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ktmud/deep-learning
|
tv-script-generation/dlnd_tv_script_generation.ipynb
|
mit
|
[
"TV Script Generation\nIn this project, you'll generate your own Simpsons TV scripts using RNNs. You'll be using part of the Simpsons dataset of scripts from 27 seasons. The Neural Network you'll build will generate a new TV script for a scene at Moe's Tavern.\nGet the Data\nThe data is already provided for you. You'll be using a subset of the original dataset. It consists of only the scenes in Moe's Tavern. This doesn't include other versions of the tavern, like \"Moe's Cavern\", \"Flaming Moe's\", \"Uncle Moe's Family Feed-Bag\", etc..",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\n\ndata_dir = './data/simpsons/moes_tavern_lines.txt'\ntext = helper.load_data(data_dir)\n# Ignore notice, since we don't use it for analysing the data\ntext = text[81:]",
"Explore the Data\nPlay around with view_sentence_range to view different parts of the data.",
"view_sentence_range = (0, 10)\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport numpy as np\n\nprint('Dataset Stats')\nprint('Roughly the number of unique words: {}'.format(len({word: None for word in text.split()})))\nscenes = text.split('\\n\\n')\nprint('Number of scenes: {}'.format(len(scenes)))\nsentence_count_scene = [scene.count('\\n') for scene in scenes]\nprint('Average number of sentences in each scene: {}'.format(np.average(sentence_count_scene)))\n\nsentences = [sentence for scene in scenes for sentence in scene.split('\\n')]\nprint('Number of lines: {}'.format(len(sentences)))\nword_count_sentence = [len(sentence.split()) for sentence in sentences]\nprint('Average number of words in each line: {}'.format(np.average(word_count_sentence)))\n\nprint()\nprint('The sentences {} to {}:'.format(*view_sentence_range))\nprint('\\n'.join(text.split('\\n')[view_sentence_range[0]:view_sentence_range[1]]))",
"Implement Preprocessing Functions\nThe first thing to do to any dataset is preprocessing. Implement the following preprocessing functions below:\n- Lookup Table\n- Tokenize Punctuation\nLookup Table\nTo create a word embedding, you first need to transform the words to ids. In this function, create two dictionaries:\n- Dictionary to go from the words to an id, we'll call vocab_to_int\n- Dictionary to go from the id to word, we'll call int_to_vocab\nReturn these dictionaries in the following tuple (vocab_to_int, int_to_vocab)",
"import numpy as np\nimport problem_unittests as tests\n\ndef create_lookup_tables(text):\n \"\"\"\n Create lookup tables for vocabulary\n :param text: The text of tv scripts split into words\n :return: A tuple of dicts (vocab_to_int, int_to_vocab)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_create_lookup_tables(create_lookup_tables)",
"Tokenize Punctuation\nWe'll be splitting the script into a word array using spaces as delimiters. However, punctuations like periods and exclamation marks make it hard for the neural network to distinguish between the word \"bye\" and \"bye!\".\nImplement the function token_lookup to return a dict that will be used to tokenize symbols like \"!\" into \"||Exclamation_Mark||\". Create a dictionary for the following symbols where the symbol is the key and value is the token:\n- Period ( . )\n- Comma ( , )\n- Quotation Mark ( \" )\n- Semicolon ( ; )\n- Exclamation mark ( ! )\n- Question mark ( ? )\n- Left Parentheses ( ( )\n- Right Parentheses ( ) )\n- Dash ( -- )\n- Return ( \\n )\nThis dictionary will be used to token the symbols and add the delimiter (space) around it. This separates the symbols as it's own word, making it easier for the neural network to predict on the next word. Make sure you don't use a token that could be confused as a word. Instead of using the token \"dash\", try using something like \"||dash||\".",
"def token_lookup():\n \"\"\"\n Generate a dict to turn punctuation into a token.\n :return: Tokenize dictionary where the key is the punctuation and the value is the token\n \"\"\"\n # TODO: Implement Function\n return None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_tokenize(token_lookup)",
"Preprocess all the data and save it\nRunning the code cell below will preprocess all the data and save it to file.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Preprocess Training, Validation, and Testing Data\nhelper.preprocess_and_save_data(data_dir, token_lookup, create_lookup_tables)",
"Check Point\nThis is your first checkpoint. If you ever decide to come back to this notebook or have to restart the notebook, you can start from here. The preprocessed data has been saved to disk.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport helper\nimport numpy as np\nimport problem_unittests as tests\n\nint_text, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()",
"Build the Neural Network\nYou'll build the components necessary to build a RNN by implementing the following functions below:\n- get_inputs\n- get_init_cell\n- get_embed\n- build_rnn\n- build_nn\n- get_batches\nCheck the Version of TensorFlow and Access to GPU",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom distutils.version import LooseVersion\nimport warnings\nimport tensorflow as tf\n\n# Check TensorFlow Version\nassert LooseVersion(tf.__version__) >= LooseVersion('1.3'), 'Please use TensorFlow version 1.3 or newer'\nprint('TensorFlow Version: {}'.format(tf.__version__))\n\n# Check for a GPU\nif not tf.test.gpu_device_name():\n warnings.warn('No GPU found. Please use a GPU to train your neural network.')\nelse:\n print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))",
"Input\nImplement the get_inputs() function to create TF Placeholders for the Neural Network. It should create the following placeholders:\n- Input text placeholder named \"input\" using the TF Placeholder name parameter.\n- Targets placeholder\n- Learning Rate placeholder\nReturn the placeholders in the following tuple (Input, Targets, LearningRate)",
"def get_inputs():\n \"\"\"\n Create TF Placeholders for input, targets, and learning rate.\n :return: Tuple (input, targets, learning rate)\n \"\"\"\n # TODO: Implement Function\n return None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_inputs(get_inputs)",
"Build RNN Cell and Initialize\nStack one or more BasicLSTMCells in a MultiRNNCell.\n- The Rnn size should be set using rnn_size\n- Initalize Cell State using the MultiRNNCell's zero_state() function\n - Apply the name \"initial_state\" to the initial state using tf.identity()\nReturn the cell and initial state in the following tuple (Cell, InitialState)",
"def get_init_cell(batch_size, rnn_size):\n \"\"\"\n Create an RNN Cell and initialize it.\n :param batch_size: Size of batches\n :param rnn_size: Size of RNNs\n :return: Tuple (cell, initialize state)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_init_cell(get_init_cell)",
"Word Embedding\nApply embedding to input_data using TensorFlow. Return the embedded sequence.",
"def get_embed(input_data, vocab_size, embed_dim):\n \"\"\"\n Create embedding for <input_data>.\n :param input_data: TF placeholder for text input.\n :param vocab_size: Number of words in vocabulary.\n :param embed_dim: Number of embedding dimensions\n :return: Embedded input.\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_embed(get_embed)",
"Build RNN\nYou created a RNN Cell in the get_init_cell() function. Time to use the cell to create a RNN.\n- Build the RNN using the tf.nn.dynamic_rnn()\n - Apply the name \"final_state\" to the final state using tf.identity()\nReturn the outputs and final_state state in the following tuple (Outputs, FinalState)",
"def build_rnn(cell, inputs):\n \"\"\"\n Create a RNN using a RNN Cell\n :param cell: RNN Cell\n :param inputs: Input text data\n :return: Tuple (Outputs, Final State)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_rnn(build_rnn)",
"Build the Neural Network\nApply the functions you implemented above to:\n- Apply embedding to input_data using your get_embed(input_data, vocab_size, embed_dim) function.\n- Build RNN using cell and your build_rnn(cell, inputs) function.\n- Apply a fully connected layer with a linear activation and vocab_size as the number of outputs.\nReturn the logits and final state in the following tuple (Logits, FinalState)",
"def build_nn(cell, rnn_size, input_data, vocab_size, embed_dim):\n \"\"\"\n Build part of the neural network\n :param cell: RNN cell\n :param rnn_size: Size of rnns\n :param input_data: Input data\n :param vocab_size: Vocabulary size\n :param embed_dim: Number of embedding dimensions\n :return: Tuple (Logits, FinalState)\n \"\"\"\n # TODO: Implement Function\n return None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_build_nn(build_nn)",
"Batches\nImplement get_batches to create batches of input and targets using int_text. The batches should be a Numpy array with the shape (number of batches, 2, batch size, sequence length). Each batch contains two elements:\n- The first element is a single batch of input with the shape [batch size, sequence length]\n- The second element is a single batch of targets with the shape [batch size, sequence length]\nIf you can't fill the last batch with enough data, drop the last batch.\nFor example, get_batches([1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20], 3, 2) would return a Numpy array of the following:\n```\n[\n # First Batch\n [\n # Batch of Input\n [[ 1 2], [ 7 8], [13 14]]\n # Batch of targets\n [[ 2 3], [ 8 9], [14 15]]\n ]\n# Second Batch\n [\n # Batch of Input\n [[ 3 4], [ 9 10], [15 16]]\n # Batch of targets\n [[ 4 5], [10 11], [16 17]]\n ]\n# Third Batch\n [\n # Batch of Input\n [[ 5 6], [11 12], [17 18]]\n # Batch of targets\n [[ 6 7], [12 13], [18 1]]\n ]\n]\n```\nNotice that the last target value in the last batch is the first input value of the first batch. In this case, 1. This is a common technique used when creating sequence batches, although it is rather unintuitive.",
"def get_batches(int_text, batch_size, seq_length):\n \"\"\"\n Return batches of input and target\n :param int_text: Text with the words replaced by their ids\n :param batch_size: The size of batch\n :param seq_length: The length of sequence\n :return: Batches as a Numpy array\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_batches(get_batches)",
"Neural Network Training\nHyperparameters\nTune the following parameters:\n\nSet num_epochs to the number of epochs.\nSet batch_size to the batch size.\nSet rnn_size to the size of the RNNs.\nSet embed_dim to the size of the embedding.\nSet seq_length to the length of sequence.\nSet learning_rate to the learning rate.\nSet show_every_n_batches to the number of batches the neural network should print progress.",
"# Number of Epochs\nnum_epochs = None\n# Batch Size\nbatch_size = None\n# RNN Size\nrnn_size = None\n# Embedding Dimension Size\nembed_dim = None\n# Sequence Length\nseq_length = None\n# Learning Rate\nlearning_rate = None\n# Show stats for every n number of batches\nshow_every_n_batches = None\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nsave_dir = './save'",
"Build the Graph\nBuild the graph using the neural network you implemented.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nfrom tensorflow.contrib import seq2seq\n\ntrain_graph = tf.Graph()\nwith train_graph.as_default():\n vocab_size = len(int_to_vocab)\n input_text, targets, lr = get_inputs()\n input_data_shape = tf.shape(input_text)\n cell, initial_state = get_init_cell(input_data_shape[0], rnn_size)\n logits, final_state = build_nn(cell, rnn_size, input_text, vocab_size, embed_dim)\n\n # Probabilities for generating words\n probs = tf.nn.softmax(logits, name='probs')\n\n # Loss function\n cost = seq2seq.sequence_loss(\n logits,\n targets,\n tf.ones([input_data_shape[0], input_data_shape[1]]))\n\n # Optimizer\n optimizer = tf.train.AdamOptimizer(lr)\n\n # Gradient Clipping\n gradients = optimizer.compute_gradients(cost)\n capped_gradients = [(tf.clip_by_value(grad, -1., 1.), var) for grad, var in gradients if grad is not None]\n train_op = optimizer.apply_gradients(capped_gradients)",
"Train\nTrain the neural network on the preprocessed data. If you have a hard time getting a good loss, check the forums to see if anyone is having the same problem.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nbatches = get_batches(int_text, batch_size, seq_length)\n\nwith tf.Session(graph=train_graph) as sess:\n sess.run(tf.global_variables_initializer())\n\n for epoch_i in range(num_epochs):\n state = sess.run(initial_state, {input_text: batches[0][0]})\n\n for batch_i, (x, y) in enumerate(batches):\n feed = {\n input_text: x,\n targets: y,\n initial_state: state,\n lr: learning_rate}\n train_loss, state, _ = sess.run([cost, final_state, train_op], feed)\n\n # Show every <show_every_n_batches> batches\n if (epoch_i * len(batches) + batch_i) % show_every_n_batches == 0:\n print('Epoch {:>3} Batch {:>4}/{} train_loss = {:.3f}'.format(\n epoch_i,\n batch_i,\n len(batches),\n train_loss))\n\n # Save Model\n saver = tf.train.Saver()\n saver.save(sess, save_dir)\n print('Model Trained and Saved')",
"Save Parameters\nSave seq_length and save_dir for generating a new TV script.",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\n# Save parameters for checkpoint\nhelper.save_params((seq_length, save_dir))",
"Checkpoint",
"\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL\n\"\"\"\nimport tensorflow as tf\nimport numpy as np\nimport helper\nimport problem_unittests as tests\n\n_, vocab_to_int, int_to_vocab, token_dict = helper.load_preprocess()\nseq_length, load_dir = helper.load_params()",
"Implement Generate Functions\nGet Tensors\nGet tensors from loaded_graph using the function get_tensor_by_name(). Get the tensors using the following names:\n- \"input:0\"\n- \"initial_state:0\"\n- \"final_state:0\"\n- \"probs:0\"\nReturn the tensors in the following tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)",
"def get_tensors(loaded_graph):\n \"\"\"\n Get input, initial state, final state, and probabilities tensor from <loaded_graph>\n :param loaded_graph: TensorFlow graph loaded from file\n :return: Tuple (InputTensor, InitialStateTensor, FinalStateTensor, ProbsTensor)\n \"\"\"\n # TODO: Implement Function\n return None, None, None, None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_get_tensors(get_tensors)",
"Choose Word\nImplement the pick_word() function to select the next word using probabilities.",
"def pick_word(probabilities, int_to_vocab):\n \"\"\"\n Pick the next word in the generated text\n :param probabilities: Probabilites of the next word\n :param int_to_vocab: Dictionary of word ids as the keys and words as the values\n :return: String of the predicted word\n \"\"\"\n # TODO: Implement Function\n return None\n\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\ntests.test_pick_word(pick_word)",
"Generate TV Script\nThis will generate the TV script for you. Set gen_length to the length of TV script you want to generate.",
"gen_length = 200\n# homer_simpson, moe_szyslak, or Barney_Gumble\nprime_word = 'moe_szyslak'\n\n\"\"\"\nDON'T MODIFY ANYTHING IN THIS CELL THAT IS BELOW THIS LINE\n\"\"\"\nloaded_graph = tf.Graph()\nwith tf.Session(graph=loaded_graph) as sess:\n # Load saved model\n loader = tf.train.import_meta_graph(load_dir + '.meta')\n loader.restore(sess, load_dir)\n\n # Get Tensors from loaded model\n input_text, initial_state, final_state, probs = get_tensors(loaded_graph)\n\n # Sentences generation setup\n gen_sentences = [prime_word + ':']\n prev_state = sess.run(initial_state, {input_text: np.array([[1]])})\n\n # Generate sentences\n for n in range(gen_length):\n # Dynamic Input\n dyn_input = [[vocab_to_int[word] for word in gen_sentences[-seq_length:]]]\n dyn_seq_length = len(dyn_input[0])\n\n # Get Prediction\n probabilities, prev_state = sess.run(\n [probs, final_state],\n {input_text: dyn_input, initial_state: prev_state})\n \n pred_word = pick_word(probabilities[0][dyn_seq_length-1], int_to_vocab)\n\n gen_sentences.append(pred_word)\n \n # Remove tokens\n tv_script = ' '.join(gen_sentences)\n for key, token in token_dict.items():\n ending = ' ' if key in ['\\n', '(', '\"'] else ''\n tv_script = tv_script.replace(' ' + token.lower(), key)\n tv_script = tv_script.replace('\\n ', '\\n')\n tv_script = tv_script.replace('( ', '(')\n \n print(tv_script)",
"The TV Script is Nonsensical\nIt's ok if the TV script doesn't make any sense. We trained on less than a megabyte of text. In order to get good results, you'll have to use a smaller vocabulary or get more data. Luckily there's more data! As we mentioned in the beggining of this project, this is a subset of another dataset. We didn't have you train on all the data, because that would take too long. However, you are free to train your neural network on all the data. After you complete the project, of course.\nSubmitting This Project\nWhen submitting this project, make sure to run all the cells before saving the notebook. Save the notebook file as \"dlnd_tv_script_generation.ipynb\" and save it as a HTML file under \"File\" -> \"Download as\". Include the \"helper.py\" and \"problem_unittests.py\" files in your submission."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
azjps/usau-py
|
notebooks/2016-D-I_College_Nationals_Data_Quality.ipynb
|
mit
|
[
"Stats Quality for 2016 D-I College Nationals\nAs one of the biggest tournaments hosted by USAU, the D-I College Nationals is one of the few tournaments where player statistics are relatively reliably tracked. For each tournament game, each player's aggregate scores, assists, Ds, and turns are counted, although its quite possible the definition of a \"D\" or a \"Turn\" could differ across stat-keepers.\nData below was scraped from the USAU website. First we'll set up some imports to be able to load this data.",
"import usau.reports\nimport usau.fantasy\n\nfrom IPython.display import display, HTML\nimport pandas as pd\npd.options.display.width = 200\npd.options.display.max_colwidth = 200\npd.options.display.max_columns = 200\n\ndef display_url_column(df):\n \"\"\"Helper for formatting url links\"\"\"\n df.url = df.url.apply(lambda url: \"<a href='{base}{url}'>Match Report Link</a>\"\n .format(base=usau.reports.USAUResults.BASE_URL, url=url))\n display(HTML(df.to_html(escape=False)))",
"Since we should already have the data downloaded as csv files in this repository, we will not need to re-scrape the data. Omit this cell to directly download from the USAU website (may be slow).",
"# Read data from csv files\nusau.reports.d1_college_nats_men_2016.load_from_csvs()\nusau.reports.d1_college_nats_women_2016.load_from_csvs()",
"Let's take a look at the games for which the sum of the player goals/assists is less than the final score of the game:",
"display_url_column(pd.concat([usau.reports.d1_college_nats_men_2016.missing_tallies,\n usau.reports.d1_college_nats_women_2016.missing_tallies])\n [[\"Score\", \"Gs\", \"As\", \"Ds\", \"Ts\", \"Team\", \"Opponent\", \"url\"]])",
"All in all, not too bad! A few of the women's consolation games are missing player statistics, and there are several other games for which a couple of goals or assists were missed. For missing assists, it is technically possible that there were one or more callahans scored in those game, but obviously that's not the case with all ~14 missing assists. Surprisingly, there were 10 more assists recorded by the statkeepers than goals; I would have guessed that assists would be harder to keep track. \nTurns and Ds are the other stats available. In past tournaments these haven't been tracked very well, but actually there was only one game where no Turns or Ds were recorded:",
"men_matches = usau.reports.d1_college_nats_men_2016.match_results\nwomen_matches = usau.reports.d1_college_nats_women_2016.match_results\ndisplay_url_column(pd.concat([men_matches[(men_matches.Ts == 0) & (men_matches.Gs > 0)],\n women_matches[(women_matches.Ts == 0) & (women_matches.Gs > 0)]])\n [[\"Score\", \"Gs\", \"As\", \"Ds\", \"Ts\", \"Team\", \"Opponent\", \"url\"]])",
"This implies that there was a pretty good effort made to keep up with counting turns and Ds. By contrast, see how many teams did not keep track of Ds and turns last year (2015)!",
"# Read last year's data from csv files\nusau.reports.d1_college_nats_men_2015.load_from_csvs()\nusau.reports.d1_college_nats_women_2015.load_from_csvs()\ndisplay_url_column(pd.concat([usau.reports.d1_college_nats_men_2015.missing_tallies,\n usau.reports.d1_college_nats_women_2015.missing_tallies])\n [[\"Score\", \"Gs\", \"As\", \"Ds\", \"Ts\", \"Team\", \"Opponent\", \"url\"]])"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
ES-DOC/esdoc-jupyterhub
|
notebooks/ec-earth-consortium/cmip6/models/ec-earth3-cc/ocean.ipynb
|
gpl-3.0
|
[
"ES-DOC CMIP6 Model Properties - Ocean\nMIP Era: CMIP6\nInstitute: EC-EARTH-CONSORTIUM\nSource ID: EC-EARTH3-CC\nTopic: Ocean\nSub-Topics: Timestepping Framework, Advection, Lateral Physics, Vertical Physics, Uplow Boundaries, Boundary Forcing. \nProperties: 133 (101 required)\nModel descriptions: Model description details\nInitialized From: -- \nNotebook Help: Goto notebook help page\nNotebook Initialised: 2018-02-15 16:53:59\nDocument Setup\nIMPORTANT: to be executed each time you run the notebook",
"# DO NOT EDIT ! \nfrom pyesdoc.ipython.model_topic import NotebookOutput \n\n# DO NOT EDIT ! \nDOC = NotebookOutput('cmip6', 'ec-earth-consortium', 'ec-earth3-cc', 'ocean')",
"Document Authors\nSet document authors",
"# Set as follows: DOC.set_author(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Contributors\nSpecify document contributors",
"# Set as follows: DOC.set_contributor(\"name\", \"email\") \n# TODO - please enter value(s)",
"Document Publication\nSpecify document publication status",
"# Set publication status: \n# 0=do not publish, 1=publish. \nDOC.set_publication_status(0)",
"Document Table of Contents\n1. Key Properties\n2. Key Properties --> Seawater Properties\n3. Key Properties --> Bathymetry\n4. Key Properties --> Nonoceanic Waters\n5. Key Properties --> Software Properties\n6. Key Properties --> Resolution\n7. Key Properties --> Tuning Applied\n8. Key Properties --> Conservation\n9. Grid\n10. Grid --> Discretisation --> Vertical\n11. Grid --> Discretisation --> Horizontal\n12. Timestepping Framework\n13. Timestepping Framework --> Tracers\n14. Timestepping Framework --> Baroclinic Dynamics\n15. Timestepping Framework --> Barotropic\n16. Timestepping Framework --> Vertical Physics\n17. Advection\n18. Advection --> Momentum\n19. Advection --> Lateral Tracers\n20. Advection --> Vertical Tracers\n21. Lateral Physics\n22. Lateral Physics --> Momentum --> Operator\n23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\n24. Lateral Physics --> Tracers\n25. Lateral Physics --> Tracers --> Operator\n26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\n27. Lateral Physics --> Tracers --> Eddy Induced Velocity\n28. Vertical Physics\n29. Vertical Physics --> Boundary Layer Mixing --> Details\n30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n32. Vertical Physics --> Interior Mixing --> Details\n33. Vertical Physics --> Interior Mixing --> Tracers\n34. Vertical Physics --> Interior Mixing --> Momentum\n35. Uplow Boundaries --> Free Surface\n36. Uplow Boundaries --> Bottom Boundary Layer\n37. Boundary Forcing\n38. Boundary Forcing --> Momentum --> Bottom Friction\n39. Boundary Forcing --> Momentum --> Lateral Friction\n40. Boundary Forcing --> Tracers --> Sunlight Penetration\n41. Boundary Forcing --> Tracers --> Fresh Water Forcing \n1. Key Properties\nOcean key properties\n1.1. Model Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.2. Model Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean model code (NEMO 3.6, MOM 5.0,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"1.3. Model Family\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of ocean model.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.model_family') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"OGCM\" \n# \"slab ocean\" \n# \"mixed layer ocean\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.4. Basic Approximations\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nBasic approximations made in the ocean.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.basic_approximations') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Primitive equations\" \n# \"Non-hydrostatic\" \n# \"Boussinesq\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"1.5. Prognostic Variables\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nList of prognostic variables in the ocean component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.prognostic_variables') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# \"Salinity\" \n# \"U-velocity\" \n# \"V-velocity\" \n# \"W-velocity\" \n# \"SSH\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2. Key Properties --> Seawater Properties\nPhysical properties of seawater in ocean\n2.1. Eos Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Wright, 1997\" \n# \"Mc Dougall et al.\" \n# \"Jackett et al. 2006\" \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.2. Eos Functional Temp\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTemperature used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_temp') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Potential temperature\" \n# \"Conservative temperature\" \n# TODO - please enter value(s)\n",
"2.3. Eos Functional Salt\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nSalinity used in EOS for sea water",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_salt') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Practical salinity Sp\" \n# \"Absolute salinity Sa\" \n# TODO - please enter value(s)\n",
"2.4. Eos Functional Depth\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDepth or pressure used in EOS for sea water ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.eos_functional_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Pressure (dbars)\" \n# \"Depth (meters)\" \n# TODO - please enter value(s)\n",
"2.5. Ocean Freezing Point\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nEquation used to compute the freezing point (in deg C) of seawater, as a function of salinity and pressure",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_freezing_point') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"TEOS 2010\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"2.6. Ocean Specific Heat\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nSpecific heat in ocean (cpocean) in J/(kg K)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_specific_heat') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"2.7. Ocean Reference Density\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nBoussinesq reference density (rhozero) in kg / m3",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.seawater_properties.ocean_reference_density') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"3. Key Properties --> Bathymetry\nProperties of bathymetry in ocean\n3.1. Reference Dates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nReference date of bathymetry",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.reference_dates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Present day\" \n# \"21000 years BP\" \n# \"6000 years BP\" \n# \"LGM\" \n# \"Pliocene\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"3.2. Type\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the bathymetry fixed in time in the ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"3.3. Ocean Smoothing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any smoothing or hand editing of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.ocean_smoothing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"3.4. Source\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe source of bathymetry in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.bathymetry.source') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4. Key Properties --> Nonoceanic Waters\nNon oceanic waters treatement in ocean\n4.1. Isolated Seas\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how isolated seas is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.isolated_seas') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"4.2. River Mouth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe if/how river mouth mixing or estuaries specific treatment is performed",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.nonoceanic_waters.river_mouth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5. Key Properties --> Software Properties\nSoftware properties of ocean code\n5.1. Repository\nIs Required: FALSE Type: STRING Cardinality: 0.1\nLocation of code for this component.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.repository') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.2. Code Version\nIs Required: FALSE Type: STRING Cardinality: 0.1\nCode version identifier.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_version') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"5.3. Code Languages\nIs Required: FALSE Type: STRING Cardinality: 0.N\nCode language(s).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.software_properties.code_languages') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6. Key Properties --> Resolution\nResolution in the ocean grid\n6.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nThis is a string usually used by the modelling group to describe the resolution of this grid, e.g. ORCA025, N512L180, T512L70 etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.2. Canonical Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nExpression quoted for gross comparisons of resolution, eg. 50km or 0.1 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.canonical_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.3. Range Horizontal Resolution\nIs Required: TRUE Type: STRING Cardinality: 1.1\nRange of horizontal resolution with spatial details, eg. 50(Equator)-100km or 0.1-0.5 degrees etc.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.range_horizontal_resolution') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"6.4. Number Of Horizontal Gridpoints\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTotal number of horizontal (XY) points (or degrees of freedom) on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_horizontal_gridpoints') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.5. Number Of Vertical Levels\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nNumber of vertical levels resolved on computational grid.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.number_of_vertical_levels') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"6.6. Is Adaptive Grid\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nDefault is False. Set true if grid resolution changes during execution.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.is_adaptive_grid') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"6.7. Thickness Level 1\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nThickness of first surface ocean level (in meters)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.resolution.thickness_level_1') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"7. Key Properties --> Tuning Applied\nTuning methodology for ocean component\n7.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nGeneral overview description of tuning: explain and motivate the main targets and metrics retained. &Document the relative weight given to climate performance metrics versus process oriented metrics, &and on the possible conflicts with parameterization level tuning. In particular describe any struggle &with a parameter value that required pushing it to its limits to solve a particular model deficiency.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.2. Global Mean Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList set of metrics of the global mean state used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.global_mean_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.3. Regional Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList of regional metrics of mean state (e.g THC, AABW, regional means etc) used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.regional_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"7.4. Trend Metrics Used\nIs Required: FALSE Type: STRING Cardinality: 0.N\nList observed trend metrics used in tuning model/component",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.tuning_applied.trend_metrics_used') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8. Key Properties --> Conservation\nConservation in the ocean component\n8.1. Description\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBrief description of conservation methodology",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.description') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.N\nProperties conserved in the ocean by the numerical schemes",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.scheme') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Energy\" \n# \"Enstrophy\" \n# \"Salt\" \n# \"Volume of ocean\" \n# \"Momentum\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"8.3. Consistency Properties\nIs Required: FALSE Type: STRING Cardinality: 0.1\nAny additional consistency properties (energy conversion, pressure gradient discretisation, ...)?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.consistency_properties') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.4. Corrected Conserved Prognostic Variables\nIs Required: FALSE Type: STRING Cardinality: 0.1\nSet of variables which are conserved by more than the numerical scheme alone.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.corrected_conserved_prognostic_variables') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"8.5. Was Flux Correction Used\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nDoes conservation involve flux correction ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.key_properties.conservation.was_flux_correction_used') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"9. Grid\nOcean grid\n9.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of grid in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"10. Grid --> Discretisation --> Vertical\nProperties of vertical discretisation in ocean\n10.1. Coordinates\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical coordinates in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.coordinates') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Z-coordinate\" \n# \"Z*-coordinate\" \n# \"S-coordinate\" \n# \"Isopycnic - sigma 0\" \n# \"Isopycnic - sigma 2\" \n# \"Isopycnic - sigma 4\" \n# \"Isopycnic - other\" \n# \"Hybrid / Z+S\" \n# \"Hybrid / Z+isopycnic\" \n# \"Hybrid / other\" \n# \"Pressure referenced (P)\" \n# \"P*\" \n# \"Z**\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"10.2. Partial Steps\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nUsing partial steps with Z or Z vertical coordinate in ocean ?*",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.vertical.partial_steps') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"11. Grid --> Discretisation --> Horizontal\nType of horizontal discretisation scheme in ocean\n11.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal grid type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Lat-lon\" \n# \"Rotated north pole\" \n# \"Two north poles (ORCA-style)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.2. Staggering\nIs Required: FALSE Type: ENUM Cardinality: 0.1\nHorizontal grid staggering type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.staggering') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Arakawa B-grid\" \n# \"Arakawa C-grid\" \n# \"Arakawa E-grid\" \n# \"N/a\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"11.3. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nHorizontal discretisation scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.grid.discretisation.horizontal.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Finite difference\" \n# \"Finite volumes\" \n# \"Finite elements\" \n# \"Unstructured grid\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"12. Timestepping Framework\nOcean Timestepping Framework\n12.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"12.2. Diurnal Cycle\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiurnal cycle type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.diurnal_cycle') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Via coupling\" \n# \"Specific treatment\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13. Timestepping Framework --> Tracers\nProperties of tracers time stepping in ocean\n13.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTracers time stepping scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"13.2. Time Step\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nTracers time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.tracers.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"14. Timestepping Framework --> Baroclinic Dynamics\nBaroclinic dynamics in ocean\n14.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics type",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Preconditioned conjugate gradient\" \n# \"Sub cyling\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nBaroclinic dynamics scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Leap-frog + Asselin filter\" \n# \"Leap-frog + Periodic Euler\" \n# \"Predictor-corrector\" \n# \"Runge-Kutta 2\" \n# \"AM3-LF\" \n# \"Forward-backward\" \n# \"Forward operator\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"14.3. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBaroclinic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.baroclinic_dynamics.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"15. Timestepping Framework --> Barotropic\nBarotropic time stepping in ocean\n15.1. Splitting\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nTime splitting method",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.splitting') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"split explicit\" \n# \"implicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"15.2. Time Step\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nBarotropic time step (in seconds)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.barotropic.time_step') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"16. Timestepping Framework --> Vertical Physics\nVertical physics time stepping in ocean\n16.1. Method\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDetails of vertical time stepping in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.timestepping_framework.vertical_physics.method') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"17. Advection\nOcean advection\n17.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of advection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18. Advection --> Momentum\nProperties of lateral momemtum advection scheme in ocean\n18.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of lateral momemtum advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Flux form\" \n# \"Vector form\" \n# TODO - please enter value(s)\n",
"18.2. Scheme Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nName of ocean momemtum advection scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.scheme_name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"18.3. ALE\nIs Required: FALSE Type: BOOLEAN Cardinality: 0.1\nUsing ALE for vertical advection ? (if vertical coordinates are sigma)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.momentum.ALE') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19. Advection --> Lateral Tracers\nProperties of lateral tracer advection scheme in ocean\n19.1. Order\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nOrder of lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for lateral tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"19.3. Effective Order\nIs Required: TRUE Type: FLOAT Cardinality: 1.1\nEffective order of limited lateral tracer advection scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.effective_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"19.4. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for lateral tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"19.5. Passive Tracers\nIs Required: FALSE Type: ENUM Cardinality: 0.N\nPassive tracers advected",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers') \n\n# PROPERTY VALUE(S): \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Ideal age\" \n# \"CFC 11\" \n# \"CFC 12\" \n# \"SF6\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"19.6. Passive Tracers Advection\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIs advection of passive tracers different than active ? if so, describe.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.lateral_tracers.passive_tracers_advection') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20. Advection --> Vertical Tracers\nProperties of vertical tracer advection scheme in ocean\n20.1. Name\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescriptive text for vertical tracer advection scheme in ocean (e.g. MUSCL, PPM-H5, PRATHER,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.name') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"20.2. Flux Limiter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nMonotonic flux limiter for vertical tracer advection scheme in ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.advection.vertical_tracers.flux_limiter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"21. Lateral Physics\nOcean lateral physics\n21.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of lateral physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"21.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of transient eddy representation in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Eddy active\" \n# \"Eddy admitting\" \n# TODO - please enter value(s)\n",
"22. Lateral Physics --> Momentum --> Operator\nProperties of lateral physics operator for momentum in ocean\n22.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"22.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics momemtum scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23. Lateral Physics --> Momentum --> Eddy Viscosity Coeff\nProperties of eddy viscosity coeff in lateral physics momemtum scheme in the ocean\n23.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics momemtum eddy viscosity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"23.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy viscosity coeff in lateral physics momemtum scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"23.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy viscosity coeff in lateral physics momemtum scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.4. Coeff Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe background eddy viscosity coeff in lateral physics momemtum scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"23.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy viscosity coeff in lateral physics momemtum scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.momentum.eddy_viscosity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24. Lateral Physics --> Tracers\nProperties of lateral physics for tracers in ocean\n24.1. Mesoscale Closure\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a mesoscale closure in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.mesoscale_closure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"24.2. Submesoscale Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there a submesoscale mixing parameterisation (i.e Fox-Kemper) in the lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.submesoscale_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"25. Lateral Physics --> Tracers --> Operator\nProperties of lateral physics operator for tracers in ocean\n25.1. Direction\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDirection of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.direction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Horizontal\" \n# \"Isopycnal\" \n# \"Isoneutral\" \n# \"Geopotential\" \n# \"Iso-level\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.2. Order\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nOrder of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Harmonic\" \n# \"Bi-harmonic\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"25.3. Discretisation\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nDiscretisation of lateral physics tracers scheme in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.operator.discretisation') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Second order\" \n# \"Higher order\" \n# \"Flux limiter\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26. Lateral Physics --> Tracers --> Eddy Diffusity Coeff\nProperties of eddy diffusity coeff in lateral physics tracers scheme in the ocean\n26.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nLateral physics tracers eddy diffusity coeff type in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant\" \n# \"Space varying\" \n# \"Time + space varying (Smagorinsky)\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"26.2. Constant Coefficient\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant, value of eddy diffusity coeff in lateral physics tracers scheme (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.constant_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.3. Variable Coefficient\nIs Required: FALSE Type: STRING Cardinality: 0.1\nIf space-varying, describe variations of eddy diffusity coeff in lateral physics tracers scheme",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.variable_coefficient') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"26.4. Coeff Background\nIs Required: TRUE Type: INTEGER Cardinality: 1.1\nDescribe background eddy diffusity coeff in lateral physics tracers scheme (give values in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"26.5. Coeff Backscatter\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there backscatter in eddy diffusity coeff in lateral physics tracers scheme ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_diffusity_coeff.coeff_backscatter') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"27. Lateral Physics --> Tracers --> Eddy Induced Velocity\nProperties of eddy induced velocity (EIV) in lateral physics tracers scheme in the ocean\n27.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of EIV in lateral physics tracers in the ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"GM\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"27.2. Constant Val\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf EIV scheme for tracers is constant, specify coefficient value (M2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.constant_val') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"27.3. Flux Type\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV flux (advective or skew)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.flux_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"27.4. Added Diffusivity\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of EIV added diffusivity (constant, flow dependent or none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.lateral_physics.tracers.eddy_induced_velocity.added_diffusivity') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"28. Vertical Physics\nOcean Vertical Physics\n28.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of vertical physics in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"29. Vertical Physics --> Boundary Layer Mixing --> Details\nProperties of vertical physics in ocean\n29.1. Langmuir Cells Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there Langmuir cells mixing in upper ocean ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.details.langmuir_cells_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"30. Vertical Physics --> Boundary Layer Mixing --> Tracers\n*Properties of boundary layer (BL) mixing on tracers in the ocean *\n30.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"30.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of tracers, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"30.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"31. Vertical Physics --> Boundary Layer Mixing --> Momentum\n*Properties of boundary layer (BL) mixing on momentum in the ocean *\n31.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of boundary layer mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure - TKE\" \n# \"Turbulent closure - KPP\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Turbulent closure - Bulk Mixed Layer\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"31.2. Closure Order\nIs Required: FALSE Type: FLOAT Cardinality: 0.1\nIf turbulent BL mixing of momentum, specific order of closure (0, 1, 2.5, 3)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.closure_order') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.3. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant BL mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"31.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground BL mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.boundary_layer_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32. Vertical Physics --> Interior Mixing --> Details\n*Properties of interior mixing in the ocean *\n32.1. Convection Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of vertical convection in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.convection_type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Non-penetrative convective adjustment\" \n# \"Enhanced vertical diffusion\" \n# \"Included in turbulence closure\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"32.2. Tide Induced Mixing\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how tide induced mixing is modelled (barotropic, baroclinic, none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.tide_induced_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"32.3. Double Diffusion\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there double diffusion",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.double_diffusion') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"32.4. Shear Mixing\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs there interior shear mixing",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.details.shear_mixing') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"33. Vertical Physics --> Interior Mixing --> Tracers\n*Properties of interior mixing on tracers in the ocean *\n33.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for tracers in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"33.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of tracers, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"33.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for tracers (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"33.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of tracers coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.tracers.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34. Vertical Physics --> Interior Mixing --> Momentum\n*Properties of interior mixing on momentum in the ocean *\n34.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of interior mixing for momentum in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Constant value\" \n# \"Turbulent closure / TKE\" \n# \"Turbulent closure - Mellor-Yamada\" \n# \"Richardson number dependent - PP\" \n# \"Richardson number dependent - KT\" \n# \"Imbeded as isopycnic vertical coordinate\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"34.2. Constant\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf constant interior mixing of momentum, specific coefficient (m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.constant') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"34.3. Profile\nIs Required: TRUE Type: STRING Cardinality: 1.1\nIs the background interior mixing using a vertical profile for momentum (i.e is NOT constant) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.profile') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"34.4. Background\nIs Required: TRUE Type: STRING Cardinality: 1.1\nBackground interior mixing of momentum coefficient, (schema and value in m2/s - may by none)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.vertical_physics.interior_mixing.momentum.background') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35. Uplow Boundaries --> Free Surface\nProperties of free surface in ocean\n35.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of free surface in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"35.2. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nFree surface scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear implicit\" \n# \"Linear filtered\" \n# \"Linear semi-explicit\" \n# \"Non-linear implicit\" \n# \"Non-linear filtered\" \n# \"Non-linear semi-explicit\" \n# \"Fully explicit\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"35.3. Embeded Seaice\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the sea-ice embeded in the ocean model (instead of levitating) ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.free_surface.embeded_seaice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"36. Uplow Boundaries --> Bottom Boundary Layer\nProperties of bottom boundary layer in ocean\n36.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"36.2. Type Of Bbl\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of bottom boundary layer in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.type_of_bbl') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Diffusive\" \n# \"Acvective\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"36.3. Lateral Mixing Coef\nIs Required: FALSE Type: INTEGER Cardinality: 0.1\nIf bottom BL is diffusive, specify value of lateral mixing coefficient (in m2/s)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.lateral_mixing_coef') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# TODO - please enter value(s)\n",
"36.4. Sill Overflow\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe any specific treatment of sill overflows",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.uplow_boundaries.bottom_boundary_layer.sill_overflow') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37. Boundary Forcing\nOcean boundary forcing\n37.1. Overview\nIs Required: TRUE Type: STRING Cardinality: 1.1\nOverview of boundary forcing in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.overview') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.2. Surface Pressure\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how surface pressure is transmitted to ocean (via sea-ice, nothing specific,...)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.surface_pressure') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.3. Momentum Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface momentum flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.4. Tracers Flux Correction\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe any type of ocean surface tracers flux correction and, if applicable, how it is applied and where.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers_flux_correction') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.5. Wave Effects\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how wave effects are modelled at ocean surface.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.wave_effects') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.6. River Runoff Budget\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe how river runoff from land surface is routed to ocean and any global adjustment done.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.river_runoff_budget') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"37.7. Geothermal Heating\nIs Required: TRUE Type: STRING Cardinality: 1.1\nDescribe if/how geothermal heating is present at ocean bottom.",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.geothermal_heating') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"38. Boundary Forcing --> Momentum --> Bottom Friction\nProperties of momentum bottom friction in ocean\n38.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum bottom friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.bottom_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Linear\" \n# \"Non-linear\" \n# \"Non-linear (drag function of speed of tides)\" \n# \"Constant drag coefficient\" \n# \"None\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"39. Boundary Forcing --> Momentum --> Lateral Friction\nProperties of momentum lateral friction in ocean\n39.1. Type\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of momentum lateral friction in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.momentum.lateral_friction.type') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"None\" \n# \"Free-slip\" \n# \"No-slip\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40. Boundary Forcing --> Tracers --> Sunlight Penetration\nProperties of sunlight penetration scheme in ocean\n40.1. Scheme\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of sunlight penetration scheme in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.scheme') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"1 extinction depth\" \n# \"2 extinction depth\" \n# \"3 extinction depth\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"40.2. Ocean Colour\nIs Required: TRUE Type: BOOLEAN Cardinality: 1.1\nIs the ocean sunlight penetration scheme ocean colour dependent ?",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.ocean_colour') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(value) \n# Valid Choices: \n# True \n# False \n# TODO - please enter value(s)\n",
"40.3. Extinction Depth\nIs Required: FALSE Type: STRING Cardinality: 0.1\nDescribe and list extinctions depths for sunlight penetration scheme (if applicable).",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.sunlight_penetration.extinction_depth') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"41. Boundary Forcing --> Tracers --> Fresh Water Forcing\nProperties of surface fresh water forcing in ocean\n41.1. From Atmopshere\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from atmos in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_atmopshere') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.2. From Sea Ice\nIs Required: TRUE Type: ENUM Cardinality: 1.1\nType of surface fresh water forcing from sea-ice in ocean",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.from_sea_ice') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# Valid Choices: \n# \"Freshwater flux\" \n# \"Virtual salt flux\" \n# \"Real salt flux\" \n# \"Other: [Please specify]\" \n# TODO - please enter value(s)\n",
"41.3. Forced Mode Restoring\nIs Required: TRUE Type: STRING Cardinality: 1.1\nType of surface salinity restoring in forced mode (OMIP)",
"# PROPERTY ID - DO NOT EDIT ! \nDOC.set_id('cmip6.ocean.boundary_forcing.tracers.fresh_water_forcing.forced_mode_restoring') \n\n# PROPERTY VALUE: \n# Set as follows: DOC.set_value(\"value\") \n# TODO - please enter value(s)\n",
"©2017 ES-DOC"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
mne-tools/mne-tools.github.io
|
0.20/_downloads/ae8fb158e1a8fbcc6dff5d3e55a698dc/plot_30_filtering_resampling.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Filtering and resampling data\nThis tutorial covers filtering and resampling, and gives examples of how\nfiltering can be used for artifact repair.\n :depth: 2\nWe begin as always by importing the necessary Python modules and loading some\nexample data <sample-dataset>. We'll also crop the data to 60 seconds\n(to save memory on the documentation server):",
"import os\nimport numpy as np\nimport matplotlib.pyplot as plt\nimport mne\n\nsample_data_folder = mne.datasets.sample.data_path()\nsample_data_raw_file = os.path.join(sample_data_folder, 'MEG', 'sample',\n 'sample_audvis_raw.fif')\nraw = mne.io.read_raw_fif(sample_data_raw_file)\nraw.crop(0, 60).load_data() # use just 60 seconds of data, to save memory",
"Background on filtering\nA filter removes or attenuates parts of a signal. Usually, filters act on\nspecific frequency ranges of a signal — for example, suppressing all\nfrequency components above or below a certain cutoff value. There are many\nways of designing digital filters; see disc-filtering for a longer\ndiscussion of the various approaches to filtering physiological signals in\nMNE-Python.\nRepairing artifacts by filtering\nArtifacts that are restricted to a narrow frequency range can sometimes\nbe repaired by filtering the data. Two examples of frequency-restricted\nartifacts are slow drifts and power line noise. Here we illustrate how each\nof these can be repaired by filtering.\nSlow drifts\nLow-frequency drifts in raw data can usually be spotted by plotting a fairly\nlong span of data with the :meth:~mne.io.Raw.plot method, though it is\nhelpful to disable channel-wise DC shift correction to make slow drifts\nmore readily visible. Here we plot 60 seconds, showing all the magnetometer\nchannels:",
"mag_channels = mne.pick_types(raw.info, meg='mag')\nraw.plot(duration=60, order=mag_channels, proj=False,\n n_channels=len(mag_channels), remove_dc=False)",
"A half-period of this slow drift appears to last around 10 seconds, so a full\nperiod would be 20 seconds, i.e., $\\frac{1}{20} \\mathrm{Hz}$. To be\nsure those components are excluded, we want our highpass to be higher than\nthat, so let's try $\\frac{1}{10} \\mathrm{Hz}$ and $\\frac{1}{5}\n\\mathrm{Hz}$ filters to see which works best:",
"for cutoff in (0.1, 0.2):\n raw_highpass = raw.copy().filter(l_freq=cutoff, h_freq=None)\n fig = raw_highpass.plot(duration=60, order=mag_channels, proj=False,\n n_channels=len(mag_channels), remove_dc=False)\n fig.subplots_adjust(top=0.9)\n fig.suptitle('High-pass filtered at {} Hz'.format(cutoff), size='xx-large',\n weight='bold')",
"Looks like 0.1 Hz was not quite high enough to fully remove the slow drifts.\nNotice that the text output summarizes the relevant characteristics of the\nfilter that was created. If you want to visualize the filter, you can pass\nthe same arguments used in the call to :meth:raw.filter()\n<mne.io.Raw.filter> above to the function :func:mne.filter.create_filter\nto get the filter parameters, and then pass the filter parameters to\n:func:mne.viz.plot_filter. :func:~mne.filter.create_filter also requires\nparameters data (a :class:NumPy array <numpy.ndarray>) and sfreq\n(the sampling frequency of the data), so we'll extract those from our\n:class:~mne.io.Raw object:",
"filter_params = mne.filter.create_filter(raw.get_data(), raw.info['sfreq'],\n l_freq=0.2, h_freq=None)",
"Notice that the output is the same as when we applied this filter to the data\nusing :meth:raw.filter() <mne.io.Raw.filter>. You can now pass the filter\nparameters (and the sampling frequency) to :func:~mne.viz.plot_filter to\nplot the filter:",
"mne.viz.plot_filter(filter_params, raw.info['sfreq'], flim=(0.01, 5))",
"Power line noise\nPower line noise is an environmental artifact that manifests as persistent\noscillations centered around the AC power line frequency_. Power line\nartifacts are easiest to see on plots of the spectrum, so we'll use\n:meth:~mne.io.Raw.plot_psd to illustrate. We'll also write a little\nfunction that adds arrows to the spectrum plot to highlight the artifacts:",
"def add_arrows(axes):\n # add some arrows at 60 Hz and its harmonics\n for ax in axes:\n freqs = ax.lines[-1].get_xdata()\n psds = ax.lines[-1].get_ydata()\n for freq in (60, 120, 180, 240):\n idx = np.searchsorted(freqs, freq)\n # get ymax of a small region around the freq. of interest\n y = psds[(idx - 4):(idx + 5)].max()\n ax.arrow(x=freqs[idx], y=y + 18, dx=0, dy=-12, color='red',\n width=0.1, head_width=3, length_includes_head=True)\n\n\nfig = raw.plot_psd(fmax=250, average=True)\nadd_arrows(fig.axes[:2])",
"It should be evident that MEG channels are more susceptible to this kind of\ninterference than EEG that is recorded in the magnetically shielded room.\nRemoving power-line noise can be done with a notch filter,\napplied directly to the :class:~mne.io.Raw object, specifying an array of\nfrequencies to be attenuated. Since the EEG channels are relatively\nunaffected by the power line noise, we'll also specify a picks argument\nso that only the magnetometers and gradiometers get filtered:",
"meg_picks = mne.pick_types(raw.info) # meg=True, eeg=False are the defaults\nfreqs = (60, 120, 180, 240)\nraw_notch = raw.copy().notch_filter(freqs=freqs, picks=meg_picks)\nfor title, data in zip(['Un', 'Notch '], [raw, raw_notch]):\n fig = data.plot_psd(fmax=250, average=True)\n fig.subplots_adjust(top=0.85)\n fig.suptitle('{}filtered'.format(title), size='xx-large', weight='bold')\n add_arrows(fig.axes[:2])",
":meth:~mne.io.Raw.notch_filter also has parameters to control the notch\nwidth, transition bandwidth and other aspects of the filter. See the\ndocstring for details.\nResampling\nEEG and MEG recordings are notable for their high temporal precision, and are\noften recorded with sampling rates around 1000 Hz or higher. This is good\nwhen precise timing of events is important to the experimental design or\nanalysis plan, but also consumes more memory and computational resources when\nprocessing the data. In cases where high-frequency components of the signal\nare not of interest and precise timing is not needed (e.g., computing EOG or\nECG projectors on a long recording), downsampling the signal can be a useful\ntime-saver.\nIn MNE-Python, the resampling methods (:meth:raw.resample()\n<mne.io.Raw.resample>, :meth:epochs.resample() <mne.Epochs.resample> and\n:meth:evoked.resample() <mne.Evoked.resample>) apply a low-pass filter to\nthe signal to avoid aliasing, so you don't need to explicitly filter it\nyourself first. This built-in filtering that happens when using\n:meth:raw.resample() <mne.io.Raw.resample>, :meth:epochs.resample()\n<mne.Epochs.resample>, or :meth:evoked.resample() <mne.Evoked.resample> is\na brick-wall filter applied in the frequency domain at the Nyquist\nfrequency of the desired new sampling rate. This can be clearly seen in the\nPSD plot, where a dashed vertical line indicates the filter cutoff; the\noriginal data had an existing lowpass at around 172 Hz (see\nraw.info['lowpass']), and the data resampled from 600 Hz to 200 Hz gets\nautomatically lowpass filtered at 100 Hz (the Nyquist frequency_ for a\ntarget rate of 200 Hz):",
"raw_downsampled = raw.copy().resample(sfreq=200)\n\nfor data, title in zip([raw, raw_downsampled], ['Original', 'Downsampled']):\n fig = data.plot_psd(average=True)\n fig.subplots_adjust(top=0.9)\n fig.suptitle(title)\n plt.setp(fig.axes, xlim=(0, 300))",
"Because resampling involves filtering, there are some pitfalls to resampling\nat different points in the analysis stream:\n\n\nPerforming resampling on :class:~mne.io.Raw data (before epoching) will\n negatively affect the temporal precision of Event arrays, by causing\n jitter_ in the event timing. This reduced temporal precision will\n propagate to subsequent epoching operations.\n\n\nPerforming resampling after epoching can introduce edge artifacts on\n every epoch, whereas filtering the :class:~mne.io.Raw object will only\n introduce artifacts at the start and end of the recording (which is often\n far enough from the first and last epochs to have no affect on the\n analysis).\n\n\nThe following section suggests best practices to mitigate both of these\nissues.\nBest practices\nTo avoid the reduction in temporal precision of events that comes with\nresampling a :class:~mne.io.Raw object, and also avoid the edge artifacts\nthat come with filtering an :class:~mne.Epochs or :class:~mne.Evoked\nobject, the best practice is to:\n\n\nlow-pass filter the :class:~mne.io.Raw data at or below\n $\\frac{1}{3}$ of the desired sample rate, then\n\n\ndecimate the data after epoching, by either passing the decim\n parameter to the :class:~mne.Epochs constructor, or using the\n :meth:~mne.Epochs.decimate method after the :class:~mne.Epochs have\n been created.\n\n\n<div class=\"alert alert-danger\"><h4>Warning</h4><p>The recommendation for setting the low-pass corner frequency at\n $\\frac{1}{3}$ of the desired sample rate is a fairly safe rule of\n thumb based on the default settings in :meth:`raw.filter()\n <mne.io.Raw.filter>` (which are different from the filter settings used\n inside the :meth:`raw.resample() <mne.io.Raw.resample>` method). If you\n use a customized lowpass filter (specifically, if your transition\n bandwidth is wider than 0.5× the lowpass cutoff), downsampling to 3× the\n lowpass cutoff may still not be enough to avoid `aliasing`_, and\n MNE-Python will not warn you about it (because the :class:`raw.info\n <mne.Info>` object only keeps track of the lowpass cutoff, not the\n transition bandwidth). Conversely, if you use a steeper filter, the\n warning may be too sensitive. If you are unsure, plot the PSD of your\n filtered data *before decimating* and ensure that there is no content in\n the frequencies above the `Nyquist frequency`_ of the sample rate you'll\n end up with *after* decimation.</p></div>\n\nNote that this method of manually filtering and decimating is exact only when\nthe original sampling frequency is an integer multiple of the desired new\nsampling frequency. Since the sampling frequency of our example data is\n600.614990234375 Hz, ending up with a specific sampling frequency like (say)\n90 Hz will not be possible:",
"current_sfreq = raw.info['sfreq']\ndesired_sfreq = 90 # Hz\ndecim = np.round(current_sfreq / desired_sfreq).astype(int)\nobtained_sfreq = current_sfreq / decim\nlowpass_freq = obtained_sfreq / 3.\n\nraw_filtered = raw.copy().filter(l_freq=None, h_freq=lowpass_freq)\nevents = mne.find_events(raw_filtered)\nepochs = mne.Epochs(raw_filtered, events, decim=decim)\n\nprint('desired sampling frequency was {} Hz; decim factor of {} yielded an '\n 'actual sampling frequency of {} Hz.'\n .format(desired_sfreq, decim, epochs.info['sfreq']))",
"If for some reason you cannot follow the above-recommended best practices,\nyou should at the very least either:\n\n\nresample the data after epoching, and make your epochs long enough that\n edge effects from the filtering do not affect the temporal span of the\n epoch that you hope to analyze / interpret; or\n\n\nperform resampling on the :class:~mne.io.Raw object and its\n corresponding Events array simultaneously so that they stay more or less\n in synch. This can be done by passing the Events array as the\n events parameter to :meth:raw.resample() <mne.io.Raw.resample>.\n\n\n.. LINKS\nhttps://en.wikipedia.org/wiki/Mains_electricity"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
johnpfay/environ859
|
07_DataWrangling/notebooks/03-Using-NumPy-With-Rasters.ipynb
|
gpl-3.0
|
[
"Using NumPy with Rasters\nIn addition to converting feature classes in to NumPy arrays, we can also convert entire raster datasets into 2-dimensional arrays. This allows us, as we'll see below, to programmatically extract values from these rasters, or we can integrate these arrays with other packages to perform custom analyses with the data. \nFor more information on this topic see:\nhttps://4326.us/esri/scipy/devsummit-2016-scipy-arcgis-presentation-handout.pdf",
"# Import the modules\nimport arcpy\nimport numpy as np\n\n#Set the name of the file we'll import\ndemFilename = '../Data/DEM.tif'\n\n#Import the DEM as a NumPy array, using only a 200 x 200 pixel subset of it\ndemRaster = arcpy.RasterToNumPyArray(demFilename)\n\n#What is the shape of the raster (i.e. the # of rows and columns)? \ndemRaster.shape\n#...note it's a 2d array\n\n#Here, we re-import the raster, this time only grabbing a 150 x 150 pixel subset - to speed processing\ndemRaster = arcpy.RasterToNumPyArray(demFilename,'',150,150)\n\n#What is the pixel type?\ndemRaster.dtype",
"As 2-dimensional NumPy array, we can perform rapid stats on the pixel values. We can do this for all the pixels at once, or we can perform stats on specific pixels or subsets of pixels, using their row and columns to identify the subsets.",
"#Get the value of a specific pixel, here one at the 100th row and 50th column\ndemRaster[99,49]\n\n#Compute the min, max and mean value of all pixels in the 200 x 200 DEM snipped\nprint \"Min:\", demRaster.min()\nprint \"Max:\", demRaster.max()\nprint \"Mean:\", demRaster.mean()",
"To define subsets, we use the same slicing techniques we use for lists and strings, but as this is a 2 dimensional array, we provide two sets of slices; the first slices will be the rows and the second for the columns. We can use a : to select all rows or columns.",
"#Get the max for the 10th column of pixels, \n# (using `:` to select all rows and `9` to select just the 10th column)\nprint(demRaster[:,9].max())\n\n#Get the mean for the first 10 rows of pixels, selecting all columns\n# (We can put a `:` as the second slice, or leave it blank after the comma...)\nx = demRaster[:10,]\nx.shape\nx.mean()",
"The SciPy package has a number of multi-dimensional image processing capabilities (see https://docs.scipy.org/doc/scipy/reference/ndimage.html). Here is a somewhat complex example that runs through 10 iterations of computing a neighborhood mean (using the nd.median_filter) with an incrementally growing neighorhood. We then subtract that neighborhood median elevation from the original elevation to compute Topographic Position Index (TPI, see http://www.jennessent.com/downloads/tpi-poster-tnc_18x22.pdf)\nIf you don't fully understand how it works, at least appreciate that converting a raster to a NumPy array enables us to use other packages to execute custom analyses on the data.",
"#Import the SciPy and plotting packages\nimport scipy.ndimage as nd\nfrom matplotlib import pyplot as plt\n\n#Allows plots in our Jupyter Notebook\n%matplotlib inline\n\n#Create a 'canvas' onto which we can add plots\nfig = plt.figure(figsize=(20,20))\n\n#Loop through 10 iterations\nfor i in xrange(10):\n #Create a kernel, intially 3 x 3, then expanding 3 x 3 at each iteration \n size = (i+1) * 3\n print size,\n #Compute the median value for the kernel surrounding each pixel\n med = nd.median_filter(demRaster, size)\n #Subtract the median elevation from the original to compute TPI\n tpi = demRaster - med\n #Create a subplot frame\n a = fig.add_subplot(5, 5,i+1)\n #Show the median interpolated DEM\n plt.imshow(tpi, interpolation='nearest')\n #Set titles for the plot\n a.set_title('{}x{}'.format(size, size))\n plt.axis('off')\n plt.subplots_adjust(hspace = 0.1)\n prev = med"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
sdpython/teachpyx
|
_doc/notebooks/python/hypercube.ipynb
|
mit
|
[
"Hypercube et autres exercices\nExercices autour de tableaux en plusieurs dimensions et autres exercices.",
"from jyquickhelper import add_notebook_menu\nadd_notebook_menu()",
"Q1 - triple récursivité\nRéécrire la fonction u de façon à ce qu'elle ne soit plus récurrente.",
"def u(n):\n if n <= 2:\n return 1\n else: \n return u(n-1) + u(n-2) + u(n-3)\n\nu(5)",
"Le problème de cette écriture est que la fonction est triplement récursive et que son coût est aussi grand que la fonction elle-même. Vérifions.",
"compteur = []\n\ndef u_st(n):\n global compteur\n compteur.append(n)\n if n <= 2:\n return 1\n else: \n return u_st(n-1) + u_st(n-2) + u_st(n-3)\n\nu_st(5), compteur",
"La seconde liste retourne tous les n pour lesquels la fonction u_st a été appelée.",
"def u_non_recursif(n):\n if n <= 2:\n return 1\n u0 = 1\n u1 = 1\n u2 = 1\n i = 3\n while i <= n:\n u = u0 + u1 + u2\n u0 = u1\n u1 = u2\n u2 = u\n i += 1\n return u\n\nu_non_recursif(5)",
"Q2 - comparaison de listes\nOn considère deux listes d'entiers. La première est inférieure à la seconde si l'une des deux conditions suivantes est vérifiée :\n\nles $n$ premiers nombres sont égaux mais la première liste ne contient que $n$ éléments tandis que la seconde est plus longue, \nles $n$ premiers nombres sont égaux mais que le $n+1^{\\text{ème}}$ de la première liste est inférieur au $n+1^{\\text{ème}}$ de la seconde liste\n\nPar conséquent, si $l$ est la longueur de la liste la plus courte, comparer ces deux listes d'entiers revient à parcourir tous les indices depuis 0 jusqu'à $l$ exclu et à s'arrêter sur la première différence qui détermine le résultat. S'il n'y pas de différence, alors la liste la plus courte est la première. Il faut écrire une fonction compare_liste(p,q) qui implémente cet algorithme.",
"def compare_liste(p, q):\n i = 0\n while i < len(p) and i < len(q):\n if p [i] < q [i]: \n return -1 # on peut décider\n elif p [i] > q [i]: \n return 1 # on peut décider\n i += 1 # on ne peut pas décider\n # fin de la boucle, il faut décider à partir des longueurs des listes\n if len (p) < len (q): \n return -1\n elif len (p) > len (q): \n return 1\n else : \n return 0\n \ncompare_liste([0, 1], [0, 1, 2])\n\ncompare_liste([0, 1, 3], [0, 1, 2])\n\ncompare_liste([0, 1, 2], [0, 1, 2])\n\ncompare_liste([0, 1, 2, 4], [0, 1, 2])",
"Q3 - précision des calculs\nOn cherche à calculer la somme des termes d'une suite géométriques de raison~$\\frac{1}{2}$. On définit $r=\\frac{1}{2}$, on cherche donc à calculer $\\sum_{i=0}^{\\infty} r^i$ qui une somme convergente mais infinie. Le programme suivant permet d'en calculer une valeur approchée. Il retourne, outre le résultat, le nombre d'itérations qui ont permis d'estimer le résultat.",
"def suite_geometrique_1(r):\n x = 1.0\n y = 0.0\n n = 0\n while x > 0:\n y += x\n x *= r\n n += 1\n return y, n\n \nprint(suite_geometrique_1(0.5))",
"Un informaticien plus expérimenté a écrit le programme suivant qui retourne le même résultat mais avec un nombre d'itérations beaucoup plus petit.",
"def suite_geometrique_2(r):\n x = 1.0\n y = 0.0\n n = 0\n yold = y + 1\n while abs (yold - y) > 0:\n yold = y\n y += x\n x *= r\n n += 1\n return y,n\n \nprint(suite_geometrique_2(0.5))",
"Expliquez pourquoi le second programme est plus rapide tout en retournant le même résultat. Repère numérique : $2^{-55} \\sim 2,8.10^{-17}$.\nTout d'abord le second programme est plus rapide car il effectue moins d'itérations, 55 au lieu de 1075. Maintenant, il s'agit de savoir pourquoi le second programme retourne le même résultat que le premier mais plus rapidement. L'ordinateur ne peut pas calculer numériquement une somme infinie, il s'agit toujours d'une valeur approchée. L'approximation dépend de la précision des calculs, environ 14 chiffres pour python. Dans le premier programme, on s'arrête lorsque $r^n$ devient nul, autrement dit, on s'arrête lorsque $x$ est si petit que python ne peut plus le représenter autrement que par~0, c'est-à-dire qu'il n'est pas possible de représenter un nombre dans l'intervalle $[0,2^{-1055}]$.\nToutefois, il n'est pas indispensable d'aller aussi loin car l'ordinateur n'est de toute façon pas capable d'ajouter un nombre aussi petit à un nombre plus grand que~1. Par exemple, $1 + 10^{17} = 1,000\\, 000\\, 000\\, 000\\, 000\\, 01$. Comme la précision des calculs n'est que de 15 chiffres, pour python, $1 + 10^{17} = 1$. Le second programme s'inspire de cette remarque : le calcul s'arrête lorsque le résultat de la somme n'évolue plus car il additionne des nombres trop petits à un nombre trop grand. L'idée est donc de comparer la somme d'une itération à l'autre et de s'arrêter lorsqu'elle n'évolue plus.\nCe raisonnement n'est pas toujours applicable. Il est valide dans ce cas car la série $s_n = \\sum_{i=0}^{n} r^i$ est croissante et positive. Il est valide pour une série convergente de la forme $s_n = \\sum_{i=0}^{n} u_i$ et une suite $u_n$ de module décroissant.\nQ4 - hypercube\nUn chercheur cherche à vérifier qu'une suite de~0 et de~1 est aléatoire. Pour cela, il souhaite compter le nombre de séquences de $n$ nombres successifs. Par exemple, pour la suite 01100111 et $n=3$, les triplets sont 011, 110, 100, 001, 011, 111. Le triplet 011 apparaît deux fois, les autres une fois. Si la suite est aléatoire, les occurrences de chaque triplet sont en nombres équivalents.",
"def hyper_cube_liste(n, m=None):\n if m is None:\n m = [0, 0]\n if n > 1 :\n m[0] = [0,0]\n m[1] = [0,0]\n m[0] = hyper_cube_liste (n-1, m[0])\n m[1] = hyper_cube_liste (n-1, m[1])\n return m\n\nhyper_cube_liste(3)",
"La seconde à base de dictionnaire (plus facile à manipuler) :",
"def hyper_cube_dico (n) :\n r = { }\n ind = [ 0 for i in range (0,n) ]\n while ind [0] <= 1 :\n cle = tuple(ind) # conversion d'une liste en tuple\n r[cle] = 0\n ind[-1] += 1\n k = len(ind)-1\n while ind[k] == 2 and k > 0:\n ind[k] = 0\n ind[k-1] += 1\n k -= 1\n return r\n\nhyper_cube_dico(3)",
"Le chercheur a commencé à écrire son programme :",
"def occurrence(l,n) :\n # d = ....... # choix d'un hyper_cube (n)\n # .....\n # return d\n pass\n\nsuite = [ 0, 1, 0, 1, 1, 0, 1, 0, 0, 0, 1, 1, 0, 1, 1, 1, 0, 0, 1, 1 ]\nh = occurrence(suite, 3)\nh",
"Sur quelle structure se porte votre choix (a priori celle avec dictionnaire), compléter la fonction occurrence.",
"def occurrence(tu, n):\n d = hyper_cube_dico(n)\n for i in range (0, len(tu)-n) :\n cle = tu[i:i+n]\n d[cle] += 1\n return d\n\noccurrence((1, 0, 1, 1, 0, 1, 0), 3)",
"Il est même possible de se passer de la fonction hyper_cube_dico :",
"def occurrence2(tu, n):\n d = { }\n for i in range (0, len(tu)-n) :\n cle = tu[i:i+n]\n if cle not in d: \n d[cle] = 0\n d [cle] += 1\n return d\n\noccurrence2((1, 0, 1, 1, 0, 1, 0), 3)",
"La seule différence apparaît lorsqu'un n-uplet n'apparaît pas dans la liste. Avec la fonction hyper_cube_dico, ce n-uplet recevra la fréquence 0, sans cette fonction, ce n-uplet ne sera pas présent dans le dictionnaire d. Le même programme avec la structure matricielle est plus une curiosité qu'un cas utile.",
"def occurrence3(li, n):\n d = hyper_cube_liste(n)\n for i in range (0, len(li)-n) :\n cle = li[i:i+n]\n t = d # \n for k in range (0,n-1) : # point clé de la fonction : \n t = t[cle[k]] # accès à un élément\n t [cle [ n-1] ] += 1\n return d\n\noccurrence3((1, 0, 1, 1, 0, 1, 0), 3)",
"Une autre écriture...",
"def hyper_cube_liste2(n, m=[0, 0], m2=[0, 0]):\n if n > 1 :\n m[0] = list(m2)\n m[1] = list(m2)\n m[0] = hyper_cube_liste2(n-1, m[0])\n m[1] = hyper_cube_liste2(n-1, m[1])\n return m\n\ndef occurrence4(li, n):\n d = hyper_cube_liste2(n) # * remarque voir plus bas\n for i in range (0, len(li)-n) :\n cle = li[i:i+n]\n t = d # \n for k in range (0,n-1) : # point clé de la fonction : \n t = t[cle[k]] # accès à un élément\n t [cle [ n-1] ] += 1\n return d\n\noccurrence4((1, 0, 1, 1, 0, 1, 0), 3)",
"Et si on remplace list(m2) par m2.",
"def hyper_cube_liste3(n, m=[0, 0], m2=[0, 0]):\n if n > 1 :\n m[0] = m2\n m[1] = m2\n m[0] = hyper_cube_liste3(n-1, m[0])\n m[1] = hyper_cube_liste3(n-1, m[1])\n return m\n\ndef occurrence5(li, n):\n d = hyper_cube_liste3(n) # * remarque voir plus bas\n for i in range (0, len(li)-n) :\n cle = li[i:i+n]\n t = d # \n for k in range (0,n-1) : # point clé de la fonction : \n t = t[cle[k]] # accès à un élément\n t [cle [ n-1] ] += 1\n return d\n\ntry:\n occurrence5((1, 0, 1, 1, 0, 1, 0), 3)\nexcept Exception as e:\n print(e)",
"Intéressant..."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
ucsd-ccbb/jupyter-genomics
|
notebooks/crispr/Dual CRISPR 4-Count Combination.ipynb
|
mit
|
[
"Dual CRISPR Screen Analysis\nCount Combination\nAmanda Birmingham, CCBB, UCSD (abirmingham@ucsd.edu)\nInstructions\nTo run this notebook reproducibly, follow these steps:\n1. Click Kernel > Restart & Clear Output\n2. When prompted, click the red Restart & clear all outputs button\n3. Fill in the values for your analysis for each of the variables in the Input Parameters section\n4. Click Cell > Run All\n<a name = \"input-parameters\"></a>\nInput Parameters",
"g_timestamp = \"\"\ng_dataset_name = \"20160510_A549\"\ng_count_alg_name = \"19mer_1mm_py\"\ng_fastq_counts_dir = '/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/interim/20160510_D00611_0278_BHK55CBCXX_A549'\ng_fastq_counts_run_prefix = \"19mer_1mm_py_20160615223822\"\ng_collapsed_counts_dir = \"/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/data/processed/20160510_A549\"\ng_collapsed_counts_run_prefix = \"\"\ng_combined_counts_dir = \"\"\ng_combined_counts_run_prefix = \"\"\ng_code_location = \"/Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python\"",
"CCBB Library Imports",
"import sys\nsys.path.append(g_code_location)",
"Automated Set-Up",
"# %load -s describe_var_list /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/utilities/analysis_run_prefixes.py\ndef describe_var_list(input_var_name_list):\n description_list = [\"{0}: {1}\\n\".format(name, eval(name)) for name in input_var_name_list]\n return \"\".join(description_list)\n\n\nfrom ccbbucsd.utilities.analysis_run_prefixes import check_or_set, get_run_prefix, get_timestamp\ng_timestamp = check_or_set(g_timestamp, get_timestamp())\ng_collapsed_counts_dir = check_or_set(g_collapsed_counts_dir, g_fastq_counts_dir)\ng_collapsed_counts_run_prefix = check_or_set(g_collapsed_counts_run_prefix, \n get_run_prefix(g_dataset_name, g_count_alg_name, g_timestamp))\ng_combined_counts_dir = check_or_set(g_combined_counts_dir, g_collapsed_counts_dir)\ng_combined_counts_run_prefix = check_or_set(g_combined_counts_run_prefix, g_collapsed_counts_run_prefix)\nprint(describe_var_list(['g_timestamp','g_collapsed_counts_dir','g_collapsed_counts_run_prefix', \n 'g_combined_counts_dir', 'g_combined_counts_run_prefix']))\n\nfrom ccbbucsd.utilities.files_and_paths import verify_or_make_dir\nverify_or_make_dir(g_collapsed_counts_dir)\nverify_or_make_dir(g_combined_counts_dir)",
"Count Combination Functions",
"# %load -s get_counts_file_suffix /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/construct_counter.py\ndef get_counts_file_suffix():\n return \"counts.txt\"\n\n\n# %load /Users/Birmingham/Repositories/ccbb_tickets/20160210_mali_crispr/src/python/ccbbucsd/malicrispr/count_combination.py\n# ccbb libraries\nfrom ccbbucsd.utilities.analysis_run_prefixes import strip_run_prefix\nfrom ccbbucsd.utilities.files_and_paths import build_multipart_fp, group_files, get_filepaths_by_prefix_and_suffix\n\n# project-specific libraries\nfrom ccbbucsd.malicrispr.count_files_and_dataframes import get_counts_df\n\n__author__ = \"Amanda Birmingham\"\n__maintainer__ = \"Amanda Birmingham\"\n__email__ = \"abirmingham@ucsd.edu\"\n__status__ = \"prototype\"\n\n\ndef get_collapsed_counts_file_suffix():\n return \"collapsed.txt\"\n\n\ndef get_combined_counts_file_suffix():\n return \"counts_combined.txt\"\n\n\ndef group_lane_and_set_files(filepaths):\n # NB: this regex assumes read designator has *already* been removed\n # and replaced with _ as done by group_read_pairs\n return group_files(filepaths, \"_L\\d\\d\\d_\\d\\d\\d\", \"\")\n\n\ndef combine_count_files(counts_fp_for_dataset, run_prefix):\n combined_df = None\n \n for curr_counts_fp in counts_fp_for_dataset:\n count_header, curr_counts_df = get_counts_df(curr_counts_fp, run_prefix)\n \n if combined_df is None:\n combined_df = curr_counts_df\n else:\n combined_df[count_header] = curr_counts_df[count_header]\n \n return combined_df\n\n\ndef write_collapsed_count_files(input_dir, output_dir, curr_run_prefix, counts_run_prefix, counts_suffix, counts_collapsed_file_suffix):\n counts_fps_for_dataset = get_filepaths_by_prefix_and_suffix(input_dir, counts_run_prefix, counts_suffix)\n fps_by_sample = group_lane_and_set_files(counts_fps_for_dataset)\n \n for curr_sample, curr_fps in fps_by_sample.items():\n stripped_sample = strip_run_prefix(curr_sample, counts_run_prefix)\n output_fp = build_multipart_fp(output_dir, [curr_run_prefix, stripped_sample, counts_collapsed_file_suffix]) \n combined_df = None \n \n for curr_fp in curr_fps:\n count_header, curr_counts_df = get_counts_df(curr_fp, counts_run_prefix)\n \n if combined_df is None:\n combined_df = curr_counts_df\n combined_df.rename(columns = {count_header:stripped_sample}, inplace = True) \n else:\n combined_df[stripped_sample] = combined_df[stripped_sample] + curr_counts_df[count_header]\n \n combined_df.to_csv(output_fp, sep=\"\\t\", index=False) \n\n\ndef write_combined_count_file(input_dir, output_dir, curr_run_prefix, counts_run_prefix, counts_suffix, combined_suffix):\n output_fp = build_multipart_fp(output_dir, [curr_run_prefix, combined_suffix])\n counts_fps_for_run = get_filepaths_by_prefix_and_suffix(input_dir, counts_run_prefix, counts_suffix)\n combined_df = combine_count_files(counts_fps_for_run, curr_run_prefix)\n combined_df.to_csv(output_fp, sep=\"\\t\", index=False)",
"Input Count Filenames",
"from ccbbucsd.utilities.files_and_paths import summarize_filenames_for_prefix_and_suffix\nprint(summarize_filenames_for_prefix_and_suffix(g_fastq_counts_dir, g_fastq_counts_run_prefix, get_counts_file_suffix()))",
"Count Combination Execution",
"write_collapsed_count_files(g_fastq_counts_dir, g_collapsed_counts_dir, g_collapsed_counts_run_prefix, \n g_fastq_counts_run_prefix, get_counts_file_suffix(), get_collapsed_counts_file_suffix())\n\nwrite_combined_count_file(g_collapsed_counts_dir, g_combined_counts_dir, g_collapsed_counts_run_prefix, \n g_combined_counts_run_prefix, get_collapsed_counts_file_suffix(), \n get_combined_counts_file_suffix())"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
utensil/julia-playground
|
py/CGA-clifford.ipynb
|
mit
|
[
"The following experimenting is based on https://github.com/pygae/clifford/blob/master/docs/ConformalGeometricAlgebra.ipynb ( see it at https://nbviewer.jupyter.org/github/pygae/clifford/blob/master/docs/ConformalGeometricAlgebra.ipynb )",
"!pip install clifford\n!pip install matplotlib\n\nfrom IPython.display import display, Math, Latex, Markdown\n\n%matplotlib inline\n%config InlineBackend.figure_format = 'svg'\nimport matplotlib.pyplot as plt\nfrom mpl_toolkits.mplot3d import Axes3D\nimport numpy as np\n\ndef str_ga(*gas, expand_wedge=False):\n rets = []\n for ga in gas:\n ret = []\n first = True\n for i, v in enumerate(ga.value):\n if not first:\n if v > 0:\n ret.append(' + ')\n if v < 0:\n ret.append(' - ')\n if v != 0:\n v = abs(v)\n blade_tuple = ga.layout.bladeTupList[i]\n if blade_tuple != ():\n if v != 1:\n ret.append('%g ' % v)\n if expand_wedge:\n ret.append(' \\\\wedge '.join([ ('e_%d' % base) for base in blade_tuple ]))\n else:\n ret.append('e_{%s}' % ''.join([ ('%d' % base) for base in blade_tuple ]))\n else:\n ret.append('%g ' % v)\n first = False\n if ret == []:\n ret.append('0')\n rets.append('%s' % ''.join(ret))\n return rets\n\ndef print_ga(*gas, expand_wedge=False):\n for ga_latex in str_ga(*gas, expand_wedge=False):\n display(Math(ga_latex))",
"Conformal Geometric Algebra (CGA) is a projective geometry tool which allows conformal transformations to be implemented with rotations. To do this, the original geometric algebra is extended by two dimensions, one of positive signature $e_+$ and one of negative signature $e_-$. Thus, if we started with $G_p$, the conformal algebra is $G_{p+1,1}$.\nIt is convenient to define a null basis given by \n$$e_{o} = \\frac{1}{2}(e_{-} -e_{+})\\e_{\\infty} = e_{-}+e_{+}$$\nA vector in the original space $x$ is up-projected into a conformal vector $X$ by \n$$X = x + \\frac{1}{2} x^2 e_{\\infty} +e_o $$\nTo map a conformal vector back into a vector from the original space, the vector is first normalized, then rejected from the minkowski plane $E_0$,\n$$ X = \\frac{X}{X \\cdot e_{\\infty}}$$\nthen \n$$x = X \\wedge E_0\\, E_0^{-1}$$\nTo implement this in clifford we could create a CGA by instantiating the it directly, like Cl(3,1) for example, and then making the definitions and maps described above relating the various subspaces. Or, we you can use the helper function conformalize(). \nTo demonstrate we will conformalize $G_2$, producing a CGA of $G_{3,1}$.",
"from numpy import pi,e\nfrom clifford import Cl, conformalize\n\nG2, blades_g2 = Cl(2)\n\nblades_g2\n\nG2c, blades_g2c, stuff = conformalize(G2)\n\nblades_g2c #inspect the CGA blades\n\nprint_ga(blades_g2c['e4'])\n\nstuff",
"It contains the following:\n\nep - postive basis vector added\nen - negative basis vector added\neo - zero vecror of null basis (=.5*(en-ep))\neinf - infinity vector of null basis (=en+ep)\nE0 - minkowski bivector (=einf^eo)\nup() - function to up-project a vector from GA to CGA\ndown() - function to down-project a vector from CGA to GA\nhomo() - function ot homogenize a CGA vector",
"locals().update(blades_g2c)\nlocals().update(stuff)\n\nx = e1+e2\n\nprint_ga(x)\n\nprint_ga(up(x))\n\nprint_ga(down(up(x)))\n\na= 1*e1 + 2*e2\nb= 3*e1 + 4*e2\n\nprint_ga(a, b)\n\nprint_ga(down(ep*up(a)*ep), a.inv())\n\nprint_ga(down(E0*up(a)*E0), -a)",
"Dilations\n$$D_{\\alpha} = e^{-\\frac{\\ln{\\alpha}}{2} \\,E_0} $$\n$$D_{\\alpha} \\, X \\, \\tilde{D_{\\alpha}} $$",
"from scipy import rand,log\n\nD = lambda alpha: e**((-log(alpha)/2.)*(E0)) \nalpha = rand()\nprint_ga(down( D(alpha)*up(a)*~D(alpha)), (alpha*a))",
"Translations\n$$ V = e ^{\\frac{1}{2} e_{\\infty} a } = 1 + e_{\\infty}a$$",
"T = lambda x: e**(1/2.*(einf*x)) \nprint_ga(down( T(a)*up(b)*~T(a)), b+a)\n\nfrom pprint import pprint\n\npprint(vars(einf))\n\nprint_ga(ep, en, eo)",
"Transversions\nA transversion is an inversion, followed by a translation, followed by a inversion. The verser is \n$$V= e_+ T_a e_+$$\nwhich is recognised as the translation bivector reflected in the $e_+$ vector. From the diagram, it is seen that this is equivalent to the bivector in $x\\wedge e_o$,\n$$ e_+ (1+e_{\\infty}a)e_+ $$\n$$ e_+^2 + e_+e_{\\infty}a e_+$$\n$$2 +2e_o a$$\nthe factor of 2 may be dropped, because the conformal vectors are null",
"V = ep * T(a) * ep\nassert ( V == 1+(eo*a))\n\nK = lambda x: 1+(eo*a) \n\nB= up(b)\nprint_ga( down(K(a)*B*~K(a)) , 1/(a+1/b) ) \n\nprint_ga(a, 1/a)\n\nprint_ga(e1,e2, e1 | e2, e1^e2, e1 * e2)\n\nprint_ga(a,b, a | b, a^b, a * b)\n\nsoa = np.array([[0, 0, 1, 1, -2, 0], [0, 0, 2, 1, 1, 0],\n [0, 0, 3, 2, 1, 0], [0, 0, 4, 0.5, 0.7, 0]])\n\nX, Y, Z, U, V, W = zip(*soa)\nfig = plt.figure()\nax = fig.add_subplot(111, projection='3d')\nax.quiver(X, Y, Z, U, V, W)\nax.set_xlim([-1, 0.5])\nax.set_ylim([-1, 1.5])\nax.set_zlim([-1, 8])\nplt.show()\n\nprint_ga(a)\n\ndef to_vector(ga, max=-1):\n vec = [ ga.value[i] for i, t in enumerate(ga.layout.bladeTupList) if len(t) == 1 ]\n max = len(vec) if max == -1 else max\n return vec[0:max]\n\n[ a.value[i] for i, t in enumerate(a.layout.bladeTupList) if len(t) == 1 ]\n\nto_vector(a)\n\nto_vector(a, max=2)\n\ndef plot_as_vector(*gas):\n fig = plt.figure()\n ax = fig.add_subplot(111, projection='3d')\n l = len(gas)\n # print_ga(*gas)\n arr = [to_vector(ga, max=3) for ga in gas]\n soa = np.array(arr)\n ga_latexes = str_ga(*gas)\n for i, v in enumerate(arr):\n ax.text(v[0], v[1], v[2], ('$ %s $' % ga_latexes[i]))\n # print(soa)\n X, Y, Z = np.zeros(l), np.zeros(l), np.zeros(l)\n U, V, W = [list(a) for a in zip(*soa)]\n # print(X, Y, Z, U, V, W)\n ax.quiver(X, Y, Z, U, V, W)\n xlim = max(abs(min(U)), abs(max(U)), 2)\n ylim = max(abs(min(V)), abs(max(V)), 2)\n zlim = max(abs(min(W)), abs(max(W)), 2)\n ax.set_xlim([-xlim, xlim])\n ax.set_ylim([-ylim, ylim])\n ax.set_zlim([-zlim, zlim])\n\n# ax.set_xlim([-1, 0.5])\n# ax.set_ylim([-1, 1.5])\n# ax.set_zlim([-1, 8])\n plt.show()\n\nplot_as_vector(a)",
"Reflections\n$$ -mam^{-1} \\rightarrow MA\\tilde{M} $$",
"m = 5*e1 + 6*e2\nn = 7*e1 + 8*e2\n\n\nprint_ga(down(m*up(a)*m), -m*a*m.inv())\n\nstr_ga(a, m, down(m*up(a)*m))\n\nprint_ga(a, m, down(m*up(a)*m))\n\nplot_as_vector(a)\nplot_as_vector(m)\nplot_as_vector(down(m*up(a)*m))",
"Rotations\n$$ mnanm = Ra\\tilde{R} \\rightarrow RA\\tilde{R} $$",
"R = lambda theta: e**((-.5*theta)*(e12))\ntheta = pi/2\nprint_ga(down( R(theta)*up(a)*~R(theta)))\nprint_ga(R(theta)*a*~R(theta))\n\nplot_as_vector(a, down( R(theta)*up(a)*~R(theta)))\n\nfrom ipywidgets import interact, interactive, fixed, interact_manual\nimport ipywidgets as widgets\n\ndef plot_rotate(origin, theta=pi/2):\n R = lambda theta: e**((-.5*theta)*(e12))\n rotated = (down( R(theta)*up(origin)*~R(theta)))\n plot_as_vector(origin, rotated)\n\ninteractive_plot = interactive(plot_rotate, origin=fixed(a), theta=(0, 2 * pi, 0.1))\noutput = interactive_plot.children[-1]\noutput.layout.height = '350px'\ninteractive_plot",
"As a simple example consider the combination operations of translation,scaling, and inversion. \n$$b=-2a+e_0 \\quad \\rightarrow \\quad B= (T_{e_0}E_0 D_2) A \\tilde{ (D_2 E_0 T_{e_0})} $$",
"A = up(a)\nV = T(e1)*E0*D(2)\nB = V*A*~V\nassert(down(B) == (-2*a)+e1 )\n\nplot_as_vector(a, down(B))",
"Transversion\nA transversion may be built from a inversion, translation, and inversion. \n$$c = (a^{-1}+b)^{-1}$$\nIn conformal GA, this is accomplished by \n$$C = VA\\tilde{V}$$\n$$V= e_+ T_b e_+$$",
"A = up(a)\nV = ep*T(b)*ep\nC = V*A*~V\nassert(down(C) ==1/(1/a +b))\n\nplot_as_vector(a, down(C))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
nick-youngblut/SIPSim
|
ipynb/bac_genome/fullCyc/detection_threshold.ipynb
|
mit
|
[
"Description\n\nThe relatively abundant taxa seem to be present in all gradient fractions.\n>0.1% pre-fractionation abundance (at least for 1 12C-con gradient)\nGoal: \nFor all 12C-Con gradients determine the detection threshold:\nWhat is the % abundance cutoff where taxa are no longer detected in all gradients\n\n\n\nSetting parameters",
"%load_ext rpy2.ipython\n\n%%R\nphyseqDir = '/var/seq_data/fullCyc/MiSeq_16SrRNA/515f-806r/lib1-7/phyloseq/'\nphyseq_bulk_core = 'bulk-core'\nphyseq_SIP_core = 'SIP-core_unk'",
"Init",
"%%R\nlibrary(dplyr)\nlibrary(tidyr)\nlibrary(ggplot2)\nlibrary(phyloseq)",
"Mapping bulk and SIP data\nSIP dataset",
"%%R\nF = file.path(physeqDir, physeq_SIP_core)\nphyseq.SIP = readRDS(F)\nphyseq.SIP.m = physeq.SIP %>% sample_data\nphyseq.SIP",
"Pre-fraction dataset",
"%%R\nF = file.path(physeqDir, physeq_bulk_core)\nphyseq.bulk = readRDS(F)\nphyseq.bulk.m = physeq.bulk %>% sample_data\nphyseq.bulk\n\n%%R \n# parsing out to just 12C-Con gradients\nphyseq.bulk.f = prune_samples((physeq.bulk.m$Exp_type == 'microcosm_bulk') | \n (physeq.bulk.m$Exp_type == 'SIP' & \n physeq.bulk.m$Substrate == '12C-Con'),\n physeq) %>% \n filter_taxa(function(x) sum(x) > 0, TRUE)\nphyseq.bulk.f.m = physeq.bulk.f %>% sample_data %>% as.matrix %>% as.data.frame \nphyseq.bulk.f ",
"TODO"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
GoogleCloudPlatform/training-data-analyst
|
courses/machine_learning/deepdive2/building_production_ml_systems/labs/4b_streaming_data_inference_vertex.ipynb
|
apache-2.0
|
[
"Working with Streaming Data\nLearning Objectives\n 1. Learn how to process real-time data for ML models using Cloud Dataflow\n 2. Learn how to serve online predictions using real-time data\nIntroduction\nIt can be useful to leverage real time data in a machine learning model when making a prediction. However, doing so requires setting up a streaming data pipeline which can be non-trivial. \nTypically you will have the following:\n - A series of IoT devices generating and sending data from the field in real-time (in our case these are the taxis)\n - A messaging bus to that receives and temporarily stores the IoT data (in our case this is Cloud Pub/Sub)\n - A streaming processing service that subscribes to the messaging bus, windows the messages and performs data transformations on each window (in our case this is Cloud Dataflow)\n - A persistent store to keep the processed data (in our case this is BigQuery)\nThese steps happen continuously and in real-time, and are illustrated by the blue arrows in the diagram below. \nOnce this streaming data pipeline is established, we need to modify our model serving to leverage it. This simply means adding a call to the persistent store (BigQuery) to fetch the latest real-time data when a prediction request comes in. This flow is illustrated by the red arrows in the diagram below. \n<img src='../assets/taxi_streaming_data.png' width='80%'>\nIn this lab we will address how to process real-time data for machine learning models. We will use the same data as our previous 'taxifare' labs, but with the addition of trips_last_5min data as an additional feature. This is our proxy for real-time traffic.",
"import numpy as np\nimport os\nimport shutil\nimport tensorflow as tf\n\nfrom google.cloud import aiplatform\nfrom google.cloud import bigquery\nfrom google.protobuf import json_format\nfrom google.protobuf.struct_pb2 import Value\nfrom matplotlib import pyplot as plt\nfrom tensorflow import keras\nfrom tensorflow.keras.callbacks import TensorBoard\nfrom tensorflow.keras.layers import Dense, DenseFeatures\nfrom tensorflow.keras.models import Sequential\n\nprint(tf.__version__)\n\nPROJECT = 'cloud-training-demos' # REPLACE WITH YOUR PROJECT ID\nBUCKET = 'cloud-training-demos' # REPLACE WITH YOUR BUCKET NAME\nREGION = 'us-central1' # REPLACE WITH YOUR BUCKET REGION e.g. us-central1\n\n# For Bash Code\nos.environ['PROJECT'] = PROJECT\nos.environ['BUCKET'] = BUCKET\nos.environ['REGION'] = REGION\n\n%%bash\ngcloud config set project $PROJECT",
"Re-train our model with trips_last_5min feature\nIn this lab, we want to show how to process real-time data for training and prediction. So, we need to retrain our previous model with this additional feature. Go through the notebook 4a_streaming_data_training.ipynb. Open and run the notebook to train and save a model. This notebook is very similar to what we did in the Introduction to Tensorflow module but note the added feature for trips_last_5min in the model and the dataset.\nSimulate Real Time Taxi Data\nSince we don’t actually have real-time taxi data we will synthesize it using a simple python script. The script publishes events to Google Cloud Pub/Sub.\nInspect the iot_devices.py script in the taxicab_traffic folder. It is configured to send about 2,000 trip messages every five minutes with some randomness in the frequency to mimic traffic fluctuations. These numbers come from looking at the historical average of taxi ride frequency in BigQuery. \nIn production this script would be replaced with actual taxis with IoT devices sending trip data to Cloud Pub/Sub. \nTo execute the iot_devices.py script, launch a terminal and navigate to the training-data-analyst/courses/machine_learning/deepdive2/building_production_ml_systems/solutions directory. Then run the following two commands.\nbash\nPROJECT_ID=$(gcloud config list project --format \"value(core.project)\")\npython3 ./taxicab_traffic/iot_devices.py --project=$PROJECT_ID\nYou will see new messages being published every 5 seconds. Keep this terminal open so it continues to publish events to the Pub/Sub topic. If you open Pub/Sub in your Google Cloud Console, you should be able to see a topic called taxi_rides.\nCreate a BigQuery table to collect the processed data\nIn the next section, we will create a dataflow pipeline to write processed taxifare data to a BigQuery Table, however that table does not yet exist. Execute the following commands to create a BigQuery dataset called taxifare and a table within that dataset called traffic_realtime.",
"bq = bigquery.Client()\n\ndataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\ntry:\n bq.create_dataset(dataset) # will fail if dataset already exists\n print(\"Dataset created.\")\nexcept:\n print(\"Dataset already exists.\")",
"Next, we create a table called traffic_realtime and set up the schema.",
"dataset = bigquery.Dataset(bq.dataset(\"taxifare\"))\n\ntable_ref = dataset.table(\"traffic_realtime\")\nSCHEMA = [\n bigquery.SchemaField(\"trips_last_5min\", \"INTEGER\", mode=\"REQUIRED\"),\n bigquery.SchemaField(\"time\", \"TIMESTAMP\", mode=\"REQUIRED\"),\n]\ntable = bigquery.Table(table_ref, schema=SCHEMA)\n\ntry:\n bq.create_table(table)\n print(\"Table created.\")\nexcept:\n print(\"Table already exists.\")",
"Launch Streaming Dataflow Pipeline\nNow that we have our taxi data being pushed to Pub/Sub, and our BigQuery table set up, let’s consume the Pub/Sub data using a streaming DataFlow pipeline.\nThe pipeline is defined in ./taxicab_traffic/streaming_count.py. Open that file and inspect it. \nThere are 5 transformations being applied:\n - Read from PubSub\n - Window the messages\n - Count number of messages in the window\n - Format the count for BigQuery\n - Write results to BigQuery\nTODO: Open the file ./taxicab_traffic/streaming_count.py and find the TODO there. Specify a sliding window that is 5 minutes long, and gets recalculated every 15 seconds. Hint: Reference the beam programming guide for guidance. To check your answer reference the solution. \nFor the second transform, we specify a sliding window that is 5 minutes long, and recalculate values every 15 seconds. \nIn a new terminal, launch the dataflow pipeline using the command below. You can change the BUCKET variable, if necessary. Here it is assumed to be your PROJECT_ID.\nbash\nPROJECT_ID=$(gcloud config list project --format \"value(core.project)\")\nBUCKET=$PROJECT_ID # CHANGE AS NECESSARY \npython3 ./taxicab_traffic/streaming_count.py \\\n --input_topic taxi_rides \\\n --runner=DataflowRunner \\\n --project=$PROJECT_ID \\\n --temp_location=gs://$BUCKET/dataflow_streaming\nOnce you've submitted the command above you can examine the progress of that job in the Dataflow section of Cloud console. \nExplore the data in the table\nAfter a few moments, you should also see new data written to your BigQuery table as well. \nRe-run the query periodically to observe new data streaming in! You should see a new row every 15 seconds.",
"%%bigquery\nSELECT\n *\nFROM\n `taxifare.traffic_realtime`\nORDER BY\n time DESC\nLIMIT 10",
"Make predictions from the new data\nIn the rest of the lab, we'll referece the model we trained and deployed from the previous labs, so make sure you have run the code in the 4a_streaming_data_training.ipynb notebook. \nThe add_traffic_last_5min function below will query the traffic_realtime table to find the most recent traffic information and add that feature to our instance for prediction.\nExercise. Complete the code in the function below. Write a SQL query that will return the most recent entry in traffic_realtime and add it to the instance.",
"# TODO 2a. Write a function to take most recent entry in `traffic_realtime` table and add it to instance.\ndef add_traffic_last_5min(instance):\n bq = bigquery.Client()\n query_string = \"\"\"\n TODO: Your code goes here\n \"\"\"\n trips = bq.query(query_string).to_dataframe()['trips_last_5min'][0]\n instance['traffic_last_5min'] = # TODO: Your code goes here.\n return instance",
"The traffic_realtime table is updated in realtime using Cloud Pub/Sub and Dataflow so, if you run the cell below periodically, you should see the traffic_last_5min feature added to the instance and change over time.",
"add_traffic_last_5min(instance={'dayofweek': 4,\n 'hourofday': 13,\n 'pickup_longitude': -73.99,\n 'pickup_latitude': 40.758,\n 'dropoff_latitude': 41.742,\n 'dropoff_longitude': -73.07})",
"Finally, we'll use the python api to call predictions on an instance, using the realtime traffic information in our prediction. Just as above, you should notice that our resulting predicitons change with time as our realtime traffic information changes as well.\nExercise. Complete the code below to call prediction on an instance incorporating realtime traffic info. You should\n- use the function add_traffic_last_5min to add the most recent realtime traffic data to the prediction instance\n- call prediction on your model for this realtime instance and save the result as a variable called response\n- parse the json of response to print the predicted taxifare cost\nCopy the ENDPOINT_ID from the deployment in the previous lab to the beginning of the block below.",
"# TODO 2b. Write code to call prediction on instance using realtime traffic info.\n#Hint: Look at this sample https://github.com/googleapis/python-aiplatform/blob/master/samples/snippets/predict_custom_trained_model_sample.py\n\nENDPOINT_ID = # TODO: Copy the `ENDPOINT_ID` from the deployment in the previous lab.\n\napi_endpoint = f'{REGION}-aiplatform.googleapis.com'\n\n# The AI Platform services require regional API endpoints.\nclient_options = {\"api_endpoint\": api_endpoint}\n# Initialize client that will be used to create and send requests.\n# This client only needs to be created once, and can be reused for multiple requests.\nclient = aiplatform.gapic.PredictionServiceClient(client_options=client_options)\n\ninstance = {'dayofweek': 4,\n 'hourofday': 13,\n 'pickup_longitude': -73.99,\n 'pickup_latitude': 40.758,\n 'dropoff_latitude': 41.742,\n 'dropoff_longitude': -73.07}\n\n# The format of each instance should conform to the deployed model's prediction input schema.\ninstance_dict = # TODO: Your code goes here.\n\ninstance = json_format.ParseDict(instance_dict, Value())\ninstances = [instance]\nendpoint = client.endpoint_path(\n project=PROJECT, location=REGION, endpoint=ENDPOINT_ID\n)\nresponse = # TODO: Your code goes here.\n \n# The predictions are a google.protobuf.Value representation of the model's predictions.\nprint(\" prediction:\", # TODO: Your code goes here.\n",
"Cleanup\nIn order to avoid ongoing charges, when you are finished with this lab, you can delete your Dataflow job of that job from the Dataflow section of Cloud console.\nAn endpoint with a model deployed to it incurs ongoing charges, as there must be at least one replica defined (the min-replica-count parameter is at least 1). In order to stop incurring charges, you can click on the endpoint on the Endpoints page of the Cloud Console and un-deploy your model.\nCopyright 2021 Google Inc. Licensed under the Apache License, Version 2.0 (the \"License\"); you may not use this file except in compliance with the License. You may obtain a copy of the License at http://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
OMS-NetZero/FAIR
|
notebooks/Example-Usage.ipynb
|
apache-2.0
|
[
"FaIR\nThis notebook gives some simple examples of how to run and use the Finite Amplitude Impulse Response (FaIR) model. \nThe Finite Amplitude Impulse Response (FaIR) model is a simple emissions-based climate model. It allows the user to input emissions of greenhouse gases and short lived climate forcers in order to estimate global mean atmospheric GHG concentrations, radiative forcing and temperature anomalies.\nThe original FaIR model (v1.0) was developed to simulate the earth system response to CO$_2$ emissions, with all non-CO$_2$ forcing implemented as an \"external\" source. It was developed by Richard Millar, Zebedee Nicholls, Pierre Friedlingstein and Myles Allen. The motivation for developing it and its formulation is documented in a paper published in Atmospheric Chemistry and Physics in 2017 (doi:10.5194/acp-2016-405).\nThe emissions-based model extends FaIR by replacing all sources of non-CO$_2$ forcing with relationships that are based on the source emissions, with the exception of natural forcings (viz. variations in solar irradiance and volcanic eruptions). It is useful for assessing future policy commitments to anthropogenic emissions (something which we can control) than to radiative forcing (something which is less certain and which we can only partially control).\nThe emissions based model was developed by Chris Smith with input from Piers Forster, Leighton Regayre and Giovanni Passerello, in parallel with Nicolas Leach, Richard Millar and Myles Allen.",
"%matplotlib inline\n\nimport fair\n\nimport numpy as np\n\nfrom matplotlib import pyplot as plt\nplt.style.use('seaborn-darkgrid')\nplt.rcParams['figure.figsize'] = (16, 9)",
"Basic run\nHere we show how FaIR can be run with step change CO$_2$ emissions and sinusoidal non-CO$_2$ forcing timeseries.",
"emissions = np.zeros(250)\nemissions[125:] = 10.0\nother_rf = np.zeros(emissions.size)\nfor x in range(0, emissions.size):\n other_rf[x] = 0.5 * np.sin(2 * np.pi * (x) / 14.0)\n \nC,F,T = fair.forward.fair_scm(\n emissions=emissions,\n other_rf=other_rf,\n useMultigas=False\n)\n\nfig = plt.figure()\nax1 = fig.add_subplot(221)\nax1.plot(range(0, emissions.size), emissions, color='black')\nax1.set_ylabel('Emissions (GtC)')\nax2 = fig.add_subplot(222)\nax2.plot(range(0, emissions.size), C, color='blue')\nax2.set_ylabel('CO$_2$ concentrations (ppm)')\nax3 = fig.add_subplot(223)\nax3.plot(range(0, emissions.size), other_rf, color='orange')\nax3.set_ylabel('Other radiative forcing (W.m$^{-2}$)')\nax4 = fig.add_subplot(224)\nax4.plot(range(0, emissions.size), T, color='red')\nax4.set_ylabel('Temperature anomaly (K)');",
"RCPs\nWe can run FaIR with the CO$_2$ emissions and non-CO$_2$ forcing from the four representative concentration pathway scenarios. To use the emissions-based version specify useMultigas=True in the call to fair_scm().\nBy default in multi-gas mode, volcanic and solar forcing plus natural emissions of methane and nitrous oxide are switched on.",
"from fair.RCPs import rcp3pd, rcp45, rcp6, rcp85\n\nfig = plt.figure()\nax1 = fig.add_subplot(221)\nax2 = fig.add_subplot(222)\nax3 = fig.add_subplot(223)\nax4 = fig.add_subplot(224)\n\nC26, F26, T26 = fair.forward.fair_scm(emissions=rcp3pd.Emissions.emissions)\nax1.plot(rcp3pd.Emissions.year, rcp3pd.Emissions.co2_fossil, color='green', label='RCP3PD')\nax2.plot(rcp3pd.Emissions.year, C26[:, 0], color='green')\nax3.plot(rcp3pd.Emissions.year, np.sum(F26, axis=1), color='green')\nax4.plot(rcp3pd.Emissions.year, T26, color='green')\n\nC45, F45, T45 = fair.forward.fair_scm(emissions=rcp45.Emissions.emissions)\nax1.plot(rcp45.Emissions.year, rcp45.Emissions.co2_fossil, color='blue', label='RCP4.5')\nax2.plot(rcp45.Emissions.year, C45[:, 0], color='blue')\nax3.plot(rcp45.Emissions.year, np.sum(F45, axis=1), color='blue')\nax4.plot(rcp45.Emissions.year, T45, color='blue')\n\nC60, F60, T60 = fair.forward.fair_scm(emissions=rcp6.Emissions.emissions)\nax1.plot(rcp6.Emissions.year, rcp6.Emissions.co2_fossil, color='red', label='RCP6')\nax2.plot(rcp6.Emissions.year, C60[:, 0], color='red')\nax3.plot(rcp6.Emissions.year, np.sum(F60, axis=1), color='red')\nax4.plot(rcp6.Emissions.year, T60, color='red')\n\nC85, F85, T85 = fair.forward.fair_scm(emissions=rcp85.Emissions.emissions)\nax1.plot(rcp85.Emissions.year, rcp85.Emissions.co2_fossil, color='black', label='RCP8.5')\nax2.plot(rcp85.Emissions.year, C85[:, 0], color='black')\nax3.plot(rcp85.Emissions.year, np.sum(F85, axis=1), color='black')\nax4.plot(rcp85.Emissions.year, T85, color='black')\n\nax1.set_ylabel('Fossil CO$_2$ Emissions (GtC)')\nax1.legend()\nax2.set_ylabel('CO$_2$ concentrations (ppm)')\nax3.set_ylabel('Total radiative forcing (W.m$^{-2}$)')\nax4.set_ylabel('Temperature anomaly (K)');",
"Concentrations of well-mixed greenhouse gases\nThe output of FaIR (in most cases) is a 3-element tuple of concentrations, effective radiative forcing and temperature change since pre-industrial. Concentrations are a 31-column array of greenhouse gases. The indices correspond to the order given in the RCP concentration datasets (table 2 in Smith et al., https://www.geosci-model-dev-discuss.net/gmd-2017-266/). We can investigate the GHG concentrations coming out of the model:",
"fig = plt.figure()\nax1 = fig.add_subplot(221)\nax2 = fig.add_subplot(222)\nax3 = fig.add_subplot(223)\nax4 = fig.add_subplot(224)\n\nax1.plot(rcp3pd.Emissions.year, C26[:,1], color='green', label='RCP3PD')\nax1.plot(rcp45.Emissions.year, C45[:,1], color='blue', label='RCP4.5')\nax1.plot(rcp6.Emissions.year, C60[:,1], color='red', label='RCP6')\nax1.plot(rcp85.Emissions.year, C85[:,1], color='black', label='RCP8.5')\nax1.set_title(\"Methane concentrations, ppb\")\n\nax2.plot(rcp3pd.Emissions.year, C26[:,2], color='green', label='RCP3PD')\nax2.plot(rcp45.Emissions.year, C45[:,2], color='blue', label='RCP4.5')\nax2.plot(rcp6.Emissions.year, C60[:,2], color='red', label='RCP6')\nax2.plot(rcp85.Emissions.year, C85[:,2], color='black', label='RCP8.5')\nax2.set_title(\"Nitrous oxide concentrations, ppb\")\n\n# How to convert the H and F gases to single-species equivalents? Weight by radiative efficiency.\nfrom fair.constants import radeff\nC26_hfc134a_eq = np.sum(C26[:,3:15]*radeff.aslist[3:15],axis=1)/radeff.HFC134A # indices 3:15 are HFCs and PFCs\nC45_hfc134a_eq = np.sum(C45[:,3:15]*radeff.aslist[3:15],axis=1)/radeff.HFC134A\nC60_hfc134a_eq = np.sum(C60[:,3:15]*radeff.aslist[3:15],axis=1)/radeff.HFC134A\nC85_hfc134a_eq = np.sum(C85[:,3:15]*radeff.aslist[3:15],axis=1)/radeff.HFC134A\n\nC26_cfc12_eq = np.sum(C26[:,15:31]*radeff.aslist[15:31],axis=1)/radeff.CFC12 # indices 15:31 are ozone depleters\nC45_cfc12_eq = np.sum(C45[:,15:31]*radeff.aslist[15:31],axis=1)/radeff.CFC12\nC60_cfc12_eq = np.sum(C60[:,15:31]*radeff.aslist[15:31],axis=1)/radeff.CFC12\nC85_cfc12_eq = np.sum(C85[:,15:31]*radeff.aslist[15:31],axis=1)/radeff.CFC12\n\nax3.plot(rcp3pd.Emissions.year, C26_hfc134a_eq, color='green', label='RCP3PD')\nax3.plot(rcp45.Emissions.year, C45_hfc134a_eq, color='blue', label='RCP4.5')\nax3.plot(rcp6.Emissions.year, C60_hfc134a_eq, color='red', label='RCP6')\nax3.plot(rcp85.Emissions.year, C85_hfc134a_eq, color='black', label='RCP8.5')\nax3.set_title(\"HFC134a equivalent concentrations, ppt\")\n\nax4.plot(rcp3pd.Emissions.year, C26_cfc12_eq, color='green', label='RCP3PD')\nax4.plot(rcp45.Emissions.year, C45_cfc12_eq, color='blue', label='RCP4.5')\nax4.plot(rcp6.Emissions.year, C60_cfc12_eq, color='red', label='RCP6')\nax4.plot(rcp85.Emissions.year, C85_cfc12_eq, color='black', label='RCP8.5')\nax4.set_title(\"CFC12 equivalent concentrations, ppt\")\nax1.legend()",
"Radiative forcing\nWe consider 13 separate species of radiative forcing: CO$_2$, CH$_4$, N$_2$O, minor GHGs, tropospheric ozone, stratospheric ozone, stratospheric water vapour from methane oxidation, contrails, aerosols, black carbon on snow, land use change, volcanic and solar (table 3 in Smith et al., https://www.geosci-model-dev.net/11/2273/2018/gmd-11-2273-2018.pdf). Here we show some of the more interesting examples.",
"fig = plt.figure()\nax1 = fig.add_subplot(221)\nax2 = fig.add_subplot(222)\nax3 = fig.add_subplot(223)\nax4 = fig.add_subplot(224)\n\nax1.plot(rcp3pd.Emissions.year, F26[:,4], color='green', label='RCP3PD')\nax1.plot(rcp45.Emissions.year, F45[:,4], color='blue', label='RCP4.5')\nax1.plot(rcp6.Emissions.year, F60[:,4], color='red', label='RCP6')\nax1.plot(rcp85.Emissions.year, F85[:,4], color='black', label='RCP8.5')\nax1.set_title(\"Tropospheric ozone forcing, W m$^{-2}$\")\n\nax2.plot(rcp3pd.Emissions.year, F26[:,5], color='green', label='RCP3PD')\nax2.plot(rcp45.Emissions.year, F45[:,5], color='blue', label='RCP4.5')\nax2.plot(rcp6.Emissions.year, F60[:,5], color='red', label='RCP6')\nax2.plot(rcp85.Emissions.year, F85[:,5], color='black', label='RCP8.5')\nax2.set_title(\"Stratospheric ozone forcing, W m$^{-2}$\")\n\nax3.plot(rcp3pd.Emissions.year, F26[:,8], color='green', label='RCP3PD')\nax3.plot(rcp45.Emissions.year, F45[:,8], color='blue', label='RCP4.5')\nax3.plot(rcp6.Emissions.year, F60[:,8], color='red', label='RCP6')\nax3.plot(rcp85.Emissions.year, F85[:,8], color='black', label='RCP8.5')\nax3.set_title(\"Aerosol forcing, W ~m$^{-2}$\")\n\nax4.plot(rcp3pd.Emissions.year, F26[:,10], color='green', label='RCP3PD')\nax4.plot(rcp45.Emissions.year, F45[:,10], color='blue', label='RCP4.5')\nax4.plot(rcp6.Emissions.year, F60[:,10], color='red', label='RCP6')\nax4.plot(rcp85.Emissions.year, F85[:,10], color='black', label='RCP8.5')\nax4.set_title(\"Land use forcing, W m$^{-2}$\")\nax1.legend();",
"Ensemble generation\nAn advantage of FaIR is that it is very quick to run (much less than a second on an average machine). Therefore it can be used to generate probabilistic future ensembles. We'll show a 100-member ensemble.",
"from scipy import stats\nfrom fair.tools.ensemble import tcrecs_generate\n\n# generate some joint lognormal TCR and ECS pairs\ntcrecs = tcrecs_generate(n=100, seed=38571)\n\n# generate some forcing scale factors with SD of 10% of the best estimate\nF_scale = stats.norm.rvs(size=(100,13), loc=1, scale=0.1, random_state=40000)\n\n# do the same for the carbon cycle parameters\nr0 = stats.norm.rvs(size=100, loc=35, scale=3.5, random_state=41000)\nrc = stats.norm.rvs(size=100, loc=0.019, scale=0.0019, random_state=42000)\nrt = stats.norm.rvs(size=100, loc=4.165, scale=0.4165, random_state=45000)\n\nT = np.zeros((736,100))\n\n%%time\nfor i in range(100):\n _, _, T[:,i] = fair.forward.fair_scm(emissions=rcp85.Emissions.emissions,\n r0 = r0[i],\n rc = rc[i],\n rt = rt[i],\n tcrecs = tcrecs[i,:],\n scale = F_scale[i,:],\n F2x = 3.74*F_scale[i,0]) # scale F2x with the CO2 scaling factor for consistency\n\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax1.plot(rcp85.Emissions.year, T);",
"The resulting projections show a large spread. Some of these ensemble members are unrealistic, ranging from around 0.4 to 2.0 K temperature change in the present day, whereas we know in reality it is more like 0.9 (plus or minus 0.2). Therefore we can constrain this ensemble to observations.",
"try:\n # For Python 3.0 and later\n from urllib.request import urlopen\nexcept ImportError:\n # Fall back to Python 2's urllib2\n from urllib2 import urlopen\n \nfrom fair.tools.constrain import hist_temp\n\n# load up Cowtan and Way data remotely\nurl = 'http://www-users.york.ac.uk/~kdc3/papers/coverage2013/had4_krig_annual_v2_0_0.txt'\nresponse = urlopen(url)\n\nCW = np.loadtxt(response)\nconstrained = np.zeros(100, dtype=bool)\n\nfor i in range(100):\n # we use observed trend from 1880 to 2016\n constrained[i], _, _, _, _ = hist_temp(CW[30:167,1], T[1880-1765:2017-1765,i], CW[30:167,0])\n \n# How many ensemble members passed the constraint?\nprint('%d ensemble members passed historical constraint' % np.sum(constrained))\n\n# What does this do to the ensemble?\nfig = plt.figure()\nax1 = fig.add_subplot(111)\nax1.plot(rcp85.Emissions.year, T[:,constrained]);",
"Some, but not all, of the higher end scenarios have been constrained out, but there is still quite a large range of total temperature change projected for 2500 even under this constraint.\nFrom these constraints it is possible to obtain posterior distributions on effective radiative forcing, ECS, TCR, TCRE and other metrics."
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
planet-os/notebooks
|
api-examples/cams_air_quality_demo_calu2020.ipynb
|
mit
|
[
"Analyzing the Air Pollution Spike Caused by the Thomas Fire\nThe ongoing Thomas Fire north of Los Angeles has already burned across 270,000 acres and is causing hazardous air pollution in the region. In light of that, we’ve added a high-quality global air pollution dataset to the Planet OS Datahub that provides a 5-day air quality forecast.\nThe Copernicus Atmosphere Monitoring Service uses a comprehensive global monitoring and forecasting system that estimates the state of the atmosphere on a daily basis, combining information from models and observations, to provide a daily 5-day global surface forecast. \nThe Planet OS Datahub provides 28 different variables from the CAMS Air Quality Forecast dataset. I’ve used PM2.5 in my analysis of the Thomas Fire as these particles, often described as the fine particles, are up to 30 times smaller than the width of a human hair. These tiny particles are small enough to be breathed deep into the lungs, making them very dangerous to people’s health.\nAs we would like to have data about the Continental United States we will download data by using Package API. Then we will create a widget where you can choose timestamp by using a slider. After that, we will also save the same data as a GIF to make sharing the results with friends and colleagues more fun. And finally, we make a plot from the specific location where the wildfires are occuring right now - Santa Barbara, California.",
"%matplotlib notebook\n%matplotlib inline\nimport numpy as np\nimport dh_py_access.lib.datahub as datahub\nimport xarray as xr\nimport matplotlib.pyplot as plt\nimport ipywidgets as widgets\nfrom mpl_toolkits.basemap import Basemap\nimport dh_py_access.package_api as package_api\nimport matplotlib.colors as colors\nimport warnings\nimport shutil\nimport imageio\nimport os\nwarnings.filterwarnings(\"ignore\")",
"<font color='red'>Please put your datahub API key into a file called APIKEY and place it to the notebook folder or assign your API key directly to the variable API_key!</font>",
"server = 'api.planetos.com'\nAPI_key = open('APIKEY').readlines()[0].strip() #'<YOUR API KEY HERE>'\nversion = 'v1'",
"At first, we need to define the dataset name and a variable we want to use.",
"dh = datahub.datahub(server,version,API_key)\ndataset = 'cams_nrt_forecasts_global'\nvariable_name1 = 'pm2p5'",
"Then we define spatial range. We decided to analyze US, where unfortunately catastrofic wildfires are taking place at the moment and influeces air quality.",
"area_name = 'USA'\nlatitude_north = 49.138; longitude_west = -128.780\nlatitude_south = 24.414; longitude_east = -57.763",
"Download the data with package API\n\nCreate package objects\nSend commands for the package creation\nDownload the package files",
"package_cams = package_api.package_api(dh,dataset,variable_name1,longitude_west,longitude_east,latitude_south,latitude_north,area_name=area_name)\n\npackage_cams.make_package()\n\npackage_cams.download_package()",
"Work with the downloaded files\nWe start with opening the files with xarray and adding PM2.5 as micrograms per cubic meter as well to make the values easier to understand and compare. After that, we will create a map plot with a time slider, then make a GIF using the images, and finally, we will look into a specific location.",
"dd1 = xr.open_dataset(package_cams.local_file_name)\ndd1['longitude'] = ((dd1.longitude+180) % 360) - 180\ndd1['pm2p5_micro'] = dd1.pm2p5 * 1000000000.\ndd1.pm2p5_micro.data[dd1.pm2p5_micro.data < 0] = np.nan",
"Here we are making a Basemap of the US that we will use for showing the data.",
"m = Basemap(projection='merc', lat_0 = 55, lon_0 = -4,\n resolution = 'i', area_thresh = 0.05,\n llcrnrlon=longitude_west, llcrnrlat=latitude_south,\n urcrnrlon=longitude_east, urcrnrlat=latitude_north)\nlons,lats = np.meshgrid(dd1.longitude.data,dd1.latitude.data)\nlonmap,latmap = m(lons,lats)",
"Now it is time to plot all the data. A great way to do it is to make an interactive widget, where you can choose time stamp by using a slider. \nAs the minimum and maximum values are very different, we are using logarithmic colorbar to visualize it better.\nOn the map we can see that the areas near Los Angeles have very high PM2.5 values due to the Thomas Fire. By using the slider we can see the air quality forecast, which shows how the pollution is expected to expand. \nWe are also adding a red dot to the map to mark the area, where the PM2.5 is the highest. Seems like most of the time it is near the Thomas Fire. We can also see that most of the Continental US is having PM2.5 values below the standard, which is 25 µg/m3.",
"vmax = np.nanmax(dd1.pm2p5_micro.data)\nvmin = 2\n\ndef loadimg(k):\n fig=plt.figure(figsize=(10,7))\n ax = fig.add_subplot(111)\n pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],\n norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')\n ilat,ilon = np.unravel_index(np.nanargmax(dd1.pm2p5_micro.data[k]),dd1.pm2p5_micro.data[k].shape)\n m.plot(lonmap[ilat,ilon],latmap[ilat,ilon],'ro')\n m.drawcoastlines()\n m.drawcountries()\n m.drawstates()\n cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])\n cbar.ax.set_yticklabels([0,10,100,1000]) \n plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])\n cbar.set_label('micrograms m^3')\n print(\"Maximum: \",\"%.2f\" % np.nanmax(dd1.pm2p5_micro.data[k]))\n\n plt.show()\nwidgets.interact(loadimg, k=widgets.IntSlider(min=0,max=(len(dd1.pm2p5_micro.data)-1),step=1,value=0, layout=widgets.Layout(width='100%')))",
"Let's include an image from the last time-step as well, because GitHub Preview doesn't show the time slider images.",
"loadimg(10)",
"With the function below we will save images you saw above to the local filesystem as a GIF, so it is easily shareable with others.",
"def make_ani():\n folder = './anim/'\n for k in range(len(dd1.pm2p5_micro)):\n filename = folder + 'ani_' + str(k).rjust(3,'0') + '.png'\n if not os.path.exists(filename):\n fig=plt.figure(figsize=(10,7))\n ax = fig.add_subplot(111)\n pcm = m.pcolormesh(lonmap,latmap,dd1.pm2p5_micro.data[k],\n norm = colors.LogNorm(vmin=vmin, vmax=vmax),cmap = 'rainbow')\n m.drawcoastlines()\n m.drawcountries()\n m.drawstates()\n cbar = plt.colorbar(pcm,fraction=0.02, pad=0.040,ticks=[10**0, 10**1, 10**2,10**3])\n cbar.ax.set_yticklabels([0,10,100,1000]) \n plt.title(str(dd1.pm2p5_micro.time[k].data)[:-10])\n ax.set_xlim()\n cbar.set_label('micrograms m^3')\n if not os.path.exists(folder):\n os.mkdir(folder)\n plt.savefig(filename,bbox_inches = 'tight')\n plt.close()\n\n files = sorted(os.listdir(folder))\n images = []\n for file in files:\n if not file.startswith('.'):\n filename = folder + file\n images.append(imageio.imread(filename))\n kargs = { 'duration': 0.1,'quantizer':2,'fps':5.0}\n imageio.mimsave('cams_pm2p5.gif', images, **kargs)\n print ('GIF is saved as cams_pm2p5.gif under current working directory')\n shutil.rmtree(folder)\nmake_ani()",
"To see data more specifically we need to choose the location. This time we decided to look into the place where PM2.5 is highest. Seems like at the moment it is the Santa Barbara area, where the Thomas Fire is taking place.",
"ilat,ilon = np.unravel_index(np.nanargmax(dd1.pm2p5_micro.data[1]),dd1.pm2p5_micro.data[1].shape)\nlon_max = -121.9; lat_max = 37.33 #dd1.latitude.data[ilat]\ndata_in_spec_loc = dd1.sel(longitude = lon_max,latitude=lat_max,method='nearest')\nprint ('Latitude ' + str(lat_max) + ' ; Longitude ' + str(lon_max))",
"In the plot below we can see the PM2.5 forecast on the surface layer. Note that the time zone on the graph is UTC while the time zone in Santa Barbara is UTC-08:00. The air pollution from the wildfire has exceeded a record 5,000 µg/m3, while the hourly norm is 25 µg/m3. We can also see some peaks every day around 12 pm UTC (4 am PST) and the lowest values are around 12 am UTC (4 pm PST). \nAlso, it is predicted that the pollution will start to go down on December the 21st. However, the values will continue to be very high during the night. This daily pattern where the air quality is the worst at night is caused by the temperature inversion. As the land is not heated by the sun during the night, and the winds tend to be weaker as well, the pollution gets trapped near the ground. Pollution also tends to be higher in the winter time when the days are shorter.",
"fig = plt.figure(figsize=(10,5))\nplt.plot(data_in_spec_loc.time,data_in_spec_loc.pm2p5_micro, '*-',linewidth = 1,c='blue',label = dataset) \nplt.xlabel('Time')\nplt.title('PM2.5 forecast for San Jose')\nplt.grid()",
"Finally, we will remove the package we downloaded.",
"os.remove(package_cams.local_file_name)"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
darkomen/TFG
|
ipython_notebooks/06_regulador_experto/ensayo4.ipynb
|
cc0-1.0
|
[
"Análisis de los datos obtenidos\nUso de ipython para el análsis y muestra de los datos obtenidos durante la producción.Se implementa un regulador experto. Los datos analizados son del día 13 de Agosto del 2015\nLos datos del experimento:\n* Hora de inicio: 10:30\n* Hora final : 11:00\n* Filamento extruido: 447cm \n* $T: 150ºC$\n* $V_{min} tractora: 1.5 mm/s$\n* $V_{max} tractora: 3.4 mm/s$\n* Los incrementos de velocidades en las reglas del sistema experto son distintas:\n * En los caso 3 y 5 se mantiene un incremento de +2.\n * En los casos 4 y 6 se reduce el incremento a -1.",
"#Importamos las librerías utilizadas\nimport numpy as np\nimport pandas as pd\nimport seaborn as sns\n\n#Mostramos las versiones usadas de cada librerías\nprint (\"Numpy v{}\".format(np.__version__))\nprint (\"Pandas v{}\".format(pd.__version__))\nprint (\"Seaborn v{}\".format(sns.__version__))\n\n#Abrimos el fichero csv con los datos de la muestra\ndatos = pd.read_csv('ensayo4.CSV')\n\n%pylab inline\n\n#Almacenamos en una lista las columnas del fichero con las que vamos a trabajar\ncolumns = ['Diametro X','Diametro Y', 'RPM TRAC']\n\n#Mostramos un resumen de los datos obtenidoss\ndatos[columns].describe()\n#datos.describe().loc['mean',['Diametro X [mm]', 'Diametro Y [mm]']]",
"Representamos ambos diámetro y la velocidad de la tractora en la misma gráfica",
"graf = datos.ix[:, \"Diametro X\"].plot(figsize=(16,10),ylim=(0.5,3))\ngraf.axhspan(1.65,1.85, alpha=0.2)\ngraf.set_xlabel('Tiempo (s)')\ngraf.set_ylabel('Diámetro (mm)')\n#datos['RPM TRAC'].plot(secondary_y='RPM TRAC')\n\nbox = datos.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')\nbox.axhspan(1.65,1.85, alpha=0.2)",
"Con esta segunda aproximación se ha conseguido estabilizar los datos. Se va a tratar de bajar ese porcentaje. Como cuarta aproximación, vamos a modificar las velocidades de tracción. El rango de velocidades propuesto es de 1.5 a 5.3, manteniendo los incrementos del sistema experto como en el actual ensayo.\nComparativa de Diametro X frente a Diametro Y para ver el ratio del filamento",
"plt.scatter(x=datos['Diametro X'], y=datos['Diametro Y'], marker='.')",
"Filtrado de datos\nLas muestras tomadas $d_x >= 0.9$ or $d_y >= 0.9$ las asumimos como error del sensor, por ello las filtramos de las muestras tomadas.",
"datos_filtrados = datos[(datos['Diametro X'] >= 0.9) & (datos['Diametro Y'] >= 0.9)]\n\n#datos_filtrados.ix[:, \"Diametro X\":\"Diametro Y\"].boxplot(return_type='axes')",
"Representación de X/Y",
"plt.scatter(x=datos_filtrados['Diametro X'], y=datos_filtrados['Diametro Y'], marker='.')",
"Analizamos datos del ratio",
"ratio = datos_filtrados['Diametro X']/datos_filtrados['Diametro Y']\nratio.describe()\n\nrolling_mean = pd.rolling_mean(ratio, 50)\nrolling_std = pd.rolling_std(ratio, 50)\nrolling_mean.plot(figsize=(12,6))\n# plt.fill_between(ratio, y1=rolling_mean+rolling_std, y2=rolling_mean-rolling_std, alpha=0.5)\nratio.plot(figsize=(12,6), alpha=0.6, ylim=(0.5,1.5))",
"Límites de calidad\nCalculamos el número de veces que traspasamos unos límites de calidad. \n$Th^+ = 1.85$ and $Th^- = 1.65$",
"Th_u = 1.85\nTh_d = 1.65\n\ndata_violations = datos[(datos['Diametro X'] > Th_u) | (datos['Diametro X'] < Th_d) |\n (datos['Diametro Y'] > Th_u) | (datos['Diametro Y'] < Th_d)]\n\ndata_violations.describe()\n\ndata_violations.plot(subplots=True, figsize=(12,12))"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
quiltdata/quilt-compiler
|
docs/Walkthrough/Editing a Package.ipynb
|
apache-2.0
|
[
"Data in Quilt is organized in terms of data packages. A data package is a logical group of files, directories, and metadata.\nInitializing a package\nTo edit a new empty package, use the package constructor:",
"import quilt3\np = quilt3.Package()",
"To edit a preexisting package, we need to first make sure to install the package:",
"quilt3.Package.install(\n \"examples/hurdat\",\n \"s3://quilt-example\",\n)",
"Use browse to edit the package:",
"p = quilt3.Package.browse('examples/hurdat')",
"For more information on accessing existing packages see the section \"Installing a Package\".\nAdding data to a package\nUse the set and set_dir commands to add individual files and whole directories, respectively, to a Package:",
"# add entries individually using `set`\n# ie p.set(\"foo.csv\", \"/local/path/foo.csv\"),\n# p.set(\"bar.csv\", \"s3://bucket/path/bar.csv\")\n\n# create test data\nwith open(\"data.csv\", \"w\") as f:\n f.write(\"id, value\\na, 42\")\n\np = quilt3.Package()\np.set(\"data.csv\", \"data.csv\")\np.set(\"banner.png\", \"s3://quilt-example/imgs/banner.png\")\n\n# or grab everything in a directory at once using `set_dir`\n# ie p.set_dir(\"stuff/\", \"/path/to/stuff/\"),\n# p.set_dir(\"things/\", \"s3://path/to/things/\")\n\n# create test directory\nimport os\nos.mkdir(\"data\")\np.set_dir(\"stuff/\", \"./data/\")\np.set_dir(\"imgs/\", \"s3://quilt-example/imgs/\")",
"The first parameter to these functions is the logical key, which will determine where the file lives within the package. So after running the commands above our package will look like this:",
"p",
"The second parameter is the physical key, which states the file's actual location. The physical key may point to either a local file or a remote object (with an s3:// path).\nIf the physical key and the logical key are the same, you may omit the second argument:",
"# assuming data.csv is in the current directory\np = quilt3.Package()\np.set(\"data.csv\")",
"Another useful trick. Use \".\" to set the contents of the package to that of the current directory:",
"# switch to a test directory and create some test files\nimport os\n%cd data/\nos.mkdir(\"stuff\")\nwith open(\"new_data.csv\", \"w\") as f:\n f.write(\"id, value\\na, 42\")\n\n# set the contents of the package to that of the current directory\np.set_dir(\".\", \".\")",
"Deleting data in a package\nUse delete to remove entries from a package:",
"p.delete(\"data.csv\")",
"Note that this will only remove this piece of data from the package. It will not delete the actual data itself.\nAdding metadata to a package\nPackages support metadata anywhere in the package. To set metadata on package entries or directories, use the meta argument:",
"p = quilt3.Package()\np.set(\"data.csv\", \"new_data.csv\", meta={\"type\": \"csv\"})\np.set_dir(\"stuff/\", \"stuff/\", meta={\"origin\": \"unknown\"})",
"You can also set metadata on the package as a whole using set_meta.",
"# set metadata on a package\np.set_meta({\"package-type\": \"demo\"})"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
zerothi/sisl
|
docs/tutorials/tutorial_siesta_1.ipynb
|
mpl-2.0
|
[
"import os\nos.chdir('siesta_1')\nimport numpy as np\nfrom sisl import *\nimport matplotlib.pyplot as plt\n%matplotlib inline",
"Siesta --- the H2O molecule\nThis tutorial will describe a complete walk-through of some of the sisl functionalities that may be related to the Siesta code.\nCreating the geometry\nOur system of interest will be the $\\mathrm H_2\\mathrm O$ system. The first task will be to create the molecule geometry.\nThis is done using lists of atomic coordinates and atomic species. Additionally one needs to define the supercell (or if you prefer: unit-cell) where the molecule resides in. Siesta is a periodic DFT code and thus all directions are periodic. I.e. when simulating molecules it is vital to have a large vacuum gap between periodic images. In this case we use a supercell of side-lengths $10\\mathrm{Ang}$.",
"h2o = Geometry([[0, 0, 0], [0.8, 0.6, 0], [-0.8, 0.6, 0.]], \n [Atom('O'), Atom('H'), Atom('H')], \n sc=SuperCell(10, origin=[-5] * 3))",
"The input are the 1) xyz coordinates, 2) the atomic species and 3) the supercell that is attached.\nBy printing the object one gets basic information regarding the geometry, such as 1) number of atoms, 2) species of atoms, 3) number of orbitals, 4) orbitals associated with each atom and 5) number of supercells.",
"print(h2o)",
"So there are 3 atoms, 1 Oxygen and 2 Hydrogen. Currently there are only 1 orbital per atom.\nLater we will look into the details of orbitals associated with atoms and how they may be used for wavefunctions etc.\nLets visualize the atomic positions (here adding atomic indices)",
"plot(h2o)",
"Now we need to create the input fdf file for Siesta:",
"open('RUN.fdf', 'w').write(\"\"\"%include STRUCT.fdf\nSystemLabel siesta_1\nPAO.BasisSize SZP\nMeshCutoff 250. Ry\nCDF.Save true\nCDF.Compress 9\nSaveHS true\nSaveRho true\n\"\"\")\nh2o.write('STRUCT.fdf')",
"The first block of code simply writes a text-file with the required input, the last line of code tells the geometry (h2o) to write its information to the file STRUCT.fdf.\nIt automatically writes the following geometry information to the fdf file:\n- LatticeConstant\n- LatticeVectors\n- NumberOfAtoms\n- AtomicCoordinatesFormat\n- AtomicCoordinatesAndAtomicSpecies\n- NumberOfSpecies\n- ChemicalSpeciesLabel\nCreating the electronic structure\nBefore continuing we need to run Siesta to calculate the electronic structure.\nbash\nsiesta RUN.fdf\nAfter having completed the Siesta run we may read the Siesta output to manipulate and extract different information.",
"fdf = get_sile('RUN.fdf')\nH = fdf.read_hamiltonian()\n# Create a short-hand to handle the geometry\nh2o = H.geometry\nprint(H)",
"A lot of new information has appeared. The Hamiltonian object describes the non-orthogonal basis and the \"hopping\" elements between the orbitals. We see it is a non-orthogonal basis via: orthogonal: False. Secondly, we see it was an un-polarized calculation (Spin{unpolarized...).\nLastly the geometry information is printed again. Contrary to the previous printing of the geometry we now find additional information based on the orbitals on each of the atoms. This information is read from the Siesta output and thus the basic information regarding the orbital symmetry and the basis functions are now handled by sisl. The oxygen has 9 orbitals ($s+p+d$ where the $d$ orbitals are $p$-polarizations denoted by capital P). We also see that it is a single-$\\zeta$ calculation Z1 (for double-$\\zeta$ Z2 would also appear in the list). The hydrogens only has 4 orbitals $s+p$.\nFor each orbital one can see its maximal radial part and how initial charges are distributed.\nPlotting orbitals\nOften it may be educational (and fun) to plot the orbital wavefunctions. To do this we use the intrinsic method in the Orbital class named toGrid. The below code is rather complicated, but the complexity is simply because we want to show the orbitals in a rectangular grid of plots.",
"def plot_atom(atom):\n no = len(atom) # number of orbitals\n nx = no // 4\n ny = no // nx\n if nx * ny < no:\n nx += 1\n fig, axs = plt.subplots(nx, ny, figsize=(20, 5*nx))\n fig.suptitle('Atom: {}'.format(atom.symbol), fontsize=14)\n def my_plot(i, orb):\n grid = orb.toGrid(atom=atom)\n # Also write to a cube file\n grid.write('{}_{}.cube'.format(atom.symbol, orb.name()))\n c, r = i // 4, (i - 4) % 4\n if nx == 1:\n ax = axs[r]\n else:\n ax = axs[c][r]\n ax.imshow(grid.grid[:, :, grid.shape[2] // 2])\n ax.set_title(r'${}$'.format(orb.name(True)))\n ax.set_xlabel(r'$x$ [Ang]')\n ax.set_ylabel(r'$y$ [Ang]')\n i = 0\n for orb in atom:\n my_plot(i, orb)\n i += 1\n if i < nx * ny:\n # This removes the empty plots\n for j in range(i, nx * ny):\n c, r = j // 4, (j - 4) % 4\n if nx == 1:\n ax = axs[r]\n else:\n ax = axs[c][r]\n fig.delaxes(ax)\n plt.draw()\nplot_atom(h2o.atoms[0])\nplot_atom(h2o.atoms[1])",
"Hamiltonian eigenstates\nAt this point we have the full Hamiltonian as well as the basis functions used in the Siesta calculation.\nThis completes what is needed to calculate a great deal of physical quantities, e.g. eigenstates, density of states, projected density of states and wavefunctions.\nTo begin with we calculate the $\\Gamma$-point eigenstates and plot a subset of the eigenstates' norm on the geometry. An important aspect of the electronic structure handled by Siesta is that it is shifted to $E_F=0$ meaning that the HOMO level is the smallest negative eigenvalue, while the LUMO is the smallest positive eigenvalue:",
"es = H.eigenstate()\n\n# We specify an origin to center the molecule in the grid\nh2o.sc.origin = [-4, -4, -4]\n\n# Reduce the contained eigenstates to only the HOMO and LUMO\n# Find the index of the smallest positive eigenvalue\nidx_lumo = (es.eig > 0).nonzero()[0][0]\nes = es.sub([idx_lumo - 1, idx_lumo])\n_, ax = plt.subplots(1, 2, figsize=(14, 2));\nfor i, (norm2, color) in enumerate(zip(es.norm2(sum=False), 'rbg')):\n for ia in h2o:\n ax[i].scatter(h2o.xyz[ia, 0], h2o.xyz[ia, 1], 600 * norm2[h2o.a2o(ia, True)].sum(), facecolor=color, alpha=0.5);",
"These are not that interesting. The projection of the HOMO and LUMO states show where the largest weight of the HOMO and LUMO states, however we can't see the orbital symmetry differences between the HOMO and LUMO states.\nInstead of plotting the weight on each orbital it is more interesting to plot the actual wavefunctions which contains the orbital symmetries, however, matplotlib is currently not capable of plotting real-space iso-surface plots. To do this, please use VMD or your preferred software.",
"def integrate(g):\n print('Real space integrated wavefunction: {:.4f}'.format((np.absolute(g.grid) ** 2).sum() * g.dvolume))\ng = Grid(0.2, sc=h2o.sc)\nes.sub(0).wavefunction(g)\nintegrate(g)\n#g.write('HOMO.cube')\ng.fill(0) # reset the grid values to 0\nes.sub(1).wavefunction(g)\nintegrate(g)\n#g.write('LUMO.cube')",
"Real space charge\nSince we have the basis functions we can also plot the charge in the grid. We can do this via either reading the density matrix or read in the charge output directly from Siesta.\nSince both should yield the same value we can compare the output from Siesta with that calculated in sisl.\nYou will notice that re-creating the density on a real space grid in sisl is much slower than creating the wavefunction. This is because we need orbital multiplications.",
"DM = fdf.read_density_matrix()\nrho = get_sile('siesta_1.nc').read_grid('Rho')\n\nDM_rho = rho.copy()\nDM_rho.fill(0)\nDM.density(DM_rho)\ndiff = DM_rho - rho\nprint('Real space integrated density difference: {:.3e}'.format(diff.grid.sum() * diff.dvolume))"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
mne-tools/mne-tools.github.io
|
0.20/_downloads/7bbeb6a728b7d16c6e61cd487ba9e517/plot_morph_volume_stc.ipynb
|
bsd-3-clause
|
[
"%matplotlib inline",
"Morph volumetric source estimate\nThis example demonstrates how to morph an individual subject's\n:class:mne.VolSourceEstimate to a common reference space. We achieve this\nusing :class:mne.SourceMorph. Pre-computed data will be morphed based on\nan affine transformation and a nonlinear registration method\nknown as Symmetric Diffeomorphic Registration (SDR) by Avants et al. [1]_.\nTransformation is estimated from the subject's anatomical T1 weighted MRI\n(brain) to FreeSurfer's 'fsaverage' T1 weighted MRI (brain)\n<https://surfer.nmr.mgh.harvard.edu/fswiki/FsAverage>__.\nAfterwards the transformation will be applied to the volumetric source\nestimate. The result will be plotted, showing the fsaverage T1 weighted\nanatomical MRI, overlaid with the morphed volumetric source estimate.\nReferences\n.. [1] Avants, B. B., Epstein, C. L., Grossman, M., & Gee, J. C. (2009).\n Symmetric Diffeomorphic Image Registration with Cross- Correlation:\n Evaluating Automated Labeling of Elderly and Neurodegenerative\n Brain, 12(1), 26-41.",
"# Author: Tommy Clausner <tommy.clausner@gmail.com>\n#\n# License: BSD (3-clause)\nimport os\n\nimport nibabel as nib\nimport mne\nfrom mne.datasets import sample, fetch_fsaverage\nfrom mne.minimum_norm import apply_inverse, read_inverse_operator\nfrom nilearn.plotting import plot_glass_brain\n\nprint(__doc__)",
"Setup paths",
"sample_dir_raw = sample.data_path()\nsample_dir = os.path.join(sample_dir_raw, 'MEG', 'sample')\nsubjects_dir = os.path.join(sample_dir_raw, 'subjects')\n\nfname_evoked = os.path.join(sample_dir, 'sample_audvis-ave.fif')\nfname_inv = os.path.join(sample_dir, 'sample_audvis-meg-vol-7-meg-inv.fif')\n\nfname_t1_fsaverage = os.path.join(subjects_dir, 'fsaverage', 'mri',\n 'brain.mgz')\nfetch_fsaverage(subjects_dir) # ensure fsaverage src exists\nfname_src_fsaverage = subjects_dir + '/fsaverage/bem/fsaverage-vol-5-src.fif'",
"Compute example data. For reference see\nsphx_glr_auto_examples_inverse_plot_compute_mne_inverse_volume.py\nLoad data:",
"evoked = mne.read_evokeds(fname_evoked, condition=0, baseline=(None, 0))\ninverse_operator = read_inverse_operator(fname_inv)\n\n# Apply inverse operator\nstc = apply_inverse(evoked, inverse_operator, 1.0 / 3.0 ** 2, \"dSPM\")\n\n# To save time\nstc.crop(0.09, 0.09)",
"Get a SourceMorph object for VolSourceEstimate\nsubject_from can typically be inferred from\n:class:src <mne.SourceSpaces>,\nand subject_to is set to 'fsaverage' by default. subjects_dir can be\nNone when set in the environment. In that case SourceMorph can be initialized\ntaking src as only argument. See :class:mne.SourceMorph for more\ndetails.\nThe default parameter setting for zooms will cause the reference volumes\nto be resliced before computing the transform. A value of '5' would cause\nthe function to reslice to an isotropic voxel size of 5 mm. The higher this\nvalue the less accurate but faster the computation will be.\nThe recommended way to use this is to morph to a specific destination source\nspace so that different subject_from morphs will go to the same space.`\nA standard usage for volumetric data reads:",
"src_fs = mne.read_source_spaces(fname_src_fsaverage)\nmorph = mne.compute_source_morph(\n inverse_operator['src'], subject_from='sample', subjects_dir=subjects_dir,\n niter_affine=[10, 10, 5], niter_sdr=[10, 10, 5], # just for speed\n src_to=src_fs, verbose=True)",
"Apply morph to VolSourceEstimate\nThe morph can be applied to the source estimate data, by giving it as the\nfirst argument to the :meth:morph.apply() <mne.SourceMorph.apply> method:",
"stc_fsaverage = morph.apply(stc)",
"Convert morphed VolSourceEstimate into NIfTI\nWe can convert our morphed source estimate into a NIfTI volume using\n:meth:morph.apply(..., output='nifti1') <mne.SourceMorph.apply>.",
"# Create mri-resolution volume of results\nimg_fsaverage = morph.apply(stc, mri_resolution=2, output='nifti1')",
"Plot results",
"# Load fsaverage anatomical image\nt1_fsaverage = nib.load(fname_t1_fsaverage)\n\n# Plot glass brain (change to plot_anat to display an overlaid anatomical T1)\ndisplay = plot_glass_brain(t1_fsaverage,\n title='subject results to fsaverage',\n draw_cross=False,\n annotate=True)\n\n# Add functional data as overlay\ndisplay.add_overlay(img_fsaverage, alpha=0.75)",
"Reading and writing SourceMorph from and to disk\nAn instance of SourceMorph can be saved, by calling\n:meth:morph.save <mne.SourceMorph.save>.\nThis methods allows for specification of a filename under which the morph\nwill be save in \".h5\" format. If no file extension is provided, \"-morph.h5\"\nwill be appended to the respective defined filename::\n>>> morph.save('my-file-name')\n\nReading a saved source morph can be achieved by using\n:func:mne.read_source_morph::\n>>> morph = mne.read_source_morph('my-file-name-morph.h5')\n\nOnce the environment is set up correctly, no information such as\nsubject_from or subjects_dir must be provided, since it can be\ninferred from the data and used morph to 'fsaverage' by default, e.g.::\n>>> morph.apply(stc)"
] |
[
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |
chinapnr/python_study
|
Python 基础课程/Python Basic Lesson 06 - 随机数.ipynb
|
gpl-3.0
|
[
"Lesson 6\nv1.1, 2020.4 5, edit by David Yi\n本次内容要点\n\n随机数\n思考:猜数游戏等\n\n\n随机数\n随机数这一概念在不同领域有着不同的含义,在密码学、通信领域有着非常重要的用途。\nPython 的随机数模块是 random,random 模块主要有以下函数,结合例子来看看。\n\nrandom.choice() 从序列中获取一个随机元素\nrandom.sample() 创建指定范围内指定个数的随机数\nrandom.random() 用于生成一个0到1的随机浮点数\nrandom.uniform() 用于生成一个指定范围内的随机浮点数\nrandom.randint() 生成一个指定范围内的整数\nrandom.shuffle() 将一个列表中的元素打乱",
"import random\n\n# random.choice(sequence)。参数sequence表示一个有序类型。\n# random.choice 从序列中获取一个随机元素。\n\nprint(random.choice(range(1,100)))\n\n# 从一个列表中产生随机元素\nlist1 = ['a', 'b', 'c']\nprint(random.choice(list1))\n\n# random.sample()\n\n# 创建指定范围内指定个数的整数随机数\nprint(random.sample(range(1,100), 10))\n\nprint(random.sample(range(1,10), 5))\n\n# 如果要产生的随机数数量大于范围边界,会怎么样?\n# print(random.sample(range(1,10), 15))\n\n# random.randint(a, b),用于生成一个指定范围内的整数。\n# 其中参数a是下限,参数b是上限,生成的随机数n: a <= n <= b\nprint(random.randint(1,100))\n\n# random.randrange([start], stop[, step]),\n# 从指定范围内,按指定基数递增的集合中 获取一个随机数。\nprint(random.randrange(1,10))\n\n# 可以多运行几次,看看结果总是哪几个数字\nprint(random.randrange(1,10,3))\n\n# random.random()用于生成一个0到1的随机浮点数: 0 <= n < 1.0\n\nprint(random.random())\n\n\n# random.uniform(a, b),\n# 用于生成一个指定范围内的随机浮点数,两个参数其中一个是上限,一个是下限。\n# 如果a < b,则生成的随机数n: a <= n <= b。如果 a > b, 则 b <= n <= a。\n\nprint(random.uniform(1,100))\nprint(random.uniform(50,10))\n\n# random.shuffle(x[, random]),\n# 用于将一个列表中的元素打乱\na = [12, 23, 1, 5, 87]\nrandom.shuffle(a)\nprint(a)\n\n# random.sample(sequence, k),\n# 从指定序列中随机获取指定长度的片断。sample函数不会修改原有序列。\n\nprint(random.sample(range(10),5))\nprint(random.sample(range(10),7))",
"思考\n\n一个猜数程序",
"# 猜数,人猜\n# 简单版本\n\nimport random\n\na = random.randint(1,1000)\n\nprint('Now you can guess...')\n\nguess_mark = True\n\nwhile guess_mark:\n user_number =int(input('please input number:'))\n if user_number > a:\n print('too big')\n if user_number < a:\n print('too small')\n if user_number == a:\n print('bingo!')\n guess_mark = False\n\n# 猜数,人猜\n# 记录猜数的过程\n\nimport random\n\n# 记录人猜了多少数字\nuser_number_list = []\n\n# 记录人猜了几次\nuser_guess_count = 0\n\na = random.randint(1,100)\n\nprint('Now you can guess...')\nguess_mark = True\n\n# 主循环\nwhile guess_mark:\n user_number =int(input('please input number:'))\n \n user_number_list.append(user_number)\n user_guess_count += 1\n \n if user_number > a:\n print('too big')\n if user_number < a:\n print('too small')\n if user_number == a:\n print('bingo!')\n print('your guess number list:', user_number_list)\n print('you try times:', user_guess_count)\n guess_mark = False\n\n# 猜数,人猜\n# 增加判断次数,如果猜了大于4次显示不同提示语\n\nimport random\n\n# 记录人猜了多少数字\nuser_number_list = []\n\n# 记录人猜了几次\nuser_guess_count = 0\n\na = random.randint(1,100)\n\nprint('Now you can guess...')\n\nguess_mark = True\n\n# 主循环\nwhile guess_mark:\n \n if 0 <= user_guess_count <= 4:\n user_number =int(input('please input number:'))\n if 4 < user_guess_count <= 100:\n user_number =int(input('try harder, please input number:'))\n \n user_number_list.append(user_number)\n user_guess_count += 1\n \n if user_number > a:\n print('too big')\n if user_number < a:\n print('too small')\n if user_number == a:\n print('bingo!')\n print('your guess number list:', user_number_list)\n print('you try times:', user_guess_count)\n guess_mark = False\n",
"更加复杂的生成随机内容。\n\n可以参考我们开发的python 函数包中的 random 部分,https://fishbase.readthedocs.io/en/latest/fish_random.html\nfish_random.gen_random_address(zone) 通过省份行政区划代码,返回该省份的随机地址\nfish_random.get_random_areanote(zone) 省份行政区划代码,返回下辖的随机地区名称\nfish_random.gen_random_bank_card([…]) 通过指定的银行名称,随机生成该银行的卡号\nfish_random.gen_random_company_name() 随机生成一个公司名称\nfish_random.gen_random_float(minimum, maximum) 指定一个浮点数范围,随机生成并返回区间内的一个浮点数,区间为闭区间 受限于 random.random 精度限制,支持最大 15 位精度\nfish_random.gen_random_id_card([zone, …]) 根据指定的省份编号、性别或年龄,随机生成一个身份证号\nfish_random.gen_random_mobile() 随机生成一个手机号\nfish_random.gen_random_name([family_name, …]) 指定姓氏、性别、长度,返回随机人名,也可不指定生成随机人名\nfish_random.gen_random_str(min_length, …) 指定一个前后缀、字符串长度以及字符串包含字符类型,返回随机生成带有前后缀及指定长度的字符串",
"from fishbase.fish_random import *\n\n# 这些银行卡卡号只是符合规范,可以通过最基本的银行卡号规范检查,但是实际上是不存在的\n\n# 随机生成一张银行卡卡号\nprint(gen_random_bank_card())\n\n# 随机生成一张中国银行的借记卡卡号\nprint(gen_random_bank_card('中国银行', 'CC'))\n\n# 随机生成一张中国银行的贷记卡卡号\nprint(gen_random_bank_card('中国银行', 'DC'))\n\nfrom fishbase.fish_random import *\n\n# 生成假的身份证号码,符合标准身份证的分段设置和校验位\n\n# 指定身份证的地域\nprint(gen_random_id_card('310000'))\n\n# 增加指定年龄\nprint(gen_random_id_card('310000', age=70))\n\n# 增加年龄和性别\nprint(gen_random_id_card('310000', age=30, gender='00'))\n\n# 生成一组\nprint(gen_random_id_card(age=30, gender='01', result_type='LIST'))",
"猜数程序修改为机器猜,根据每次人返回的结果来调整策略",
"# 猜数,机器猜\n\nmin = 0\nmax = 1000\n\nguess_ok_mark = False\n\nwhile not guess_ok_mark:\n\n cur_guess = int((min + max) / 2)\n print('I guess:', cur_guess)\n human_answer = input('Please tell me big or small:')\n if human_answer == 'big':\n max = cur_guess\n if human_answer == 'small':\n min = cur_guess\n if human_answer == 'ok':\n print('HAHAHA')\n guess_ok_mark = True"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code"
] |
arcyfelix/Courses
|
17-09-17-Python-for-Financial-Analysis-and-Algorithmic-Trading/04-Visualization-Matplotlib-Pandas/04a-Matplotlib/Matplotlib Exercises A - Solved! .ipynb
|
apache-2.0
|
[
"<a href='http://www.pieriandata.com'> <img src='../../Pierian_Data_Logo.png' /></a>\n\nMatplotlib Exercises\nWelcome to the exercises for reviewing matplotlib! Take your time with these, Matplotlib can be tricky to understand at first. These are relatively simple plots, but they can be hard if this is your first time with matplotlib, feel free to reference the solutions as you go along.\nAlso don't worry if you find the matplotlib syntax frustrating, we actually won't be using it that often throughout the course, we will switch to using pandas built-in visualization capabilities. But, those are built-off of matplotlib, which is why it is still important to get exposure to it!\n * NOTE: ALL THE COMMANDS FOR PLOTTING A FIGURE SHOULD ALL GO IN THE SAME CELL. SEPARATING THEM OUT INTO MULTIPLE CELLS MAY CAUSE NOTHING TO SHOW UP. * \nExercises\nFollow the instructions to recreate the plots using this data:\nData",
"import numpy as np\nx = np.arange(0,100)\ny = x * 2\nz = x ** 2",
"Import matplotlib.pyplot as plt and set %matplotlib inline if you are using the jupyter notebook. What command do you use if you aren't using the jupyter notebook?",
"import matplotlib.pyplot as plt\n%matplotlib inline\n\nplt.show()",
"Exercise 1\n Follow along with these steps: \n* Create a figure object called fig using plt.figure() \n* Use add_axes to add an axis to the figure canvas at [0,0,1,1]. Call this new axis ax. \n* Plot (x,y) on that axes and set the labels and titles to match the plot below:",
"fig = plt.figure()\nax = fig.add_axes([0, 0, 1, 1])\n\nax.plot(x, y)\nax.set_xlabel('x')\nax.set_ylabel('y')\nax.set_title('title')",
"Exercise 2\n Create a figure object and put two axes on it, ax1 and ax2. Located at [0,0,1,1] and [0.2,0.5,.2,.2] respectively.",
"fig = plt.figure()\nax1 = fig.add_axes([0, 0, 1, 1])\nax2 = fig.add_axes([0.2, 0.5, .2, .2])",
"Now plot (x,y) on both axes. And call your figure object to show it.",
"fig = plt.figure()\nax1 = fig.add_axes([0, 0, 1, 1])\nax2 = fig.add_axes([0.2, 0.5, .2, .2])\n\nax1.plot(x, y)\nax1.set_xlabel('x')\nax1.set_ylabel('y')\nax1.set_title('title')\n\nax2.plot(x, y)\nax2.set_xlabel('x')\nax2.set_ylabel('y')\nax2.set_title('title')",
"Exercise 3\n Create the plot below by adding two axes to a figure object at [0,0,1,1] and [0.2,0.5,.4,.4]",
"fig = plt.figure()\nax1 = fig.add_axes([0, 0, 1, 1])\nax2 = fig.add_axes([0.2, 0.5, 0.4, 0.4])",
"Now use x,y, and z arrays to recreate the plot below. Notice the xlimits and y limits on the inserted plot:",
"fig = plt.figure()\nax1 = fig.add_axes([0, 0, 1, 1])\nax2 = fig.add_axes([0.2, 0.5, 0.4, 0.4])\n\nax1.plot(x, z)\nax1.set_xbound(lower = 0, upper = 100)\nax1.set_xlabel('X')\nax1.set_ylabel('Z')\n\nax2.plot(x, y)\nax2.set_title('zoom')\nax2.set_xbound(lower = 20, upper = 22)\nax2.set_ybound(lower = 30, upper = 50)\nax2.set_xlabel('X')\nax2.set_ylabel('Y')\n\n",
"Exercise 4\n Use plt.subplots(nrows=1, ncols=2) to create the plot below.",
"fig, ax = plt.subplots(nrows = 1, ncols = 2)",
"Now plot (x,y) and (x,z) on the axes. Play around with the linewidth and style",
"fig, ax = plt.subplots(nrows = 1, ncols = 2)\nax[0].plot(x, y, \n 'b--', \n lw = 3)\nax[1].plot(x, z, \n 'r', \n lw = 3 )\nplt.tight_layout()",
"See if you can resize the plot by adding the figsize() argument in plt.subplots() are copying and pasting your previous code.",
"fig, ax = plt.subplots(figsize = (12, 2) ,nrows = 1, ncols = 2)\nax[0].plot(x, y, \n 'b', \n lw = 3)\nax[0].set_xlabel('x')\nax[0].set_ylabel('y')\n\nax[1].plot(x, z, \n 'r--', \n lw = 3 )\nax[1].set_xlabel('x')\nax[1].set_xlabel('z')\n\n\n",
"Great Job!"
] |
[
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown",
"code",
"markdown"
] |